General webdev notes

Related to web development, hosting, and such: (See also the webdev category)
jQuery: Introduction, some basics, examples · plugin notes · unsorted


These are primarily notes.
They won't be complete in any sense.
They exist to contain fragments of useful information.



Mobile usability / legibility

"Responsive"

Viewport

Mobile-only CSS

Many people cheat with something like:

@media only screen and (max-device-width: 480px) { ... }

...because that's more or less the longest edge of the largest phones, and anything much larger can generally render desktop pages - though tablets are a gray area there.

It's still a good idea to deal with how much screen there is (which is what 'responsive' refers to), rather than what type of device it may be.
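
A minimal sketch of that screen-size-based approach (the breakpoints and the nav.sidebar selector are made-up examples; pick values that fit your layout):

/* small windows and most phones: hide the sidebar */
@media only screen and (max-width: 480px) {
  nav.sidebar { display: none; }
}

/* mid-sized windows and most tablets: narrower sidebar */
@media only screen and (min-width: 481px) and (max-width: 1024px) {
  nav.sidebar { width: 12em; }
}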

img srcset and sizes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

HTML5 additions that give the browser more control over which image it fetches and shows, based on the resolution it's running at and/or the size of the window.


For backwards compatibility with non-supporting browsers, also set src


It is aimed at two distinct goals:

  • using smaller images on smaller layouts
  • keeping the layout the same, but using different resolutions on different devices

More on that below; first the nitty-gritty of the markup:


srcset is a comma-separated list of entries, each either url width-descriptor or url pixel-density-descriptor

  • width descriptors
the image's inherent width, in pixels, with unit w, e.g. 600w
  • pixel density descriptors, e.g. 1x

These correspond to the two separate goals above. Mixing width descriptors and pixel density descriptors in the same srcset is invalid.


sizes is not about image choice, but about layout.

It's a comma-separated list of pairs:

  • a media condition, e.g. (max-width:480px), meaning "I am in a window/device at most this wide"
  • the width it should fill - as a CSS length (px, em, vw, and such; percentages are not valid here)


Practical - image choice

The srcset attribute defines the alternative images there are, and their pixel size.

srcset="image1_14.jpg 140w, 
        image1_20.jpg 200w, 
        image1_48.jpg 480w,
        image1_80.jpg 800w"

The browser decides how it uses this information, though (without a sizes attribute) it'll generally pick the one closest to the viewport width (verify).

Since that's roughly the document width, and on mobile that's often the device width, that's basically what you want.


The main reason for this is to cut bandwidth for mobile devices.

People have suggested you could have the smaller images cropped to just their subject, while larger screens show more context.

This is true to a degree, but keep in mind that windows that change size don't obey that rule: since the point was to avoid unnecessary transfers, the browser will resize a larger cached image for a smaller view. This doesn't matter much on mobile, though it makes testing this on desktop more interesting.


The sizes attribute, while it has no direct relation to srcset's sizes, lets you hint which of the options is best at certain window/document sizes.

Separating it from srcset means, e.g., that the logic doesn't necessarily have to be based on knowledge of the alternatives.


Say we have the srcset above, and you know that you'll be showing two images side-by-side, and some whitespace. You could say that:

sizes="(max-width: 320px) 150px,
       (max-width: 480px) 200px,
       800px"

Notes

  • keep in mind that scaling down a desktop browser probably keeps using the largest image, rather than downloading a smaller one.
  • the second number is a 'choose closest to this'
'closest' allows different units - px, em, vw, or whatnot (but not percentages)
  • if omitted, it acts as if sizes="100vw", meaning 100% of the viewport width
  • the last entry acts as a default
in general use of inline images on mobile, 100vw can make more sense
  • the first matching condition applies (and max-width is an "is it less than?"), so order matters

Practical - resolutions

Pixel density means "choose this if you have at least this many device pixels per CSS pixel"

Basically, this makes it easy to specify "on devices with high DPI, use a higher-res image within exactly the same layout".


Right now this is primarily about retina displays and such, which to CSS have a lower logical resolution than the physical pixels they have.


This is confusing due to the way CSS defines the pixel. You could read up on it, but it may make you grumbly, so you can generally just take a good stab at what pixel sizes and pixel densities cover most cases (regular screens are roughly 150DPI, retina roughly 300DPI, so in general 1x and 2x variants are enough, though adding a 1.5x can be more flexible).

Which variant you specify in src as a fallback is up to you.
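
For example (the filenames here are made up; the 1x variant doubles as the src fallback):

<img src="photo_400.jpg"
     srcset="photo_400.jpg 1x,
             photo_600.jpg 1.5x,
             photo_800.jpg 2x"
     alt="example photo">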

Notes:

  • If not specified, 1x is assumed.
  • having two entries with the same pixel density is invalid
  • In supporting browsers, if src is specified, it is interpreted as a candidate with an implied pixel density descriptor of 1x

Communicating both ways

Sometimes the server has things to say.

Polling

"Hey server, do you have something new to say"

Is the laziest option, but not a highly efficient one if you care about the server's message being here sooner rather than later. Want it to be there within 100ms? WellWell then, you need ten XHR-style queries per second. And hope that doesn't get backlogged, or throttled.
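
A minimal browser-side sketch of that polling (the /api/messages endpoint and the 2-second interval are made up; a real version would also want backoff when things go wrong):

// ask an assumed /api/messages endpoint for news every 2 seconds
async function poll() {
  try {
    const resp = await fetch('/api/messages');   // hypothetical endpoint
    if (resp.ok) {
      const data = await resp.json();
      if (data.messages && data.messages.length > 0)
        console.log('new messages:', data.messages);
    }
  } catch (e) {
    console.warn('poll failed:', e);              // e.g. network hiccup; just try again next tick
  }
}
setInterval(poll, 2000);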


Hanging GET

(See also COMET[1])

This is an XHR-style call on a URL that, as long as there is nothing to say, simply doesn't respond yet.

This works, but is a bit hacky on the browser side.

For robustness you also have to consider timeouts, routers that kill idle connections, etc., so be ready to re-establish these when needed. So basically the non-stupid version of polling.

One-way, mostly.
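
A browser-side sketch of the idea (the endpoint name and handleEvent() are hypothetical; a real version wants smarter backoff):

// keep one request outstanding; the server only answers once it has something to say
async function longPoll() {
  while (true) {
    try {
      const resp = await fetch('/api/wait-for-event');   // hypothetical hanging endpoint
      if (resp.ok)
        handleEvent(await resp.json());                   // handleEvent() is yours to define
      else
        await new Promise(r => setTimeout(r, 1000));      // server-side problem; don't hammer it
    } catch (e) {
      await new Promise(r => setTimeout(r, 1000));        // timeout / killed connection; re-establish
    }
  }
}
longPoll();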


Server-Sent Events

Basically a formalisation of the hanging get that makes the browser side much cleaner.

What you get is serialization, some browser hooks, automatic reconnection, and you can define fairly arbitrary events.

Optionally, event IDs let the browser report what it last saw before a reconnect (via Last-Event-ID), so it's easier for servers to support sending an appropriate backlog.

Useful for notifications and such, and avoids polling and its latency and connection overhead.

Still plain HTTP.

One-way, mostly.

http://caniuse.com/#search=server%20sent%20events
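
Browser side it looks something like this (the URL and the 'notification' event name are made up):

// EventSource handles the connection and automatic reconnection for you
const source = new EventSource('/api/events');       // hypothetical SSE endpoint

source.onmessage = (e) => {                           // events without an explicit type
  console.log('message:', e.data);
};

source.addEventListener('notification', (e) => {      // a custom event type the server chose to send
  console.log('notification:', e.data);
});

source.onerror = () => {
  console.log('connection problem; the browser will retry by itself');
};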


WebSockets

WebSockets are a bidirectional (full-duplex) connection, and browser-side API is event-based and focused on messages (text or binary).

Not HTTP, though the initial handshake is HTTP-like.


Varying interests made the specs a bit messy.

While the intended API is about sending and receiving messages, at a lower level the units being sent are frames, and a message is only terminated by a frame flag (FIN)[2]. So it's really a stream-based protocol, in that a message can be arbitrarily long and you can send data without buffering it first or knowing the message size ahead of time.

Yet there are a number of footnotes and details that mean you probably want to avoid streaming. Or huge messages. Or huge frames (can be petabytes).

These reasons include that APIs may expose messages (not frames), and that compression, if present, is typically per-message.



Server side needs something that can be connected to.
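
On the browser side the API looks something like this (the URL is made up):

// ws:// or wss:// (the TLS variant); the URL here is hypothetical
const ws = new WebSocket('wss://example.com/socket');

ws.onopen    = () => ws.send('hello');                   // a text message
ws.onmessage = (e) => console.log('got:', e.data);       // e.data is text, or Blob/ArrayBuffer for binary
ws.onclose   = () => console.log('closed');
ws.onerror   = (err) => console.log('error:', err);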


Security implications:

  • as a separate, non-HTTP connection, there is no built-in auth
  • WebSockets are not restrained by the same-origin policy
by default, anyway; CSP (connect-src) can restrict this, and servers are expected to check the Origin header themselves
  • making Cross-Site WebSocket Hijacking[3] an interesting thing
  • also means there are things you could do (e.g. tunneling things) that you really shouldn't.


Much of this is only really an issue if the page that initiates a WS connection is compromised (at which point most other bets are off as well), but it's still something to keep in mind.



See also:


Making things work in more browsers

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

The eternal pain. Even the terms are getting more detailed.


graceful degradation - accept that you may not get a particular feature, and make sure that what you do get looks close enough: it may not look or perform quite as well, may be static rather than dynamic, etc.


progressive enhancement - the opposite approach to the same thing: start showing the core of the content and the things supported across the board, then add on the shinier bits.

The argument seems to be that the UX is easier to guarantee because you've thought about the steps more,
...though depending on how you do it, it may be just as much of a pain as with GD, e.g. if either way means the page re-layouts half a dozen times when you just wanted to read some article text.
It seems to have been coined from a "make purely semantic HTML5" angle.


shim and polyfill are about making a feature work in more browsers.

usually about APIs: where newer browsers will do it directly/natively (sometimes faster), the shim/polyfill implementation makes it work at all elsewhere, often by emulating it somehow
Shim is a more general term generally meaning "a thing that makes this work elsewhere"
Polyfill often means "I want this new API to also work in older browsers that don't natively do it"
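
As a small illustration of the polyfill idea (a bare-bones sketch; real polyfills also handle edge cases the spec requires):

// only add the function where the browser doesn't already provide it natively
if (!String.prototype.startsWith) {
  String.prototype.startsWith = function (search, pos) {
    pos = pos || 0;
    return this.substring(pos, pos + search.length) === search;
  };
}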

Drawing things

Backing tech

Libaries

D3

Pixi

UI frameworks

These are primarily notes.
They won't be complete in any sense.
They exist to contain fragments of useful information.


Bootstrap


Foundation (ZURB)

  • responsive, mobile, semantic


Skeleton


React


Angular, Angular2


Semantic


uikit


jQuery Mobile


Material UI


Pure (yahoo)



Kendo http://www.telerik.com/kendo-ui


Sencha touch

robots.txt

Intro

/robots.txt lets you ask spiders/robots not to visit certain URLs or directories, to opt out of robots' basic find-everything behaviour.

Assuming they look at this file. And respect it.


Practical uses include:

  • preventing unfinished work or relatively private data from appearing in web searches - an easier alternative to making sure you have no links to it (though at the same time it's a security issue in that you reveal where your relatively private data sits)
  • prevent spiders from wasting bandwidth on things like:
    • temporary directories
    • short-term caches
    • very large files, e.g. places where you put raw originals, downloads, and such (assuming you have a page describing them that will get indexed)
  • selectively disallowing spiders, such as wayback, programs (BlackWidow, wget, etc.), or search engines.


You can expect spiders to take a few days to notice a change in robots.txt and make it current throughout their (usually distributed) setup. It isn't really easily controlled or predicted, which makes robots.txt a poor choice for temporary blocks.

For temporary blocks, and for spiders that don't respect robots.txt, you could do agent checks in the server. That gives more control, but is more complex and more work. You could do it in Apache with mod_rewrite (except perhaps in shared hosting), or in any dynamic generation (but that's more work).
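
For example, refusing a couple of user agents in Apache with mod_rewrite might look something like this (the agent names are just the ones mentioned above; put it in the vhost or .htaccess as appropriate):

RewriteEngine On
# return 403 for requests from these user agents
RewriteCond %{HTTP_USER_AGENT} BlackWidow [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^Wget [NC]
RewriteRule ^ - [F]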

The contents and logic

About Disallow:

  • basically lets you specify a starting path
  • Wildcards are not supported, but:
    • Disallow: /
      means disallow all
    • Disallow:
      (no value) means allow all
    • Strings act as 'starts with' strings, so /index would block both /index/ as a directory and /index.html
  • You get to specify one path per Disallow; use multiple Disallow lines if you want to disallow a list of things


About User-agent:

  • User agent names should be interpreted case-insensitively
  • You can use * to mean 'everything'.


Further notes:

  • A spider that checks whether robots.txt has something to say about a given URL will use the first (applicable_user-agent, applicable_disallow) pair and stop processing.
  • The default if-no-rules-match policy is to allow, but a catch-all disallow at the end is possible.
  • ...which in combination means that order matters, and allows slightly more complex constructions - you can effectively whitelist or blacklist specific agents (see the last example below)
  • Googlebot has some extensions, including wildcards and an Allow, but these aren't supported by many other things
  • Don't list secrets. People with bad intentions may look at your robots just to find interesting things


Examples

Some user agents (bots):

  • Media bots: Googlebot-Image, yahoo-mmcrawler, psbot, etc.
  • General bots: googlebot, msnbot, yahoo-slurp, teoma, Scooter, etc.


Some disallows:

#Nothing should index...
User-agent: *
#...volatile things, 
Disallow: /cache/
Disallow: /tmp/

#...or development, if accidentally linked to somewhere (lines copied from somewhere)
Disallow: /_borders/
Disallow: /_derived/
Disallow: /_fpclass/
Disallow: /_overlay/
Disallow: /_private/
Disallow: /_themes/
Disallow: /_vti_bin/
Disallow: /_vti_cnf/
Disallow: /_vti_log/
Disallow: /_vti_map/
Disallow: /_vti_pvt/
Disallow: /_vti_txt/
# dynamic apps when they generate almost infinite links to themselves
Disallow: /cgi-bin/linker
# and you can have more specific disallow combinations
User-agent: Googlebot
Disallow: /dynamic.html


See also