Unsorted webdev notes

Related to web development, lower level hosting, and such: (See also the webdev category)

Lower levels

Server stuff:

Higher levels

These are primarily notes; they won't be complete in any sense, and exist to contain fragments of useful information.

Communicating both ways between client and server

Sometimes the server has things to say.


"Hey server, do you have something new to say since last I asked?"

Is the laziest option, but not a highly efficient one if you care about the server's message being here sooner rather than later.

Want it to be on screen within the order of 100ms?

Well then, you need ten XHR-style queries per second, and to hope those don't get backlogged, or throttled.
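As a minimal sketch of that polling approach (all names here are illustrative; `fetchJson` stands in for whatever wraps fetch() plus JSON parsing in your code):

```javascript
// Naive polling: ask the server for news every intervalMs milliseconds.
// fetchJson is injected so the transport can be swapped or stubbed;
// it stands in for something like url => fetch(url).then(r => r.json()).
function startPolling(fetchJson, onMessage, intervalMs = 100) {
  let stopped = false;
  let busy = false; // skip a tick while a request is still in flight,
                    // so slow responses don't pile up into a backlog
  const timer = setInterval(async () => {
    if (stopped || busy) return;
    busy = true;
    try {
      const msg = await fetchJson('/poll');
      if (msg != null) onMessage(msg); // null meaning "nothing new"
    } finally {
      busy = false;
    }
  }, intervalMs);
  return () => { stopped = true; clearInterval(timer); };
}
```

Note that even this toy version needs the busy flag: ten queries per second start backlogging quickly once the server slows down.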

Hanging GET

(See also COMET[1])

This is an XHR-style call on a URL where the server keeps the connection open, but only responds once it has something to say.

This works, and in some ways this is a cleverer variant of polling, but:

  • is a bit hacky on the browser side
for robustness you must deal with timeouts, routers that kill idle connections, etc. so be ready to re-establish these when needed.
  • occupies a lot of connections on the server side.
which at scale can run it out of sockets
  • is one-way (server to client), mostly.
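A sketch of the client side of a hanging GET, including the re-establishing mentioned above (`fetchOnce` is an illustrative stand-in for a fetch() wrapped with a timeout, e.g. via AbortController):

```javascript
// Long-poll loop: each request stays open until the server has something
// to say (or it times out); we then immediately issue the next one.
async function longPollLoop(fetchOnce, onMessage, isStopped) {
  while (!isStopped()) {
    try {
      const msg = await fetchOnce('/events');
      if (msg != null) onMessage(msg); // null meaning "timed out, nothing new"
    } catch (e) {
      // timeouts and idle-killing routers land here; back off briefly,
      // then re-establish the connection
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
}
```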

Server-Sent Events

Supported by most browsers since around 2015, though IE never supported it, and Edge only did from 2020.

Could be considered a formalisation of the hanging GET, with a cleanish browser API.

One-way (server to client), mostly.

What you get is serialization, some browser hooks, automatic reconnection, and you can define fairly arbitrary events.

Optionally, event IDs let the browser tell the server what it last saw before a reconnect, so it's easier for servers to support sending an appropriate backlog.

Useful for notifications and such, and avoids polling and its latency and connection overhead.

Still plain HTTP.
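The underlying event stream is plain text, which the browser's EventSource object parses for you. A simplified sketch of that parsing (not a full spec implementation; it skips details like the retry: field):

```javascript
// Minimal parser for the Server-Sent Events wire format (a sketch).
// Events are blocks of "field: value" lines separated by a blank line;
// multiple data: lines within one event are joined with "\n".
function parseSSE(text) {
  const events = [];
  let event = { type: 'message', data: [], id: undefined };
  for (const line of text.split(/\r?\n/)) {
    if (line === '') {                  // blank line dispatches the event
      if (event.data.length) {
        events.push({ type: event.type, data: event.data.join('\n'), id: event.id });
      }
      event = { type: 'message', data: [], id: event.id }; // id persists
      continue;
    }
    if (line.startsWith(':')) continue; // comment line, often a keep-alive
    const i = line.indexOf(':');
    const field = i === -1 ? line : line.slice(0, i);
    let value = i === -1 ? '' : line.slice(i + 1);
    if (value.startsWith(' ')) value = value.slice(1);
    if (field === 'data') event.data.push(value);
    else if (field === 'event') event.type = value;
    else if (field === 'id') event.id = value; // what Last-Event-ID reports on reconnect
  }
  return events;
}
```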


WebSockets are

  • two-way
  • kept open
  • whole-message based (text or binary), in that the browser-side API is event-based.

...so they allow both push and pull systems.

The initial handshake is HTTP-like (in part so that you can use the same port, and get the same webserver/proxy to deal with both similarly(verify)), but the communication then switches to entirely its own thing.

Fairly widely supported since around 2014 [2]

Upsides are that

  • the server can push arbitrary messages
  • latency is lower than request-response; it's more standard than a hanging GET and more flexible than Server-Sent Events

Downsides include that

  • they don't reconnect automatically(verify)
though it's not very hard to set that up
that does have implications on protocols where it's important you don't miss events
  • it's basically only the network layer - hence 'socket'.
you don't even get to add HTTP headers for the HTTP phase of the setup
you must implement your own protocol (events back and forth tend to be simple enough)
you must implement your own semantics
you must implement your own caching
  • you have to avoid common design pitfalls in the process
  • you may need to implement your own DDoS alleviation, or put some software in front to do so for you
and you basically can't do that with HTTP auth; WS doesn't allow that
  • holds open a connection; there's a limit per server
so it's likely something you use for logged in users, not arbitrary pages
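Setting up the reconnection mentioned above usually amounts to capped exponential backoff around a socket factory. A sketch with illustrative names (`makeSocket` would be `() => new WebSocket(url)` in a browser; it's injectable here so the logic stands alone):

```javascript
// Exponential backoff for reconnect delays, capped so we never wait forever.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Reconnecting wrapper (a sketch, not a library API).
function connectForever(makeSocket, onMessage, schedule = setTimeout) {
  let attempt = 0;
  function open() {
    const ws = makeSocket();
    ws.onopen = () => { attempt = 0; };          // reset so later reconnects are fast
    ws.onmessage = (ev) => onMessage(ev.data);
    ws.onclose = () => schedule(open, backoffDelay(attempt++));
  }
  open();
}
```

Note that this only re-establishes the connection; if your protocol can't afford to miss events, you additionally need something like the event-ID/backlog idea that Server-Sent Events gets for free.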

Messages and frames

Frames are header + body.

A message is made up of one or more frames: a frame can be marked as a continuation fragment of the previous one, and so of an overall message, to be reassembled as a whole.

Websocket proxies are free to reframe messages.
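That reassembly amounts to collecting fragments until a frame with the FIN flag arrives. A sketch, with frames as plain `{ fin, data }` objects for illustration (a real implementation works on the binary wire format):

```javascript
// Reassemble messages from frames: non-final frames are continuation
// fragments; a frame with fin set completes the current message.
function reassemble(frames) {
  const messages = [];
  let parts = [];
  for (const frame of frames) {
    parts.push(frame.data);
    if (frame.fin) {
      messages.push(parts.join(''));
      parts = [];
    }
  }
  return messages;
}
```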

Messages or streaming?

Websockets are often explained as an API sending and receiving messages as a whole, suggesting you create whole messages, calculate their size, and then send.

And that is what the usual API does for you - which is why websockets are often considered whole-message based.

The specs are more interesting than that -- but in messy ways that you probably don't care for, because they tried to meet people with varied interests.

So the specs allow sending data without buffering, and without knowing the message size at the time you start sending, by instead terminating messages with the FIN frame flag[3] - provisions that let websockets act as a stream-based protocol as well.

Practically, there are a number of footnotes that mean you probably want to

  • avoid infinitely long messages (streaming allows this)
  • avoid streaming in general, except in cases you really want it
  • avoid huge messages
one reason being that APIs typically expose messages (and not frames).
exposing a frame API is not required by specs
  • avoid huge frames within messages
frames can be petabytes in theory

While receiving streamed messages is required by the spec, exposing a frame API is not, so standard implementations will receive a complete message before letting you consume data from it.

Receiving data as a stream would require a lowish-level API change on both sides, so in the real world, e.g. streaming media with websockets would require you to chop it both into reasonably sized frames (which a proxy could do) and into reasonably sized messages.

That is, if you want a standard websocket client to consume your stream. And if by 'client' you meant 'browser', then you do.

Maybe they figured we could figure out streaming by a later-standardized extension? That doesn't seem to have happened, though.

A smaller reason for 'have small messages (and by extension small frames)' is compression.

While the basic websocket spec doesn't have compression, it has extensions (and negotiation of such), meaning it effectively allows it. A few extensions have popped up[4], but the only remotely standard one is permessage compression[5].

This implies compression is effectively exclusive with streaming - unless you implement it on top. (This isn't a huge issue: you'd generally only want to stream media, and probably only compress text, which is likely to never be very large and rarely streamed.)

Practical issues

While websockets are intended to support a browser, and the connection should come from one, the ws:// / wss:// location is public, so anyone can connect to a websocket.

This is both a security issue and potentially a DoS issue.

Neither are new on the web at all, it's just that basically no existing solutions apply, so you have to do it yourself.

That said, you would probably only use WS on authenticated users, and not expose it to the world.

Auth tickets plus using WS over TLS goes a long way. And while you have to implement the auth yourself, that's not the worst.

On DoS

To deal with DoS attacks, implementations should probably drop connections with

  • very high message size (and implicitly fragment size) leading to high memory allocation
mentioned in the standard (10.4).
  • very high fragment size and/or very high fragments per message (to deal with streaming)
  • very low transfer rate (...while actually sending)
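Those limits boil down to a per-connection policy check. A sketch with made-up thresholds (none of these numbers come from the standard; tune them for your application):

```javascript
// Decide whether to drop a connection, given running per-connection stats.
// The default thresholds are illustrative, not from any spec.
function shouldDrop(stats, limits = {
  maxMessageBytes: 1 << 20,  // cap reassembled message size (memory allocation)
  maxFragments: 1024,        // cap fragments per message (streaming abuse)
  minBytesPerSec: 64,        // cap slowness, but only while actually sending
}) {
  return stats.messageBytes > limits.maxMessageBytes
      || stats.fragments > limits.maxFragments
      || (stats.sending && stats.bytesPerSec < limits.minBytesPerSec);
}
```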

On security

  • there is no auth on this separate connection (10.5)
the HTTP nature of the handshake allows some form of auth that way, but you may want to add more
browsers will help by not allowing willy-nilly connections, but that only helps connections from browsers
  • a server is effectively a separate server open to the internet (exposed via an URL), so even without auth you probably want to somehow verify a connection came from a page
it's up to you to implement that, e.g. a token system
  • Technically, WebSockets are not necessarily restrained by the same-origin policy
by default, anyway; the Origin: header is optional. This isn't an issue for connections from browsers, because they'll probably send it.
keep in mind restrictions via CSP and openings via CORS (verify)
  • ...making Cross-Site WebSocket Hijacking[6] (like CSRF but exposing a potentially interactive protocol) an interesting thing
session cookies help
  • there are also things you could do that you probably really shouldn't, like tunneling things

Much of this is only really an issue if the page that initiates a WS connection is compromised (at which point most bets are off anyway), but it's still something to keep in mind.

Making things work in more browsers

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

For context

Shim is a generic programming term, often about whatever patches you need to make something work, at all, in a different environment

In general sense and in the webdev sense it's not necessarily about APIs -- yet practically that's often a sensible central spot to make that change.

Reasons might include:

makes a newer API work in an older environment
makes an older API work in a newer environment
then often called a compatibility layer
running programs on different software platforms than they were developed for
e.g. wine has a shim intercepting syscalls
Microsoft's compatibility layer, ensuring that programs continue to function after major Windows version upgrades, involves a whole lot of API interception (based on knowledge of specific programs, and/or the compatibility mode you've selected)

altering the performance of an API, without changing its overall function
e.g. wikipedia's example of hardware accelerated APIs

Specifically in webdev

...both shim and polyfill tend to exist because the real world of browsers is varied in its support, and we need code that makes a feature you expect actually be there universally - shim and polyfill cover variants of that:

  • Shim, often about whatever patches you need to make something work, at all, in a different environment
often APIs you want to use that are not supported in all current browsers
again, it's not always about APIs, but frequently it is.
  • Polyfill usually specifically means "shim for a web browser API", and then often one of:
making it work in pre-standard browsers
making it work in the few browsers that are lagging behind universal support (mostly if you can imitate it with something that exists)
making it work in browsers that specifically chose not to implement it, or implemented it poorly

...but people are inconsistent with the terms, so they're sometimes just the same thing.

Both come down to "someone else's javascript duct-taped on", but in a nice way.
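A polyfill in its most minimal form: add the function only when it's missing, imitating the standard behavior with what already exists. (Array.prototype.includes is just a familiar example here - modern environments already have it, and the real spec version also handles NaN and a fromIndex argument, which this sketch ignores.)

```javascript
// Only fill the gap if the environment lacks the feature;
// otherwise leave the native implementation alone.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (needle) {
    // Imitate the standard behavior with a feature that exists everywhere.
    return this.indexOf(needle) !== -1;
  };
}
```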

graceful degradation:

accept that you may not get a particular feature
ensure that when that feature is missing, the fallback will not break
...though it's allowed to look a little worse, not perform quite as well, be static rather than dynamic, etc.

progressive enhancement

start by showing something supported by everything, then add things that make it shinier
since that probably means less-universally-supported things, do so only when
you know (e.g. detect) it will work
or when it falls back to just not doing that shinier thing

The argument to both GD and PE is sort of "having an approach to consistent UX, is better than none at all".

In theory PE is an easier user-experience guarantee than GD, in that you start with something that works and is more established.

In practice,

both can be broken, depending on how you do it.
both can be poorer UX by being one reason it may take half a dozen relayouts before people can read your damn webpage text

GD and PE also make you think a little about spiders (and SEO),

e.g. if you're thinking about putting navigation in scripting only. (This is why there are tricks like "generate regular HTML links, have JS strip them and convert them into event-based navigation"; this has been done for many years, but these days it's often called a router.)

Inline data URLs







The inline image

  • will always be larger than the original data,
saving a round trip may outweigh that, but only for extremely tiny images
  • will not be cached.
though when placed in CSS it's effectively cached as part of that

So arguably this is only useful for very small images, where saving a roundtrip outweighs both of those.


  • Modern browsers will also treat data URLs as having unique origins - not the page's.
  • IE/Edge only support this for images. But that's probably what most people would use this for.
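The size overhead is easy to see by constructing one: base64 inflates the payload by a factor of 4/3, plus a fixed header. A sketch (using node's Buffer for the base64 step; in a browser you'd use btoa() or a FileReader):

```javascript
// Build a data: URL from raw bytes, to make the size overhead visible.
function toDataURL(bytes, mime = 'image/png') {
  return `data:${mime};base64,${Buffer.from(bytes).toString('base64')}`;
}

const raw = new Uint8Array(300);   // stand-in for a tiny image
const url = toDataURL(raw);
// url.length is 'data:image/png;base64,'.length + 400 characters:
// roughly 4/3 of the original 300 bytes, plus the header
```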

See also:

HTML5 isn't a singular standard (or, "why WHATWG isn't very related to W3C")

Audio and video



Web Content Accessibility Guidelines


Accessible Rich Internet Applications

aria-* attributes

Custom attributes

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

We have had a long-standing question of "can we just add non-standard attributes to HTML?"

This is arguably four questions:

  • can we add them to the serialized HTML document?
  • can we add them to the serialized XHTML document?
  • can we add them to the DOM in JS? (this is nicknamed expando attributes)
  • will this clash with something in future standards?

The answers are roughly:

  • For HTML you can get away with it.
HTML4 won't validate, but nothing else bad will happen
HTML5 doesn't do validation anymore.
HTML5 allows it - and suggests using the data- prefix for practical reasons (see below)
  • For XHTML, browsers are somewhat likelier to actively complain, where for HTML most rarely would.
maybe less so these days? Test that before doing it(verify)
  • In JS you could basically always get away with it.
Basically no browser will really care about DOM alterations it doesn't understand.
Apparently IE once leaked memory around expando attributes - but who cares about IE anymore?

data-* attributes

Basically, HTML5 declares a prefix that it promises it won't use itself in the future.

In modern browsers you can also get at these slightly more easily than with an explicit getAttribute(), via the element's dataset property.
(and in theory in CSS with attr(), but only for content: [7])

(You can pass data to CSS more variably - see var())
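That easier access is the element.dataset property: an attribute like data-user-id shows up as element.dataset.userId. The mapping the browser applies is: strip the data- prefix, then camelCase the rest. A sketch of that rule (the function name is made up; browsers do this internally):

```javascript
// Map a data-* attribute name to its dataset property key,
// e.g. 'data-user-id' -> 'userId' (sketch of the browser's naming rule).
function datasetKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
}
```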

Practically speaking, data-* attributes probably won't cause clashes within any one site or app, because you control them and can fix them.

If you're going to do something interoperable (like frameworks or libraries or web components), document it - and note that that can still be unhelpful when it means you have to change one codebase or the other.

See also:

Semantic elements

Drawing things

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


CSS

Reasons for:

  • all in DOM
  • purely in standards
  • can do gradients
  • can do shapes via clipping
  • can do animations
keyframed from-to-stuff,
and animating properties (beyond position: opacity, rotation, skew, scale) and combinations

Reasons against:

  • can't do much more than that
  • some things are not so well implemented yet[8]

HTML5 <canvas>

Controlled by JS

Reasons for:

  • well supported[9]

Reasons against:

  • contents are not in the DOM, so avoid putting main website text in it - consider screen readers and search engines
  • Requires JS
  • not always many upsides over raster images or SVG
  • 2D only


SVG

Reasons for:

  • Well supported[10]
  • part of the DOM means flexible (also meaning e.g. some basic :hover-style interactivity with plain CSS)
  • basics well supported
  • may compress well
  • scales automatically, less worry on high-DPI
  • can do interactivity

Reasons against

  • for a long time, not all effects/animations were universally supported
  • some CSS-SVG was similarly weird


WebGL

Is basically JS-controlled OpenGL ES 2 on top of an HTML5 <canvas> (so it inherits canvas's upsides/downsides)

Reasons for:

  • may be very smooth due to hardware acceleration

Reasons against:

  • those for canvas
  • can be heavy on CPU/GPU/battery / low-spec machines
and generally doesn't perform nearly as well as you would expect for complex things(verify)
  • privacy worries (mostly minor - machine fingerprinting and such)

All have basic support in modern browsers.


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


D3 notes







Unity web

Hashbang URLs