Javascript notes - browser related, APIs
Related to web development, lower level hosting, and such: (See also the webdev category)
Contents
- 1 Global objects
- 2 Navigating and altering the DOM
- 3 On events
- 4 Network related
- 5 Storage related
- 6 Execution related
- 7 dev stuff
- 8 The mess that is modules (and related scope details)
- 9 Unsorted
Global objects
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
For context, Node types include
- Element (HTML element)
- Text
- Comment
- CDATA
- and a few special-cased things
Mostly we care to work with Elements, but sometimes we care about the other node types too.
Selecting stuff
- getElementById(id) - searches subtree for the element with the given id= attribute
- getElementsByName(name) - searches subtree for elements with the given name= attribute
- getElementsByTagName(tag) - searches subtree for elements with the given tag name
- getElementsByClassName(classes) - searches subtree for elements with the given class name(s) (string can contain space separated classes)
Supported since DOM1 (getElementsByClassName a little later), so basically forever.
- querySelector(css) - search for the first match for a CSS selector
- returns an Element (or null)
- querySelectorAll(css) - search for all matches for a CSS selector
- returns a (static) NodeList
You would probably start searching on document.querySelector, but every Element has these functions.
Supported since 2010ish[1]
Basic per-node properties/accessors include
- id
- nodeName
- nodeValue
- getAttribute(str)
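A minimal sketch putting the above together - the ids, names, classes, and selectors here are made up:
// Hypothetical ids/names/classes, just to show the calls
var el     = document.getElementById('main');                   // single Element, or null
var inputs = document.getElementsByName('q');                   // live collection
var divs   = document.getElementsByTagName('div');              // live collection
var warns  = document.getElementsByClassName('warning urgent'); // elements with both classes

var first = document.querySelector('nav a.active');        // first match, or null
var all   = document.querySelectorAll('nav a');             // static NodeList

if (first) {
  console.log(first.id, first.nodeName, first.getAttribute('href'));
}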
Navigation
- parentNode
- childNodes (a NodeList)
- nextSibling (any node, e.g. including Text, Comment, Element)
- nextElementSibling (an Element)
- previousSibling
- previousElementSibling
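A minimal sketch of walking around with these, assuming the page has some list item:
var li = document.querySelector('li');
if (li) {
  console.log(li.parentNode.nodeName);        // probably UL or OL
  console.log(li.childNodes.length);          // includes Text and Comment nodes
  console.log(li.nextSibling);                // may well be a whitespace Text node
  console.log(li.nextElementSibling);         // skips to the next Element (or null)
}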
Events
- addEventListener(type, listener)
- removeEventListener(type, listener)
Altering
- setAttribute(str,val)
- removeAttribute(str)
- appendChild()
- removeChild()
- Element.innerHTML (long not formally standardized, but widely implemented for convenience; since specified in HTML5)
- Keep in mind that manipulating and appending Elements is often faster, because every innerHTML set requires a parse
- Element.insertAdjacentHTML[2]
- similar to but preferred over innerHTML
- since 2012ish
- Node.textContent[3]
- faster (and safer) when just inserting text, but has some practical footnotes
- since 2011ish
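A minimal sketch of those alternatives - the '#log' container and the markup are made up:
var container = document.querySelector('#log');   // assumed to exist in the page for this sketch

// DOM-node style: build, set attributes, append
var p = document.createElement('p');
p.setAttribute('class', 'entry');
p.textContent = 'hello';            // plain text - no parsing, no HTML injection
container.appendChild(p);

// string style: parsed as HTML, inserted without re-parsing the element's existing children
container.insertAdjacentHTML('beforeend', '<p class="entry">hello <b>again</b></p>');

// innerHTML replaces (and re-parses) everything inside the element
// container.innerHTML = '<p>start over</p>';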
On events
In particular, IE didn't care about standards and did its own thing. Before we could consider ignoring it (around 2013), you'd basically be implementing your own event library, so there was a good argument for using someone else's, or a fuller JavaScript library that also handled other browser variation.
Varied events
Propagation
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Events propagate two ways (basically a treewalk), first by being captured down the tree (root first, on its way to the target), then bubbling up it from the target to the root element. Note that W3's terms argue more from the perspective of the node itself than from an ongoing process of treewalking.
If a handler is registered for the capture phase, it will be called before the event gets to the target (and the target's own handlers).
Controlling propagation
Some events are cancelable (like clicks), some are not (like losing focus) - cancelable means a handler may prevent the default action, with event.preventDefault().
- separate from that, any event handler can stop propagation to further event handlers
- do so with event.stopPropagation()
You can usually argue whether you want to do this per action type. For example
- when you want things to follow your mouse, you may want mousemoves to not be blocked when another element happens to capture it, whereas
- when you click a button, you probably want only the frontmost visible thing to react to it, not things under it
- keep in mind that aborting all handling of the current event may break complex interfaces that rely on multiple independent handlers
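A minimal sketch of both the capture phase and stopping propagation - 'outer' and 'inner' are hypothetical nested elements:
var outer = document.getElementById('outer');
var inner = document.getElementById('inner');

// third argument true = run during the capture phase, so this fires first
outer.addEventListener('click', function (e) {
  console.log('outer saw it on the way down');
}, true);

inner.addEventListener('click', function (e) {
  console.log('inner handles it');
  e.stopPropagation();   // bubbling stops here, so outer's bubble-phase handler below never runs
});

outer.addEventListener('click', function (e) {
  console.log('outer bubble handler - will not run for clicks on inner');
});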
Event target
Consider the following as test code:
function evtest(event) {
  alert(event.currentTarget.nodeName);
  alert(event.target.nodeName);
}

window.onload = function() {
  document.getElementsByTagName('body')[0].addEventListener('click', evtest);
};
target and currentTarget can be different for any event that bubbles up through the DOM, which are most of them (exceptions include focus, blur, mouseenter, mouseleave)
- event.target
- element the event was dispatched from
- basically, in a text document this is likely to be a p, div, button, img, or such - the foreground thing your cursor was on
- it's effectively the place that actually started the event happening
- event.currentTarget is
- element that the eventListener was attached to
- body, in the above
- it's effectively the thing that really listens to the event
- this
- same as event.currentTarget (...within event handler callbacks)
Which of the two is more interesting arguably depends on your code style.
It can e.g. make sense to attach listeners to every single DOM node you care to listen to
- in which case the two may often be the same, though you may prefer .currentTarget (particularly if you stored some state on there for the handler to use)
Attaching event handlers
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Adding event handlers to DOM nodes
The most flexible, standard way is to use addEventListener (see e.g. [4])
- you can set any amount
- you can remove individual ones later [5]
- identified by (eventtype, function), so you'd want to keep the function reference
- (IE had its non-standard and different-behaved attachEvent instead)
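A minimal sketch of keeping the function reference around so you can remove the listener again - the '#save' button is made up:
function onSave(event) {
  console.log('clicked', event.currentTarget);
}

var button = document.querySelector('#save');   // hypothetical element
button.addEventListener('click', onSave);
// ...later:
button.removeEventListener('click', onSave);    // must be the same (type, function) pair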
Assigning onsomething="code" inside HTML ("inline")
- standardized these days (HTML defines these event handler content attributes), and you can assume all widely used browsers support it
- you can set only one (because you assign it)
- not a clean way of structuring things (separation of responsibilities, escaping, more)
- Note they usually put these events in the bubbling phase.(verify)
Assigning domreference.onsomething=functionref; from a script
- also standardized these days (as event handler IDL attributes)(verify)
- typically supported
- not nearly as dirty as shoving code in the HTML - but can be less future-proof when event handling becomes more involved
- mostly the same as the previous, though you have more control over scope
- you can set only one (because you assign it; if you want more, you'd want a wrapper, or to call what you found there before/after)
- Note: You should use all-lowercase names for most-browser compatibility.
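A minimal sketch of that assignment, including playing nice with a handler that may already be there - the '#save' element is made up:
var el = document.querySelector('#save');    // hypothetical element
var previous = el.onclick;                   // may be null
el.onclick = function (event) {
  if (previous) previous.call(this, event);  // call whatever was there before
  console.log('our handler too');
};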
Using angular, jquery, or whatnot
- because these will abstract away the browser muck
keyboard events
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
(For historical reasons) there is some inconsistency between
- ...the keycodes that get generated (varied slightly per browser)
- ...where exactly they are reported
- event.charCode, event.keyCode, event.which - now considered deprecated (though sometimes better supported)
- Since 2017ish[6] you can use event.key (with some footnotes)
As such, if you use a javascript library, see if it normalizes that muck and makes your life easier.
Useful KeyboardEvent members include
- .altKey, .ctrlKey, .metaKey, .shiftKey - whether Alt(/Option), Ctrl, Meta/Command, or Shift was down when this was pressed
- .key - a string representing the key
- see this list
- note that
- Basically includes shift processing (because keyboard layout differences mean you have to)
- some things may report as 'Unidentified' (even if they exist in that list, e.g. Fn on my laptop), due to context-specific constraints
- .location (helps distinguish keys that are there more than once, like alt, keypad)
- .repeat - being held down?
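A minimal sketch of reading those members in a keydown handler (the Ctrl+S shortcut is just an example):
document.addEventListener('keydown', function (event) {
  if (event.repeat) return;                    // ignore auto-repeat while held down
  if (event.ctrlKey && event.key === 's') {    // .key already reflects keyboard layout
    event.preventDefault();                    // keep the browser's own Ctrl+S from firing
    console.log('save shortcut');
  }
  console.log(event.key, event.location);
});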
Deprecated but will probably still work (for the time being):
- .keyCode
- .which
- .charCode
- .keyIdentifier
- .keyLocation
See also:
- https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/keyCode
- https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/which
- https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/charCode
XmlHttpRequest
XHR (a.k.a. AJAX around when it was new and hip) in itself was decently supported since 2006ish(verify)
XHR level 2 since ~2015ish[7] (if you ignore IE)
- adds things like timeouts, progress events, and more
Writing code for XHR that properly handles regular edge cases is a bit long, so people generally either have their own helper function, or use a library to make life easier. (Note that now there is also fetch(), which is more featured, though not necessarily that much shorter when you do all the error handling)
If you do do a little manual XHR handling, the following object members/functions may be interesting:
- readyState
- 0: Uninitialized/Unsent (initial value, and set when manually abort()ed)
- 1: Opened (open() was successful)
- 2: Sent/Headers received: UA completed the request, response headers were received, waiting for response data
- 3: Loading/Receiving: receiving message body chunks (may fire several times, and happens only after receiving headers)
- 4: Loaded/Done: All data received
- status and/or statusText
- status (an int) and statusText (a string) must reflect the HTTP status when set.
- They must be available when readyState is 3 or 4; before that they are unavailable, and access should (!) raise an exception. Unavailability may also be caused by the UA never being able to parse a status out of the response.
- setRequestHeader(header, value);
- abort()
- responseText or responseXML
- getAllResponseHeaders() or getResponseHeader(header)
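A minimal sketch of a GET using those members - '/api/thing' is a made-up URL:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/thing');                    // hypothetical URL
xhr.setRequestHeader('Accept', 'application/json');
xhr.timeout = 5000;                               // XHR level 2
xhr.onreadystatechange = function () {
  if (xhr.readyState !== 4) return;               // wait until Done
  if (xhr.status >= 200 && xhr.status < 300) {
    console.log(xhr.responseText);
  } else {
    console.log('HTTP error', xhr.status, xhr.statusText);
  }
};
xhr.onerror   = function () { console.log('network error'); };
xhr.ontimeout = function () { console.log('timed out'); };
xhr.send();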
See also
Data is null
Context: an AJAX call in Firefox (for me with jQuery and getJSON, but it probably applies to all AJAX and more libraries).
One of the causes is that the fetch is against security policy (probably the same-source policy).
These rules vary between browsers, which may make this look like a bug.
Suggestions:
- make the URL refer to the same domain (host? port? It may even just be tripping over the fact that you're using an IP instead of name to the same host)
- use or build a proxy script (on the same origin)
- if the source is necessarily remote, try using a <script> block that has a call that hooks into your page (JSONP-style)
See also:
- http://docs.jquery.com/Release%3ajQuery_1.2/Ajax#Cross-Domain_getJSON_.28using_JSONP.29
- http://stackoverflow.com/questions/2396943/jquery-json-and-apache-problem
XMLHttpRequest cannot load 'URL'. No 'Access-Control-Allow-Origin' header is present on the requested resource
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Short version: You are doing a request from a page from siteA to a resource on siteB, and siteB does not actively say that's okay.
Which is exactly what CORS is about: the browser blocks it unless siteB explicitly allows it (with that Access-Control-Allow-Origin header).
Fetch
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Wide support since 2018ish (if you ignore IE) [8]
Fetch is intended as a modern variant of XHR, standardizing interplay with various other things that happened since XHR got defined - consider CORS, request headers, request body, cache interaction, whether to follow redirects, and such. See e.g. [9]
When I first looked at this, both XHR and Fetch had a few things the other couldn't do.
https://stackoverflow.com/questions/35549547/fetch-api-vs-xmlhttprequest
Since XHR is more manual, Fetch is potentially a lot shorter to write, like
fetch( url ).then( result => result.text() ).then( text => console.log(text) );
Writing proper error-handling code is as involved as XHR, but it'll do a bit more in the process.
On error handling: fetch does not consider HTTP errors like 404 or 500 a reason to reject the Promise - only network-level failures reject - so you should check response.ok instead. This may make sense assuming you want more subtle handling of all the different statuses, but it effectively forces you to always write that code - or hope it doesn't matter.
The response body is a ReadableStream. You would often probably just force it into text (like the example above) or JSON.
https://css-tricks.com/using-fetch/
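A slightly fuller sketch than the one-liner above, checking response.ok as just described - '/api/thing' is a made-up URL:
fetch('/api/thing')                                     // hypothetical URL
  .then(function (response) {
    if (!response.ok) {                                 // 404, 500, etc. do not reject the promise
      throw new Error('HTTP ' + response.status);
    }
    return response.json();                             // or .text(), .blob(), ... - reads the stream
  })
  .then(function (data) { console.log(data); })
  .catch(function (err) { console.log('failed:', err); });  // network errors, plus the HTTP errors we threw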
XDomainRequest
Basically an IE8 and IE9 drop-in for XHR with pre-standard CORS.
Not present in IE10 or IE11, or any other browser at all.
It was useful for libraries to get CORS behaviour in a wider set of browsers, but not something to use directly.
Cache API
Web Storage API: localStorage and sessionStorage
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Good browser support since 2010 [10]
The Storage interface is basically a key-value store (both keys and values are strings only) that the browser will persist between page loads.
There are two of those:
- sessionStorage - which doesn't persist across browser runs (still useful for repeated visits within a browser run)
- localStorage - which does persist across browser runs (useful for more uses, what many programmers opt for)
Mainly there's
- setItem(key, val)
- getItem(key)
- removeItem(key)
- length
- clear()
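A minimal sketch of that API - the 'prefs' key is made up, and structured data usually goes through JSON because values are strings:
localStorage.setItem('prefs', JSON.stringify({theme: 'dark'}));   // hypothetical key/value

var raw   = localStorage.getItem('prefs');          // null if not present
var prefs = raw ? JSON.parse(raw) : {};

localStorage.removeItem('prefs');
console.log(localStorage.length);
// localStorage.clear();    // removes everything for this origin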
Notes:
- browsers will strive to keep it around for a while, but may still consider it transient storage and so clean it up outside of your control
- perhaps more easily so on mobile.
- while the storage is shared per origin, there are no formal definitions about concurrent access
- assume alterations are atomic but you don't get transactions
- events are fired on storage change
- which means this is also usable as communication between different tabs from the same origin
- The current top-level browsing context keeps storage for each origin (the wording roughly means "it's stuck on the window, and separated for frames").
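A minimal sketch of listening for those storage events (they fire in the other tabs on the same origin, not in the tab doing the change):
window.addEventListener('storage', function (e) {
  // another tab changed something in localStorage
  console.log(e.key, e.oldValue, '->', e.newValue);
});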
See also:
IndexedDB
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB
Some comparison
Not standard, but you may care about...
Web workers
- belong to a tab/window
- live no longer than that tab
- are targeted at parallelism
Service workers
- are separate
- have their own lifecycle logic
- are targeted at offline support
Web Workers
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Have been around since 2010 and are supported basically everywhere since 2014ish [11]
Given the right programming models, JS gets pretty far with a single thread.
We're doing cooperative rather than pre-emptive multitasking, so we should never ever do anything that takes more than a few milliseconds in a synchronous manner, because it will hold up your one thread - i.e. absolutely everything.
So number crunching is right out.
Web Workers are meant for any computation that
- takes long enough that it might noticeably hold up that main thread.
- is not related to DOM, layouting, etc.
It's mainly useful for such coarse-grained work - and not that much else.
A Web Worker
- is a separate single thread of JS running
- tied to communicate with the script that launched that worker
- ...only, though shared web workers are a separate thing not all browsers support
- communicates using a message channel
- lifetime is limited to page lifetime
They can't share scope/state with the page, can't see the DOM, can't see window or document, have no localStorage, and so can't use libraries/frameworks that rely on any of those.
They can do network communication, though - e.g. XHR, fetch, and WebSockets.
See also https://nolanlawson.github.io/html5workertest/
Number crunching is just the more extreme example - offloading things like XHR/fetch or IndexedDB work to a worker can also help keep user interaction responsive.
https://developer.mozilla.org/en-US/docs/Web/API/Worker
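A minimal sketch of the message passing involved - 'worker.js' is a hypothetical filename:
// main script
var worker = new Worker('worker.js');
worker.onmessage = function (e) { console.log('result:', e.data); };
worker.postMessage({numbers: [1, 2, 3, 4]});

// worker.js - no DOM or window here, just messages in and out
onmessage = function (e) {
  var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
  postMessage(sum);
};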
Worklets
Service workers
Background sync
WebAssembly
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
WebAssembly is an intermediate language, interpreted by a VM that runs sandboxed in your browser, accessible by JS.
That language is designed as a stack machine, and should be simple to run efficiently and to optimize JIT-wise (mostly - note there is some tradeoff between startup overhead and how far you take those optimizations).
It's sometimes also nice that wasm actually has integer types (it has i32, i64, f32, f64), unlike JS.
Speed
While not native code (e.g. for safety reasons), for number crunching it will typically parse and execute faster than javascript does.
From some random googling, people report it as somewhere between 40-90% of native speed, depending on the actual code
Don't expect the high end from anything but manufactured benchmarks, but you may get 70% moderately easily.
How much faster it is than plain JS depends on the type of task, and JiT optimization possible.
In some cases JS runs fast enough that wasm might actually be worse, though in general it seems wasm may be 30% faster than JS (ballpark), and in some more extreme cases, JS may be factors slower.
It's intended for number crunching and inner-loop-optimization sort of things, where it can be faster than JS execution (and until you understand exactly where the slowness comes from, the best way to tell is just to test it).
You could compile any language to it, though in practice you probably want one that has been integrated nicely. C and Rust seem common choices in examples, and AssemblyScript exists to make it easier to craft yourself.
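A minimal sketch of the JS side of loading a module - 'crunch.wasm' and its exported 'sum' function are made up:
WebAssembly.instantiateStreaming(fetch('crunch.wasm'), {/* imports, if the module needs any */})
  .then(function (result) {
    var sum = result.instance.exports.sum;      // a hypothetical exported wasm function, callable from JS
    console.log(sum(40, 2));                    // i32 arguments and return value
  })
  .catch(function (err) { console.log('wasm load failed:', err); });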
Limitations
- passing structured data in and out takes time. This means short tasks may not be faster
- still in development, feature sets unclear (e.g. multithreading or not(verify))
- There are limitations the VM may enforce, like not letting you allocate all the memory you ask for. (verify)
- does not officially have threads just yet (verify)
To thread or not to thread?
Design decisions around WebAssembly threading include:
- the VM starts threads, not you
- blocks of shared memory
- atomic operations (helps locks, barriers and such make sense)
- wait/notify
Note: Shared memory seems to rely on JS's SharedArrayBuffer, which was disabled in most browsers in reaction to the Meltdown and Spectre exploits, and apparently Safari has not resolved this yet[12].
https://blog.scottlogic.com/2019/07/15/multithreaded-webassembly.html
dev stuff
source maps
For debugging ease.
In production you often want minified/combined/transcompiled JS/CSS, but that makes line numbers pretty meaningless to finding the real bug.
sourceMappingURL and sourceURL let you provide the original code (and a name for otherwise-anonymous sources, respectively).
Debug tools such as those in Chrome and Firefox then, basically, know where to point in the original JS/CSS as well.
Source maps
Can be specified
- X-SourceMap HTTP header
- sourceMappingURL=/path/to/script.js.map
- sourceMappingURL=data:application/json;base64,eyJ2ZX...
the source map itself is a JSON file
http://blog.teamtreehouse.com/introduction-source-maps
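For example, the comment form is just a line at the end of the generated file (the path here is made up):
//# sourceMappingURL=/static/app.min.js.map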
Soft and hard reload
Early days: sequentially throwing things on a global pile
The actual inclusion bit - inline or not
fake scopes for less clobbering
pre-standard modules; stuff that involves script loaders
CommonJS
AMD
UMD
script loader implementations
ES6 modules
load ordering, and delayed loading
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
loaded/ready state and events
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Reading
Unsorted
Geolocation API
This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, or tell me) |
Requires
- browser support
- basically everything. Desktop tends to fall back to IP based geolocation.
- user consent
- access to navigator.geolocation functions should cause the browser to ask for permission.(verify)
- secure context, i.e. for the page to be served via HTTPS (it didn't always use to be required)
- (you could check that window.location.protocol == "https:" to avoid a JS error(verify))
The methods this might be backed by include:
- GPS receiver
- in cities, expect at most ~3m in a relatively open area, at most 10m with trees and tall buildings around, and basically nothing indoor (may fall back to any of the below)
- tends to update no faster than once per second
- WiFi based positioning
- (based on seeing known Wifi hotspots that were previously detected by devices that did have their GPS on at the time) [13]
- assume you won't often get better than 20m accuracy - and no coverage unless you're in a city
- though indoor it may be down to ~5m if your specific AP happens to be submitted
- cell tower mapping (same idea as with WiFi, but using cell phone towers)
- ...which cover a much larger area, so assume this won't get better than 1km
- IP geolocation (basically as a fallback, particularly as the above three will probably only appear on mobile devices)
- assume you usually won't get better than 1km, sometimes better and sometimes a lot worse
- Mostly based on how addresses are assigned and divided. Sometimes that's predictable, sometimes not at all.
- so you can probably place someone in the right country, regularly in the right state/province, sometimes within dozens of km (so within a city or two), and occasionally down to a university campus or such (1km, but only because universities often got a large fixed subnet, and someone will submit that to the lookup database)
- the funny thing is that even if the location is relatively accurate, you won't know that - unless the lookup tells you the precision, you can't know where it falls on the scale of 100m to 100km.
API
navigator.geolocation.getCurrentPosition( success_callback, error_callback, options )
- one time, not blocking; won't call the success callback until a position is available - or calls the error callback instead, if it hits the timeout first.
navigator.geolocation.watchPosition( success_callback, error_callback, options )
- will call the success callback whenever position changes - or error at timeouts or errors
- until stopped with clearWatch()
- (behaviour is probably more refined than getCurrentPosition on a setInterval would be)
- the rate of callbacks will vary with the method of location being used
- you may want to assume you may not get either callback until the timeout
- GPS generally is 1 per second (...once it has a fix)
The options argument is an object with [14]
- maximumAge - maximum age of a cached position, in milliseconds
- e.g. 0 means "don't use cached position at all, only get new values" (default)
- Infinity means "allow any cached position"
- timeout
- maximum length of time the device should take, in milliseconds
- Infinity means "don't do callback until we have an actual position" (default)
- keep in mind that a cold or warm GPS start may take a while
- enableHighAccuracy
- if true, it may use a slower but more accurate method, e.g. wait for a GPS fix(verify)
- false allows a faster and less-accurate option (default)
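A minimal sketch of both calls with such an options object (the values here are arbitrary):
var options = {enableHighAccuracy: true, timeout: 10000, maximumAge: 30000};

navigator.geolocation.getCurrentPosition(
  function (pos) { console.log(pos.coords.latitude, pos.coords.longitude, '±' + pos.coords.accuracy + 'm'); },
  function (err) { console.log('geolocation error', err.code, err.message); },
  options
);

var watchId = navigator.geolocation.watchPosition(
  function (pos) { console.log('moved to', pos.coords.latitude, pos.coords.longitude); },
  function (err) { console.log('geolocation error', err.code, err.message); },
  options
);
// ...later:
navigator.geolocation.clearWatch(watchId);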
Notes:
- depending on the actual source of geolocation, the interval between position changes will vary
- so useful timeout/maximumAge values will vary along (verify)
- a lower timeout is useful to have watchPosition report something, even if it's just to report "waiting for first fix..."
- though note that
- None of the methods will update faster than once per second (GPS seems potentially the fastest, at once per second - once it has a fix)
- ...so a timeout below 1000ms is pointless
https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/getCurrentPosition
The success callback gets a GeolocationPosition object, which has
- .coords [15]
- .latitude
- .longitude
- .accuracy
- .altitude (can be null)
- .altitudeAccuracy (can be null)
- .heading (can be null if not supported, and NaN when speed==0)
- .speed (can be null)
- .timestamp
The error callback gets an error object, with a .message and a .code that is one of:
- 0: unknown
- 1: PERMISSION_DENIED (message may be "User denied geolocation prompt")
- can be caused by non-secure context?(verify)
- can sometimes be caused by privacy features?(verify)
- 2: POSITION_UNAVAILABLE (why?(verify))
- 3: TIMEOUT
- presumably can happen before a fix, particularly with enableHighAccuracy==true(verify)
- might also happen if the page has been out of focus (if it means hitting a timeout first)
- so you probably want to be able to ignore a few of these
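A minimal sketch of an error callback handling those codes:
function onGeoError(err) {
  switch (err.code) {
    case err.PERMISSION_DENIED:    console.log('user (or policy) said no'); break;
    case err.POSITION_UNAVAILABLE: console.log('no position source available'); break;
    case err.TIMEOUT:              console.log('no fix yet - maybe just wait for the next callback'); break;
    default:                       console.log('unknown geolocation error');
  }
}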
See also: