Javascript notes - browser related, APIs

Global objects

Navigating and altering the DOM

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

For context, Node types include

  • Element (HTML element)
  • Text
  • Comment
  • CDATA
  • and a few special-cased things

Mostly we care to work with Elements, but sometimes we care about the other node types too.


Selecting stuff

getElementById(id)
searches subtree for element with given id= attribute
getElementsByName(name)
searches subtree for elements with given name= attribute
getElementsByTagName(tagname)
searches subtree for elements with given tag name
getElementsByClassName(classnames)
searches subtree for elements with given class name(s) (string can contain space separated classes)

Supported since DOM1, so basically forever.


querySelector

search for the first match for a CSS query
returns Element

querySelectorAll

search for all matches for a CSS query
returns NodeList

You would probably start searching on document.querySelector, but every Element has these functions.

Supported since 2010ish[1]
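
A minimal sketch of using these (the id and class names here are made up for illustration):

// assumed markup:  <ul id="menu"> <li class="nav-item">...</li> ... </ul>
let menu  = document.getElementById('menu');                 // one Element (or null)
let items = menu.getElementsByClassName('nav-item');         // live HTMLCollection
let first = document.querySelector('#menu .nav-item');       // first match (or null)
let all   = document.querySelectorAll('#menu .nav-item');    // static NodeList
all.forEach( el => console.log( el.nodeName ) );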


Element objects have things like

  • id
  • nodeName
  • nodeValue
  • getAttribute(str)


Navigation

  • parentNode
  • childNodes (a NodeList)
  • nextSibling (any node, e.g. including Text, Comment, Element)
  • nextElementSibling (an Element)
  • previousSibling
  • previousElementSibling


Events

  • addEventListener(type, listener)
  • removeEventListener(type, listener)

Altering

  • setAttribute(str,val)
  • removeAttribute(str)
  • appendChild()
  • removeChild()
  • Element.innerHTML (originally a nonstandard convenience, widely implemented, and since specified)
Keep in mind that manipulating and appending Elements is often faster, because every innerHTML set requires a parse
  • Element.insertAdjacentHTML[2]
similar to but preferred over innerHTML
since 2012ish
  • Node.textContent[3]
faster (and safer) when just inserting text, but has some practical footnotes
since 2011ish
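
As a sketch of the difference (the #log container is hypothetical):

let log = document.querySelector('#log');

// element manipulation: no HTML parsing, and untrusted text stays text
let line = document.createElement('div');
line.setAttribute('class', 'logline');
line.textContent = 'hello <b>there</b>';       // shows the literal text, tags and all
log.appendChild(line);

// insertAdjacentHTML / innerHTML: shorter to write, but each call parses HTML
log.insertAdjacentHTML('beforeend', '<div class="logline">hello there</div>');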

On events

In particular IE didn't care about standards and did its own thing, so before we could consider ignoring it (around 2013), you'd basically be implementing your own event library. So there was a good argument for using someone else's, or a fuller javascript library that also handled other browser variation.


Varied events

Propagation

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Events propagate two ways (basically a treewalk), first by being captured down the tree (root first, on its way to the target), then bubbling up from the target to the root element. Note that W3's terms argue more from the perspective of the node itself than from an ongoing process of treewalking.

A handler registered for the capture phase is called before the event reaches the target (and before any bubble-phase handlers).


Controlling propagation

Some events are cancelable (like clicks), some are not (like losing focus) - cancelable meaning your handler can suppress the browser's default action with event.preventDefault() (more below).

Separately, any event handler can stop propagation to further event handlers
do so with event.stopPropagation()


Whether you want to stop propagation is something you can usually argue per type of action. For example

when you want things to follow your mouse, you may want mousemoves to not be blocked when another element happens to capture it, whereas
when you click a button, you probably want only the frontmost visible thing to react to it, not things under it
keep in mind that aborting all handling of the current event may break complex interfaces that rely on multiple independent handlers


There are default actions for some events, to allow the browser to do sensible things like selecting text. These are independent of your own functions and always fire unless you call an event's preventDefault() before it bubbles back up to them.



(There have been ways of stopping propagation that aren't stopPropagation, like event.cancelBubble, event.preventCapture(), and return false in the event handler. None of these are standard, and behaviour differs by browser)
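
As a sketch of both (the button and link references are hypothetical):

// stop this click from reaching ancestors' click handlers
button.addEventListener('click', function(event) {
  event.stopPropagation();
});

// let the click propagate, but suppress the browser's default action (following the href)
link.addEventListener('click', function(event) {
  event.preventDefault();
});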

Event target

Consider the following as test code:

function evtest(event) {
  alert(event.currentTarget.nodeName);
  alert(event.target.nodeName);
}

window.onload = function() {
  document.getElementsByTagName('body')[0].addEventListener('click', evtest);
}

target and currentTarget can be different for any event that bubbles up through the DOM, which are most of them (exceptions include focus, blur, mouseenter, mouseleave)

  • event.target
element the event was dispatched from
basically which in a text document is likely to be a p, div, button, img, or such - the foreground thing your cursor was on
it's effectively the place that actually started the event happening
  • event.currentTarget is
element that the eventListener was attached to
body, in the above
it's effectively the thing that really listens to the event
  • this
same as event.currentTarget (...within event handler callbacks)


Which is more interesting arguably depends on your code style.

It can e.g. make sense to attach listeners to every single DOM node you care to listen to

in which case the two may often be the same, though you may prefer .currentTarget (particularly if you stored some state on there for the handler to use)

Attaching event handlers

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Adding event handlers to DOM nodes

The most flexible, standard way is to use addEventListener (see e.g. [4])

you can set any amount
you can remove individual ones later [5]
identified by (eventtype, function), so you'd want to keep the function reference
(IE had its non-standard and different-behaved attachEvent instead)
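
For example (the element id and handler are made up):

function onSave(event) {
  console.log('clicked', event.currentTarget);
}

let btn = document.querySelector('#save');
btn.addEventListener('click', onSave);

// later - removal needs the same (type, function) pair, hence keeping the reference:
btn.removeEventListener('click', onSave);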


Assigning onsomething="code" inside HTML ("inline")

not standard
yet typically supported
you can set only one (because you assign it)
nor is it a clean way of structuring things (separation of responsibilities, escaping, more)
but which you can assume all widely used browsers will support(verify).
Note they usually put these events in the bubbling phase.(verify)


Assigning domreference.onsomething=functionref; from a script

not standard?(verify)
yet typically supported
not nearly as dirty as shoving code in the HTML - but can be less future-proof when event handling becomes more involved
mostly the same as the previous, though you have more control over scope
you can set only one (because you assign it; if you want more, you'd want a wrapper, or to call what you found there before/after)
Note: You should use all-lowercase names for most-browser compatibility.


Using angular, jquery, or whatnot

  • because these will abstract away the browser muck


keyboard events

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

(For historical reasons) there is some inconsistency between

...the keycodes that are generated (varied slightly per browser)
...where exactly they are reported
event.charCode, event.keyCode, event.which - now considered deprecated (though sometimes better supported)
Since 2017ish[6] you can use event.key (with some footnotes)

As such, if you use a javascript library, see if it normalizes that muck and makes your life easier.



Keyboard event members include

.altKey, .ctrlKey, .metaKey, .shiftKey - whether Alt(/Option), Ctrl, Meta/Command, or Shift was down when this was pressed
.key - a string representing the key
see this list
note that
Basically includes shift processing (because keyboard layout differences mean you have to)
some things may report as 'Unidentified' (even if they exist in that list, e.g. Fn on my laptop), due to context-specific constraints
.location (helps distinguish keys that are there more than once, like alt, keypad)
.repeat - being held down?

Deprecated but will probably still work (for the time being):

.keyCode
.which
.charCode
.keyIdentifier
.keyLocation


See also:

URL related

URLSearchParams

To parse the current URL

For example

new URLSearchParams( window.location.search )

while I'm editing this page will give:

URLSearchParams(3) { title → "Javascript_notes_-_browser_related,_APIs", action → "edit", section → "9" }

That object acts like an iterable map, so similarly:

for (const [k,v] of new URLSearchParams( window.location.search ).entries()) {
   console.log(k, v);
}


To generate a parameter string

To start from scratch and eventually (e.g. take values from a form and) get a string you can feed to a GET or POST or such, consider:

let query_data = new URLSearchParams();
query_data.append('query', document.getElementById( 'query_field' ).value);

console.log(query_data.toString());

//and eventually something like
fetch( url, { method: "POST", body:query_data } )


https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams

Network related

XmlHttpRequest

XHR (a.k.a. AJAX around when it was new and hip) in itself was decently supported since 2006ish(verify)

XHR level 2 since ~2015ish[7] (if you ignore IE)

adds things like timeouts and progress events


Writing code for XHR that properly handles regular edge cases is a bit long, so people generally either have their own helper function, or use a library to make life easier. (Note that now there is also fetch(), which is more featured, though not necessarily that much shorter when you do all the error handling)



If you do do a little manual XHR handling, the following object members/functions may be interesting (a minimal example follows the list):

  • readyState
0: Uninitialized/Unsent (initial value, and set when manually abort()ed)
1: Opened (open() was successful)
2: Sent/Headers received: UA completed the request, response headers received, waiting for response data
3: Loading/Receiving: receiving message body chunks (this state may be reported several times, and only after the headers were received)
4: Loaded/Done: All data received
  • status and/or statusText
status (an int) and statusText (a string) must reflect the HTTP status when set.
They must be available when readyState is 3 or 4; otherwise they are unavailable and access should (!) raise an exception. Unavailability may also be caused by the UA never being able to parse a status out of the response.
  • setRequestHeader(header, value);
  • abort()
  • responseText or responseXML
  • getAllResponseHeaders() or getResponseHeader(header)
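
A minimal GET sketch using those members (the URL is made up, and real code would want more care):

let xhr = new XMLHttpRequest();
xhr.open('GET', '/api/thing');
xhr.setRequestHeader('Accept', 'application/json');
xhr.timeout = 5000;                                   // XHR2, milliseconds
xhr.onreadystatechange = function() {
  if (xhr.readyState !== 4)  return;                  // wait for Done
  if (xhr.status >= 200 && xhr.status < 300)
    console.log( xhr.responseText );
  else
    console.log( 'HTTP error', xhr.status, xhr.statusText );
};
xhr.onerror   = function() { console.log('network error'); };
xhr.ontimeout = function() { console.log('timed out');     };
xhr.send();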


See also


Data is null

Context being an AJAX call in Firefox (for me in jQuery and with getJSON, but it probably applies to all AJAX and other libraries).


One of the causes is that the fetch is against security policy (probably the same-source policy).

These rules vary between browsers, which may make this look like a bug.


Suggestions:

  • make the URL refer to the same domain (host? port? It may even just be tripping over the fact that you're using an IP instead of name to the same host)
  • build a proxy script
  • if the source is necessarily remote, try using a <script> block that has a call that hooks into your page (JSONP-style)


See also:


XMLHttpRequest cannot load 'URL'. No 'Access-Control-Allow-Origin' header is present on the requested resource
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Short version: You are doing a request from a page from siteA to a resource on siteB, and siteB does not actively say that's okay.

Which is what CORS prevents, and fixes.
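
That "saying it's okay" amounts to siteB sending CORS response headers, roughly like (the origin here is an example):

Access-Control-Allow-Origin: https://sitea.example

...or Access-Control-Allow-Origin: * to allow anyone. Requests with credentials or unusual methods/headers additionally involve a preflight OPTIONS request.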

Fetch

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Wide support since 2018ish (if you ignore IE) [8]


Fetch is intended as a modern variant of XHR, standardizing interplay with various other things that happened since XHR got defined - consider CORS, request headers, request body, cache interaction, whether to follow redirects, and such. See e.g. [9]


When I first looked at this, both XHR and Fetch had a few things the other couldn't do. https://stackoverflow.com/questions/35549547/fetch-api-vs-xmlhttprequest


Since XHR is more manual, Fetch is potentially a lot shorter to write, like

fetch( window.location ).then( result => result.text() ).then( text => console.log(text) );


The response body is a ReadableStream. You would often force it into text (like the example above) or JSON or such.


On error handling:

  • Writing proper error-handling code is as involved as with XHR, but it'll do a bit more in the process.
  • fetch()'s promise only rejects on network-level failures; HTTP error statuses (like 404 or 500) still resolve - instead, look at response.ok (or response.status).
This may make sense assuming you want more subtle handling of all the different statuses, but it may effectively force you to write that code always (or hope it doesn't matter)
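
A sketch of that kind of handling (the URL is made up):

fetch('/api/thing')
  .then( response => {
    if (!response.ok)                                  // e.g. 404 or 500: no rejection, check ourselves
      throw new Error('HTTP ' + response.status);
    return response.json();
  })
  .then( data  => console.log(data) )
  .catch( error => console.log('fetch failed:', error) );   // network errors, plus the throw above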


https://css-tricks.com/using-fetch/

https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

XDomainRequest

Basically an IE8 and IE9 drop-in for XHR with pre-standard CORS.

Not present in IE10 or IE11, or any other browser at all.

It's useful for libraries to get CORS behaviour in a wider set of browsers, but not to use directly.

Storage related

Cache API

Web Storage API: localStorage and sessionStorage

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Good browser support since 2010 [10]



The Storage interface is basically a key-value store (keys and values are both strings) that the browser will persist between page loads.

There are two of those:

  • sessionStorage - which doesn't persist across browser runs (still useful for repeated visits within a browser run)
  • localStorage - which does persist across browser runs (useful for more uses, what many programmers opt for)


Mainly there's

  • setItem(key, val)
  • getItem(key)
  • removeItem(key)
  • length
  • clear()


Notes:

  • browsers will strive to keep it around for a while, but may still consider it transient storage and so clean it up out of your control
perhaps more easily so on mobile.
  • while the storage is shared per origin, there are no formal definitions about concurrent access
assume alterations are atomic but you don't get transactions
  • events are fired on storage change
which means this is also usable as communication between different tabs from the same origin
  • The current top-level browsing context keeps storage for each origin (the wording roughly means "it's stuck on the window, and separated for frames").
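
A short sketch (the key name is made up):

localStorage.setItem('lastQuery', 'hello');            // persists across browser runs
console.log( localStorage.getItem('lastQuery') );      // "hello"   (null if never set)
localStorage.removeItem('lastQuery');

// storage events fire in *other* same-origin tabs/windows when storage changes
window.addEventListener('storage', function(event) {
  console.log(event.key, event.oldValue, '->', event.newValue);
});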


See also:

IndexedDB

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB

Some comparison

Not standard, but you may care about...

Execution related

Web workers

  • belong to a tab/window
  • live no longer than that tab
  • are targeted at parallelism


Service workers

  • are separate
  • have their own lifecycle logic
  • are targeted at offline support


Web Workers

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


"So I heard javascript has threads now"

tl;dr: lol no.

It's more like a "foreign function interface" that calls JS from JS, and not a concurrency API in any practical sense.


Have been around since 2010, and supported basically everywhere since 2014ish [11]


Given the right programming models, JS gets pretty far with a single thread.

The JS model pushes us toward cooperative/event-based code rather than pre-emptive multitasking, so we should never do anything that takes more than a few milliseconds in a synchronous manner, because it will hold up your one thread - i.e. absolutely everything, which in a browser includes rendering and interaction.

So number crunching is right out (unless you have some aggressive, guaranteed yielding. Technically possible, but awkward).


Web Workers are sold as the answer for any computation that takes long enough that it might noticeably hold up that main thread.

It seems some browsers can even send work to multiple cores(verify).

Workers can do their own communication, e.g. XHR, fetch, and WebSockets. And, being unrelated to the document that created them, they apparently also aren't limited by that origin's CSP policies.[12]

Documentation calls it a JS thread but functionally it's so isolated you might as well consider it a separate process.


What they do not tell you up front is that

  • you cannot share scopes/state,
  • you cannot see the DOM, or layouting
  • you cannot see window, document, or console,
  • you cannot use libraries/frameworks
  • you cannot access shared state like localStorage [13] (they can use indexeddb but apparently it's isolated?(verify))
  • you cannot



So it's mainly useful for coarse-grained, mostly self-contained work - and not that much else.

Most of the demos seem to be either a toy fibonacci example, or webcanvas (but even that comes with footnotes about sharing data).


A Web Worker

is a separate single thread of JS running
tied to communicate with the script that launched that worker
...assume only that launcher -- shared web workers are a separate thing that not all browsers support
communicates using a message channel
lifetime is limited to page lifetime

See also https://nolanlawson.github.io/html5workertest/


For a basic example, your main JS may contain

// main.js
const myWorker = new Worker('worker.js');      // starts the worker thread

myWorker.postMessage('world');                 // send it a message

myWorker.onmessage = function(e) {             // receive messages it posts back
  console.log(e.data);                         // "Hello world"
}

And that worker.js may contain:

// worker.js - note: no window or document here; self is the worker's global scope
self.onmessage = function(e) {
  self.postMessage('Hello '+e.data);
}


https://developer.mozilla.org/en-US/docs/Web/API/Worker


Worklets

Service workers

Background sync

WebAssembly

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

WebAssembly is an intermediate language, interpreted by a VM that runs sandboxed in your browser, and which is accessible by JS.


That language is designed as a stack machine, and should be simple to run efficiently and to JIT-optimize (...mostly. Note also there is some tradeoff between startup overhead and the possible optimizations).


It's sometimes also nice that wasm has actual integer types (i32 and i64, alongside f32 and f64), unlike JS.
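
The JS side of loading and calling a module can be fairly short. A sketch, where the file name and its exported add() function are assumptions:

// assumes module.wasm exports  add(i32, i32) -> i32,  and is served with the right MIME type
WebAssembly.instantiateStreaming( fetch('module.wasm') )
  .then( result => {
    console.log( result.instance.exports.add(2, 3) );   // 5
  });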


Speed

While not native code (e.g. for safety reasons), for number crunching it will typically parse and execute faster than javascript does.

From some random googling, people report it as somewhere between 40-90% of native speed, depending on the actual code.

You should probably not expect the high end from anything but manufactured benchmarks, though.


How much faster it is than plain JS depends on the type of task, and JiT optimization possible.

In some cases, JS already runs fast enough that wasm might add nothing, or even be worse, though in general it seems wasm may be 30% faster than JS (ballpark), and in some more extreme cases, JS may be factors slower.


Purpose


Certain kinds of number crunching can be faster than in JS.

That said, JS execution is a little weird due to its history, and choices, so until you understand exactly where the slowness comes from, the best way to tell if it does or doesn't help is probably just to test it.

Still, it can be great to have a predictably-fast thing that runs in all browsers. (It was introduced in 2015 in a minimal-viable-product state and is still under development; browser support has been good since 2017[14].)

And like JS, there are reasons to use it on the server side too.



Also, you could cross-compile various things to it, so theoretically any existing code could end up in the browser. Yet most code that isn't making a web page doesn't really belong in the browser, so this is arguably mostly for backend code, and then perhaps only to reduce the number of separate runtimes?

...though in practice you probably want a language that has been integrated nicely.


In theory anything with an LLVM backend could target wasm (emscripten itself covers C and C++), but we mostly see C and Rust in examples.

And there is AssemblyScript (a TypeScript-like language) to make it easier to craft wasm yourself.



Limitations

  • passing structured data in and out takes time
one of a few reasons short tasks may not be faster
  • still in development, feature sets unclear (e.g. multithreading or not(verify))
  • There are limitations the VM may enforce, like not letting you allocate all your memory. (verify)



To thread or not to thread?

design decisions around WebAssembly threading

  • the VM starts threads, not you
  • blocks of shared memory
  • atomic operations (helps locks, barriers and such make sense)
  • wait/notify

Note: Shared memory seems to rely on JS's SharedArrayBuffer, which was disabled in most browsers in reaction to the Meltdown and Spectre exploits, and apparently Safari took longer to resolve this[15].

https://blog.scottlogic.com/2019/07/15/multithreaded-webassembly.html

dev stuff

source maps

For debugging ease.

In production you often want minified/combined/transcompiled JS/CSS, but that makes line numbers pretty meaningless for finding the real bug.

sourceMappingURL and sourceURL let you provide the original code (and name sources for anonymous functions, respectively).

Debug tools such as those in Chrome and Firefox can then, basically, point at the original JS/CSS as well.


Source maps

Can be specified

  • an X-SourceMap (or SourceMap) HTTP header
  • a //# sourceMappingURL=/path/to/script.js.map comment at the end of the file
  • an inlined //# sourceMappingURL=data:application/json;base64,eyJ2ZX... data URI

the source map itself is a JSON file
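
As a rough idea of its shape (values made up, and the mappings string is base64-VLQ encoded):

{
  "version": 3,
  "file": "script.min.js",
  "sources": ["src/a.js", "src/b.js"],
  "names": ["myFunction", "someVar"],
  "mappings": "AAAA,SAASA,..."
}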

http://blog.teamtreehouse.com/introduction-source-maps


Soft and hard reload

The mess that is modules (and related scope details)

Early days: sequentially throwing things on a global pile

The actual inclusion bit - inline or not

fake scopes for less clobbering

pre-standard modules; stuff that involves script loaders

CommonJS
AMD
UMD
script loader implementations

ES6 modules

load ordering, and delayed loading

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.
loaded/ready state and events
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Reading

Unsorted

Geolocation API

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

For the concept, see Geolocation

This is about asking the browser, the client side, to do so for us - to then probably tell a server about it.


Requires

  • browser support
basically everything. Desktop tends to fall back to IP based geolocation.
  • user consent
access to navigator.geolocation functions should cause the browser to ask for permission.(verify)
  • secure context, i.e. for the page to be served via HTTPS (it didn't always use to)
(you could check that window.location.protocol == "https:" to avoid a JS error(verify))




API

navigator.geolocation.getCurrentPosition( success_callback, error_callback, options ) 
one time, not blocking; won't call the callback until the position is available - or not at all, if it hits the timeout first.


navigator.geolocation.watchPosition( success_callback, error_callback, options  )
will call the success callback whenever position changes - or error at timeouts or errors
until stopped with clearWatch()
(behaviour is probably more refined than getCurrentPosition on a setInterval would be)
the rate of callbacks will vary with the method of location being used
you may want to assume you won't get either callback until the timeout
GPS is generally 1 per second (...once it has a fix)


the options argument is an object with [16]

  • maximumAge - maximum age of cached position, in milliseconds
e.g. 0 means "don't use cached position at all, only get new values" (default)
Infinity means "allow any cached position"
  • timeout
maximum length of time the device should take, in milliseconds
Infinity means "don't do callback until we have an actual position" (default)
keep in mind that a cold or warm GPS start may take a while
  • enableHighAccuracy
if true, it may take a slower method, e.g. wait for GPS fix(verify)
false allows a faster and less-accurate option (default)


Notes:

  • depending on the actual source of geolocation, the interval between position changes will vary
so useful timeout/maximumAge values will vary along (verify)
a lower timeout is useful to have watchPosition report something, even if it's just to report "waiting for first fix..."
though note that
  • None of the methods will update faster than once per second (GPS seems potentially the fastest at once per second - once it has fix)
...so a timeout below 1000ms is pointless


https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/getCurrentPosition


The success callback gets a GeolocationPosition object, which has

  • .coords [17]
    • .latitude
    • .longitude
    • .accuracy
    • .altitude (can be null)
    • .altitudeAccuracy (can be null)
    • .heading (can be null if not supported, and NaN when speed==0)
    • .speed (can be null)
  • .timestamp


The error callback gives you an object with a .message and a .code[18]

  • 0: unknown
  • 1: PERMISSION_DENIED (message may be "User denied geolocation prompt")
can be caused by non-secure context?(verify)
can sometimes be caused by privacy features?(verify)
  • 2: POSITION_UNAVAILABLE (why?(verify))
  • 3: TIMEOUT
presumably can happen before a fix, particularly with enableHighAccuracy==true(verify)
might also happen if the page has been out of focus (if it means hitting a timeout first)
so you probably want to be able to ignore a few of these
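
Putting that together, a sketch:

function onPosition(pos) {
  console.log('lat', pos.coords.latitude, 'lon', pos.coords.longitude,
              'accuracy (m)', pos.coords.accuracy);
}

function onError(err) {
  if (err.code === 3)  return;            // TIMEOUT - often fine to ignore a few of these
  console.log('geolocation error', err.code, err.message);
}

let watchId = navigator.geolocation.watchPosition( onPosition, onError, {
  enableHighAccuracy: true,    // e.g. prefer waiting for a GPS fix
  timeout: 10000,              // per attempt, in milliseconds
  maximumAge: 5000             // a cached position up to 5s old is acceptable
});

// and later, to stop:
// navigator.geolocation.clearWatch( watchId );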




See also:

Notifications API

Push API

WebRTC

WebRTC leaks

Bookmarks/favorites