Javascript notes - syntax and behaviour

Revision as of 18:36, 23 June 2021

Optional function parameters

Javascript fills in undefined values for any omitted parameters, so all are optional.

Most code will assume you're using it sensibly, and trip over absence of some parameter. So it's optional-by-loose-contract.

You could write something along the lines of:

function upper(s,alsoStrip) {
  var r=s.toUpperCase();
  if (alsoStrip!==undefined) {r=r.strip();}
  return r;
}

If you instead tested truthiness (if (alsoStrip) {...}), then undefined, null, false, and 0 would all coerce to false. In other cases you may specifically want 0 to be a value you can pass in, in which case you would probably check against undefined as the 'unspecified' value, as above.


What 'this' is bound to

'this' is bound to different things in different contexts, which seems a little magic at first.

For example:

  • in object methods, this is the object
  • in functions used as event handlers, this is the DOM element they are set on
    but there are details, see events
  • in non-method functions, this is the global object (in browsers meaning window)
    ...but not in strict mode
  • ...more?

See http://developer.mozilla.org/en/docs/Core_JavaScript_1.5_Reference:Operators:Special_Operators:this_Operator


You can take a basic function definition, such as:

function sorta_class() {}

and do

thing = new sorta_class()
The use of new means creating a new instantiated Object, in that
  • you use them that way
  • you can use this in that constructor
  • you can add functions to that sorta-class's prototype (which also lets them use this)
  • prototype inheritance stuff works

Half of that demonstrated:

function sorta_class(i) { this.i=i }
sorta_class.prototype.alerti = function() { alert(this.i); }

Forgetting new was a mistake - it doesn't fail but behaves differently (e.g. this doesn't refer to the object). So since ES6 it's cleaner to use:

class sorta_class { /* better_class, really */
  constructor(i) {
    this.i = i;
  }
  alerti() {
    alert(this.i);
  }
}

after either of those you can do:

thing = new sorta_class(5);

The difference beyond that syntactic sugar:

  • ES6 gives you super()
  • ES6 doesn't let you forget the use of new
    before ES6 you could also directly invoke the thing. Often didn't make much sense, but you could.
  • function definitions are hoisted, meaning they act as if cut-and-pasted to the top and evaluated first, so order of declaration doesn't matter,
    whereas classes only exist after execution has reached and passed their definition
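Both demonstrated, including what happens when you forget new (the names here are just for illustration):

```javascript
function F(i) { this.i = i; }                 // pre-ES6 constructor function
class C { constructor(i) { this.i = i; } }    // ES6 class

new F(1).i;  // 1
new C(1).i;  // 1
// F(1);     // no error, but 'this' is not a fresh object - the old pitfall
// C(1);     // TypeError: class constructors can't be called without 'new'
```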

Interesting details and a few gotchas

Easy regexp/substring test

//given the following string:
var s = 'foo';
/oo/.test(s)            // ...is easier to type than...
s.indexOf('oo') != -1

Objects as (hash)maps

Any object (including browser DOM objects) acts like a hash map.

That is, you can set and get any attribute and use it for data of your own. Sometimes makes for easy hacks.

Arguably you should only do this on {}, or new Object() informed of its members, just because on objects you don't know thoroughly you may overwrite members and so break things. Same for the DOM, plus you may cause the browser to do interesting things.

A caveat to using objects as (hash)maps is that keys are always strings.

Don't use objects as keys, because basically any non-string used as a key (including numbers) will effectively be toString()'d first (and for objects that means "[object Object]").

Create a new object:

var o=new Object();
o['b']=5;  // arbitrary strings
o.a='3';   // this style accesses the same data, but requires syntactical validity
o[1]=2;    // implies "1"

for (var i in o)  alert(''+i+': '+o[i])

You can test for presence/absence by doing an identity comparison against undefined, e.g.:

var val=o[4];
if (val===undefined) alert('not there')

...except that you can also store undefined as a value, and then this test can't distinguish the two. In many cases you won't need to store undefined, especially when you also have null, and null!==undefined. But: null==undefined, so it is probably worth implementing a few simple wrapper functions around this to avoid accidentally using == instead of ===.

Note: There is also o.hasOwnProperty(key), which tests for the key's presence regardless of its value.
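For example, hasOwnProperty is what lets you distinguish 'stored undefined' from 'not there at all':

```javascript
var o = {};
o.k = undefined;

o.k === undefined;              // true
o.missing === undefined;        // true - looks identical so far
o.hasOwnProperty('k');          // true - but this tells the two apart
o.hasOwnProperty('missing');    // false
```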

There is a minimal STL-like library (no algorithms, mostly just the basic collections), see [1].

for and objects

for (var x in obj) will iterate over object members (even if the object is an array).

This means any additional members (such as prototype functions added by libraries) will mess up iteration written with a foreach in mind, such as:

for (var e in arr) {

Instead, you want

for (var i=0; i<arr.length; i++) {

This pops up in different forms. element.childNodes is not actually an array, just an object (a NodeList) that happens to have nice index-based keys, so:

for (i in element.childNodes) alert(element.childNodes[i]);

...prints nodes and two non-element members: item and length. Instead, you should use:

for (i=0;i<element.childNodes.length;i++) alert(element.childNodes[i]);

String replace (by default) replaces only first case

A string-string replace will only replace the first match:

'aa'.replace('a','b') == 'ba'

You can make that global with a syntax-sugared regexp:

'aa'.replace(/a/g,'b') == 'bb'

When the thing to be replaced comes from a variable you'll have to compile it into a regexp more explicitly:

var pattern='a'; 
'aa'.replace(new RegExp(pattern,'g'), 'b') == 'bb'
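If that variable may itself contain regexp metacharacters (., *, and such), you would want to escape them first. A sketch - the helper name is my own (and note that recent JS also has String.replaceAll for the plain-string case):

```javascript
// escape regexp metacharacters so the pattern is treated literally
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

var pattern = 'a.';   // '.' would otherwise match any character
'a.axa'.replace(new RegExp(escapeRegExp(pattern), 'g'), 'b');  // 'baxa'
'a.axa'.replace(new RegExp(pattern, 'g'), 'b');                // 'bba' - oops, 'ax' matched too
```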

Page reload/navigate

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

I never know which to use. Are any of these incompatible?(verify)

It seems window is effectively interchangeable with document -- except of course in the presence of frames, since they may refer to different things.

  • location.reload()
    • if the document has POST information, the browser will resubmit, and ask the user about that
    • There is an optional parameter: true instructs the browser it should reload from the web server (which it may not necessarily do(verify)), and false means it should use the cache.
  • location.href = url
    • You can also assign to .location itself (and not its .href child), but that seems to be a bit of a compatibility hack that you cannot assume will always be there.
  • location.replace(url)
    • Similar to the last, but replaces the current history item (which is primarily handy when reloading this way)

sort() notes

(webdev search for javascript+sort)

JS sorting is a little peculiar.

For starters, basic sort is lexical:

sa=["11", "2", "1"];
sa.sort();   // ...yields ["1","11","2"]
// also:
na=[11, 2, 1];
na.sort();   // ...yields [1,11,2]

Yes, lexical even for numbers: the default comparator converts elements to strings and compares those.

Other sorters can be made by using comparison functions.

For example, a numerical sorter can be made with something like:

function naturalSort(a,b) { return (a-b); }

This is also a natural sorter of numbers-in-strings, because arithmetic coerces numbered strings (e.g. "2"-"11" == -9).

na.sort(naturalSort);   // ...yields [1,2,11]
sa.sort(naturalSort);   // ...yields ["1","2","11"]

You can apply whatever logic you want in the sort functions, sorting values arbitrarily, using external data, and whatnot. Example:

// given e.g. ta = [["John",11],["Smith",2],["Quu",0],["Norg",6]]
ta.sort(function(a,b){return a[1]-b[1]})
// ...yields [["Quu",0],["Smith",2],["Norg",6],["John",11]]

Reverse sorting can be done by inverting the logic:

function revNaturalSort(a,b) { return (b-a); }
// comparators should return a number, not a boolean:
function revLexSort(a,b)     { if (b<a) return -1; if (a<b) return 1; return 0; }

...or by using .reverse() on a list after sorting it.

helper stuff; some useful object functions / utility classes / class methods


I couldn't find a whitespace strip()per at the time (modern JS has trim()), so I made my own. Pretty basic, could possibly be faster with an index-based implementation.

String.prototype.strip=function(){return this.replace(/^[\s]+/,'').replace(/[\s]+$/,'');};
//and a test
alert('['+ '  a b  c  '.strip() +']');

Also because I needed it, a left-padder: (could use some cleaning; modern JS has padStart())

//left-pads a string up to some length, using a space,
//or the value of 'ch' if given (make that a string of length 1)
String.prototype.lpad=function(to,ch) {
  var c=' '; if (ch) c=ch;
  var n=Math.max(0,to-(''+this).length);
  var ret='';
  for (var i=0;i<n;i++) ret+=c;
  return ret+this;
};

Calling 'foo'.lpad(5) would give "  foo", calling 'bar'.lpad(6,'-') would yield "---bar".

int / float parsing
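The built-ins here are parseInt, parseFloat, and Number(); the main gotcha is how much trailing junk each tolerates:

```javascript
parseInt('42px', 10);   // 42    - parses leading digits, ignores the rest; always pass the radix
parseFloat('3.5abc');   // 3.5
Number('42');           // 42
Number('42px');         // NaN   - Number() is all-or-nothing
+'3.5';                 // 3.5   - unary plus is shorthand for Number()
```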

Regular expressions

Random things

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

By default you only have Math.random(), which returns 0.0 ≤ value < 1.0

In all cases, note the ≤ and < difference.

To get a random integer 0≤value<n, you can do something like:

function randomInt(n) {
  return Math.floor(n*Math.random());
}

Or m≤value<n:

function randomInt(m,n) {
  return Math.floor((n-m)*Math.random()) + m;
}

You may also care for things like:

function random_choice(fromlist) {
  return fromlist[Math.floor(Math.random() * fromlist.length)];
}

And (TODO: check for edge cases)

function random_weighed_choice(d) {
  /* given a string:weight mapping, return a weighted random string */
  let total=0, upto=0;
  for (const key in d) { total+=d[key]; }
  let r = Math.random()*total;
  for (const key in d) {
    let w = d[key];
    if (upto + w > r)
      return key;
    upto += w;
  }
}


To delay execution of code:

timerID = setTimeout(func, msdelay) [2]
clearTimeout(timerID) [3]

To trigger regularly until you clear the particular ID:

timerID = setInterval(func, msdelay) [4]
clearInterval(timerID) [5] 

In both cases, remembering the ID lets you cancel what you planned (though for a single timeout you often don't care to).

Event execution details

Using these effectively means moving the code out of the current code block, and into a future event.

JS is single-threaded, so any block of code delays others. Events in general (including timers, user input, and XmlHttpRequests) are added to a set. Whenever it's not busy working on another code block (in-page code or event handlers), JS chooses one (presumably with preference(verify)) and executes it.

There are also some interesting details when it comes to interaction with the browser - in part because the browser is multi-threaded.

setTimeout gets used for UI loops, and as magic spice that makes something display faster.

These days there are more elegant ways.

General notes

  • note that the delay on a timer is effectively a minimum delay.
  • The blocking, one-at-a-time nature of event execution implies that you can't expect timeouts and delays to be very exact, particularly if any code block can take a while.
  • Another reason is that the backing timer isn't guaranteed to be much higher resolution than perhaps 10ms (or Windows's 15.6ms).
  • ...which also sets a minimum delay you can expect.
  • Browsers may clamp these values to a minimum, such as 10, 4, 2, or 1 (often related to the minimum they can usefully handle). This means your setTimeout(f,1) may actually mean setTimeout(f,10) (unless 0 is a special case?(verify))
  • Interval timers
    • are not guaranteed to be particularly regular, nor guaranteed to fire when things are busy. The interval refers to when events are added, and each timer will only have one of its own events queued (presumably to avoid a number of problems). If the interval's event is still queued when a new one would be added (because the code takes long, or was postponed long enough because of other code), a new event will not be queued.
    • may have a minimum - that may be larger than the setTimeout minimum, and may depend on other things (e.g. Firefox's inactive tabs)
  • lower-delay timers can be imitated via HTML5's postMessage[6] (example)
  • You can pass in string code or a function reference.
    • the string method just means an additional eval() (apparently at global scope), and can sometimes be bothersome with quotes and escaping
    • with the function reference method, Firefox allows you to pass in parameters. Other browsers (at least IE) do not support this. To do this anyway you can use an anonymous function that does the actual call you want (or some variation of that)
    • it seems to be a fairly common mistake to keep parentheses on the function parameter. In the string style it gets evaled and therefore executed immediately (in the regular style it would get executed before even being passed in, but people rarely do that).
  • Within event code, 'this' refers to the window object
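The portable way to hand parameters along, then, is a wrapping anonymous function (greet here is just an example):

```javascript
function greet(name) { console.log('hello ' + (name || 'world')); }

// pass the reference - after 500ms, runs greet() with no arguments:
var t1 = setTimeout(greet, 500);

// wrapper, so we can pass parameters portably:
var t2 = setTimeout(function() { greet('bob'); }, 500);

// common mistake: greet('bob') runs immediately, and setTimeout gets its return value:
// setTimeout(greet('bob'), 500);
```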

On timers and onload

On being single-threaded

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

JS is fundamentally single-threaded - one piece of code runs at a time.

Any block of code delays others. Events in general (including timers, user input, and XmlHttpRequests) are added to a set and JS chooses one to run whenever it's not busy working on one.

This is potentially quite neat - it's harder to create race conditions and you don't often need locks, and if everything you do fits a cooperative multitasking event model, you avoid a lot of scheduling. Node.js is in part an experiment to see how far you can take that on the server side too.

Take-home: don't make humongous functions, your own event-loop handlers, etc.

Each window/tab gets its own context (often isolated - with some exceptions, e.g. windows created by another(verify)), and each behaves as if it is a single thread.

localStorage seems to violate things a bit, in that multiple contexts can access it at the same time. This could be a nice means of IPC -- except there is no locking or transaction mechanism across calls, so separate read-then-write sequences can race.

...and browser interaction

Javascript, the DOM, and page rendering are separate communicating things (in a way that is not quite standardized, so behaviour can vary).

Assume that DOM updates and page rendering catches up only after each chunk of code.

This means that for responsiveness, it makes sense to use setTimeout to queue code (a bit more cooperative-multitasking style). For example, when you want a "Loading..." message, you may find it gets shown later than you think, until you do something like:

function ajaxy_stuff() {
  loading_message.show();      //done now
  setTimeout(function() {
    fetch_and_show_results();  //(whatever the actual work is) placed into event queue; possibly (but not necessarily) done almost immediately after
  },1);
}

Note that in various browser implementations, setTimeout with a delay of 0 means "run immediately, and cause the browser's JS thread to yield"(verify) (as in cooperative multitasking), which lets the rendering thread catch up, quite possibly shows the message, and then continues with JS.

If you didn't do this, chances are it would show and hide the message only when the fetch is done.

Number formatting

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

You'd think there would be something like C's sprintf, C#'s string.Format, or even Java's number formatting.

You'd be wrong.

There are various libraries (e.g. [7], [8], and more) that people have created.

Some helpful functions were introduced relatively recently:

toFixed, toPrecision, toExponential and toLocaleString (wide support since 2009ish?[9])

Number.toFixed(x) avoids scientific notation

(1.2).toFixed(2) == '1.20'
(1.2).toFixed(0) == '1'
(1201).toFixed(2) == '1201.00'
(1201).toFixed(0) == '1201'

Number.toPrecision(x) gives x significant digits:

(1.2).toPrecision(1) == '1'
(1.2).toPrecision(5) == '1.2000'
(1201).toPrecision(1) == '1e+3'
(1201).toPrecision(5) == '1201.0'

Number.toExponential(x) forces scientific/exponential notation even when it is not necessary:

(1.2).toExponential(1) == '1.2e+0'
(1.2).toExponential(5) == '1.20000e+0'
(1201).toExponential(1) == '1.2e+3'

toLocaleString() uses locale-specific thousand/digit separators: (use for final presentation only, since you can't assume they'll parse back)

(1201).toLocaleString() == '1.201'   // (...for me, that is)

There is a relatively established hack via Math.round():

Note that the result of the below is still a number, and that half the trick is that JS's own display already limits shown precision and chops trailing zeros.

Math.round(1.234*10)/10       == 1.2
Math.round(1.234*100)/100     == 1.23
Math.round(1.234*1000)/1000   == 1.234 
Math.round(1.234*10000)/10000 == 1.234 
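Wrapped up (round_to is my own name for it; note the result is still a Number, not a formatted string):

```javascript
function round_to(x, digits) {
  var f = Math.pow(10, digits);
  return Math.round(x * f) / f;
}

round_to(1.234, 1);  // 1.2
round_to(1.234, 2);  // 1.23
round_to(1.236, 2);  // 1.24
```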

date and time

A little hairy. In general, when handling dates, consider using libraries (e.g. Moment.js, Date.js).

What I need most of the time is one of:

  • Date.now(): milliseconds since 1970 epoch.
  • new Date() without arguments constructs a Date for the current time
  • new Date() with a specific date as argument is one of:
    • new Date(number) - parsed as milliseconds since 1970 epoch, e.g. as fetched from Date.now()
    • new Date(year, month[, date[, hours[, minutes[, seconds[, milliseconds]]]]]) - constructed as such
    • new Date(dateString) - handed to Date.parse, which understands:
      • a basic subset of the ISO 8601 formats [10] (mostly fixed to Zulu?)
      • Conventionally also RFC 2822 style [11] but that's not standard.


  • A bunch of specific-format outputs (including ISO8601) but no strftime style formatter function (), you have to roll your own or find a library
is made more interesting because of the next two points.
  • month is 0-based
  • defaults are 0 for various things (verify)
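For example (mind the 0-based month):

```javascript
var ms = Date.now();                       // milliseconds since the 1970 epoch
var a  = new Date(2021, 5, 23);            // 23 June 2021 - month 5 is June, because months are 0-based
var b  = new Date('2021-06-23T18:36:00Z'); // ISO 8601 string, parsed as UTC
var c  = new Date(ms);                     // back from a millisecond count
```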

See also:

Data structure stuff

Array ≈ stack ≈ queue

Given that:

  • push() is 'add to end'
  • pop() is 'take from end'
  • shift() is 'take from start'
  • unshift() is 'add to start'

...you can use an array as a stack using push() and pop()

...and as a queue using push() and shift() (or pop() and unshift())

They're probably not the fastest possible implementations of said data structures, but they can be convenient.

Also, it doesn't seem to be a linked list, so you can't efficiently take things from the middle, but for small arrays it's fine, and the simplest call is probably splice, specifically ary.splice(i, 1) to remove one element at index i.


Set-of-string operations

One general programming trick is using the fact that hash keys are by definition unique to create set operations, like the following.

However, since hash keys are always strings in JS, this isn't type-robust - this only works when all values are strings, or rather, when all values don't lose meaning when you toString() them, because that's effectively what it will do.

//Unique-ify list. (set-in-list)
function unique(l) {
  var o=new Object(),ret=[];
  for(var i in l){o[l[i]]=1}
  for(var k in o){ret.push(k)}
  return ret;
}

function union(l1,l2) {
  var o=new Object(),ret=[];
  for(var i in l1){o[l1[i]]=1}
  for(var i in l2){o[l2[i]]=1}
  for(var k in o){ret.push(k)}
  return ret;
}

function intersect(l1,l2) {
  var o=new Object(), ret=[];
  for (var i in l1) o[l1[i]]=1; //the value is ignored
  for (var i in l2)
    if (o[l2[i]]!==undefined)
      ret.push(l2[i]);
  return ret;
}

function except(l1,l2) {
  var o=new Object(), ret=[];
  for (var i in l1) o[l1[i]]=1;
  for (var i in l2) delete(o[l2[i]]);
  for (var i in o) ret.push(i);
  return ret;
}
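Since ES6 there is a real Set type, which avoids the keys-are-always-strings caveat; a sketch of the same operations:

```javascript
// ES6 Set keeps values with their types - no toString() coercion
const unique    = (l)      => [...new Set(l)];
const union     = (l1, l2) => [...new Set([...l1, ...l2])];
const intersect = (l1, l2) => { const s = new Set(l2); return l1.filter(x => s.has(x)); };
const except    = (l1, l2) => { const s = new Set(l2); return l1.filter(x => !s.has(x)); };

unique([1, '1', 1]);              // [1, '1'] - the number and the string stay distinct
union([1, 2], [2, 3]);            // [1, 2, 3]
intersect([1, 2, 3], [2, 3, 4]);  // [2, 3]
except([1, 2, 3], [2]);           // [1, 3]
```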

Array member tests

There has been an includes() with wide support since ~2017 [12]

Before that, you would need checking via iteration. One example is below.

Note that as per javascript's equivalence tests ('1'==1 being true and such), the below may be fuzzier than you want it to be. You can require the type to be the same using the second parameter.

var a = ['1','2'];  // (example data: an array of strings)
a.contains(2);   //true
(1).in(a,true);  //type not identical, so false
(1).in(a);       //no type check, so true


function in_test(ary,stricttype) {
  var me = this.valueOf();  // unbox, so typeof gives 'number'/'string' rather than 'object'
  for (var i=0; i<ary.length; i++) {
    if (ary[i]==me) {
      if (stricttype) return typeof(ary[i])==typeof(me);
      else return true;
    }
  }
  return false;
}
String.prototype.in = in_test;
Number.prototype.in = in_test;
//you could add it on Boolean, Date, but would you use it?

//  and/or
Array.prototype.contains = function(o,stricttype) {
  for (var i=0; i<this.length; i++) {
    if (this[i]==o) {
      if (stricttype) return typeof(this[i])==typeof(o);
      else return true;
    }
  }
  return false;
};

Array iteration

Instead of writing something like

for (var i=0;i<ary.length;i+=1) {

You can write (since around 2011[13])

ary.forEach(element => console.log(element));

or write (since around 2015[14])

for (const element of ary) {


  • forEach is a series of callbacks
  • forEach cannot be stopped with anything other than an exception




Typing is exciting

Object types


  • typeof reports things you tend to use as simple values as their own type:
    • "number", "string", "boolean", "function", and "undefined"
  • things like Array, Date, Math, RegExp, Error, and null are all considered:
    • "object"

You can use this e.g. to make libraries more flexible, by allowing them to take both a DOM element reference and a document ID.
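A sketch of that pattern - resolve and its lookup argument are hypothetical stand-ins; in a browser the lookup would be document.getElementById:

```javascript
// accept either an id string or the object itself
function resolve(e, lookup) {
  if (typeof e === 'string') return lookup(e);  // treat as an id
  return e;                                     // assume it's already the object
}

var fake_doc = { myid: { tag: 'div' } };        // stand-in for the DOM
var by_id = function(id) { return fake_doc[id]; };

resolve('myid', by_id).tag;         // 'div' - looked up by id
resolve(fake_doc.myid, by_id).tag;  // 'div' - passed through as-is
```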

Typing is interesting, because specs

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
There are no integers, only Number, which is always 64-bit floating point.

However, you're fine thinking you have ints in the range of -9007199254740991 to 9007199254740991 (because floating point can by nature store integers exactly within a range), which suits many integer-ish uses.

Outside that range, though, thinking that integers exist will eventually bite you:

9999999999999999 === 10000000000000000

There are very few integer operations - mainly the bitwise operators (&, |, ^, <<, >>, >>>, and ~ for bitwise NOT), which are internally calculated on 32-bit two's-complement integers.

This also makes for some rough (though understandable) edges, e.g.

~1E30 === -1

There are no 32-bit floats. ...weeelll, directly in the type system, anyway.

Since ES6 they exist in the form of Float32Array, one of the typed arrays (and they were widely supported a few years before this [17]), which represents float32 in platform byte order, allowing for faster calculation. [18]

There are a few other details, like ES6 also introducing Math.fround which rounds a 64-bit to the nearest 32-bit float value, meant as a helper when comparing things between Number and values from a Float32Array.

(Also, there's a bunch of types sorta-there via TypedArrays)

BigInt has existed for a while (though apparently not truly widely supported until 2020ish[19]), with syntax like 2n ** 160n. But there is no elegant way to integrate it into the existing type system, so it's an entirely separate thing and you can never mix them with regular Numbers - e.g. 2n + 1 is a TypeError. (BigInts absolutely have very useful uses, but in generic use they are often not worth the bother.)

Enthusiastic coercion means never having to say TypeError:

"str" + 1 is "str1"   
"str" - 1 is NaN      
"0"+1     is "01"     
"0"-1     is -1       lesson: + is overloaded in ways that make it coerce to string more easily,
                      where other operators would more easily convert (which also fails more often)

+[1]      is 1        because unary + is ToNumber, in this case ToNumber(toString(input)) [20]
+[1,2]    is NaN      because ToNumber can't deal with "1,2"

! + []    is true     because that's  unary !,  on unary +, on [],
                      and +[] reduces to ToNumber("")  which is 0,
                      and !0 is implicitly coerced to Boolean true

! + [1,2] is true     because that's  unary !,  on unary + on [],   so it works out as !NaN, which is true

3 - true ==  2       while true!==1  (essentially because Boolean is its own type);
                     true in Number context becomes 1. Which, if you're going to allow it, is fair.

typeof(null)   == "object"         because the specs say so, not for a particularly good reason

typeof(Object) != "object"          ...because:
typeof(Object) == "function"        ...because it's uninstantiated;
                                    (and relates to classes not really existing beyond syntactic sugar)
typeof(new Object()) == "object"    

NaN !== NaN       which is correct according to floating point semantics,
                   because not all reasons for NaN are the same.   This is quite sensible.
NaN != NaN        ...can be considered slightly weirder, given how fuzzy and coercive == often is

The following are more contrived, but still sort of interesting

[] + []  is  ""                  because [].toString() is "" (unlike plain objects)
[] + {}  is  "[object Object]"   

{} + []  is  0                   ...or "[object Object]", see below
[] + {} === {} + []           
{} + {}  is  NaN                 ...or "[object Object][object Object]", see below

Context matters.

For some of the last, what happens is that {}
  • at the start of a statement (in a code block, or on the console) is parsed as an empty code block that is ignored
  • in expression position (e.g. assigned to a variable) is an empty Object

This is contrived in that you wouldn't do that accidentally.

Still, consider:
  • {} + []
    • in code context the {} is a loose code block, leaving +[], and unary + coerces things to numbers: +[] is ToNumber("") (see above), which is 0
    • in expression context {} is an empty object, so this reduces to "[object Object]" + ""
  • {} + {}
    • in code context the whole thing reduces to +{}, which works out as ToNumber("[object Object]"), which doesn't parse, so becomes NaN
    • in expression context it's two empty objects, coerced to strings (I think by the +(verify)), so "[object Object][object Object]"

undefined is interesting in other ways

Array(3) == ',,'              is true (the array coerces to a string), while
Array(3)[0] === undefined     yet
Array(3)[0].toString()        is a TypeError, because you can't dereference on a non-object, yet
{undefined:3}                 (i.e. a key coerced to a string) is allowed somehow,
                              even though I'd expect that to involve toString

{} is object

And because of that, {} isn't really a hashmap, it's setting members on an object.

But the two are sorta kinda the same thing, because member lookup happens to be backed by a hashmap, plus Extra Details™: in particular it implies that keys set via [] are always coerced to strings, which means that some keys become "[object Object]", and numbers may be rounded, which leads to exciting bugs.

And you can't change the way it gets converted to string unless you do it all manually.

Well, at least we have Map now.

See also:

undefined, null and object detection in general


  • having both means you can use
    undefined as 'was declared/bound but never used' (...except you can assign undefined)
    null as 'I've used it but want to use null as a special value'
    which means you may want to check against both values specifically
  • There is an argument that you don't need both. Yes, we've invented meanings for the difference, and they're sometimes handy, but none of them are necessary and sometimes it's just weird. Also, it's unlikely to change, so deal.

  • javascript the core language
    uses undefined in a bunch of places, e.g.
      Array(2) is [undefined, undefined]
      uninitialized function arguments are undefined
      functions that don't explicitly return will return undefined
      referencing object members that do not exist returns undefined
    mostly doesn't use null
  • javascript itself basically never sets null(verify)
    so you can often use it in the sense of "I did get around to setting this, there is just no sensible value for me to have set", or in a "...yet" sense.

  • type-wise,
null is a primitive value; null is not an object
...yet typeof null == 'object' and literally just because the specs say so (you can consider this a misfeature due to historical reasons)
undefined is a primitive value with its own type
  • (that sort of means) "is this unassigned or undeclared" can be done using:
    if (typeof x === 'undefined')
    ...in part because typeof is a special case in that it does not require its argument to exist
    (while if (x === undefined) would be a ReferenceError in that case)
    ...in part because typeof undefined is 'undefined'
    Note that this is not really necessary when asking for object members (e.g. including window.something), because those will be undefined rather than ReferenceErrors
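Demonstrating those differences:

```javascript
var obj = {};

obj.missing === undefined;               // true - member access never throws
typeof never_declared === 'undefined';   // true - typeof tolerates undeclared names
// never_declared === undefined;         // would be a ReferenceError
```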


Bitwise NOT

Unless you understand two's complement, you may have no use for this, other than maybe that ~~x is effectively a shorthand for Math.floor(x) on positive values (strictly, it truncates towards zero). Note there are other things that effectively do that, like x|0, and x>>0.

(and remember JS does not have an integer type at all)

(all more or less the same speed, at least these days)

== versus ===


  • Double-equals: asks "is this equivalent enough?",
    • does type conversion to a common type (this is nontrivial, and involves a bunch of the language specs)
    • then returns true if both are equal as that type
  • triple-equals: is identity
    • does not convert to a common type before testing (e.g. meaning that values with different type can never be equal)
    • so returns true if both have the same type and value - for objects, that they are the same object (verify).
...which can be implicitly so, when the language specs define singleton behaviour.
  • the language specs introduce/hide some funny cases.
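A few cases that summarize the difference:

```javascript
'1' == 1;            // true  - converted to a common type first
'1' === 1;           // false - different types, so never identical
null == undefined;   // true  - one of the specs' special cases
null === undefined;  // false
0 == false;          // true
0 === false;         // false
```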

See also:


syntax tricks


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Default argument values
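ES6 added real default values in the parameter list, used when the argument is omitted (or passed as undefined):

```javascript
function greet(name = 'world') {
  return 'hello ' + name;
}

greet();            // 'hello world'
greet('bob');       // 'hello bob'
greet(undefined);   // 'hello world' - undefined also triggers the default
greet(null);        // 'hello null'  - null does not
```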

rest parameters

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


For example,

function timeprint(...a) {
  console.log((new Date().getTime()/1000.0).toFixed(3), ...a);
}

...or some destructuring tricks.


Destructuring

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Given an object, you can take out specific values. For example[21]

user = {
    id: 42,
    is_verified: true
};
const {id, is_verified} = user;


x = [1, 2, 3, 4, 5];


[y, z] = x        // 1 and 2
[a, , b] = x      // 1 and 3          (ignoring elements)
[a, ...b] = x     // 1 and [2,3,4,5]  ('rest of the array')
[a=5, b=7] = [1]  // 1 and 7          (with defaults)


[, protocol, host, path] = /^(\w+)\:\/\/([^\/]+)\/(.*)$/.exec( some_url )
[a, b] = [b, a]   //swapping
const {length} = "foo";

Since 2017ish [22]

strict mode

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

ES5 introduced a strict mode you can opt into (not doing so is informally called sloppy mode).

Strict mode means stricter handling of certain syntax, throwing exceptions for some things that would previously be silently ignored (enforcing some better coding practices in programmers), and also made it easier to introduce new syntax.

You can declare each script to be in strict mode, by having the string 'use strict'; be the first expression within it.

Imported modules are implicitly in strict mode, even without that.

Anything else is in sloppy mode.
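For example, assigning to an undeclared name is an error in strict mode, where sloppy mode would silently create a global:

```javascript
function stricter() {
  'use strict';
  oops_undeclared = 1;   // throws here; in sloppy mode this would silently create a global
}

try { stricter(); } catch (e) { console.log(e.name); }  // prints 'ReferenceError'
```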




Promises

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Wide browser support since 2015ish [23]

A Promise object mostly represents the bookkeeping of work that may not have finished yet, plus what should happen once it does finish.

I see Promises as a cleaner alternative to asynchronous-style callbacks.

They have a few benefits over them, primarily when you chain more than one. At least, as far as I can tell, a single step of Promise has no upsides over a callback, and is more typing (and under the hood they are implemented much like callbacks)

Its syntax is nice syntactic sugar: it is more composable (it lets you chain them), has clearer semantics, can make code's intent clearer, can be more readable in general, and makes it easier for error handling to be more thorough and/or readable.

There are very few cases of where they are really categorically better, but there are plenty of cases where they are at least somewhat nicer.

And yes, Promises are arguably not very well named, as you may think it's lazy execution (more so if you come from a language where a promise is specifically a model of doing so) - it isn't.

Introduction by example - chaining calls

Say you have an UI that triggers what comes down to:

collect();
work1();   // e.g. some fetching of data and processing what to show
work2();   // e.g. displaying the result

with no interaction, and no data passed (you can, but it keeps the example simpler).

Doing that in three calls in series like that basically will hold up JS until all that is completely done, and probably block the UI from even redrawing between these calls.

Before promises, a basic fix would be to have each function end by scheduling the next, via something like setTimeout - basically just callbacks.

One way to do that would be to hardcode that:

function collect() { console.log('collect, then'); setTimeout(work1,0) }
function   work1() { console.log('work1, then');   setTimeout(work2,0)}
function   work2() { console.log('work2'); }

Because of how setTimeout effectively schedules rather than calls the next bit, this is effectively co-operative multitasking towards the one thread that there is, so yay.
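You can see the "schedules rather than calls" part directly: code after the setTimeout() call runs before the scheduled callback does.

```javascript
const order = [];

order.push('sync 1');
setTimeout( () => order.push('timer'), 0 );  // scheduled for a later event-loop turn, not called now
order.push('sync 2');

// The timer callback has not run yet when we get here:
console.log(order);  // ['sync 1', 'sync 2']
```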

The above is somewhat awkward, and not composable, and it's not clear or controllable from the main call what will happen. When the implementations are one line like that, sure, but in reality each of these will be dozens of lines, probably in different places, and when debugging you'll be chasing code paths around.

You could hand in the callback functions, like

function collect(callback) { console.log('collect, then'); setTimeout(callback,0) }
function work1(callback)   { console.log('work1, then');   setTimeout(callback,0) }
function work2()           { console.log('work2');  }

But how do you call that? It'd be a little too easy to write something like:

collect( work1( work2 ) )

and be temporarily confused about why that's the wrong order (one is a call, one is a parameter), end up on

collect( () => work1( () => work2() ) )

and, whenever you skim past such code, still not entirely sure whether that's correct.

Both of the above are workable, but neither is elegant.

In neither case can you really follow the idea of "write code so that the intent is clear" if you wanted to.

Also, it's likely that you'll write these functions in a way that entangles these functions, rather than cleanly separating responsibilities.

Also, uncaught errors along the way stop the chain - and since the error paths are just as spread around as the code, it's harder to debug and harder to write proper error handling, and if you do write such error handling it may imply further wrapping or entangling - even just to have a single overall error path guaranteed to be called when any part borks.

So if you need to write this sort of chained code, you'll probably start building something to help all these things, like some sort of job handler.

The promise syntax basically implies such a job handler.

Say you have:

function collect() { console.log('collect'); }
function   work1() { console.log('work1');   }
function   work2() { console.log('work2');   }

It lets you write the above something like:

new Promise( function (resolve_func, reject_func) { resolve_func() } ) 
 .then( function()     { collect(); } )
 .then( function()     { work1();  } )
 .then( function()     { work2(); } )
 .catch(function(error){ console.log("error doing this work", error); })
 .finally( function()  { console.log("we finished"); } );

(where finally here does nothing but demonstrate that it exists)

...or, if you like arrow functions,

new Promise( (resolve_func, reject_func) => resolve_func() ) 
 .then(       () => collect() )
 .then(       () => work1() )
 .then(       () => work2() )
 .catch( (error) => { console.log("error doing this work", error); } )
 .finally(    () => { console.log("we finished"); } );

Okay, what did that first line even mean?

In the above example, we started with a do-nothing promise (which just resolves immediately, without a value - well, with undefined), purely to get to the .then() parts - which set up Promises of their own, but with less typing this way.

.then() sets up a new Promise, which is why you hand in functions.
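For the same reason, whatever a .then() callback returns becomes the value of the Promise it sets up, which is what the next .then() receives. A minimal sketch:

```javascript
const chained = Promise.resolve(2)  // a promise that immediately resolves to 2
  .then( v => v * 3 )               // 2*3 = 6; the return value becomes the next promise's value
  .then( v => v + 1 );              // 6+1 = 7

chained.then( v => console.log('result:', v) );  // logs "result: 7"
```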

That gets into the way Promises were designed.

A Promise

  • in a practical sense will center around a value that will be produced at some point - so that you can use it.
  • contains the code that produces that value
  • contains calls to the consuming code (onFulfilled, onRejected; more on this below)
  • keeps track of where in the process it is - a state that starts as pending, and becomes either fulfilled or rejected (you don't use this state directly)

So when something wants to be interactive via Promises, see e.g. the Fetch example below, it will return a Promise object, which can be seen as the bookkeeping of whether that work has finished yet, plus what will happen once it does, and the code that is currently working on fulfilling it.

How do you pass data around?

The first example doesn't actually return or use values, for brevity and because there we didn't care -- we were abusing it only for the chaining.

In a lot of cases, though, you'll be dealing with data.

The value your promise code passes into the resolve or reject function is, well, passed on to the consuming code.

For example,

xhr_promise = new Promise(function(resolve, reject) {
  let req = new XMLHttpRequest();
  req.open('GET', "/");
  req.onload = function() {
    if (req.status == 200) {
      resolve( req.response );
    } else {
      reject(  "File not Found" );
    }
  };
  req.send();
});

xhr_promise.then(
  function(value) { console.log('yay',value);},
  function(error) { console.log('bah',error);}
);


  • If what you return is not a promise (which seems common in people's code), it will be implicitly wrapped in a Promise
basically so that you can always .then()
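A minimal sketch of that wrapping behaviour, assuming nothing beyond built-in Promises:

```javascript
const p = Promise.resolve(1)
  .then( v => v + 1 )                     // returns a plain value: implicitly wrapped in a resolved Promise
  .then( v => Promise.resolve(v * 10) );  // returns a Promise: used as-is, so the chain waits on it

p.then( v => console.log(v) );            // 20
```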

For example, you may have seen examples for the Fetch API (which is built around promises to start with), like:

url = 'https://helpful.knobs-dials.com';
fetch( url )
 .then( function (result) { return result.text() } )
 .then( function (text)   { console.log(text)    } );

Or, if you like arrow functions:

fetch( url ).then( result => result.text() ).then( text => console.log(text) );

It is actually necessary to do that in two then()s - you can't do

fetch( url ).then( function (result) { console.log( result.text() ) } )

If you do, you'll find out why: fetch's Response.text() call (also .json(), .blob()) returns not text but a Promise - so you have to .then() that promise so that, once it resolves, it hands the text over as a value into the function you give that second .then().

Did that sentence seem vague, and are promises sometimes an overdesigned, bullshitty mess? Yes.

Are they useful when applied well? Also yes.

Some uses of Promises

async and await

scoping details

default scope; var, let

From most scopes, the default scope that declarations go to is the global one (which in browsers means 'a member of window').

That gets messy and buggy real fast (and is frankly a mistake in language design, from a time with different needs) which is why people say you should always use var, and now let or const.


  • unqualified declarations are global
  • var
    makes variables local to function scope

So using var in all your functions and libraries was always a good habit.

ES6 added block scope, so added more scoping rules:

  • unqualified declarations are global
  • var
    scopes to the nearest function
  • let
    scopes to the nearest enclosing block
and warns you about redeclaration within the same scope (unlike var)
  • const
    scopes to the nearest enclosing block
and warns you about redeclaration within the same scope
and warns you about reassignment (but see notes below)
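The function-versus-block distinction in a sketch:

```javascript
function scopes() {
  if (true) {
    var f = 'function-scoped';  // hoisted to the whole function
    let b = 'block-scoped';     // exists only inside this if-block
  }
  // typeof is safe to use on names that aren't in scope:
  return [typeof f, typeof b];
}

console.log( scopes() );  // ['string', 'undefined']
```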

This has fired up a few discussions.

One is let is the new var, in the sense that people should default to using let.

This discussion has mostly two camps.

  • One says:
    • let is the same in many cases, and more controlled in others,
    • let is better for people who come from block-scoped languages
  • The other says:
    • there are few real situations where let is actually better
    • you generally shouldn't intentionally write functions where this really matters (using the same variable for distinct things, or using the same name independently, both point to monster functions; and reusing names in one function scope that aren't actually the same variable because of block scope is probably unnecessarily confusing)


  • arguably, the main difference is that the mental model is now more complex, for little benefit, e.g.:
    • without googling: what happens with let at global scope, and why?
    • why is ES6's block-scoped
      allowed globally when block-scoped let is explicitly not, beyond "because specs"?
  • are these fairly small gains worth the mental overhead, and all the office discussions?

Another is that you should use const as much as possible.

Proponents argue that

  • const means you won't accidentally reassign a variable

Others point out that

  • const only prevents reassignment, it does not prevent you from altering objects.
    protecting against just one mistake - and one that matters much less than people think - is probably a bad reason to force people to think about yet another choice


const makes constants at block scope

However, like in many languages, const does not mean immutable.

In ES6 the reference (more specifically the binding) is protected, so you can't assign another object to the variable.

So when what you assigned acts like a primitive, it's effectively a constant.

But if it points to an object, that's as mutable as ever.

The example case for const is typically numbers, because that does work:

const c=3; 
c=4;      // throws a TypeError

While this:

const o={};
o.foo=5;  // is fine, because you're not changing what o is bound to

In other words

  • it can't hurt to define things const when you won't alter them
  • it's still good to signal intent to other coders
  • it's mildly useful to protect you against yourself
e.g. signals accidental redeclaration (like let)
  • also, const is allowed at global scope (as is let, though unlike var neither becomes a property of window)
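If you do want the object itself protected, that's a separate mechanism: Object.freeze() (a shallow freeze; note that in sloppy mode failed writes to a frozen object are silently ignored, while strict mode throws). A sketch:

```javascript
const frozen = Object.freeze({foo: 5});

try {
  frozen.foo = 6;          // ignored in sloppy mode; throws a TypeError in strict mode
} catch (e) { /* strict mode lands here */ }

console.log(frozen.foo);   // 5 either way
```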

Function expressions and IIFEs

Modules and the module "pattern"

importing (and the non/pre-standard ways of doing that)

Text escaping and encoding

(See also Webdev_notes#Escaping)

URL escaping, UTF8 encoding

escape() does the %20%3D thing, but only for ASCII characters. Unicode is encoded in a javascript-specific way (e.g. U+2222 encodes as %u2222) that is non-standard and most servers will not understand.

In javascript 1.5 and later, you have two functions that encode as UTF8 before escaping, which is what most servers with unicode support may expect (see also cp1252, though):

encodeURI(), in which the escape step does not %-escape ":", "/", ";", and "?" so that the result is still formatted/usable as a URL.

encodeURIComponent(), which %-escapes everything. Use this when shoving URLs into URL query parameters.
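The difference, using a made-up URL:

```javascript
const u = 'https://example.com/a b?q=1';  // example.com is just a placeholder

console.log( encodeURI(u) );
// "https://example.com/a%20b?q=1"  - keeps :, /, and ? so the result still works as a URL

console.log( encodeURIComponent(u) );
// "https%3A%2F%2Fexample.com%2Fa%20b%3Fq%3D1"  - escapes everything, safe inside a query parameter
```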

Note that various code has special treatment for URLs in URLs (particularly if they are lone parameters), so you should always check the specs. For example, look at the differences in syndication 'Add to' links.

UTF8 encoding without URL escaping does not exist in Javascript, but consists of simple operations. You could implement your own, or use someone else's, such as this UTF8 encoder and decoder.

other escaping

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

HTML escaping (that is to say, & into &amp; and < into &lt; and > into &gt;) can be done in a few ways. One hack you see now and then is to leverage browser innards by using innerHTML and letting it sanitize the string, then read out the same thing, which is probably faster than a few replaces in a row, but also counts on browser behaviour.

A very basic implementation (which may well be more portable) would be:

function escape_nodetext(s) {
   return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}
In attributes, you would also want to replace ", and possibly '.

function escape_attr(s) {
   return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
           .replace(/"/g, '&#34;').replace(/'/g, '&#39;');
}
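A quick sanity check of the above (repeating the helper so the snippet stands alone):

```javascript
function escape_nodetext(s) {
   return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

console.log( escape_nodetext('a < b & c') );  // "a &lt; b &amp; c"
```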

See also