Javascript notes - syntax and behaviour

Revision as of 14:26, 29 June 2022

Related to web development, lower level hosting, and such: (See also the webdev category)

Lower levels

Server stuff:

Higher levels

function stuff

function parameters


Javascript fills in undefined values for any omitted parameters, so all parameters are optional (also positional only).

So all arguments are optional by (loose) contract.

There is a lot of code that will assume you are calling it with halfway sensible values, and trip over absence of some parameter.

To avoid that, you basically need to test every parameter.

And sometimes that's a proper thing to do anyway, though it also requires more coder knowledge of the typing system. This is one of a handful of reasons typescript isn't a bad idea (though note that TS isn't static/inferential; it works out as a shorter way of writing these checks, because the syntactic sugar is actually transpiled into ES - and subvertible if you really want to).

You could write something along the lines of:

function upper(s, alsoStrip) {
  var r = s.toUpperCase();
  if (alsoStrip) { r = r.trim(); }   // note: JS strings have trim(), not strip()
  return r;
}

In this particular case, undefined, null, false and 0 are all coerced to false. In other cases you may specifically want to have 0 be a value you can pass in, in which case you would probably check against undefined as the 'unspecified' value.

Default argument values
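Since ES6 you can also declare defaults directly in the signature; they apply when an argument is omitted or explicitly undefined (null does not trigger them). A minimal sketch (function name made up):

```javascript
// ES6 default parameter values: used when the argument is omitted
// or explicitly undefined; note that null does NOT trigger them.
function greet(name = "world", punct = "!") {
  return "Hello, " + name + punct;
}

console.log(greet());               // "Hello, world!"
console.log(greet("JS"));           // "Hello, JS!"
console.log(greet(undefined, "?")); // "Hello, world?"  - default name, explicit punct
console.log(greet(null));           // "Hello, null!"   - null is a value, not 'missing'
```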




Using new

You can take a basic function definition, such as:

function sorta_class() {}

and do

thing = new sorta_class()
The use of new means creating a new instantiated Object, in that
  • you use them that way
  • you can use this in that function, which practically makes it a constructor
  • you can add prototypes to that sorta-class - also lets you use this
  • prototype inheritance stuff works

Half of that demonstrated:

function sorta_class(i) { this.i=i }
sorta_class.prototype.alerti = function() { alert(this.i); }
var sc = new sorta_class(5);

Of course, because this is syntax hackery and classes don't really exist, forgetting 'new' is syntactically valid but does something else entirely.

ES6 classes

Since ES6 (~2017 in browsers) you can write that as:

class better_class {
  constructor(i) {
    this.i = i;
  }
  alerti() {
    alert(this.i);
  }
}
after either of those you can do:

thing = new better_class(5);

The difference beyond that syntactic sugar:

  • ES6 gives you super()
  • ES6 doesn't let you forget the use of new
before ES6 you could also directly invoke the thing. It often didn't make much sense, but you could.
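The super() point, sketched (class names made up):

```javascript
class Base {
  constructor(i) { this.i = i; }
  describe() { return "i=" + this.i; }
}

class Derived extends Base {
  constructor(i) {
    super(i * 2);  // must call super() before touching 'this' in a subclass constructor
  }
  describe() {
    return "derived " + super.describe();  // call the overridden parent method
  }
}

console.log(new Derived(3).describe());  // "derived i=6"
```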


Arrow functions

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Wide browser support since 2017ish[1]

It seems arrow functions were intended to address issues with function expressions.

...but most people treat them as a shorter way to write something that mostly acts like a regular function.

For example,

["one","two","three"].map(str => str.length);

is shorthand for

["one","two","three"].map( function (str) {return str.length} );

For another example, sorting becomes a little more readable:

l.sort( (a, b) => b.foo - a.foo )

For another example, chains of promises can be less typing (we often use inline anonymous functions anyway).

Say, instead of

func1().then(  function() {func2();}  ).then(  function() {func3();}  ).done(  function() {finish();}  );

we can write:

func1().then(  () => func2()  ).then(  () => func3()  ).done(  () => finish()  );

For reference: zero, one, and more parameters can be written like:

_     => "Hi!";    // _ is actually one (conventionally ignored) parameter
()    => "Hi!";    // actually zero parameters

a     => "Hi "+a; 

(a,b) => "Hi "+a+" and "+b; 

Keep in mind that arrow functions have differing semantics from regular functions, including:

  • arrow functions do not have their own binding for this
(it refers to the this of the enclosing lexical scope - an outer function, or the global scope (verify))
meaning there are many contexts where you'll now alter something else, and make interesting bugs, e.g.
event handlers
object methods that think they're altering the object
and more, e.g. [2]
There is valid criticism in the form of "Previously we had implicit binding of this that many people didn't quite understand. Now we have two ways of implicit binding of this that many people don't quite understand".
  • can't have this (re)bound with bind(), call(), or apply()
the given this argument is silently ignored, because it doesn't make sense
It doesn't make sense because without its own this, it can't be about its own object.
  • no super
  • no .prototype, no new, can't be constructors
  • no function overloading
though many of us haven't used that so won't miss it
  • can't use yield
  • multiline arrow functions need to use a block, and return
at which point the amount of typing between a multiline arrow function and a regular function becomes negligible
  • no introspectable name, which
    • can make it harder to find the code while debugging
    • can't refer to itself, so no recursion
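The this difference in a nutshell: .call() (and bind/apply) can set this for a regular function, but an arrow function just keeps the this of where it was written:

```javascript
const obj = { n: 42 };

const regular = function () { return this; };
const arrow   = () => this;

// A regular function's 'this' is set by how it is called:
console.log( regular.call(obj) === obj );  // true

// An arrow function ignores the call-time 'this' entirely;
// it keeps whatever 'this' lexically was where it was defined:
console.log( arrow.call(obj) === obj );    // false
```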

That said, there are certainly cases where an arrow function is brief and more readable - and little of the above is an issue, e.g. where the only thing you do is a real call.

Even so, it is also going to lead to more unnecessary arguments and bugs.


Data structure stuff

Array ≈ stack ≈ queue

Given that:

  • push() is 'add to end'
  • pop() is 'take from end'
  • shift() is 'take from start'
  • unshift() is 'add to start'

...you can use an array as a stack using push() and pop()

...and as a queue using push() and shift() (or pop() and unshift())

They're probably not the fastest possible implementations of said data structures, but they can be convenient.
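Those four calls as stack and queue, sketched:

```javascript
// Stack (LIFO): push() and pop() both work at the end.
const stack = [];
stack.push('a');
stack.push('b');
console.log(stack.pop());   // 'b' - last in, first out

// Queue (FIFO): push() adds at the end, shift() takes from the start.
const queue = [];
queue.push('a');
queue.push('b');
console.log(queue.shift()); // 'a' - first in, first out
```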

Also, an Array is not a linked list, so you can't efficiently take things from the middle - but for small arrays it's fine, and the simplest call is probably splice, specifically:



splice will,

  • at a given offset
  • remove a number of elements (if omitted, the rest of the array is removed)
  • insert zero or more given elements

and return an array of the removed elements

splice(start, deletecount)
splice(start, deleteCount, zero, or, more, items)

For example

var a=[1,2,3];
var removed = a.splice(1,1, 9,9,9); // remove one element at offset 1, then insert three items that are the number 9
// a is now [ 1, 9, 9, 9, 3 ]
// removed is [ 2 ]


slice(start,end) returns a subrange of an array - it's a shallow copy, not touching the original

end is exclusive, so e.g. [1,2,3,4,5].slice(2,3) is [3]

Iterating arrays and other things

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

Now that there are more data structures, the most flexible ways of iterating are probably

forEach (since 2011 or so)
acts as a series of callbacks (but synchronous, so mostly in the 'is function calls' sense)
...also meaning it cannot be stopped with anything other than an exception
for..of (since 2015 or so)
can use break


s = new Set(['x','y']);
a = new Array(1,2);
m = new Map();
m.set(1, 'a');
m.set(2, 'b');

for (const v of s)  console.log(v);
for (const v of a)  console.log(v);
// Also note that for Arrays you can get indices like:
for (const [i, v] of a.entries()) { console.log(i, v); }

for (const item of m)  console.log(item);
// the items will be [key,val] Arrays, so you may prefer using destructuring:
for (const [k,v] of m)
  console.log(k, v);

s.forEach( val => console.log(val) );
a.forEach( val => console.log(val) );
m.forEach( (val,key) => console.log(`${key} -> ${val}`) );  // note the (value, key) callback order

For contrast...

We used to often do:

for (var i=0;i<ary.length;i+=1) {

...which is fine on Array, but will not work on Map or Set, which are really made just to work with iterators.

There is also:

for (const element in ary) {

...but it's a little different. It iterates all 'enumerable properties'[3]. On arrays that means indices, while on other objects that means properties.
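The difference demonstrated - for..in walks enumerable property names (including ones you bolt on), for..of walks values:

```javascript
const ary = ['a', 'b'];
ary.extra = 'x';   // arrays are objects too, so properties can be added

const inKeys = [];
for (const k in ary) inKeys.push(k);   // index strings plus the added property
console.log(inKeys);   // [ '0', '1', 'extra' ]

const ofVals = [];
for (const v of ary) ofVals.push(v);   // just the array elements
console.log(ofVals);   // [ 'a', 'b' ]
```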


Map

ES6, wide support since ~2015 [4]

Both keys and values may be objects or almost any type (unlike plain objects, where keys are always toString()'d, because they are really object properties)

m = new Map();
m.set(1, 'a');
m.set(2, 'b');
console.log( m.get(1) );
console.log( m.has(2) );


  • the older, object-style map[key]=val will function, but is really the classical object interface, so stores into object properties and not the actual map
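That caveat demonstrated:

```javascript
const m = new Map();
m.set(1, 'a');

m[1] = 'oops';           // this sets an object property, not a map entry

console.log( m.get(1) ); // 'a'    - the real map entry is untouched
console.log( m.size );   // 1      - the property doesn't count as an entry
console.log( m[1] );     // 'oops' - lives on the object itself
```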



Set

ES6, wide support since ~2015 [5]

let letters = new Set(['a','b','c']);
console.log(letters.size);      // 3
console.log(letters.has('c'));  // true
letters.delete('c');
console.log(letters.has('c'));  // false


Typing is exciting

On numbers

Number is always 64-bit floating point
there are no integers
there are no 32-bit floats

On integers

  • Integer values are represented exactly up to ±2^53 (see Number.MAX_SAFE_INTEGER), so within that range you can treat Numbers as integers.
Outside that range, though, thinking that integers exist will eventually bite you, e.g.
9999999999999999 === 10000000000000000   // true
  • BigInt has existed for a while (though apparently not been truly widely supported until 2020ish[6]), with syntax like
    2n ** 160n
But there is no elegant way to integrate it into the existing type system, so it's an entirely separate thing
and you cannot mix them with regular Numbers, e.g.
1n + 2
is a TypeError
BigInts absolutely have their uses, yet in generic use they are often not worth the bother.
  • Since ES6, there are typed arrays for integers
  • There are very few integer operations.
bitwise operators (like ~ for bitwise NOT) internally convert to 32-bit two's-complement integers.
which also makes for some different rough (though understandable) edges, e.g.
~1E30 === -1
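The integer points above, sketched:

```javascript
// Beyond 2^53, adjacent integers can no longer be told apart:
console.log( 9999999999999999 === 10000000000000000 );  // true
console.log( Number.MAX_SAFE_INTEGER );                 // 9007199254740991, i.e. 2^53 - 1

// BigInt is exact and arbitrarily large...
console.log( (2n ** 64n).toString() );  // "18446744073709551616"

// ...but cannot be mixed with regular Numbers in arithmetic:
try {
  const nope = 1n + 2;
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
```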

On floats

  • Since ES6, float32s are sort of in there - not as first-class citizens in the type system, but in the form of Float32Array, one of the typed arrays (widely supported a few years before ES6 proper [7]), which represents float32 in platform byte order, allowing for faster calculation. [8]
  • ES6 also introduced Math.fround which rounds a 64-bit to the nearest 32-bit float value, meant as a helper when comparing in 32-bit precision, e.g. between Number and values from a Float32Array.

On objects

Enthusiastic coercion means never having to say TypeError

"str" + 1 is "str1"   
"5"+3-3   is "50"                   lesson: + is overloaded in ways that make it coerce to string more easily,
"0"-1     is -1                     ...where other operators would more easily convert
"str" - 1 is NaN                    ...so also more easily fail
+[1]      is 1                      because unary + is ToNumber, in this case ToNumber(toString(input)) [9]
+[1,2]    is NaN                    because ToNumber can't deal with "1,2"

! + []    is true                   because that's  unary !,  on unary + on [],
                                    and +[] reduces to ToNumber(""),  which is 0,
                                    and !0 is coerced to Boolean true

! + [1,2] is true                   because that's  unary !,  on unary + on [1,2],   so it works out as !NaN, 
                                    and !NaN is true. Look, it's gotta be something. Even if it's false in some other languages.

3 - true ==  2                      while true!==1  (because Boolean is its own type),  true==1

typeof(null)   == "object"          because specs says so, not for particularly good reason, as
typeof(Object) != "object"          ...because:
typeof(Object) == "function"        ...because it's uninstantiated;
                                    (and relates to classes not really existing beyond syntactic sugar), yet:
typeof(new Object()) == "object"    because it's instantiated.  (which means less than it does in other languages, but still)

NaN !== NaN                         which is correct according to floating point semantics,
                                      because not all reasons for NaN are the same.   This is quite sensible.
NaN != NaN                          ...can be considered slightly weirder, given how fuzzy and coercive == often is

Array(3)[0] === undefined           having a new array filled with undefined is fair
Array(3)    == ',,'                 so apparently undefined coerces to an empty string - yet
Array(3)[0].toString()              is a TypeError because you can't dereference on undefined - yet
{undefined:3}                       is allowed even though that undefined becomes the string 'undefined'
                                    and you'd expect that to involve toString

The following are more contrived, but still sort of interesting

[] + []  is  ""                     because an Array's toString() is its elements joined with commas - empty here
[] + {}  is  "[object Object]"   

{} + []  is  0                      ...or "[object Object]", see below
[] + {} === {} + []           
{} + {}  is  NaN                    ...or "[object Object][object Object]", see below

Context matters.

For some of the last, what happens is that {}

at the start of a statement (e.g. in a code block, or typed on the console) is parsed as an empty code block, which is ignored
in expression position (e.g. assigned to a variable) is an empty Object

This is contrived in that you wouldn't do that accidentally.

Still, consider
{} + []
in code context the {} is an empty block, so this becomes a loose +[], and unary + coerces things to numbers: ToNumber("") (see above), which is 0.
in expression context {} is an empty object
so this reduces to "[object Object]" + "", i.e. "[object Object]"
{} + {}
in code context the whole thing reduces to +{}, which works out as
ToNumber("[object Object]")
, which doesn't parse
so becomes NaN
in expression context it's two empty objects, coerced to strings (I think by the + (verify))
so "[object Object][object Object]"

See also:

undefined, null and object detection in general


  • having both means you can use
undefined as 'was declared/bound but never set' (...except you can also assign undefined)
null as 'I have used it, but want null as a special value'
which means you may want to check against both values specifically
  • There is an argument that you don't need both. Yes, we've invented meanings for the difference, and they're sometimes handy, but none of them are necessary and sometimes it's just weird. Also, it's unlikely to change, so deal.

  • javascript the core language
uses undefined in a bunch of places, e.g.
Array(2) is [undefined, undefined]
uninitialized function arguments are undefined
functions that don't explicitly return will return undefined
referencing object members that do not exist returns undefined
mostly doesn't use null
  • javascript itself basically never sets null(verify)
so you can often use it in the sense of "I did get around to setting this, there is just no sensible value for me to have set", or in a "...yet" sense.

  • type-wise,
null is a primitive value; null is not an object
...yet typeof null == 'object' and literally just because the specs say so (you can consider this a misfeature due to historical reasons)
undefined is a primitive value with its own type
  • (that sort of means) "is this unassigned or undeclared" can be done using:
if (typeof x === 'undefined')
...in part because typeof is a special case in that it does not require its argument to exist
while if (x === undefined)
would be a ReferenceError in that case
...in part because typeof undefined is 'undefined'
Note that this is not really necessary when asking for Object members (e.g. including window.something), because those will be undefined rather than ReferenceErrors


Bitwise NOT

Unless you understand two's complement, you may have no use for this (and remember JS does not have an integer type at all)

Other than maybe that ~~x is effectively a shorthand for truncating a number towards zero, but that's sort of obscure. Note there are other things that effectively do floor() for positive numbers, like x|0, and Math.trunc(x)

(all more or less the same speed, at least these days)

== versus ===


  • Double-equals: asks "is this equivalent enough?",
    • does type conversion to a common type (this is nontrivial, and involves a bunch of the language specs)
    • then returns true if both are equal as that type
  • triple-equals: strict equality
    • does not convert to a common type before testing (meaning values with different types can never be equal)
    • for primitives, true when the values are equal; for objects, true only when both are the same object (verify)
...which can be implicitly so, where the language specs define singleton behaviour.
  • the language specs introduce/hide some funny cases.
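A few of those cases concretely:

```javascript
console.log( '1' == 1 );            // true  - string is coerced to number first
console.log( '1' === 1 );           // false - different types, no coercion
console.log( null == undefined );   // true  - one of the spec's special cases
console.log( null === undefined );  // false - different types

const o = {};
console.log( {} == {} );            // false - two distinct objects
console.log( o === o );             // true  - identity: same object
```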

See also:


scoping details

default scope; var, let

From most scopes, the default scope that declarations go to is global, which in browsers means 'a member of window'.

That gets messy and buggy real fast (and is frankly a mistake in language design, from a time with different needs).

Which is one reason why people say you should always use var, and now let or const.


  • unqualified declarations are global
  • var
    makes variables local to function scope

So using var in all your functions and libraries was always a good habit.

ES6 added block scope, so added more scoping rules:

  • unqualified declarations are global
  • var
    scopes to the nearest function
  • let
    scopes to the nearest enclosing block
and complains about redeclaration within the same scope (unlike var) - a SyntaxError
  • const
    scopes to the nearest enclosing block
and complains about redeclaration within the same scope
and complains about reassignment - a TypeError (but see notes below)
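The function-vs-block scoping sketched:

```javascript
function scopes() {
  if (true) {
    var v = 1;   // function-scoped: visible anywhere in scopes()
    let l = 2;   // block-scoped: only visible inside this if-block
  }
  console.log(v);          // 1
  console.log(typeof l);   // 'undefined' - no such binding out here
}
scopes();
```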

This has fired up a few discussions.

One is let is the new var, in the sense that people should default to using let.

This discussion has mostly two camps.

  • One says:
    • let is the same in many cases, and more controlled in others,
    • let is better for people who come from block-scoped languages
  • The other says:
    • there are few real situations where let is actually better
    • you generally shouldn't intentionally make functions where this really matters (using the same variable for distinct things, or using the same name independently, both points to monster functions, and reusing names in a function scope that aren't the same variable because of block scope is probably unnecessarily confusing)


  • arguably, the main difference is that the mental model is now more complex, for little benefit, and:
    • without googling: what happens with let at global scope, and why?
    • why do ES6's block-scoped let and const not become properties of window at global scope, while var does, beyond "because specs"?
  • are these fairly small gains worth the mental overhead of thinking, and all the office discussions?

Another is that you should use const as much as possible.

Proponents argue that

  • const means you won't accidentally reassign a variable

Others point out that

  • const only prevents reassignment, it does not prevent you from altering objects.
a feature that protects much less than people think is probably a bad reason to force people to think about yet another choice


const makes constants at block scope

But as in many languages, const does not mean immutable.

In ES6, it's the reference (more specifically the binding) that is protected, so you can't assign another value to the variable later. So when what you assigned acts like a primitive, that value effectively can't be changed.

But if it points to an object, that object is as mutable as ever.

The example case for const is typically numbers, because that does work:

const c=3; 
c=4;       // is a TypeError

While this:

const o={};
o.foo=5;   // is fine, because you're not changing what o is bound to

In other words

  • it can't hurt to define things const when you won't alter them
  • it's still good to signal intent to other coders
  • it's mildly useful to protect you against yourself
e.g. signals accidental redeclaration (like let)
  • also, at global scope const and let declarations do not become properties of window (unlike var)
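If you actually want the object itself to resist change, that's Object.freeze (shallow), not const:

```javascript
'use strict';
const o = Object.freeze({ foo: 1 });

try {
  o.foo = 2;   // throws in strict mode (silently ignored in sloppy mode)
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
console.log(o.foo);  // 1 - unchanged either way
```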

global scope

Template literals, ${}, tagged templates

Since ES6; widespread in browsers since 2016 (except IE)

Template literals refer to using backticks to enclose a string, like
`Foo ${a+b} bar`
which e.g.
  • interprets ${expressions}
can also be nested
  • allows multi-line strings

Tagged templates are sort of parsers of such templates.

Perhaps clearest by example, in that:

function_name`Foo ${1+2} bar ${3+4}`

evaluates as if you wrote:

function_name(['Foo ',' bar',''], 3,7)

That function could be used e.g. to return a variation on what that template itself would output - but since it doesn't need to return a string, you could return anything at all.

There are a few related details to dealing with strings without processing escape sequences (see String.raw).
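A toy tag function (the name shout is made up) that upper-cases the interpolated values, plus String.raw, the built-in tag relevant to the escape-sequence point:

```javascript
// A tag function receives the literal string pieces, and the evaluated values.
function shout(strings, ...values) {
  return strings.reduce(
    (out, s, i) => out + s + (i < values.length ? String(values[i]).toUpperCase() : ''),
    '');
}

const who = 'world';
console.log( shout`hello ${who}!` );   // "hello WORLD!"

// String.raw is a built-in tag that leaves escape sequences unprocessed:
console.log( String.raw`a\nb` );       // "a\nb" - a literal backslash-n, not a newline
```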


rest parameters

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


For example,

function timeprint(...a) {
  console.log((new Date().getTime()/1000.0).toFixed(3), ...a);
}

...or some destructuring tricks.

async, await


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
The async keyword on a function implies that
  • the function returns a Promise, and
  • that you can do async things in it.

Yes, you could write:

function hello1() {
  return Promise.resolve("Hello1");
}

and get a Promise object.

Yet if you wanted to await within that function (which if you write everything async style, you well may), you would need to:

async function hello2() {
  console.log(await hello1());
  return Promise.resolve("Hello2");
}

Note on the 'async function returns a promise' point: if a function is declared async and returns a non-promise, JS will wrap it in a promise, so
return retvalue
inside that function is basically equivalent to
return Promise.resolve(retvalue)
. This lets you write a slightly cleaner-looking function like:
async function hello3() {
  return "Hello3";
}

This is mostly syntactic sugar, to help adapt functions into Promise/async structuring, with less typing.

How do you actually run these?


This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)
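A sketch of the usual options (function names made up): from plain code you consume the returned Promise with .then(); from other async code you just await it; many module environments now also allow top-level await.

```javascript
async function hello() {
  return "hi";   // wrapped into Promise.resolve("hi")
}

// From ordinary, non-async code: use the Promise interface.
hello().then(result => console.log(result));   // "hi"

// From other async code: just await it.
async function main() {
  const result = await hello();
  console.log(result);                         // "hi"
}
main();   // main() itself returns a Promise (ignored here)
```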

Destructuring

ES6, wide browser support since 2017ish [10]

Given an object, you can take out specific values. For example[11]

user = {
    id: 42,
    is_verified: true
};
const {id, is_verified} = user;


x = [1, 2, 3, 4, 5];


    [y, z] = x     // 1 and 2
  [a, , b] = x     // 1 and 3          (ignoring elements)
 [a, ...b] = x     // 1 and [2,3,4,5]  ('rest of the array')
[a=5, b=7] = [1]   // 1 and 7          (with defaults)


[, protocol, host, path] = /^(\w+)\:\/\/([^\/]+)\/(.*)$/.exec( some_url )

[a, b] = [b, a]   //swapping

const {length} = "foo";  // which is a little more magically implicit