== JavaScript fundamentals — the things every JS interview starts with
Before touching type-checking APIs or coercion rules, you need one mental model: where does JavaScript store a value, and what gets copied when you assign it?
The 7 primitives: string, number, bigint, boolean, undefined, null, symbol — plus object, the one non-primitive (reference) type.
When you copy a primitive, you get a completely independent copy. Changing one does NOT affect the other.
```js
let a = 5;
let b = a;
b = 10;
// a is still 5 ✅
```
Arrays, objects, functions, Maps, Sets — all reference types. The variable holds a pointer to data on the heap, not the data itself.
When you copy an object variable, both variables point to the same object in memory.
```js
let a = { x: 5 };
let b = a;
b.x = 99;
// a.x is now 99 ⚠
```
A primitive is like writing a number on a sticky note. If you copy the sticky note, you get your own independent copy — changing yours doesn't affect the original.
An object is like writing a house address on a sticky note. If you copy the note, both copies point to the same house. Knocking down a wall in that house affects everyone holding that address.
JavaScript passes function arguments by value — but for objects, that value is the reference. If a function reassigns its parameter (param = {}), it rebinds the local variable only; the caller's variable is unaffected. Only mutations (param.x = 1) are shared.
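As a quick sketch of this distinction (the function and variable names here are illustrative):

```javascript
function reassign(param) {
  param = { x: 999 }; // rebinds the local variable only; caller unaffected
}

function mutate(param) {
  param.x = 999; // mutates the shared object; caller sees it
}

const obj = { x: 1 };
reassign(obj);
console.log(obj.x); // 1 — reassignment didn't escape the function
mutate(obj);
console.log(obj.x); // 999 — mutation is visible through every reference
```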
| Type | Example values | typeof result | Notes |
|---|---|---|---|
| string | "hello", '', `template` | "string" | Immutable — no method can change the original string |
| number | 42, 3.14, NaN, Infinity | "number" | 64-bit IEEE 754 float. NaN is a number type! |
| bigint | 9007199254740991n | "bigint" | Integers beyond Number.MAX_SAFE_INTEGER |
| boolean | true, false | "boolean" | Only two values |
| undefined | undefined | "undefined" | Variable declared but not assigned |
| null | null | "object" ⚠ | Historical bug — null is a primitive, not an object |
| symbol | Symbol('id') | "symbol" | Unique identifiers; Symbol keys are skipped by for...in and JSON.stringify (ES6) |
| object | {}, [], function(){} | "object" / "function" | Reference type — everything that isn't a primitive |
```js
// ── PRIMITIVES are immutable and compared by VALUE ──
let a = "hello";
let b = a;
b = "world";
console.log(a); // "hello" — unchanged

// String methods return NEW strings — they never mutate
let s = "hello";
s.toUpperCase(); // returns "HELLO" — s is still "hello"
s[0] = "H";      // silently fails in non-strict mode

// ── OBJECTS are mutable and compared by REFERENCE ──
let obj1 = { x: 1 };
let obj2 = obj1;     // copies the reference (address)
obj2.x = 99;
console.log(obj1.x); // 99 — same object!

// ── typeof quirks to memorise ──
typeof null         // "object" ← famous historical bug
typeof []           // "object" ← arrays are objects
typeof function(){} // "function" ← special case for functions
typeof undefined    // "undefined"
typeof Symbol()     // "symbol"
```
Primitives are not objects, yet you can call "hello".toUpperCase(). How? JavaScript temporarily auto-boxes the primitive into its wrapper object (String, Number, Boolean), calls the method, then discards the wrapper.
```js
// What JS does internally when you call a string method:
"hello".toUpperCase();
// → new String("hello").toUpperCase() → discards wrapper → "HELLO"

// Never use wrapper object constructors — they create OBJECTS, not primitives:
typeof new String("hi")     // "object" ← not "string"!
if (new Boolean(false)) { } // the branch RUNS — objects are always truthy ⚠
```
"JavaScript has 7 primitive types — string, number, bigint, boolean, undefined, null, and symbol. Primitives are immutable and compared by value. Objects (arrays, functions, plain objects) are reference types — variables hold a pointer to the data in the heap. One gotcha: typeof null === 'object' is a historical bug — null is a primitive. Another: primitives can call methods because JS temporarily boxes them in wrapper objects."
Type coercion is JavaScript automatically translating a value from one type to another when an operation requires it. Think of it as a translator that makes guesses when you haven't been explicit — sometimes the guess is right, sometimes it's spectacularly wrong.
Explicit coercion = hiring a professional translator you trust (Number("42"), String(99)). You control exactly what happens.
Implicit coercion = JavaScript guessing what you meant. Usually fine, but the rules are non-obvious — which is why this topic comes up in every interview.
```js
// To Number
Number("42")      // 42
Number("")        // 0 ← surprising!
Number(true)      // 1
Number(false)     // 0
Number(null)      // 0 ← surprising!
Number(undefined) // NaN
Number("abc")     // NaN
Number([])        // 0 ← very surprising!
Number([1])       // 1
Number([1,2])     // NaN
parseInt("42px")  // 42 (stops at first non-digit)
+"42"             // 42 (unary + operator)

// To String
String(42)        // "42"
String(null)      // "null"
String(undefined) // "undefined"
(42).toString()   // "42"

// To Boolean
Boolean(0)        // false
Boolean("")       // false
Boolean(null)     // false
Boolean({})       // true ← empty object is truthy!
!!value           // double-negation trick — most common
```
```js
// + prefers STRING concatenation if either operand is a string
"5" + 3    // "53" (number coerced to string)
5 + "3"    // "53"
"5" + true // "5true"
"5" + null // "5null"
[] + []    // "" (both convert to "")
[] + {}    // "[object Object]"
{} + []    // 0 ← {} parsed as an empty BLOCK, not an object!

// - * / prefer NUMBER coercion
"5" - 3    // 2 (string coerced to number)
"5" * "3"  // 15
"abc" - 1  // NaN

// Comparison operators — many gotchas
1 < "2"    // true (string → number)
"a" > "B"  // true (char codes: a=97, B=66)
null > 0   // false
null == 0  // false ← null only == undefined
null >= 0  // true ← bizarre! (null converts to 0 for >=)
```
Best practice: use === for all equality checks to skip coercion entirely. Use explicit conversion functions (Number(), String(), Boolean()) instead of relying on implicit coercion — your future self will thank you.
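To make the recommendation concrete, a small illustrative comparison (the values here are made up):

```javascript
const raw = "42"; // e.g. a value read from a form field

// Implicit — works, but the reader must know the coercion rules
if (raw == 42) { /* "42" is silently coerced to 42 */ }

// Explicit — the intent is visible and === never coerces
const n = Number(raw);
if (n === 42) { /* ... */ }

console.log(Boolean("")); // false — explicit, no surprises
console.log(String(99));  // "99"
```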
"Type coercion is JavaScript automatically converting values between types. Explicit coercion is deliberate — Number('42'), String(99). Implicit coercion is triggered by operators — + prefers string concatenation if either side is a string, while -, *, / prefer numbers. The rule I follow: always use === to avoid the coercion rabbit hole."
| Value | Type | Why falsy? |
|---|---|---|
| false | boolean | Literally false |
| 0 | number | Zero — numeric nothing |
| -0 | number | Negative zero — same as 0 in boolean context |
| 0n | bigint | BigInt zero |
| "" | string | Empty string — absence of characters |
| null | null | Intentional absence of a value |
| undefined | undefined | Variable not assigned |
| NaN | number | Not a valid number |
Everything else is truthy — including empty objects {}, empty arrays [], and the string "false".
```js
// ⚠ These are ALL truthy — they catch people out
if ({}) console.log('truthy');                 // ✅ empty object
if ([]) console.log('truthy');                 // ✅ empty array
if ("false") console.log('truthy');            // ✅ non-empty string
if ("0") console.log('truthy');                // ✅ "0" is truthy but Number("0") is 0 (falsy)
if (new Boolean(false)) console.log('truthy'); // ✅ OBJECTS are always truthy
if (function(){}) console.log('truthy');       // ✅ functions are objects

// Practical consequence — how to check for an empty array:
if (arr)            // ❌ ALWAYS truthy — even if empty
if (arr.length)     // ✅ 0 is falsy, non-zero is truthy
if (arr.length > 0) // ✅ most explicit

// Practical consequence — the "0" string bug:
const input = "0"; // user typed "0"
if (input) { /* runs — "0" is truthy ✅ */ }
if (Number(input)) { /* does NOT run — Number("0") is 0, falsy */ }
```
How do you check a value's type? Compare typeof, instanceof, and Object.prototype.toString.call().

| Method | Best for | Limitations |
|---|---|---|
| typeof x | Primitives (string, number, boolean, undefined, symbol, bigint, function) | typeof null === "object"; can't distinguish array from object |
| x instanceof C | Checking if object was created by a specific constructor / class | Fails across iframes (different realms); doesn't work on primitives |
| Object.prototype.toString.call(x) | Precise type tag for any value including null, arrays, RegExp, Date | Verbose; can be spoofed via Symbol.toStringTag |
```js
// ── typeof ── works perfectly for primitives
typeof "hello"      // "string"
typeof 42           // "number"
typeof true         // "boolean"
typeof undefined    // "undefined"
typeof Symbol()     // "symbol"
typeof 42n          // "bigint"
typeof function(){} // "function" ← special case
typeof {}           // "object"
typeof []           // "object" ← can't distinguish array!
typeof null         // "object" ← the famous bug

// Safe null check using typeof:
if (x !== null && typeof x === "object") { /* it's a real object */ }

// ── instanceof ── checks the prototype chain
[] instanceof Array        // true
[] instanceof Object       // true ← arrays ARE objects
new Date() instanceof Date // true
"hello" instanceof String  // false! ← primitive, not a wrapper object

// ── Object.prototype.toString ── most precise
const typeOf = x => Object.prototype.toString.call(x).slice(8, -1);
typeOf(null)              // "Null"
typeOf(undefined)         // "Undefined"
typeOf([])                // "Array" ← correctly identifies arrays
typeOf({})                // "Object"
typeOf(new Date())        // "Date"
typeOf(/regex/)           // "RegExp"
typeOf(Promise.resolve()) // "Promise"

// ── Practical helpers for daily use ──
const isArray  = x => Array.isArray(x); // best for arrays
const isNull   = x => x === null;       // only way
const isObject = x => typeof x === 'object' && x !== null;
```
Cross-realm gotcha: an array created inside an iframe fails the iframeArray instanceof Array check because each browsing context has its own Array constructor. Array.isArray() works across realms — always prefer it for array checks.
What's the difference between null, undefined, and undeclared? When does each appear?

| | null | undefined | undeclared |
|---|---|---|---|
| Meaning | Intentional absence of a value — explicitly set by the developer | Variable declared but no value assigned yet — JS's default empty | Variable was never declared at all — doesn't exist in scope |
| typeof | "object" (bug) | "undefined" | "undefined" — no error! (safety valve) |
| == null | true | true | ReferenceError if accessed |
| Common cause | API returning no result, resetting a ref | Missing args, uninitialised vars, missing object props | Typos, missing imports, not-yet-declared vars |
```js
// null — intentional emptiness
let selectedUser = null; // nothing selected yet

function findUser(id) {
  const user = db.find(id);
  return user ?? null; // explicit: "found nothing"
}

// undefined — appears automatically
let x;               // undefined — declared, not assigned
function greet(name) {
  console.log(name); // undefined if called with no arg
}
const obj = {};
obj.missing;         // undefined — property doesn't exist
[1,2,3][99];         // undefined — out-of-bounds index

// undeclared — accessing causes a ReferenceError
console.log(notDeclared); // ❌ ReferenceError
typeof notDeclared;       // "undefined" — safe, no error

// Safe pattern to check for undeclared globals:
if (typeof myGlobal !== 'undefined') { /* safe */ }

// Null check — the nullish pattern
if (value == null)       { /* catches BOTH null AND undefined */ }
if (value === null)      { /* only null */ }
if (value === undefined) { /* only undefined */ }
```
Best practice: when you want to represent "no value" intentionally, assign null. Let JS handle undefined naturally — don't assign it manually.
Why is typeof NaN === 'number'? How do you safely detect NaN and -0?

NaN (Not-a-Number) is the result of an invalid numeric operation. It's part of the IEEE 754 floating-point standard and is categorised as a number type — representing a numeric computation that has no meaningful result.
```js
// Ways NaN appears
Number("abc")   // NaN — invalid conversion
0 / 0           // NaN
Math.sqrt(-1)   // NaN — no real square root of a negative
parseInt("xyz") // NaN
undefined + 1   // NaN

// NaN's most bizarre property: it doesn't equal itself
NaN === NaN // false ← NaN is the only value not equal to itself
NaN !== NaN // true

// ❌ Wrong way to detect NaN:
value === NaN       // always false — even for NaN itself!

// ✅ Correct ways to detect NaN:
isNaN(value)        // true for NaN, BUT coerces first! isNaN("abc") → true
Number.isNaN(value) // ✅ best: no coercion. Number.isNaN("abc") → false
value !== value     // true only for NaN (exploits the self-inequality)
```
```js
// -0 exists and behaves strangely
const negZero = -0;

// Most operations treat -0 the same as 0
-0 === 0           // true ← === can't distinguish them!
-0 > 0             // false
String(-0)         // "0" ← loses the sign
JSON.stringify(-0) // "0" ← loses the sign

// ✅ Ways to detect -0:
Object.is(-0, 0)     // false ← Object.is has no coercion, distinguishes -0
Object.is(-0, -0)    // true
1 / -0 === -Infinity // true ← dividing by -0 gives -Infinity

// Object.is — the "same value equality" algorithm
Object.is(NaN, NaN) // true ← NaN equals NaN here (unlike ===)
Object.is(-0, 0)    // false ← -0 ≠ +0 here (unlike ===)
// Otherwise identical to ===
```
"NaN is typed as 'number' because it represents an invalid floating-point result per IEEE 754 — it's inside the number type domain. The key oddity is NaN !== NaN, which means you can't use === to check for it. Always use Number.isNaN(), not the older isNaN() which coerces first. For -0, use Object.is(value, -0) — the only reliable way since -0 === 0 is true."
Why does {} === {} return false? How does reference equality work?

Every time you write {}, JavaScript creates a new object at a new memory address. The === operator for objects doesn't compare their contents — it compares their memory addresses. Two different objects, even with identical contents, live at different addresses.
```js
// {} === {} is like asking "is the house at 12 Oak St === the house at 14 Oak St"
{} === {} // false — different addresses
[] === [] // false — same reason

// Equality only holds when pointing to THE SAME object
const a = { x: 1 };
const b = a; // b holds the same address as a
a === b      // true — same reference

// Value comparison of objects — write your own or use a library
const deepEqual = (a, b) => JSON.stringify(a) === JSON.stringify(b);
// quick but has limits (undefined, functions)

// In tests: Jest's toEqual() does deep structural comparison
expect({ a: 1 }).toEqual({ a: 1 }); // ✅ passes
expect({ a: 1 }).toBe({ a: 1 });    // ❌ fails (reference check)

// React's useState — same reference = no re-render
const [items, setItems] = useState([]);
setItems(items);      // ❌ same reference — no re-render
setItems([...items]); // ✅ new reference — triggers re-render
```
Think of an object as an ice cube tray holding ice cubes (its nested objects). Shallow copy = copying the tray but reusing the same ice cubes. The top level is independent, but nested items are still shared — changing a nested object affects both copies.
Deep copy = making a completely new tray with completely new ice cubes. No shared references anywhere — fully independent.
```js
const original = { a: 1, nested: { x: 10 } };

// Spread operator — shallow copy
const copy1 = { ...original };

// Object.assign — shallow copy
const copy2 = Object.assign({}, original);

// Both are SHALLOW — the nested object is still shared
copy1.a = 99;        // ✅ independent — original.a is still 1
copy1.nested.x = 99; // ⚠ original.nested.x also becomes 99!

// Array shallow copies
const arr = [1, 2, 3];
const arrCopy  = [...arr];    // spread
const arrCopy2 = arr.slice(); // slice with no args
const arrCopy3 = Array.from(arr);
```
| Method | Works for | Limitations |
|---|---|---|
| structuredClone(obj) | Objects, arrays, Date, Map, Set, RegExp, ArrayBuffer | Cannot clone functions, DOM nodes, class instances with methods |
| JSON.parse(JSON.stringify(obj)) | Plain objects and arrays with JSON-compatible values | Loses undefined, functions, Symbol, Date becomes string, circular ref crashes |
| lodash.cloneDeep(obj) | Almost everything including class instances | External dependency |
| Custom recursive function | Full control | Must handle edge cases manually |
```js
const obj = {
  name: 'Alice',
  scores: [90, 85],
  date: new Date(),
  fn: () => 'hello'
};

// structuredClone — the modern standard (Node 17+, modern browsers)
// ⚠ it THROWS (DataCloneError) on functions, so strip fn first:
const { fn, ...cloneable } = obj;
const deep1 = structuredClone(cloneable);
deep1.scores.push(100); // cloneable.scores unaffected ✅ — and date survives as a Date

// JSON.parse/stringify — quick but lossy
const deep2 = JSON.parse(JSON.stringify(obj));
// obj.date → string, obj.fn → silently dropped
```
Best practice: use structuredClone() as the default deep-copy solution — it handles Date, Map, Set, and ArrayBuffer correctly and is native (no dependencies). Fall back to lodash.cloneDeep only when you need to clone class instances with methods.
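The table's last row mentions a custom recursive function; here is a minimal sketch of one, assuming you only need plain objects, arrays, and Date (no Map, Set, class instances, or circular references):

```javascript
// Minimal recursive deep clone — a sketch, not production-ready:
// handles primitives, plain objects, arrays, and Date only
function deepClone(value) {
  if (value === null || typeof value !== 'object') return value; // primitives pass through
  if (value instanceof Date) return new Date(value.getTime());   // fresh Date instance
  if (Array.isArray(value)) return value.map(deepClone);         // recurse into arrays
  const out = {};
  for (const key of Object.keys(value)) {
    out[key] = deepClone(value[key]); // recurse into own enumerable props
  }
  return out;
}

const src = { a: 1, nested: { x: 10 }, when: new Date(0) };
const copy = deepClone(src);
copy.nested.x = 99;
console.log(src.nested.x); // 10 — fully independent
```

The edge cases this skips (circular references, Map/Set, prototypes) are exactly why structuredClone or lodash.cloneDeep are preferred in practice.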
What is a Symbol? What problem does it solve? Show practical use cases.

A Symbol is a guaranteed-unique primitive value. Every call to Symbol() creates a new, distinct value that will never equal any other Symbol or any other value — even if created with the same description string.
This solves the key collision problem: adding properties to objects you don't own without risk of overwriting existing properties.
```js
// ── Basic: every Symbol is unique ──
const s1 = Symbol('id');
const s2 = Symbol('id');
s1 === s2 // false — the description is just a label, not the value

// ── Use case 1: unique object keys (avoid naming collisions) ──
const ID = Symbol('id');
const user = { [ID]: 123, name: 'Alice' };
user[ID] // 123 — only accessible via the Symbol reference
user.id  // undefined — the string "id" key doesn't exist

// Symbol keys are NOT enumerable in for...in or Object.keys()
Object.keys(user)                  // ['name'] — Symbol key hidden
JSON.stringify(user)               // '{"name":"Alice"}' — Symbol stripped
Object.getOwnPropertySymbols(user) // [Symbol(id)]

// ── Use case 2: private-like class fields (pre-#fields era) ──
const _secret = Symbol('secret');
class Vault {
  constructor(val) { this[_secret] = val; }
  reveal() { return this[_secret]; }
}

// ── Use case 3: well-known Symbols — hook into JS internals ──
class Range {
  constructor(start, end) { this.start = start; this.end = end; }
  [Symbol.iterator]() { // makes Range iterable!
    let cur = this.start;
    const end = this.end;
    return {
      next: () => cur <= end
        ? { value: cur++, done: false }
        : { done: true }
    };
  }
}
[...new Range(1, 5)] // [1, 2, 3, 4, 5]

// ── Symbol.for — global symbol registry (shared across files) ──
const g1 = Symbol.for('shared'); // creates or retrieves from the registry
const g2 = Symbol.for('shared');
g1 === g2 // true — same registry entry
```
"Symbol is a primitive that guarantees uniqueness — two Symbol() calls with the same description are never equal. The main use cases are: collision-free object keys (they don't appear in for...in or JSON.stringify), and well-known Symbols that hook into language internals like Symbol.iterator to make objects iterable, Symbol.toPrimitive to control coercion, or Symbol.hasInstance to customise instanceof."
How does the abstract equality algorithm (==) work? Explain: [] == false, null == undefined, "0" == false.

When you write x == y, JavaScript follows a specific set of rules defined in the ECMAScript spec. Knowing these rules lets you predict any == result without memorising every case.
The == algorithm, simplified:

1. Same types → use the === rules directly (no coercion needed)
2. null == undefined → true. These two — and only these two — are equal to each other via ==
3. Number vs string → convert the string to a number: 1 == "1" → 1 == 1 → true
4. Boolean involved → convert the boolean to a number first: false→0, true→1
5. Object vs primitive → convert the object via ToPrimitive (.valueOf() then .toString())
6. Anything else → false (null only equals undefined)

```js
// ── Case 1: null == undefined ──
null == undefined // true — spec rule: these two are always equal via ==
null == 0         // false — null only == undefined, nothing else
null == ""        // false
null == false     // false ← catches many people out

// ── Case 2: "0" == false ──
"0" == false
// Step 4: boolean involved → convert false to number → 0
// Now: "0" == 0
// Step 3: number vs string → convert "0" to number → 0
// Now: 0 == 0 → true ✅
// But: if ("0") → TRUE (non-empty string is truthy)
// Paradox: "0" is truthy, yet "0" == false is true!

// ── Case 3: [] == false ──
[] == false
// Step 4: boolean → 0 → [] == 0
// Step 5: object vs primitive → [].valueOf() → [] (not primitive)
//         → [].toString() → "" → "" == 0
// Step 3: string vs number → Number("") → 0 → 0 == 0 → true

// ── Case 4: [] == ![] ── (famous interview trick)
[] == ![]
// ![] → false (arrays are objects, objects are truthy, so ![] is false)
// Now: [] == false → true (same as Case 3)
// So: [] == ![] is TRUE! Both sides coerce to 0.

// ── Summary of common == cases ──
"" == 0           // true (""→0)
"1" == true       // true (true→1, "1"→1)
"0" == false      // true (false→0, "0"→0)
null == undefined // true (spec rule)
NaN == NaN        // false (NaN is never equal to anything)
```
Best practice: the only legitimate use of == is the value == null check (it catches both null and undefined in one comparison). For all other comparisons, use === and perform explicit conversions when needed.
"The == algorithm coerces types before comparing. The key rules are: booleans are converted to numbers first (false→0, true→1), strings are converted to numbers when compared against numbers, and objects call ToPrimitive (valueOf, then toString) when compared to primitives. The special case is null == undefined which is always true by spec. The infamous [] == ![] is true because both sides end up as 0. In production, I use === everywhere except for the null-check shorthand x == null."
Every piece of JavaScript code runs inside a scope — a boundary that determines which variables are visible. Understanding scope answers three questions that trip up most developers:
- Why can a function see variables from outside it? → Lexical scope
- Why does `var x` seem to exist before its line? → Hoisting
- Why does an inner function remember variables after its outer function returns? → Closures
Imagine scopes as nested rooms with one-way glass windows. An inner room can always see out to its parent rooms (and their parent rooms, all the way to the global room). But outer rooms cannot see into inner rooms.
Lexical scope means the room layout is decided by where you write the code, not by where or how the function is called.
- Global scope — top-level, accessible everywhere
- Function scope — created per function call, `var` lives here
- Block scope — created per `{ }` block, `let`/`const` live here
JavaScript uses lexical (static) scope — the scope of a variable is determined by where it is written in the source code. Dynamic scope (used by some older languages) would determine scope by the call stack at runtime. JS does NOT do this — except for this, which is dynamically bound.
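A small sketch of that contrast (the names here are illustrative): variable lookup follows the written nesting, while this is bound by the call site.

```javascript
const label = "outer";

const obj = {
  label: "obj",
  show() {
    // 'this' is DYNAMIC: bound by HOW the function is called
    return this?.label;
  },
};

console.log(obj.show()); // "obj" — called as a method, this === obj

const bare = obj.show;
console.log(bare()); // undefined — detached call, this is no longer obj

function lexical() {
  // Variable lookup is LEXICAL: 'label' resolves to where this function
  // was written (the enclosing scope), not to the caller's scope
  return label;
}
console.log(lexical()); // "outer" — regardless of who calls it
```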
How does var hoisting differ from function declarations and let/const?

Before running your code, the JavaScript engine makes a first pass through each scope to register all declarations. This is hoisting — declarations are "moved to the top" of their scope conceptually. Only declarations are hoisted, not initialisations.
Before class starts, the teacher scans the attendance list and notes all enrolled students (declarations). When class begins, students who haven't arrived yet are marked "present but empty seat" (undefined for var) or "seat reserved, can't sit yet" (TDZ for let/const).
```js
// What you write:
console.log(x); // undefined (NOT a ReferenceError)
var x = 5;
console.log(x); // 5

// What the engine sees (conceptually):
var x;          // ← declaration hoisted to top of scope, initialised to undefined
console.log(x); // undefined
x = 5;          // ← assignment stays in place
console.log(x); // 5

// var is function-scoped — a block {} does NOT create a new scope for var
function demo() {
  if (true) {
    var leaky = 'I escape the block';
  }
  console.log(leaky); // 'I escape the block' — var leaked out of the if block
}
```
```js
// Function declaration: fully hoisted — can call BEFORE the definition
greet('Alice'); // ✅ Works! Output: "Hello, Alice"
function greet(name) { console.log('Hello, ' + name); }

// Function EXPRESSION — NOT hoisted (only the var declaration is)
sayHi(); // ❌ TypeError: sayHi is not a function
var sayHi = function() { console.log('hi'); };
// var sayHi is hoisted as undefined, then called → TypeError

// Arrow function assigned to const — also NOT callable early (TDZ)
sayBye(); // ❌ ReferenceError: Cannot access 'sayBye' before initialization
const sayBye = () => console.log('bye');
```
```js
// let and const ARE hoisted (block-scoped), but accessing them before the
// declaration throws a ReferenceError — the Temporal Dead Zone
console.log(y); // ❌ ReferenceError: Cannot access 'y' before initialization
let y = 10;

// Proof that let IS hoisted (not just "not hoisted"):
let x = 'global';
{
  console.log(x);  // ❌ ReferenceError — NOT 'global'!
  let x = 'block'; // the block-scoped x IS hoisted, creating a TDZ from the block start
}
// If let weren't hoisted at all, console.log(x) would print 'global'
// The ReferenceError proves it was hoisted but in the TDZ
```
| Declaration | Hoisted? | Initialised to | Scope |
|---|---|---|---|
| var x | ✅ Yes | undefined | Function |
| function f(){} | ✅ Yes (fully) | The function itself | Function |
| let x | ✅ Yes (TDZ) | Uninitialized — ReferenceError if accessed | Block |
| const x | ✅ Yes (TDZ) | Uninitialized — ReferenceError if accessed | Block |
| class C{} | ✅ Yes (TDZ) | Uninitialized — ReferenceError if accessed | Block |
"Hoisting is the engine's first-pass registration of declarations before code runs. var is hoisted and initialised to undefined — which is why accessing it before the line gives undefined not an error. Function declarations are fully hoisted — callable before they appear. let and const are also hoisted but placed in the Temporal Dead Zone — accessing them before the declaration line throws a ReferenceError, which is actually better behaviour because it catches bugs."
The Temporal Dead Zone is the period between the start of a block scope and the actual let/const/class declaration line. During this zone, the variable exists in the scope (is hoisted) but is not yet initialised — accessing it throws a ReferenceError.
```js
{
  // ← TDZ for 'name' STARTS here (block start)
  console.log(typeof name); // ❌ ReferenceError — in TDZ
  console.log(name);        // ❌ ReferenceError — in TDZ

  let name = 'Alice';       // ← TDZ ENDS here — name is now initialised
  console.log(name);        // ✅ 'Alice'
}

// typeof does NOT protect you from the TDZ (unlike undeclared variables)
console.log(typeof notDeclared); // "undefined" — safe for undeclared
console.log(typeof tdzVar);      // ❌ ReferenceError — TDZ overrides typeof safety
let tdzVar = 1;
```
With var, reading a variable before its assignment silently returns undefined — hiding bugs. The TDZ was a deliberate design decision to catch these bugs at runtime with a clear error rather than silent undefined behaviour.
```js
// With var — silent, confusing bug:
function badSetup() {
  init(config); // config is undefined — silent wrong behaviour
  var config = { debug: true };
}

// With let — explicit error, immediately obvious:
function goodSetup() {
  init(config); // ❌ ReferenceError — caught immediately
  let config = { debug: true };
}
```
What are the differences between var, let, and const? When should you use each?

| Feature | var | let | const |
|---|---|---|---|
| Scope | Function | Block | Block |
| Hoisted | Yes → undefined | Yes → TDZ | Yes → TDZ |
| Re-declarable in same scope | ✅ Yes | ❌ SyntaxError | ❌ SyntaxError |
| Re-assignable | ✅ Yes | ✅ Yes | ❌ TypeError |
| Global object property | Yes (window.x in browsers) | No | No |
| In for loops | One shared variable — the classic bug | New binding per iteration | New binding per iteration (can't reassign) |
```js
// const means the BINDING is constant — not the VALUE
const user = { name: 'Alice' };
user.name = 'Bob'; // ✅ mutating the object is fine
user = {};         // ❌ TypeError: reassigning the binding is not allowed

const arr = [1, 2, 3];
arr.push(4); // ✅ mutating the array is fine
arr = [];    // ❌ TypeError

// To make an object truly immutable:
const frozen = Object.freeze({ x: 1 });
frozen.x = 99; // silently ignored (throws in strict mode)
```
Best practices:

- Use const by default — for everything that shouldn't be reassigned. Objects and arrays declared with const can still be mutated.
- Use let when you know the variable will be reassigned (loop counters, accumulated values, flags).
- Avoid var in new code. Its function scope and silent hoisting cause hard-to-find bugs. It exists only for legacy compatibility.

What is lexical scope?

Lexical scope means a function's scope is determined by where it is defined in the source code — not where it is called from. The "lexical" part refers to the text/source, as opposed to the runtime call stack.
```js
const city = 'London'; // global scope

function outer() {
  const country = 'UK'; // outer function scope

  function inner() {
    const street = 'Baker St'; // inner function scope
    console.log(street);  // ✅ own scope
    console.log(country); // ✅ parent scope (closure)
    console.log(city);    // ✅ grandparent scope
  }

  inner();
  // console.log(street); ← ❌ ReferenceError — outer can't see into inner
}

// Lexical: inner's scope is decided at write-time, not call-time
function getCountry() {
  const country = 'France'; // this scope is irrelevant to inner()
  inner(); // inner still sees UK, not France
}
```
```js
let value = 'global';

function demo() {
  let value = 'function'; // shadows the outer 'value'
  console.log(value);     // 'function' — inner variable wins
  {
    let value = 'block';  // shadows further
    console.log(value);   // 'block'
  }
  console.log(value);     // 'function' — block scope gone
}
console.log(value);       // 'global' — outer untouched

// ⚠ Shadowing can cause bugs — use unique, descriptive names in nested scopes
```
When the engine encounters a variable name, it searches for it using this chain: current scope → parent scope → grandparent scope → … → global scope. The first match wins. If nothing is found in global scope, it throws a ReferenceError.
```js
const a = 'global-a';

function level1() {
  const b = 'level1-b';

  function level2() {
    const c = 'level2-c';

    function level3() {
      // Scope chain lookup for each variable:
      console.log(c); // found in level2 scope ✅
      console.log(b); // not in level3 or level2 → found in level1 ✅
      console.log(a); // not in level3/2/1 → found in global ✅
      console.log(d); // not found anywhere → ReferenceError ❌
    }
    level3();
  }
  level2();
}

// The scope chain is fixed at write time — it is the closure environment
// The engine walks UP the chain, never DOWN
```
A closure is a function bundled together with its lexical environment — the variables that were in scope when the function was defined. The function "closes over" those variables and can access them even after the outer function has returned.
When an inner function is created, it gets a backpack. Into this backpack go references to all the variables from its surrounding scope. When the outer function finishes and its stack frame is gone, the inner function still carries its backpack — the closed-over variables live on in memory as long as the inner function exists.
```js
function makeCounter() {
  let count = 0; // this variable lives in makeCounter's scope
  return function increment() {
    count++; // closes over 'count' — still accessible after makeCounter returns
    return count;
  };
}

const counter = makeCounter(); // makeCounter's stack frame is gone...
counter(); // 1 — but count is still alive in the closure!
counter(); // 2
counter(); // 3

// Each call to makeCounter creates a NEW closure with its OWN count
const counter2 = makeCounter();
counter2(); // 1 — independent from counter
```
```js
function makeAdder(x) {
  return (y) => x + y; // closes over x (the parameter)
}
const add5 = makeAdder(5);
const add10 = makeAdder(10);
add5(3);  // 8 — x is 5 in this closure
add10(3); // 13 — x is 10 in this closure

// By reference — if the outer variable changes, the closure sees it
function makeLogger() {
  let msg = 'initial';
  const log = () => console.log(msg);
  const update = (s) => { msg = s; };
  return { log, update };
}
const logger = makeLogger();
logger.log();             // 'initial'
logger.update('changed');
logger.log();             // 'changed' — both functions share the same msg reference
```
"A closure is a function plus its lexical environment — the variables that were in scope when it was defined. When an inner function is returned, it carries references to its outer variables even after the outer function's stack frame is gone. Key points: closures capture by reference (not value), each function call creates a new closure with its own copies of the captured variables, and closures are the mechanism behind module patterns, memoization, and React hooks."
```js
// ❌ Expected: clicks print 0, 1, 2, 3, 4
// Actual: every click prints 5
for (var i = 0; i < 5; i++) {
  document.getElementById('btn-' + i).addEventListener('click', function() {
    console.log(i); // all 5 callbacks close over the SAME 'i'
  });
}
// By the time any button is clicked, the loop is done and i === 5
// All callbacks share ONE var i — they all see the final value
```
```js
// ✅ let creates a new 'i' for EACH loop iteration
for (let i = 0; i < 5; i++) {
  document.getElementById('btn-' + i).addEventListener('click', function() {
    console.log(i); // each callback captures its own i: 0, 1, 2, 3, 4
  });
}
// This is the modern, clean fix — just switch var to let
```
```javascript
// ✅ IIFE captures i as a parameter (fresh copy each time)
for (var i = 0; i < 5; i++) {
  (function(j) {
    // j is a new parameter — a snapshot of i at this iteration
    document.getElementById('btn-' + j).addEventListener('click', function() {
      console.log(j); // j is 0, 1, 2, 3, 4 — captured by value in IIFE param
    });
  })(i); // ← pass current i as argument immediately
}
```
```javascript
// ✅ bind pre-fills i as first argument — captures current value
for (var i = 0; i < 5; i++) {
  document.getElementById('btn-' + i).addEventListener('click',
    (function(j) { console.log(j); }).bind(null, i)
  );
}

// ── Summary ──
// Root cause: var has function scope — ALL iterations share ONE variable
// let fix:  creates a new block-scoped binding per iteration ← use this
// IIFE fix: creates a new function scope + captures via parameter
// bind fix: pre-fills the current i value as an argument
```
```javascript
function memoize(fn) {
  const cache = new Map(); // closed over — persists across calls
  return function(...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

const expensiveFib = memoize(function fib(n) {
  return n <= 1 ? n : fib(n - 1) + fib(n - 2);
});
expensiveFib(40); // computed once
expensiveFib(40); // instant — returned from cache
```
```javascript
function createBankAccount(initialBalance) {
  let balance = initialBalance; // private — no direct outside access
  return {
    deposit(amount) {
      if (amount > 0) balance += amount;
    },
    withdraw(amount) {
      if (amount > balance) throw new Error('Insufficient funds');
      balance -= amount;
    },
    getBalance() {
      return balance;
    }
  };
}

const account = createBankAccount(100);
account.deposit(50);
console.log(account.getBalance()); // 150
console.log(account.balance);      // undefined — balance is truly private
```
```javascript
// Partial application — pre-fill some arguments
function multiply(x, y) { return x * y; }
const double = (y) => multiply(2, y); // closes over 2
const triple = (y) => multiply(3, y); // closes over 3
double(5); // 10

// React — closure captures the current item in a list render
function ItemList({ items, onDelete }) {
  return items.map((item) => (
    <button onClick={() => onDelete(item.id)}> {/* closure over item */}
      Delete {item.name}
    </button>
  ));
}

// React useState — the updater function closes over state
const [count, setCount] = useState(0);
// ⚠ Stale closure trap — count may be stale in async callbacks:
setTimeout(() => setCount(count + 1), 1000);  // captured stale count
setTimeout(() => setCount(c => c + 1), 1000); // ✅ updater form — always fresh
```
An IIFE (Immediately Invoked Function Expression) is a function defined and called at the same moment. The wrapping () turns the function keyword from a declaration into an expression, making it callable immediately.
```javascript
// Classic IIFE syntax
(function() {
  var secret = 'hidden'; // scoped to the IIFE — never pollutes global
  console.log('runs immediately');
})();

// Arrow function IIFE (ES6+)
(() => {
  console.log('also runs immediately');
})();

// IIFE with parameters
(function(name) {
  console.log('Hello, ' + name);
})('Alice');

// Returning a value from an IIFE
const result = (function() {
  return 42;
})();
```
```javascript
// Pre-ES6 — all scripts shared one global namespace
// library-a.js and library-b.js both declare 'utils' — collision!
var utils = { ... }; // overwritten if another file does the same

// Fix: wrap everything in an IIFE — creates private scope
(function() {
  var utils = { ... };    // scoped here — no collision
  window.MyLib = { ... }; // only expose what's needed
})();
```
```javascript
// 1. Async IIFE — top-level await without a module
(async () => {
  const data = await fetchData();
  console.log(data);
})();

// 2. Initialisation logic that runs once with no leftover variables
const config = (() => {
  const env = process.env.NODE_ENV;
  return {
    debug: env !== 'production',
    apiUrl: env === 'production'
      ? 'https://api.prod.com'
      : 'https://api.dev.com'
  };
})();
// env is not accessible outside — only config is
```
```javascript
const UserModule = (function() {
  // Private — not accessible from outside
  const _users = [];
  let _nextId = 1;
  function _validate(name) {
    return typeof name === 'string' && name.length > 0;
  }

  // Public API — the returned object is all that's exposed
  return {
    addUser(name) {
      if (!_validate(name)) throw new Error('Invalid name');
      _users.push({ id: _nextId++, name });
    },
    getUsers() {
      return [..._users]; // return copy — protect internal array
    },
    count() {
      return _users.length;
    }
  };
})();

UserModule.addUser('Alice');
console.log(UserModule.count()); // 1
console.log(UserModule._users);  // undefined — truly private
```
| Feature | Module Pattern (IIFE) | ES6 Modules (import/export) |
|---|---|---|
| Privacy | Variables in IIFE scope — closure-based | File-level scope — not exported = private |
| Syntax | Verbose function wrapping | Clean import/export |
| Loading | Synchronous, inline | Async, browser-native, tree-shakeable |
| Circular deps | Manual management | Handled by the module system |
| Use today | Legacy code, quick init blocks | All new projects |
"The Module Pattern is the closure-based predecessor to ES6 modules. It uses an IIFE to create a private scope and returns a public API object. The pattern still appears in legacy codebases and is worth knowing because it demonstrates exactly how closures enable encapsulation. In modern code, ES6 modules handle this natively — each module file has its own scope and you explicitly export what you want to expose."
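For comparison, here is what the same encapsulation looks like with file-level module scope. This is a sketch — the file names (`userModule.js`, `consumer.js`) and the exact API are illustrative, not from the original text:

```javascript
// userModule.js — module scope is private by default
const _users = [];  // not exported → private, no IIFE needed
let _nextId = 1;

export function addUser(name) {
  if (typeof name !== 'string' || name.length === 0) {
    throw new Error('Invalid name');
  }
  _users.push({ id: _nextId++, name });
}

export function getUsers() {
  return [..._users]; // return a copy, like the IIFE version
}

// consumer.js
// import { addUser, getUsers } from './userModule.js';
// addUser('Alice');
// getUsers(); // [{ id: 1, name: 'Alice' }]
```

The privacy mechanism is the same idea — anything not explicitly exposed is unreachable — but the module system provides the scope instead of a closure.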
Mastering `this` binding, call/apply/bind, currying,
and pure functions will put you in the top tier of JS interviewees. These concepts
underpin React hooks, functional programming, and professional API design.
Functions and the `this` mystery — two ideas power everything in this segment:
In JavaScript, a function is just a value — like a number or string. You can store it in a variable, pass it to another function, return it from a function, or put it in an array. This single fact enables higher-order functions, callbacks, closures, and functional patterns.
```javascript
array.push(fn); // store it in a data structure
otherFn(fn);    // pass it to another function
return fn;      // return it from a function
```
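A minimal runnable sketch of those capabilities — store a function, pass it, and return it (all names here are illustrative):

```javascript
// A function is a value like any other
const shout = (s) => s.toUpperCase() + '!';

// 1. Store it in a variable / data structure
const handlers = [shout];

// 2. Pass it to another function (a higher-order function)
function applyTwice(fn, value) {
  return fn(fn(value));
}

// 3. Return it from a function
function makeRepeater(fn) {
  return (value) => fn(value) + ' ' + fn(value);
}

console.log(handlers[0]('hi'));                // "HI!"
console.log(applyTwice((s) => s + '!', 'hi')); // "hi!!"
console.log(makeRepeater(shout)('hi'));        // "HI! HI!"
```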
`this` is decided at call-time, not write-time. Unlike scope (which is lexical — decided where the code is written), `this` is dynamic — decided by how the function is called. There are exactly four rules that determine it, and knowing them in order of priority solves every `this` puzzle.
```javascript
fn()         // this = global/undefined
obj.fn()     // this = obj
fn.call(x)   // this = x
new Fn()     // this = new object
```
Think of a function as an actor and `this` as the character they're playing right now. The same actor (function) can play different characters depending on which stage (call site) they're performing on. Arrow functions are method actors — they stay in character from their surrounding scene and can never play a different role.
- `map`, `filter`, `reduce` are classic examples of higher-order functions.
- `this` is determined by the call site, not the definition site.
- Losing `this` when passing a method as a callback — the most common `this` bug.

| Feature | Declaration | Expression | Arrow |
|---|---|---|---|
| Hoisted | ✅ Fully (callable before definition) | ❌ Only the var (TDZ for let/const) | ❌ Only the var (TDZ for let/const) |
| Own `this` | ✅ Yes — dynamic | ✅ Yes — dynamic | ❌ No — inherits lexically |
| `arguments` object | ✅ Yes | ✅ Yes | ❌ No — use rest `...args` |
| Used as constructor | ✅ Yes (new fn()) | ✅ Yes | ❌ TypeError with new |
| `prototype` property | ✅ Yes | ✅ Yes | ❌ No |
| Implicit return | ❌ No | ❌ No | ✅ Yes (single expression) |
| Named in stack traces | ✅ Always | ✅ If named expression | ⚠ Limited (inferred) |
```javascript
// ── 1. Function Declaration ──
greet('Alice'); // ✅ hoisted — works before definition
function greet(name) {
  return `Hello, ${name}`;
}

// ── 2. Function Expression ──
const add = function(a, b) { // anonymous
  return a + b;
};
const factorial = function fact(n) { // named expression — name is usable inside
  return n <= 1 ? 1 : n * fact(n - 1); // 'fact' accessible here
};

// ── 3. Arrow Function ──
const multiply = (a, b) => a * b; // implicit return
const square = n => n * n;        // single param — no parens needed
const getObj = () => ({ x: 1 });  // wrap object literal in () to avoid ambiguity
const process = (x) => {          // block body — explicit return needed
  const result = x * 2;
  return result;
};
```
- Object methods — the arrow function's `this` won't be the object
- Constructors — arrow functions can't be called with `new`
- Event handlers that need `this` — `this` won't be the element
- Prototype methods — no `prototype` property
How do arrow functions treat `this`? When does this cause bugs?

Regular functions get a new `this` binding at each call — determined by the call site. Arrow functions capture `this` from their enclosing lexical scope at the time they are created, and it can never be changed — not by `call()`, `apply()`, `bind()`, or `new`.
A regular function's this is like a compass needle — it always points toward whoever is calling it right now. An arrow function's this is like a compass that was soldered in place the moment it was created — it always points in the same direction, regardless of who holds it or calls it.
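The "soldered compass" claim is easy to verify. A small sketch (object and property names are illustrative):

```javascript
'use strict';

const obj = {
  name: 'outer',
  makeArrow() {
    // Arrow created inside a method — captures this = obj, permanently
    return () => this.name;
  },
  makeRegular() {
    return function() { return this && this.name; };
  }
};

const arrow = obj.makeArrow();
const regular = obj.makeRegular();

arrow();                         // 'outer' — lexically captured
arrow.call({ name: 'other' });   // still 'outer' — call() cannot rebind it
arrow.bind({ name: 'other' })(); // still 'outer' — bind() is ignored too

regular.call({ name: 'other' }); // 'other' — regular functions rebind fine
```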
```javascript
function Timer() {
  this.seconds = 0;

  // ❌ Pre-ES6 problem: this inside setTimeout is NOT the Timer instance
  setInterval(function() {
    this.seconds++; // 'this' is window/undefined — bug!
  }, 1000);

  // Old fix: capture this in a variable
  const self = this;
  setInterval(function() {
    self.seconds++; // ✅ 'self' is captured from the outer scope
  }, 1000);

  // ✅ Modern fix: arrow function — 'this' is captured from Timer constructor
  setInterval(() => {
    this.seconds++; // 'this' = Timer instance — correct!
  }, 1000);
}
```
```javascript
const user = {
  name: 'Alice',
  // ❌ Arrow function — 'this' is NOT the user object
  greetArrow: () => {
    console.log(`Hello, ${this.name}`); // this = window/undefined
  },
  // ✅ Regular function — 'this' is the user object when called as method
  greetRegular() {
    console.log(`Hello, ${this.name}`); // 'Alice'
  }
};
user.greetArrow();   // "Hello, undefined"
user.greetRegular(); // "Hello, Alice"

// Arrow functions DO work great as callbacks inside methods:
const counter = {
  count: 0,
  nums: [1, 2, 3],
  sum() {
    // 'this' here is counter (method call)
    this.nums.forEach(n => {
      this.count += n; // ✅ arrow inherits 'this' from sum() — correct!
    });
  }
};
```
The `this` Keyword

What are the four rules of `this` binding? What is the priority order?

1. `new` binding — called with `new`: `this` = the newly created object
2. Explicit binding — called with `.call()`, `.apply()`, or `.bind()`: `this` = first argument
3. Implicit binding — called as a method `obj.fn()`: `this` = the object before the dot
4. Default binding — plain call `fn()`: `this` = `window` (global) or `undefined` in strict mode

```javascript
function showThis() { console.log(this); }
showThis(); // window (browser) / global (Node) — non-strict
            // undefined — in strict mode or ES modules

// Implicit binding LOST — a very common bug
const obj = {
  name: 'Obj',
  greet() { console.log(this.name); }
};
const detached = obj.greet; // copied the function reference — no obj context
detached();                 // undefined — default binding, not obj
setTimeout(obj.greet, 100); // also loses context — passed as callback
```
```javascript
const person = {
  name: 'Alice',
  greet() { console.log(this.name); }
};
person.greet(); // 'Alice' — this = person (object before the dot)

// Only the LAST object in the chain matters
const a = {
  b: {
    name: 'nested',
    fn() { console.log(this.name); }
  }
};
a.b.fn(); // 'nested' — this = b, not a
```
```javascript
function introduce(greeting, punctuation) {
  console.log(`${greeting}, I'm ${this.name}${punctuation}`);
}
const alice = { name: 'Alice' };

introduce.call(alice, 'Hi', '!');       // call — args spread out: "Hi, I'm Alice!"
introduce.apply(alice, ['Hello', '.']); // apply — args as array: "Hello, I'm Alice."

const boundFn = introduce.bind(alice, 'Hey'); // bind — returns NEW function, this fixed
boundFn('?'); // "Hey, I'm Alice?"
```
```javascript
function Dog(name) {
  this.name = name; // 'this' = the newly created object
}
const rex = new Dog('Rex');
console.log(rex.name); // 'Rex'

// new always beats explicit binding (highest priority):
function Cat(name) { this.name = name; }
const BoundCat = Cat.bind({ name: 'overridden' });
const fluffy = new BoundCat('Fluffy');
console.log(fluffy.name); // 'Fluffy' — new wins over bind
```
"To find this, I look at the call site and ask four questions in order: 1) Is the function called with new? → this = new object. 2) Is it called with call/apply/bind? → this = first argument. 3) Is it called as a method (obj.fn())? → this = that object. 4) Otherwise? → this = global (or undefined in strict mode). Arrow functions skip all rules — they always inherit this from their surrounding lexical scope."
What is `this` in each context — global, class, setTimeout, event handler, and strict mode?

```javascript
// 1. Global scope
console.log(this); // window (browser) / {} or module.exports (Node CJS)

// 2. Regular function (non-strict)
function fn() { return this; }
fn(); // window / global

// 3. Regular function (strict mode)
'use strict';
function strictFn() { return this; }
strictFn(); // undefined ← strict prevents global pollution

// 4. Class — always strict mode internally
class Car {
  constructor() { this.speed = 0; }  // this = new Car instance
  accelerate() { this.speed += 10; } // this = the calling instance
}
const car = new Car();
const accel = car.accelerate;
accel();          // ❌ TypeError: Cannot read properties of undefined (class = strict)
car.accelerate(); // ✅ this = car instance

// 5. setTimeout — callback is detached (loses context)
class Clock {
  constructor() {
    setTimeout(function() {
      console.log(this); // ❌ window/undefined — function detached
    }, 100);
    setTimeout(() => {
      console.log(this); // ✅ Clock instance — arrow inherits lexical this
    }, 100);
  }
}

// 6. DOM event handler
button.addEventListener('click', function() {
  console.log(this); // ✅ the button element
});
button.addEventListener('click', () => {
  console.log(this); // ❌ window — arrow doesn't get element as this
});
```
Explain `call`, `apply`, and `bind` in depth. What are their practical use cases?

| Method | Calls function? | Arguments | Returns |
|---|---|---|---|
| fn.call(ctx, a, b) | ✅ Immediately | Spread individually | Function's return value |
| fn.apply(ctx, [a,b]) | ✅ Immediately | As array | Function's return value |
| fn.bind(ctx, a, b) | ❌ Not yet | Pre-filled (partial application) | New bound function |
```javascript
// Borrow Array methods for array-like objects
function logArgs() {
  // 'arguments' is array-like but NOT a real Array
  const arr = Array.prototype.slice.call(arguments); // borrow slice from Array
  console.log(arr.join(', '));
}
logArgs(1, 2, 3); // "1, 2, 3"

// Modern equivalent:
function logArgs2(...args) {
  console.log(args.join(', '));
}

// Borrow a method from one object for another
const dog = {
  name: 'Rex',
  speak() { return `${this.name} says woof`; }
};
const cat = { name: 'Mimi' };
console.log(dog.speak.call(cat)); // "Mimi says woof" — cat borrows dog's method
```
```javascript
const nums = [5, 3, 8, 1, 9, 2];

// apply spreads an array as individual arguments
Math.max.apply(null, nums); // 9 — old way
Math.max(...nums);          // 9 — modern spread (preferred)
```
```javascript
// Fix lost context — pass method as callback
class Button {
  constructor(label) { this.label = label; }
  handleClick() { console.log('Clicked:', this.label); }
}
const btn = new Button('Submit');
document.addEventListener('click', btn.handleClick);           // ❌ this lost
document.addEventListener('click', btn.handleClick.bind(btn)); // ✅ this = btn
document.addEventListener('click', () => btn.handleClick());   // ✅ arrow wrapper

// Partial application with bind — pre-fill first arguments
function multiply(x, y) { return x * y; }
const double = multiply.bind(null, 2); // null = don't care about this, x=2 fixed
const triple = multiply.bind(null, 3);
double(5); // 10
triple(5); // 15
```
What is a higher-order function? Implement `map`, `filter`, and `reduce` from scratch.

A higher-order function (HOF) is a function that either:
- Takes one or more functions as arguments (callbacks), OR
- Returns a function as its result
This is only possible because functions are first-class values. map, filter, reduce, forEach, setTimeout, addEventListener are all HOFs.
```javascript
// ── map: transform every element, return new array of same length ──
Array.prototype.myMap = function(callbackFn) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    if (i in this) { // skip holes in sparse arrays
      result.push(callbackFn(this[i], i, this));
    }
  }
  return result;
};

// ── filter: keep elements where callback returns truthy ──
Array.prototype.myFilter = function(callbackFn) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    if (i in this && callbackFn(this[i], i, this)) {
      result.push(this[i]);
    }
  }
  return result;
};

// ── reduce: fold array into a single accumulated value ──
Array.prototype.myReduce = function(callbackFn, initialValue) {
  let acc, startIndex;
  if (arguments.length >= 2) {
    acc = initialValue;
    startIndex = 0;
  } else {
    if (this.length === 0) throw new TypeError('Reduce of empty array with no initial value');
    acc = this[0];
    startIndex = 1;
  }
  for (let i = startIndex; i < this.length; i++) {
    if (i in this) {
      acc = callbackFn(acc, this[i], i, this);
    }
  }
  return acc;
};

// Usage and verification
[1, 2, 3].myMap(x => x * 2);                   // [2, 4, 6]
[1, 2, 3, 4].myFilter(x => x % 2 === 0);       // [2, 4]
[1, 2, 3, 4].myReduce((acc, x) => acc + x, 0); // 10
```
```javascript
const nums = [1, 2, 3, 4, 5];

// map via reduce
nums.reduce((acc, x) => [...acc, x * 2], []); // [2, 4, 6, 8, 10]

// filter via reduce
nums.reduce((acc, x) => x % 2 ? [...acc, x] : acc, []); // [1, 3, 5]

// group by — a common interview question
const people = [
  { name: 'Alice', dept: 'eng' },
  { name: 'Bob', dept: 'eng' },
  { name: 'Carol', dept: 'mkt' },
];
people.reduce((groups, person) => {
  (groups[person.dept] ??= []).push(person.name);
  return groups;
}, {}); // { eng: ['Alice','Bob'], mkt: ['Carol'] }
```
What is currying? Implement a generic `curry()` function.

Currying transforms a function that takes multiple arguments into a chain of functions that each take exactly one argument. Named after mathematician Haskell Curry.
Partial application pre-fills some arguments of a function, returning a new function that takes the remaining arguments. The result doesn't have to be unary.
Full call: order('pizza', 'large', 'pepperoni')
Curried: order('pizza')('large')('pepperoni') — every decision is a separate step
Partial: orderPizza = order('pizza') — you've chosen "pizza", later call orderPizza('large', 'pepperoni')
```javascript
// ── Manual currying ──
const add = (a) => (b) => (c) => a + b + c;
add(1)(2)(3); // 6

const add1 = add(1);      // partial: a=1 fixed
const add1and2 = add1(2); // partial: a=1, b=2 fixed
add1and2(10);             // 13

// ── Generic curry() — works on any function ──
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      // Enough arguments — call the original function
      return fn.apply(this, args);
    }
    // Not enough — return a function collecting more arguments
    return function(...moreArgs) {
      return curried.apply(this, args.concat(moreArgs));
    };
  };
}

const sum = (a, b, c) => a + b + c;
const curriedSum = curry(sum);
curriedSum(1, 2, 3); // 6 — all at once
curriedSum(1)(2)(3); // 6 — one at a time
curriedSum(1, 2)(3); // 6 — mixed

const addTo5 = curriedSum(2, 3); // partial — returns fn waiting for c
addTo5(10); // 15

// Real world: curried validators, formatters, event loggers
const log = curry((level, msg) => console.log(`[${level}] ${msg}`));
const info = log('INFO');
const error = log('ERROR');
info('Server started');      // [INFO] Server started
error('DB connection lost'); // [ERROR] DB connection lost
```
- Deterministic — same input always produces the same output, no matter when, how many times, or in what context it's called.
- No side effects — does not modify anything outside its own scope: no network calls, no DOM mutations, no modifying its arguments, no reading `Date.now()` or `Math.random()`.
```javascript
// ✅ PURE — same input, same output, no external changes
const add = (a, b) => a + b;
const double = (arr) => arr.map(x => x * 2); // returns new array, doesn't mutate
const getFullName = (user) => `${user.first} ${user.last}`;

// ❌ IMPURE — mutates argument
function addItem(arr, item) {
  arr.push(item); // mutates the original array
  return arr;
}
// ✅ Pure version:
const addItemPure = (arr, item) => [...arr, item]; // new array

// ❌ IMPURE — depends on external state
let taxRate = 0.2;
const getPrice = (base) => base * (1 + taxRate); // depends on outer taxRate
// ✅ Pure version:
const getPricePure = (base, taxRate) => base * (1 + taxRate); // inject the dependency

// ❌ IMPURE — non-deterministic
const getRandomId = () => Math.random(); // different output every call
const getTimestamp = () => Date.now();   // depends on external time
```
```javascript
// ❌ Impure React component — mutates a variable outside the component
let guestCount = 0;
function Guest() {
  guestCount++; // side effect — React may call this multiple times in StrictMode
  return <p>Guest #{guestCount}</p>;
}

// ✅ Pure React component — same props = same output, no external mutations
function Guest({ number }) {
  return <p>Guest #{number}</p>;
}

// Benefits of pure functions:
// 1. Memoization: React.memo, useMemo — safely skip re-renders
// 2. Concurrent mode: React can pause/resume/retry renders safely
// 3. Testing: no setup/teardown — just call the function and check the result
// 4. Time-travel debugging: replay state = replay same output
```
What is the `arguments` object? How do rest parameters differ, and when do you use each?

```javascript
function oldSum() {
  console.log(arguments);                // Arguments [1, 2, 3] — array-LIKE object
  console.log(Array.isArray(arguments)); // false — NOT a real array!

  // Can't use array methods directly:
  // arguments.map(...); // ❌ TypeError — no map method

  // Must convert to array first:
  const args = Array.from(arguments);                  // ES6 way
  const args2 = [...arguments];                        // spread way
  const args3 = Array.prototype.slice.call(arguments); // old way
  return args.reduce((a, b) => a + b, 0);
}
oldSum(1, 2, 3); // 6

// arguments is NOT available in arrow functions!
const arrowFn = () => console.log(arguments); // ReferenceError or outer arguments
```
```javascript
// ✅ Rest parameters — a real Array, works in any function type
const sum = (...nums) => nums.reduce((a, b) => a + b, 0);
sum(1, 2, 3, 4); // 10

// Rest must be the last parameter — collects REMAINING args
function logWithPrefix(prefix, ...messages) {
  messages.forEach(msg => console.log(`[${prefix}] ${msg}`));
}
logWithPrefix('INFO', 'Started', 'Ready', 'Listening');

// ── Comparison ──
// arguments: array-like, no arrow support, contains ALL args, has .callee/.caller
// rest ...x : real Array, works everywhere, collects only named remainder
// Always prefer rest parameters in new code
```
What does `new` do step-by-step? Implement your own `myNew()` function.

What `new` does — 4 steps:
1. Create a brand-new empty object `{}`
2. Set its `[[Prototype]]` to the constructor's `prototype` property: `obj.__proto__ = Fn.prototype`
3. Bind `this` to the new object, then call the constructor function
4. Return the new object — unless the constructor explicitly returns a different object

```javascript
function myNew(Constructor, ...args) {
  // Step 1 + 2: Create object linked to Constructor.prototype
  const obj = Object.create(Constructor.prototype);
  // Step 3: Call constructor with new object as 'this'
  const returnVal = Constructor.apply(obj, args);
  // Step 4: Return object unless constructor returned a different object
  return returnVal instanceof Object ? returnVal : obj;
}

// ── Verify it works ──
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.greet = function() {
  return `Hi, I'm ${this.name}`;
};

const p1 = new Person('Alice', 30);
const p2 = myNew(Person, 'Alice', 30);
p2.greet();                                     // "Hi, I'm Alice" ✅
p2 instanceof Person;                           // true ✅
Object.getPrototypeOf(p2) === Person.prototype; // true ✅

// ── The return-value rule — when constructor returns an object ──
function Tricky() {
  this.x = 1;
  return { y: 2 }; // returns a DIFFERENT object
}
const t = new Tricky();
console.log(t); // { y: 2 } — constructor's returned object wins

// But if constructor returns a primitive, it's ignored:
function ReturnNum() {
  this.x = 1;
  return 42;
}
new ReturnNum(); // { x: 1 } — primitive return ignored
```
"When you call new Fn(args), four things happen: a new empty object is created, its [[Prototype]] is linked to Fn.prototype (so it inherits methods), this inside the constructor refers to that new object, and finally the new object is returned — unless the constructor explicitly returns a different object. Arrow functions can't be constructors because they have no prototype property and no own this."
This chapter covers how `instanceof` works, how methods are shared without copying, why `Object.create(null)` is special, and how to build inheritance patterns that interviewers love to probe.
Most OOP languages copy methods into each object (classical inheritance). JavaScript does something fundamentally different: objects delegate to other objects. When a property is not found on an object, the engine walks up a chain of linked objects until it finds it or reaches the end.
Imagine asking your local library for a book. If they don't have it, they check the regional library. If that fails, the national library. If no one has it, the answer is "not found" (undefined). Each library is a prototype in the chain — you never copy the book, you just know where to look next.
Objects don't own methods — they borrow them from the prototype chain at lookup time. This is why a thousand array instances don't each carry their own copy of map and filter — they all share one copy living on Array.prototype.
```
{ x: 1 }  (plain object)
  ↓ [[Prototype]]
Object.prototype
  ↓ [[Prototype]]
null ← end of chain

[1, 2, 3]  (array)
  ↓ [[Prototype]]
Array.prototype (map, filter…)
  ↓ [[Prototype]]
Object.prototype
  ↓ [[Prototype]]
null
```
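The chains above can be verified directly in a console — a quick sketch confirming that arrays share one `map` rather than each owning a copy:

```javascript
const a = [1, 2, 3];
const b = [4, 5, 6];

// Neither array OWNS a map method — both delegate to Array.prototype
console.log(a.hasOwnProperty('map'));       // false — not an own property
console.log(a.map === b.map);               // true — the exact same function object
console.log(a.map === Array.prototype.map); // true — one shared copy

// The chain, link by link:
console.log(Object.getPrototypeOf(a) === Array.prototype);                // true
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true
console.log(Object.getPrototypeOf(Object.prototype));                     // null — end
```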
- Own properties live directly on the object — `hasOwnProperty()` checks for these.
- The chain always ends at `null`.

When you access `obj.prop`, the engine follows this exact sequence:
1. Check whether `prop` is an own property of `obj` → return it if found
2. Follow `obj.[[Prototype]]` to its prototype object → check for `prop` there
3. Repeat until `prop` is found or `[[Prototype]]` is `null`
4. If `null` is reached without finding it → return `undefined`

```javascript
const animal = {
  breathe() { return 'inhale... exhale'; }
};
const dog = Object.create(animal); // dog's [[Prototype]] = animal
dog.name = 'Rex';                  // own property
const rex = Object.create(dog);    // rex's [[Prototype]] = dog

// Lookup: rex.name
// Step 1: rex has no own 'name' property
// Step 2: walk to dog → dog has own 'name' = 'Rex' ✅
console.log(rex.name); // 'Rex'

// Lookup: rex.breathe
// Step 1: rex has no 'breathe'
// Step 2: dog has no 'breathe'
// Step 3: animal has 'breathe' ✅
console.log(rex.breathe()); // 'inhale... exhale'

// Lookup: rex.fly
// Chain: rex → dog → animal → Object.prototype → null
// Not found anywhere → undefined
console.log(rex.fly); // undefined

// Visualise the chain:
console.log(Object.getPrototypeOf(rex) === dog);                 // true
console.log(Object.getPrototypeOf(dog) === animal);              // true
console.log(Object.getPrototypeOf(animal) === Object.prototype); // true
console.log(Object.getPrototypeOf(Object.prototype));            // null
```
```javascript
const proto = { x: 1 };
const child = Object.create(proto);

// GET: walks the chain
console.log(child.x); // 1 — found on proto

// SET: always creates an OWN property on the object — never modifies prototype
child.x = 99;
console.log(child.x); // 99 — own property now shadows the prototype
console.log(proto.x); // 1 — prototype untouched

// This shadowing is important: writing never pollutes the prototype chain
```
What is the difference between `__proto__`, `.prototype`, and `Object.getPrototypeOf()`?

- `__proto__` — an accessor property on every object instance that exposes the hidden `[[Prototype]]` slot. Deprecated but universally supported. Use `Object.getPrototypeOf()` instead.
- `Fn.prototype` — a property that exists only on function objects. When you call `new Fn()`, the newly created object's `[[Prototype]]` is set to `Fn.prototype`. It is NOT the prototype of the function itself.
- `Object.getPrototypeOf(obj)` — the standard, non-deprecated API to read an object's `[[Prototype]]`. Use this in production code.

```javascript
function Animal(name) { this.name = name; }
Animal.prototype.speak = function() { return `${this.name} speaks`; };
const cat = new Animal('Mimi');

// ── __proto__ (deprecated) — the [[Prototype]] of the INSTANCE ──
cat.__proto__ === Animal.prototype; // true

// ── Object.getPrototypeOf (preferred) — same thing, correct API ──
Object.getPrototypeOf(cat) === Animal.prototype; // true

// ── Fn.prototype — only on FUNCTION objects ──
// Animal.prototype is the object that becomes instances' [[Prototype]]
console.log(Animal.prototype); // { speak: [Function], constructor: Animal }
console.log(Animal.prototype.constructor === Animal); // true

// ── Common confusion: prototype of the function ITSELF ──
Object.getPrototypeOf(Animal) === Function.prototype; // true
// Animal.__proto__  = Function.prototype (because Animal is a function object)
// Animal.prototype  = the shared object for instances (separate thing!)

// Mental map:
// cat.__proto__ === Animal.prototype                (instance → constructor's prototype)
// Animal.__proto__ === Function.prototype           (function → Function's prototype)
// Function.prototype.__proto__ === Object.prototype (chain continues)
```
Performance note: mutating a prototype with `obj.__proto__ = newProto` or `Object.setPrototypeOf(obj, newProto)` at runtime destroys V8's hidden-class optimisation for that object and makes all subsequent property accesses on it slower. Build the prototype chain at object creation time using `Object.create()`.
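Both approaches produce identical behaviour — the difference is purely which path the engine optimises. A sketch contrasting the two (names are illustrative):

```javascript
const methods = {
  greet() { return 'hi, ' + this.name; }
};

// ✅ Prototype fixed at creation time — engine-friendly
const good = Object.create(methods);
good.name = 'Alice';

// ❌ Prototype swapped at runtime — deoptimises property access on 'bad'
const bad = { name: 'Bob' };
Object.setPrototypeOf(bad, methods); // works, but slow path from here on

console.log(good.greet()); // 'hi, Alice'
console.log(bad.greet());  // 'hi, Bob' — same behaviour, worse performance
```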
How does `Object.create()` work? How do you use it for prototypal inheritance?

`Object.create(proto)` creates a brand-new object whose `[[Prototype]]` is set to `proto`. It's the purest way to set up prototype relationships — no constructor function required.
```javascript
// Implementing Object.create from scratch (to understand it)
function myCreate(proto) {
  function F() {}      // temp constructor
  F.prototype = proto; // set its prototype to desired proto
  return new F();      // new instance inherits from proto
}

// ── Basic usage ──
const vehicleProto = {
  describe() { return `I am a ${this.type} with ${this.wheels} wheels`; }
};
const car = Object.create(vehicleProto);
car.type = 'car';
car.wheels = 4;
car.describe(); // "I am a car with 4 wheels"

// ── Prototypal inheritance chain ──
const animal = {
  init(name) { this.name = name; return this; },
  eat() { return `${this.name} eats`; }
};
const dog = Object.create(animal); // dog inherits from animal
dog.bark = function() { return `${this.name} barks`; };

const rex = Object.create(dog).init('Rex'); // rex inherits from dog
rex.eat();  // "Rex eats" — from animal
rex.bark(); // "Rex barks" — from dog

// Object.create(null) — object with NO prototype (covered in Q10)
const pureMap = Object.create(null);
console.log(pureMap.toString); // undefined — no inherited methods at all
```
| | Object.create(proto) | new Constructor() |
|---|---|---|
| Needs constructor? | ❌ No | ✅ Yes |
| Sets [[Prototype]] | ✅ Explicitly to proto | ✅ To Constructor.prototype |
| Runs init code | ❌ Manual (call .init()) | ✅ Constructor body runs |
| Best for | Pure prototypal inheritance, prototype delegation patterns | Constructor pattern, ES6 classes |
Explain property descriptors: `writable`, `enumerable`, `configurable`, and accessor descriptors.

When you write `obj.x = 1`, JavaScript doesn't just store the value — it stores a full property descriptor: an object with flags controlling how the property behaves. By default all flags are `true` for assignment-created properties.
- `writable: false` — the value cannot be changed. Attempts to reassign silently fail (or throw in strict mode).
- `enumerable: false` — the property is hidden from `for...in`, `Object.keys()`, and `JSON.stringify()`.
- `configurable: false` — the descriptor itself cannot be changed and the property cannot be deleted. One-way door.
- Accessor descriptors — `get` fires on read, `set` fires on write.

```javascript
const obj = {};

// Data descriptor — value + writable + enumerable + configurable
Object.defineProperty(obj, 'id', {
  value: 42,
  writable: false,    // cannot reassign
  enumerable: false,  // hidden from for...in / Object.keys
  configurable: false // cannot delete or redefine
});

obj.id = 99;         // silently fails (throws in strict mode)
console.log(obj.id); // 42 — unchanged
Object.keys(obj);    // [] — 'id' not enumerable
delete obj.id;       // false — not configurable

// Inspect a descriptor
Object.getOwnPropertyDescriptor(obj, 'id');
// { value: 42, writable: false, enumerable: false, configurable: false }

// Default descriptor for assignment-created properties (ALL true):
const plain = { x: 1 };
Object.getOwnPropertyDescriptor(plain, 'x');
// { value: 1, writable: true, enumerable: true, configurable: true }
```
```javascript
const person = { _firstName: 'Alice', _lastName: 'Smith' };

Object.defineProperty(person, 'fullName', {
  get() { return `${this._firstName} ${this._lastName}`; },
  set(val) { [this._firstName, this._lastName] = val.split(' '); },
  enumerable: true,
  configurable: true
});

console.log(person.fullName);   // 'Alice Smith' — triggers get()
person.fullName = 'Bob Jones';  // triggers set()
console.log(person._firstName); // 'Bob'

// Class syntax equivalent (shorthand for Object.defineProperty under the hood)
class Circle {
  constructor(r) { this._r = r; }
  get area() { return Math.PI * this._r ** 2; }
  set radius(v) {
    if (v < 0) throw Error('negative');
    this._r = v;
  }
}
```
What's the difference between Object.freeze(), Object.seal(), and Object.preventExtensions()?
| Method | Add props? | Delete props? | Modify values? | Modify descriptors? |
|---|---|---|---|---|
| preventExtensions | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes |
| seal | ❌ No | ❌ No | ✅ Yes | ❌ No |
| freeze | ❌ No | ❌ No | ❌ No | ❌ No |
const config = Object.freeze({
  host: 'localhost',
  port: 3000,
  db: { name: 'mydb' }    // ← nested object
});
config.port = 8080;       // silently fails — writable: false
delete config.host;       // silently fails — configurable: false
config.newProp = 'x';     // silently fails — not extensible
console.log(config.port); // 3000 — unchanged ✅
// ⚠ freeze is SHALLOW — nested objects are NOT frozen!
config.db.name = 'hacked';   // ✅ this WORKS — db is a different object
console.log(config.db.name); // 'hacked'
// Deep freeze — recursive solution
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(name => {
    const val = obj[name];
    if (typeof val === 'object' && val !== null) deepFreeze(val);
  });
  return Object.freeze(obj);
}
// Check frozen/sealed state
Object.isFrozen(config); // true
Object.isSealed(config); // true (frozen implies sealed)
Understanding the constructor-prototype pattern is essential because:
- ES6 class is syntactic sugar over this exact mechanism
- Legacy codebases (jQuery, Backbone, many npm packages) use this pattern
- Interviewers often ask you to "do what class does, without class"
// ── Parent Constructor ──
function Shape(color) {
  this.color = color;
}
Shape.prototype.getColor = function() { return this.color; };
Shape.prototype.describe = function() { return `A ${this.color} shape`; };
// ── Child Constructor ──
function Circle(color, radius) {
  Shape.call(this, color); // Step 1: call parent constructor with child's 'this'
  this.radius = radius;
}
// Step 2: set up prototype chain — Circle.prototype inherits from Shape.prototype
Circle.prototype = Object.create(Shape.prototype);
// Step 3: repair the constructor reference (broken by Object.create)
Circle.prototype.constructor = Circle;
// Step 4: add child-specific methods
Circle.prototype.area = function() { return Math.PI * this.radius ** 2; };
Circle.prototype.describe = function() { // override parent method
  return `A ${this.color} circle of radius ${this.radius}`;
};
// ── Usage ──
const c = new Circle('red', 5);
c.getColor();        // 'red' — inherited from Shape.prototype
c.describe();        // 'A red circle of radius 5' — overridden
c.area();            // 78.54... — own method
c instanceof Circle; // true
c instanceof Shape;  // true — chain includes Shape.prototype
// ES6 class equivalent (same behaviour, cleaner syntax):
class CircleES6 extends Shape {
  constructor(color, radius) {
    super(color); // ← equivalent to Shape.call(this, color)
    this.radius = radius;
  }
  area() { return Math.PI * this.radius ** 2; }
}
function Animal(name) { this.name = name; }
Animal.prototype.type = 'animal';
Animal.prototype.speak = function() { return '...'; };
const dog = new Animal('Rex');
dog.breed = 'Labrador';
// ── Checking ownership ──
dog.hasOwnProperty('name');  // true — own
dog.hasOwnProperty('breed'); // true — own
dog.hasOwnProperty('type');  // false — inherited from Animal.prototype
dog.hasOwnProperty('speak'); // false — inherited
// Modern alternative (safer for Object.create(null) objects):
Object.hasOwn(dog, 'name');  // true — ES2022, doesn't fail on null-proto objects
// ── 'in' operator — checks own AND inherited ──
'name' in dog;  // true — own
'type' in dog;  // true — inherited
'wings' in dog; // false — not anywhere in chain
// ── Enumeration methods comparison ──
Object.keys(dog);    // ['name','breed'] — OWN + ENUMERABLE only
Object.values(dog);  // ['Rex','Labrador'] — OWN + ENUMERABLE only
Object.entries(dog); // [['name','Rex'],['breed','Labrador']]
Object.getOwnPropertyNames(dog); // ['name','breed'] — OWN (incl. non-enumerable)
for (const key in dog) {
  console.log(key); // 'name','breed','type','speak' — own + inherited enumerable
}
// Correct pattern: for...in with hasOwn guard
for (const key in dog) {
  if (Object.hasOwn(dog, key)) console.log(key); // 'name','breed' only
}
| Method | Own? | Inherited? | Non-enumerable? | Symbols? |
|---|---|---|---|---|
| Object.keys() | ✅ | ❌ | ❌ | ❌ |
| Object.getOwnPropertyNames() | ✅ | ❌ | ✅ | ❌ |
| Object.getOwnPropertySymbols() | ✅ (Symbols only) | ❌ | ✅ | ✅ |
| Reflect.ownKeys() | ✅ | ❌ | ✅ | ✅ |
| for...in | ✅ | ✅ | ❌ | ❌ |
| "key" in obj | ✅ | ✅ | ✅ | ❌ |
How does instanceof really work? What are its limitations and how do you work around them?
obj instanceof Constructor walks obj's prototype chain and checks whether Constructor.prototype appears anywhere in it. It has nothing to do with the constructor that created the object — it's purely about the prototype chain.
// Implementing instanceof from scratch
function myInstanceof(obj, Constructor) {
  let proto = Object.getPrototypeOf(obj);
  while (proto !== null) {
    if (proto === Constructor.prototype) return true;
    proto = Object.getPrototypeOf(proto);
  }
  return false;
}
function A() {}
function B() {}
B.prototype = Object.create(A.prototype);
const b = new B();
myInstanceof(b, B); // true
myInstanceof(b, A); // true — A.prototype is up the chain
// instanceof can be fooled — it checks the prototype, not the constructor
const fake = Object.create(B.prototype); // never called new B()
fake instanceof B; // true — prototype chain matches
// ❌ Limitation 1: fails for primitive wrapper types
"hello" instanceof String; // false — "hello" is a primitive, not an object
42 instanceof Number;      // false
// ❌ Limitation 2: cross-realm failure (iframes, vm module)
// An array from an iframe has Array !== window.Array
// iframeArr instanceof Array === false ← wrong!
// ✅ Fix: Array.isArray() works across realms
Array.isArray(iframeArr); // true ✅
// ❌ Limitation 3: doesn't work on null/undefined
null instanceof Object; // false — null has no prototype chain
// ✅ Custom instanceof — Symbol.hasInstance
class EvenNumber {
  static [Symbol.hasInstance](num) {
    return typeof num === 'number' && num % 2 === 0;
  }
}
console.log(4 instanceof EvenNumber); // true ← custom logic!
console.log(7 instanceof EvenNumber); // false
JavaScript classes (and prototypes) support only single inheritance — a class can only extend one parent. But real-world objects often need behaviours from multiple sources.
A mixin is like mixing paint colours. You start with a base (your class), then blend in behaviours from multiple sources (mixins). The result is an object with all the combined capabilities — without any of them being its "parent".
// ── Approach 1: Object.assign mixin (simple, shallow copy) ──
const Serializable = {
  serialize() { return JSON.stringify(this); },
  deserialize(s) { return Object.assign(this, JSON.parse(s)); }
};
const Validatable = {
  validate() { return Object.keys(this).every(k => this[k] !== null); }
};
class User {
  constructor(name, email) { this.name = name; this.email = email; }
}
// Mix behaviours INTO the prototype
Object.assign(User.prototype, Serializable, Validatable);
const u = new User('Alice', 'a@b.com');
u.serialize(); // '{"name":"Alice","email":"a@b.com"}'
u.validate();  // true
// ── Approach 2: Functional Mixin (factory, preferred pattern) ──
const withLogging = (Base) => class extends Base {
  log(msg) { console.log(`[${this.constructor.name}] ${msg}`); }
};
const withTimestamps = (Base) => class extends Base {
  constructor(...args) { super(...args); this.createdAt = new Date(); }
};
class BaseModel {
  save() { return 'saved'; }
}
// Compose: User gets both logging and timestamps
class UserModel extends withLogging(withTimestamps(BaseModel)) {
  constructor(name) { super(); this.name = name; }
}
const user = new UserModel('Alice');
user.log('created');         // [UserModel] created
console.log(user.createdAt); // Date object
What is Object.create(null)? When would you use a prototype-free object and why?
A completely bare object with no prototype chain at all. It inherits nothing — no toString, no hasOwnProperty, no constructor, no valueOf. Its [[Prototype]] is literally null.
const normal = {};
const pure = Object.create(null);
console.log(Object.getPrototypeOf(normal)); // Object.prototype
console.log(Object.getPrototypeOf(pure));   // null
normal.toString();        // "[object Object]" — inherited
pure.toString();          // ❌ TypeError: pure.toString is not a function
pure.hasOwnProperty('x'); // ❌ TypeError — use Object.hasOwn(pure, 'x') instead
// ── Use Case 1: Safe dictionary / hash map ──
// Regular object is vulnerable to prototype pollution:
const dict = {};
dict['__proto__'] = 'hacked'; // ⚠ could affect the prototype
dict['toString'] = 'shadow';  // shadows inherited method
'toString' in dict; // true — can't tell own from inherited
// Object.create(null) — zero risk of collision
const safeMap = Object.create(null);
safeMap.toString = 'my value'; // just a key — no prototype shadow
'toString' in safeMap; // true ONLY if explicitly set
for (const key in safeMap) { ... } // only own keys — no prototype noise
// ── Use Case 2: Preventing prototype pollution attacks ──
// Malicious JSON: {"__proto__": {"isAdmin": true}}
// If parsed into a regular object, Object.prototype.isAdmin becomes true
// Object.create(null) target is immune — has no [[Prototype]] to pollute
// ── Use Case 3: Performance — inline cache friendly ──
// V8 can use a single hidden class for a null-proto object used as a lookup
// table — no prototype chain to walk for every lookup
// ── Object.create(null) vs Map ──
// Object.create(null): string/symbol keys, JSON-serializable, prototype-safe
// Map: any key type, ordered, has size, better for frequent add/delete
"Object.create(null) creates an object with absolutely no prototype — it doesn't inherit from Object.prototype at all. This makes it perfect as a safe dictionary where any string key is guaranteed to be your own data, not a shadow of an inherited method like toString or hasOwnProperty. It's also the main defence against prototype pollution attacks — if you parse untrusted JSON into a null-proto object, there's no prototype chain to pollute. The trade-off is that you can't use inherited methods directly and must use Object.hasOwn() instead of hasOwnProperty()."
Mental Model — Classes Are Syntactic Sugar Over Prototypes
ES6 class did not change JavaScript's object model. It is a cleaner syntax for the same constructor-function + prototype pattern from Segment 4. Under the hood the engine still builds prototype chains — behaviour is shared by delegation, not copied into each instance.
A class is a blueprint. Every new MyClass() stamps out an instance. The instance owns its data (fields set in the constructor) but shares behaviour (methods) by delegating up to MyClass.prototype. 1 000 instances → 1 copy of each method, not 1 000 copies.
function Animal(name) {
this.name = name;
}
Animal.prototype.speak = function() {
return this.name + ' speaks';
};
class Animal {
constructor(name) {
this.name = name;
}
speak() {
return `${this.name} speaks`;
}
}
typeof Animal === 'function' — classes still produce functions. Classes add stricter rules: new is required, the body is always strict, prototype methods are non-enumerable.
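A quick sketch checking those rules directly (the Animal class is re-declared here so the snippet stands alone):

```javascript
class Animal {
  constructor(name) { this.name = name; }
  speak() { return `${this.name} speaks`; }
}
console.log(typeof Animal);                 // 'function' — a class IS a function
const a = new Animal('Rex');
const b = new Animal('Bella');
console.log(a.speak === b.speak);           // true — one shared copy on the prototype
console.log(Object.hasOwn(a, 'speak'));     // false — delegated, not copied per instance
console.log(Object.keys(Animal.prototype)); // [] — class methods are non-enumerable
```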
| Feature | ES5 Constructor | ES6 Class |
|---|---|---|
| Hoisting | Fully hoisted | Hoisted but TDZ (like let) |
| Strict mode | Optional | Always strict |
| Prototype methods enumerable | Yes | No |
| Call without new | Silently corrupts globals | Throws TypeError |
| Inheritance | Manual prototype wiring | extends keyword |
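The "call without new" row is worth proving to yourself. A small sketch (the sloppy-mode ES5 behaviour is only described in a comment, not executed):

```javascript
function OldStyle(name) { this.name = name; } // sloppy mode: `this` would be globalThis
class NewStyle { constructor(name) { this.name = name; } }

let error = null;
try {
  NewStyle('oops'); // class invoked without new
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true — the engine refuses the plain call
// OldStyle('oops') in sloppy mode would silently set globalThis.name instead
```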
Topic A — ES6 Class Syntax
Class declaration — hoisted into TDZ like let, NOT usable before the definition line.
class Person {
constructor(name, age) {
this.name = name; // own property on each instance
this.age = age;
}
greet() { return `Hi, I'm ${this.name}`; } // on Person.prototype
toString() { return `Person(${this.name}, ${this.age})`; }
}
const p = new Person('Alice', 30);
console.log(p.greet()); // Hi, I'm Alice
console.log(`${p}`); // Person(Alice, 30) — auto toString
console.log(p instanceof Person); // true
Class expression — named or anonymous, assigned to a variable.
const Dog = class {
constructor(breed) { this.breed = breed; }
bark() { return 'Woof!'; }
};
// Named class expression — inner name scoped to class body only
const Cat = class PrivateCat {
clone() { return new PrivateCat(); } // ✅ inside
};
// new PrivateCat() outside → ReferenceError
Hoisting trap:
const x = new Foo(); // ❌ ReferenceError — TDZ
class Foo {}
const y = new Bar(); // ✅ function declarations are fully hoisted
function Bar() {}
Calling a class without new throws a TypeError instead of silently polluting the global object. Class bodies also support computed method names: [Symbol.iterator]() {} — this is how built-ins implement the iteration protocol.
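As a sketch of a computed method name, here is an assumed Range class wiring up the iteration protocol with the well-known symbol:

```javascript
class Range {
  constructor(from, to) { this.from = from; this.to = to; }
  *[Symbol.iterator]() { // computed name: evaluated at class definition time
    for (let i = this.from; i <= this.to; i++) yield i;
  }
}
console.log([...new Range(1, 4)]); // [1, 2, 3, 4] — spread uses the iterator
for (const n of new Range(1, 3)) console.log(n); // 1, 2, 3
```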
How do extends and super work?
extends wires up the prototype chain. super() calls the parent constructor; super.method() calls a parent method. You must call super() before accessing this in a derived class.
class Animal {
constructor(name) { this.name = name; this.alive = true; }
speak() { return `${this.name} makes a noise.`; }
}
class Dog extends Animal {
constructor(name, breed) {
super(name); // MUST call before using this
this.breed = breed;
}
speak() {
return super.speak() + ' Woof!'; // call parent method
}
}
const d = new Dog('Rex', 'Lab');
console.log(d.speak()); // Rex makes a noise. Woof!
console.log(d instanceof Dog); // true
console.log(d instanceof Animal); // true
// Omitting constructor in derived class — JS auto-inserts:
// constructor(...args) { super(...args); }
class Cat extends Animal {
speak() { return 'Meow'; }
}
// Extending built-ins
class Stack extends Array {
peek() { return this[this.length - 1]; }
isEmpty() { return this.length === 0; }
}
const s = new Stack();
s.push(1, 2, 3);
console.log(s.peek()); // 3
super in a method uses the [[HomeObject]] internal slot — resolved at definition time, not call time. Arrow functions have no [[HomeObject]] so they cannot use super for method calls. This is one reason arrow functions should not be used as prototype methods.
| Kind | Lives on | Access via |
|---|---|---|
| Instance method | ClassName.prototype | instance.method() |
| Instance field | the instance (own property) | instance.field |
| Static method | the class function itself | ClassName.method() |
| Static field | the class function itself | ClassName.field |
class Counter {
count = 0; // instance field — fresh per instance
static instances = 0; // static field — one copy on Counter
constructor() { Counter.instances++; }
increment() { this.count++; } // instance method → Counter.prototype
static reset() { Counter.instances = 0; } // static utility
static createBatch(n) { // static factory method
return Array.from({ length: n }, () => new Counter());
}
}
const a = new Counter();
const b = new Counter();
a.increment(); a.increment();
console.log(a.count); // 2 — own
console.log(b.count); // 0 — separate instance
console.log(Counter.instances); // 2 — shared
Critical gotcha — object/array fields:
class Good { items = []; }
// ✅ Each instance gets its OWN fresh array
class Bad {}
Bad.prototype.items = [];
// ❌ ALL instances share the SAME array — mutation bug!
const x = new Bad(), y = new Bad();
x.items.push(1);
console.log(y.items); // [1] — oops
Static methods are utilities that belong to the class itself — think Date.now(), Array.from(), Object.keys(). They cannot access instance data. Calling a static method on an instance fails because static methods live on the constructor, which is NOT in the instance's prototype chain.
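A minimal sketch of that lookup failure (MathUtil is an assumed name):

```javascript
class MathUtil {
  static double(n) { return n * 2; } // lives on MathUtil itself
}
const m = new MathUtil();
console.log(MathUtil.double(21)); // 42 — called on the class
console.log(typeof m.double);     // 'undefined' — not reachable via the instance
console.log(Object.getPrototypeOf(m) === MathUtil.prototype); // true — chain goes here
console.log(Object.hasOwn(MathUtil, 'double'));               // true — method is here instead
```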
Topic B — Modern Class Features
What are private fields (#) and private methods? How do they differ from closure-based privacy?
Private fields (ES2022) use a # prefix. They are hard-private: inaccessible outside the class body, enforced by the engine. Not visible to Object.keys, JSON.stringify, or bracket notation.
class BankAccount {
#balance; // must declare at class-body top
#owner;
#log = [];
constructor(owner, balance) {
this.#owner = owner;
this.#balance = balance;
}
#record(msg) { this.#log.push(msg); } // private method
deposit(amount) {
if (amount <= 0) throw new Error('Must be positive');
this.#balance += amount;
this.#record(`deposit ${amount}`);
return this; // fluent interface
}
withdraw(amount) {
if (amount > this.#balance) throw new Error('Insufficient funds');
this.#balance -= amount;
this.#record(`withdraw ${amount}`);
return this;
}
statement() {
return { owner: this.#owner, balance: this.#balance, log: [...this.#log] };
}
}
const acc = new BankAccount('Alice', 1000);
acc.deposit(500).withdraw(200);
console.log(acc.statement()); // { owner:'Alice', balance:1300, log:[...] }
acc.#balance; // ❌ SyntaxError
acc['#balance']; // undefined — # is NOT a string prefix
| Aspect | Private fields (#) | Closure pattern |
|---|---|---|
| Enforcement | Engine-level SyntaxError | Convention — bypassable |
| Subclass access | Declaring class only | Subclasses can share same closure |
| Methods shared | Yes — on prototype (efficient) | No — new copies per instance |
| Brand check | #field in obj | No equivalent |
// Brand checking — proves obj is a genuine Point instance
class Point {
#x; #y;
constructor(x, y) { this.#x = x; this.#y = y; }
static isPoint(obj) { return #x in obj; }
}
console.log(Point.isPoint(new Point(1,2))); // true
console.log(Point.isPoint({ x:1, y:2 })); // false
Referencing this.#balance from a subclass when the field is declared in the parent fails — a private name is visible only inside the class body that declares it (the reference is a syntax error). JS has no protected visibility. Work around it with public getters in the parent or keep field access inside parent methods.
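A sketch of the getter workaround (Account and SavingsAccount are illustrative names): the parent exposes a read-only view, so subclasses never touch the private name.

```javascript
class Account {
  #balance = 0;
  get balance() { return this.#balance; } // public, read-only view for subclasses
  deposit(n) { this.#balance += n; return this; }
}
class SavingsAccount extends Account {
  // this.#balance here would be a syntax error — use the inherited getter instead
  interest(rate) { return this.balance * rate; }
}
const s = new SavingsAccount();
s.deposit(1000);
console.log(s.interest(0.05)); // 50
```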
Getters/setters define computed or validated properties accessed with dot notation — accessor descriptors placed on the prototype.
class Temperature {
#celsius;
constructor(c) { this.celsius = c; } // calls setter for validation
get fahrenheit() { return this.#celsius * 9/5 + 32; }
set fahrenheit(f) { this.celsius = (f - 32) * 5/9; }
get celsius() { return this.#celsius; }
set celsius(c) {
if (c < -273.15) throw new RangeError('Below absolute zero!');
this.#celsius = c;
}
}
const t = new Temperature(100);
console.log(t.fahrenheit); // 212
t.fahrenheit = 32;
console.log(t.celsius); // 0
Abstract class pattern — JS has no abstract keyword; simulate with new.target and explicit throws:
class Shape {
constructor() {
if (new.target === Shape)
throw new TypeError('Shape is abstract');
}
area() { throw new Error('area() must be implemented'); }
perimeter() { throw new Error('perimeter() must be implemented'); }
describe() { // Template Method pattern — skeleton defined here
return `Area: ${this.area()}, Perimeter: ${this.perimeter()}`;
}
}
class Circle extends Shape {
constructor(r) { super(); this.r = r; }
area() { return Math.PI * this.r ** 2; }
perimeter() { return 2 * Math.PI * this.r; }
}
new Shape(); // ❌ TypeError
const c = new Circle(5);
console.log(c.describe()); // Area: 78.53..., Perimeter: 31.41...
new.target is the constructor invoked with new. Inside a subclass it refers to the subclass — so the guard only fires when you try to instantiate the base directly.
describe() is the Template Method pattern — base class defines the algorithm skeleton, subclasses fill in the steps. Getter-only properties (no setter) are naturally read-only: assigning in strict mode throws TypeError.
Topic C — Design Patterns
Singleton — guarantees only one instance. Used for shared resources: config, logger, DB connection pool.
class Logger {
static #instance = null;
#logs = [];
constructor() {
if (Logger.#instance) return Logger.#instance;
Logger.#instance = this;
}
log(msg) { this.#logs.push(`[${Date.now()}] ${msg}`); }
getLogs() { return [...this.#logs]; }
static getInstance() {
return Logger.#instance ?? (Logger.#instance = new Logger());
}
}
const a = Logger.getInstance();
const b = Logger.getInstance();
console.log(a === b); // true — same object
a.log('Hello');
console.log(b.getLogs()); // ['[...] Hello'] — shared state
Factory pattern — creates objects without exposing construction logic; type chosen at runtime.
class Car { describe() { return 'Car — 200 km/h'; } }
class Motorcycle { describe() { return 'Motorcycle — 250 km/h'; } }
class Truck { describe() { return 'Truck — 120 km/h'; } }
class VehicleFactory {
static #registry = {
car: Car, motorcycle: Motorcycle, truck: Truck
};
static create(type) {
const Cls = this.#registry[type.toLowerCase()];
if (!Cls) throw new Error(`Unknown vehicle: ${type}`);
return new Cls();
}
static register(type, Cls) { // Open/Closed — extend without modifying
this.#registry[type] = Cls;
}
}
console.log(VehicleFactory.create('car').describe()); // Car — 200 km/h
register() lets you add types without touching existing code.
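For instance, a new vehicle type can be plugged in at runtime. The factory is re-declared below so the sketch runs standalone; Bus is an assumed class.

```javascript
class VehicleFactory {
  static #registry = {};
  static create(type) {
    const Cls = this.#registry[type.toLowerCase()];
    if (!Cls) throw new Error(`Unknown vehicle: ${type}`);
    return new Cls();
  }
  static register(type, Cls) { this.#registry[type] = Cls; }
}
class Bus { describe() { return 'Bus — 100 km/h'; } }
VehicleFactory.register('bus', Bus); // no existing factory code touched
console.log(VehicleFactory.create('bus').describe()); // Bus — 100 km/h
```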
One emitter notifies many observers when state changes. Foundation of every event system, reactive library, and state manager.
class EventEmitter {
#events = new Map();
on(event, handler) {
if (!this.#events.has(event)) this.#events.set(event, new Set());
this.#events.get(event).add(handler);
return () => this.off(event, handler); // returns unsubscribe fn
}
once(event, handler) {
const w = (...args) => { handler(...args); this.off(event, w); };
return this.on(event, w);
}
off(event, handler) { this.#events.get(event)?.delete(handler); }
emit(event, ...args) { this.#events.get(event)?.forEach(h => h(...args)); }
}
class Store extends EventEmitter {
#state;
constructor(init) { super(); this.#state = init; }
setState(patch) {
const prev = { ...this.#state };
this.#state = { ...this.#state, ...patch };
this.emit('change', this.#state, prev);
}
getState() { return { ...this.#state }; }
}
const store = new Store({ count: 0 });
const unsub = store.on('change', (next, prev) =>
console.log(`count: ${prev.count} → ${next.count}`)
);
store.setState({ count: 1 }); // count: 0 → 1
store.setState({ count: 2 }); // count: 1 → 2
unsub();
store.setState({ count: 3 }); // (silent — unsubscribed)
| Pros | Cons / Watch-outs |
|---|---|
| Loose coupling — emitter doesn't know observers | Memory leaks — forgotten subscriptions keep closures alive |
| Easy add/remove at runtime | Hard to trace where an event fired from |
| Foundation of Vue, Redux, Node EventEmitter | Synchronous emit: one slow handler blocks all others |
The unsubscribe function returned by on() mirrors React's useEffect cleanup return. Using a Set prevents double-registration of the same handler.
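The Set-dedupe claim is easy to check with a stripped-down emitter (MiniEmitter is illustrative, not part of the API above):

```javascript
class MiniEmitter {
  #handlers = new Set();
  on(h) { this.#handlers.add(h); } // Set.add is a no-op for an existing member
  emit(...args) { this.#handlers.forEach(h => h(...args)); }
}
const e = new MiniEmitter();
let calls = 0;
const handler = () => calls++;
e.on(handler);
e.on(handler); // same function object — silently deduplicated
e.emit();
console.log(calls); // 1 — not 2
```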
Wraps an object or function to add behaviour without modifying the original — Open/Closed principle in action.
Class-based (coffee shop example):
class Coffee {
cost() { return 5; }
description() { return 'Coffee'; }
}
class Milk {
constructor(b) { this.b = b; }
cost() { return this.b.cost() + 1.5; }
description() { return this.b.description() + ', Milk'; }
}
class Vanilla {
constructor(b) { this.b = b; }
cost() { return this.b.cost() + 2; }
description() { return this.b.description() + ', Vanilla'; }
}
let drink = new Vanilla(new Milk(new Coffee()));
console.log(drink.cost()); // 8.5
console.log(drink.description()); // Coffee, Milk, Vanilla
Function decorators — cross-cutting concerns:
function memoize(fn) {
const cache = new Map();
return function(...args) {
const key = JSON.stringify(args);
if (cache.has(key)) return cache.get(key);
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
function withLogging(fn) {
return function(...args) {
console.log(`Calling ${fn.name}`, args);
const r = fn.apply(this, args);
console.log(`→`, r);
return r;
};
}
// Use an arrow so the recursive call resolves to the memoized wrapper —
// a named function expression's inner name would bypass the cache
const fib = memoize(n => n <= 1 ? n : fib(n - 1) + fib(n - 2));
console.log(fib(40)); // 102334155 — instant, recursive calls hit the cache
Decorators compose: Redux builds its middleware pipeline with compose(...middlewares). React HOCs (withAuth(Component)) are decorators. Express middleware is a decorator chain on req/res. Key distinction: composition via wrapping vs code sharing via the prototype chain.
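The wrapping style composes mechanically. A sketch of a compose helper in the spirit of Redux's, applied to two toy decorators (all names here are made up for illustration):

```javascript
// compose(f, g)(x) === f(g(x)) — wrappers apply right-to-left
const compose = (...fns) => x => fns.reduceRight((acc, f) => f(acc), x);

const withDouble = fn => (...args) => fn(...args) * 2;
const withPlusOne = fn => (...args) => fn(...args) + 1;

const base = n => n;
const decorated = compose(withDouble, withPlusOne)(base); // withDouble(withPlusOne(base))
console.log(decorated(5)); // (5 + 1) * 2 = 12
```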
Topic D — Composition vs Inheritance & Proxy/Reflect
Deep inheritance trees share code along a single chain. If a class needs behaviour from two branches it cannot inherit both (no multiple inheritance in JS). Composition builds objects from independent capability pieces — no hierarchy conflicts.
Inheritance is a rigid blueprint: a FlyingFish must pick one parent (Fish or Bird?). Composition is LEGO: snap together canSwim, canFly, canBite freely. Any combination, no hierarchy conflicts.
// Capability factories
const canSwim = (s) => ({ swim() { return `${s.name} swimming`; } });
const canFly = (s) => ({ fly() { return `${s.name} flying`; } });
const canBite = (s) => ({ bite() { return `${s.name} bites (${s.force})`; } });
// Object factories — compose only what they need
const createDuck = (name) => {
const s = { name };
return Object.freeze({ ...canSwim(s), ...canFly(s),
quack() { return 'Quack!'; } });
};
const createCroc = (name, force) => {
const s = { name, force };
return Object.freeze({ ...canSwim(s), ...canBite(s) });
};
const duck = createDuck('Donald');
const croc = createCroc('Sobek', 3700);
console.log(duck.fly()); // Donald flying
console.log(croc.bite()); // Sobek bites (3700)
// croc.fly is undefined — clean absence
Composition with classes — dependency injection:
class OrderService {
constructor(payment, email, logger) {
this.payment = payment;
this.email = email;
this.logger = logger;
}
async place(order) {
await this.payment.charge(order);
await this.email.send(order);
this.logger.log('Order placed');
}
}
// Swap any collaborator without changing OrderService
| Aspect | Inheritance | Composition |
|---|---|---|
| Coupling | Tight — child depends on parent internals | Loose — depends on interface only |
| Flexibility | Fixed at definition time | Swappable at runtime |
| Testability | Hard — must construct full hierarchy | Easy — inject mocks |
| Multiple behaviours | Single chain only | Mix any capabilities freely |
React itself moved from class inheritance (extending React.Component) to hooks (composing independent capabilities) for exactly this reason.
Proxy wraps any object and intercepts fundamental operations via traps. Reflect provides the default behaviour for each trap — always call Reflect inside a trap to preserve correct semantics.
Every passenger (operation) passes through security (the Proxy trap). Security can inspect, modify, allow, or block. After the check the passenger boards normally (Reflect performs the original operation).
// ── Validation Proxy ──
function createValidator(target, schema) {
return new Proxy(target, {
set(obj, prop, value, receiver) {
const rule = schema[prop];
if (rule) {
if (typeof value !== rule.type)
throw new TypeError(`${prop} must be ${rule.type}`);
if (rule.min != null && value < rule.min)
throw new RangeError(`${prop} must be ≥ ${rule.min}`);
if (rule.max != null && value > rule.max)
throw new RangeError(`${prop} must be ≤ ${rule.max}`);
}
return Reflect.set(obj, prop, value, receiver);
}
});
}
const person = createValidator({}, {
name: { type: 'string' },
age: { type: 'number', min: 0, max: 150 }
});
person.name = 'Alice'; // ✅
person.age = 200; // ❌ RangeError: age must be ≤ 150
// ── Lazy Initialisation ──
function lazy(factory) {
let instance;
return new Proxy({}, {
get(_, prop) {
if (!instance) { instance = factory(); console.log('Initialised!'); }
return instance[prop];
}
});
}
const db = lazy(() => ({ query: sql => `result of ${sql}` }));
db.query('SELECT 1'); // Initialised! (first access)
db.query('SELECT 2'); // reuses instance
// ── Negative Array Index (like Python) ──
function negIdx(arr) {
return new Proxy(arr, {
get(t, prop, r) {
// Guard: unary + on a Symbol throws, so only coerce string keys
const i = typeof prop === 'string' ? +prop : NaN;
const key = Number.isInteger(i) && i < 0
? String(t.length + i) : prop;
return Reflect.get(t, key, r);
}
});
}
const arr = negIdx([10, 20, 30]);
console.log(arr[-1]); // 30
console.log(arr[-2]); // 20
Why always use Reflect inside traps:
// ❌ Breaks inherited getters — receiver is lost
get(obj, prop) { return obj[prop]; }
// ✅ Correct — receiver forwarded, inherited getters work
get(obj, prop, receiver) { return Reflect.get(obj, prop, receiver); }
All 13 Proxy traps:
| Trap | Intercepts |
|---|---|
| get | obj.x |
| set | obj.x = v |
| has | x in obj |
| deleteProperty | delete obj.x |
| apply | function call fn() |
| construct | new Fn() |
| getPrototypeOf | Object.getPrototypeOf(obj) |
| setPrototypeOf | Object.setPrototypeOf(obj, p) |
| isExtensible | Object.isExtensible(obj) |
| preventExtensions | Object.preventExtensions(obj) |
| getOwnPropertyDescriptor | Object.getOwnPropertyDescriptor(obj, k) |
| defineProperty | Object.defineProperty(obj, k, d) |
| ownKeys | Object.keys, Reflect.ownKeys, for...in |
Vue 3 replaced Object.defineProperty-based reactivity (Vue 2) with Proxy — the old approach could not intercept array index assignments or .length changes. Real-world uses: ORMs lazy-loading relations, validation, state management, access control. Performance: roughly 5–20% overhead per intercepted operation — fine for business objects, avoid on hot numerical loops.
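As one more trap sketch, the has, ownKeys, and get traps can cooperate to hide underscore-prefixed keys (the underscore convention and the hidePrivate name are assumptions for this example):

```javascript
function hidePrivate(target) {
  const isHidden = key => typeof key === 'string' && key.startsWith('_');
  return new Proxy(target, {
    has(t, key) { return isHidden(key) ? false : Reflect.has(t, key); },
    ownKeys(t) { return Reflect.ownKeys(t).filter(k => !isHidden(k)); },
    get(t, key, r) { return isHidden(key) ? undefined : Reflect.get(t, key, r); }
  });
}
const user = hidePrivate({ name: 'Alice', _token: 'secret' });
console.log('name' in user);    // true
console.log('_token' in user);  // false — blocked by the has trap
console.log(Object.keys(user)); // ['name'] — filtered by the ownKeys trap
console.log(user._token);       // undefined — masked by the get trap
```

Note the hidden keys must stay configurable on the target; the ownKeys invariant forces non-configurable own properties to be reported.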
Mental Model — JavaScript's Concurrency Engine
JavaScript is single-threaded — only one piece of code runs at a time. Yet browsers handle timers, network calls, and user events concurrently. The secret is the Event Loop: JS offloads slow work to the browser/Node runtime, which calls back into JS when the work finishes.
The JS thread is a single chef. When an order (async task) comes in, the chef hands it to the kitchen assistant (browser APIs / libuv). The chef keeps cooking other orders. When the assistant signals "order ready", the chef picks it up at the pass (the task queue) — but only when the current dish is done. The pass has two windows: a VIP window (microtask queue — Promises, queueMicrotask) served first, and a regular window (task queue — setTimeout, I/O events).
// Execution order mental model
console.log('1 — sync');
setTimeout(() => console.log('4 — macrotask'), 0);
Promise.resolve().then(() => console.log('3 — microtask'));
console.log('2 — sync');
// Output: 1, 2, 3, 4
// Sync runs first → microtasks drain → then macrotask
| Queue | Fed by | Priority | Examples |
|---|---|---|---|
| Call stack | Running JS | Highest — runs now | function calls, expressions |
| Microtask queue | Promise callbacks, queueMicrotask, MutationObserver | Drains completely before next task | .then(), await resumption |
| Task queue (macrotask) | Browser/Node runtime | One task per event loop tick | setTimeout, setInterval, I/O, UI events |
Topic A — Event Loop & Task Queues
The event loop algorithm (simplified):
// Pseudocode of one event loop iteration
while (true) {
// 1. Execute all synchronous code (call stack)
runCallStack();
// 2. Drain ALL microtasks (including ones added during draining)
while (microtaskQueue.length) {
microtaskQueue.shift()();
}
// 3. Run ONE macrotask
if (taskQueue.length) taskQueue.shift()();
// 4. Render (browser only) if needed
// 5. Loop
}
Practical ordering quiz — trace the output:
console.log('A');
setTimeout(() => console.log('B'), 0);
setTimeout(() => console.log('C'), 0);
Promise.resolve()
.then(() => {
console.log('D');
return Promise.resolve();
})
.then(() => console.log('E'));
Promise.resolve().then(() => console.log('F'));
console.log('G');
// Output: A G D F E B C
//
// Sync: A, G
// Microtasks: D (queues E), F → then E drains
// Macrotasks: B, C (one per loop tick)
Microtask starvation — if microtasks keep adding microtasks, macrotasks (and rendering) starve:
function flood() {
Promise.resolve().then(flood); // ❌ infinite microtask loop
}
flood();
// setTimeout callbacks never fire — UI freezes
queueMicrotask — explicit microtask scheduling without creating a Promise:
queueMicrotask(() => console.log('runs after current sync, before macrotasks'));
// Useful in library code to batch DOM updates
setTimeout(fn, 0) does NOT mean "run immediately" — it means "put in the macrotask queue after at least 0ms". The actual delay is often 4ms+ in browsers due to clamping. Microtasks always run before the next macrotask, which is why Promise chains feel "instant" compared to setTimeout.
What is requestAnimationFrame and where does it fit in the event loop?
requestAnimationFrame (rAF) schedules a callback to run just before the browser paints the next frame (~16ms at 60fps). It sits between macrotasks and the render step — after microtasks drain but before painting.
// Event loop order (browser):
// 1. Macrotask 2. Microtasks drain 3. rAF callbacks 4. Paint
// BAD — causes layout thrash (forces synchronous layout)
function badAnimation() {
element.style.left = element.offsetLeft + 1 + 'px'; // read then write
setTimeout(badAnimation, 16); // not synced to display refresh
}
// GOOD — synced to display refresh, no wasted frames
function goodAnimation() {
element.style.left = element.offsetLeft + 1 + 'px';
requestAnimationFrame(goodAnimation);
}
requestAnimationFrame(goodAnimation);
Batching DOM reads and writes with rAF:
// Read-write batching pattern — avoids layout thrash
const reads = [];
const writes = [];
function scheduleRead(fn) { reads.push(fn); requestRaf(); }
function scheduleWrite(fn) { writes.push(fn); requestRaf(); }
let rafScheduled = false;
function requestRaf() {
if (!rafScheduled) {
rafScheduled = true;
requestAnimationFrame(() => {
reads.splice(0).forEach(f => f()); // reads first
writes.splice(0).forEach(f => f()); // writes second
rafScheduled = false;
});
}
}
rAF pauses when the tab is hidden; setInterval keeps firing (though throttled). Use rAF for animations; use setInterval/setTimeout for polling. rAF receives a DOMHighResTimeStamp argument — use it to compute delta time for frame-rate-independent animation.
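The timestamp-based delta pattern can be sketched like this — advance() is a pure helper (the 120 px/sec speed and the element update are illustrative assumptions), so the math is easy to verify outside a browser:

```javascript
// Frame-rate-independent movement: scale speed by elapsed time, not by frame count.
function advance(position, pxPerSec, deltaMs) {
  return position + pxPerSec * (deltaMs / 1000);
}

let pos = 0;
let last = null;
function frame(now) {                    // now is a DOMHighResTimeStamp
  if (last !== null) {
    pos = advance(pos, 120, now - last); // 120 px/sec regardless of fps
    // element.style.transform = `translateX(${pos}px)`;
  }
  last = now;
  requestAnimationFrame(frame);          // schedule the next frame
}
// requestAnimationFrame(frame);         // start the loop (browser only)
```

At 60fps the delta is ~16.7ms per frame; at 30fps it doubles — but pos moves the same distance per second either way.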
Topic B — Promises
A Promise is an object representing an eventual value. It has three states — pending, fulfilled, rejected — and transitions are one-way and irreversible.
A Promise is like a restaurant pager. The kitchen (async work) is processing your order. The pager is pending. When food is ready it buzzes fulfilled (or flashes rejected if they're out of stock). You can hand the pager to a friend — that's chaining. The pager's state never resets.
// Creating a Promise
const p = new Promise((resolve, reject) => {
// executor runs SYNCHRONOUSLY
setTimeout(() => {
const ok = Math.random() > 0.5;
if (ok) resolve('data');
else reject(new Error('Failed'));
}, 1000);
});
// .then returns a NEW promise — that's how chains work
p
.then(data => { console.log('Got:', data); return data.toUpperCase(); })
.then(upper => console.log('Upper:', upper))
.catch(err => console.error('Error:', err.message))
.finally(() => console.log('Always runs'));
How chaining works under the hood: each .then(onFulfilled) returns a new Promise whose value is whatever onFulfilled returns. If it returns another Promise, the chain waits for that Promise to settle.
// Error propagation — errors skip .then, jump to .catch
Promise.resolve(1)
.then(v => { throw new Error('Oops'); }) // error thrown
.then(v => console.log('skipped')) // SKIPPED
.then(v => console.log('also skipped')) // SKIPPED
.catch(e => console.log('caught:', e.message)) // caught: Oops
.then(v => console.log('resumes here')); // chain resumes
// .catch(fn) is sugar for .then(undefined, fn)
Common mistakes:
// ❌ Forgotten return — breaks the chain
fetch('/api')
.then(res => {
res.json(); // no return — next .then gets undefined
})
.then(data => console.log(data)); // undefined
// ✅ Return the inner promise
fetch('/api')
.then(res => res.json()) // returned
.then(data => console.log(data));
// ❌ Promise constructor anti-pattern
const bad = new Promise((resolve) => {
fetch('/api').then(resolve); // redundant wrapper
});
// ✅ Just return the fetch promise directly
const good = fetch('/api');
Always terminate chains with a .catch(), or use try/catch with async/await. A resolved Promise's .then callbacks still run asynchronously (as microtasks) — they never run synchronously, even if the promise is already settled.
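The "never synchronous" rule is easy to demonstrate — even a promise that is already resolved defers its callback to the microtask queue:

```javascript
// .then callbacks are always deferred, even on an already-resolved promise
const order = [];
Promise.resolve('done').then(() => order.push('then callback'));
order.push('sync code');
// Right now, order is ['sync code'] — the .then callback only runs
// after the current synchronous code finishes.
```

This guarantee is what makes Promise-based code predictable: a callback is either always sync or always async, never "sometimes".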
What's the difference between Promise.all, allSettled, race, and any?
| Method | Resolves when | Rejects when | Use case |
|---|---|---|---|
| Promise.all | ALL fulfill | ANY rejects (fast-fail) | Parallel independent tasks that all must succeed |
| Promise.allSettled | ALL settle (either way) | Never rejects | Run all, collect results + errors |
| Promise.race | FIRST to settle (any) | FIRST to settle, if it rejects | Timeout pattern, fastest response |
| Promise.any | FIRST to fulfill | ALL reject (AggregateError) | First success, ignore failures |
const delay = (ms, val) => new Promise(res => setTimeout(() => res(val), ms));
const fail = (ms, msg) => new Promise((_, rej) => setTimeout(() => rej(new Error(msg)), ms));
// Promise.all — all must succeed
const [user, posts, comments] = await Promise.all([
fetch('/user').then(r => r.json()),
fetch('/posts').then(r => r.json()),
fetch('/comments').then(r => r.json()),
]);
// Runs in parallel — roughly 3x faster than awaiting each request sequentially
// Promise.allSettled — collect all results regardless
const results = await Promise.allSettled([
delay(100, 'ok'),
fail(200, 'boom'),
delay(300, 'fine')
]);
results.forEach(r => {
if (r.status === 'fulfilled') console.log('✅', r.value);
else console.log('❌', r.reason.message);
});
// ✅ ok ❌ boom ✅ fine
// Promise.race — timeout pattern
function withTimeout(promise, ms) {
const timeout = new Promise((_, reject) =>
setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
);
return Promise.race([promise, timeout]);
}
await withTimeout(fetch('/slow-api'), 5000);
// Promise.any — first success wins
const fastest = await Promise.any([
fetch('https://cdn1.example.com/file'),
fetch('https://cdn2.example.com/file'),
fetch('https://cdn3.example.com/file'),
]);
// Returns first successful CDN response
Don't await inside a loop when the tasks are independent: for (const url of urls) { await fetch(url); } is far slower than await Promise.all(urls.map(u => fetch(u))). Use Promise.all for parallel work. Use allSettled when you need a report on all tasks, not just the first failure.
Topic C — async/await
async marks a function as always returning a Promise. await suspends the function at that point and yields control back to the event loop; when the awaited Promise settles, the rest of the function resumes as a microtask.
Think of an async function as a DVR recording. When it hits await, it pauses the tape and hands control back to the OS (event loop). When the awaited value is ready, the DVR resumes from exactly where it paused — all local variables intact.
// async/await is Promise syntax sugar
async function getUser(id) {
const res = await fetch(`/users/${id}`); // suspends here
const user = await res.json(); // suspends here
return user; // wraps in Promise.resolve(user)
}
// Equivalent Promise chain:
function getUser(id) {
return fetch(`/users/${id}`)
.then(res => res.json())
.then(user => user);
}
Error handling with try/catch:
async function loadData() {
try {
const res = await fetch('/api/data');
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const data = await res.json();
return data;
} catch (err) {
console.error('Failed:', err.message);
throw err; // re-throw to caller
}
}
// Alternatively — per-await error handling
const [data, err] = await loadData()
.then(d => [d, null])
.catch(e => [null, e]); // Go-style error handling
if (err) { /* handle */ }
Common async/await pitfalls:
// ❌ Sequential when parallel is possible
const a = await fetchA(); // waits for A before starting B
const b = await fetchB();
// ✅ Parallel
const [a, b] = await Promise.all([fetchA(), fetchB()]);
// ❌ await inside forEach — doesn't work as expected
[1,2,3].forEach(async (id) => {
await process(id); // forEach ignores returned Promises
});
// ✅ Use for...of for sequential, Promise.all for parallel
for (const id of [1,2,3]) { await process(id); } // sequential
await Promise.all([1,2,3].map(process)); // parallel
// ❌ Top-level await only works in ES modules
// ✅ Wrap in async IIFE in CommonJS
(async () => {
const data = await fetchData();
console.log(data);
})();
async/await does NOT make code run faster — it just makes async code read like synchronous code. The thread is still released at each await. An async function always returns a Promise — even async function f() { return 1; } returns Promise.resolve(1). Awaiting a non-Promise just wraps it: await 5 is a microtask hop that resolves to 5.
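Both guarantees in that paragraph can be checked directly — the return value is always a Promise, and await accepts plain values too:

```javascript
// An async function's return value is always wrapped in a Promise
async function one() { return 1; }

const p = one();
console.log(p instanceof Promise); // true — even though the body is synchronous

// Awaiting a non-Promise just wraps it — one microtask hop, same value
async function plusOne(x) {
  const v = await x; // works whether x is a Promise or a plain value
  return v + 1;
}
```

This is why you can freely mix cached plain values and in-flight Promises behind the same await.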
AbortController provides a cancellation signal that can be passed to fetch and other async APIs. Calling controller.abort() fires an abort event on the signal, causing any associated fetch to reject with an AbortError.
// Basic fetch cancellation
const controller = new AbortController();
const { signal } = controller;
// Cancel after 5 seconds
const timeoutId = setTimeout(() => controller.abort(), 5000);
try {
const res = await fetch('/api/slow', { signal });
const data = await res.json();
return data;
} catch (err) {
if (err.name === 'AbortError') {
console.log('Request was cancelled');
} else {
throw err;
}
} finally {
clearTimeout(timeoutId); // always clear the timer, success or failure
}
React pattern — cancel on component unmount:
useEffect(() => {
const controller = new AbortController();
async function load() {
try {
const res = await fetch('/api/data', { signal: controller.signal });
const data = await res.json();
setData(data); // safe — won't run if aborted
} catch (e) {
if (e.name !== 'AbortError') setError(e);
}
}
load();
return () => controller.abort(); // cleanup on unmount
}, []);
Cancellable async operations (custom):
async function processWithCancel(items, signal) {
for (const item of items) {
if (signal.aborted) throw new DOMException('Aborted', 'AbortError');
await processItem(item);
}
}
// Using AbortSignal.timeout() — newer DOM API (WHATWG, not ECMAScript)
const res = await fetch('/api', {
signal: AbortSignal.timeout(3000) // auto-cancel after 3s
});
A classic React bug is the warning "Can't perform a React state update on an unmounted component". Aborting in the useEffect cleanup prevents it. AbortSignal.timeout(ms) creates a one-shot signal without needing a controller — cleaner for simple fetch timeouts.
Topic D — Generators & Async Iteration
How do function* and yield work?
A generator function returns an iterator. Each call to .next() runs the function body until the next yield, pauses, and returns { value, done }. The function's local state is fully preserved between pauses.
A generator is a book you read a page at a time. Each yield is a page break — you put a bookmark there, close the book, and hand back the current page. Next time you call .next() you open to exactly where you left off.
function* counter(start = 0) {
while (true) {
const reset = yield start; // pause — value sent IN via .next(val)
start = reset ?? start + 1;
}
}
const gen = counter(1);
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next(10).value); // 10 — reset to 10
console.log(gen.next().value); // 11
Finite generator — iterable sequences:
function* range(start, end, step = 1) {
for (let i = start; i < end; i += step) {
yield i;
}
}
// Works with for...of, spread, destructuring
console.log([...range(0, 5)]); // [0, 1, 2, 3, 4]
console.log([...range(0, 10, 2)]); // [0, 2, 4, 6, 8]
for (const n of range(1, 4)) {
console.log(n); // 1, 2, 3
}
Lazy infinite sequences — memory-efficient:
function* fibonacci() {
let [a, b] = [0, 1];
while (true) {
yield a;
[a, b] = [b, a + b];
}
}
function take(n, iter) {
const result = [];
for (const val of iter) {
result.push(val);
if (result.length === n) break;
}
return result;
}
console.log(take(8, fibonacci())); // [0,1,1,2,3,5,8,13]
// Only computes values on demand — no array pre-allocated
yield* — delegate to another iterable:
function* concat(...iterables) {
for (const it of iterables) {
yield* it; // delegate — yields each item from it
}
}
console.log([...concat([1,2], [3,4], [5])]); // [1,2,3,4,5]
Any object with a [Symbol.iterator]() method returning an object with .next() is iterable. Generators are how custom iterables are usually built. They also power redux-saga (saga effects are yielded Promises that the middleware resolves) and are conceptually the foundation async/await is built on.
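A generator method is the shortest way to make a class iterable — no hand-written { value, done } bookkeeping. A minimal sketch (Playlist is a hypothetical name):

```javascript
// A generator as [Symbol.iterator] — the class works with every built-in consumer
class Playlist {
  constructor(...tracks) { this.tracks = tracks; }
  *[Symbol.iterator]() {
    yield* this.tracks; // delegate straight to the backing array
  }
}

const pl = new Playlist('intro', 'verse', 'outro');
console.log([...pl]);    // ['intro', 'verse', 'outro'] — spread works
const [firstTrack] = pl; // 'intro' — so does destructuring
for (const t of pl) console.log(t); // and for...of
```

Compare this to the hand-rolled next() object later in the Symbols section — same protocol, a fraction of the code.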
What are async generators and for await...of? When would you use them?
An async generator (async function*) can await inside its body, and each .next() call returns a Promise of { value, done }. for await...of consumes async iterables — it awaits each result before continuing. Perfect for streaming data, paginated APIs, and event streams.
// Async generator — yields values over time
async function* paginate(url) {
let nextUrl = url;
while (nextUrl) {
const res = await fetch(nextUrl);
const data = await res.json();
yield data.items; // pause after each page
nextUrl = data.nextPage ?? null;
}
}
// Consumer — processes pages as they arrive
for await (const page of paginate('/api/items?page=1')) {
console.log('Got page:', page.length, 'items');
// No need to buffer all pages — processes one at a time
}
// Reading a Node.js Readable stream with for await...of
async function readStream(stream) {
const chunks = [];
for await (const chunk of stream) {
chunks.push(chunk);
}
return Buffer.concat(chunks).toString();
}
// Rate-limited async generator — process with delays
async function* withRateLimit(items, delayMs) {
for (const item of items) {
yield await processItem(item);
await new Promise(r => setTimeout(r, delayMs));
}
}
for await (const result of withRateLimit(urls, 100)) {
console.log(result);
}
Custom async iterable object:
const asyncRange = {
[Symbol.asyncIterator]() { // called with no arguments by for await...of
let i = 0;
const end = 5;
return {
next() {
return i < end
? Promise.resolve({ value: i++, done: false })
: Promise.resolve({ value: undefined, done: true });
}
};
}
};
for await (const n of asyncRange) {
console.log(n); // 0 1 2 3 4
}
for await...of vs Promise.all: for await processes items sequentially (one at a time), which is correct for rate-limited APIs or when order matters. Promise.all is parallel (all at once). Async generators are the right tool for streaming scenarios — you start processing the first chunk before the last chunk even arrives, reducing time-to-first-result.
Retry with exponential backoff:
async function retry(fn, { attempts = 3, baseDelay = 1000 } = {}) {
for (let i = 0; i < attempts; i++) {
try {
return await fn();
} catch (err) {
if (i === attempts - 1) throw err;
const delay = baseDelay * 2 ** i + Math.random() * 100; // jitter
console.log(`Attempt ${i+1} failed, retrying in ${delay|0}ms`);
await new Promise(r => setTimeout(r, delay));
}
}
}
const data = await retry(() => fetch('/flaky-api').then(r => r.json()));
Async queue — limit concurrency:
async function mapConcurrent(items, fn, limit = 3) {
const results = new Array(items.length);
const executing = new Set();
for (let i = 0; i < items.length; i++) {
const p = fn(items[i], i).then(r => { results[i] = r; });
executing.add(p);
p.finally(() => executing.delete(p));
if (executing.size >= limit) {
await Promise.race(executing); // wait for one slot to free
}
}
await Promise.all(executing);
return results;
}
// Process 100 URLs, max 3 at a time
const results = await mapConcurrent(urls, url => fetch(url).then(r => r.json()), 3);
Debounce and throttle (async-aware):
// Debounce — fires AFTER the silence period
function debounce(fn, ms) {
let timer;
return function(...args) {
clearTimeout(timer);
// Caveat: promises from superseded calls remain pending forever
return new Promise(resolve => {
timer = setTimeout(() => resolve(fn.apply(this, args)), ms);
});
};
}
// Throttle — fires AT MOST once per interval
function throttle(fn, ms) {
let last = 0;
return function(...args) {
const now = Date.now();
if (now - last >= ms) {
last = now;
return fn.apply(this, args);
}
};
}
const search = debounce(async (query) => {
const res = await fetch(`/search?q=${query}`);
return res.json();
}, 300);
input.addEventListener('input', (e) => search(e.target.value));
Concurrency limiting (as in mapConcurrent) is critical for real-world APIs — hammering a server with 1000 simultaneous requests invites rate-limiting or crashes; batching to 3-5 concurrent requests is the usual fix.
Web Workers run JavaScript in a separate OS thread — no shared memory, communication only via postMessage. The main thread stays responsive while the worker crunches data.
// ── main.js ──
const worker = new Worker('worker.js');
worker.postMessage({ type: 'COMPUTE', data: largeArray });
worker.onmessage = (event) => {
console.log('Result:', event.data.result);
};
worker.onerror = (err) => {
console.error('Worker error:', err.message);
};
// ── worker.js ──
self.onmessage = (event) => {
const { type, data } = event.data;
if (type === 'COMPUTE') {
// Heavy computation here — doesn't block main thread
const result = data.reduce((sum, n) => sum + n * n, 0);
self.postMessage({ result });
}
};
Promise wrapper for cleaner worker API:
class WorkerPool {
#worker;
#pending = new Map();
#id = 0;
constructor(url) {
this.#worker = new Worker(url);
this.#worker.onmessage = ({ data }) => {
const { id, result, error } = data;
const { resolve, reject } = this.#pending.get(id);
this.#pending.delete(id);
error ? reject(new Error(error)) : resolve(result);
};
}
run(type, payload) {
return new Promise((resolve, reject) => {
const id = ++this.#id;
this.#pending.set(id, { resolve, reject });
this.#worker.postMessage({ id, type, payload });
});
}
}
const pool = new WorkerPool('worker.js');
const result = await pool.run('COMPUTE', largeArray);
console.log('Result:', result);
Transferable objects — zero-copy transfer for large data (ArrayBuffer, ImageBitmap):
const buffer = new ArrayBuffer(1024 * 1024); // 1 MB
// Transfer ownership — buffer is neutered in main thread (no copy)
worker.postMessage({ buffer }, [buffer]);
// buffer.byteLength === 0 now in main thread
Workers have no access to the DOM, window, or document. Use them for: image/video processing, crypto, compression, large data sorting, WASM computation. For shared memory between workers use SharedArrayBuffer + Atomics (requires COOP/COEP headers for security). Inline workers (via URL.createObjectURL(new Blob([...]))) avoid needing a separate file — useful for libraries.
Mental Model — ES6+ Is a Better Vocabulary
ES6 (2015) and beyond didn't change how JavaScript works fundamentally — it gave us a richer vocabulary to express intent more clearly, avoid common bugs, and write less boilerplate. Each feature solves a specific pain point from the ES5 era.
| Feature cluster | Pain point solved |
|---|---|
| Destructuring, spread, rest | Verbose property extraction and argument handling |
| Map / Set / WeakMap / WeakSet | Objects as dictionaries have prototype pollution; arrays have O(n) lookup |
| Optional chaining ?., nullish coalescing ?? | Verbose null-guard chains (a && a.b && a.b.c) |
| Tagged template literals | String interpolation had no sanitisation hooks |
| Symbols + well-known symbols | No way to add non-colliding metadata to objects |
| ES modules | Global scope pollution, no static analysis, no tree-shaking |
Topic A — Destructuring, Spread & Rest
Array destructuring — position-based extraction:
const [a, b, c] = [1, 2, 3];
console.log(a, b, c); // 1 2 3
// Skip elements with commas
const [first, , third] = [10, 20, 30];
// Default values
const [x = 0, y = 0, z = 0] = [5, 10];
console.log(x, y, z); // 5 10 0
// Rest element — collects remainder
const [head, ...tail] = [1, 2, 3, 4];
console.log(head, tail); // 1 [2, 3, 4]
// Swap variables — no temp needed
let p = 1, q = 2;
[p, q] = [q, p];
console.log(p, q); // 2 1
Object destructuring — name-based extraction:
const user = { name: 'Alice', age: 30, role: 'admin' };
// Basic
const { name, age } = user;
// Rename: property : localName
const { name: userName, age: userAge } = user;
console.log(userName); // Alice
// Default values
const { role = 'guest', theme = 'light' } = user;
console.log(theme); // light (default — not in user)
// Rest in objects
const { name: n, ...rest } = user;
console.log(rest); // { age: 30, role: 'admin' }
// Nested destructuring
const { address: { city, zip = '00000' } } = {
address: { city: 'NYC' }
};
console.log(city, zip); // NYC 00000
Function parameter destructuring:
// Destructure in the parameter list — common in React
function render({ title, body, footer = '© 2025' }) {
return `<h1>${title}</h1>${body}<footer>${footer}</footer>`;
}
// Array return — multiple values
function minMax(arr) {
return [Math.min(...arr), Math.max(...arr)];
}
const [min, max] = minMax([3, 1, 4, 1, 5]);
console.log(min, max); // 1 5
// Iterating map entries
const map = new Map([['a', 1], ['b', 2]]);
for (const [key, value] of map) {
console.log(key, value);
}
Gotcha: only undefined triggers defaults; null does NOT — const { x = 5 } = { x: null } gives x === null, not 5. React's useState return value uses array destructuring intentionally — you get to name the state variable whatever you like.
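The default-value rule is worth verifying case by case — missing and undefined trigger the default, every other value (including null and 0) passes through:

```javascript
// Defaults apply ONLY when the value is undefined
const { a = 5 } = {};               // a === 5    (missing → undefined → default)
const { b = 5 } = { b: undefined }; // b === 5    (explicitly undefined → default)
const { c = 5 } = { c: null };      // c === null (default NOT applied)
const { d = 5 } = { d: 0 };         // d === 0    (falsy but defined — kept)
```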
Same syntax (...), opposite direction: Rest collects multiple values into one; Spread expands one value into multiple positions.
// ── REST — collects into array/object ──
// Function rest params — replaces arguments object
function sum(...nums) { // rest must be last param
return nums.reduce((a, b) => a + b, 0);
}
console.log(sum(1, 2, 3, 4)); // 10
// Object rest — collect remaining properties
const { a, b, ...others } = { a: 1, b: 2, c: 3, d: 4 };
console.log(others); // { c: 3, d: 4 }
// Array rest — collect remaining elements
const [first, second, ...rest] = [1, 2, 3, 4, 5];
console.log(rest); // [3, 4, 5]
// ── SPREAD — expands into individual items ──
// Array spread — clone / merge
const arr1 = [1, 2];
const arr2 = [3, 4];
const merged = [...arr1, ...arr2, 5]; // [1,2,3,4,5]
const clone = [...arr1]; // shallow copy
// Object spread — clone / override / merge
const defaults = { theme: 'dark', lang: 'en', fontSize: 14 };
const userPrefs = { theme: 'light', fontSize: 16 };
const config = { ...defaults, ...userPrefs }; // later wins
// { theme:'light', lang:'en', fontSize:16 }
// Spread in function calls — replaces .apply
const numbers = [5, 2, 8, 1];
console.log(Math.max(...numbers)); // 8
// Convert array-like / iterable to array
const divs = [...document.querySelectorAll('div')];
const chars = [...'hello']; // ['h','e','l','l','o']
const setToArr = [...new Set([1, 2, 2, 3])]; // [1,2,3]
Immutable update patterns (React/Redux style):
// Update one field without mutation
const user = { name: 'Alice', age: 30, role: 'user' };
const promoted = { ...user, role: 'admin' }; // user unchanged
// Update nested object (shallow merge only!)
const state = { user: { name: 'Alice', score: 0 }, loading: false };
const next = { ...state, user: { ...state.user, score: 10 } };
// Insert into array at index
const arr = [1, 2, 4, 5];
const fixed = [...arr.slice(0, 2), 3, ...arr.slice(2)]; // [1,2,3,4,5]
{ ...obj } is equivalent to Object.assign({}, obj) for own enumerable properties. The last spread wins for conflicting keys. Array spread works on any iterable (strings, Sets, Maps, generators); object spread copies own enumerable properties of any value — { ...[1, 2] } gives { 0: 1, 1: 2 }, not an array.
Topic B — Map, Set, WeakMap, WeakSet
Map and Set instead of plain objects and arrays?
Map — a key-value store where any value (including objects and functions) can be a key. Unlike plain objects, Maps preserve insertion order, have a proper .size, and avoid prototype pollution.
const map = new Map();
// Keys can be ANYTHING
const objKey = { id: 1 };
map.set(objKey, 'user data');
map.set(42, 'number key');
map.set(true, 'boolean key');
console.log(map.get(objKey)); // 'user data'
console.log(map.size); // 3
console.log(map.has(42)); // true
// Iterate in insertion order
for (const [key, value] of map) {
console.log(key, '→', value);
}
// From entries array
const m2 = new Map([['a', 1], ['b', 2]]);
// Convert to plain object (string keys only)
const obj = Object.fromEntries(m2); // { a: 1, b: 2 }
// And back
const m3 = new Map(Object.entries(obj));
Set — ordered collection of unique values. Uses SameValueZero equality.
const set = new Set([1, 2, 2, 3, 3, 3]);
console.log([...set]); // [1, 2, 3] — duplicates removed
console.log(set.size); // 3
console.log(set.has(2)); // true — O(1) lookup
// Deduplication — most common use case
const unique = [...new Set(array)];
// Set operations (ES2025 added native union/intersection/difference; below is the portable way)
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);
const union = new Set([...a, ...b]); // {1,2,3,4}
const intersection = new Set([...a].filter(x => b.has(x))); // {2,3}
const difference = new Set([...a].filter(x => !b.has(x)));// {1}
| Aspect | Plain Object | Map |
|---|---|---|
| Key types | String / Symbol only | Any value |
| Prototype pollution | Yes (toString, __proto__) | No |
| Size | Object.keys(o).length | map.size |
| Iteration order | Integer-like keys first, then strings in insertion order | Guaranteed insertion order |
| JSON serialization | Native JSON.stringify | Must convert first |
| Use when | Known string keys, config objects | Dynamic keys, object keys, frequent add/delete |
Set lookup (has()) is O(1) — backed by a hash table — vs Array includes() which is O(n). Use Set when you're checking membership frequently. Map vs plain object: if keys are user-provided strings, use Map or Object.create(null) to avoid prototype pollution (a user could send a key like "__proto__" or "constructor").
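The prototype-pollution point can be shown concretely — Map and Object.create(null) treat keys like __proto__ and toString as plain data, while a plain object does not (a minimal sketch):

```javascript
// Plain objects inherit keys and treat '__proto__' specially
const plain = {};
console.log('toString' in plain);      // true — inherited "hit" you never added

// Map: every key is plain data, nothing inherited
const safeMap = new Map();
safeMap.set('__proto__', 'just data');
console.log(safeMap.get('__proto__')); // 'just data'
console.log(safeMap.has('toString'));  // false

// Object.create(null): a dictionary with no prototype at all
const dict = Object.create(null);
dict.__proto__ = 'also just data';     // just a normal own property here
console.log('toString' in dict);       // false
```

Either option is safe for user-provided keys; Map also gives you .size and guaranteed iteration order.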
WeakMap and WeakSet? When do they prevent memory leaks?
WeakMap holds weak references to its keys — objects only. When the key object has no other references, the garbage collector can reclaim it and the entry is automatically removed. You cannot iterate a WeakMap or check its size.
const wm = new WeakMap();
let obj = { id: 1 };
wm.set(obj, 'metadata');
console.log(wm.get(obj)); // 'metadata'
console.log(wm.has(obj)); // true
obj = null; // key object is eligible for GC
// WeakMap entry is automatically removed — no memory leak
Real use case 1 — private data per instance:
const _private = new WeakMap();
class Person {
constructor(name, secret) {
_private.set(this, { secret });
this.name = name;
}
revealSecret() {
return _private.get(this).secret;
}
}
const p = new Person('Alice', 'likes cats');
console.log(p.revealSecret()); // 'likes cats'
// When p is GC'd, the WeakMap entry is also freed
Real use case 2 — DOM node metadata (no leak on removal):
const nodeData = new WeakMap();
function attachTooltip(el, text) {
nodeData.set(el, { tooltip: text, created: Date.now() });
}
function getTooltip(el) {
return nodeData.get(el)?.tooltip;
}
const btn = document.querySelector('button');
attachTooltip(btn, 'Click me!');
// When btn is removed from DOM and dereferenced,
// the WeakMap entry is GC'd automatically — no cleanup needed
WeakSet — same idea for sets: stores weak references to objects, useful for tracking "have I seen this object?"
const seen = new WeakSet();
function processOnce(obj) {
if (seen.has(obj)) { console.log('Already processed'); return; }
seen.add(obj);
// process obj...
}
// Circular reference detection in serializer
function safeStringify(obj) {
const visited = new WeakSet();
return JSON.stringify(obj, (key, value) => {
if (typeof value === 'object' && value !== null) {
if (visited.has(value)) return '[Circular]';
visited.add(value);
}
return value;
});
}
Neither WeakMap nor WeakSet exposes iteration, .forEach(), or .size. You can't iterate them because the GC can remove entries at any time. That's the trade-off for automatic cleanup. They're the right choice when the lifetime of the data should match the lifetime of the key object — not when you need to enumerate or count entries.
Topic C — Modern Operators & Syntax
Explain optional chaining ?., nullish coalescing ??, and the logical assignment operators.
Optional chaining ?. — short-circuits to undefined if the left side is null or undefined, instead of throwing a TypeError.
const user = { address: { city: 'NYC' } };
// Without ?. — verbose and error-prone
const zip1 = user && user.address && user.address.zip;
// With ?. — clean short-circuit
const zip2 = user?.address?.zip; // undefined (not an error)
const city = user?.address?.city; // 'NYC'
// Optional method call
const greeting = user?.getName?.(); // undefined — getName doesn't exist, no TypeError
// Optional bracket notation
const key = 'address';
const val = user?.[key]?.city; // 'NYC'
// Optional with arrays
const first = arr?.[0]; // undefined if arr is null/undefined
Nullish coalescing ?? — falls back only when left side is null or undefined (not other falsy values like 0 or '').
// The problem with ||
const count = 0;
console.log(count || 10); // 10 — WRONG! 0 is a valid value
console.log(count ?? 10); // 0 — correct, ?? only fires for null/undefined
const name = '';
console.log(name || 'anonymous'); // 'anonymous' — may be wrong
console.log(name ?? 'anonymous'); // '' — preserves empty string
// Chaining ?? with ?.
const theme = user?.prefs?.theme ?? 'dark';
Logical assignment operators (ES2021):
// ??= — assign only if null or undefined
let config = null;
config ??= { theme: 'dark' }; // config = { theme:'dark' }
config ??= { theme: 'light' }; // no-op — config already set
// ||= — assign only if falsy
let cache = '';
cache ||= 'default'; // cache = 'default' ('' is falsy)
// &&= — assign only if truthy
let user2 = { name: 'Alice' };
user2 &&= { ...user2, verified: true }; // only updates if user2 is truthy
// Practical: lazy initialise cache entries
const memo = {};
function getCache(key) {
memo[key] ??= computeExpensive(key); // only compute if missing
return memo[key];
}
?? vs || is a classic interview question. || checks for falsy (0, '', false, null, undefined, NaN). ?? checks only for nullish (null, undefined). Use ?? when 0 or empty string are valid values — which is almost always the case for user-controlled data.
A tagged template is a function call where the function receives the template's static strings and interpolated values as separate arguments — giving you full control over how the string is assembled.
// Tag function signature
function tag(strings, ...values) {
// strings: array of literal string parts
// values: array of interpolated expressions
console.log(strings); // ['Hello ', ', you are ', ' years old']
console.log(values); // ['Alice', 30]
return strings.reduce((result, str, i) =>
result + str + (values[i] ?? ''), ''
);
}
const name = 'Alice', age = 30;
tag`Hello ${name}, you are ${age} years old`;
SQL sanitization — prevent injection:
function sql(strings, ...values) {
const params = [];
const query = strings.reduce((q, str, i) => {
if (i < values.length) {
params.push(values[i]); // collect as parameter, not inline
return q + str + '$' + params.length;
}
return q + str;
}, '');
return { query, params };
}
const userId = "1; DROP TABLE users"; // injection attempt
const { query, params } = sql`SELECT * FROM users WHERE id = ${userId}`;
// query: "SELECT * FROM users WHERE id = $1"
// params: ["1; DROP TABLE users"] — passed as parameter, not inline SQL
HTML sanitization:
function html(strings, ...values) {
const escape = (s) => String(s)
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;');
return strings.reduce((result, str, i) =>
result + str + (i < values.length ? escape(values[i]) : ''), ''
);
}
const username = '<script>alert(1)</script>';
const safe = html`<p>Welcome, ${username}!</p>`;
// <p>Welcome, &lt;script&gt;alert(1)&lt;/script&gt;!</p> — rendered as text, not executed
Other real-world uses: css tag (styled-components), gql tag (GraphQL queries), i18n tag (translations with pluralization), String.raw (raw escape sequences).
// String.raw — built-in tag, no escape processing
console.log(String.raw`Hello\nWorld`); // Hello\nWorld (literal backslash-n)
console.log(`Hello\nWorld`); // Hello
// World
Libraries like lit-html use tagged templates to efficiently diff and update only the changed DOM parts.
Topic D — Symbols & Iterators
Every Symbol() call creates a unique, immutable primitive. No two symbols are equal. They are perfect for collision-free object keys and for hooking into JS engine protocols.
const s1 = Symbol('id');
const s2 = Symbol('id');
console.log(s1 === s2); // false — always unique
console.log(typeof s1); // 'symbol'
// Symbol as object key — won't clash with string keys
const ID = Symbol('id');
const user = { name: 'Alice', [ID]: 42 };
console.log(user[ID]); // 42
console.log(user.id); // undefined — different key
// Symbols are hidden from most enumeration
console.log(Object.keys(user)); // ['name']
console.log(Object.getOwnPropertySymbols(user)); // [Symbol(id)]
console.log(Reflect.ownKeys(user)); // ['name', Symbol(id)]
// Symbol.for — global registry (shared across realms)
const g1 = Symbol.for('app.id');
const g2 = Symbol.for('app.id');
console.log(g1 === g2); // true — same entry in global registry
Well-known Symbols — hooks into JS engine behaviour:
// Symbol.iterator — make any object iterable
class Range {
constructor(start, end) { this.start = start; this.end = end; }
[Symbol.iterator]() {
let current = this.start;
const end = this.end;
return {
next() {
return current <= end
? { value: current++, done: false }
: { value: undefined, done: true };
}
};
}
}
const r = new Range(1, 5);
console.log([...r]); // [1, 2, 3, 4, 5]
for (const n of r) console.log(n); // 1 2 3 4 5
const [a, b] = r; // destructuring works too
| Well-known Symbol | Controls |
|---|---|
| Symbol.iterator | for...of, spread, destructuring |
| Symbol.asyncIterator | for await...of |
| Symbol.toPrimitive | Type coercion (+obj, \`${obj}\`) |
| Symbol.hasInstance | instanceof operator |
| Symbol.toStringTag | Object.prototype.toString output |
| Symbol.species | Constructor used for derived objects |
// Symbol.toPrimitive — control coercion
class Money {
constructor(amount, currency) {
this.amount = amount;
this.currency = currency;
}
[Symbol.toPrimitive](hint) {
if (hint === 'number') return this.amount;
if (hint === 'string') return `${this.amount} ${this.currency}`;
return this.amount; // default hint
}
}
const price = new Money(42, 'USD');
console.log(+price); // 42 — number hint
console.log(`${price}`); // 42 USD — string hint
console.log(price + 8); // 50 — default hint
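Two more well-known symbols from the table can be sketched the same way — Symbol.toStringTag customises Object.prototype.toString, and Symbol.hasInstance customises instanceof (the Collection class here is illustrative):

```javascript
class Collection {
  // Symbol.toStringTag — what Object.prototype.toString reports
  get [Symbol.toStringTag]() { return 'Collection'; }
  // Symbol.hasInstance — what `x instanceof Collection` actually checks
  static [Symbol.hasInstance](value) {
    return Array.isArray(value) || value instanceof Set;
  }
}

console.log(Object.prototype.toString.call(new Collection())); // '[object Collection]'
console.log([] instanceof Collection);        // true — custom hasInstance
console.log(new Set() instanceof Collection); // true
console.log(42 instanceof Collection);        // false
```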
Adding [Symbol.iterator] to a class gives it first-class support with every built-in consumer (for...of, spread, destructuring, Array.from). Symbol.for() is the only way to share a symbol across different module instances or iframes.
Topic E — ES Modules & New Built-in Methods
What are ES modules, dynamic import() and tree-shaking?
ES modules (import/export) are statically analysed at parse time — the dependency graph is known before any code runs. This enables tree-shaking and circular dependency detection.
// math.js — named and default exports
export const PI = 3.14159;
export function add(a, b) { return a + b; }
export function sub(a, b) { return a - b; }
export default function multiply(a, b) { return a * b; }
// app.js — import styles
import multiply from './math.js'; // default
import { add, PI } from './math.js'; // named
import { add as plus } from './math.js'; // renamed
import * as math from './math.js'; // namespace
Dynamic import() — lazy loading at runtime, returns a Promise:
// Code split — load module only when needed
async function loadChart() {
const { Chart } = await import('./chart.js');
return new Chart(data);
}
// Conditional loading — polyfills, locale data
if (!window.IntersectionObserver) {
await import('./polyfill-io.js');
}
// Route-based code splitting (React Router / Vite pattern)
const LazyPage = React.lazy(() => import('./pages/Dashboard.jsx'));
// Dynamic import with import.meta
console.log(import.meta.url); // current module URL
console.log(import.meta.env); // env variables (Vite)
ES modules vs CommonJS:
| | ES Modules | CommonJS (require) |
|---|---|---|
| Loading | Static (parse time) | Dynamic (run time) |
| Syntax | import / export | require() / module.exports |
| Top-level await | Yes | No |
| Tree-shaking | Yes — bundlers remove unused exports | No — whole module included |
| Circular deps | Bindings, not copies — usually OK | Partially resolved — common source of bugs |
| this at top level | undefined | module.exports |
Because the import graph is static, a bundler can see that sub() is never imported and remove it from the bundle. CommonJS's require() is a function call that could depend on runtime logic, so bundlers can't safely remove anything. Prefer named exports for tree-shaking; default exports hinder it.
Array methods:
// Array.at() — negative indexing (ES2022)
const arr = [1, 2, 3, 4];
console.log(arr.at(-1)); // 4 — last element
console.log(arr.at(-2)); // 3
// Array.flat() and Array.flatMap() (ES2019)
const nested = [1, [2, [3, [4]]]];
console.log(nested.flat()); // [1, 2, [3, [4]]] — depth 1
console.log(nested.flat(2)); // [1, 2, 3, [4]]
console.log(nested.flat(Infinity)); // [1, 2, 3, 4]
const sentences = ['Hello World', 'Foo Bar'];
console.log(sentences.flatMap(s => s.split(' ')));
// ['Hello', 'World', 'Foo', 'Bar']
// Array.fromAsync() — ES2024
const result = await Array.fromAsync(asyncGenerator());
// toSorted, toReversed, toSpliced, with — non-mutating copies (ES2023)
const nums = [3, 1, 4, 1, 5];
const sorted = nums.toSorted(); // [1,1,3,4,5] — nums unchanged
const reversed = nums.toReversed(); // [5,1,4,1,3] — nums unchanged
const updated = nums.with(2, 99); // [3,1,99,1,5] — nums unchanged
Object methods:
// Object.fromEntries() — ES2019, inverse of Object.entries()
const entries = [['a', 1], ['b', 2]];
const obj = Object.fromEntries(entries); // { a: 1, b: 2 }
// Transform object values
const prices = { apple: 1.5, banana: 0.5, cherry: 3 };
const doubled = Object.fromEntries(
Object.entries(prices).map(([k, v]) => [k, v * 2])
);
// { apple: 3, banana: 1, cherry: 6 }
// Object.hasOwn() — ES2022, replaces hasOwnProperty
console.log(Object.hasOwn({ a: 1 }, 'a')); // true
// Safer than obj.hasOwnProperty() which can be overridden
// structuredClone() — ES2022, deep clone
const original = { a: 1, b: { c: [1, 2, 3] } };
const clone = structuredClone(original);
clone.b.c.push(4);
console.log(original.b.c); // [1, 2, 3] — not affected
String methods:
// String.trimStart() / trimEnd() — ES2019
' hello '.trimStart(); // 'hello '
' hello '.trimEnd(); // ' hello'
// String.matchAll() — ES2020, returns all regex matches
const text = 'test1 test2 test3';
const matches = [...text.matchAll(/test(\d)/g)];
console.log(matches.map(m => m[1])); // ['1', '2', '3']
// String.replaceAll() — ES2021
'foo-bar-foo'.replaceAll('foo', 'baz'); // 'baz-bar-baz'
// String.at() — ES2022
'hello'.at(-1); // 'o'
structuredClone() is the modern deep-clone solution — it handles circular references, Maps, Sets, Dates, ArrayBuffers, and more. The old JSON hack (JSON.parse(JSON.stringify(x))) drops undefined and functions, turns Dates into strings, empties Maps and Sets, and throws on circular references. toSorted()/toReversed() are the non-mutating array alternatives you need in React state — they return new arrays instead of mutating in place.
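The difference is easy to demonstrate — a small sketch comparing the two on a state object with a Date, a Set, and a cycle:

```javascript
const state = { when: new Date(0), tags: new Set(['a', 'b']) };
state.self = state; // circular reference

// structuredClone handles all of it
const deep = structuredClone(state);
console.log(deep.when instanceof Date); // true — Date preserved
console.log(deep.tags instanceof Set);  // true — Set preserved
console.log(deep.self === deep);        // true — cycle reconstructed

// The JSON hack silently mangles the same data
const json = JSON.parse(JSON.stringify({ when: new Date(0), tags: new Set(['a']) }));
console.log(typeof json.when); // 'string' — Date collapsed to an ISO string
console.log(json.tags);        // {} — Set became an empty object
// JSON.stringify(state) itself would throw: "Converting circular structure"
```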
The iterator protocol requires an object with a next() method returning { value, done }. The iterable protocol requires a [Symbol.iterator]() method returning an iterator. An object can implement both to be self-iterating.
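A minimal self-iterating object, before the larger pipeline below — it implements next() (iterator protocol) and returns itself from [Symbol.iterator] (iterable protocol):

```javascript
// One object satisfying both protocols — note it's one-shot:
// a second [...spread] would find the counter already exhausted
function countdown(from) {
  return {
    current: from,
    next() { // iterator protocol: { value, done }
      return this.current > 0
        ? { value: this.current--, done: false }
        : { value: undefined, done: true };
    },
    [Symbol.iterator]() { return this; } // iterable protocol: returns an iterator
  };
}
console.log([...countdown(3)]); // [3, 2, 1]
```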
// Lazy pipeline — like a mini RxJS or Array methods that don't allocate
class LazySeq {
constructor(iterable) { this.source = iterable; }
map(fn) {
const source = this.source;
return new LazySeq({
[Symbol.iterator]() {
const iter = source[Symbol.iterator]();
return {
next() {
const { value, done } = iter.next();
return done ? { done } : { value: fn(value), done: false };
}
};
}
});
}
filter(pred) {
const source = this.source;
return new LazySeq({
[Symbol.iterator]() {
const iter = source[Symbol.iterator]();
return {
next() {
while (true) {
const { value, done } = iter.next();
if (done || pred(value)) return { value, done };
}
}
};
}
});
}
take(n) {
const source = this.source;
return new LazySeq({
[Symbol.iterator]() {
const iter = source[Symbol.iterator]();
let count = 0;
return {
next() {
if (count++ >= n) return { done: true };
return iter.next();
}
};
}
});
}
[Symbol.iterator]() { return this.source[Symbol.iterator](); }
toArray() { return [...this]; }
}
// Chain lazy operations — nothing evaluates until toArray()
function* naturals() { let n = 1; while (true) yield n++; }
const result = new LazySeq(naturals())
.filter(n => n % 2 === 0) // even numbers
.map(n => n * n) // squared
.take(5) // first 5
.toArray();
console.log(result); // [4, 16, 36, 64, 100]
// Only processes the first 10 natural numbers — never allocates the infinite list
Eager array methods (.map, .filter) create intermediate arrays at each step — three chained operations on 1M items allocate 3M temporary elements. A lazy iterator pipeline processes one item all the way through before fetching the next — O(1) memory regardless of input size. TC39's Iterator Helpers (now shipping in modern engines) bring .map(), .filter(), .take() etc. natively to iterators without third-party libraries.
Mental Model — Performance Is Measurement First
Never optimise without measuring. Guessing which code is slow wastes time and introduces bugs. The workflow is always: measure → identify bottleneck → fix → measure again. Most performance problems fall into two categories: too much CPU work, or too much memory allocation.
A doctor doesn't prescribe surgery because the patient looks unwell. They run tests first — blood work, scans, vitals. Only then do they target the specific problem. Optimising JS without profiling is surgery without diagnosis: you might fix something that wasn't broken and miss the actual bottleneck.
| Layer | Bottleneck type | Tool to diagnose |
|---|---|---|
| JS engine (V8) | Hot functions, deoptimisations | Chrome DevTools → Performance tab → CPU flame chart |
| Memory | Leaks, excessive allocation | Chrome DevTools → Memory tab → heap snapshot / allocation timeline |
| Network | Large bundles, waterfalls | Network tab, Lighthouse, WebPageTest |
| Rendering | Layout thrash, paint storms | Performance tab → frames, Layers panel |
Topic A — V8 Engine Internals
V8 optimises property access by assigning a hidden class (shape / map) to each object — a description of which properties exist and in what order. Objects with the same hidden class share compiled optimised code. If you change an object's shape, V8 must create a new hidden class and potentially deoptimise.
Imagine V8 has a pre-printed form for { x, y } objects. Every point gets the same form — reading obj.x is instant because V8 knows the offset. Now add a z field to one object — V8 must print a new form. If every object gets different fields at different times, no form can be shared and access becomes slow.
// ❌ Shape changes — creates new hidden class each time
function makePoint(x, y) {
const p = {};
p.x = x; // hidden class A: { x }
p.y = y; // hidden class B: { x, y }
return p;
}
// ✅ All properties declared upfront — single hidden class
function makePoint(x, y) {
return { x, y }; // hidden class created once, shared by all points
}
// ❌ Adding properties after construction = shape change
const obj = { a: 1 };
obj.b = 2; // new hidden class — deoptimise if done in a hot loop
// ❌ delete — punches a hole in the hidden class
delete obj.a; // worst for V8 — forces dictionary mode (slow hash map)
Inline Cache (IC) — V8 remembers the hidden class it saw at a call site. If the same class appears again, it uses the cached machine code. Caches go monomorphic (1 shape) → polymorphic (2–4 shapes) → megamorphic (5+ shapes, no caching).
// ✅ Monomorphic — all objects have the same shape
function getX(point) { return point.x; }
const points = [{ x: 1, y: 2 }, { x: 3, y: 4 }, { x: 5, y: 6 }];
points.forEach(getX); // IC stays monomorphic — fast
// ❌ Megamorphic — many different shapes at same call site
function getVal(obj) { return obj.val; }
getVal({ val: 1, a: 1 });
getVal({ val: 2, b: 2 });
getVal({ val: 3, c: 3 });
getVal({ val: 4, d: 4 });
getVal({ val: 5, e: 5 });
// After 5 different shapes — IC goes megamorphic, no fast path
V8-friendly rules:
| Do | Avoid |
|---|---|
| Initialise all properties in constructor / object literal | Adding properties dynamically after construction |
| Keep consistent property order across all instances | Different property sets per instance |
| Use typed arrays for number-heavy loops | Arrays mixing types (numbers + strings) |
| Keep arrays dense (no holes) | Sparse arrays: arr[1000] = 1 on a 3-element array |
| Avoid delete on hot objects | delete obj.key in performance-critical code |
V8 uses a generational garbage collector. Objects are first allocated in the young generation (nursery). Most objects die young — short-lived temporaries. Survivors are promoted to the old generation. This matches real allocation patterns and lets GC run cheaply on most objects most of the time.
New objects go to the nursery. The nursery is cleaned frequently and quickly — most toys left there get thrown away (short-lived objects). Toys that survive multiple cleanings graduate to the archive room (old generation). The archive is cleaned less often because it rarely has garbage, but when it is cleaned it takes longer.
// GC Phases:
// 1. Mark — traverse the object graph from roots (globals, stack)
// and mark every reachable object
// 2. Sweep — reclaim memory for all unmarked (unreachable) objects
// 3. Compact — optional, defragment old generation (can cause pauses)
// Reachability is what matters — not reference count
let a = { ref: null };
let b = { ref: a };
a.ref = b; // circular reference
a = null;
b = null;
// Both objects are now unreachable from roots — eligible for GC
// Old ref-counting GC (IE6) would leak this — mark-and-sweep handles it
What causes GC pressure (more GC = more pauses):
// ❌ Allocating lots of short-lived objects in hot loops
function processFrames() {
for (let i = 0; i < 60; i++) {
const vec = { x: i, y: i * 2 }; // new object every iteration
render(vec);
}
}
// ✅ Reuse objects — object pooling
const vecPool = { x: 0, y: 0 };
function processFrames() {
for (let i = 0; i < 60; i++) {
vecPool.x = i; vecPool.y = i * 2; // reuse, no allocation
render(vecPool);
}
}
// ❌ String concatenation in loops — creates N intermediate strings
let result = '';
for (let i = 0; i < 10000; i++) result += items[i];
// ✅ Array join — one allocation
const result2 = items.join('');
Typed Arrays — contiguous, fixed-type memory. GC-friendly for numerical work:
// Regular array — tagged element storage, can hold mixed types, more GC pressure
const data = new Array(1000000).fill(0);
// Typed array — raw memory, no boxing, fast SIMD-like access
const data2 = new Float64Array(1000000); // 8 bytes × 1M = 8MB flat
const data3 = new Int32Array(1000000); // 4 bytes × 1M = 4MB flat
// 3–10× faster for number crunching (FFT, physics, image processing)
for (let i = 0; i < data2.length; i++) {
data2[i] = Math.sin(i * 0.01);
}
Topic B — Memory Leaks
A memory leak is memory that is no longer needed but still reachable — so GC can't collect it. The heap grows without bound until the tab crashes.
1. Forgotten event listeners:
// ❌ Leak — handler holds reference to element and closure data
class Component {
constructor() {
this.data = new Array(100000).fill('leak'); // large data
document.addEventListener('click', this.handleClick);
}
handleClick = () => { console.log(this.data.length); };
destroy() {
// missing: document.removeEventListener('click', this.handleClick)
}
}
// Every new Component() + destroy() leaks this.data forever
// ✅ Fix — remove in cleanup
destroy() {
document.removeEventListener('click', this.handleClick);
}
2. Detached DOM nodes:
// ❌ Leak — JS variable keeps detached subtree alive
let detachedTree;
function createTree() {
const ul = document.createElement('ul');
for (let i = 0; i < 1000; i++) {
const li = document.createElement('li');
ul.appendChild(li);
}
detachedTree = ul; // removed from DOM but held in JS
document.body.removeChild(document.body.appendChild(ul));
}
// ✅ Fix — set detachedTree = null when no longer needed
3. Closures capturing large scope:
// ❌ Subtle leak — timer callback closes over large data
function setup() {
const largeData = new Array(1000000).fill('x');
const id = setInterval(() => {
console.log('tick');
// largeData is captured but never used — yet kept alive!
}, 1000);
return id;
}
// ✅ Fix — clearInterval(id) when done, or don't capture largeData
4. Growing caches with no eviction:
// ❌ Unbounded cache — grows forever
const cache = {};
function memoize(key, fn) {
if (!cache[key]) cache[key] = fn();
return cache[key];
}
// ✅ LRU cache — evict oldest when size exceeds limit
class LRUCache {
#cache = new Map();
#maxSize;
constructor(max) { this.#maxSize = max; }
get(key) {
if (!this.#cache.has(key)) return undefined;
const val = this.#cache.get(key);
this.#cache.delete(key); // move to end (most recently used)
this.#cache.set(key, val);
return val;
}
set(key, val) {
if (this.#cache.has(key)) this.#cache.delete(key);
else if (this.#cache.size >= this.#maxSize) {
this.#cache.delete(this.#cache.keys().next().value); // evict LRU
}
this.#cache.set(key, val);
}
}
const lru = new LRUCache(100); // max 100 entries
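The LRU implementation above leans on one guarantee: Map iterates keys in insertion order, so delete-then-set moves a key to the end and the first key is always the least recently used. A quick sketch of that ordering trick in isolation:

```javascript
const m = new Map([['a', 1], ['b', 2], ['c', 3]]);
m.delete('a');
m.set('a', 1); // 'a' is now the most recently used (moved to the end)
console.log([...m.keys()]); // ['b', 'c', 'a']
console.log(m.keys().next().value); // 'b' — least recently used, first to evict
```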
5. Global variable accumulation:
// ❌ Accidental global (no var/let/const in sloppy mode)
function process() {
result = computeHeavy(); // no declaration — creates window.result
}
// ✅ Always use 'use strict' — this throws ReferenceError instead
How to detect leaks in Chrome DevTools:
| Tool | How to use |
|---|---|
| Heap Snapshot | Memory tab → Take snapshot before/after action → compare "Retained Size" growth |
| Allocation Timeline | Memory tab → Record allocation timeline → look for bars that don't get GC'd |
| Detached DOM | Heap snapshot → filter by "Detached" → see nodes with live JS references |
| Performance monitor | More tools → Performance monitor → watch JS heap size over time |
WeakMap and WeakRef are the language-level tools for holding references without preventing GC.
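A sketch of the WeakMap-cache pattern (names like areaCache are illustrative) — the cache holds values keyed by objects without keeping those objects alive:

```javascript
// Cache derived data per object — entries become collectable with the object
const areaCache = new WeakMap();
let computes = 0;

function area(rect) {
  if (!areaCache.has(rect)) {
    computes++; // count actual computations to show the cache working
    areaCache.set(rect, rect.w * rect.h);
  }
  return areaCache.get(rect);
}

const r = { w: 3, h: 4 };
console.log(area(r), area(r)); // 12 12
console.log(computes);         // 1 — second call was a cache hit
// No manual eviction needed: once r is unreachable, the entry is GC-eligible
```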
Topic C — Measuring & Optimising Performance
performance.now() — high-resolution timer (microsecond precision), not affected by system clock changes.
// Basic timing
const start = performance.now();
doHeavyWork();
const end = performance.now();
console.log(`Took ${(end - start).toFixed(3)}ms`);
// console.time — quick and dirty
console.time('sort');
arr.sort();
console.timeEnd('sort'); // sort: 12.45ms
// performance.mark and measure — structured profiling
performance.mark('parse:start');
parseData();
performance.mark('parse:end');
performance.measure('parse', 'parse:start', 'parse:end');
const [entry] = performance.getEntriesByName('parse');
console.log(entry.duration); // ms with sub-millisecond precision
// Clear marks for clean re-runs
performance.clearMarks();
performance.clearMeasures();
Benchmarking correctly — avoid JIT warm-up skew:
function benchmark(name, fn, iterations = 10000) {
// Warm up — let JIT compile the function
for (let i = 0; i < 100; i++) fn();
const start = performance.now();
for (let i = 0; i < iterations; i++) fn();
const elapsed = performance.now() - start;
console.log(`${name}: ${elapsed.toFixed(2)}ms total, `
+ `${(elapsed/iterations*1000).toFixed(2)}µs per op`);
}
benchmark('array push', () => { const a = []; a.push(1); });
benchmark('array literal', () => { const a = [1]; });
// Warm-up ensures V8 has JIT-compiled fn before timing starts
PerformanceObserver — observe long tasks:
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.duration > 50) { // tasks > 50ms block the main thread
console.warn(`Long task: ${entry.duration.toFixed(1)}ms`);
}
}
});
observer.observe({ entryTypes: ['longtask'] });
Don't benchmark with console.log in the hot path or with the DevTools panel open — both add overhead. Never benchmark a single run — the JIT hasn't warmed up yet. Run thousands of iterations and take the median, not the mean (outliers from GC pauses skew means). Tools like Vitest bench and jsbenchmark.com handle this automatically.
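A hedged sketch of a median-of-runs benchmark combining both ideas — warm-up plus median (benchMedian and its options are illustrative names, not a library API):

```javascript
// Median of several timed runs — resistant to GC-pause outliers
function benchMedian(fn, { runs = 9, iterations = 10000 } = {}) {
  const times = [];
  for (let r = 0; r < runs; r++) {
    for (let i = 0; i < 100; i++) fn(); // warm-up — let the JIT settle
    const start = performance.now();
    for (let i = 0; i < iterations; i++) fn();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)]; // median, not mean
}

console.log(benchMedian(() => Math.sqrt(Math.random())).toFixed(3) + 'ms');
```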
Layout thrashing (forced synchronous layout) happens when you read a layout property (e.g. offsetWidth) immediately after writing a style — forcing the browser to flush its pending style recalculations synchronously before returning the value. In a loop this is catastrophic.
// ❌ Layout thrash — browser must recalculate layout for EACH read
const boxes = document.querySelectorAll('.box');
boxes.forEach(box => {
const width = box.offsetWidth; // READ — forces layout flush
box.style.width = width * 2 + 'px'; // WRITE — invalidates layout
});
// N boxes = N forced layouts
// ✅ Batch reads, then writes — one layout calculation
const widths = [...boxes].map(box => box.offsetWidth); // all reads
boxes.forEach((box, i) => { // all writes
box.style.width = widths[i] * 2 + 'px';
});
Properties that trigger layout (expensive reads):
// These properties FORCE layout recalculation when read after a write:
// offsetTop, offsetLeft, offsetWidth, offsetHeight, offsetParent
// scrollTop, scrollLeft, scrollWidth, scrollHeight
// clientTop, clientLeft, clientWidth, clientHeight
// getComputedStyle(), getBoundingClientRect()
// innerText (triggers layout), textContent (does not)
DOM batch update strategies:
// 1. DocumentFragment — build off-DOM, insert once
const frag = document.createDocumentFragment();
items.forEach(item => {
const li = document.createElement('li');
li.textContent = item;
frag.appendChild(li);
});
list.appendChild(frag); // one DOM mutation, one layout
// 2. innerHTML for large replacements
list.innerHTML = items.map(i => `<li>${i}</li>`).join('');
// 3. CSS class toggle instead of inline style changes
el.classList.add('active'); // batched by browser
el.style.cssText = '...'; // one reflow instead of per-property
// 4. will-change — hint browser to promote layer (use sparingly)
el.style.willChange = 'transform'; // element gets its own compositor layer
// 5. CSS containment — limit scope of layout recalculation
// el { contain: layout; } — changes inside don't affect outside
getBoundingClientRect() is the most commonly abused layout trigger — if you need it, call it once, cache the result, and use it for all subsequent writes.
Topic D — Code & Bundle Optimisation
A single large JS bundle delays Time-to-Interactive. Code splitting breaks it into smaller chunks loaded on demand — the initial page only loads what's needed to render the first screen.
// Dynamic import — split at route level (Vite / Webpack)
const routes = [
{
path: '/dashboard',
component: () => import('./pages/Dashboard.js') // lazy chunk
},
{
path: '/settings',
component: () => import('./pages/Settings.js') // lazy chunk
}
];
// React.lazy — component-level code splitting
const HeavyChart = React.lazy(() => import('./HeavyChart'));
// Prefetch — load in background when idle
// <link rel="prefetch" href="/dashboard.chunk.js">
// or in JS:
const prefetch = (url) => {
const link = document.createElement('link');
link.rel = 'prefetch';
link.href = url;
document.head.appendChild(link);
};
Bundle size analysis and techniques:
// Analyse bundle with rollup-plugin-visualizer or webpack-bundle-analyzer
// Tree-shaking — import only what you use
// ❌ Imports entire lodash (~70KB gzipped)
import _ from 'lodash';
_.cloneDeep(obj);
// ✅ Import only cloneDeep (~5KB)
import cloneDeep from 'lodash/cloneDeep';
// Or: import { cloneDeep } from 'lodash-es'; (ES module build)
// ✅ Native alternatives — zero cost
const deep = structuredClone(obj); // replaces _.cloneDeep
const flat = arr.flatMap(fn); // replaces _.flatMap
const uniq = [...new Set(arr)]; // replaces _.uniq
const grp = Object.groupBy(arr, fn); // replaces _.groupBy (ES2024)
Resource hints and loading strategy:
<!-- Critical JS — load immediately -->
<script src="main.js" defer></script>
<!-- Non-critical — load when idle -->
<script src="analytics.js" async></script>
<!-- Preload key resources -->
<link rel="preload" href="font.woff2" as="font" crossorigin>
// defer vs async:
// defer — execute after HTML parse, in order, before DOMContentLoaded
// async — execute as soon as downloaded, out of order, may block parse
Memoization caches the result of a function call keyed by its arguments. Subsequent calls with the same arguments return the cached result without recomputation. Only useful for pure functions — same input must always produce same output.
// Basic memoize
function memoize(fn) {
const cache = new Map();
return function(...args) {
const key = JSON.stringify(args);
if (cache.has(key)) return cache.get(key);
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
// Memoize with LRU eviction (bounded cache)
function memoizeLRU(fn, maxSize = 100) {
const cache = new Map();
return function(...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
const val = cache.get(key);
cache.delete(key); cache.set(key, val); // move to end = MRU
return val;
}
const result = fn.apply(this, args);
if (cache.size >= maxSize) {
cache.delete(cache.keys().next().value); // evict LRU
}
cache.set(key, result);
return result;
};
}
// Example — expensive Fibonacci (exponential without memo)
const fib = memoize(function fib(n) {
return n <= 1 ? n : fib(n - 1) + fib(n - 2);
});
console.log(fib(40)); // instant — memoized recursion
React.useMemo and React.useCallback — component-level memoization:
// useMemo — memoize computed value
const sortedItems = React.useMemo(
() => items.slice().sort((a, b) => a.price - b.price),
[items] // only re-sort when items changes
);
// useCallback — memoize function reference (stable for child props)
const handleClick = React.useCallback(
(id) => dispatch({ type: 'SELECT', id }),
[dispatch]
);
// When NOT to memoize:
// - Cheap computations (memoize overhead > computation cost)
// - Functions that get different args each render (cache miss rate ~100%)
// - When it makes code harder to reason about
JSON.stringify as cache key fails for: functions, Symbols, circular references, and class instances where order of keys matters. Production memoize libraries (like lodash's) use a custom serialiser or a WeakMap for object arguments. For React useMemo, the canonical rule: "measure first" — wrapping everything in useMemo actually slows down apps because of the bookkeeping overhead.
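A sketch of the WeakMap approach for object arguments (memoizeByRef is an illustrative name): identity, not serialisation, is the cache key, so the problems above disappear — and entries die with their keys.

```javascript
// Memoize a unary function over object arguments, keyed by reference
function memoizeByRef(fn) {
  const cache = new WeakMap();
  return (obj) => {
    if (!cache.has(obj)) cache.set(obj, fn(obj));
    return cache.get(obj);
  };
}

let calls = 0;
const total = memoizeByRef((order) => {
  calls++; // count real computations
  return order.items.reduce((sum, x) => sum + x, 0);
});

const order = { items: [1, 2, 3] };
console.log(total(order), total(order)); // 6 6
console.log(calls); // 1 — second call hit the cache
```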
Any JS task taking more than 50ms is a "long task" — it blocks user input and makes the page feel frozen. The browser can't render or respond to clicks while JS is running.
Strategy 1 — Yield to the event loop with scheduler:
// Yield control back to browser every N iterations
async function processLargeArray(items) {
const CHUNK_SIZE = 1000;
const results = [];
for (let i = 0; i < items.length; i++) {
results.push(process(items[i]));
// Yield every CHUNK_SIZE items — let browser paint + handle events
if (i % CHUNK_SIZE === 0) {
await new Promise(r => setTimeout(r, 0));
}
}
return results;
}
// Better: scheduler.yield() — Prioritised Task Scheduling API (Chrome)
async function processWithScheduler(items) {
const results = [];
for (let i = 0; i < items.length; i++) {
results.push(process(items[i]));
if (i % 1000 === 0 && typeof scheduler !== 'undefined') {
await scheduler.yield(); // yields with priority awareness
}
}
return results;
}
Strategy 2 — requestIdleCallback for background work:
// Run when browser is idle (no user interaction expected)
function processInIdle(items) {
let index = 0;
function processChunk(deadline) {
// deadline.timeRemaining() = ms of idle time left in this frame
while (index < items.length && deadline.timeRemaining() > 1) {
process(items[index++]);
}
if (index < items.length) {
requestIdleCallback(processChunk); // schedule next chunk
}
}
requestIdleCallback(processChunk);
}
Strategy 3 — Web Worker (Segment 6) for truly parallel work:
// Summary: when to use each strategy
//
// setTimeout(fn, 0) — simplest yield, macrotask queue
// scheduler.yield() — priority-aware yield (recent Chromium browsers)
// requestIdleCallback() — only when truly idle, non-urgent work
// requestAnimationFrame() — animation / rendering work
// Web Worker — CPU-heavy, parallelisable, no DOM access
The takeaway: never let a single task block the main thread for more than ~50ms — break the work into chunks and yield (scheduler.yield() / chunked setTimeout).
What is BigInt? What are TypedArrays and ArrayBuffer?
BigInt — arbitrary precision integers. Regular Number can only safely represent integers up to 2⁵³ − 1 (Number.MAX_SAFE_INTEGER). BigInt has no upper limit.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(9007199254740991 + 1); // 9007199254740992 ✅
console.log(9007199254740991 + 2); // 9007199254740992 ❌ — same!
const big = 9007199254740991n; // BigInt literal
console.log(big + 2n); // 9007199254740993n ✅
console.log(typeof big); // 'bigint'
// Use cases: crypto, 64-bit IDs from databases, financial calcs
const id = BigInt('18446744073709551615'); // 64-bit max
const hash = 0xFFFFFFFFFFFFFFFFn; // hex BigInt
// Cannot mix BigInt and Number directly
big + 1; // ❌ TypeError — must use BigInt(1) or 1n
big + 1n; // ✅
Number(big); // ✅ convert — loses precision for very large values
ArrayBuffer and TypedArrays — raw binary memory for high-performance I/O and computation:
// ArrayBuffer — raw byte buffer (not directly accessible)
const buf = new ArrayBuffer(16); // 16 bytes
// TypedArray views — typed access to the buffer
const i32 = new Int32Array(buf); // 4 Int32s (4 bytes each)
const u8 = new Uint8Array(buf); // 16 Uint8s (1 byte each)
i32[0] = 0x01020304;
console.log(u8[0], u8[1], u8[2], u8[3]); // 4, 3, 2, 1 (little-endian)
// DataView — mixed types at byte offsets
const view = new DataView(buf);
view.setUint8(0, 0xFF);
view.setFloat32(4, 3.14, true); // true = little-endian
console.log(view.getFloat32(4, true)); // ~3.14
// Available TypedArrays:
// Int8, Uint8, Uint8Clamped, Int16, Uint16, Int32, Uint32
// Float32, Float64, BigInt64, BigUint64
Real-world uses:
// Parse binary file format (PNG header)
async function isPNG(file) {
const buf = await file.arrayBuffer();
const view = new Uint8Array(buf, 0, 8);
const PNG_SIGNATURE = [137, 80, 78, 71, 13, 10, 26, 10];
return PNG_SIGNATURE.every((b, i) => view[i] === b);
}
// WebGL vertex buffer
const vertices = new Float32Array([
0.0, 0.5, // x, y of vertex 1
-0.5, -0.5, // x, y of vertex 2
0.5, -0.5, // x, y of vertex 3
]);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
Uint8ClampedArray is specifically designed for image pixel manipulation — it clamps values to [0, 255] automatically instead of overflowing.
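The clamping behaviour is easy to see side by side — Uint8ClampedArray saturates at the [0, 255] range while Uint8Array wraps modulo 256:

```javascript
const clamped = new Uint8ClampedArray(3);
clamped[0] = 300;   // above range — clamps to 255
clamped[1] = -20;   // below range — clamps to 0
clamped[2] = 127.5; // fractional — rounds (half to even) to 128
console.log([...clamped]); // [255, 0, 128]

const wrapped = new Uint8Array(1);
wrapped[0] = 300; // plain Uint8Array wraps: 300 mod 256
console.log(wrapped[0]); // 44
```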
Strict mode ('use strict') — opt into a safer, faster subset of JavaScript. Always enabled inside ES modules and class bodies.
'use strict'; // file-level, or first line inside a function
// What strict mode prevents:
x = 5; // ❌ ReferenceError — accidental globals
delete Object.prototype; // ❌ TypeError — can't delete non-configurable
function f(a, a) {} // ❌ SyntaxError — duplicate parameter names
with (obj) {} // ❌ SyntaxError — with statement banned
this; // undefined in non-method functions (not globalThis)
// Strict mode helps V8 optimise — predictable this, no eval scope leaks
Complete performance best practices checklist:
| Category | Best Practice |
|---|---|
| V8 / JIT | Initialise all object properties in the constructor — keep shapes stable |
| | Avoid delete on hot objects — forces dictionary mode |
| | Keep arrays dense and homogeneous in type (no holes, no mixed types) |
| Memory | Clear timers, listeners, and subscriptions in cleanup |
| | Use WeakMap/WeakRef for caches keyed by objects |
| | Bound cache sizes — use LRU eviction |
| DOM | Batch reads before writes — avoid layout thrash |
| | Use DocumentFragment for bulk DOM insertion |
| | Prefer classList toggle over manual style assignment |
| Async | Parallelise independent async work with Promise.all |
| | Chunk long tasks — yield every ~50ms |
| | Move CPU-heavy work to Web Workers |
| Bundles | Use named ES exports for tree-shaking |
| | Code-split at route/feature boundaries |
| | Replace heavy libraries with native equivalents |
| Measurement | Profile before optimising — measure, fix, measure again |