Google's V8 engine has an interpreter called Ignition, an optimizing compiler called TurboFan, and a garbage collector called Orinoco
- The JS Runtime Environment (JRE) consists of a JS engine (the ❤️ of the JRE), a set of APIs to connect with the outside environment, the event loop, the callback queue, the microtask queue, etc.
- Parsing - Code is broken down into tokens. In `let a = 7`, the pieces `let`, `a`, `=`, and `7` are all tokens. A syntax parser then takes the code and converts it into an AST (Abstract Syntax Tree), a JSON-like tree with keys such as type, start, end, body, etc. (looks like package.json, but for a line of JS code. Kinda unimportant). Check out astexplorer.net, which converts a line of code into its AST.
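The tokenizing step can be sketched with a toy scanner (purely illustrative; V8's real scanner handles strings, comments, regexes, Unicode, and much more — the function name `tokenize` and the tiny grammar here are invented for this example):

```javascript
// Toy tokenizer: splits a tiny statement like "let a = 7" into tokens.
// NOT how V8's scanner works -- just an illustration of "code -> tokens".
function tokenize(code) {
  const tokens = [];
  // keywords | identifiers | numbers | "=" | ";"
  const re = /\s*(let|const|var|[A-Za-z_$][\w$]*|\d+|=|;)/g;
  let match;
  while ((match = re.exec(code)) !== null) {
    tokens.push(match[1]);
  }
  return tokens;
}

console.log(tokenize("let a = 7")); // → [ 'let', 'a', '=', '7' ]
```

The parser would then consume this token stream to build the AST nodes (Program, VariableDeclaration, Identifier, Literal, and so on) that astexplorer.net shows.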
- Execution - Needs 2 components, i.e. the memory heap (the place where all memory is stored) and the call stack. There is also a garbage collector, which uses an algorithm called Mark and Sweep.
The Mark-and-Sweep algorithm is a garbage collection algorithm used to automatically reclaim memory that is no longer in use or referenced by the program. It is one of the fundamental memory-management techniques in languages that, unlike C or C++, do not rely on manual memory management. The algorithm consists of two main phases: marking and sweeping.
- The algorithm begins by traversing the entire memory heap, starting from the roots (e.g., global variables, local variables, and registers), and marking all the objects that are reachable and still in use.
- It identifies and marks objects that are reachable by following references or pointers from the root set.
- After the marking phase, the algorithm sweeps through the entire memory heap again, deallocating memory that was not marked in the previous phase.
- Unmarked objects are considered as garbage, meaning they are no longer accessible or needed by the program.
- The memory occupied by these unreferenced objects is then freed up and made available for future use.
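The two phases above can be sketched with a toy heap simulation (purely illustrative; a real collector like Orinoco operates on raw memory, and the function name `markAndSweep` and the heap layout here are invented for this example). Note that objects 2 and 3 form a cycle, yet are still collected, because reachability from the roots is what matters:

```javascript
// Toy mark-and-sweep over a simulated heap.
// Each "object" lists the indices of the heap objects it references.
function markAndSweep(heap, roots) {
  // Mark phase: traverse from the roots, marking everything reachable.
  const marked = new Set();
  const stack = [...roots];
  while (stack.length > 0) {
    const idx = stack.pop();
    if (marked.has(idx)) continue;
    marked.add(idx);
    stack.push(...heap[idx].refs);
  }
  // Sweep phase: anything not marked is garbage and gets reclaimed.
  const survivors = {};
  for (const idx of Object.keys(heap).map(Number)) {
    if (marked.has(idx)) survivors[idx] = heap[idx];
  }
  return survivors;
}

// Heap: root -> 0 -> 1, while 2 <-> 3 form an unreachable cycle.
const heap = {
  0: { refs: [1] },
  1: { refs: [] },
  2: { refs: [3] }, // 2 and 3 reference each other,
  3: { refs: [2] }, // but no root can reach them
};
console.log(Object.keys(markAndSweep(heap, [0]))); // → [ '0', '1' ]
```

Objects 2 and 3 are swept despite their mutual references, which is exactly why mark-and-sweep does not suffer from the circular-reference problem that plagues reference counting.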
The Mark-and-Sweep algorithm has some advantages and disadvantages:
- Advantage: it can handle circular references, where a group of objects reference each other in a cycle, as it relies on reachability rather than reference counting.
- Advantage: it reclaims all unreachable memory, preventing memory leaks.
- Disadvantage: it may introduce pause times during the collection process, as it requires temporarily stopping the execution of the program to perform the collection.
- Disadvantage: fragmentation can occur in the memory heap, since the reclaimed memory ends up laid out non-contiguously.
While the Mark-and-Sweep algorithm is a classic garbage collection technique, there are other algorithms like generational garbage collection and incremental garbage collection, each with its own set of trade-offs and optimizations.
Inline caching is a technique used in computer programming and language runtime environments, particularly in the context of dynamic dispatch and polymorphism. It aims to optimize method or function calls by caching information about the types of objects involved in the call, reducing the overhead associated with dynamic dispatch.
Here's how inline caching typically works:
- When a method or function is called with a specific set of object types, the runtime system records information about the types involved in the call.
- This information is cached or "inlined" directly in the code that performs the method dispatch.
- On subsequent calls to the same method or function with the same or compatible types, the cached information is used to skip the dynamic dispatch mechanism.
- Instead of going through the usual process of dynamic type resolution and method lookup, the cached information is directly used to determine the appropriate method to invoke.
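The steps above can be mimicked in plain JavaScript (a conceptual toy, not what the engine actually emits; the name `makeCachedCall` and the class `Dog` are invented for illustration, and the sketch ignores own properties, getters, and cache invalidation):

```javascript
// Toy monomorphic inline cache for one call site.
// It caches the (prototype, method) pair seen on the previous call;
// a hit skips the dynamic property lookup entirely.
function makeCachedCall(methodName) {
  let cachedProto = null;
  let cachedMethod = null;
  return function (obj) {
    const proto = Object.getPrototypeOf(obj);
    if (proto !== cachedProto) {
      // Cache miss: fall back to the normal dynamic lookup, then cache it.
      cachedProto = proto;
      cachedMethod = proto[methodName];
    }
    // Cache hit (or freshly filled cache): invoke without re-resolving.
    return cachedMethod.call(obj);
  };
}

class Dog { speak() { return "woof"; } }
const speak = makeCachedCall("speak");
speak(new Dog()); // first call: miss -> lookup -> fill cache
speak(new Dog()); // same shape: hit -> lookup skipped
```

In a real engine the "shape" check is against hidden classes (maps in V8) rather than prototypes, but the principle — validate the cached assumption cheaply, then skip the lookup — is the same.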
Inline caching is particularly effective in situations where the types involved in method calls do not change frequently. It reduces the overhead associated with dynamic dispatch and can significantly improve the performance of certain code paths.
Inline caching is closely related to the concept of polymorphic inline caching (PIC), where the caching mechanism is designed to handle multiple types efficiently. Polymorphic inline caching is an extension of inline caching that allows for handling a varying number of types associated with a method call.
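A polymorphic variant can be sketched by keeping a small list of (shape, method) entries at the call site instead of just one (again a toy; the name `makePolymorphicCall`, the entry limit, and the example classes are all invented for illustration):

```javascript
// Toy polymorphic inline cache: remembers up to maxEntries shapes
// at a single call site.
function makePolymorphicCall(methodName, maxEntries = 4) {
  const cache = []; // entries: { proto, method }
  return function (obj) {
    const proto = Object.getPrototypeOf(obj);
    for (const entry of cache) {
      if (entry.proto === proto) return entry.method.call(obj); // hit
    }
    // Miss: do the full lookup and remember it (until the cache is full).
    const method = proto[methodName];
    if (cache.length < maxEntries) cache.push({ proto, method });
    return method.call(obj);
  };
}

class Cat { name() { return "cat"; } }
class Bird { name() { return "bird"; } }
const getName = makePolymorphicCall("name");
getName(new Cat());  // miss -> cached
getName(new Bird()); // miss -> cached (call site now handles both shapes)
getName(new Cat());  // hit
```

Once the number of distinct shapes exceeds the small cache (a "megamorphic" call site), engines typically give up on caching and fall back to the generic lookup path.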
The effectiveness of inline caching depends on the program's usage patterns: call sites that see a stable set of types (monomorphic or mildly polymorphic) benefit the most, while call sites that see many different types gain little.