In this post, we are going to be talking about the linked list data structure in the language of "thank u, next" by Ariana Grande. If you haven't watched the piece of art that is the music video for the song, please pause and do so before we begin.

Linked lists are linear collections of data made up of nodes, where each node holds a value and one or more pointers. We're going to focus on singly linked lists, in which each node stores a value and a pointer to the next node. There are other kinds too, such as doubly linked and circular linked lists, but we'll stick to singly linked lists for now.
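A minimal sketch of that structure in JavaScript (the class and method names are mine, not from the post):

```javascript
// A singly linked list: each node stores a value and a reference
// to the next node, or null at the end of the list.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }
  // Walk to the last node and attach a new one after it.
  append(value) {
    const node = new Node(value);
    if (!this.head) {
      this.head = node;
      return;
    }
    let current = this.head;
    while (current.next) current = current.next;
    current.next = node;
  }
  // Collect the values in order, following the next pointers.
  toArray() {
    const out = [];
    for (let n = this.head; n; n = n.next) out.push(n.value);
    return out;
  }
}
```

Appending `'thank u'` and then `'next'` gives a two-node list whose `toArray()` is `['thank u', 'next']`.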

People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.

I can understand where that misconception comes from. The WebAssembly community group is really committed to backwards compatibility. This means that the WebAssembly that you create today will continue working on browsers into the future.

But that doesn’t mean that WebAssembly is feature complete. In fact, that’s far from the case. There are many features that are coming to WebAssembly which will fundamentally alter what you can do with WebAssembly.

WebAssembly is a performance-optimised virtual machine that was shipped in all four major browsers earlier this year. It is a nascent technology and the current version is very much an MVP (minimum viable product). This blog post takes a look at the WebAssembly roadmap and the features it might gain in the near future.

I’ll try to keep this blog post relatively high-level, so I’ll skip over some of the more technical proposals, instead focusing on what they might mean for languages that target WebAssembly.

A very brief WebAssembly intro

If you’ve not heard of WebAssembly before, I’ll give you a very brief introduction. The team behind it describe it as follows:

If you feel like Chrome's been using more RAM on the desktop client since the v67 release a month back, good news: you're not going crazy! Bad news: it definitely is using more RAM (again, on the desktop).

That's because of an advanced new security feature the Chromium team has rolled into the latest version of Google's infamously memory-hungry browser, known as Site Isolation. I'll spare you the technical details, but the short of it is that because of the growing number of memory-leak vulnerabilities exposed as part of the Spectre and Meltdown flaws, the Chrome team has decided to enable Site Isolation by default in Chrome on the desktop as of version 67.

Ever since I found the blog 'What Makes Them Click' [1] by Susan Weinschenk, I've been fascinated with her writing. I'm a natural analyst, much to some people's dismay, as I mentally poke and prod people till I really understand what drives them to behave the way they do. This has led me to study Product Design and Anthropology, and to now be employed in the UX industry, in an attempt to understand people and better their human experience. So when I read Susan's post on 'The Psychologist's View of UX Design' [2], I was fully engrossed in what she had to say on the matter.

The Most Important Rule in Product Design, and Possibly Life Management

There is one principle of organization that every human should adhere to, particularly people who design products. Day after day, I see companies break this rule, and it is 100% of the time to their detriment. In this article I will explain what that rule is, and what it means for product and service design. I'll also raise the possible implications of this phenomenon for organizational management, collaboration, and general performance. The psychological phenomenon I will be discussing in this article is known as Miller's Law. Rather than just tell you what Miller's Law is, I ask you to take part in this exercise for a more immersive lesson.

A few weeks ago we started a series aimed at digging deeper into JavaScript and how it actually works: we thought that by knowing the building blocks of JavaScript and how they come to play together you’ll be able to write better code and apps.

In this third post, we'll discuss another critical topic that's increasingly neglected by developers, due to the growing maturity and complexity of the programming languages in daily use: memory management. We'll also share a few tips we follow at SessionStack for handling memory leaks in JavaScript, as we need to make sure SessionStack neither leaks memory nor increases the memory consumption of the web apps it's integrated into.

This piece of code will leak memory and eventually crash your Node.js process or browser:
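The original snippet isn't included in this excerpt; one pattern that leaks in exactly this way (my reconstruction, not necessarily the author's code) is a recursive promise chain that never settles:

```javascript
// Each .then() creates a new pending promise that retains its parent,
// so the chain (and everything it closes over) grows without bound.
// Calling this will eventually exhaust the heap -- don't run it as-is.
function leakyLoop() {
  return Promise.resolve().then(leakyLoop);
}

// A bounded variant, safe to run, with the same chaining shape:
function boundedLoop(n) {
  if (n === 0) return Promise.resolve('done');
  return Promise.resolve().then(() => boundedLoop(n - 1));
}
```

The bounded version settles after `n` hops; the unbounded one never does, which is why nothing in the chain can ever be collected.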

It takes a while to fill gigabytes of heap, and Node is less conservative: the GC will eventually freeze the process trying to recover memory, so give it a few seconds. Chrome will crash faster.

This is equivalent to this async function:
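The async version isn't reproduced in this excerpt either; assuming the same shape as the promise loop, it would look something like:

```javascript
// Awaiting and then recursing: each invocation's promise stays pending
// until the next one settles, which never happens, so the chain of
// promises grows forever. Don't actually call this.
async function leakyAsyncLoop() {
  await Promise.resolve();
  return leakyAsyncLoop();
}
```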

Of course, if this loop were synchronous, not using Promise or async / await, then the process would blow up with a stack overflow error, because at the time of writing JavaScript does not do tail-call optimisation (at least until everybody implements ECMAScript 6 fully).

In this article we will explore common types of memory leaks in client-side JavaScript code. We will also learn how to use the Chrome Development Tools to find them. Read on!

Introduction

Memory leaks are a problem every developer has to face eventually. Even when working with memory-managed languages there are cases where memory can be leaked. Leaks are the cause of a whole class of problems: slowdowns, crashes, high latency, and even problems with other applications.

What are memory leaks?

In essence, a memory leak can be defined as memory that an application no longer requires but that, for some reason, is not returned to the operating system or the pool of free memory. Programming languages favor different ways of managing memory, and these can reduce the chance of leaking it. However, whether a certain piece of memory is unused or not is actually an undecidable problem.
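To make that concrete, here's a small hypothetical example (the names are made up): a cache with no eviction policy, one of the most common leaks in memory-managed languages.

```javascript
// Every entry remains reachable from `cache`, so even if the app
// never reads an old session again, the garbage collector cannot
// prove it is unused and can never reclaim it.
const cache = new Map();

function handleRequest(sessionId, payload) {
  cache.set(sessionId, payload); // grows without bound -- the "leak"
  return payload.length;
}
```

Evicting old entries, or keying a `WeakMap` on objects instead of strings, would let the collector do its job; as the text notes, the runtime alone can't know the entries are dead.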

A little while ago I started working on a 16-bit virtual machine running in Node. I'd been watching an amazing series of videos from Ben Eater about how he built a fully functional 8-bit computer from scratch, and wanted to see if I could put into practice some of the things I'd learned. So from the beginning I knew I wanted:

to design an assembly language

to build an assembler that compiled *.asm files to a binary format

to build a VM that would simulate memory, a CPU, and some form of I/O

The choice of 16 bits was somewhat arbitrary, but I wanted to allow for a non-trivial amount of computation to take place, and it seemed like a good balance between too small to be useful and too complex to manage. Keeping the complexity low was actually a really big concern for me – it's very easy with this kind of thing to get stuck on the really tiny details, optimising all kinds of things, which in my opinion takes away from the core concepts. Because of this I ended up making some design tradeoffs that forced more creative approaches.
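A hypothetical sketch of the VM's core pieces (the opcode names and values here are made up for illustration, not taken from the project): memory as a DataView over an ArrayBuffer, and a CPU stepping through a fetch-decode-execute loop.

```javascript
const MOV_LIT_R1 = 0x10; // move a 16-bit literal into register r1
const HALT = 0xff;       // stop execution

function createMemory(sizeInBytes) {
  return new DataView(new ArrayBuffer(sizeInBytes));
}

class CPU {
  constructor(memory) {
    this.memory = memory;
    this.ip = 0;         // instruction pointer
    this.r1 = 0;         // one general-purpose 16-bit register
    this.halted = false;
  }
  fetch8() {
    return this.memory.getUint8(this.ip++);
  }
  fetch16() {
    const value = this.memory.getUint16(this.ip); // big-endian by default
    this.ip += 2;
    return value;
  }
  // One fetch-decode-execute cycle.
  step() {
    const opcode = this.fetch8();
    switch (opcode) {
      case MOV_LIT_R1: this.r1 = this.fetch16(); break;
      case HALT: this.halted = true; break;
    }
  }
}
```

A real VM would add more registers, a stack, and memory-mapped I/O; this only shows the shape of the loop.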

Passing a reference by value is passing a reference, yes. However, it is still a copy of the reference that is passed. Very subtle semantic difference. It is not the reference. It is a different reference, which starts out referencing the same thing.

The subtlety comes in what happens if the reference itself (not what it references) is modified.

Go to first principles:

A variable is merely a block in memory of extent defined by the size of the variable. It could be in global memory, or stack memory, or a processor register. Depends on the declaration, scope, language and compiler/linker. But it exists somewhere.
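In JavaScript terms, that subtlety looks like this: mutating the object a parameter points to is visible to the caller, while reassigning the parameter only rebinds the local copy of the reference.

```javascript
function mutate(list) {
  list.push(4);      // visible outside: same underlying array
}

function reassign(list) {
  list = [9, 9, 9];  // only rebinds this function's copy of the reference
}

const nums = [1, 2, 3];
mutate(nums);        // nums is now [1, 2, 3, 4]
reassign(nums);      // nums is unchanged
```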

We’re pleased to announce that WebKit has a full WebAssembly implementation.

Dynamic Duo

WebAssembly is a no-nonsense sidekick to JavaScript. It isn’t meant to be hand-written; rather, it’s a low-level binary format designed to be a suitable compilation target for existing languages such as C++. The WebAssembly code that the browser sees will already have undergone high-level, language-specific optimizations. This is great because it means implementations don’t have to know about how C++ or other languages are optimized. Running expensive language-specific optimization on the developers’ machines allows WebKit to focus on target-specific optimizations. It also means that we can focus WebAssembly compiler optimizations on fast code delivery and predictable performance.
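As a tiny illustration of the binary format: the smallest valid module is just the magic bytes `\0asm` plus a version number. Everything a C++ toolchain emits builds on that preamble with type, function, and code sections.

```javascript
// The 8-byte preamble of every WebAssembly module: "\0asm" + version 1.
// On its own, this is a complete (if empty) module with no exports.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

console.log(WebAssembly.validate(bytes)); // true

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(Object.keys(instance.exports)); // [] -- nothing exported
});
```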

The purpose of this article series is to chronicle my exploration into JavaScript performance through my ongoing work creating a visualization tool for Chrome memory profiles. We’ll start with how the project came about and then take a deep dive into the file format that Chrome uses to represent a memory heap.

A few months ago, I had the privilege of listening to a fantastic talk by Casey Rosenthal on the subject of intuition engineering. Intuition engineering is the concept that humans are much better at understanding information that can be consumed without needing to think about it. Things like sights, sounds, and smells can be processed in a fraction of the time that it takes to read a block of text or interpret a table.

JavaScript performance is an evergreen topic on the web. With each new release of Microsoft Edge, we look at user feedback and telemetry data to identify opportunities to improve the Chakra JavaScript engine and enable better performance on real sites.

In this post, we’ll walk you through some new features coming to Chakra with the Windows 10 Creators Update that improve the day-to-day browsing experience in Microsoft Edge, as well as some new experimental features for developers: WebAssembly, and Shared Memory and Atomics.

Saving memory by re-deferring functions

Back in the days of Internet Explorer, Chakra introduced the ability to defer the parsing of functions until they are first needed.

You’ve been building your React apps with a Redux store for quite a while, yet you feel awkward when your components update so often. You’ve crafted your state thoroughly, and your architecture is such that each component gets just what it needs from it, no more no less. Yet, they update behind your back. Always mapping, always calculating.

Reselect to the rescue

How cool would it be if you could calculate just what you need? If this part of the state tree changes, then yes, please, update this.

Let’s take a look at the code for a simple TODOs list with a visibility filter, specifically the part in charge of getting the visible TODOs in our container component:
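The container code itself isn't reproduced in this excerpt; to show the idea, here's a hand-rolled sketch of what Reselect's `createSelector` does (the real library has the same shape, plus more features): recompute only when one of the input selectors returns a new value, otherwise return the memoized result.

```javascript
// Simplified stand-in for reselect's createSelector.
function createSelector(inputSelectors, compute) {
  let lastInputs = null;
  let lastResult;
  return state => {
    const inputs = inputSelectors.map(select => select(state));
    const changed =
      lastInputs === null || inputs.some((input, i) => input !== lastInputs[i]);
    if (changed) {
      lastInputs = inputs;
      lastResult = compute(...inputs);
    }
    return lastResult; // same reference while inputs are unchanged
  };
}

const getTodos = state => state.todos;
const getVisibilityFilter = state => state.visibilityFilter;

const getVisibleTodos = createSelector(
  [getTodos, getVisibilityFilter],
  (todos, filter) =>
    filter === 'SHOW_COMPLETED' ? todos.filter(t => t.completed) : todos
);
```

If neither `state.todos` nor `state.visibilityFilter` has changed, repeated calls return the exact same array, so a connected component can skip its mapping and re-rendering work.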

This guide is unlike others I've done so far, and has a bit of a narrative to go along with the sketches. I thought the entire concept of garbage collection, and how it gets dealt with in JavaScript – or more specifically in engines that run JavaScript – deserves a bit more of an explanation. I would also like to mention that this guide is meant to be beginner friendly, and does not cover every aspect of memory management within V8, or the rest of V8's internals. I've added a bunch of resources if you want to delve into it more. This guide also focuses on ✨ JavaScript ✨; obviously garbage collection works differently in languages like C.