HTTP/2: based on Google’s SPDY protocol - intent is to improve page load latency and security
- binary protocol not text based (more compact, efficient to parse and less prone to errors)
- multiplexed: multiple files can be transferred on a single connection
- server push: allows the server to transfer resources to the client before they’re requested (pre-filling the cache)

HTTP/2 and JS developers
- concatenating multiple files into bundles makes it difficult for the browser to effectively cache our code
- the whole bundle needs to be redownloaded if one line of code changes
- since HTTP/2 can multiplex (making requests inexpensive) we can split code into smaller bundles and make use of caching (better experience for users)

web servers also have limits on how efficiently they can serve a large number of files (so we shouldn’t endlessly split files)

There is still per-request protocol overhead compared to a single concatenated file

A single large file compresses better than many small files

Servers are slower serving many small files than a single large file

Changing one module invalidates the cache for one bundle which is only a part of the complete application - the remaining application is still cached. Need to find a balance.
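These notes don't name a mechanism for per-bundle cache invalidation, but the usual webpack approach is content-hashed filenames: only the bundle whose contents changed gets a new URL, so the other bundles stay cached. A config sketch (the filename pattern is real webpack syntax; the rest is illustrative):

```javascript
// webpack output config sketch: each emitted bundle gets a hash of its own
// contents in its filename, so editing one module only changes that one
// bundle's URL - every other bundle keeps its old URL and stays cached.
const config = {
  output: {
    filename: '[name].[contenthash].js', // e.g. vendors.3a9f0b.js
  },
};
module.exports = config;
```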

More bundles = better caching but less compression

AggressiveSplittingPlugin (from webpack)
- Splits the original chunks into smaller chunks (you specify the size)
- To combine similar modules, they are sorted alphabetically by path before splitting - modules in the same folder are probably related to each other and similar from a compression point of view - with this sorting they end up in the same chunk

We need to reuse the previously created chunks
- When AggressiveSplittingPlugin finds a good chunk, it stores the chunk’s modules and hash into records (webpack’s concept of state that is kept between compilations)
- AggressiveSplittingPlugin tries to restore the chunks from records before trying to split the remaining modules (ensures reuse)

The application using this optimization will have multiple script tags to load each chunk in parallel
- The browser can start executing older files already in its cache while waiting for the most recent files to download
- HTTP/2 Server push can be used to send these chunks to the client when the HTML page is requested - best to start pushing the most recent file first as older files are more likely already in the cache
- The client can cancel push responses for files it already has, but this takes a round trip
- When using code splitting for on-demand loading, webpack handles the parallel requests for you
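On-demand loading is driven by dynamic import(), which webpack treats as a split point - it emits the imported module (plus its dependencies) as a separate chunk and fetches it only when the call runs. A sketch, with Node's built-in 'path' module standing in for an application module like './charting':

```javascript
// Dynamic import() creates a split point: webpack emits a separate chunk
// for the imported module and loads it over the network on demand.
async function loadOnDemand() {
  const mod = await import('path'); // stand-in for e.g. import('./charting')
  return typeof mod.join === 'function'; // the module's exports are now usable
}

loadOnDemand().then(ok => console.log(ok)); // → true
```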

Async/await makes asynchronous code look and behave a little more like synchronous code. Any async function implicitly returns a promise, and the resolved value of that promise is whatever you return from the function
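For example (nothing webpack-specific here, just standard async semantics):

```javascript
// An async function always returns a promise; the return value becomes
// the promise's resolved value.
async function getAnswer() {
  return 42;
}

const p = getAnswer();
console.log(p instanceof Promise); // → true
p.then(value => console.log(value)); // → 42
```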

Why is it better?
1. Concise + clean (no more .then, no more nested code)
2. Error Handling: can handle synchronous and async errors with the same construct (try/catch) - previously, try/catch wouldn’t catch errors thrown inside a promise (those needed .catch)
3. Conditionals: conditional logic based on a promise’s result no longer forces nested .then branches - plain if statements work
4. Intermediate Values: no more deeply nested promises or Promise.all contortions just to pass intermediate values down a chain
5. Error stacks: the stack points at the function containing the error rather than an anonymous .then callback - especially valuable in production environments with large promise chains
6. Debugging: much easier - promises were annoying because you couldn’t set breakpoints in arrow functions that return expressions (no body)
- The debugger also won’t step into .then callbacks, because step-over only follows synchronous code
- With async/await you can step through awaits as if the code were synchronous
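A small sketch of point 2 above - one try/catch handles both a synchronous throw and an awaited rejection, which .then/.catch chains handle through two different mechanisms (fetchUser is a made-up example function):

```javascript
// A synchronous throw inside an async function becomes a rejected promise,
// so await surfaces it in the same try/catch as any other error.
async function fetchUser(shouldFail) {
  if (shouldFail) throw new Error('sync failure'); // synchronous throw
  return { name: 'Ada' };                          // resolved value
}

async function main() {
  try {
    const user = await fetchUser(false);
    console.log(user.name);            // → Ada
    await fetchUser(true);             // rejection lands in the catch below
  } catch (err) {
    console.log('caught:', err.message); // → caught: sync failure
  }
}

main();
```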