Generally you should first compile and run your code without optimizations (the default). Once you are sure that the code runs correctly, you can use the techniques in this article to make it load and run faster.

The higher optimization levels introduce progressively more aggressive optimization, resulting in improved performance and code size at the cost of increased compilation time. The levels can also highlight different issues related to undefined behavior in code.

The optimization level you should use depends mostly on the current stage of development:

When first porting code, run emcc on your code using the default settings (without optimization). Check that your code works and debug and fix any issues before continuing.

Build with lower optimization levels during development for a shorter compile/test iteration cycle (-O0 or -O1).

Build with -O2 to get a well-optimized build.

Building with -O3 or -Os can produce an even better build than -O2, and both are worth considering for release builds. -O3 builds are even more optimized than -O2, but at the cost of significantly longer compilation time and potentially larger code size. -Os similarly increases compile times, but focuses on reducing code size while performing additional optimization. It’s worth trying these different optimization options to see what works best for your application.
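The levels above can be compared by simply rebuilding with each flag. A minimal sketch, assuming a single-file project (main.c and the output names here are placeholders):

```shell
# Unoptimized build for initial porting and debugging (the default):
emcc main.c -o app.html

# Fast iteration during development:
emcc -O1 main.c -o app.html

# Well-optimized build:
emcc -O2 main.c -o app.html

# Candidates for release builds: more aggressive speed, or smaller size:
emcc -O3 main.c -o app.html
emcc -Os main.c -o app.html
```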

Other optimizations are discussed in the following sections.

In addition to the -Ox options, there are separate compiler options that can be used to control the JavaScript optimizer (js-opts), LLVM optimizations (llvm-opts) and LLVM link-time optimizations (llvm-lto).

Note

The meanings of the emcc optimization flags (-O1,-O2 etc.) are similar to gcc, clang, and other compilers, but also different because optimizing asm.js and WebAssembly includes some additional types of optimizations. The mapping of the emcc levels to the LLVM bitcode optimization levels is documented in the reference.

This section describes optimizations and issues that are relevant to code size. They are useful both in small projects or libraries, where you want the smallest footprint you can achieve, and in large projects, where sheer size may cause issues (like slow startup) that you want to avoid.

By default Emscripten emits the static memory initialization code inside the .js file. This can cause the JavaScript file to be very large, which will slow down startup. It can also cause problems in JavaScript engines with limits on array sizes, resulting in errors like Array initializer too large or Too much recursion.

The --memory-init-file 1 emcc option causes the compiler to emit this code in a separate binary file with the suffix .mem. The .mem file is loaded (asynchronously) by the main .js file before main() is called and compiled code is able to run.

Note

From Emscripten 1.21.1 this setting is enabled by default for fully optimized builds, that is, -O2 and above.
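For example (main.c is a placeholder), the separate memory initialization file can be requested explicitly:

```shell
# Emit static memory initialization data into app.html.mem instead of
# embedding it in the JS (enabled by default at -O2 and above):
emcc -O2 --memory-init-file 1 main.c -o app.html
```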

You may wish to build the less performance-sensitive source files in your project using -Os or -Oz and the remainder using -O2 (-Os and -Oz are similar to -O2, but reduce code size at the expense of performance; -Oz reduces code size more than -Os).

Note that -Oz may take longer to build. For example, it enables EVAL_CTORS which tries to optimize out C++ global constructors, which takes time.
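Mixing levels per file can be sketched as follows, assuming a hypothetical layout where hot.c is performance-critical and cold.c is not:

```shell
# Compile the performance-sensitive file for speed:
emcc -O2 -c hot.c -o hot.o

# Compile the rest for size:
emcc -Oz -c cold.c -o cold.o

# Link with the optimization level you want for the final JS generation:
emcc -O2 hot.o cold.o -o app.html
```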

You can use the NO_FILESYSTEM option to disable bundling of filesystem support code (the compiler should optimize it out if not used, but may not always succeed). This can be useful if you are building a pure computational library, for example. See settings.js for more details.

You can use ELIMINATE_DUPLICATE_FUNCTIONS to remove duplicate functions, which C++ templates often create. See settings.js for more details.
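Both settings are passed with -s, in the same way as the other options in this article (file names here are placeholders):

```shell
# Pure computational library: drop filesystem support code entirely:
emcc -O2 -sNO_FILESYSTEM=1 mathlib.c -o mathlib.js

# Merge functions that compiled to identical code (common with C++ templates):
emcc -O2 -sELIMINATE_DUPLICATE_FUNCTIONS=1 templated.cpp -o templated.js
```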

You can move some of your code into the Emterpreter, which will then run much slower (as it is interpreted), but it will transfer all that code into a smaller amount of data.

You can use separate modules through dynamic linking. That can increase the total code size of everything, but reduces the maximum size of a single module, which can help in some cases (e.g. if a single big module hits a memory limit).
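A minimal sketch of dynamic linking, assuming the MAIN_MODULE/SIDE_MODULE settings described in settings.js (file names are placeholders):

```shell
# The main module contains the runtime and can load side modules:
emcc -O2 -sMAIN_MODULE=1 main.c -o app.js

# A side module contains only compiled code, loaded at runtime:
emcc -O2 -sSIDE_MODULE=1 plugin.c -o plugin.js
```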

By default Emscripten emits one JS file containing the entire codebase: both the asm.js code that was compiled and the general code that sets up the environment, connects to browser APIs, and so on. In a very large codebase this can be inefficient in terms of memory usage, as having all of that in one script means the JS engine might use some memory to parse and compile the asm.js, and might not free it before starting to run the codebase. In a large game, starting to run the code might then allocate a large typed array for memory, so you might see a “spike” of memory use, after which the temporary compilation memory is freed. If big enough, that spike can cause the browser to run out of memory and fail to load the application. This is a known problem on Chrome (other browsers do not seem to have this issue).

A workaround is to separate out the asm.js into another file, and to make sure that the browser has a turn of the event loop between compiling the asm.js module and starting to run the application. This can be achieved by running emcc with --separate-asm.

You can also do this manually, as follows:

Run tools/separate_asm.py. It takes as inputs the filename of the full project and two filenames to emit: one for the asm.js and one for everything else.

Load the asm.js script first, then after a turn of the event loop, the other one, for example using code like this in your HTML file:
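A sketch of that loading code, assuming the two output files are named app.asm.js and app.js (the actual names are whatever you chose above):

```html
<script src="app.asm.js"></script>
<script>
  // Give the browser a turn of the event loop between compiling the
  // asm.js module and starting to run the application.
  window.setTimeout(function() {
    var script = document.createElement('script');
    script.src = 'app.js';
    document.body.appendChild(script);
  }, 1);
</script>
```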

If you hit memory limits in browsers, it can help to run your project by itself, as opposed to inside a web page containing other content. If you open a new web page (as a new tab, or a new window) that contains just your project, then you have the best chance at avoiding memory fragmentation issues.

JavaScript engines will often compile very large functions slowly (relative to their size), and fail to optimize them effectively (or at all). One approach to this problem is to use “outlining”: breaking them into smaller functions that can be compiled and optimized more effectively.

The OUTLINING_LIMIT setting defines the function size at which Emscripten will try to break large functions into smaller ones. Search for this setting in settings.js for information on how to determine what functions may need to be outlined and how to choose an appropriate function size.

Aggressive variable elimination attempts to remove variables whenever possible, even at the cost of increasing code size by duplicating expressions. This can improve speed in cases where you have extremely large functions. For example it can make sqlite (which has a huge interpreter loop with thousands of lines in it) 7% faster.

You can enable aggressive variable elimination with -sAGGRESSIVE_VARIABLE_ELIMINATION=1.

Catching C++ exceptions (specifically, emitting catch blocks) is turned off by default in -O1 (and above). Due to how asm.js/wasm currently implement exceptions, this makes the code much smaller and faster (eventually, wasm should gain native support for exceptions and not have this issue).

To re-enable exceptions in optimized code, run emcc with -sDISABLE_EXCEPTION_CATCHING=0 (see src/settings.js).

Note

When exception catching is disabled, a thrown exception terminates the application. In other words, an exception is still thrown, but it isn’t caught.

Note

Even with catch blocks not being emitted, there is some code size overhead unless you build your source files with -fno-exceptions, which omits all exception support code (for example, it will avoid creating proper C++ exception objects for errors in std::vector, and just abort the application if they occur). -fno-rtti may help as well.
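The two ends of the trade-off can be sketched as follows (main.cpp is a placeholder):

```shell
# Smallest footprint: no exception or RTTI support code at all;
# code that throws will abort the application:
emcc -O2 -fno-exceptions -fno-rtti main.cpp -o app.js

# Or keep exceptions fully working in an optimized build:
emcc -O2 -sDISABLE_EXCEPTION_CATCHING=0 main.cpp -o app.js
```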

Building with -sALLOW_MEMORY_GROWTH=1 allows the total amount of memory used to change depending on the demands of the application. This is useful for apps that don’t know ahead of time how much they will need, but it disables asm.js optimizations. In WebAssembly, however, there should be little or no overhead.

--closure 1: This can help with reducing the size of the non-generated (support/glue) JS code, and with startup. However, it can break if you do not use proper Closure Compiler annotations and exports. But it’s worth it!

--llvm-lto 1: This enables LLVM’s link-time optimizations, which can help in some cases. However, there are known issues with these optimizations, so code must be extensively tested. See llvm-lto for information about the other modes.
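Both options can be combined in a release build; a sketch (remember the caveats above about Closure annotations and LTO testing):

```shell
# Closure-compile the support JS and enable LLVM link-time optimization:
emcc -O2 --closure 1 --llvm-lto 1 main.c -o app.js
```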

Emscripten-compiled code can currently achieve approximately half the speed of a native build. If performance is significantly poorer than expected, run through the additional troubleshooting steps below:

Building projects is a two-stage process: compiling source code files to LLVM bitcode and generating JavaScript from LLVM. Did you use the same optimization level in both steps (-O2 or -O3)?

Test on multiple browsers. If performance is acceptable on one browser and significantly poorer on another, then file a bug report, noting the problem browser and other relevant information.

Does the code validate in Firefox (look for “Successfully compiled asm.js code” in the web console)? If you see a validation error when using up-to-date versions of Firefox and Emscripten, please file a bug report.