shadow-cljs provides everything you need to compile your ClojureScript projects with a focus on simplicity and ease of use. The provided build targets abstract away most of the manual configuration so that you only have to configure the essentials for your build. Each target provides optimal defaults for its environment, giving you an optimized experience during development and in release builds.

When working with shadow-cljs you will be defining one or more builds in the shadow-cljs.edn configuration file. Each build will have a :target property which represents a configuration preset optimized for the target environment (eg. the Browser, a node.js application or a Chrome Extension).
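As a sketch, a minimal shadow-cljs.edn with one browser build might look like this (the build id, paths and init function are placeholders):

```clojure
;; shadow-cljs.edn (illustrative sketch)
{:source-paths ["src"]
 :dependencies []
 :builds
 {:app                                   ;; the build id, used on the command line
  {:target :browser                      ;; the configuration preset
   :output-dir "public/js"
   :modules {:main {:init-fn demo.app/init}}}}}
```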

Each build can either produce development or release output depending on the command used to trigger the compilation. The standard build commands are: compile, watch and release.
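Assuming a build with the id :app, these commands are invoked like this (via npx here; a global or project-local install works as well):

```shell
$ npx shadow-cljs compile app   # compile a development build once
$ npx shadow-cljs watch app     # recompile automatically on change
$ npx shadow-cljs release app   # produce optimized release output
```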

Creating a release build will strip out all the development related code and finally run the code through the Closure Compiler. This is an optimizing Compiler for JavaScript which will significantly reduce the overall size of the code.

There are several important concepts that you should familiarize yourself with when using shadow-cljs. They are integral to understanding how everything fits together and how the tool works with your code.

shadow-cljs uses the Java Virtual Machine (JVM) and its "classpath" when working with files. This is a virtual filesystem composed of many classpath entries. Each entry is either

A local filesystem directory, managed by :source-paths entry in the configuration.

Or a .jar file, representing Clojure(Script) or JVM libraries. These are compressed archives containing many files (basically just a .zip file). These are added by your :dependencies.

In Clojure(Script) everything is namespaced and each name is expected to resolve to a file. If you have a (ns demo.app) namespace the compiler expects to find a demo/app.cljs (or .cljc) on the classpath. The classpath is searched in order until the file is found. Suppose you configured :source-paths ["src/main" "src/test"]: the compiler will first look for src/main/demo/app.cljs and then src/test/demo/app.cljs. When the file is not found on any source path the JVM will begin looking into the .jar files on the classpath. When it finds a demo/app.cljs at the root of any of the libraries, that file will be used.
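To illustrate the lookup order described above (paths are the ones from the example):

```clojure
;; shadow-cljs.edn fragment
{:source-paths ["src/main" "src/test"]}

;; (ns demo.app) is resolved by checking, in order:
;;   src/main/demo/app.cljs  (or .cljc)
;;   src/test/demo/app.cljs
;;   demo/app.cljs at the root of any .jar on the classpath
```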

Important

When a filename exists multiple times on the classpath only the first one is used. Everything on the JVM and in Clojure(Script) is namespaced to avoid such conflicts. This is very similar to npm, where each package must have a unique name.

It is therefore recommended to be very disciplined about the names you choose and to namespace everything properly. It may seem repetitive to always use (ns your-company.components.foo) over (ns components.foo) but it will save you a lot of headaches later on.

This is unlike npm, where the package name is never used inside the package itself and only relative paths are used.

shadow-cljs can be started in "server" mode which is required for long-running tasks such as watch. A watch will implicitly start the server instance if it is not already running. The server will provide the Websocket endpoint that builds will connect to as well as all the other endpoints for nREPL, Socket REPL and the development HTTP servers.

When using the shadow-cljs CLI interface all commands will re-use a running server instance JVM instead of starting a new JVM. This is substantially faster since start-up time can be quite slow.

Once the server is running however you only have to restart it whenever your :dependencies change and everything else can be done via the REPL.
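For example, from a Clojure REPL connected to the running server the usual build commands are available as plain functions (the build id :app is a placeholder):

```clojure
(require '[shadow.cljs.devtools.api :as shadow])

(shadow/compile :app)   ;; one-off development build
(shadow/watch :app)     ;; start a watch worker for the build
(shadow/release :app)   ;; create an optimized release build
```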

The REPL is at the heart of all Clojure(Script) development and every CLI command can also be used directly from the REPL as well. It is absolutely worth getting comfortable with the REPL even if the command line may seem more familiar.

Many of the examples show the configuration file for the compiler. This file contains an EDN map. Where we have already discussed required options we will often elide them for clarity. In such cases we'll usually include an ellipsis to indicate "content that is required but isn't in our current focus":

Example 1. Specify dependencies

{:dependencies [[lib "1.0"]]}

Example 2. Add source paths

{...
 :source-paths ["src"]
 ...}

This allows us to concisely include enough context to understand the nesting of the configuration of interest.

In your project directory you’ll need a package.json. If you do not have one yet you can create one by running npm init -y. If you don’t have a project directory yet consider creating it by running

$ npx create-cljs-project my-project

This will create all the necessary basic files and you can skip the following commands.

If you have a package.json already and just want to add shadow-cljs run

NPM

$ npm install --save-dev shadow-cljs

Yarn

$ yarn add --dev shadow-cljs

For convenience you can run npm install -g shadow-cljs or yarn global add shadow-cljs. This will let you run the shadow-cljs command directly later. There should always be a shadow-cljs version installed in your project, the global install is optional.

shadow-cljs can be used in many different ways but the general workflow stays the same.

During development you have the option to compile a build once or run a watch worker which watches your source files for changes and re-compiles them automatically. When enabled the watch will also hot-reload your code and provide a REPL. During development the focus is on developer experience with fast feedback cycles. Development code should never be shipped to the public.

When it is time to get serious you create a release build, which produces optimized output suitable for production. For this the Closure Compiler is used, which applies some seriously :advanced optimizations to your code to create the smallest output possible. This may require some tuning to work properly when using lots of interop with native JavaScript, but it works flawlessly for ClojureScript (and the code from the Closure Library).

A shadow-cljs command can be fairly slow to start. To improve this shadow-cljs can run in "server mode" which means that a dedicated process is started which all other commands can use to execute a lot faster since they won’t have to start a new JVM/Clojure instance.

Commands that do long-running things implicitly start a server instance (eg. watch) but it is often advisable to have
a dedicated server process running.

You can run the process in the foreground in a dedicated terminal. Use CTRL+C to terminate the server.

shadow-cljs can integrate with other Clojure tools since the primary distribution is just a .jar file available via Clojars. By default your :dependencies are managed via shadow-cljs.edn but you can use other build tools to manage your dependencies as well.

Caution

It is strongly recommended to use the standalone shadow-cljs version. The command does a lot of things to optimize the user experience (e.g. faster startup) which are not done by other tools. You’ll also save yourself a lot of headaches dealing with dependency conflicts and other related errors.

If you’d like to use Leiningen to manage your dependencies, you can do so by adding a :lein entry to your shadow-cljs.edn config. With this setting, the shadow-cljs command will use lein to launch the JVM, ignoring any :source-paths and :dependencies in shadow-cljs.edn; relying instead on lein to set them from project.clj.

{:lein true
 ;; :source-paths and :dependencies are now ignored in this file
 ;; configure them via project.clj
 :builds {...}}

When using project.clj to manage your :dependencies you must manually include the thheller/shadow-cljs artifact in your :dependencies (directly or in a profile).

Important

When you are running into weird Java stacktraces when starting shadow-cljs or trying to compile builds you may have a dependency conflict. It is very important that shadow-cljs is used with properly matching org.clojure/clojurescript and closure-compiler versions. You can check via lein deps :tree; the required versions are listed on Clojars (on the right side).

You may also directly execute shadow-cljs commands via lein if you prefer to not use the shadow-cljs command itself.

Important

It is recommended to still use the shadow-cljs command to run commands since that will take full advantage of a running server mode instance. This will run commands substantially faster than launching additional JVMs when using lein directly.

The new deps.edn can also be used to manage your :dependencies and :source-paths instead of using the built-in methods or lein. All shadow-cljs commands will then be launched via the new clojure utility instead.

Important

tools.deps is still changing quite frequently. Make sure you are using the latest version.

To use this, set :deps true in your config. It is also possible to configure which deps.edn aliases should be used.

You must add the thheller/shadow-cljs artifact to your deps.edn manually.
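A sketch of the two files involved (the version shown is a placeholder, check Clojars for the current one):

```clojure
;; shadow-cljs.edn
{:deps {:aliases [:cljs]}   ;; or simply :deps true for no extra aliases
 :builds {...}}

;; deps.edn
{:paths ["src"]
 :deps {thheller/shadow-cljs {:mvn/version "2.28.3"}}}  ;; placeholder version
```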

You may also specify additional aliases via the command line using -A, eg. shadow-cljs -A:foo:bar …​.

Important

Aliases are only applied when a new instance/server is started. They do not apply when connecting to a running server using the shadow-cljs command. Running via clj will always start a new JVM and does not support server-mode.

The authors have little Boot experience, so this chapter is in need of contributions. We understand
that Boot allows you to build your tool chain out of functions. Since shadow-cljs is a normal
JVM library, you can call functions within it to invoke tasks.

You can use the shadow-cljs CLI to call specific Clojure functions from the command line. This is useful when you want run some code before/after certain tasks. Suppose you wanted to rsync the output of your release build to a remote server.

The usual (defn release [& args]) structure also works if you want to parse the args with something like tools.cli.
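A hedged sketch of such a function; the namespace, build id and rsync destination are all made up for illustration:

```clojure
(ns my.build
  (:require
    [shadow.cljs.devtools.api :as shadow]
    [clojure.java.shell :refer [sh]]))

(defn release [& args]
  ;; create the optimized release build first
  (shadow/release :app)
  ;; then copy the output to a (hypothetical) remote server
  (sh "rsync" "-arzt" "--delete" "public/" "server:/srv/app/"))
```

This would be invoked via shadow-cljs clj-run my.build/release.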

You have access to the full power of Clojure here. You can build entire tools on top of this if you like. As a bonus everything you write this way is also directly available via the Clojure REPL.

Important

When the server is running the namespace will not be reloaded automatically, it will only be loaded once. It is recommended to do the development using a REPL and reload the file as usual (eg. (require 'my.build :reload)). You may also run shadow-cljs clj-eval "(require 'my.build :reload)" to reload manually from the command line.

By default the functions called by clj-run only have access to a minimal shadow-cljs runtime which is enough to run compile, release and any other Clojure functionality. The JVM will terminate when your function completes.

If you want to start a watch for a given build you need to declare that the function you are calling requires a full server. This will cause the process to stay alive until you explicitly call (shadow.cljs.devtools.server/stop!) or CTRL+C the process.

(ns demo.run
  (:require [shadow.cljs.devtools.api :as shadow]))

;; this fails because a full server instance is missing
(defn foo
  [& args]
  (shadow/watch :my-build))

;; this metadata will ensure that the server is started so watch works
(defn foo
  {:shadow/requires-server true}
  [& args]
  (shadow/watch :my-build))

The REPL is a very powerful tool to have when working with Clojure(Script) code. shadow-cljs provides several built-in variants that let you get started quickly as well as variants that are integrated into your standard builds.

When you quickly want to test out some code the built-in REPLs should be enough. If you need more complex setups that also do stuff on their own it is best to use an actual build.

By default you can choose between a node-repl and a browser-repl. They both work similarly; the differentiating factor is that one runs in a managed node.js process while the other opens a browser window that will be used to eval the actual code.

node-repl lets you get started without any additional configuration. It has access to all your code via the usual means, ie. (require '[your.core :as x]). Since it is not connected to any build it does not do any automatic rebuilding of code when your files change and does not provide hot-reload.

node-repl and browser-repl work without any specific build configuration. That means they’ll only do whatever you tell them to do but nothing on their own.

If you want to build a specific thing you should configure a build using one of the provided build-targets. Most of them automatically inject the necessary code for a ClojureScript REPL. It should not require any additional configuration. For the build CLJS REPL to work you need 2 things

a running watch for your build

a connected JS runtime for the :target, meaning: if you are using the :browser target you need to open a browser that has the generated JS loaded; for node.js builds that means running the node process.

Once you have both you can connect to the CLJS REPL via the command line or from the Clojure REPL.
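Assuming a build id of :app, connecting looks like this from the command line:

```shell
$ shadow-cljs cljs-repl app
```

or, from a Clojure REPL, via (shadow.cljs.devtools.api/repl :app).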

A Clojure REPL is also provided in addition to the provided ClojureScript REPLs. It can be used to control the shadow-cljs process and run all other build commands through it. You can start with a Clojure REPL and then upgrade it to a CLJS REPL at any point (and switch back).

You can stop the embedded server by running (shadow.cljs.devtools.server/stop!). This will also stop all running build processes.

Important

If you want to switch to a CLJS REPL this may require additional setup in the tool you used to start the server in. Since lein will default to using nREPL it will require configuring additional nREPL :middleware. When using clj you are good to go since it doesn’t use nREPL.

shadow-cljs is configured by a shadow-cljs.edn file in your project root directory. You can
create a default one by running shadow-cljs init. It should contain a map with some global
configuration and a :builds entry for all your builds.

Both manage your dependencies via a package.json file in your project directory. Almost every package available via npm will explain how to install it. Those instructions now apply to shadow-cljs as well.

Installing a JavaScript package

# npm
$ npm install the-thing
# yarn
$ yarn add the-thing

Nothing more is required. Dependencies will be added to the package.json file and this will be used to manage them.

Tip

If you don’t have a package.json yet run npm init from a command line.

You might run into errors related to missing JS dependencies. Most ClojureScript libraries do not yet declare the npm packages they use since they still expect to use CLJSJS. We want to use npm directly which means you must manually install the npm packages until libraries properly declare the :npm-deps themselves.

The required JS dependency "react" is not available, it was required by ...

This means that you should npm install react.

Tip

In the case of react you probably need these 3 packages: npm install react react-dom create-react-class.

Most configuration will be done in the projects themselves via shadow-cljs.edn but some config may be user-dependent. Tools like CIDER may require the additional cider-nrepl dependency which would be useless for a different team member using Cursive when adding that dependency via shadow-cljs.edn.

A restricted set of config options can be added to ~/.shadow-cljs/config.edn which will then apply to all projects built on this user's machine.

Adding dependencies is allowed via the usual :dependencies key. Note that dependencies added here will apply to ALL projects. Keep them to a minimum and only put tool related dependencies here. Everything that is relevant to a build should remain in shadow-cljs.edn as otherwise things may not compile for other users. These dependencies will automatically be added when using deps.edn or lein as well.

Example ~/.shadow-cljs/config.edn

{:dependencies
[[cider/cider-nrepl "0.21.1"]]}
;; this version may be out of date, check whichever is available

When using deps.edn to resolve dependencies you may sometimes want to activate additional aliases. This can be done via :deps-aliases.

The default global config file in ~/.nrepl/nrepl.edn or the local .nrepl.edn will also be loaded on startup and can be used to configure :middleware.

If the popular middleware cider-nrepl is found on the classpath (e.g. it’s included in :dependencies), it will be used automatically. No additional configuration required. This can be disabled by setting :nrepl {:cider false}.

You may configure the namespace you start in when connecting by setting :init-ns in the :nrepl options. It defaults to shadow.user.
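A sketch of the :nrepl options discussed above (port and namespace are placeholders):

```clojure
{...
 :nrepl {:port 9000               ;; fixed port instead of a random one
         :init-ns my.project.dev  ;; placeholder namespace
         :cider false}            ;; optional: disable cider-nrepl detection
 ...}
```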

When connecting to the nREPL server the connection always starts out as a Clojure REPL. Switching to a CLJS REPL works similarly to the non-nREPL version. First the watch for the given build needs to be started and then we need to select this build to switch the current nREPL session to that build. After selecting the build everything will be eval’d in ClojureScript instead of Clojure.

When you use shadow-cljs embedded in other tools that provide their own nREPL server (eg. lein) you need to configure the shadow-cljs middleware. Otherwise you won’t be able to switch between CLJ and CLJS REPLs.

A Clojure Socket REPL is started automatically in server-mode and uses a random port by default. Tools can find the port it was started under by checking .shadow-cljs/socket-repl.port which will contain the port number.

You must generate the Certificate with a SAN (Subject Alternative Name) for "localhost" (or whichever host you want to use). SAN is required to get Chrome to trust the Certificate and not show warnings. The password used when exporting must match the password assigned to the Keystore.

The shadow-cljs server starts one primary HTTP server. It is used to serve the UI and websockets used for Hot Reload and REPL clients. By default it listens on Port 9630. If that Port is in use it will increment by one and attempt again until an open Port is found.

Startup message indicating the Port used

shadow-cljs - server running at http://0.0.0.0:9630

When :ssl is configured the server will be available via https:// instead.

Tip

The server automatically supports HTTP/2 when using :ssl.

If you prefer to set your own port instead you can do this via the :http config.

shadow-cljs.edn with :http config

{...
 :http {:port 12345
        :host "my.machine.local"}
 ...}

:ssl switches the server to serve https:// only. If you want to keep the http:// version you can configure a separate :ssl-port as well.

shadow-cljs can provide additional basic HTTP servers via the :dev-http config entry. By default these will serve all static files from the configured paths, and fall back to index.html when a resource is not found (this is what you typically want when developing an application which uses browser push state).

These servers are started automatically when shadow-cljs is running in server mode. They are not specific to any build and can be used to serve files for multiple builds as long as a unique :output-dir is used for each.
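A sketch of such a config (ports and paths are placeholders):

```clojure
{...
 :dev-http {8080 "public"                 ;; serve ./public at http://localhost:8080
            8081 ["test-out" "public"]}   ;; several roots, searched in order
 :builds {...}}
```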

Important

These are just generic web servers that serve static files. They are not required for any live-reload or REPL logic. Any web server will do; these are just provided for convenience.

When shadow-cljs.edn is in charge of starting the JVM you can configure additional command line arguments to be passed directly to the JVM. For example you may want to decrease or increase the amount of RAM used by shadow-cljs.

This is done by configuring :jvm-opts at the root of shadow-cljs.edn expecting a vector of strings.

The arguments that can be passed to the JVM vary depending on the version but you can find an example list here. Please note that assigning too little or too much RAM can degrade performance. The defaults are usually good enough.
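For example (the heap size is purely illustrative):

```clojure
;; shadow-cljs.edn
{:source-paths ["src"]
 :dependencies []
 :jvm-opts ["-Xmx1G"]   ;; example: limit the maximum heap to 1GB
 :builds {...}}
```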

Important

When using deps.edn or project.clj the :jvm-opts need to be configured there.

Each build in shadow-cljs must define a :target which defines where you intend your code to be executed. There are default built-ins for the browser and node.js. They all share the basic concept of having :dev and :release modes. :dev mode provides all the usual development goodies like fast compilation, live code reloading and a REPL. :release mode will produce optimized output intended for production.

As a developer most of your time is spent in development mode. You’re probably familiar with tools like figwheel,
boot-reload, and devtools. It’s almost certain that you want one or more of these in your builds.

Preloads are used to force certain namespaces into the front of your generated Javascript. This is
generally used to inject tools and instrumentation before the application actually loads and runs. The
preloads option is simply a list of namespaces in the :preloads entry of the :devtools section of shadow-cljs.edn:
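A sketch of such a configuration (the preload namespace is illustrative):

```clojure
{...
 :builds
 {:app {:target :browser
        ...
        :devtools {:preloads [devtools.preload]}}}}
```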

Since version 2.0.130 shadow-cljs automatically adds cljs-devtools to the preloads in watch and compile if it is on the classpath. All you need to do is make sure binaryage/devtools is in your dependencies list. (Note: binaryage/devtools, not binaryage/cljs-devtools.) If you don't want to have cljs-devtools in specific targets, you can suppress this by adding :console-support false to the :devtools section of those targets.

The React and ClojureScript ecosystems combine to make this kind of thing super useful. The shadow-cljs
system includes everything you need to do your hot code reload, without needing to resort to external tools.

You can configure the compiler to run functions just before hot code reload brings in updated code, and just after. These are useful for stopping/starting things that would otherwise close over old code.

These can be configured via the :devtools section in your build config or directly in your code via metadata tags.
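Reconstructing the elided example with the metadata variant (the names my.app/stop and my.app/start match the description that follows):

```clojure
(ns my.app)

(defn ^:dev/before-load stop []
  (js/console.log "stop"))

(defn ^:dev/after-load start []
  (js/console.log "start"))
```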

This would call my.app/stop before loading any new code and my.app/start when all new code was loaded. You can tag multiple functions like this and they will be called in dependency order of their namespaces.

There are also async variants of these in case you need to do some async work that should complete before proceeding with the reload process.

If neither :after-load nor :before-load are set the compiler will only attempt to hot reload the code in the :browser target. If you still want hot reloading but don’t need any of the callbacks you can set :autoload true instead.

It is sometimes desirable to execute some custom code at a specific stage in the compilation pipeline. :build-hooks let you declare which functions should be called and they have full access to the build state at that time. This is quite powerful and opens up many possible tool options.
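A sketch of a hook and its configuration, matching the description in the next paragraph (my.util is a made-up namespace):

```clojure
;; in the build config
{...
 :build-hooks [(my.util/hook 1 2 3)]
 ...}

;; in src/my/util.clj
(ns my.util)

(defn hook
  {:shadow.build/stage :flush}   ;; only call this hook after the :flush stage
  [build-state & args]
  (prn [:hello-world args])
  ;; hooks must return the (possibly modified) build-state
  build-state)
```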

This example would call (my.util/hook build-state 1 2 3) after the build completed the :flush stage (i.e. written to disk). The example would print [:hello-world (1 2 3)] but please do something more useful in actual hooks.

The hook is just a normal Clojure function with some additional metadata. The {:shadow.build/stage :flush} metadata informs the compiler to call this hook for :flush only. You may instead configure {:shadow.build/stages #{:configure :flush}} if the hook should be called after multiple stages. At least one configured stage is required since the hook otherwise would never do anything.

All build hooks will be called after the :target work is done. They will receive the build-state (a Clojure map with all the current build data) as their first argument and must return this build-state, modified or unmodified. When using multiple stages you can add additional data to the build-state that later stages can see. It is strongly advised to use namespaced keys only, to avoid accidentally breaking the entire build.

The build-state has some important entries which might be useful for your hooks:

:shadow.build/build-id - the id of the current build (eg. :app)

:shadow.build/mode - :dev or :release

:shadow.build/stage - the current stage

:shadow.build/config - the build config. You can either store config data for the hook in the build config directly or pass it as arguments in the hook itself

Important

With a running watch all hooks will be called repeatedly for each build. Avoid doing too much work as they can considerably impact your build performance.

With a running watch the :configure stage is only called once. Any of the other stages may be called again (in order) for each re-compile. The build-state will be re-used until the build config changes, at which point it will be thrown away and a fresh one created.

shadow-cljs will cache all compilation results by default. The cache is invalidated whenever anything relevant to the individual source files changes (eg. changed compiler setting, changed dependencies, etc.). This greatly improves the developer experience since incremental compilation will be much faster than starting from scratch.

Invalidating the cache however cannot always be done reliably if you are using a lot of macros with side-effects (reading files, storing things outside the compiler state, etc.). In those cases you might need to disable caching entirely.

Namespaces that are known to include side-effecting macros can be blocked from caching. They won't be cached themselves, and namespaces requiring them will not be cached either. The clara-rules library has side-effecting macros and is blocked by default. You can specify which namespaces to block globally via the :cache-blockers configuration. It expects a set of namespace symbols.

clara.rules cache blocking example (this is done by default)

{...
 :cache-blockers #{clara.rules}
 :builds {...}}

In addition you can control how much caching is done more broadly via the :build-options :cache-level entry. The supported options are:

:all

The default, all CLJS files are cached

:jars

Only caches files from libraries, ie. source files in .jar files

:off

Does not cache any CLJS compilation results (by far the slowest option)

The Closure Library & Compiler allow you to define variables that are essentially compile time constants. You can use these to configure certain features of your build. Since the Closure compiler treats these as constants when running :advanced optimizations they are fully supported in the Dead-Code-Elimination passes and can be used to remove certain parts of the code that should not be included in release builds.
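Reconstructing the elided example around the your.app/VERBOSE variable described below (a sketch):

```clojure
;; in src/your/app.cljs
(ns your.app)

(goog-define VERBOSE false)

(when VERBOSE
  (println "Extra debug output ..."))

;; in the build config, flip the default for development
{...
 :compiler-options {:closure-defines {your.app/VERBOSE true}}
 ...}
```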

This defines the your.app/VERBOSE variable as false by default. This will cause the println to be removed in :advanced compilation. You can toggle this to true via the :closure-defines options which will enable the println. This can either be done for development only or always.

It is generally safer to use the "disabled" variant as the default since it makes things less likely to be included in a release build when they shouldn’t be. Forgetting to set a :closure-defines variable should almost always result in less code being used not more.

Closure Defines from the Closure Library

goog.DEBUG: The Closure Library uses this for many development features. shadow-cljs automatically sets this to false for release builds.

goog.LOCALE can be used to configure certain localization features like goog.i18n.DateTimeFormat. It accepts a standard locale string and defaults to en. Pretty much all locales are supported, see here and here.
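For example, to switch those localization features to German (the locale value is illustrative):

```clojure
{...
 :compiler-options {:closure-defines {goog.LOCALE "de"}}
 ...}
```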

The CLJS compiler supports several options to influence how some code is generated. For the most part shadow-cljs will pick some good defaults for each :target but you might occasionally want to change some of them.

These are all grouped under the :compiler-options key in your build config.

Most of the standard ClojureScript Compiler Options are either enabled by default or do not apply. So very few of them actually have an effect. A lot of them are also specific to certain :target types and do not apply universally (e.g. :compiler-options {:output-wrapper true} is only relevant for :target :browser).

Currently supported options include

:optimizations supports :advanced, :simple or :whitespace, defaults to :advanced. :none is the default for development and cannot be set manually. release with :none won’t work.

:infer-externs :all, :auto, true or false, defaults to true

:static-fns (Boolean) defaults to true

:fn-invoke-direct (Boolean) defaults to false

:elide-asserts (Boolean) defaults to false in development and true in release builds

:pretty-print and :pseudo-names default to false. You can use shadow-cljs release app --debug to enable both temporarily without touching your config. This is very useful when running into problems with release builds

:source-map (Boolean) defaults to true during development, false for release.

:source-map-include-sources-content (Boolean) defaults to true and decides whether source maps should contain their sources in the .map files directly.

:source-map-detail-level :all or :symbols (:symbols reduces the overall size a bit but is also a bit less accurate)

:externs vector of paths, defaults to []

:checked-arrays (Boolean), defaults to false

:anon-fn-naming-policy

:rename-prefix and :rename-prefix-namespace

:warnings as a map of {warning-type true|false}, eg. :warnings {:undeclared-var false} to turn off specific warnings.

Unsupported or non-applicable Options

Options that don’t have any effect at all include

:verbose is controlled by running shadow-cljs compile app --verbose not in the build config.

:ignore takes a set of symbols referring to namespaces. Either direct matches or .* wildcards are allowed. :warning-types has the same functionality as above; not specifying it means all warnings will throw, except for the ignored namespaces.

By default the generated JS output will be compatible with ES5 and all "newer" features will be transpiled to compatible code using polyfills. This is currently the safest default and supports most browsers in active use (including IE10+).

You can select other output options if you only care about more modern environments and want to keep the original code without replacements (eg. node, Chrome Extensions, …​)

Important

Note that this mostly affects imported JS code from npm or .js files from the classpath. CLJS will currently only generate ES5 output and is not affected by setting higher options.

You can configure this via :output-feature-set in :compiler-options. The older :language-out option should not be used, as :output-feature-set replaces it.
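For example (the feature set value is illustrative; pick the newest environment you actually need to support):

```clojure
{...
 :compiler-options {:output-feature-set :es2018}
 ...}
```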

This feature only works in shadow-cljs. It was officially rejected by the ClojureScript project. Code using it will still compile in plain CLJS, but only the official branches (e.g. :cljs) are honored there. It might still be supported one day but as of now it is not.

shadow-cljs lets you configure additional reader features in .cljc files. By default you can only use reader conditionals to generate separate code for :clj, :cljs or :cljr. In many CLJS builds however it is also desirable to select which code is generated based on your :target.

Example: Some npm packages only work when targeting the :browser, but you may have a ns that you also want to use in a :node-script build. This might happen frequently when trying to use Server-Side Rendering (SSR) with your React App. codemirror is one such package.

This namespace will compile fine for both builds (:node-script and :browser), but when trying to run the :node-script it will fail since the codemirror package tries to access the DOM. Since react-dom/server does not use refs, the init-cm function will never be called anyway.

While you can use :closure-defines to conditionally compile away the init-cm fn, you cannot use it to get rid of the extra :require. Reader conditionals let you do this easily.

(ns my.awesome.component
  (:require
    ["react" :as react]
    ;; NOTE: The order here matters. Only the first applicable
    ;; branch is used. If :cljs is used first it will still be
    ;; taken by the :server build
    #?@(:node [[]]
        :cljs [["codemirror" :as CodeMirror]])))

#?(:node
   ;; node platform override
   (defn init-cm [dom-node]
     :no-op)
   :cljs
   ;; default impl
   (defn init-cm [dom-node]
     ... actual impl ...))
...

:reader-features config examples

{...
 :builds
 {;; app build configured normally, no adjustments required
  :app
  {:target :browser
   ...}
  ;; for the server we add the :node reader feature
  ;; it will then be used instead of the default :cljs
  :server
  {:target :node-script
   :compiler-options
   {:reader-features #{:node}}}}}

The :server build will then no longer have the codemirror require and the init-cm function becomes a no-op:

(ns my.awesome.component
  (:require
    ["react" :as react]))

;; this will likely be removed as dead code if
;; it's never actually called anywhere
(defn init-cm [dom-node] :no-op)
...

Important

This feature is only available in .cljc files and will fail in .cljs files.

It is sometimes desirable to make small adjustments to the build configuration from the command line with values that can’t be added statically to the shadow-cljs.edn config or may change depending on the environment you are in.

You can pass additional config data via the --config-merge {:some "data"} command line option which will be merged into the build config. Data added from the CLI will override data from the shadow-cljs.edn file.

It is possible to use environment variables to set configuration values in shadow-cljs.edn but you should consider using --config-merge instead. If you really must use an environment variable you can do so via the #shadow/env "FOO" reader tag. You can also use the shorter #env.

Supported :as coercions are :int, :bool, :keyword, :symbol. Supplied :default values will not be converted and are expected to be in the correct type already.
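As a sketch, assuming an environment variable PORT exists, a :closure-defines entry could be populated like this (my.app/PORT is a hypothetical define):

```clojure
{:builds
 {:app
  {:target :browser
   :closure-defines
   ;; read PORT from the environment at compile time,
   ;; coerce it to an int, fall back to 3000 otherwise
   {my.app/PORT #shadow/env ["PORT" :as :int :default 3000]}}}}
```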

Important

The environment variables used when the shadow-cljs process was started are used. If a server process is used its environment variables will be used over those potentially set by other commands. This is mostly relevant during development but may be confusing. --config-merge does not have this limitation.

The :browser target produces output intended to run in a Browser environment. During development it supports live code reloading, REPL, CSS reloading. The release output will be minified by the Closure Compiler with :advanced optimizations.

The browser target outputs a lot of files, and a directory is needed for them all. You’ll need to serve
these assets with some kind of server, and the Javascript loading code needs to know the server-centric
path to these assets. The options you need to specify are:

:output-dir

The directory to use for all compiler output.

:asset-path

The relative path from web server’s root to the resources in :output-dir.

Your entry point javascript file and all related JS files will appear in :output-dir.

Warning

Each build requires its own :output-dir, you may not put multiple builds into the same directory.
This directory should also be exclusively owned by the build. There should be no other files in there.
While shadow-cljs won’t delete anything it is safer to leave it alone. Compilation
creates many more files than just the main entry point javascript file during development:
source maps, original sources, and generated sources.

The :asset-path is a prefix that gets added to the paths of module loading code inside of the
generated javascript. It allows you to output your javascript module to a particular subdirectory
of your web server’s root. The dynamic loading during development (hot code reload) and production
(code splitting) need this to correctly locate files.

Locating your generated files in a directory and asset path like this make it so that other assets
(images, css, etc.) can easily co-exist on the same server without accidental collisions.

For example: if your web server will serve the folder public/x when asked for the URI /x,
and your output-dir for a module is public/assets/app/js then your asset-path should be /assets/app/js.
You are not required to use an absolute asset path, but it is highly recommended.
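Continuing that example, the relevant build config fragment would look like this:

```clojure
{:builds
 {:app
  {:target :browser
   ;; files are written here ...
   :output-dir "public/assets/app/js"
   ;; ... and the browser loads them from this path
   :asset-path "/assets/app/js"}}}
```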

Modules configure how the compiled sources are bundled together and how the final .js files are generated. Each module declares a list of entry namespaces and from those a dependency graph is built. When using multiple modules the code is split so that the maximum amount of code is moved to the outer edges of the graph. The goal is to minimize the amount of code the browser has to load initially and load the rest on demand.

Tip

Don’t worry too much about :modules in the beginning. Start with one and split them later.

The :modules section of the config is always a map keyed by module ID. The module ID is also used
to generate the Javascript filename. Module :main will generate main.js in :output-dir.

The available options in a module are:

:entries

The namespaces that serve as the root nodes of the dependency graph for the output code of this module.

:init-fn

Fully qualified symbol pointing to a function that should be called when the module is loaded initially.

:depends-on

The names of other modules that must be loaded in order for this one to have everything it needs.

:prepend

String content that will be prepended to the js output. Useful for comments, copyright notice, etc.

:append

String content that will be appended to the js output. Useful for comments, copyright notice, etc.

:prepend-js

A string to prepend to the module output containing valid javascript that will be run through Closure optimizer.

:append-js

A string to append to the module output containing valid javascript that will be run through Closure optimizer.

shadow-cljs will follow the dependency graph from the root set of code entry points in the :entries
to find everything needed to actually compile and include in the output. Namespaces that are not required will not be included.
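A minimal single-module config might look like this (my.app/init is a placeholder for your own init function):

```clojure
{:builds
 {:app
  {:target :browser
   :output-dir "public/js"
   :asset-path "/js"
   :modules
   {:main {:init-fn my.app/init}}}}}
```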

The above config will create a public/js/main.js file. During development there will be an additional public/js/cljs-runtime directory with lots of files. This directory is not required for release builds.

Declaring more than one Module requires a tiny bit of additional static configuration so the Compiler can figure out how the Modules are related to each other and how you will be loading them later.

In addition to :entries you’ll need to declare which module depends on which (via :depends-on). How you structure this is entirely up to your needs and there is no one-size-fits-all solution unfortunately.

Say you have a traditional website with actual different pages.

www.acme.com - serving the homepage

www.acme.com/login - serving the login form

www.acme.com/protected - protected section that is only available once the user is logged in

One good configuration for this would be to have one common module that is shared between all the pages. Then one for each page.
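Sketching that layout (the namespaces are hypothetical):

```clojure
:modules
{:shared    {:entries []}
 :home      {:entries [my.app.home] :depends-on #{:shared}}
 :login     {:entries [my.app.login] :depends-on #{:shared}}
 :protected {:entries [my.app.protected] :depends-on #{:shared}}}
```

Each page then only loads shared.js plus its own module.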

Using the loader is very lightweight. It has a few dependencies which you may not be otherwise using. In practice using :module-loader true adds about 8KB gzip’d to the default module. This will vary depending on how much of goog.net and goog.events you are already using, and what level of optimization you use for your release builds.

The generated code is capable of using the standard ClojureScript cljs.loader API. See the
documentation on the ClojureScript
website for instructions.

The advantage of using the standard API is that your code will play well with others. This
may be of particular importance to library authors. The disadvantage is that the dynamic module
loading API in the standard distribution is currently somewhat less easy-to-use than the
support in shadow-cljs.

Release builds only: the code generated by the Closure Compiler with :advanced compilation will create a lot of global variables, which has the potential to create conflicts with other JS running on your page. To isolate the created variables the code can be wrapped in an anonymous function so that the variables only apply in that scope.

Release builds for :browser with a single module are wrapped in (function(){<the-code>}).call(this); by default, so no global variables are created.

When using multiple :modules (a.k.a code splitting) this is not enabled by default since each module must be able to access the variables created by the modules it depends on. The Closure Compiler supports an additional option to enable the use of an output wrapper in combination with multiple :modules named :rename-prefix-namespace. This will cause the Compiler to scope all "global" variables used by the build into one actual global variable. By default this is set to :rename-prefix-namespace "$APP" when :output-wrapper is set to true.

This will only create a single global variable (eg. MY_APP). Since every "global" variable will now be prefixed by it (e.g. MY_APP.a instead of just a) the code size can go up substantially, so it is important to keep this name short. Browser compression (e.g. gzip) helps reduce the overhead of the extra code but depending on the amount of global variables in your build this can still produce a noticeable increase.

Important

Note that the created variable isn’t actually useful directly. It will contain a lot of munged/minified properties. All exported (eg. ^:export) variables will still be exported into the global scope and are not affected by this setting. The setting only serves to limit the amount of global variables created, nothing else. Do not use it directly.

The :modules configuration may also be used to generate files intended to be used as a Web Workers.
You may declare any module as a Web Worker by setting :web-worker true. The
generated file will contain some additional bootstrap code which will load its dependencies
automatically. The way :modules work also ensures that code used only by the worker will also only
be in the final file for the worker. Each worker should have a dedicated CLJS namespace.
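A worker setup might be sketched like this (the namespaces are placeholders):

```clojure
{:builds
 {:app
  {:target :browser
   :output-dir "public/js"
   :asset-path "/js"
   ;; the HUD/devtools code is not worker-safe, keep it in :main
   :devtools {:browser-inject :main}
   :modules
   {:shared {:entries []}
    :main   {:init-fn my.app.main/init
             :depends-on #{:shared}}
    :worker {:entries [my.app.worker]
             :web-worker true
             :depends-on #{:shared}}}}}}
```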

The above configuration will generate worker.js which you can use to start the Web Worker.
It will have all code from the :shared module available (but not :main). The code in the
my.app.worker namespace will only ever execute in the worker. Worker generation happens in
both development and release modes.

Note that the empty :entries [] in the :shared module will make it collect all the code shared between the :main and :worker modules.

The :devtools {:browser-inject :main} is currently required to tell the compiler where the browser devtools/hud should be added to. It defaults to adding them to the "base" module which would be :shared in this case. Since that contains code not compatible with the Worker environment we need to move it.

In a web setting it is desirable to cache .js files for a very long time to avoid extra requests. It is common practice to generate a unique name for the .js file for every released version. This changes the URL used to access it and thereby makes it safe to cache forever.
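One manual approach (just a sketch) is to put the version into the module names themselves, since the module ID determines the filename:

```clojure
:modules
{:main.v1  {:init-fn my.app/init}
 :extra.v1 {:entries [my.app.extra]
            :depends-on #{:main.v1}}}
```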

This would create the main.v1.js and extra.v1.js files in public/js instead of the usual main.js and extra.js.

You can use manual versions or something automated like the git sha at the time of the build. Just make sure that you bump whatever it is once you shipped something out to the user since with caching they won’t be requesting newer versions of old files.

You can add :module-hash-names true to your build config to automatically create a MD5
signature for each generated output module file. That means that a :main module will generate
a main.<md5hash>.js instead of just the default main.js.

:module-hash-names true will include the full 32-length md5 hash, if you prefer a shorter version you can specify a
number between 1-32 instead (eg. :module-hash-names 8). Be aware that shortening the hash may increase the chances
of generating conflicts. I recommend using the full hash.

shadow-cljs generates a manifest.edn file in the configured :output-dir.
This file contains a description of the module config together with an extra :output-name property which
maps the original module name to actual filename (important when using the :module-hash-names feature).

The manifest contains all :modules sorted in dependency order. You can use it to map the :module-id back to the
actual generated filename.

Development builds also produce this file and you may check it for modifications to know when a new build completed. :module-hash-names does not apply during development so you’ll get the usual filenames.

You can configure the name of the generated manifest file via the :build-options :manifest-name entry. It defaults to
manifest.edn. If you configure a filename with .json ending the output will be JSON instead of EDN. The file will
be relative to the configured :output-dir.

The :browser target now uses a HUD to display a loading indicator when a build is started. It will also display warnings and errors if there are any.

You can disable it completely by setting :hud false in the :devtools section.

You may also toggle certain features by specifying which ones you care about, eg. :hud #{:errors :warnings}. This will show errors/warnings but no progress indicator. Available options are :errors, :warnings and :progress. Only the options included will be enabled, all others will be disabled.

Warnings include a link to source location which can be clicked to open the file in your editor. For this a little bit of config is required.

You can either configure this in your shadow-cljs.edn config for the project or globally in your home directory under ~/.shadow-cljs/config.edn.

:open-file-command configuration

{:open-file-command
 ["idea" :pwd "--line" :line :file]}

The :open-file-command expects a vector representing a very simple DSL. Strings are kept as they are and keywords are replaced by their respective values. A nested vector can be used in case you need to combine multiple params, using a clojure.core/format style pattern.
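For example, assuming emacsclient is on your PATH, a nested vector can combine the line and column into a single argument:

```clojure
{:open-file-command
 ["emacsclient" "-n" ["+%s:%s" :line :column] :file]}
```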

The Browser devtools can also reload CSS for you. This is enabled by default and in most cases requires no additional
configuration when you are using the built-in development HTTP servers.

Any stylesheet included in a page will be reloaded if modified on the filesystem. Prefer using absolute paths but relative paths should work as well.

Example HTML snippet

<link rel="stylesheet" href="/css/main.css"/>

Example Hiccup since we aren’t savages

[:link {:rel "stylesheet" :href "/css/main.css"}]

Using the built-in dev HTTP server

:dev-http {8000 "public"}

This will cause the browser to reload /css/main.css when public/css/main.css is changed.

shadow-cljs currently provides no support for directly compiling CSS but the usual tools will work and should
be run separately. Just make sure the output is generated into the correct places.

When you are not using the built-in HTTP Server you can specify :watch-dir instead which should be a path to the
document root used to serve your content.

Example :watch-dir config

{...
 :builds
 {:app {...
        :devtools {:watch-dir "public"}}}}

When your HTTP Server is serving the files from a virtual directory and the filesystem paths don’t exactly match the path used in the HTML you may adjust the path by setting :watch-path which will be used as a prefix.

By default the devtools client will attempt to connect to the shadow-cljs process via the configured HTTP server (usually localhost). If you are using a reverse proxy to serve your HTML that might not be possible. You can set :devtools-url to configure which URL to use.

shadow-cljs will then use the :devtools-url as the base when making requests. It is not the final URL so you must ensure that all requests starting with the path you configured (eg. /shadow-cljs/*) are forwarded to the host shadow-cljs is running on.

Incoming Request to Proxy

https://some.host/shadow-cljs/ws/foo/bar?asdf

must forward to

http://localhost:9630/foo/bar?asdf

The client will make WebSocket requests as well as normal XHR requests to load files. Ensure that your proxy properly upgrades WebSockets.

Important

The requests must be forwarded to the main HTTP server, not the one configured in the build itself.

The :target :react-native produces code that is meant to integrate into the default react-native tooling (eg. metro). Tools like expo which wrap those tools should automatically work and require no additional setup.

You will need the same basic main configuration as in other targets (like :source-paths); the build-specific config is very minimal and requires at least 2 options (besides :target itself):

:init-fn

(required). The namespace-qualified symbol of your apps init function. This function will be called once on startup and should probably render something.
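A minimal build config might look like this (demo.app/init is a placeholder for your own init function):

```clojure
{:builds
 {:app
  {:target :react-native
   :init-fn demo.app/init
   ;; react-native tooling will pick up app/index.js from here
   :output-dir "app"}}}
```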

When compiled this results in an app/index.js file intended to be used as the entry point for the react-native tools. During development the :output-dir will contain many more files but you should only reference the generated app/index.js directly. A release build will only generate the optimized app/index.js and requires no additional files.

There are two ways to use react-native: "plain" react-native, which allows you to use native code and libraries, and the one "wrapped" in expo (described below). All the steps described above are sufficient to start using shadow-cljs with plain react-native.

expo requires that a React Component is registered on startup. This can be done manually or by using the shadow.expo/render-root function, which takes care of creating the Component and instead directly expects a React Element instance to start rendering.

init is called once on startup. Since the example doesn’t need to do any special setup it just calls start directly. start will be called repeatedly when watch is running each time after the code changes were reloaded. The reagent.core/as-element function can be used to generate the required React Element from the reagent hiccup markup.

There is built-in support for generating code that is intended to be used as a stand-alone
script, and also for code that is intended to be used as a library. See the
section on common configuration for the base settings needed in
a configuration file.
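A :node-script build matching the description below might be configured like this (the namespace and paths are placeholders):

```clojure
{:builds
 {:script
  {:target :node-script
   ;; called with the command line args on startup
   :main demo.script/main
   :output-to "out/demo-script/script.js"}}}
```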

When compiled this results in a standalone out/demo-script/script.js file intended to be called
via node script.js <command line args>. When run it will call (demo.script/main <command line args>)
function on startup. This only ever produces the file specified in :output-to. Any other support files
(e.g. for development mode) are written to a temporary support directory.

Many libraries hide state or do actions that prevent hot code reloading from working well. There
is nothing the compiler can do to improve this since it has no idea what those libraries are doing.
Hot code reload will only work well in situations where you can cleanly "stop" and "restart" the
artifacts used.

In addition you may specify :exports-fn as a fully qualified symbol. This should point to a function with no arguments which should return a JS object (or function). This function will only ever be called ONCE as node caches the return value.
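A sketch of what such a function might look like (names are hypothetical), combined with :exports-fn demo.lib/exports-fn in the build config:

```clojure
(ns demo.lib)

(defn exports-fn []
  ;; the returned JS object becomes the library's exports
  #js {:hello (fn [] "hello")
       :add   (fn [a b] (+ a b))})
```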

There is an additional target that is intended to integrate CLJS into an existing JS project. The output can seamlessly integrate with existing JS tools (eg. webpack, browserify, babel,
create-react-app, …​) with little configuration.

:output-dir

The path the output files are written to. Defaults to node_modules/shadow-cljs.

:entries

(required) A vector of namespace symbols that should be compiled

Example shadow-cljs.edn config

{...
 :builds
 {:code
  {:target :npm-module
   :entries [demo.foo]}}}

If you use the default :output-dir of "node_modules/shadow-cljs" you can access the declared namespaces by using require("shadow-cljs/demo.foo") in JS. When using something not in node_modules you must include them using a relative path. With :output-dir "out" that would be require("./out/demo.foo") from your project root.

If you plan to distribute code on NPM, then you may want to use the :node-library target instead since it allows for a finer level of control over exports and optimization.

Unlike the :node-library target, the module target does not know what you want to call the
symbols you’re exporting, so it just exports them as-is. If you use advanced compilation, then everything
will get a minified munged name!

This is easy to remedy, simply add :export metadata on any symbols that you want to preserve:

(ns demo.foo)

(def ^:export foo 5.662)

(defn ^:export bar [] ...)

This is a standard annotation understood by ClojureScript that prevents Google Closure from renaming an artifact. JS code will still be able to access them after optimizations. Without the ^:export hint the Closure Compiler would likely remove or rename them.

shadow-cljs provides a few utility targets to make building tests a little easier.

All test targets generate a test runner and automatically add all namespaces matching the configurable :ns-regexp. The default test runners were built for cljs.test but you can create custom runners if you prefer to use other test frameworks.

The default :ns-regexp is "-test$", so your first test could look like:

In the Clojure world it is common to keep test files in their own source paths so the above example assumes you have configured :source-paths ["src/main" "src/test"] in your shadow-cljs.edn config. Your usual app code goes into src/main and the tests go into src/test. This however is optional and it is totally fine to keep everything in src and just use :source-paths ["src"].

This target will create a test runner including all test namespaces matching the given regular expression.

The relevant configuration options are:

:target

:node-test

:output-to

The final output file that will be used to run tests.

:ns-regexp

(optional) A regular expression matching namespaces against project files. This only scans files, and will not scan jars. Defaults to "-test$".

:autorun

(boolean, optional) Run the tests via node when a build completes. This is mostly meant to be used in combination with watch. The node process exit code will not be returned as that would have to forcefully kill the running JVM.

:main

(qualified symbol, optional) Function called on startup to run the tests, defaults to shadow.test.node/main which runs tests using cljs.test.

This target is meant for gathering up namespaces that contain tests (based on a filename pattern match),
and triggering a test runner. It contains a built-in runner that will automatically scan for cljs.test
tests and run them.

The relevant configuration options are:

:target

:browser-test

:test-dir

A folder in which to output files. See below.

:ns-regexp

(optional) A regular expression matching namespaces against project files. This only scans files, and
will not scan jars. Defaults to "-test$".

:runner-ns

(optional) A namespace that can contain a start, stop, and init function. Defaults to
shadow.test.browser.

The normal :devtools options are supported, so you will usually create an http server to serve the files.
In general you will need a config that looks like this:

index.html - Generated if and only if there is not already an index.html file present. By default the generated
file loads the tests and runs init in the :runner-ns. You may edit it or add a custom version that will
not be overwritten.

js/test.js - The Javascript tests. The tests will always have this name. The entries for the module are
auto-generated.

When you want to run your CLJS tests against a browser on some kind of CI server you’ll need to
be able to run the tests from a command line and get back a status code. Karma is a well-known
and supported test runner that can do this for you, and shadow-cljs includes a target that
can add the appropriate wrappers around your tests so they will work in it.

Most npm packages will also include some instructions on how to use the actual code. The “old” CommonJS style just has require calls which translate directly:

var react = require("react");

(ns my.app
  (:require ["react" :as react]))

Whatever "string" parameter is used when calling require is transferred to the :require as-is. The :as alias is up to you. Once we have that we can use the code like any other CLJS namespace!

(react/createElement "div" nil "hello world")

In shadow-cljs: always use the ns form and whatever :as alias you provided. You may also use :refer and :rename. This is different from what :foreign-libs/CLJSJS does, where you include the thing in the namespace but then use a global js/Thing in your code.

Some packages just export a single function which you can call directly by
using (:require ["thing" :as thing]) and then (thing).
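left-pad is a classic example of such a package (this assumes it is installed via npm):

```clojure
(ns my.app
  (:require ["left-pad" :as left-pad]))

(left-pad "5" 3 "0")
;; => "005"
```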

More recently some packages started using ES6 import statements in their examples. Those also translate pretty much 1:1 with one slight difference related to default exports.

The following table can be used for translation:

Important

This table only applies if the code you are consuming is packaged as actual ES6+ code. If the code is packaged as CommonJS instead the :default may not apply. See the section below for more info.

Table 1. ES6 Import to CLJS Require

ES6 Import

CLJS Require

import defaultExport from "module-name";

(:require ["module-name" :default defaultExport])

import * as name from "module-name";

(:require ["module-name" :as name])

import { export } from "module-name";

(:require ["module-name" :refer (export)])

import { export as alias } from "module-name";

(:require ["module-name" :rename {export alias}])

import { export1 , export2 } from "module-name";

(:require ["module-name" :refer (export1 export2)])

import { export1 , export2 as alias2 , […​] } from "module-name";

(:require ["module-name" :refer (export1) :rename {export2 alias2}])

import defaultExport, { export [ , […​] ] } from "module-name";

(:require ["module-name" :refer (export) :default defaultExport])

import defaultExport, * as name from "module-name";

(:require ["module-name" :as name :default defaultExport])

import "module-name";

(:require ["module-name"])

Previously we were often stuck using bundled code which included a lot of code we didn’t actually need. Now we’re in a better situation: some libraries are packaged in ways that allow you to include only the parts you need, leading to much less code in your final build.

react-virtualized is a great example:

// You can import any component you want as a named export from 'react-virtualized', eg
import { Column, Table } from 'react-virtualized'

// But if you only use a few react-virtualized components,
// And you're concerned about increasing your application's bundle size,
// You can directly import only the components you need, like so:
import AutoSizer from 'react-virtualized/dist/commonjs/AutoSizer'
import List from 'react-virtualized/dist/commonjs/List'

With our improved support we can easily translate this to:

(ns my-ns
  ;; all
  (:require ["react-virtualized" :refer (Column Table)])
  ;; OR one by one
  (:require ["react-virtualized/dist/commonjs/AutoSizer" :default virtual-auto-sizer]
            ["react-virtualized/dist/commonjs/List" :default virtual-list]))

The :default option is currently only available in shadow-cljs, you can
vote here to hopefully make it standard. You can always use :as alias and then call alias/default if you prefer to stay compatible with standard CLJS in the meantime.

Default exports are a newer addition in ECMAScript Modules and do not exist in CommonJS code. Sometimes you will see examples of import Foo from "something" when the code is actually CommonJS code. In these cases (:require ["something" :default Foo]) will not work and (:require ["something" :as Foo]) must be used instead.

If a :require does not seem to work properly it is recommended to try looking at it in the REPL.

Since printing arbitrary JS objects is not always useful you can use (js/console.dir x) instead to get a more useful representation in the browser console. goog/typeOf may also be useful at times. If goog/typeOf reports "function", using :default would not work, since :default is basically just syntax sugar for x/default.

shadow-cljs supports several different ways to include npm packages in your build. They are configurable via the :js-options :js-provider setting. Each :target usually sets the one appropriate for your build, so most often you won’t need to touch this setting.

Currently there are 3 supported JS Providers:

:require

Maps directly to the JS require("thing") function call. It is the default for all node.js targets since it can resolve require natively at runtime. The included JS is not processed in any way.

:shadow

Resolves the JS via node_modules and includes a minified version of each referenced file in the build. It is the default for the :browser target. node_modules sources do not go through :advanced compilation.

:closure

Resolves similarly to :shadow but attempts to process all included files via the Closure Compiler CommonJS/ES6 rewrite facilities. They will also be processed via :advanced compilation.

:shadow vs :closure

Ideally we want to use :closure as our primary JS Provider since that will run the entire application through :advanced giving us the most optimized output. In practice however lots of code available via npm is not compatible with the aggressive optimizations that :advanced compilation does. They either fail to compile at all or expose subtle bugs at runtime that are very hard to identify.

:shadow is sort of a stopgap solution that only processes code via :simple and achieves much more reliable support while still producing reasonably optimized code. The output is comparable to (and often better than) what other tools like webpack generate.

Until support in Closure gets more reliable :shadow is the recommended JS Provider for :browser builds.

By default shadow-cljs will resolve all (:require ["thing" :as x]) requires following the npm convention. This means it will look at <project>/node_modules/thing/package.json and follow the code from there. To customize how this works shadow-cljs exposes a :resolve config option that lets you override how things are resolved.

Say you already have React included in your page via a CDN. You could just start using js/React again but we stopped doing that for a good reason. Instead you can continue to use (:require ["react" :as react]) but configure how "react" resolves!
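A sketch of such a config, telling shadow-cljs to use the existing React global instead of the npm package:

```clojure
{:builds
 {:app
  {:target :browser
   :js-options
   {:resolve {"react" {:target :global
                       :global "React"}}}}}}
```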

Sometimes you want more control over which npm package is actually used depending on your build. You can "redirect" certain requires from your build config without changing the code. This is often useful if you either don’t have access to the sources using such packages or you just want to change it for one build.
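For example, to redirect every "react" require to preact for one build (a sketch; this assumes preact provides a compatible API for your usage):

```clojure
{:builds
 {:app
  {:target :browser
   :js-options
   {:resolve {"react" {:target :npm
                       :require "preact"}}}}}}
```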

The :shadow-js and :closure providers have full control over :resolve and everything mentioned above works without any downsides. The :js-provider :require however is more limited: only the initial require can be influenced, since the standard require is in control after that. This means it is not possible to influence what a package might require internally. It is therefore not recommended to use :resolve with targets that use require directly (eg. :node-script).

The above works fine in the Browser since every "react" require will be replaced, including the "react" require that "react-table" has internally. With :js-provider :require however a require("react-table") will be emitted and node will be in control of how that is resolved, meaning it will resolve to the standard "react" and not the "preact" we configured.

By default shadow-cljs will only look at the <project-dir>/node_modules directory when resolving JS packages. This can be configured via the :js-package-dirs option in :js-options. This can be applied globally or per build.

Relative paths will be resolved relative to the project root directory. Paths will be tried from left to right and the first matching package will be used.
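Example (the extra path is hypothetical):

```clojure
{:source-paths ["src/main"]
 :dependencies []
 ;; checked left to right, first match wins
 :js-package-dirs ["node_modules" "../shared/node_modules"]}
```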

DANGER: This feature is an experiment! It is currently only supported in shadow-cljs and other CLJS tools will yell at you if you attempt to use it. Use at your own risk. The feature was initially rejected from CLJS core but I think it is useful and should not have been dismissed without further discussion.

CLJS has an alternate implementation which in turn is not supported by shadow-cljs. I found this implementation to be lacking in certain aspects so I opted for the different solution. Happy to discuss the pros/cons of both approaches though.

We covered how npm packages are used but you may be working on a codebase that already has lots of plain JavaScript and you don’t want to rewrite everything in ClojureScript just yet. shadow-cljs provides full interop between JavaScript and ClojureScript, which means your JS can use your CLJS and your CLJS can use your JS.

There are only a few conventions you need to follow in order for this to work reliably but chances are that you are already doing that anyways.

For string requires the extension .js will be added automatically but you can specify the extension if you prefer. Note that currently only .js is supported though.

Absolute requires like /some-library/components/foo mean that the compiler will look for a some-library/components/foo.js on the classpath; unlike node which would attempt to load the file from the local filesystem. The same classpath rules apply so the file may either be in your :source-paths or in some third-party .jar library you are using.

Relative requires are resolved by first looking at the current namespace and then resolving a relative path from that name. In the above example we are in demo/app.cljs, so the ./bar require resolves to demo/bar.js, making it identical to (:require ["/demo/bar"]).
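As a small sketch (namespace and file names are illustrative), a relative require from a CLJS namespace:

```clojure
;; src/main/demo/app.cljs
(ns demo.app
  ;; "./bar" is looked up on the classpath as demo/bar.js,
  ;; identical to (:require ["/demo/bar" :as bar])
  (:require ["./bar" :as bar]))
```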

Important

The files do not have to be physically located in the same directory. The lookup for the file happens on the classpath instead. This is unlike node, which expects relative requires to always resolve to physical files.

It is expected that the classpath only contains JavaScript that can be consumed without any pre-processing by the Compiler. npm has a very similar convention.

The Closure Compiler is used for processing all JavaScript found on the classpath using its ECMASCRIPT_NEXT language setting. What exactly this setting means is not well documented but it mostly represents the next generation JavaScript code which might not even be supported by most browsers yet. ES6 is very well supported as well as most ES7 features. Similarly to standard CLJS this will be compiled down to ES5 with polyfills when required.

Since there are many popular JavaScript dialects (JSX, CoffeeScript, etc) that are not directly parsable by the Closure Compiler we need to pre-process them before putting them onto the classpath. babel is commonly used in the JavaScript world so we are going to use babel to process .jsx files as an example here.
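A minimal sketch, assuming babel is installed via npm and your .jsx sources live in src/js (paths and the preset name are illustrative): run babel so it emits plain .js files into a directory that is listed in your :source-paths.

```
# .babelrc next to the sources (illustrative)
#   { "presets": ["@babel/preset-react"] }
#
# emit plain .js files into a directory on your :source-paths
npx babel --watch src/js --out-dir src/gen
```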

The JS sources can access all your ClojureScript (and the Closure Library) directly by importing their namespaces with a goog: prefix which the Compiler will rewrite to expose the namespace as the default ES6 export.
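For example, a plain .js file on the classpath could import a CLJS namespace like this (the import is rewritten by the shadow-cljs compiler, not resolvable by plain node):

```js
// some_file.js on the classpath
// "goog:" imports are rewritten by the compiler; the namespace
// is exposed as the default ES6 export
import cljs, { keyword } from "goog:cljs.core";

cljs.println("hello from JS", keyword("foo"));
```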

CLJSJS is an effort to package JavaScript libraries so they can be used from within ClojureScript.

Since shadow-cljs can access npm packages directly we do not need to rely on re-packaged CLJSJS packages.

However many CLJS libraries are still using CLJSJS packages and they would break with shadow-cljs since it doesn’t support those anymore. It is however very easy to mimic those cljsjs namespaces since they are mostly built from npm packages anyway. It just requires one shim file that maps the cljsjs.thing back to its original npm package and exposes the expected global variable.
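A shim for cljsjs.react might look roughly like this (a sketch following the shadow-cljsjs approach):

```clojure
(ns cljsjs.react
  (:require ["react" :as react]))

;; expose the global variable CLJSJS-based code expects
(js/goog.exportSymbol "React" react)
```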

Since this would be tedious for everyone to do manually I created the shadow-cljsjs
library which provides just that. It does not include every package but I’ll keep adding
them and contributions are very welcome as well.

Note

The shadow-cljsjs library only provides the shim files. You’ll still need to
npm install the actual packages yourself.

CLJSJS packages basically just take the package from npm and put them into a .jar and re-publish them via clojars. As a bonus they often bundle Externs. The compiler otherwise does nothing with these files and only prepends them to the generated output.

This was very useful when we had no access to npm directly but has certain issues since not all packages are easily combined with others. A package might rely on react but instead of expressing this via npm they bundle their own react. If you are not careful you could end up including 2 different react versions in your build which may lead to very confusing errors or at the very least increase the build size substantially.

Apart from that not every npm package is available via CLJSJS and keeping the package versions in sync requires manual work, which means packages are often out of date.

shadow-cljs does not support CLJSJS at all to avoid conflicts in your code. One library might attempt to use the "old" cljsjs.react while another uses the newer (:require ["react"]) directly. This would again lead to 2 versions of react on your page.

So the only thing we are missing are the bundled Externs. In many instances these are not required due to improved externs inference. Often those Externs are generated using third-party tools which means they are not totally accurate anyways.

Development mode always outputs individual files for each namespace so that they can be hot loaded
in isolation. When you’re ready to deploy code to a real server you want to run the Closure Compiler
on it to generate a single minified result for each module.

By default the release mode output file should just be a drop-in replacement for the
development mode file: there is no difference in the way you include them in your HTML. You
may use filename hashing to improve caching characteristics on browser targets.

Usually you won’t need to add any extra configuration to create a release version for your build. The default config already captures everything necessary and should only require extra configuration if you want to override the defaults.

Each :target already provides good defaults optimized for each platform so you’ll have less to worry about.

Since we want builds to be fully optimized by the Closure Compiler :advanced compilation we need to deal with Externs. Externs represent pieces of code that are not included when doing :advanced compilation. :advanced works by doing whole program optimizations but some code we just won’t be able to include so Externs inform the Compiler about this code. Without Externs the Compiler may rename or remove some code that it shouldn’t.

Typically all JS Dependencies are foreign and won’t be passed through :advanced and thus require Externs.

Tip

Externs are only required for :advanced, they are not required in :simple mode.

With :auto the compiler will perform additional checks at compile time for your files only. It won’t warn you about possible externs issues in library code. :all will enable it for everything but be aware that you may get a lot of warnings.

When enabled you’ll get warnings whenever the Compiler cannot figure out whether you are working with JS or CLJS code.

In :advanced the compiler will be renaming .baz to something "shorter" and Externs inform the Compiler that this is an external property that should not be renamed.

shadow-cljs can generate the appropriate externs if you add a typehint to the object you are performing native interop on.

Type-hint to help externs generation

(defn wrap-baz [x]
  (.baz ^js x))

The ^js typehint will cause the compiler to generate proper externs and the warning will go away. The property is now safe from renaming.

Multiple interop calls

(defn wrap-baz [x]
  (.foo ^js x)
  (.baz ^js x))

It can get tedious to annotate every single interop call so you can annotate the variable binding itself. It will be used in the entire scope for this variable. Externs for both calls will still be generated.

Annotate x directly

(defn wrap-baz [^js x]
  (.foo x)
  (.baz x))

Important

Don’t annotate everything with ^js. Sometimes you may be doing interop on CLJS or ClosureJS objects. Those do not require externs. If you are certain you are working with a CLJS Object prefer using the ^clj hint.
It is not the end of the world when using ^js incorrectly but it may affect some optimizations when a variable is not renamed when it could be.

Calls on globals do not require a typehint when using direct js/ calls.

Writing Externs by hand can be challenging, so shadow-cljs provides a more convenient way to write them. In combination with shadow-cljs check <your-build> you can quickly add the missing Externs.

Start by creating an externs/<your-build>.txt, so build :app would be externs/app.txt. In that file each line should be one word specifying a JS property that should not be renamed. Global variables should be prefixed with global:

Example externs/app.txt

# this is a comment
foo
bar
global:SomeGlobalVariable

In this example the compiler will stop renaming something.foo() and something.bar(), and will treat SomeGlobalVariable as an external global.

The Closure Compiler supports removing unwanted code by name. This allows removing code that normal dead-code removal can’t or won’t remove. This is quite dangerous as it can remove code you actually care about, but it can easily remove a lot of dev-only code. It is grouped into 4 separate options of which pretty much only :strip-type-prefixes is relevant to ClojureScript, but the others may be useful as well.
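A sketch of what this might look like (the build id and stripped namespace prefix are illustrative):

```clojure
;; shadow-cljs.edn
{:builds
 {:app
  {:compiler-options
   {:strip-type-prefixes #{"cljs.pprint"}}}}}
```

Here all code under the cljs.pprint prefix would be stripped from the release build.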

DANGER: Be careful with these options. They apply to your entire build and may remove code you actually need. You may accidentally remove code in libraries not written by you. Always consider other options before using this.

You can run npx shadow-cljs server inside the Terminal provided by IntelliJ and use Clojure REPL → Remote Run Configuration to connect to the provided nREPL server. Just select the "Use port from nREPL file" option in Cursive Clojure REPL → Remote or configure a fixed nREPL port if you prefer.

Note that the Cursive REPL when first connected always starts out as a CLJ REPL. You can switch it to CLJS by calling (shadow/repl :your-build-id). This will automatically switch the Cursive option as well. You can type :cljs/quit to drop back down to the CLJ REPL.

Note

You cannot switch from CLJ→CLJS via the Cursive select box. Make sure you use the call above to switch.

This section is written for CIDER version 0.20.0 and above. Ensure your Emacs environment has this version of the cider package or later. Refer to the CIDER documentation for full installation details.

Proto REPL is mostly intended for Clojure development so most features do not work for ClojureScript. It is however possible to use it for simple evals.

You need to setup a couple of things to get it working.

1) Create a user.clj in one of your :source-paths.

(ns user)

(defn reset [])

The file must define the user/reset fn since Proto REPL will call that when connecting. If user/reset is not found it will call tools.namespace which destroys the running shadow-cljs server. We don’t want that. You could do something here but we don’t need to do anything for CLJS.

5) Run the Atom Command Proto Repl: Remote Nrepl Connection and connect to localhost and the port you configured

6) Eval (shadow.cljs.devtools.api/watch :your-build) (if you used server in 4)

7) Eval (shadow.cljs.devtools.api/nrepl-select :your-build). The REPL connection is now in CLJS mode, meaning that everything you eval will be eval’d in JS. You can eval :repl/quit to get back to Clojure Mode. If you get [:no-worker :browser] you need to start the watch first.

8) Before you can eval CLJS you need to connect your client (eg. your Browser when building a :browser App).

9) Eval some JS, eg. (js/alert "foo"). If you get There is no connected JS runtime the client is not connected properly. Otherwise the Browser should show an alert.

Chlorine connects Atom to a Socket REPL, but also tries to refresh namespaces. So first, open the Chlorine package config and check that the configuration Should we use clojure.tools.namespace to refresh is set to simple, otherwise it’ll destroy the running shadow-cljs server.

Once you checked that the configuration is right, you can start your shadow app (replace app with whatever build):

$ shadow-cljs watch app

Now, all you have to do is to run the atom command Chlorine: Connect Clojure Socket Repl. This will connect a REPL to evaluate Clojure code. Next you need to run Chlorine: Connect Embedded, and it’ll connect the ClojureScript REPL too.

Now, you can use the Chlorine: Evaluate…​ commands to evaluate any Clojure or ClojureScript code. It’ll evaluate .clj files as Clojure, and .cljs files as ClojureScript.

Once the app is loaded in the browser, and you see JS runtime connected in the terminal where you started the app, Calva can connect to its REPL. Open the project in VS Code and Calva will by default try to auto connect and prompt you with a list of builds read from shadow-cljs.edn. Select the right one (:app in this example) and Calva’s Clojure and ClojureScript support is activated.

(If you already have the project open in VS Code when you start the app, issue the Calva: Connect to a Running REPL Server in the Project command.)

Switch between Clojure and Clojurescript repl ctrl+alt+c ctrl+alt+t (or click the green cljc/clj button in the status bar). This determines both which repl is backing the editor and what terminal repl is being accessed, see above.

Fireplace.vim is a Vim/Neovim plug-in which provides Clojure REPL integration by acting as an nREPL client. When combined with Shadow-CLJS, it also provides ClojureScript REPL integration.

This guide uses as an example the app created in the official Shadow-CLJS Quick Start guide, and therefore refers to a few configuration items in the app’s shadow-cljs.edn. That being said, these configuration items are fairly generic so should be applicable to other apps with minor modifications.

As an nREPL client, Fireplace.vim depends on CIDER-nREPL (which is nREPL middleware that provides common, editor-agnostic REPL operations) therefore you need to include this dependency in ~/.shadow-cljs/config.edn or shadow-cljs.edn (as shown in the next sub-section.) Shadow-CLJS will inject the required CIDER-nREPL middleware once it sees this dependency.
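A sketch of the dependency entry (the version shown is illustrative; use a current cider-nrepl release):

```clojure
;; shadow-cljs.edn or ~/.shadow-cljs/config.edn
{:dependencies
 [[cider/cider-nrepl "0.28.5"]]}
```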

Once that is done, start the app (note the Shadow-CLJS build ID, frontend, specified in shadow-cljs.edn):

npx shadow-cljs watch frontend

Open the app in a browser at http://localhost:8080/. Without this step, you would get the following error message from Fireplace.vim if you attempt to connect to the REPL server from within Vim/Neovim:

No application has connected to the REPL server.
Make sure your JS environment has loaded your compiled ClojureScript code.

This creates a Clojure (instead of ClojureScript) REPL session. Execute the following command to add ClojureScript support to the session (note the Shadow-CLJS build ID, frontend, specified in shadow-cljs.edn):

Sometimes shadow-cljs can fail to start properly. The errors are often very confusing and hard to identify. Most commonly this is caused by dependency conflicts on some of the important dependencies. When using just shadow-cljs.edn to manage your :dependencies it provides a few extra checks to protect against these kinds of errors, but when using deps.edn or project.clj these protections cannot be applied, so these errors happen more often with those tools.

Generally the important dependencies to watch out for are

org.clojure/clojure

org.clojure/clojurescript

org.clojure/core.async

com.google.javascript/closure-compiler-unshaded

Each shadow-cljs version is only tested with one particular combination of versions and it is recommended to stick with that version set for best compatibility. It might work when using different versions but if you encounter any kind of weird issues consider fixing your dependency versions first.

The way to diagnose these issues vary by tool, so please refer to the appropriate section for further info.

Generally if you want to be sure you can just declare the matching dependency versions directly together with your chosen shadow-cljs version, but that means you must also update those versions whenever you upgrade shadow-cljs. Correctly identifying where unwanted dependency versions come from may be more work but will make future upgrades easier.

shadow-cljs will likely always be on the very latest version for all the listed dependencies above so if you need to stick with an older dependency you might need to stick with an older shadow-cljs version as well.

shadow-cljs is very often several versions ahead on the com.google.javascript/closure-compiler-unshaded version it uses, so if you are depending on the version org.clojure/clojurescript normally supplies that might cause issues. Make sure the thheller/shadow-cljs version is picked over the version preferred by org.clojure/clojurescript.

If you want to make your life easier just use shadow-cljs.edn to manage your dependencies if you can. It is much less likely to run into these problems, or will at least warn you directly.

If you have ensured that you are getting all the correct versions but things still go wrong please open a Github Issue with a full problem description including your full dependency list.

When using deps.edn to manage your dependencies via the :deps key in shadow-cljs.edn it is recommended to use the clj tool directly for further diagnosis. First you need to check which aliases you are applying via shadow-cljs.edn. So if you are setting :deps {:aliases [:dev :cljs]} you’ll need to specify these aliases when running further commands.

First of all you should ensure that all dependencies directly declared in deps.edn have the expected version. Sometimes transitive dependencies can cause the inclusion of problematic versions. You can list all dependencies via:

Listing all active dependencies

$ clj -A:dev:cljs -Stree

This will list all the dependencies. Tracking this down is a bit manual but you’ll need to verify that you get the correct versions for the dependencies mentioned above.

Please refer to the official tools.deps documentation for further information.

When using project.clj to manage your dependencies you’ll need to specify your configured :lein profiles from shadow-cljs.edn when using lein directly to diagnose the problem. For example :lein {:profiles "+cljs"} would require lein with-profiles +cljs for every command.
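You can then inspect the resolved dependency tree via lein (the profile name is illustrative):

```
$ lein with-profiles +cljs deps :tree
```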

This will usually list all the current conflicts at the top and provide suggestions with the dependency tree at the bottom. The suggestions aren’t always fully accurate so don’t get mislead and don’t add exclusions to the thheller/shadow-cljs artifact.

Getting a CLJS REPL working can sometimes be tricky and a lot can go wrong since all the moving parts can be quite complicated. This guide hopes to address the most common issues that people run into and how to fix them.

A REPL in Clojure does exactly what the name implies: Read one form, Eval it, Print the result, Loop to do it again.

In ClojureScript however things are a bit more complicated since compilation happens on the JVM but the results are eval’d in a JavaScript runtime. There are a couple more steps that need to happen in order to "emulate" the plain REPL experience. Although things are implemented a bit differently in shadow-cljs over regular CLJS the basic principles remain the same.

First you’ll need a REPL client. This could just be the CLI (eg. shadow-cljs cljs-repl app) or your Editor connected via nREPL. The Client will always talk directly to the shadow-cljs server and it’ll handle the rest. From the Client side it still looks like a regular REPL but there are a few more steps happening in the background.

1) Read: It all starts with reading a singular CLJS form from a given InputStream. That is either a blocking read directly from stdin or read from a string in case of nREPL. A stream of characters is turned into actual data structures, "(+ 1 2)" (a string) becomes (+ 1 2) (a list).

2) Compile: That form is then compiled on the shadow-cljs JVM side and transformed to a set of instructions.

3) Transfer Out: Those instructions are transferred to a connected JavaScript runtime. This could be a Browser or a node process.

4) Eval: The connected runtime will take the received instructions and eval them.

5) Print: The eval result is printed as a String in the JS runtime.

6) Transfer Back: The printed result is transferred back to the shadow-cljs JVM side.

7) Reply: The JVM side will forward the received results back to the initial caller and the result is printed to the proper OutputStream (or sent as an nREPL message).

The shadow-cljs JVM side of things will require one running watch for a given build which will handle all the related REPL commands as well. It uses a dedicated thread and manages all the given events that can happen during development (eg. REPL input, changing files, etc).

The compiled JS code however must also be loaded by a JS runtime (eg. Browser or node process) and that JS runtime must connect back to the running shadow-cljs process. Most :target configurations will have the necessary code added by default and should just connect automatically. How that connect happens depends on the runtime but usually it is a WebSocket connecting to the running shadow-cljs HTTP server.

Once connected the REPL is ready to use. Note that reloading the JS runtime (eg. manual browser page reload) will wipe out all REPL state of the runtime but some of the compiler side state will remain until the watch is also restarted.

It is possible for more than one JS runtime to connect to the watch process. shadow-cljs by default picks the first JS runtime that connected as the eval target. If you open a given :browser build in multiple Browsers only the first one will be used to eval code. Or you could be opening a :react-native app in iOS and Android next to each other during development. Only one runtime can eval and if that disconnects the next one takes over based on the time it connected.

No application has connected to the REPL server. Make sure your JS environment has loaded your compiled ClojureScript code.

This error message just means that no JS runtime (eg. Browser) has connected to the shadow-cljs server. Your REPL client has successfully connected to the shadow-cljs server but as explained above we still need a JS runtime to actually eval anything.

Regular shadow-cljs builds do not manage any JS runtime of their own so you are responsible for running them.

For :target :browser builds the watch process will have compiled the given code to a configured :output-dir (defaults to public/js). The generated .js must be loaded in a browser. Once loaded the Browser Console should show a WebSocket connected message. If you are using any kind of custom HTTP servers or have over-eager firewalls blocking the connections you might need to set some additional configuration (eg. via :devtools-url). The goal is to be able to connect to the primary HTTP server.

These targets will have produced a .js file that is intended to run in a node process. Given the variety of options however you’ll need to run it yourself. For example a :node-script you’d run via node the-script.js, and on startup it’ll try to connect to the shadow-cljs server. You should see a WebSocket connected message on startup. The output is designed to run only on the machine it was compiled on; don’t copy watch output to other machines.

The generated <:output-dir>/index.js file needs to be added to your react-native app and then loaded on an actual device or emulator. On startup it will also attempt to connect to the shadow-cljs server. You can check the log output via react-native log-android|log-ios, which should show a WebSocket connected message once the app is running. If you see a websocket related error on startup instead it may have failed to connect to the shadow-cljs process. This can happen when the IP detection picked an incorrect IP. You can check which IP was used via shadow-cljs watch app --verbose and override it via shadow-cljs watch app --config-merge '{:local-ip "1.2.3.4"}'.

ClojureScript libraries are published to maven repositories just like Clojure. Most commonly they are published to Clojars but all other standard maven repositories work too.

shadow-cljs itself does not have direct support for publishing but since ClojureScript libraries are just uncompiled source files published in a JAR (basically just a ZIP compressed file) any common tool that is able to publish to maven will work. (eg. mvn, gradle, lein, etc). No extra compilation or other steps are required to publish. The ClojureScript compiler and therefore shadow-cljs is not involved at all.

There are a variety of options to publish libraries and I currently recommend Leiningen. The setup is very straightforward and doesn’t require much configuration at all.

Important

This does not mean that you have to use Leiningen during development of the library itself. It is recommended to just use Leiningen for publishing but use shadow-cljs normally otherwise. You’ll only need to copy the actual :dependencies definition once you publish. Remember to keep development related dependencies out though.

Assuming you are already using the recommended project structure where all your primary sources are located in src/main you can publish with a very simple project.clj.
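A minimal project.clj sketch (group, name, version and the dependency are illustrative placeholders):

```clojure
(defproject your.group/your-library "1.0.0"
  :description "A ClojureScript library"
  :dependencies
  [[some/required-library "1.0.0"]]
  :source-paths
  ["src/main"])
```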

This will generate the required pom.xml and put all sources from src/main into the published .jar file. All you need to run is lein deploy clojars to publish it. When doing this for the first time you’ll first need to setup proper authentication. Please refer to the official Leiningen and Clojars documentation on how to set that up.

Leiningen defaults to signing libraries via GPG before publishing which is a good default but given that this can be a hassle to setup and not many people are actually verifying the signatures you can disable that step via adding a simple :repositories config to the project.clj.
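Disabling signing might look roughly like this (a sketch; check the current Leiningen and Clojars docs for the exact repository URL):

```clojure
;; in project.clj
:repositories
{"clojars" {:url "https://clojars.org/repo"
            :sign-releases false}}
```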

If you write tests or use other development-related code for your library make sure to keep them in src/dev or src/test to avoid publishing them together with the library.

Also avoid generating output to resources/* since Leiningen and other tools may include those files in the .jar which may cause problems for downstream users. Your .jar should ONLY contain the actual source files, no compiled code at all.

Important

You can and should verify that everything is clean by running lein jar and inspecting the files that end up in it via jar -tvf target/library-1.0.0.jar.

Please note that currently only shadow-cljs has a clean automatic interop story with npm. That may represent a problem for users of your libraries using other tools. You may want to consider providing a CLJSJS fallback and/or publishing extra documentation for webpack related workflows.

You can declare npm dependencies directly by including a deps.cljs with :npm-deps in your project (eg. src/main/deps.cljs).

Example src/main/deps.cljs

{:npm-deps {"the-thing" "1.0.0"}}

You can also provide extra :foreign-libs definitions here. They won’t affect shadow-cljs but might help other tools.

Since the JS world is still evolving rapidly and not everyone is using the same way to write and
distribute code there are some things shadow-cljs cannot work around automatically. These
can usually be solved with custom :resolve configs, but there may also be bugs or oversights.

The shadow-cljs compiler ensures that things on your source paths are compiled first, overriding files from JARs. This means that you can copy a source file from a library, patch it, and include it in your own source directory.

This is a convenient way to test out fixes (even to shadow-cljs itself!) without having to clone
that project and understand its setup, build, etc.