This is a stand-alone book for Fulcro developers that can be used by beginners and experienced
developers and covers most of the library in detail. Fulcro has a
pretty extensive set of resources on the web tailored to fit your learning style.
There is this book,
YouTube videos,
an interactive tutorial,
and full-blown sample applications.

A lot of time and energy went into creating these libraries and materials and providing them free of
charge. If you find them useful please consider
contributing to the project.

This book includes quite a bit of live code. Live code demos with their source
look like this:

All of the full stack examples use a mock server embedded in the browser to
simulate the interaction, but the source that you’ll read for the application is
identical to what you’d write for a real server.

Warning

If you’re viewing this directly from the GitHub Fulcro repository then you
won’t see the live code! Use http://book.fulcrologic.com instead.

The mock server has a built-in latency to
simulate a moderately slow network so you can observe behaviors over time. You can
control the length of this latency in milliseconds using the "Server Controls" in the upper-right
corner of this document (if you’re reading the HTML version with live examples).

The next chapter, "Getting Started", is an exception. The code you see in that chapter is meant to be added to
a template that you create and run on your own machine. A complete running version of it is available on GitHub
as described at the end of the chapter.

This chapter takes you through a step-by-step guide of how to go
from nothing to a full-stack basic application. Concepts are introduced as we
go, and given a very cursory definition in the interest of concision. Once you’ve got the general idea you
can use other materials to refine your understanding.

The Leiningen template is the very quickest way to get started. It gives you a number of useful
things like devcards, production builds, CI integration and more while also giving you the minimal
amount of actual code. This can save you hours of setup.

Important

This document assumes you’re working with Fulcro 2.5 and above. The differences are minor, but the DOM
factories changed to be more succinct.

The nodemo option tells the template not to include demonstration full-stack code. It gives you a
shell of a project that still contains everything you’d want to set up (cards, testing, a development
web server, figwheel, etc.) without much actual code to understand or delete.

You should stop here for a moment and read the README in your generated project to see how that
is laid out.

Figwheel is a hot-reload development tool. We recommend using figwheel sidecar so
you can easily start the project from the command line or use it from the REPL
support built into IntelliJ. The template already has the code for doing this. Part of
it is in user.clj, and the other part is a simple script, script/figwheel.clj, that invokes it:

(require '[user :refer [start-figwheel]])
(start-figwheel)

The README of your project describes how to start the various builds of your cljs code.

If you look at your project.clj file you’ll see it is configured to re-call mount on every hot load.
Mounting an already mounted app is the same as asking for a forced UI refresh.

This is all the real code you need to get started with a hot-code reload capable application! However, the
browser needs instructions to load this stuff up, and the target div of the mount needs to exist.

Now that you have a basic project working, let’s understand how to add some
content!

Important

When developing it is a good idea to: Use Chrome (the devtools only work there),
have the developer’s console open, and in the developer console settings: "Network, Disable cache (while
DevTools is open)", and "Console, Enable custom formatters".

Cached files can, as everywhere else, cause you lots of headaches. Fortunately, in Fulcro they really only bite
you on the initial load. Hot reloads typically work very well.

One of the most maddening things that can happen during development is mystery around build errors. Nothing is
more frustrating than not understanding what is wrong.

As you work on your code your compiler errors and warnings will show in the browser. DO NOT RELOAD THE PAGE! If
you reload the page you’ll lose the warning or error, and that makes it harder to figure out what is wrong!

Instead, edit your code and re-save.

If you are having problems and you’ve lost your way, it is sometimes useful to ask figwheel to clean and recompile
everything:

Sometimes stuff just fails for reasons we fail to understand. There are times when
you may want to completely kill your REPL, clean the project with lein clean, and start again. Make sure all
of the generated Javascript is removed when you clean, or things might not clear up.

It is also true that problems in your project configuration may cause problems that are very difficult to
understand. If this happens to you (especially if you’ve never run a project with the current project setup) then
it is good to look at things like dependency problems with lein deps :tree and fix those.

In general, if you see a version conflict it usually works to place the newest version of the conflicted dependency into
your own dependency list. This can cause problems as well, but is less likely to fail than using an older version
of a library that doesn’t have some needed feature or bug fix.

Fulcro supplies defsc to build React components. This macro emits React components that work as 100% raw React
components (i.e. once you compile them to Javascript they could be used from other native React code).

There are also factory functions for generating all standard HTML5 DOM elements in React in the fulcro.client.dom namespace.

As of Fulcro 2.5 properties no longer need #js, are optional, and classname keywords exist as a shortcut, so
the body of that example could be written (dom/div :.a (dom/p "Hello")) instead.

For our purposes we won’t be saying much about the React lifecycle methods, though they can be added. The basic
intention of this macro’s syntax is to declare a component that can render UI and participate in our
data-driven story.

This macro emits the equivalent of a React component with a render method.

Luckily, there are factory methods for all of HTML5 in fulcro.client.dom. These functions generally take a Javascript map
as their first argument (for things like classname and event handlers) and any children. There are two ways to
generate the Javascript map: with the reader tag #js or with clj→js.

Version 2.5 no longer requires the #js, the properties are optional, and they support an optional shorthand keyword
for adding CSS class and DOM ids:

Fulcro 2.5+:

(dom/div :.a#thing "Hi") ; keyword can contain any number of classes preceded by dots, and an id with #
(dom/div :.a#thing {:data-prop 3} "Hi") ; props can still be supplied along with the keyword
(dom/div :.a.b.c "Hi") ; Any number of static classes (a, b, and c).
(dom/div {:className "a" :data-prop 3} "Hi") ; or it can all be done in props

The 2.5 versions are macros that obtain the same runtime speed as the older versions in most cases. Note that supplying
props as nil (instead of omitting them) will result in a very slight performance improvement if the element has children.

Important

If you’re writing your UI in CLJC files in 2.5, then you need to make sure you use a conditional
reader to pull in the proper server DOM functions for Clojure:

The reason this is necessary is that CLJS requires macros to be in CLJ files, but in order to get higher-order
operation in CLJ the DOM elements must be functions. In CLJS, you can have both a macro and function with
the same name, but this is not true in CLJ. Therefore, in order to get the optimal (inlined) client performance two namespaces are
required.

React components receive their data through props and state (which is local mutable state on the component).
In Fulcro we highly recommend using props for most things. This
ensures that various other features work well. The data passed to a component can be accessed (as a cljs map) by
calling prim/props on this, or by destructuring in the second argument of defsc.

So, let’s define a Person component to display details about
a person. We’ll assume that we’re going to pass in name and age as properties:
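The component might look something like this sketch (using defsc from fulcro.client.primitives and the factories in fulcro.client.dom; the exact property names here are assumptions):

```clojure
(ns app.ui.root
  (:require [fulcro.client.primitives :as prim :refer [defsc]]
            [fulcro.client.dom :as dom]))

;; Props arrive as a plain CLJS map and can be destructured in place:
(defsc Person [this {:keys [person/name person/age]}]
  (dom/li
    (dom/h5 (str name " (age: " age ")"))))
```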

Now, in order to use this component we need an element factory. An element factory lets
us use the component within our React UI tree. Since name confusion can become an
issue (Person the component vs. person the factory), we recommend prefixing the factory with ui-:
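A factory sketch, under the same assumed property names:

```clojure
;; Person is the component class; ui-person is the element factory.
(def ui-person (prim/factory Person))

;; Used within a parent's render; the props map becomes the component's props:
(ui-person {:person/name "Joe" :person/age 22})
```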

Part of our quick development story is getting hot code reload to update the UI whenever we change the source.
Try editing the UI of Person and save. You should see the UI update even though the person’s data didn’t change.

You should already be getting the picture that your UI is going to be a tree composed from a root element. The
method of data passing (via props) should also be giving you the picture that supplying data to your UI (through root)
means you need to supply an equivalently structured tree of data. This is true of basic React.
However, just to drive the point home let’s make a slightly more complex UI and see it in detail:

Obviously it isn’t going to be feasible to hand-manage this data tree for anything
but the most trivial application (which is the crux of the problem with most UI libraries).

At best it does give us a persistent data structure that represents the
current "view" of the application (which has many benefits), but at worst it requires us to "think globally"
about our application. We want local reasoning. We also want to be able to easily re-compose our UI as needed,
and a static data graph like this would have to be updated every time we made a change! Almost equally as bad: if
two different parts of our UI want to show the same data then we’d have to find and update a bunch of copies
spread all over the data tree.

This is certainly a possibility; however, it leads to other complications. What is the data model? How do you
interact with remotes to fill your data needs? Fulcro has a very nice cohesive story for these questions,
while other systems end up with complications like event handler middleware, coeffect accretion,
and signal graphs…​not to mention that the sideband solution says nothing definitive about how you actually
accomplish the server interactions with said data model.

Fulcro has a model for all of this, and it is surprising how simple your application becomes once you
put the pieces together. Let’s look at the steps and parts:

All applications have some starting initial state. Since our UI is a tree, our starting state needs to
somehow establish what goes to the initial nodes.

In Fulcro, there is a way to construct the initial tree of data in a way that allows for local reasoning and
easy refactoring: co-locate the initial desired part of the tree with the component that uses it. This allows
you to compose the state tree in exactly the same way as the UI tree.

The defsc macro makes short work of this with the initial-state option. Simply give it a
lambda that gets parameters (optionally from the parent) and returns a map representing the state of the component.
You can retrieve this data using (prim/get-initial-state Component).
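A sketch of such a composition, assuming the Person component from before (names and sample data are illustrative):

```clojure
;; Each component declares only its own slice of initial state; parents
;; compose children in via get-initial-state.
(defsc Person [this {:keys [person/name person/age]}]
  {:initial-state (fn [{:keys [name age]}] {:person/name name :person/age age})}
  (dom/li (dom/h5 (str name " (age: " age ")"))))

(def ui-person (prim/factory Person {:keyfn :person/name}))

(defsc PersonList [this {:keys [person-list/label person-list/people]}]
  {:initial-state
   (fn [{:keys [label]}]
     {:person-list/label  label
      :person-list/people [(prim/get-initial-state Person {:name "Sally" :age 32})
                           (prim/get-initial-state Person {:name "Tom" :age 9})]})}
  (dom/div
    (dom/h4 label)
    (dom/ul (map ui-person people))))
```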

You must reload your browser for this to show up. Fulcro pulls this data into the database when the
application first mounts, not on hot code reload (because that would change your app state, and hot
code reload is more useful without state changes).

Now a lot of the specific data here is just for demonstration purposes. Data like this (people) would almost
certainly come from a server, but it serves to illustrate that we can localize the initial data needs of a
component to the component, and then compose that into the parent in an abstract way
(by calling get-initial-state against that child).

There are several benefits of this so far:

It generates the exact tree of data needed to feed the initial UI.

That initial state becomes your initial application database.

It restores local reasoning (and easy refactoring). Moving a component just means local reasoning about the
component being moved and the component it is being moved from/to: You remove the get-initial-state from one
parent and add it to a different one.

You can see that there is no magic if you just pull the initial tree at the REPL:
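For example, in a REPL connected to your client build (assuming the state names used so far):

```clojure
(prim/get-initial-state Root {})
;; returns the composed tree, shaped roughly like:
;; {:friends {:person-list/label  "Friends"
;;            :person-list/people [{:person/name "Sally" :person/age 32} ...]}}
```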

Fulcro unifies the data access story using a co-located query on each component. This sets up data access
for both the client and server, and also continues our story of local reasoning and composition.

Queries go on a component in the same way as initial state: as static implementations of a protocol.

The query notation is relatively light, and we’ll just concentrate on two bits of query syntax: props and joins.

Queries form a tree just like the UI and data. Obtaining a value at the current node in the tree traversal is done
using the keyword for that value. Walking down the graph (a join) is represented as a map with a single entry whose
key is the keyword for that nested bit of state.
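As a concrete sketch (the keyword names are assumptions consistent with the earlier examples):

```clojure
;; A query is a vector; a map entry is a join to nested state.
[{:friends
  [:person-list/label
   {:person-list/people [:person/name :person/age]}]}]
```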

This query reads: "At the root you’ll find :friends, which joins to a nested entity that has a label and people,
which in turn have the nested properties name and age."

A vector always means "get this stuff at the current node"

:friends is a key in a map, so at the root of the application state the query engine would expect to find that
key, and would expect the value to be nested state (because maps mean joins on the tree)

The value in the :friends join must be a vector, because we have to indicate what we want out of the nested data.

Joins are automatically to-one if the data found in the state is a map, and to-many if the data found is a
vector. In the example above the :friends field from root pointed to a single PersonList, whereas the PersonList
field :person-list/people pointed to a vector of Person. Be careful that you don’t confuse yourself with
naming (e.g. friends is plural, but points to a single list).

The namespacing of keywords in your data (and therefore your query) is highly encouraged, as it makes it clear to the
reader what kind of entity you’re working against (it also ensures that over-rendering doesn’t happen on
refreshes later).

You can try this query stuff out in your REPL. Let’s say you just want the friends list label. The function
db→tree can take an application database (which we can generate from initial state) and run a query
against it:
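A sketch of that REPL session, under the keyword names assumed earlier:

```clojure
(def state-tree (prim/get-initial-state Root {}))

;; db->tree takes a query, the data to walk, and a map used to resolve
;; idents. Our state has no idents yet, so the tree serves as both:
(prim/db->tree [{:friends [:person-list/label]}] state-tree state-tree)
```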

HINT: Mirroring initial state with the query is a great way to error-check your work (and defsc does some of that
for you): For each scalar property in
initial state, there should be an identical simple property in your query. For each join of initial state to a child via
get-initial-state there should be a query join via get-query to that same child.

We want our queries to have the same nice local-reasoning as our initial data tree. The get-query function
works just like the get-initial-state function, and can pull the query from a component. In this case, you
should not ever call query directly. The get-query function augments the subqueries with metadata that is
important at a later stage.
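A sketch of that composition (component and keyword names are assumptions):

```clojure
;; Each parent composes its child's query with get-query, never by
;; copying the child's query vector by hand:
(defsc Root [this {:keys [friends]}]
  {:query [{:friends (prim/get-query PersonList)}]}
  (dom/div (ui-person-list friends)))

;; get-query returns the fully-composed query, annotated with metadata:
(prim/get-query Root)
```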

This all looks like a minor (and useless) change. The operation is the same; however, we’re getting close to
the magic, so stick with us. The major difference in this code is that even though the database starts
out with the initial state, there is nothing to say we have to query for everything that is in there,
or that the state has to start out with everything we might query for in the future. We’re getting
close to having a dynamic data-driven application.

Notice that everything we’ve done so far has global client database implications, but that each component
codes only the portion it is concerned with. Local reasoning is maintained. All software evolution in
this model preserves this critical aspect.

Also, you now have application state that can evolve (the query is running against the active application
database stored in an atom)!

Important

You should always think of the query as "running from root". You’ll
notice that Root still expects to receive the entire data tree for the UI (even though it doesn’t have to
know much about what is in it, other than the names of direct children), and it still picks out those sub-trees
of data and passes them on. In this way an arbitrary component in the UI tree is not querying
for its data directly in a side-band sort of way, but is instead being composed in from parent to parent all the
way to the root. Later, we’ll learn how Fulcro can optimize this and pull the data from the database for
a specific component, but the reasoning will remain the same.

The queries on component describe what data the component wants from the database; however, you’re not allowed
to put code in the database, and sometimes a parent might compute something it needs to pass to a child like
a callback function.

It turns out that we can optimize away the refresh of components (if their data has not changed). This
means that we can use a component’s query to directly re-supply data for refresh; however, since doing so
skips the rendering of the parent, if we are not careful this can lead to "losing" these extra bits of
computationally generated data passed from the parent, like callbacks.

Let’s say we want to render a delete button on our individual people in our UI. This button will mean
"remove the person from this list"…​but the person itself has no idea which list it is in. Thus,
the parent will need to pass in a function that the child can call to affect the delete properly:

Pulling the onDelete from the passed props (WRONG). The query has to be changed to a lambda to turn off error checking to even try this method.

Invoking the callback when delete is pressed.

This method of passing a callback will work initially, but not consistently. The problem is that we can optimize away a
re-render of a parent when it can figure out how to pull just the data of the child on a refresh, and in that case the
callback will get lost because only the database data will get supplied to the child! Your delete button will work
on the initial render (from root), but may stop working at a later time after a UI refresh.

There is a special helper function that can record the computed data like callbacks onto the child that receives them
such that an optimized refresh will still know them. There is also an additional (optional) component parameter to defsc
that you can use to deconstruct them:
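In Fulcro that helper is prim/computed. A sketch (delete-person here is an assumed local function in the parent):

```clojure
;; The child receives computed data via defsc's third argument:
(defsc Person [this {:keys [person/name person/age]} {:keys [onDelete]}]
  (dom/li
    (dom/h5 (str name " (age: " age ")")
      (dom/button {:onClick #(onDelete name)} "X"))))

(def ui-person (prim/factory Person {:keyfn :person/name}))

;; In the parent's render, the callback rides along with the props:
(ui-person (prim/computed person {:onDelete delete-person}))
```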

In general you don’t have to think about how the UI updates, because most changes are run within the
context that needs to be refreshed. But for general knowledge, UI refresh is triggered in two ways:

Running a data modification transaction on a component (which will re-render the subtree of that component), and
refresh only the DOM for those bits that had actual changes.

Telling Fulcro that some specific data changed (e.g. :person/name).

The former is most common, but the latter is often needed when a change executed in one part of the application
modifies data that some UI component elsewhere in the tree needs to respond to.

So, if we run the code that effects changes from the component that will need to refresh (a very common case) we’re
covered. If a child needs to make a change that will affect a parent (as in our earlier example), then the
modification should run from the parent via a callback so that refresh will not require further interaction. Later we’ll
show you how to deal with refreshes that could be in far-flung parts of the UI. First, let’s get some data
changing.

Every change to the application database must go through a transaction processing system. This has two
goals:

Abstract the operation (like a function)

Treat the operation like data (which allows us to generalize it to remote interactions)

The operations are written as quoted data structures. Specifically as a vector of mutation
invocations. The entire transaction is just data. It is not something run in the UI, but instead
passed into the underlying system for processing.

You essentially just "make up" names for the operations you’d like to do to your database, just like
function names. Namespacing is encouraged, and of course syntax quoting honors namespace aliases.
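For example, a transaction that invokes a (so far hypothetical) delete-person mutation might look like this, where api is an alias for the mutation's namespace:

```clojure
;; A transaction is plain data: a vector of mutation invocations.
;; The syntax quote expands the api alias to the full namespace.
(prim/transact! this `[(api/delete-person {:name "Joe"})])
```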

When a transaction runs in Fulcro it passes things off to a multimethod. The multi-method is described in more
detail in the section on the mutation multimethod, but Fulcro provides a macro that makes
building (and using) mutations easier: defmutation.

The template application comes with a pre-built namespace for these, src/main/app/api/mutations.cljs, but you can put them anywhere as long
as the namespace in question is required by your application at runtime. Note there is also a mutations.clj, which is
for the server-side handling of these same mutations.

A mutation looks a bit like a method. It can have a docstring, and the argument list will always receive a
single argument (params) that will be a map (which then allows destructuring).

The body looks a bit like a letfn, but the names we use for these methods are pre-established. The one
we’re interested in at the moment is action, which is what to do locally. The action method will be
passed the application database’s app-state atom, and it should change the data in that atom to reflect
the new "state of the world" indicated by the mutation.

For example, delete-person must find the list of people on the list in question, and filter out the one
that we’re deleting:
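A sketch of such a mutation, assuming the un-normalized state shape used so far (the list lives at the root key :friends):

```clojure
(ns app.api.mutations
  (:require [fulcro.client.mutations :refer [defmutation]]))

(defmutation delete-person
  "Mutation: delete the person with `name` from the friends list."
  [{:keys [name]}]
  (action [{:keys [state]}]
    (swap! state update-in [:friends :person-list/people]
      (fn [people] (vec (remove #(= name (:person/name %)) people))))))

;; ...and in the UI namespace:
;;   (:require [app.api.mutations :as api])   ; loads the mutations, gives an alias
;;   (dom/button
;;     {:onClick #(prim/transact! this `[(api/delete-person {:name ~name})])}
;;     "X")
```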

The require ensures that the mutations are loaded, and also gives us an alias to the namespace of the mutation’s symbol.

Running the transaction in the callback.

Note that our mutation’s symbol is actually app.api.mutations/delete-person, but the syntax quoting will fix it.
Also realize that the mutation is not running in the UI, it is instead being handled "behind the scenes". This
allows a snapshot of the state history to be kept, and also a more seamless integration to full-stack operation
over a network to a server (in fact, the UI code here is already full-stack capable without any changes!).

This is where the power starts to show: all of the minutiae above is leading us to some grand unifications when
it comes to writing full-stack applications.

Fortunately, we have a very good solution to the mutation problem above, and it is one that has been around for decades:
database normalization!

Here’s what we’re going to do:

Each UI component represents some conceptual entity with data (assuming it has state and a query). In a fully
normalized database, each such concept would have its own table, and related things would refer to it
through some kind of foreign key. In SQL land this looks like:

In a graph database (like Datomic) a reference can have a to-many arity, so the direction can be more natural:

Since we’re storing things in a map, we can represent "tables" as an entry in the map where the key is the
table name, and the value is a map from ID to entity value. So, the last diagram could be represented as:
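A sketch of such a map (table and field names are illustrative), using a bare set of IDs for the relation:

```clojure
{:person-list {:person-list/people #{1 2}}
 :people      {1 {:id 1 :name "Joe"}
               2 {:id 2 :name "Sally"}}}
```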

This is close, but not quite good enough. The set in :person-list/people is a problem. There is no schema, so there is no
way to know what kind of thing "1" and "2" are!

The solution is rather easy: code the foreign reference to include the name of the table as well as the ID. A
to-one relation is a single such "pointer" (a two-element vector of table name and ID, known as an ident), and to-many relations
store many such "pointers" in a vector (so you end up with a doubly-nested vector):
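A sketch with idents (table names are assumptions, following the conventions described below):

```clojure
;; Each "pointer" is an ident: [table-name id].
{:person-list/by-name
 {:friends {:person-list/label  "Friends"
            :person-list/people [[:person/by-id 1] [:person/by-id 2]]}}
 :person/by-id
 {1 {:db/id 1 :person/name "Joe"}
  2 {:db/id 2 :person/name "Sally"}}}
```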

So, now that we have the concept and implementation, let’s talk about conventions:

Properties are usually namespaced (as shown in earlier examples)

Table names are usually namespaced with the entity type, and given a name that indicates how it is indexed.
For example: :person/by-id, :person-list/by-name, etc. If you use Clojure spec, you may choose to
alter this a bit for convenience in namespace-aliasing keywords (e.g. ::my-db-schema/person-by-id).

Fortunately, you don’t have to hand-normalize your data. The components have almost everything they need to
do it for you, other than the actual value of the Ident. So, we’ll add one more option to your components
(and we’ll add IDs to the data at this point, for easier implementation):
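A sketch of the revised Person matching the annotations below (the list component gets an ident the same way; names are assumptions):

```clojure
(defsc Person [this {:keys [db/id person/name person/age]} {:keys [onDelete]}]
  {:query         [:db/id :person/name :person/age]
   :ident         [:person/by-id :db/id]              ; [table-name id-prop]
   :initial-state (fn [{:keys [id name age]}]
                    {:db/id id :person/name name :person/age age})}
  (dom/li
    (dom/h5 (str name " (age: " age ")")
      (dom/button {:onClick #(onDelete id)} "X"))))
```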

Adding an ident allows Fulcro to know how to build a FK reference to a person (given its props). The first element is the table name, the second is the name of the property that
contains the ID of the entity.

We will be using IDs now, so we need to add :db/id to the query (and props destructuring). This is just a convention for the ID attribute

The state of the entity will also need the ID

The callback can now delete people by their ID, which is more reliable.

The list will have an ID, and an Ident as well

If you reload the web page (needed to reinitialize the database state), then you can look at the newly normalized
database at the REPL:

Note that db→tree understands this normalized form, and can convert it (via a query)
to the proper data tree. db→tree (for legacy reasons) requires a way to resolve references (idents) and the
database. In Fulcro these are the same. So, try this at the REPL:
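A sketch, assuming db is bound to your normalized application database map:

```clojure
;; The normalized db serves as both the data and the ident-resolution map:
(prim/db->tree (prim/get-query Root) db db)
```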

References are always idents, meaning we know the value to remove from the FK list

By defining a function that can filter the ident from (1), we can use update-in on the person list table’s people.

This is a very typical operation in a mutation: swap on the application state, and update a particular thing
in a table (in this case the people to-many ref in a specific person list).
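A revised mutation along those lines might look like this sketch (table and parameter names are assumptions):

```clojure
(defmutation delete-person
  "Mutation: remove the person with `person-id` from the list `list-id`."
  [{:keys [list-id person-id]}]
  (action [{:keys [state]}]
    (let [ident   [:person/by-id person-id]
          without (fn [idents] (vec (remove #(= ident %) idents)))]
      ;; Update one to-many ref inside one table entry; nothing else moves.
      (swap! state update-in
        [:person-list/by-name list-id :person-list/people] without))))
```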

If we were to now wrap the person list in any amount of additional UI (e.g. a nav bar, sub-pane, modal dialog, etc) this
mutation will still work perfectly, since the list itself will only have one place it ever lives in the
database.

The get-query function adds the component itself to the metadata for that query fragment. We already know that
we can call the static methods on a component (in this case we’re interested in ident).

So, Fulcro includes a function called tree→db that can simultaneously walk a data tree (in this case initial-state) and a
component-annotated query. When it reaches a data node whose query metadata names a component with an Ident, it
places that data into the appropriate table (by calling your ident function on it to obtain the table/id), and
replaces the data in the tree with its FK ident.
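You can try this yourself at the REPL (a sketch, using the Root assumed earlier):

```clojure
;; The final `true` asks tree->db to merge the idents it discovers
;; into top-level tables:
(prim/tree->db Root (prim/get-initial-state Root {}) true)
```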

Once you realize that the query and the ident work together to do normalization, you can more easily
figure out what mistakes you might make that could cause auto-normalization to fail (e.g. stealing a query from
one component and placing it on another, writing the query of a sub-component by-hand instead of pulling it
with get-query, etc.).

So far we’ve been hacking things in place and using the REPL to watch what we’re doing. There are better ways to work
on Fulcro applications, and now that we’ve got one basically working, let’s take a look at them both.

A relatively recent (late 2017) addition to the ecosystem is Fulcro Inspect: a set of tools you can load into your
environment during development. In fact, the template already has them (for the dev build)! On OSX or Linux, simply hit
CTRL-F. See Fulcro Inspect’s documentation for how to set the keyboard shortcut in Windows.

The DB tab of this tool shows you your application’s database and has a time slider to see the history of states! It also
has tabs for showing you transactions that have run, and network interactions. See the tool’s documentation for more
information. In fact, by the time you read this it will probably have even more exciting features!

There is a build in the template project called cards. This starts up a development environment where you can
code entire applications (or portions of them) in an environment that can show you live state and is quite handy, particularly
for working with small parts of your program (remember, we can actually split off chunks of the application because they
are all relative to their parent).

Starting in Fulcro 2.5 the prebuilt servers for Fulcro require that you add some dependencies to your project.
These namespaces dynamically resolve these so that you won’t end up with extra dependencies in your product unless you
need them:

The easy server is based upon the component system. It is set up so that it can be stopped, code refreshed,
and restarted very quickly. The management functions are already written in src/dev/user.clj underneath
the Figwheel startup code.

The make-fulcro-server function needs to know where to find the server config file. You can tell it a number
of other things, including which components you’d like to be available when parsing the incoming
client requests. In the template, the only component available is the one that reads the application
config (which contains the port on which to run the web server).

The configuration is meant for production environments, and requires a default file that spells out
defaults in case the main config does not have values for them, and a primary config file that can
override any defaults.

Your template already has these in src/main/config (the config component looks for defaults.edn on the
CLASSPATH at relative location config/):

defaults.edn:

{:port 3000}

dev.edn:

{}

The first file is always looked for by the server, and should contain all of the default settings you think you
want independent of where the server is started.

The server (for safety reasons in production) will not start if there isn’t a user-specified file containing potential
overrides.

Basically, it will deep-merge the two and have the latter override things in the former. This makes mistakes in
production harder to make. If you read the source of the go function in the user.clj file you’ll see that
we supply this development config file as an argument. In production systems you’ll typically want this file to be
on the filesystem where an admin can tweak it.

When you add/change code on the server you will want to see those changes in the live server without having to restart
your REPL.

user=> (restart)

will do this.

If there are compiler errors, then the user namespace might not reload properly. In that case, you should be able
to recover using:

user=> (tools-ns/refresh)
user=> (go)

Warning

Don’t call refresh while the server is running. It will refresh the code, but it will lose the reference to
the running server, meaning you won’t be able to stop it and free up the network port. If you do this, you’ll have to
restart your REPL.

Figwheel comes with a server that we’ve been using to serve our client. When you want to build a full-stack app
you must serve your client from your own server. Thus, if you load your page with the figwheel server (which is still
available on an alternate port) you’ll see your app, but the server interactions won’t succeed.

One might ask: "If I don’t use figwheel’s server, do I lose hot code reload on the client?"

The answer is no. When figwheel compiles your application it embeds its own websocket code in your application for
hot code reload. When you load that compiled code (in any way) it will try to connect to the figwheel websocket.

So your network topology was:

where both the HTML/CSS/JS resources and the hot code were coming from different connections to the same server.

The networking picture during full-stack development just splits these like this:

Fulcro’s client will automatically route requests to the /api URI of the source URL that was used to load the page,
and Fulcro’s server is built to watch for communications at this endpoint.

It is very handy to be able to look at your application’s state to see what might be wrong. We’ve been manually
dumping application state at the REPL using a rather long expression. So, at this point make sure
you are either running your application in a devcard, or you know how to look at things with Fulcro
Inspect. The output in the devcards is typically easier for beginners to read.

Now we will start to see more of the payoff of our UI co-located queries and auto-normalization. Our application
so far is quite unrealistic: the people we’re showing should be coming from a server-side database, they
should not be embedded in the code of the client. Let’s remedy that.

Fulcro provides a few mechanisms for loading data, but every possible load scenario can be done using
the fulcro.client.data-fetch/load function.

It is very important to remember that our application database is completely normalized, so anything we’d want to put
in that application state will be at most 3 levels deep (the table name, the ID of the thing in the table, and the
field within that thing). We’ve also seen that Fulcro can auto-normalize complete trees of data,
and has graph queries that can be used to ask for those trees.

Thus, there really are not very many scenarios!

The three basic scenarios are:

Load something into the root of the application state

Load something into a particular field of an existing thing

Load some pile of data, and shape it into the database (e.g. load all of the people, and then separate them into
a list of friends and enemies).

Let’s try out these different scenarios with our application.

First, let’s correct our application’s initial state so that no people are there:

When you load something you will use a query from something on your UI (it is rare to load something you don’t want to
show). Since those components (should) have a query and ident, the result of a load can be sent from the server as a
tree, and the client can auto-normalize that tree just like it did for our initial state!

This case is less common, but it is a simple starting point. It is typically used to obtain something that you’d want
to access globally (e.g. the user info about the current session). Let’s assume that our Person component represents
the same kind of data as the "logged in" user. Let’s write a load that can ask the server for the "current user" and
store that in the root of our database under the key :current-user.

Loads, of course, can be triggered at any time (startup, event, timeout). Loading is just a function call.

For this example, let’s trigger the load just after the application has started.
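A load is just a function call. As a minimal sketch (assuming app is the started application, e.g. inside :started-callback, and Person is the component from earlier):

```clojure
(require '[fulcro.client.data-fetch :as df])

;; Ask the server for :current-user, normalize the response using
;; Person's query/ident, and leave the resulting ident at the root
;; key :current-user.
(df/load app :current-user Person)
```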

Of course hot code reload does not restart the app (it just hot patches the code), so to see this load trigger we must
reload the browser page.

If you do that at the moment, you should see an error in the various consoles related to the failure of the load.

Important

Make sure your application (or dev card) is running from your server (port 3000) and not the figwheel one!

Technically, load is just writing a query for you (in this case [{:current-user (prim/get-query Person)}]) and sending it to the
server. The server will receive exactly that query as a CLJ data structure.

You now need to convert the raw CLJ query into a response. You can read more
about the gory details of that in the Developers Guide; however, Fulcro has some
helpers that make our job much easier.

The template has a spot to put your query handlers in src/main/app/api/read.clj.
Since we’re on the server and we’re going to be supplying and manipulating people, we’ll just make a single atom-based
in-memory database. This could easily be stored in a database of any kind.
To handle the incoming "current user" request, we can use a macro to write the handler for us.
Change the file to look like this:
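As a sketch of what that file might contain (the database contents are illustrative; defquery-root comes from fulcro.server):

```clojure
(ns app.api.read
  (:require [fulcro.server :refer [defquery-root]]))

;; A simple atom-based in-memory "database" of people, keyed by id.
(defonce people-db
  (atom {1 {:db/id 1 :person/name "Joe" :person/age 22}
         2 {:db/id 2 :person/name "Sally" :person/age 26}}))

(defquery-root :current-user
  "Server query handler for the currently logged-in user (hard-coded here)"
  (value [env params]
    (get @people-db 1)))
```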

This actually augments a multimethod, which means we need to make sure this namespace is loaded by our server. The
user namespace already does this. So, you should be able to simply restart/refresh the server at the SERVER REPL:

user=> (restart)

If you’ve done everything correctly, then reloading your application should successfully load your current user. You
can verify this by examining the network data, but it will be even more convincing if you look at your client database
via the dev card visualization on Fulcro Inspect. It should look something like this:

Of course, the question is now "how do I use that in some arbitrary component?" We won’t completely
explore that right now, but the answer is easy: The query syntax has a notation for "query something at the root". It looks like this:
[ {[:current-user '_] (prim/get-query Person)} ]. You should recognize this as a query join, but on something that
looks like an ident without an ID (implying there is only one, at root).

We’ll just use it on the Root UI node, where we don’t need to "jump to the top":
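A sketch of that Root component (prim, Person, and ui-person are assumed from the earlier examples):

```clojure
(defsc Root [this {:keys [current-user]}]
  ;; The ident-like join [:current-user '_] pulls from the database root
  ;; no matter where this query is composed.
  {:query [{[:current-user '_] (prim/get-query Person)}]}
  (dom/div nil
    (when current-user
      (ui-person current-user))))
```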

The next common scenario is loading something into some other existing entity in your database. Remember that since
the database is normalized this covers all of the other loading cases (except the one where you want to convert
what the server tells you into a different shape, e.g. paginate, sort, etc.).

Fulcro’s load method accomplishes this by loading the data into the root of the database, normalizing it, then
(optionally) allowing you to re-target the top-level FK to different location(s) in the database.

The :target option indicates that once the data is loaded and normalized (which will leave the FK reference
at the root as we saw in the last section) this top-level reference (or vector of references) will be moved into the key-path provided.
Since our database is normalized, this means a 3-tuple (table, id, target field).

Warning

It is important to choose a keyword for this load that won’t stomp on real data in your database’s root.
We already have the top-level keys :friends and :enemies as part of our UI graph from root. So, we’re making up
:my-friends as the load key. One could also namespace the keyword with something like :server/friends.

Since friends and enemies use the same kind of query, let’s add both into the startup code (in the card/client):
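A sketch of such a targeted load (:my-friends is the made-up server keyword from above; the 3-tuple target path is an assumption about where a list entity lives in the normalized database):

```clojure
;; Load people from the server's :my-friends query, then move the
;; resulting refs into the :list/people field of an assumed list entity.
(df/load app :my-friends Person
         {:target [:friends-list :singleton :list/people]})
```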

It is somewhat common for a server to return data that isn’t quite what we want in our UI. So far we’ve just been placing
the data returned from the server directly in our UI. Fulcro’s load mechanism allows a post mutation of the loaded
data once it arrives, allowing you to re-shape it into whatever form you might desire.

For example, you may want the people in your lists to be sorted by name. You’ve already seen how to write client
mutations that modify the database, and that is really all you need. The client mutation for sorting the people
in the friends list could be (in mutations.cljs):
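A sketch of such a post mutation (the path into the normalized database is an assumption about the data model):

```clojure
;; Sort the idents in an assumed friends-list entity by the referenced
;; person's name. Pure function of the whole state map.
(defmutation sort-friends-by-name [params]
  (action [{:keys [state]}]
    (swap! state
      (fn [s]
        (update-in s [:friends-list :singleton :list/people]
          (fn [idents]
            (vec (sort-by #(get-in s (conj % :person/name)) idents))))))))
```

It would be triggered via load's :post-mutation option, e.g. (df/load app :my-friends Person {:post-mutation `sort-friends-by-name}).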

Once things are loaded from the server they immediately begin to grow stale (unless you’re pushing updates with
websockets). It is very common to want to re-load a particular thing in your database. Of course, you can trigger
a load just like we’ve been doing, but in that case we’d be reloading a whole bunch of things. What if we just wanted to
refresh a particular person (e.g. in preparation for editing it)?

The load function can be used for that as well. Just replace the keyword with an ident, and you’re there!

Load can take the app or any component’s this as the first argument, so from within the UI we can trigger a load
using this:
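A sketch (the person ID is illustrative):

```clojure
;; An ident in place of the root keyword reloads just that entity.
(df/load this [:person/by-id 3] Person)
```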

The incoming query will have a slightly different form, so there is an alternate macro for making a handler for entity
loading. Let’s add this in our server’s read.clj:

(defquery-entity :person/by-id
  "Server query for allowing the client to pull an individual person from the database"
  (value [env id params]
    ; the update is just so we can see it change in the UI
    (update (get @people-db id) :person/name str " (refreshed)")))

The defquery-entity takes the "table name" as the dispatch key. The value method of the query handler will receive
the server environment, the ID of the entity to load, and any parameters passed with the query (see the :params option
of load).

In the implementation above we’re augmenting the person’s name with "(refreshed)" so that you can see it happen in the UI.

Remember to (restart) your server to load this code.

Your UI should now have a button, and when you press it you should see one person update!

There is a special case that is somewhat common: you want to trigger a refresh from an event on the item that needs
the refresh. The code for that is identical to what we’ve just presented (a load with an ident and component); however,
the data-fetch namespace includes a convenience function for it.

So, say we wanted a refresh button on each person. We could leverage df/refresh for that:
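A sketch of such a button (inside Person’s render; df/refresh uses the component’s own ident and query to reload just this entity):

```clojure
(dom/button #js {:onClick #(df/refresh this)} "Refresh")
```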

Fulcro’s load system covers a number of additional bases that bring the story to completion. There are load markers
(so you can show network activity), UI refresh add-ons (when you modify data that isn’t auto-detected, e.g. through a post
mutation), server query parameters, and error handling. See the Developers Guide, doc strings, or source for more details.

where remote is the name of a remote server (the default is remote). You can have any number of network remotes.
The default one talks to the
page origin at /api. What is this AST we speak of? It is the abstract syntax tree of the mutation itself (as data).
Using a boolean true means "send it just as the client specified". If you wish you can pull the AST from the env,
augment it (or completely change it) and return that instead. See the Developers Guide for more details.

Now that you’ve got the UI in place, try deleting a person. It should disappear from the UI as it did before; however,
now if you’re watching the network you’ll see a request to the server. If your server is working right, it will handle
the delete.

Try reloading your page from the server. That person should still be missing, indicating that it really was removed
from the server.

Now that you’ve gotten an overview and have written some code, you can read through the remaining chapters for
more detail on each topic.

This chapter covers some detail about the core language features and theory that are important in the Fulcro ecosystem. You
need not read this chapter to use Fulcro, but it will aid in your understanding of it quite a bit, especially if you’re
relatively new to Clojurescript.

Many of the most interesting and compelling features of Fulcro are directly or
indirectly enabled (or made simpler) by the use of persistent data structures
that are a first-class citizen of the language.

In imperative programming languages like Java and Javascript you have no idea what
a function or method might do to your program state:

Person p = new Person();
doSomethingOnAnotherThread(p);
p.fumble();
// did p just change??? Did I just cause a race condition???

This leads to all sorts of subtle bugs and is arguably the source of many of
the hardest problems in keeping software sustainable today. What if Person couldn’t
change, and you instead had to make a copy in order to modify it?

Person p = new Person();
doSomethingOnAnotherThread(p);
Person q = p.fumble();
// p is definitely unchanged, but q could be different

Now you can reason about what will happen. The other thread will see p exactly as
it was when you (locally) reasoned about it. Furthermore, q cannot be affected
because if p is truly "read-only" then I still know what it is when I use it to
derive q (the other thread can’t modify it either).

In order to derive these benefits you need to either write objects that enforce
this behavior (which is highly inconvenient and hard to make efficient
in imperative languages), or use a programming language that supplies the ability
to do so as a first-class feature.

Another benefit is that persistent data structures can do structural sharing. Basically
the new version of a map, vector, list, or set can use references to point to any
parts of the old version that are still the same in the new version. This means,
for example, that adding an element to the head of a list with 1,000,000 entries
is still a constant-time operation: the new list simply points at the old one as its tail!
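In Clojure this is directly observable:

```clojure
;; A large immutable list.
(def big-list (into '() (range 1000000)))

;; conj on a list adds to the head in constant time; the new list
;; shares the entire old list as its tail (structural sharing).
(def bigger-list (conj big-list :new-head))

;; big-list itself is unchanged.
```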

Here are some of the features in Fulcro that trivially result from using persistent data structures:

Render is a function you make that generates a data structure known as the
VDOM (a lightweight virtual DOM)

On the first "frame", the real DOM is made to match this data structure.

On every subsequent frame, render is used to make a new VDOM. React
compares the prior VDOM (which is cached) to the new one, and then applies the
changes to the DOM.

The cool realization the creators of React had was that DOM operations
are slow and heavy, but there are efficient ways to figure out what
needs to be changed via the VDOM without you having to write a bunch of
controller logic.

Now, because React lives in a mutable space (Javascript), it allows all sorts of things
that can embed "rendering logic" within a component. This sounds like a good
idea to our OOP brains, but consider this:

What if you could have a complete snapshot of the state of your application, pass
that to a function, and have the screen just "look right". Like writing a 2D game: you
just redraw the screen based on the new "state of the world". All of a sudden your
mind shifts away from "bit twiddling" to thinking more about the representation
of your model with minimal data!

That is what we mean by "pure rendering".

Here’s an example to whet your appetite: Nested check-boxes.
In imperative programming each checkbox has its own state, and when we want a "check all"
we end up writing nightmares of logic to make sure the thing works right because we’re
having to store a mutable value into an object that then does the rendering.
Then we play with it and find out we
forgot to handle that event where some sub-box gets unchecked to
fire an event to ensure to uncheck the "select all"…​oh wait, but when I do that
it accidentally fires the event from "check all" which unchecks everything
and then goes into an infinite loop!

What a mess! Maybe you eventually figure out something that’s tractable, but
that extra bit of state in the "check all" is definitely the source of bugs.

Here’s what you do in pure rendering with immutable data:

Each sub-item checkbox is a simple data structure with a :checked? key that has a boolean
value. You use that to directly tell the checkbox what its state should be
(and React enforces that…​making it impossible for the UI to draw it any
differently)

(def state {:items [{:id :a :checked? true} {:id :b :checked? false} ...]})

For a "state of the world", these are read-only. (you have to make a "new
state of the world" to change one). When you render, the state of the
check-all is just the conjunction of its children’s :checked?:

The check-all button would have no application state at all, and React will
force it to the correct state based on the calculated value.
When the sub-items change, a new "state of the world"
is generated with the altered item:

(def next-state (assoc-in state [:items 0 :checked?] false))

and the entire UI is re-rendered (React makes this fast
using the VDOM diff), the "check all" checkbox will just be
right!

If the "check all" button is pressed, then the logic is similarly very simple:
change the state for the subitems to checked if any were unchecked, or set them
all to unchecked if they were all checked:
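That logic is a pure function of the state of the world. A sketch:

```clojure
;; Toggle all sub-items: if every item is checked, uncheck them all;
;; otherwise check them all.
(defn toggle-all [state]
  (let [all-checked? (every? :checked? (:items state))]
    (update state :items
      (fn [items]
        (mapv #(assoc % :checked? (not all-checked?)) items)))))
```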

Data-driven concepts were pioneered in web development by Facebook’s GraphQL and
Netflix’s Falcor. The idea is quite powerful, and eliminates huge amounts of
complexity in your network communication and application development.

The basic idea is this: Your UI, which might have various versions (mobile, web, tablet)
all have different, but related, data needs. The prevalent way of talking to our
servers is to use REST, but REST itself isn’t a very good query or update
language. It creates a lot of complexity that we have to deal with in order
to do the simplest things. In the small, it is "easy". In the large, it isn’t
the best fit.

Data-driven applications basically use a more detailed protocol that allows the
client UIs to specify what they need, and also typically includes a "mutation
on the wire" notation that allows the client to abstractly say what it
needs the server to do.

So, instead of /person/3 you can instead say "I need person 3, but only their
name, age, and billing info. But in the billing info, I only need to know their
billing zip code".

Notice that this abstract expression (which of course has a syntax we’re
not showing you yet) is "walking a graph". This is why Facebook calls their language
"GraphQL".

You can imagine that the person and billing info might be stored in two tables
of a database, with a to-one relationship, and our query is basically asking
to query this little sub-graph:

Modifications are done in a similar, abstract way. We model them as if
they were "function calls on the wire". Like RPC/RMI:

'(change-person {:id 3 :age 44})

but instead of actually 'calling' the function, we encode this list as
a data structure (it is a list containing a symbol and a map: the power of Clojure!) and then process that
data locally (in the back-end of the UI) and optionally also
transmit it 'as data' over the wire for server processing!

The client-side of Fulcro keeps all relevant data in a simple graph database, which
is referenced by a single top-level atom. The database itself is a persistent map.

The database should be thought of as a root-level node (the top-level map itself)
plus tables that can hold data relevant to any
particular component or entity in your program (component or entity nodes).

The tables are also simple maps, with a naming convention and well-defined structure.
The name of the table is typically namespaced with the "kind" of thing you’re storing,
and has a name that indicates the way it is indexed:
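For example, a table of people keyed by ID might look like this (field names are illustrative):

```clojure
{:person/by-id
 {4 {:db/id 4 :person/name "Joe" :person/age 22}}}
```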

Items are joined together into a graph using a tuple of the table name and the key of
an entity. For example, the item above is known as [:person/by-id 4]. Notice that this
tuple is also exactly the vector you’d need in an operation that would pull data from that
entity or modify it:
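For example (state-map is the database map; field names are illustrative):

```clojure
;; The ident [:person/by-id 4] doubles as a path for reads and updates:
(get-in state-map [:person/by-id 4 :person/name])
(update-in state-map [:person/by-id 4] assoc :person/age 45)
```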

These tuples are known as 'idents'. Idents can be used anywhere one node
in the graph needs to point to another. If the idents (which are vectors)
'appear' in a vector, then you are creating a 'to-many' relation:
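A sketch of such a table (the names and relationship fields here are reconstructed from the description that follows): Joe and Julie point at each other via to-one idents, and a vector of idents forms a to-many relation:

```clojure
{:person/by-id
 {1 {:db/id 1 :person/name "Joe"   :person/spouse [:person/by-id 2]}
  2 {:db/id 2 :person/name "Julie" :person/spouse [:person/by-id 1]
     :person/children [[:person/by-id 3] [:person/by-id 4]]}
  3 {:db/id 3 :person/name "Billy"}
  4 {:db/id 4 :person/name "Sally"}}}
```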

Notice in the example above that Joe and Julie point at each other. This creates
a 'loop' in the graph. This is perfectly legal. Graphs can contain loops. The
table in the example contains 4 nodes.

The client database treats the 'root' node as a special set of non-table properties
in the top of the database map. Thus, an entire state database with 'root node'
properties might look like this:

The above data structure is now a graph database that looks like this:

This makes for a very compact representation of a graph with an arbitrary number of nodes and edges.
All nodes but the special "root node" live in tables. The root node itself is special because
it is the storage location for both root properties and for the tables themselves.

Important

Since the root node and the tables containing other nodes are merged
together into the same overall map it is important that you use
care when storing things so as not to accidentally collide on a name. Larger programs
should namespace all keywords.

The graph database on the client is the most central and key concept to understand in Fulcro. Remember
that we are doing pure rendering. This means that the UI is simply a function transforming this
graph database into the UI.

There are two primary things to write in Fulcro: the UI and the mutations. The UI pulls data from
this database and displays it. The mutations evolve this database to a new version.
Every interaction that changes the UI should be thought of as a data manipulation. You’re making
a new state of the world that your pure renderer turns into DOM.

The graph format of the database means that your data manipulation, the main dynamic thing in
the entire application, is simplified down to updating properties/nodes, which themselves
live at the top of the state atom or are only 2-3 levels deep:
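For example, a typical data manipulation bottoms out in a simple map operation at a shallow path:

```clojure
;; state-atom holds the whole normalized database map.
(swap! state-atom assoc-in [:person/by-id 4 :person/age] 45)
```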

For the most part the UI takes care of itself. Clojure has very good functions for manipulating
maps and vectors, so even when your data structures get more complex the task is still about
as simple as it can be.

To avoid collisions in your database, the following naming conventions are recommended for
use in the Fulcro client-side graph database:

UI-only Properties

:ui/name. These are special in that they never end up in server queries
derived from components. Can be used on any node to hold UI-only state. Not needed if the node itself
is not involved with server interaction.

The core HTML5 elements all have simple factory functions that
generate the core elements that stand-in for the real DOM.
These stand-ins (commonly referred to as the virtual DOM or VDOM)
are ultimately what React uses to generate, diff, and update the real DOM.

So, there are functions for every possible HTML5 element. These are in the
fulcro.client.dom namespace.

Fulcro 2.4 and below required you to specify props as a #js map or nil.

Important

If you’re writing your UI in CLJC files in 2.5, then you need to make sure you use a conditional
reader to pull in the proper server DOM functions for Clojure:
(ns app.ui (:require #?(:clj [fulcro.client.dom-server :as dom] :cljs [fulcro.client.dom :as dom])))

As an example - CSS class names are specified with :className instead of :class.

Any time a VDOM includes a collection of elements they should each have a unique :key
attribute. This helps the React diff figure out how that collection has changed. You will
get warnings in the browser console if you fail to do so.
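For example (a sketch; people is an assumed vector of person maps with a :db/id):

```clojure
(dom/ul nil
  (map (fn [p]
         ;; each child of the collection gets a unique :key
         (dom/li #js {:key (:db/id p)} (:person/name p)))
       people))
```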

When you start a Fulcro application your :started-callback will get the completed app as a parameter.

Inside of this app is a reconciler under the key :reconciler. The reconciler is a central component in the system
that is responsible for reconciling the differences between the database and the UI. Therefore it
is involved in processing the queries, merging novel data into the database, network interactions, and tracking
mounted components that might need refresh.

You will see it mentioned in many places in this book, and we’ll point out where you’ll use it directly.

When you create a new client you can pass options directly to the reconciler with :reconciler-options.
There are a number of options that are used internally by the higher-level layers of Fulcro and should not
really be used directly, but there are a number of options that can be quite useful.

:shared

A map of global (immutable) properties that will be visible to all components. See Shared State.

:shared-fn

A function to compute shared properties from the root props on UI refresh. Only recomputed on
root-level refresh, such as a call to force-root-render. See Shared State.

:root-render

The root render function. Defaults to ReactDOM.render. Useful for switching to React Native.

:lifecycle

A function (fn [component event]) that is called when React components either :mount or :unmount. Useful for debugging tools.

:tx-listen

A function of 2 arguments that will listen to transactions. Called when transact! runs. Gets the
transaction and the environment. The environment includes :old-state, :new-state, the state atom, etc.

:instrument

A function that will wrap all rendering: (fn [{:keys [props children class factory]}]) that
is called instead of the real factory. You can use this to wrap all UI components for things like performance
timing, tooling, etc.

The reconciler does a number of things to optimize rendering beyond the basics of React.

Fulcro provides a built-in shouldComponentUpdate that uses a comparison of the prior props to
tell React to skip no-op updates that would lead to a useless VDOM diff.

When more than one component will refresh the components are refreshed in depth-first order. This allows Fulcro
to prevent double-rendering of children. (e.g. if a Parent and Child need to refresh then updating the parent
first will render the child, and the child can be skipped. If the child was rendered first then the parent refresh would
try to double-render the child).

If a component is the target of a refresh and has an ident then Fulcro will run the query for just that component,
and re-render just that component’s subtree (avoiding a root query and root render).

NOTE: a component without an ident will always trigger a root refresh, since there is no way to figure out
how to run that component’s query (queries are relative, and idents give you an anchor point for them).

The shouldComponentUpdate optimization reduces the load by quite a bit, but running the query from root can
be somewhat costly depending on how well you optimized your UI query. Thus, idents become a major
factor in both normalization and rendering performance since Fulcro relies on them in order to reduce
the query overhead of UI refresh.

Version 2.1+ of Fulcro includes the ability to tell the reconciler which rendering mode to use.

:normal

The default mode. Uses all possible optimizations.

:keyframe

Disables the ident-based targeted refresh. Thus every render is considered a key frame of the DOM.
This means that every transaction/change will run the root UI query and render from root. The shouldComponentUpdate
optimization is in force and prevents quite a bit of work from React. This mode can be plenty fast and has the advantage
of not needing you to program with follow-on reads.

:brutal

Disables all optimizations, runs queries from root, runs refresh from root, and forces React to do a full
DOM diff. Primarily useful to compare how much benefit optimizations are actually giving you beyond React’s DOM diff.

If you’re new to Fulcro and started with version 2.0, you can safely ignore most comments about defui. The two
are roughly equivalent, with defsc being the newer.

Fulcro’s defsc is a front-end to the legacy defui macro. It is sanity-checked for the most common elements: ident (optional), query,
render, and initial state (optional). The sanity checking prevents a lot of the most common errors when writing a component,
and the concise syntax reduces boilerplate to the essential novelty. The name means "define stateful component" and is
intended to be used with components that have queries (though that is not a requirement).

The core options (:query, :ident, :initial-state, :css, and :css-include) of defsc support both a lambda and a template
form. The template form is shorter and enables some sanity checks; however, it is not expressive enough to cover all
possible cases. The lambda form is slightly more verbose, but enables full flexibility at the expense of the sanity checks.

IMPORTANT NOTE: In lambda mode use this and props from the defsc argument list. So, for example, (fn [] [:x]) is
valid for query (this is added by the macro), and (fn [] [:table/by-id id]) is valid for ident.

Template idents are great for the common case, but they don’t work if you have a single instance ever (i.e. you want
a literal second element), and they won’t work at all for union queries. They also do not support embedded code.
Therefore, if you want a more advanced ident, you’ll need to spell out the code.
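A sketch of both forms (component names and fields are illustrative):

```clojure
;; Template ident: keywords name the table and the ID prop.
(defsc Person [this {:keys [person/name]}]
  {:ident [:person/by-id :db/id]
   :query [:db/id :person/name]}
  (dom/div nil name))

;; Lambda ident: needed when the second element is a literal
;; (a singleton component that has only one instance).
(defsc Toolbar [this props]
  {:ident (fn [] [:toolbar :main])
   :query [:toolbar/items]}
  (dom/div nil "toolbar"))
```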

As with :query and :ident, :initial-state supports a template and lambda form.

The template form for initial state is a bit magical, because it tries to sanity check your initial state, but also has
to support relations through joins. Finally, it tries to eliminate typing by auto-wrapping nested relation
initializations in get-initial-state for you, deriving the correct class to use from the query.
This further reduces the chances of error; however, you may find the terse result more difficult to read and
instead choose to write it yourself. Both ways are supported:

The query is analyzed for joins on keywords (ident joins are not supported).

If a key in the initial state matches up with a join, then the value in initial state must be a map or a vector.
In that case (get-initial-state JoinClass p) will be called for each map (to-one) or mapped across the vector (to-many).

REMEMBER: the value that you use in the initial-state for children is the parameter map to use against that child’s
initial state function. To-one and to-many relations are implied by what you pass (a map is to-one, a vector is to-many).

Step (1) means that nesting of param-namespaced keywords is supported, but realize that the params come from the
declaring component’s initial state parameters; they are substituted before being passed to the child.
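The steps above can be sketched as (component names and params are illustrative):

```clojure
(defsc Person [this {:keys [person/name]}]
  {:query         [:person/name]
   ;; :param/name is filled in from the params passed by the parent.
   :initial-state {:person/name :param/name}}
  (dom/div nil name))

(defsc Root [this props]
  {:query         [{:friend (prim/get-query Person)}]
   ;; Template form: because :friend joins Person in the query, this map
   ;; is auto-wrapped as (prim/get-initial-state Person {:name "Joe"}).
   ;; The equivalent lambda form would spell that call out explicitly.
   :initial-state {:friend {:name "Joe"}}}
  (dom/div nil "root"))
```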

Support for using fulcrologic/fulcro-css is built-in. Before 2.4 it was a dynamic dependency, so to use it you needed to include
the fulcrologic/fulcro-css library in your project dependencies and require the fulcro-css.css namespace in any file
that used this support.

As of version 2.4 it is integrated with Fulcro, and requires no special dependency.

The keys in the defsc options map to leverage co-located CSS are:

:css - The items to put in protocol method css/local-rules. Can be pure garden data, or (fn [] …​)

:css-include - The items to put in protocol method css/include-children. Can be a vector of classes, or (fn [] …​)

Both are optional. If you use neither, then your code will not incur a dependency on the fulcro-css library.

The options of defsc allow for React Lifecycle methods to be defined (as lambdas). The this parameter of defsc is
in scope for all of them, but not props or computed. You can obtain computed using prim/get-computed. This is
because the lifecycle method may receive prior or next props, and using the top parameter list could be confusing.

If you need to include additional protocols (or lifecycle React methods) on the generated class then you can use the
:protocols option. It takes a list of forms that have the same shape as the body of a defui, and the static qualifier
is supported. If you supply Object methods then they will be properly combined with the generated render:

Here is an example of adding Fulcro CSS using protocols instead of options:

The sanity checking mentioned in the earlier sections causes compile errors. The errors are intended to be self-explanatory.
They will catch common mistakes (like forgetting to query for data that you’re pulling out of props, or misspelling a property).

Feel free to edit the components in this source file and try out the sanity checking. For example, try:

Mismatching the name of a prop in options with a destructured name in props.

Destructuring a prop that isn’t in the query

Including initial state for a field that is not listed as a prop or child in options.

Using a scalar value for the initial value of a child (instead of a map or vector of maps)

Forgetting to query for the ID field of a component that is stored at an ident

In some cases the sanity checking is more aggressive than you might desire. To get around it simply use the lambda style.

:keyfn

A function from props to a React key. Should generally be supplied to ensure React rendering can properly diff.

:validator

A function from props to boolean. If it returns false then an assertion will be thrown at runtime.

:instrument?

A boolean. If true, it indicates that instrumentation should be enabled on the component.

Instrumentation is a function you can install on the reconciler that wraps component render allowing you to add
measurement and debugging code to your component’s rendering.

In Fulcro documentation we generally adopt the naming convention for UI factories to be prefixed with ui-. This
is because you often want to name joins the same thing as a component: e.g. your query might be
[{:child (prim/get-query Child)}], and then when you destructure in render: (let [{:keys [child]} (prim/props this) …​
you have local data in the symbol child. If your UI factory was also called child then it would cause annoying name
collisions. Prefixing the factories with ui- makes it very clear what is data and what is an element factory.

Properties are always passed to a component factory as the first argument. The properties can be accessed
from within render by calling fulcro.client.primitives/props on the parameter passed to render (typically named this
to remind you that it is a reference to the instance itself).

In components with queries there is a strong correlation between the query (which must join the child’s query),
props (from which you must extract the child’s props), and calling of the child’s factory
(to which you must pass the child’s data).
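That correlation can be sketched like this (assuming a Child component and its ui-child factory are defined as in the earlier examples):

```clojure
(defsc Parent [this {:keys [child]}]           ; 2. extract the child's props
  {:query [{:child (prim/get-query Child)}]}   ; 1. join in the child's query
  (dom/div nil
    (ui-child child)))                         ; 3. pass the child's data to its factory
```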

If you are using components that do not have queries, then you may pass whatever properties you deem useful.

Details about additional aspects of rendering are in the sections that follow.

It is possible that your logic and state will be much simpler if your UI components derive some values at render time.
A prime example of this is the state of a "check all" button. The state of such a button depends on other components
in the UI and is not a separate value. Thus, your UI should compute it rather than store it; otherwise it could
easily get out of sync and lead to more complex logic.

You should consider computing a derived value when:
* The known data from the props already gives you sufficient information to calculate the value.
* The computation is relatively light.

Some examples where UI computations are effective, light, or even necessary:

Rendering an internationalized value. (e.g. tr)

Rendering a check-all button

Rendering "row numbering" or other decorations like row highlighting
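For example, the check-all value can be derived at render time instead of being stored (a sketch; the Item component, ui-item factory, and check-all mutation are hypothetical):

```clojure
(defsc ItemList [this {:keys [items]}]
  {:query [{:items (prim/get-query Item)}]}
  ;; derived: true only when every item is checked; never stored in app state
  (let [all-checked? (every? :item/checked items)]
    (dom/div nil
      (dom/input #js {:type     "checkbox"
                      :checked  all-checked?
                      :onChange #(prim/transact! this `[(check-all {})])})
      (map ui-item items))))
```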

There are some trade-offs, but most significantly you generally do not want to compute things like the order/pagination of a list of items.
The logic and overhead in sorting and pagination often needs caching, and there are
clear and easy "events" (user clicking on sort-by-name) that make it clear when to call the mutation to update
the database. You still have to store the selected sort order, and you have to have idents pointing to the list of
items. It is possible for your "selected sort order" and list to become out of sync, but the trade-offs of sorting
in the UI are typically high, particularly when pagination is involved and large amounts of data would have
to be fed to the UI.

Many reusable components will need to tell their parent about some event. For example, a list item generally wants
to tell the parent when the user has clicked on the "remove" button for that item. The item itself cannot
be truly composable if it has to know details of the parent. But a parent must always know the details of
a child (it rendered it, didn’t it?). As such, manipulations that affect the content of a parent should be
communicated to that parent for processing. The mechanism for this is identical to what you’d do in stock
React: callbacks from the child.

The one major difference is how you pass the callback to a component.

The query and data feed mechanisms that supply props to a component are capable of refreshing a child without
refreshing a parent. This UI optimization can pull the props directly from the database using the query, and
re-feed them to the child.

But this mechanism knows nothing about callbacks, because they are not (and should not be) stored in
the client database. Such a targeted refresh of a component cannot pass callbacks through the props
because the parent is where that is coded, but the parent may not be involved in the refresh!

So, any value (function or otherwise) that is generated on-the-fly by the parent must be passed via
fulcro.client.primitives/computed. This tells the data feed system how to reconstruct the complete data should it do a targeted update.

Not understanding this can cause a lot of head scratching: The initial render will always work perfectly,
because the parent is involved. All events will be processed, and you’ll think everything is fine; however, if you
have passed a callback incorrectly it will mysteriously stop working after a (possibly unnoticeable) refresh. This
means you’ll "test it" and say it is OK, only to discover you have a bug that shows up during heavier use.
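A sketch of the pattern (the mutation and component names are illustrative):

```clojure
;; Parent render: attach the callback with `computed`, never as a plain prop
(map (fn [item]
       (ui-todo-item
         (prim/computed item
           {:onDelete (fn [id]
                        (prim/transact! this `[(delete-item {:id ~id})]))})))
  items)

;; Child render: recover the callback with `get-computed`
(let [{:keys [db/id]}    (prim/props this)
      {:keys [onDelete]} (prim/get-computed this)]
  (dom/button #js {:onClick #(onDelete id)} "Delete"))
```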

A very common pattern in React is to define a number of custom components that are intended to work in a nested fashion. So,
instead of just passing props to a factory, you might also want to pass other React elements. This is fully supported
in Fulcro, but can cause confusion when you first try to mix it with the data-driven aspect of the system.

At first this seems a little mind-bending, because you are in fact nesting components in the UI, but
the query nesting need only mimic the stateful portion of the UI tree. This means there is ample opportunity
to use React children in a way that looks incorrect from what you’ve learned so far. On deeper inspection
it turns out to be in alignment with the rules, but it takes a minute on first exposure.

Take the Bootstrap collapse component: It needs state of its own in order to know when it is collapsed,
and we’d like that to be part of the application database so that the support history viewer can show the
correct thing. However, the children of the collapse cannot be known in advance when writing the collapse
reusable library component.

The solution is simple once you see it: Query for the collapse component’s state and the child state in
the common parent component, then do the UI nesting in that component. Technically the component that is "laying out" the
UI (the ultimate parent) is in charge of both obtaining and rendering the data. The fact that the UI child ends
up nested in a query sibling is perfectly fine.

The collapse component itself is only concerned with the fact that it is open/closed, and that it has children that
should be shown/hidden. The actual DOM elements of those children are immaterial, and can be assembled by the parent:
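A sketch of this composition (assuming fulcro.ui.bootstrap3 aliased as b, plus a Person component with a ui-person factory):

```clojure
(defsc Parent [this {:keys [collapse person]}]
  {:query [{:collapse (prim/get-query b/Collapse)}  ; the collapse's own open/closed state
           {:person   (prim/get-query Person)}]}    ; the child's data, queried as a *sibling*
  ;; The UI nesting happens here, even though the queries are siblings:
  (b/ui-collapse collapse
    (ui-person person)))
```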

Form inputs in React can take two possible approaches: controlled and uncontrolled. The browser normally maintains
the value state of inputs for you as mutable data; however, this breaks our overall model of pure rendering! The
advantage is UI interaction speed: If your UI gets rather large, it is possible that UI updates on keystrokes in
form inputs may be too slow. This is the same sort of trade-off that we talked about when covering component
local state for rendering speed with more graphical components. If you follow the basic optimization guidelines
then your application should be fast enough to do database updates on every keystroke, and you can keep all input
changes in your client database.

In general it is recommended that you use controlled inputs and retain the benefits of pure rendering: no embedded
state, your UI exactly represents your data representation, concrete devcards support for UI prototyping, and full
support viewer support.

Most inputs become controlled when you set their :value property. The table below lists the mechanism whereby
a form input is completely controlled by React:

Input type | Attribute | Notes

input | :value | (not checkboxes or radio)

checkbox | :checked |

radio | :checked | (only one in a group should be checked)

textarea | :value |

select | :value | Instead of marking an option selected, match the select’s :value to the :value of a nested option.

Important

React will consider nil to mean you want an uncontrolled component. This can result in
a warning about converting uncontrolled to controlled components. In order to prevent this warning you should make
sure that :checked is always a boolean, and that other inputs have a valid :value (e.g. an empty string). The
select input can be given an "extra" option that stands for "not selected yet" so that you can start its value
at something valid.
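A controlled text input then looks something like this (a sketch within a component's render, assuming value is destructured from props and fulcro.client.mutations is aliased as m):

```clojure
(dom/input #js {;; (or value "") keeps the input controlled even before the
                ;; prop has a value; a nil :value would make it uncontrolled
                :value    (or value "")
                :onChange (fn [evt]
                            (m/set-string! this :person/name :event evt))})
```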

There are some common use-cases that can only be solved by working directly with the React Lifecycle methods.

Some topics you should be familiar with in React to accomplish many of these things are:

Component references: A mechanism that allows you access to the real DOM of the component once it’s on-screen.

Component-local state: A stateful mechanism where mutable data is stored on the component instance.

General DOM manipulation. ClojureScript builds use the Google Closure compiler and therefore
include the Google Closure library, which in turn has all sorts of helpful low-level functions should you need them.

Focus is a stateful browser mechanism, and React cannot force the rendering of "focus". As such, when you need
to deal with UI focus it generally involves some interpretation, and possibly component local state. One way
of dealing with deciding when to focus is to look at a component’s prior vs. next properties. This can be
done in componentDidUpdate. For example, say you have an item that renders as a string, but when clicked
turns into an input field. You’d certainly want to focus that, and place the cursor at the end of the
existing data (or highlight it all).

If your component had a property called editing? that you made true to indicate it should render as an input
instead of just a value, then you could write your focus logic based on the transition of your component’s props
from :editing? false to :editing? true.
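A sketch of that transition check as a defui Object method (the "edit-field" ref name is illustrative):

```clojure
(componentDidUpdate [this prev-props prev-state]
  (let [{:keys [editing?]} (prim/props this)]
    ;; focus only on the transition from plain label to input
    (when (and editing? (not (:editing? prev-props)))
      (when-let [input (dom/node this "edit-field")]
        (.focus input)
        ;; select the existing text so typing replaces it
        (.setSelectionRange input 0 (.. input -value -length))))))
```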

However, the wrapped inputs of Fulcro 2.3.0 and earlier (which fixed a different issue with React) did
not work with the functional ref technique (use strings and dom/node
instead). As of 2.3.1 this is fixed, and while inputs still have the internal wrapping to prevent "lost keys" bugs,
they work with both the older string-based refs and functional refs.

Libraries like D3 are great for dynamic visualizations, but they need full control
of the portion of the DOM that they create and manipulate.

In general this means that your render method should be called once
(and only once) to install the base DOM that the other library
will then control.

For example, let’s say we wanted to use D3 to render things. We’d first
write a function that would take the real DOM node and the incoming
props:

(defn db-render [DOM-NODE props] ...)

This function should do everything necessary to render the sub-dom (and
update it if the props change). Then we’d wrap that under a component that
doesn’t allow React to refresh that sub-tree via shouldComponentUpdate.

We override the React lifecycle method shouldComponentUpdate to return false. This tells React to never ever call
render once the component is mounted. D3 is in control of the underlying stuff.

We override componentWillReceiveProps and componentDidMount to do the actual D3 render/update. The former will
get incoming data changes, and the latter is called on initial mount. Our render method
delegates all of the hard work to D3.
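Putting those lifecycle overrides together, the wrapper might look like this sketch (db-render is the function described above):

```clojure
(defui D3Thing
  Object
  (componentDidMount [this]
    ;; initial mount: hand the real DOM node and props to D3
    (db-render (dom/node this) (prim/props this)))
  (shouldComponentUpdate [this next-props next-state]
    ;; never let React re-render this subtree; D3 owns it
    false)
  (componentWillReceiveProps [this next-props]
    ;; incoming data changes flow straight to D3, bypassing React rendering
    (db-render (dom/node this) next-props))
  (render [this]
    (dom/svg #js {:width 400 :height 300})))
```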

Sometimes you need to use component-local state to avoid the overhead in running a query to feed props. An example
of this is when handling mouse interactions like drag. You’ll typically use React refs to grab the actual low-level canvas.

There are actually two ways to change component-local state. One of them defers rendering to the next animation frame,
but it also reconciles the database with the stateful components. This one will not give you as much of a speed boost
(though it may be enough, since you’re not changing the database or recording more UI history).

The other mechanism completely avoids this, and just asks React for an immediate forced update.

(set-state! this data) and (update-state! this data) - trigger a reconcile against the database at the next animation frame. Limits frame rate to 60 fps.

The component receives mouse move events to show a hover box. To make this move in real-time we use component
local state. Clicking to set the box, or resize the container are real transactions, and will actually cause
a refresh from application state to update the rendering.

One of the great parts about React is the ecosystem. There are some great libraries out there.
However, the interop story isn’t always straight forward. The goal of this section is to make that story a little clearer.

Integrating React components is fairly straightforward if you have used React from JS before. The learning curve comes from having
spent time with libraries or abstractions like Om and friends. JSX also abstracts some of this away, so it’s not
just the cljs wrappers. For a good article explaining some of the concepts, read React Elements. The take-aways here are:

If you are importing third party components, you should be importing the class, not a factory.

You need to explicitly create the react elements with factories. The relevant js functions are React.createElement, and React.createFactory.

When using any of these functions it is very important to consider the children. JS does not have a built-in notion
of lazy sequences; ClojureScript does. This can create subtle bugs when evaluating the children of a component.

fulcro.util/force-children helps us in this regard by taking a seq and returning a vector. We can use this to
create our own factory function, much like React.createFactory:

This is fine, but you will notice that children in our factory may be missing keys. Because we passed a vector in,
React won’t attach the key attribute. We can solve this problem by using the apply function.

Here the apply function will pass the children in as args to React.createElement, thus avoiding the key problem
as well as the issue with lazy sequences.
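A factory built this way might look like the following sketch:

```clojure
(defn factory-apply
  "Return a factory fn for a JS React class. Using `apply` spreads the
   children as individual arguments to createElement, which realizes lazy
   sequences and lets React see each child's key."
  [js-component-class]
  (fn [props & children]
    (apply js/React.createElement js-component-class props children)))
```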

Now that we have some background on creating React Elements it’s pretty simple to implement something.
Let’s look at making a chart using Victory. We are going to make a simple line chart, with an X axis that contains
years, and a Y axis that contains dollar amounts. Really the data is irrelevant; it’s the implementation we care about.

A common pattern in React libraries is to use a function as a single child instead of an actual element. This
is an accepted and widely used pattern, but you need to do a simple extra step for it to work properly with
Fulcro. You see, Fulcro components use some behind-the-scenes bindings to allow for targeted UI rendering
optimizations, and when you embed them in a function that is invoked from external JS out of that context, things
won’t work correctly.

For example, the react-motion library gives you React tools that can animate DOM motion. It animates variables
that you apply to nested DOM, and it does this through the function-as-a-child pattern. Here’s an example from
a demo project (which uses shadow-cljs to get easy access to NPM
libraries):

The key is the call to with-parent-context. It causes the enclosed elements to have bindings pulled
from the component passed as the first parameter (in this case this). Rendering will work
correctly without this wrapper, but interactions (particularly transact!) will not operate correctly
without it.
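A sketch of the pattern (ui-motion and the spring call are assumptions about how you have wrapped react-motion's Motion class):

```clojure
(ui-motion #js {:style #js {:x (js/ReactMotion.spring 100)}}
  (fn [style]
    ;; re-establish Fulcro's rendering bindings inside the JS-invoked child fn
    (prim/with-parent-context this
      (dom/div #js {:style #js {:left (.-x style)}}
        "I move!"))))
```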

CSS can be co-located on any component. This CSS does not take effect until it is embedded on the page
(see Embedding The CSS below). The typical steps for usage are:

Add localized rules to your component via the :css option of defsc or by implementing the fulcro-css.css/CSS protocol’s local-rules.
The result of this must be a vector in Garden notation. Any rules included here will be automatically prefixed with the CSSified namespace
and component name to ensure name collisions are impossible.

(optional) Add "global" rules. The vector of rules (in the prior step) can use a $ prefix to prevent localization (:$container instead of :.container).

(optional) Add the :css-include option to defsc. This MUST be a vector of components that are used within the render
that also supply CSS. This allows the library to compose together your CSS according to what components you use. Pulling
the CSS rules from some top-level component will dedupe any inclusions before generating the actual on-DOM CSS.

Use one of many options to get the "munged" classnames for use in your code:

(fulcro-css.css/get-classnames Component) returns a map of namespaced CSS classes keyed by the simple name you used in your garden rules.

The 4th argument to defsc is that same map, on which you can apply destructuring to get the munged class names.

Use fulcro.client.localized-dom for DOM rendering, and have it all automatically done for you!

Use the fulcro-css.css/upsert-css or fulcro-css.css/style-element function (or your own style element) to embed the CSS.

There are two ways of placing the CSS for a (group of) components on a page:

(upsert-css ID Component) will pull (recursively) the rules of Component, translate them to legal CSS, and then insert them
on the body of the page’s DOM at the given ID (overwriting the old style element with that same ID). If you ensure that
this upsert happens on hot code reload, then your CSS will update as you edit your code.

Typically, you’ll run upsert-css with your initial mount and in your hot code reload trigger. This means that the
computational overhead for the CSS is limited to initial startup.

(style-element Component) will pull (recursively) the rules for Component and return a React style element. This allows
you to embed CSS for a sub-tree of components in an "on-demand" fashion, since the element will only render if the component
that uses it renders. The problem with this method is that the computational overhead for computing the CSS is moved into
your primary rendering. This, however, is quite convenient in situations like devcards where you’d like to easily ensure
that the CSS is there, but don’t want to have to worry about conflicting with existing style elements on the top-level page.

The live demo below upserts the co-located CSS from the code itself, but the embedded rules also
base the color on the value in an atom (think theme color). At any time you can change your various
embedded data on colocated CSS and re-upsert the generated result!

If you choose to add localized CSS rules to your components, then you will probably also
want to use the DOM elements that support it natively. The fulcro.client.localized-dom
uses the same more compact notation of dom, but
it interprets the CSS keywords in the context of the component! This means that you can
use localized CSS without having to destructure the munged names at all.

There is a fulcro.client.localized-dom-server namespace that provides the CLJ versions of the DOM functions. In
order to write UI in CLJC files you will need to make sure you use a conditional reader tag to include the
correct namespace for the correct language:
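A minimal CLJC namespace declaration then looks like:

```clojure
(ns app.ui
  (:require
    #?(:clj  [fulcro.client.localized-dom-server :as dom]
       :cljs [fulcro.client.localized-dom :as dom])))
```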

Fulcro has the start of an image clip tool.
Right now it is mainly for demonstration purposes, and is a good example of a complex UI component where two components
have to talk to each other and share image data.

You should study the source code (src/main/fulcro/ui/clip_tool.cljs) to get the full details,
but here is an overview of the critical facets:

ClipTool creates a canvas on which to draw

It uses initial state and a query to track the setup (e.g. size, aspect ratio, image to clip)

For speed (and because some data is not serializable), it uses component-local state to track current clip region,
a javascript Image object and the DOM canvas (via React ref) for rendering, and the current active operation (e.g. dragging a handle).

The mouse events are essentially handled as updates to the component local state, which causes a local component
render update.

PreviewClip is a React component, but not data-driven (no query). Everything is just passed through props.

It technically knows nothing of the clip tool.

It expects an :image-object and clip data to be passed in…​it just renders it on a canvas.

It uses React refs to get the reference to the real DOM canvas for rendering

It renders whenever props change

A callback on the ClipTool communicates through the common parent’s component-local state. The parent
will re-render when its state changes, which will in turn force a new set of props to be passed to the preview.

When using external libraries with Fulcro you’ll often run into the higher-order component pattern.
React Higher Order Components (HOCs) can be used with
Fulcro, but due to how Fulcro works internally some interop/glue code is needed.

For example google-maps-react provides GoogleApiWrapper HOC that handles
dynamic loading and initialisation of Google Maps Javascript library so it doesn’t have to be handled manually in every place where
Google Maps React component (like Map or Marker) is used. In this particular example, GoogleApiWrapper creates a wrapper
component class that behaves in the following way:
- it will display a placeholder "Loading" component and trigger Google Maps script loading and initialisation
- when the script loading and initialisation is complete it will replace "Loading" placeholder with the wrapped component

So what are the issues with using React HOCs in Fulcro interop? There are two directions to handle:
* Fulcro will embed React JS components (ones created by the HOC)
* The React JS HOC will wrap Fulcro components

In the first case React JS components (like these from google-maps-react) expect props to be plain JavaScript objects,
not ClojureScript maps. Fulcro components pass ClojureScript maps to nested components, so the props need to be converted.
This part is described in Fulcro Book’s chapter on
Factory Functions for JS React Components.
All we need to do is to have a factory function that will do the conversion. For example:
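A sketch of such a factory (clj->js performs the props conversion; Map is the class imported from google-maps-react, and the factory name is illustrative):

```clojure
(defn js-factory
  "Wrap a JS React class so it can be called with CLJS map props."
  [js-class]
  (fn [props & children]
    (apply js/React.createElement js-class (clj->js props) children)))

(def ui-map (js-factory Map))
```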

We also need a LocationView factory that will take the JS props received from the HOC
and recover our CLJS map props, enhancing them with the google object provided by the HOC. We use the "function as child" React pattern.

The utils-hoc namespace presented below provides a reusable hoc-factory that can be used to handle all
the boilerplate code and interop gluing. It supports an :extra-props-fn in the opts argument that can be
used to customize the final props passed to the wrapped component. The code below shows its usage, where the google value
from js-props (injected by the google-maps-react HOC wrapper) needs to be propagated under the :google entry in the CLJS
props passed to the wrapped LocationView component.

The defui macro generates a React component. It does the same thing as the
defsc macro, but looks more like a defrecord and is a bit more OO in style. It
does not error-check your work, nor does it allow you to destructure incoming data
over the body or options; however, it is syntax-compatible with Om Next, so if you’re
porting from that library it can be useful.

It is 100% compatible with the React ecosystem. The macro is intended
to look a bit like a class declaration, and borrows generation notation style from defrecord. There is no
minimum required list of methods (e.g. you don’t even have to define render). This latter fact is useful
for cases where you want a component for server queries and database normalization, but not for rendering.

See React Lifecycle Examples for some specific examples, and the React documentation for a complete description of each of these.

Note

Fulcro does override shouldComponentUpdate to short-circuit renders of a component whose props have not changed. You
generally do not want to change this to make it render more frequently; however, when using Fulcro with
libraries like D3 that want to "own" the portion of the DOM they render you may need to make it so that
React never updates the component once mounted (by returning false always). The Developer’s Guide shows an example
of this in the UI section.

defui supports implementations of protocols in a static context. It basically
means that you’d like the methods you’re defining to go on the class (instead of instance), but conform to the
given protocol. There is no Java analogue for this, but in Javascript the classes themselves are open.

Warning

Since there is no JVM equivalent of implementing static methods, a hack is used internally where the
protocol methods are placed in metadata on the resulting symbol. This is the reason functions like
get-initial-state exist. Calling the protocol (e.g. initial-state) in Javascript will work, but if you
try that when doing server-side rendering on the JVM, it will blow up.

There are two core protocols for supporting a component’s data in the graph database. They work in tandem to
find data in the database for the component, and also to take data (e.g. from a server response or initial state) and
normalize it into the database.

Both of these protocols must be declared static. The reason for this is initial normalization and query: The
system has to be able to ask components about their ident and query generation in order to turn a tree of data
into a normalized database.
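A defui declaring both static protocols might look like this sketch:

```clojure
(defui Person
  static prim/IQuery
  (query [this] [:db/id :person/name])
  static prim/Ident
  ;; the ident tells normalization which table/id this data lives at
  (ident [this props] [:person/by-id (:db/id props)])
  Object
  (render [this]
    (let [{:keys [person/name]} (prim/props this)]
      (dom/div nil name))))
```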

Queries must be composed towards the root component (so you end up with a UI query that can pull the entire
tree of data for the UI).

This is wrong because the query will end up annotated with PersonView2’s metadata. Never use the return
value of get-query as the return value for your own query.

The query will be structured with joins to follow the UI tree. In this manner the render and query
follow form. If you query for some subcomponent’s data, then you should pass that data to that
component’s factory function for rendering.
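For example (PersonView2 stands in for any child component):

```clojure
;; Correct: compose the child's query into a NEW vector via a join
(query [this] [{:people (prim/get-query PersonView2)}])

;; Wrong: returning the child's query directly; the result carries
;; PersonView2's metadata and breaks normalization
;; (query [this] (prim/get-query PersonView2))
```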

Before reading this chapter you should make sure you’ve read The Graph Database Section. It details
the low-level format of the application state, and talks about general details that
are referenced in this chapter.

In Fulcro all data is pulled from the database using a notation that is a subset of Datomic’s pull query syntax. Since
the query is a graph walk it is a relative notation: it must start at some specific spot, but that spot is not
always named in the query itself. On the client side the starting point is usually the root node of your database. Thus,
a complete query from the Root UI component will be a graph query that can start at the root node of the database.

However, you’ll note that any query fragment is implied to be relative to where we are in the walk of the graph
database. This is important to understand: no component’s query can just be grabbed and run against the database
as-is. Then again, if you know the ident of a component, then you can start at that table entry in the database
and go from there.

The mutation language is a data representation of the abstract actions you’d like to take on the data model. It is
intended to be network agnostic: The UI need not be aware that a given mutation does local-only modifications and/or
remote operations against any number of remote servers. As such, the mutations, like queries, are simply data. Data
that can be interpreted by local logic, or data that can be sent over the wire to be interpreted by a server.

Queries can either be a vector or a map of vectors. The former is a regular component query, and the latter is
known as a union query. Union queries are useful when you’re walking a graph edge and the target could be
one of many different kinds of nodes, so you’re not sure which query to use until you actually are walking
the graph.

The simplest thing to query are properties "right here" in the relative node of the graph. Properties are queried
by a simple keyword. Their values can be any scalar data value that is serializable in EDN.

The query

[:a :b]

is asking for the properties known as :a and :b at the "current node" in the graph traversal.

A join represents a traversal of an edge to another node in the graph. The notation is a map with a single key
(the local key on the current node that holds the "pointer" to another node) whose single value is the
query for the remainder of the graph walk:

[{:children (prim/get-query Child)}]

The query itself cannot specify that this is a to-one or to-many join. The data in the database graph itself
determines the arity when the query is being run. Basically, if walking the join property leads to a vector of
links, it is to-many. If it leads to a single link, then it is to-one. Rendering the data is going
to have the same concern so the arity of the relation more strongly affects the rendering code.

Joins should always use get-query to get the next component in the graph. This annotates (with metadata) the sub-query
so that normalization can work correctly.

Unions represent a map of queries, only one of which applies at a given graph edge. This is a form of
dynamic query that adjusts based on the actual linkage of data. Unions cannot stand alone.
They are meant to select one of many possible alternate queries when a link (to-one or to-many join) in the
graph is reached. Unions are always used in tandem with a join, and can therefore not be used on root-level
components. The union query itself is a map of the possible queries:

The query would start at the root. When it saw the join it would detect a union. The union would be resolved
by looking at the first element of the ident in the database (in this case :place from [:place 3]). That keyword
would then be used to select the query from the subcomponent union (in this example, (prim/get-query Place)).

Processing of the query then continues as normal as if the join was just on Place.

and now you have a mixed to-many relationship where the correct sub-query will be used for each item in turn.

Normalization of unions requires that the union component itself have an ident function that can properly
generate idents for all of the possible kinds of things that could be found. Often this means that you’ll need
to encode some kind of type indicator in the data itself.

Often it is easier to just include a :type field so that ident can look up both the type and id.

Rendering the correct thing in the UI of the union component has the same concern: you must detect what
kind of data (among the options) you actually received, and pass it on to the correct child factory (e.g.
ui-person, ui-place, or ui-thing). This is most commonly done with a simple case statement.
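A sketch of such a union component, where a :type field in the data drives both the ident and the render dispatch (the Person/Place/Thing components and their factories are assumed):

```clojure
(defui PersonPlaceOrThingUnion
  static prim/Ident
  ;; the first element of the ident selects which branch of the union applies
  (ident [this {:keys [type id]}] [type id])
  static prim/IQuery
  (query [this] {:person (prim/get-query Person)
                 :place  (prim/get-query Place)
                 :thing  (prim/get-query Thing)})
  Object
  (render [this]
    (let [{:keys [type] :as props} (prim/props this)]
      (case type
        :person (ui-person props)
        :place  (ui-place props)
        :thing  (ui-thing props)))))
```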

The fulcro.client.routing/defrouter macro emits a union component that can be switched to point at any kind
of component that it knows about. The support for parameterized routers in the routing tree makes it possible
to very easily reuse the UI router as a component that can show one of many screens in the same location.

This is particularly useful when you have a list of items that have varying types, and you’d like to, for example,
show the list on one side of the screen and the detail on the other.

To write such a thing one would follow these steps:

Create one component for each item type that represents how it will look in the list.

Create one component for each item type that represents the fine detail view for that item.

Join (1) together into a union component and use it in a component that shows them as a list. In other words
the union will represent a to-many edge in your graph. Remember that unions cannot stand alone, so there
will be a union component (to switch the UI) and a list component to iterate through the items.

Combine the detail components from (2) into a defrouter (e.g. named :detail-router).

Create a routing tree that includes the :detail-router, and parameterize both elements of the target ident (kind and id)

Hook a click event from the items to a route-to mutation, and send route parameters for the kind and id.

Mutations are also just data, as we mentioned earlier. However, they are intended to look like single-argument
function calls where the single argument is a map of parameters:

[(do-something)]

The main concern is that this expression, in normal Clojure, will be evaluated because it contains a raw list.
In order to keep it data, one must quote expressions with mutations. Of course you may use syntax quoting
or literal quoting. Usually we recommend namespacing your mutations (with defmutation) and then using
syntax quoting to get reasonably short expressions:
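For example (a sketch; the mutation body is illustrative):

```clojure
(defmutation do-something
  [{:keys [x] :as params}]
  (action [{:keys [state]}]
    (swap! state assoc :last-x x)))

;; Syntax quoting resolves the symbol to its full namespace at the call site:
(prim/transact! this `[(do-something {:x 1})])
```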

Most of the query elements also support a parameter map. In Fulcro these are mainly useful when sending a query
to the server, and it is rare you will write such a query "by hand". However, for completeness you should know
what these look like. Basically, you just surround the property or join with parentheses, and add a map as
parameters. This is just like mutations, except instead of a symbol as the first element of the list it is either
a keyword (prop) or a map (join).

Thus a property can be parameterized:

[(:prop {:x 1})]

This would cause, for example, a server’s query processing to see {:x 1} in the params when handling the read
for :prop.

A join is similarly parameterized:

[({:child (prim/get-query Child)} {:x 1})]

with the same kind of effect.

Note

The plain list has the same requirement as for mutations: quoting. Generally syntax quoting is again the best
choice, since you’ll often need unquoting. For example, the join example above would actually be written in code as:

...
(query [this] `[({:child ~(prim/get-query Child)} {:x 1})])
...

to avoid trying to use the map as a function for execution, yet allowing the nested get-query to run and embed
the proper subquery.

This has the effect of "re-rooting" the graph walk at that ident’s table entry and continuing from there for the
rest of the subtree. In fact this is how Fulcro’s ident-based rendering optimization works.

There are times when you want to start "back at the root" node. This is useful for pulling data that has
a singleton representation in the root node itself. For example, the current UI locale or currently logged-in
user. There is a special notation for this that looks like an ident without an ID:

[ [:ui/locale '_] ]

This component query would result in :ui/locale in your props (not an ident) with a value that came from the
overall root node of the database. Of course, denormalization just requires you use a join:

[ {[:current-user '_] (prim/get-query Person)} ]

would pull :current-user into the component’s props with a continued walk of the graph. In other words this is
just like the ident join, except the special symbol _ indicates there is only one of them and it is in the root
node.

The problem is that the query engine walks the database and query in tandem. When it sees a join
(:locale-selector in this case) it goes looking for an entry in the database at the current location
(root node in this case) to process the subquery against. If it finds an ident it follows it and processes
the subquery. If it is a map it uses that to fulfill the subquery. If it is a vector then it processes the
subquery against every entry. But if it is missing, then it stops.

The fix is simple: make sure the component has a presence in the database, even if empty:

An alternative to ident and link queries is shared state. The first thing to note is that it is not
a complete replacement for link queries. It is a low-level feature that is meant for two basic scenarios:

Data that needs to be visible to all components, but never changes once the app is mounted.

Data that is derived from the UI props (from root) or globals, but only updates on root-level renders (not component-local updates).

The first use-case might be handy if you pass some data to your mount through the HTML page itself. The
latter is useful for data that affects everything in your application, such as the current user.

The primary thing to remember is that components that look at shared state will not see updates unless a
root render occurs with those updates. This typically means calling prim/force-root-render!.

Say we wanted all components to be able to see :pi (a constant) and :current-user (a value from
the database). We could declare this as follows:
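A sketch of that declaration, assuming Fulcro 2's `new-fulcro-client` accepts `:shared` and `:shared-fn` directly (check your version's docstring; these may belong in the reconciler options):

```clojure
(fulcro.client/new-fulcro-client
  ;; Static shared data: never changes after the app is mounted.
  :shared    {:pi 3.14159}
  ;; Derived shared data: recomputed from root props, but only on root renders.
  :shared-fn (fn [root-props]
               {:current-user (:current-user root-props)}))
```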

Remember that this is not equivalent to a link query for [:current-user '_]. There are three differences.
The first is that pulling :current-user still requires that your root component query for it (or it
won’t even be in the props). Second, the shared value will not visibly change until a root render happens, whereas
link queries can refresh locally with a component. The final difference is that if you use data in
your shared-fn that is derived from anything other than the state database then it will not work correctly
in the history support viewer.

Fulcro’s query syntax includes support for recursive queries. Recursion is always expressed on a join,
and it always means that the recursive item has the same type as the component you’re on.
There are two notations for this: …​ and a number. The former means "recurse until there are no more links
(circular detection is included to prevent infinite loops)", and the other is the recursion limit:
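For example, a component whose children have its same type could use either notation on the join (the `:category/children` edge is a hypothetical name):

```clojure
;; Unlimited recursion: follow :category/children until there are no more
;; links (loop detection prevents infinite traversal):
[:db/id :category/label {:category/children '...}]

;; Bounded recursion: descend at most 3 levels:
[:db/id :category/label {:category/children 3}]
```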

Note

At the time of this writing you must use the lambda mode of defsc for queries that include recursion.

The following demo (with source) demonstrates the core basics of recursion:

It is perfectly legal to include recursion in your graph, and it is equally fine to
query for it. The query engine will automatically stop if a loop is detected.

However, this is not the whole story. You see, components can be updated in a relative
fashion when all optimizations are enabled. This means that a refresh could happen anywhere
in the (recursive) UI, and the query would run until it detects the loop again. This can
lead to funny-looking results.

The demo below lets you modify people and their spouse (a circular relation). Try it out
and you’ll see that something isn’t quite right (try making Sally older):

The problem is that when you touch Sally the UI refresh updates just that component; however,
that component has a recursive query of depth 1, so it ends up returning Joe as her spouse! This
is technically correct, but almost certainly isn’t what you want!

The fix is equally simple: calculate depth and pass it to the child. Use the calculated depth
to prevent the extra rendering when local refresh gives you data you don’t need. The
demo below has this fix.

Of course it is perfectly fine for there to be multiple edges in your graph that point
to the same node. Below is a recursive bullet list example. We’ve intentionally nested
item B.1 under B and D so you can see that it all works itself out.

Normalization of initial state (which must be a tree) is perfectly happy to see duplicate
entries. It simply merges the multiple copies into the same normalized entry in the table.

Since the two entries merge to the same entry, it also means the modifications will
be shared among them. Try checking item B.1 in either location.

You can convert any expression in the query/mutation language into an AST (abstract syntax tree) and vice
versa. This lends itself to doing complex parsing of the query (typically on the server). The functions
of interest are fulcro.client.primitives/query→ast and ast→query.

There are many uses for this. One such use might be to convert the graph expression into another form. For
example, say you wanted to run a query against an SQL database. You could write an algorithm that translates
the AST into a series of SQL queries to build the desired result. The AST is always available as one
of the parameters in the mutation/query env on the client and server.

Another use for the AST is in mutations targeted at a remote: it turns out you can morph a mutation before
sending it to the server.

The most common use of the AST is probably adding parameters that the UI is unaware need to be sent to
a remote. When processing a mutation with defmutation (or just the raw defmethod) you will receive
the AST of the mutation in the env. It is legal to return any valid AST from the remote side of a
mutation. This has the effect of changing what will be sent to the server:
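A sketch of this (the mutation name and the `:auth-token` parameter are hypothetical): the remote section receives the mutation's AST in env and can return a modified copy, which is what actually goes over the wire:

```clojure
(defmutation do-something [params]
  (action [{:keys [state]}]
    ;; The optimistic update runs locally as usual.
    (swap! state assoc :ui/busy? true))
  (remote [{:keys [ast state]}]
    ;; Return a modified AST: add a parameter the UI never supplied
    ;; before the mutation is sent to the server.
    (update ast :params assoc :auth-token (:auth-token @state))))
```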

When starting any application one thing has to be done before just about anything else: Establish a starting state. In Fulcro
this just means generating a client-side application database (normalized). Other parts of this guide have talked about
the Graph Database. You can well imagine that hand-coding one of these for a large application’s starting
state could be kind of a pain. Actually, even though coding it would be a pain, it turns out that the bigger pain
happens later when you want to refactor! That can become a real mess!

However, Fulcro already knows how to normalize a tree of data, and your UI is already the tree you’re interested in.
So, Fulcro encourages you to co-locate initial application state with the components that need the state and compose
it towards the root, just like you do for queries. This gives some nice results:

Your initial application state is reasoned about locally to each component, just like the queries.

Refactoring the UI just means modifying the local composition of queries and initial state from one place
to another in the UI.

Fulcro understands unions (you can only initialize one branch of a to-one relation), and can scan
for and initialize alternate branches.

For each component that should appear initially: add the :initial-state option.

Compose the components in (1) all the way to your root.

That’s it! Fulcro will automatically detect initial state on the root, and use it for the application!

Note

Pulling the initial state from a component should be done with fulcro.core/get-initial-state. Invoking the static
protocol directly cannot work on the server, so this helper function makes server-side rendering possible for your components.

Notice the nice symmetry here. The initial state is (usually) a map that represents (recursively) the entity and
its children. The query is a vector that lists the "scalar" props, and joins as maps. So, in Child we have
initial state for :x and a query for :x. In the parent we have a query for the property :y and a join to
the child, and initial state for the scalar value of :y and the composed initial state of the Child. Render has
the same thing: the things you pull out of props will be the things for which you queried. Thus, all three essentially
list the same things, but in slightly different forms.
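A sketch of that symmetry (the component names and props are hypothetical; `dom` is fulcro.client.dom):

```clojure
(defsc Child [this {:keys [x]}]          ; props destructuring mirrors the query
  {:query         [:x]
   :initial-state {:x 1}}
  (dom/div nil (str "x is " x)))

(def ui-child (prim/factory Child))

(defsc Parent [this {:keys [y child]}]
  {:query         [:y {:child (prim/get-query Child)}]  ; a scalar and a join
   :initial-state {:y 2 :child {}}}                     ; {} composes Child's initial state
  (dom/div nil
    (dom/p nil (str "y is " y))
    (ui-child child)))
```

The query, the initial state, and the render each list the same things (`:y` and the child) in their own form.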

The one "extra" feature that initial state support does for you is to initialize alternate branches of components that
have a to-one union query. Remember that a to-one relation from a union could be to any number of alternates.

It means "if you find an ident in the graph pointing to a :person, then query for the person. If you find one
for :place, then query for a place. The problem is: if it is a to-one relation then only one can be in the
initial state tree at startup!

If you look at a proposed initial state, it will make the problem more clear:

{:person-or-place [:person 2]
 :person          {2 {:id 2 ...}}}

Fulcro solves this at startup in the following manner: It pulls the query from root and walks it. If it finds
a union component, then for each branch it sees if that component (via the query metadata) has initial state. If
it does, it places it in the correct table in app state. This does not, of course, join it to anything in the graph
since it isn’t the "default branch" that was explicitly listed (in PersonPlaceUnion’s InitialAppState).

This behavior is critical when using unions to handle UI routing, which is in turn essential for good application
performance.

If you remember from the diagram about pure rendering, then you’ll also note that this step
generates the first state in that progression. Rendering any state results in the UI for that state.

An interesting note is that this model also results in a really useful property: You can take the initial state,
run it though the implementation of one or more mutations, and end up with any other state. This means you can
easily reason about initializing your application into any state, which is useful for things like testing and server-side rendering.

There are all sorts of very useful features that fall out of this. For example, it is also possible to record a series
of "user interactions" (which can be recorded as a list of the mutations that ran) and replay those. This could be used
to send a tester a sequence of steps to show recent development work, run automated demos/tests, teleport your development
environment to a specific page, etc.

Writing tests against the state model and mutation implementations is a great way to unit test your application
without needing to involve the UI itself at all! You can read more about that in the section on
Visual Regression Testing.

The function prim/tree→db is the workhorse that turns an incoming tree of data into normalized data (which can then
be merged into the overall database).

Imagine an incoming tree of data:

{ :people [ {:db/id 1 :person/name "Joe" ...} {:db/id 2 ...} ... ] }

and the query:

[{:people (prim/get-query Person)}]

which expands to:

[{:people [:db/id :person/name]}]
^ metadata {:component Person}

tree→db recursively walks the data structure and query:

At the root, it sees :people as a root key and property. It remembers it will be writing :people to the root.

It examines the value of :people and finds it to be a vector of maps. This indicates a to-many relationship.

It examines the metadata on the subquery of :people and discovers that the entries are represented by
the component Person.

For each map in the vector, it calls the ident function of Person (which it found in the metadata) to get a
database location. It then places the "person" values into the result via assoc-in on the ident.

It replaces the entries in the vector with the idents.

If the metadata was missing then it would assume the person data did not need normalization. This is why it is
critical to compose queries correctly. The query and tree of data must have a parallel structure, as should the
UI. In template mode defsc will try to check some things for you, but you must ensure that you
compose the queries correctly.
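The walk described above can be illustrated with plain Clojure. This hand-rolled sketch does what prim/tree→db would do for this particular query (the :person/by-id table name comes from a hypothetical Person ident function; the real function is generic and driven by the query metadata):

```clojure
(defn person-ident [person] [:person/by-id (:db/id person)])

(defn normalize-people
  "Hand-rolled equivalent of tree->db for [{:people (prim/get-query Person)}]."
  [tree]
  (let [people (:people tree)]
    {:people       (mapv person-ident people)                          ; entries replaced by idents
     :person/by-id (into {} (map (juxt :db/id identity)) people)}))    ; values land in a table

(normalize-people {:people [{:db/id 1 :person/name "Joe"}
                            {:db/id 2 :person/name "Sally"}]})
;; => {:people [[:person/by-id 1] [:person/by-id 2]]
;;     :person/by-id {1 {:db/id 1 :person/name "Joe"}
;;                    2 {:db/id 2 :person/name "Sally"}}}
```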

The process described above is how most data interactions occur. At startup the :initial-state supplies data that
exactly matches the tree of the UI. This gives your UI some initial state to render. The normalization mechanism
described above is exactly what happens to that initial tree when it is detected by Fulcro at startup.

Network interactions send a UI-based query (which remember is annotated with the components). The query is
remembered and when a response tree of data is received (which must match the tree structure of the query), the
normalization process is applied and the resulting normalized data is merged with the database.

If using websockets, it is the same thing: A server push gives you a tree of data. You could hand-normalize that data,
but actually if you know the structure of the incoming data you can easily generate a client-side query (using
defsc or defui) that can be used in conjunction with prim/tree→db to normalize that incoming data.

Mutations can do the same thing. If a new instance of some entity is being generated by the UI as a tree of data, then
the query for that UI component can be used to turn it into normalized data that can be merged into the state
within the mutation.

Some useful functions to know about:

prim/merge-component - A utility function for merging new instances of a (possibly recursive) entity state into
the normalized database. Usable from within mutations.

Mutations are known by their symbol and are dispatched to the internal multimethod
fulcro.client.mutations/mutate. To handle a mutation you can do two basic things: use defmethod
to add mutation support, or use the macro defmutation. The macro is recommended for most cases
because it namespaces the mutation, prevents some common errors, and works better with IDEs.

There are multiple passes on a mutation: one local, and one for each possible remote. It is
technically the job of the mutation handler to return a lambda for the local pass, and a boolean (or AST)
for each remote. Returning nil from any pass means to not do anything for that concern.

For example, say you have three remotes: one for normal API, one that hits a REST API, and one for
file uploads. Each would have a name, and each pass of the mutation handling would be interested
in knowing what you’d like to do for the local or remote.

The mutation environment (env in the examples) contains a target that is set to a remote’s name when
the mutation is being asked for details about how to handle the mutation with respect to that remote.

For each pass the mutation is supposed to return a map whose key is :action or the name of the remote, and
whose value is the thing to do (a lambda for :action, and AST or true/false for remotes).

Summary:

You transact! somewhere in the UI

The internals call your mutation with :target set to nil in env. You return a map with an :action key
whose value is the function to run.

The internals call your mutation once for each remote, with :target set. You return a map with
that remote’s keyword as the key, and either a boolean or AST as the remote action. (true means send the
AST for the expression sent in (1) to the remote)

Since the action is just data, it doesn’t matter that we "generate" it for the multiple passes. Same for
the remotes.
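A sketch of the raw multimethod form (the mutation symbol and table name are hypothetical). Both keys can live in the same returned map; the internals pick out the part relevant to each pass:

```clojure
(require '[fulcro.client.mutations :as m])

(defmethod m/mutate `api/save-person
  [{:keys [state target] :as env} _ params]
  {:action (fn [] (swap! state assoc-in [:person/by-id (:id params)] params))
   :remote true}) ; true: forward the original mutation expression to the :remote remote
```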

Reminder

The example above uses syntax quoting on the symbol which will add the current
namespace to it. In any case the symbol is just that: a symbol (data) that acts as the dispatch
key for the multimethod. If you use a plain quote (') then you should manually namespace the symbol.

Some common possible mistakes are:

You side-effect. Your mutation will be called at least two times so this is a bad idea.
Side effects should be wrapped in the action.

You assume that the remote expression "sees" the old state (e.g. you might build an AST based on
what is in app state). The local action is usually run before the remote passes, meaning the state has already changed
and the remote logic is seeing the "new" client database state.

You forget to return a map with the correct keys (usually if you made mistake 1).

There is no guaranteed order to evaluation. Therefore if you need a value from state as it was seen
when the mutation was triggered: send it as a parameter to the mutation from the UI (where you knew the old value).
That way the call itself has captured the old value.

Thus it ends up looking more like a function definition. IDEs like Cursive can be told how to resolve
the macro (as defn in this case) and will then let you read the docstrings and navigate to the definition
from the usage site in transact!. This makes development a lot easier.

Another advantage is that the symbol is placed into the namespace in which it is declared (not interned,
just given the namespace…​it is still just symbol data). Syntax quoting can expand aliasing, which means
you get a very nice tool experience at usage site:

The final advantage is it is harder to accidentally side-effect. The action section of defmutation
will wrap the logic in a lambda, meaning that it can read as-if you’re side-effecting, but in fact
will do the right thing.

These advantages mean you should generally use the macro to define mutations, but it is good
to be aware that underneath it is just a multimethod.

The most common case of non-local UI refresh comes up with parent-child relationships. In these cases, the parent can
be seen as a UI component in control of the children (it is responsible for telling them to render). In such cases it is
usually better to reason about the management logic (i.e. deleting, reordering, etc) from the parent; however,
it is commonly the case the you want the child to render some controls, such as a delete button. Thus, the control
that wants to modify the state (the delete button on the item) is not local to the component that will need to refresh
(the parent’s list).

The solution is simple: create a callback in the parent that can run the delete transaction and pass it through
as a computed value.

In our example, we’ll assume the delete is global (removes the item from the list and the normalized table):

It indicates that the given data will have changed, and therefore any on-screen component that queries for
that particular data should also be refreshed when the transaction completes the optimistic update (and again
after the remote interaction, if there is one).

Follow-on reads allow the developer to reason more abstractly about non-local UI refresh. They need only
think about what data is changing, and not about what components might be displaying it. This allows the UI
to evolve without additional concerns that refresh will be slow (due to expensive data analysis) or will become
broken (because you didn’t know a component needed refresh).

This model was inherited from Om Next, and it is the correct model to use when the transaction might, for example,
modify opaque data that is not directly queried but which would cause a component to need a refresh; however,
since most transactions run a mutation that is really more aware of what data is changing
it makes quite a bit of sense for you to be able to declare this on the mutation itself.

Fulcro 2.0+ supports a way of declaring follow-on reads that allows for better local reasoning: co-locate the
follow-on reads with the mutation itself. The mechanism is quite simple: add a refresh section on your mutation
and return the list of keywords for the data that changed:

In this case the transaction is running on a component that doesn’t query for the data being changed (it is pinging the
Left component). The built-in refresh list on the mutation takes care of the update!

The live example below is a full-stack demo of this. The buttons update data that the other button displays.
The `transact!`s on these would normally require follow-on reads or a callback to the parent to refresh properly.
With the refresh list on the mutation itself the UI designer is freed from this responsibility.

The right button uses data from the server in a pessimistic fashion (it does no optimistic update, and you can
increase the simulated delay on the server), so pinging it from the left actually reads a value from the server.
This demonstrates that the refresh is working for full-stack operations.

Mutations themselves are meant to be abstractions across the entire stack; however, the optimistic side of them is
really just a function on your application state. Components have nice clean abstractions,
and you will often benefit from writing low-level functions that represent the general operations on a component’s data. As
you move towards higher-level abstractions you’ll want to compose those lower-level functions. As such, it pays
to think a little about how this will look over time.

If you write the actual logic of a mutation into a defmutation, then composition is difficult because the
model does not encourage recursive calls to transact!. This will either lead to code duplication or other bad
practices.

To maximize code reuse, local reasoning, and general readability it pays to think about your mutations in the following manner:

A mutation is a function that changes the state of the application: state → mutation → state'

Within a mutation, you are essentially doing operations to a graph, which means you have operations that
work on some node in the graph: node → op → node'. These operations may modify a scalar or an edge to another node.

You could even take it a step further with a little more sugar by defining a helper that can turn a node-specific
op into a db-level operation (again note that the thing being updated is the first argument):

If you find that a given entity is always modified in the context of the state map itself (a common
case) then it can be a bit shorter to just push the table logic (ident resolution) into the operation itself:
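For instance (table and field names are hypothetical), a state-map-level operation that adds a friend edge might look like:

```clojure
(defn add-friend*
  "Add friend-id to person-id's list of friends. Works directly on the
   normalized state map; the * suffix marks it as a composable db helper."
  [state-map person-id friend-id]
  (update-in state-map [:person/by-id person-id :person/friends]
             (fnil conj []) [:person/by-id friend-id]))

(add-friend* {:person/by-id {1 {:db/id 1} 2 {:db/id 2}}} 1 2)
;; => {:person/by-id {1 {:db/id 1 :person/friends [[:person/by-id 2]]}
;;                    2 {:db/id 2}}}
```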

Once you have your general operations written as basic functions on either the entire state (like update-person) or
targeted to nodes or the state map itself (like add-friend*), then it becomes much easier to create mutations that
compose together operations to accomplish any higher-level task.

For example, the fulcro.client.routing/update-routing-links function takes a state map, and changes all of the routers
in the application state to show that particular screen. So, say you wanted add-friend to also
take you to the screen that shows the details of that particular person. The top-level abstract mutation in the UI
might still be called add-friend, but the internals now have two things to do.

Having all of these functions on the graph database allows you to write this in a very nice form as
a sequence of operations on the state map itself through threading:
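A sketch of that composition (add-friend* is a hypothetical state-map helper like those above, and the route parameters are illustrative; check update-routing-links for the exact argument shape in your version):

```clojure
(defmutation add-friend [{:keys [person-id friend-id]}]
  (action [{:keys [state]}]
    (swap! state
           (fn [s]
             (-> s
                 (add-friend* person-id friend-id)                       ; modify the graph
                 (r/update-routing-links {:handler      :person-details  ; then route to the detail screen
                                          :route-params {:person-id friend-id}}))))))
```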

Of course, this can also be overkill. It is true that it is often handy to be able to compose many db operations together
into one abstract mutation, but don’t forget that more than one mutation can be triggered by a single call
to transact!:

You’ll want to balance your mutations just like you do any other library of code: so that
reuse and clarity are maximized. In the case of mutations the deciding factor is often
how you want to deal with remote mutations.

This section covers a number of additional mutation techniques that arise in more advanced situations.
Almost all of these circumstances arise
from needing to modify your application database outside of the normal prim/transact! mechanism at the UI layer.

A lot of these things are handled for you with normal full-stack operations, so you might want to skip this
section until you’re comfortable with that material.

The first note is that prim/transact! can be used on the reconciler. If you’ve saved your Fulcro Application
in a top-level atom then you can run a transaction "globally" like this:

(prim/transact! (:reconciler @app) ...)

This should generally be used in cases where there is an abstract operation (e.g. you want setTimeout to update
a js/Date to the current time and have the screen refresh). Using (prim/transact! (:reconciler @app) '[(update-time) :current-time]) is
much clearer and more in the spirit of the framework than low-level data tweaking. That could also be done in
the context of a component to prevent an overall root re-render, though you’d want to be careful to use both sides of
the component lifecycle to install and remove a timer that triggers such an update.

In some cases you will have obtained some data (or perhaps invented it) and you need to integrate that data into the
database. If the data matches your UI structure (as a tree) and you have proper Ident declarations on those components
then you can simply transform the data into the correct shape via the tree→db function using a component’s
query.

Unfortunately, you would then need to follow that transform by a sequence of operations on app state to merge those
various bits.

The function also requires a query in order to do normalization (split the tree into tables).

Important

The general interaction with the world requires integration of external data (often in a tree format) with
your app database (normalized graph of maps/vectors). As a result, you almost always want a component-based query when
integrating data so that the result is normalized.

When in a mutation you very often need to place an ident in various spots in your graph database. The helper
function fulcro.client/integrate-ident can be used from within mutations to help you do this. It accepts
any number of named parameters that specify operations to do with a given ident:
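A sketch of its use inside a mutation's action (tables and fields are hypothetical; see the docstring for the exact named parameters your version supports):

```clojure
(swap! state
       (fn [s]
         (fulcro.client/integrate-ident s [:person/by-id 3]
           ;; add the ident to the end of a to-many list of idents:
           :append  [:list/by-id :friends :list/people]
           ;; set a to-one edge (or replace an indexed to-many element):
           :replace [:root/current-person])))
```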

This function checks for the existence of the given ident in the target list, and will refuse to add it if it is
already there. The :replace option can be used on a to-one or to-many relation. When replacing on a to-many, you
use an index in the target path (e.g. [:table id :field 2] would replace the third element in a to-many :field)

If your UI doesn’t have a query that is convenient for sending to the server (or for working on tree data like this),
then it is considered perfectly fine to generate components just for their queries (no render). This is often quite
useful, especially in the context of pre-loading data that gets placed on the UI in a completely different form (e.g. the
UI queries don’t match what you’d like to ask the server).

Of course, you can see that you’re still going to need to merge the database table contents into your main app state
and carefully integrate the other bits as well.

Reminder

The ident part of the component is the magic here. This is why you need component queries for
this to work right. The ident functions are used to determine the table locations and idents to place into
the normalized database!

Fulcro includes a function that takes care of the rest of these bits for you. It requires the reconciler (which
as we mentioned earlier can be obtained from the Fulcro App). The arguments are similar to tree→db:

(prim/merge! (:reconciler @app) ROOT-data ROOT-query)

The same things apply as with tree→db (idents especially); however, the result of the transform will make its way into
the app state (which is owned by the reconciler).

IMPORTANT: The biggest challenge with using this function is that it requires the data and query to be structured
from the ROOT of the database! That is sometimes perfectly fine, but our next section talks about a helper that
might be easier to use.")

There is a common special case that comes up often: You want to merge something that is in the context of some particular UI component.

(prim/merge-component! app ComponentClass ComponentData)

Think of this case as: I have some data for a given component (which MUST have an ident). I want to merge into that
component’s entry in a table, but I want to make sure the recursive tree of data also gets normalized
properly.

merge-component! also integrates the functionality of integrate-ident! to pepper the ident of the merged entity throughout
your app database, and can often serve as a total one-stop
shop for merging data that is coming from some external source.

One of the most interesting and powerful things about Fulcro is that the model for server interaction is unified into
a clean data-driven structure. At first the new concepts can be challenging, but once you’ve seen the core primitive
(component-based queries/idents for normalization) we think you’ll find that it dramatically simplifies everything!

In fact, now that you’ve completed the materials of this guide on the graph database, queries, idents, and
normalization, it turns out that the server interactions become nearly trivial!

Not only is the structure of server interaction well-defined, Fulcro comes with pre-written server-side code
that handles all of the Fulcro plumbing for you. You can choose to provide as little or as much as you like. The easy
server code provides everything in a Ring-based stack, and you can also choose to hand-build your server using
a simple API handler for the Fulcro API route.

Even then, there are a lot of possible pitfalls when writing distributed applications. People often underestimate just how hard
it is to get web applications right because they forget that a web application is a distributed application.

So, while the API and mechanics of how you write Fulcro server interactions are as simple as possible there is no getting
around that there are some hairy things to navigate in distributed apps independent of your choice of tools.
Fulcro tries to make these things apparent, and it also tries hard to make sure you’re able to get it right without
pulling out your hair.

Here are some of the basics:

Networking is provided.

The protocol is EDN on the wire (via transit), which means you just speak clojure data on the wire, and can
easily extend it to encode/decode new types.

All network requests (queries and mutations) are processed sequentially unless you specify otherwise. This allows you
to reason about optimistic updates (Starting more than one at a time via async calls could
lead to out-of-order execution, and impossible-to-reason-about recovery from errors).

You may provide fallbacks that indicate error-handling mutations to run on failures.

Writes and reads that are enqueued together will always be performed in write-first order. This ensures that remote reads are
as current as possible.

Any :ui/ namespaced query elements are automatically elided when generating a query from the UI to a server, allowing you
to easily mix UI concerns with server concerns in your component queries.

Normalization of a remote query result is automatic.

Deep merge of query results uses intelligent overwrite for properties that are already present in the client database.

Any number of remotes can be defined (allowing you to easily integrate with microservices).

Protocol and communication is strictly constrained to the networking layer and away from your application’s core structure,
meaning you can actually speak whatever and however you want to a remote. In fact
the concept of a remote is just "something you can talk to via queries and mutations".
You can easily define a "remote" that reads and writes browser
local storage or a Datascript database in the browser. This is an extremely powerful generalization for isolating
side-effect code from your UI.
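As a small illustration of the :ui/ elision mentioned above, consider this sketch (the Person component and its keys are invented for illustration):

```clojure
(defsc Person [this props]
  {:query [:db/id :person/name :ui/expanded?]   ; :ui/expanded? is client-only state
   :ident [:person/by-id :db/id]}
  ...)

;; When this component's query is sent to a server, the :ui/ namespaced keys are
;; elided automatically, so the wire query is just [:db/id :person/name]:
(df/load this :people Person)
```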

Note

To those of you with REST APIs. Fulcro can be made to work with REST, but full
stack data-driven architectures are often at odds with REST. You will end up writing a network layer on your
client that translates graph queries to one or more REST calls, and then combines those results into a tree of response
data that you can pass back up the chain. The UI is wonderfully protected from this, but when possible you
will find the most leverage by writing the server to support the graph queries directly (consider your REST API a legacy
thing that your new UI doesn’t want/need). Things like GraphQL are a different story. That is a much simpler translation
layer.

Incremental loads of sub-graphs of something that was previously loaded.

Event-based loads (e.g. user or timed events)

Integrating data from other external sources (e.g. server push)

In standard Fulcro networking, all of the above have the following similarities:

A component-based graph query needs to be involved (to enable auto-normalization). This is true even for server push (though in that case the client needs to know what implied question the server is sending data about).

The data from the server will be a tree that has the same shape as the query.

The data needs to be normalized into the client database.

Optionally: after integrating new data there may be some need to transform the result to a form the UI needs
(e.g. perhaps you need to sort or paginate some list of items that came in).

IMPORTANT: Remember what you learned about the graph database, queries, and idents. This section cannot possibly be understood
properly if you do not understand those topics!

So, here is the secret: When external data needs to go into your database it all uses the exact same mechanism: a
query-based merge. So, for a simple load: you send a UI-based query to the server, the server responds with a tree
of data that matches that graph query, and then the query itself (which is annotated with the components and ident functions)
can be used to normalize the result. Finally, the normalized result can be merged into your existing client database.

We have all sorts of ways we’d like to view data. Perhaps we’d like to view "all the people who’ve ever had a particular
phone number". That is something we can very simply represent with a UI graph, but may not be trivial to pull from
our database.

In general, there are a few approaches to resolving our graph differences:

Use a query parser on the server to piece together data based on the graph of the query.

Ask the server for exactly what you want, using an invented well-known "root" keyword, and hand-code the database
code to create the UI-centric view.

Ask the server for the data in a format it can easily provide and morph it on the client.

The first two have the advantage of making the client blissfully unaware of the server schema. It just asks for what
it needs, and someone on the server programming team is stuck with satisfying the query. This is the down-side: the number
of possible UI-centric queries could become quite large. Theoretically a parser solution makes this more tractable
than a hand-coded variant, but in practice the parser is hard to make general in a way that allows UI developers to just
run willy-nilly queries and get what they want.

In the example of "people who’ve had a particular phone number", the graph would be phone-number centric. Maybe
[:db/id :phone/number {:phone/historical-owners (prim/get-query Person)}]. There probably isn’t a graph edge in the
real database called :phone/historical-owners. One could write parser code that understood this particular edge,
and did the logic. In this case, that really isn’t even that hard and may be a good choice.

Fulcro gives you another easy-to-access option: morph something the server can easily provide (on the
real graph without custom code). We’ll show you an example of this as we explore the data fetch options.

Fulcro will serialize requests unless you mark queries as parallel (an option you can specify on load). Two different
events that queue mutations or loads will be processed in order. For example, if the user clicks on something and you trigger two loads
during that event, then both of those will be combined (if they don’t conflict by querying for the same thing) and
sent as one network request. If the user clicks on something else and that handler queues two more loads, then the latter two
loads will not start over the network until the first load sequence has completed.

This ensures that you don’t get out-of-order
server execution. This is a distributed system, so it is possible for a second request to hit a less congested server (or even thread)
than the first and get processed out of order. That would hurt your ability to reason about your program, so the default behavior in
Fulcro is to ensure that server interactions happen on the server in the same order as they do on the client.

If you combine mutations and reads in the same event processing (before giving up the thread), then Fulcro also ensures
that remote mutations go over the wire and complete before reads. The idea being that you don’t want to fetch data
and then immediately make it stale through mutations. This additional detail is also aimed at preventing subtle classes
of application bugs.

In Summary:

Loads/transactions queued while "holding the UI thread" will be joined together in a single network request.

Remote writes go before reads.

Loads/transactions queued during a later UI event are guaranteed to be processed after ones queued earlier.

There is a potential for a data-driven app to create a new class of problem related to merging data. The normalization
guarantees that the data for any number of views of the same thing are normalized to the same node in the graph
database.

Thus, your PersonListRow view and PersonDetail view should both normalize the same person data to
a location like [:person/by-id id]. Let’s say you have a quick list of people on the screen that is paginated and
demand-loaded, where each row is a PersonListRow with query [:db/id :person/name {:person/image (prim/get-query Image)}].
Now say that you can click on one of these rows and a side-by-side view of that person’s PersonDetail is shown,
but the query for that is a whole bunch of stuff: name, age, address, phone numbers, etc. A much larger query.

A naive merge could cause you all sorts of nightmares. For example, refreshing the list rows would load only name and image,
but if the merge overwrote the entire table entry then the current detail would suddenly empty out!

Fulcro provides you with an advanced merging algorithm that ensures these kinds of cases don’t easily occur. It does an
intelligent merge with the following algorithm:

It is a detailed deep merge. Thus, the target table entry is updated, not overwritten.

If the query asks for a value but the result does not contain it, then that value is removed.

If the query didn’t ask for a value, then the existing database value is untouched.

This reduces the problem to one of potential staleness. It is technically possible for an entity in the resulting client
database to be in a state that has never existed on the server because of such a partial update. This is considered
to be a better result than the arbitrary UI madness that a more naive approach would cause.

For example, in the example above a query for a list row will update the name and image. All of the other details (if already
loaded) would remain the same; however, it is possible that the server has run a mutation that also updated this person’s
phone number. The detail part of the UI will be showing an updated name and image, but the old phone number. Technically
this "state of Person" has never existed in the server’s timeline, but from a user’s perspective it looks better than
the phone number disappearing.

In practice this isn’t that big of a deal; however, if you are switching the UI to an edit mode it is generally a good practice to
on-demand load the entity being edited to help prevent user confusion and accidental overwrite based on stale values.

React lifecycle and load may not mix well. It is technically legal to issue transactions and loads from React lifecycle
methods, but it is recommended that you carefully check the state of the system in said mutations before actually
queuing network traffic.
Perhaps you wish to ensure something is loaded. To do that: trigger a mutation that checks state and optionally
submits a load. You’ll learn how to do this in the sections on the data fetch API.

If you’re using the standard server support of Fulcro, then the API hooks are already predefined for you and you
can use helper macros to generate handlers for queries. If your load specified a keyword then this is seen by
the server as a load targeted to your root node. Thus, the server macro is called defquery-root:
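A sketch of such a handler (the :people key and the returned data are invented for illustration):

```clojure
(server/defquery-root :people
  "Answer a root-targeted query for :people."
  (value [env params]
    ;; Return a tree of data matching the subquery found in (:query env).
    [{:db/id 1 :person/name "Sally"}
     {:db/id 2 :person/name "Tom"}]))
```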

It’s as simple as that! Write a function that returns the correct value for the query. The query itself will be available
in the env, and you can use libraries like pathom, Datomic, and fulcro-sql to parse those queries into the proper tree
result from various data sources. See the Query Parsing chapter.

In this demo we’re loading lists of people (thus the keyword’s name). There are two kinds of people available from the
server: friends and enemies. We use the :params config parameter to add a map of parameters to the network request
to specify what we want. The :target key is the path to the (normalized) entity’s property under which the response
should be stored once received.

Since all components should be normalized, this target path is almost always 2 elements (loading a whole component into a table)
or 3 elements (loading one or more things into a property of an existing component).

The server-side code for handling these queries uses a global table on the server, and is:

Once started, any given person can be refreshed at any time. Timeouts, user event triggers, etc. There are two ways
to refresh a given entity in a database. In the code of Person below you’ll see that we are using load again, but
this time with an ident:

(df/load this (prim/ident this props) Person)

All of the parameters of this call can be easily derived when calling it from the component needing the refresh, so
there is a helper function called refresh! that makes this a bit shorter to type:
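A sketch of the shorthand:

```clojure
;; Equivalent to (df/load this (prim/ident this props) Person)
;; when called from within a Person instance:
(df/refresh! this)
```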

A short while ago we noted that loads are targeted at the root of your graph, and that this wasn’t always what you
wanted. After all, your graph database will always have other UI stuff. For example there might be the concept of
a "current screen" (join from root) that might currently point to a "friends screen", which in turn is where you want
to load that list of friends:

If you’ve followed our earlier recommendations then your application’s UI is normalized and any given
spot in your graph is really just an entry in a top-level table. Thus, the path to the desired location
of our friends is usually just 3 deep. In this case: [:screens/by-type :friends :friends/list].

If we were to merge the earlier load into that database we could get what we want by just moving graph edges (where
a to-one edge is an ident, and a to-many edge is a vector of idents):

The append-to function in the data-fetch namespace augments the target to indicate that the incoming items
(which will be normalized) should have their idents appended onto the to-many edge found at the given location.

Note

append-to will not create duplicates.

The other available helper is prepend-to. Using a plain target is equivalent to full replacement.

You may also ask the targeting system to place the result(s) at more than one place in your graph. You do this
with the multiple-targets wrapper:
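Putting the targeting options together, the loads might look like this sketch (the keywords and paths are invented for illustration):

```clojure
;; Plain target: replace the edge at the given path with the loaded ident(s):
(df/load this :friends Person
  {:target [:screens/by-type :friends :friends/list]})

;; Append the loaded idents onto an existing to-many edge (no duplicates):
(df/load this :friends Person
  {:target (df/append-to [:screens/by-type :friends :friends/list])})

;; Place the loaded idents at more than one location in the graph:
(df/load this :friends Person
  {:target (df/multiple-targets
             [:screens/by-type :friends :friends/list]
             (df/append-to [:lists/by-id :everyone :list/people]))})
```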

The component that issued the load will automatically be refreshed when the load completes. You may use the data-driven
nature of the app to request other components refresh as well. The :refresh option tells the system what data has
changed due to the load. It causes all live components that have queried those things to refresh.
You can supply keywords and/or idents:

; load my best friend, and re-render every live component that queried for the name of a person
(df/load comp :best-friend Person {:refresh [:person/name]})

Loads allow a number of additional arguments. Many of these are discussed in more detail in later sections:

:post-mutation and :post-mutation-params

A mutation to run once the load is complete (local data transform only).

:remote

The name of the remote you want to load from.

:refresh

A vector of keywords and idents. Any component that queries these will be re-rendered once the load completes.

:parallel

Boolean. Defaults to false. When true, bypasses the sequential network queue. Allows multiple loads to run at once, but causes you to lose any guarantees about ordering since the server might complete them out-of-order.

:fallback

A mutation to run if the server throws an error during the load.

:focus

A subquery to filter from your component query. Covered in Incremental Loading.

:without

A set of keywords to elide from the query. Covered in Incremental Loading.

:params

A map. If supplied the params will appear as the params of the query on the server.

:initialize bool|map

If true, uses component’s initial state as a basis for incoming merge. If a map, uses the map as the basis for incoming merge.

The :parallel option of load bypasses the normal network sequential queue. Below is a simple live
example that shows off the difference between regular loads and those marked parallel. In
order to see the effect increase your server latency to something like 5 seconds.

Normally, Fulcro runs separate event-based loads in sequence, ensuring that your reasoning can be synchronous;
however, for loads that might take some time to complete, and for which you can guarantee order of
completion doesn’t matter, you can specify an option on load (:parallel true) that allows them to proceed in parallel.

Pressing the sequential buttons on all three (in any order) will take
at least 3x your server latency to complete from the time you click the first one (since each will run after the other is complete).
If you rapidly click the parallel buttons, then the loads will not be sequenced and you will see them all complete in roughly
1x your server latency (from the time you click the last one).

This support is not complete yet (in version 2.0.0). It should be considered ALPHA and subject to change.

On occasion you may find that your entities have :ui/??? attributes that you would like to
default to something on a loaded entity. This is the purpose of the :initialize option to load.
If it is set to true, then load will call get-initial-state on the component of the
load, and merge the return value from the server into that before merging it to app state.

Alternatively, you can pass :initialize a map, and that will be used as the target for
the server response merge before normalizing the result into app state.

Note

The value of :initialize must either be true or a map that matches the correct
shape of the component’s sub-tree of data. It must not be a normalized database fragment.

The steps are:

Send the request

Merge the response into the basis defined by :initialize.

Merge the result of (2) into the database using the component’s query (auto-normalize)
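For example, a load that defaults a client-only attribute might look like this sketch (the keys are invented for illustration):

```clojure
;; The server result for this person is merged into the given map before
;; normalization, so :ui/expanded? gets a default value on the new entity:
(df/load this [:person/by-id 3] Person {:initialize {:ui/expanded? false}})
```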

The targeting system that we discussed in the prior section is great for cases where your data-driven query gets you
exactly what you need for the UI. In fact, since you can process the query on the server it is entirely possible that
load with targeting is all you’ll ever need; however, from a practical perspective it may turn out that you’ve got a
server that can easily understand certain shapes of data-driven queries, but not others.

For example, say you were pulling a list of items from a database. It might be trivial to pull that graph of data
from the server from the perspective of a list of items, but let’s say that each item had a category. Perhaps you’d like to
group the items by category in the UI.

The data-driven way to handle that is to make the server understand the UI query that has them grouped by category; however,
that implies that you might end up embedding code on your server to handle a way of looking at data that is really
specific to one kind of UI. That tends to push us back toward a proliferation of code on the server that was a nightmare
in REST.

Another way of handling this is to accept the fact that our data-driven queries have some natural limits: If the
database on the server can easily produce the graph, then we should let it do so from the data-driven query; however,
in some cases it may make more sense to let the UI morph the incoming data into a shape that makes more sense to that
UI.

We all understand doing these kinds of transforms. It’s just data manipulation. So, you may find this has some
distinct advantages:

Simple query to the server (only have to write one query handler) that is a natural fit for the database there.

Simple layout in resulting UI database (normalized into tables and a graph)

It is perfectly legal to use defsc to define a graph query (and normalization) for something like this that doesn’t exactly
exist on your UI. This can be quite useful in the presence of post mutations that can re-shape the data.

The example below simulates post mutations to show how a load of simple data could be morphed into something
that the UI wants to display. In this case we’re pretending that the load has brought in a number of items
(as a collection) and normalized it, but we’d prefer to show the items organized by category.

You can interact with it and view the database to A/B compare the before/after state.

The Ring stack is supplied for you in the server code and most responses are simple EDN with the HTTP details taken care
of for you; however, there are times when you need to modify something about the low-level response itself (such
as adding a cookie).

If you’re using the Fulcro server tools (handle API request or the easy server) then you can add a fully general
response transform to your EDN response as follows:

For example, if you were using Ring and Ring Session, you could cause a session cookie to be generated, and user
information to be stored in a server session store simply by returning this from a query on user:

Mutations are already plain data, so Fulcro can pass them over the network as-is when the client invokes them. All you
need to do to indicate that a given mutation should affect a given remote is add that remote to the mutation:
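A minimal sketch of such a full-stack mutation, assuming the default remote named :remote (the mutation name and keys are invented):

```clojure
(defmutation delete-person
  "Optimistically remove a person, and replicate the mutation to the server."
  [{:keys [id]}]
  (action [{:keys [state]}]
    ;; optimistic update: remove the entity from the client database
    (swap! state update :person/by-id dissoc id))
  ;; Returning true sends (delete-person {:id ...}) to the remote as-is:
  (remote [env] true))
```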

Now you can see why choosing the right real mutations and amount of composition on the client
can give you optimal server interaction: Anything that you run in transact! itself can stand
alone as a remote mutation call in the transaction on the wire.

The action portion of a mutation is run immediately on the client. When there is also a server interaction
then the client-side operation is known as an optimistic update because by default we assume that the server
will succeed in replicating the action. This gives the user immediate feedback and the ability to proceed quickly
even in the presence of a slow network. We’ll discuss more on error handling shortly.

A multimethod in the Fulcro client (which is manipulated with defmutation) can indicate that a given mutation
should be sent to any number of remotes. The default remote is named :remote, but you can define new ones or even
rename the default one.

Basically, you use the name of the remote as an indicator of which remote you want to replicate the mutation to. From
there you either return true (which means send the mutation as-is), or you may return an expression AST that represents
the mutation you’d like to send instead. The fulcro.client.primitives namespace includes ast→query and query→ast
for arbitrary conversion between queries and ASTs, but the original AST of the mutation is also available in the mutation’s environment.

Therefore, you can alter a mutation simply by altering and returning the ast given in env:

(defmutation do-thing [params]
  (action [env] ...)
  ; send (do-thing {:x 1}) even if params are different than that on the client
  (remote [{:keys [ast]}] (assoc ast :params {:x 1}))) ; change the param list for the remote

; or, using the with-params helper:
(defmutation do-thing [params]
  (action [env] ...)
  ; send (do-thing {:x 1}) even if params are different than that on the client
  (remote [{:keys [ast]}] (m/with-params ast {:x 1}))) ; change the param list for the remote

The state is available in remote, but the action will run first. This means that you should not
expect the "old" values in state when computing anything for the remote because the optimistic
update of the action will have already been applied! If you need to rely on data as it existed at the time of
transact! then you must pass it as a parameter to the mutation so that the original data is closed over for the
duration of the mutation processing.

Server-side mutations in Fulcro are written the same way as on the client: A mutation returns a map with a key :action
and a function of no variables as a value. The mutation then does whatever server-side operation is indicated. The env
parameter on the server can contain anything you like (for example database access interfaces). You’ll see how
to configure that when you study how to build a server.

The recommended approach to writing a server mutation is to use the pre-written server-side parser and multimethods, which allow you
to mimic the same code structure of the client (there is a defmutation in fulcro.server for this).

If you’re using this approach (which is the default in the easy server), then here are the client-side and server-side
implementations of the same mutation:

It is recommended that you use the same namespace on the client and server for mutations so it is easy to find them,
but the macro allows you to namespace the symbol if you choose to use a different namespace on the server:
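A sketch of that namespacing option (app.api/delete-person is an invented name):

```clojure
;; On the server, in whatever namespace you like:
(server/defmutation app.api/delete-person
  "Handle the client's app.api/delete-person mutation."
  [{:keys [id]}]
  (action [env]
    ;; do the real persistence work here
    nil))
```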

The defmutation macro on the server simply hits a multimethod. You can use defmethod on fulcro.server/server-mutate to define your mutations instead. The advantage of the latter is that it lets you
write your own wrappers, macros, or code around the low-level implementation.

In general, we recommend using defmutation because it is better supported by IDEs (for navigation, docstrings, etc)
and eliminates some classes of syntactic error.

Fulcro has a built-in function prim/tempid that will generate a unique temporary ID. This allows the normalization
and denormalization of the client side database to continue working while the server processes the new data and returns
the permanent identifier(s).

The idea is that these temporary IDs can be safely placed in your client database (and network queues), and will be
automatically rewritten to their real ID when the server has managed to create the real persistent entity. Of course, since
you have optimistic updates on the client it is important that things go in the correct sequence, and that queued operations
for the server don’t get confused about what ID is correct!

Warning

Because mutation code can be called multiple times (at least once + once per each remote),
you should take care to not call fulcro.client.primitives/tempid inside your mutation.
Instead call it from your UI code that builds the mutation params.

Fulcro’s implementation works as follows:

Mutations always run in the order specified in the call to transact!

Transmission of separate calls to transact! run in the order they were called.

If remote mutations are separated in time, then they go through a sequential networking queue, and are processed in order.

As mutations complete on the server, they return tempid remappings. Those are applied to the application state and network
queue before the next network operation (load or mutation) is sent.

This set of rules helps ensure that you can reason about your program, even in the presence of optimistic updates that
could theoretically be somewhat ahead of the server.

For example, you could create an item, edit it, then delete it. The UI responds immediately, but the initial create might
still be running on the server. This means the server has not even given it a real ID before you’re queuing up a request
to delete it! With the above rules, it will just work! The network queue will have two backlogged operations (the edit
and the delete), each with the same tempid that you currently know. When the create finally returns
it will automatically rewrite all of the tempids in state and the network queues, then send the next operation. Thus,
the edit will apply to the current server entity, as will the delete.

All the server code has to do is return a map with the special key :fulcro.client.primitives/tempids
(or the legacy :tempids) whose value is a map of tempid→realid whenever it sees an ID during persistence operations.
Here are the client-side and server-side implementations of the same mutation that create a new item:
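A sketch of the pattern (create-person! is an invented server-side persistence helper):

```clojure
;; Client: generate the tempid in the UI so it is closed over in the params
(prim/transact! this `[(add-person {:tempid ~(prim/tempid) :name "Sam"})])

(defmutation add-person [{:keys [tempid name]}]
  (action [{:keys [state]}]
    (swap! state assoc-in [:person/by-id tempid]
      {:db/id tempid :person/name name}))
  (remote [env] true))

;; Server: return the tempid remapping from the mutation's action
(server/defmutation add-person [{:keys [tempid name]}]
  (action [env]
    (let [real-id (create-person! name)]  ; invented persistence helper
      {:fulcro.client.primitives/tempids {tempid real-id}})))
```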

There are scenarios where the above behavior is not what you want. In particular are cases like form submission where
you might want to wait until the server completes, so that the user can be kept in the form until you’ve confirmed
the server isn’t down or something.

Fulcro 2.0+ has support for pessimistic transactions that enable exactly this sort of behavior:

(prim/ptransact! this `[(a) (b)])

Will run a’s action, a’s remote, then b’s action and b’s remote.
This can be combined with analysis of mutation return values to allow you to follow a remote operation with a UI one:

Use caution when using mutations with conditional remote behavior.
ptransact! detects which mutations are remote by pre-running them (they are side-effect free)
against the app state as it is at the beginning of the transaction. If you have a mutation in the middle
that relies on the state modifications of a prior mutation in the same transaction in order to decide if
it is remote then it will be mis-detected.

There is technically nothing wrong with issuing a load that has side-effects on the server (though one could argue that
this is a bit sketchy from a design perspective). For example, one way to
implement login is to issue a load with the user’s credentials:

(df/load :current-user User {:params {:username u :password p}})

The server query response can validate the credentials, set a cookie, and return the user info all at once! Your UI can
simply base rendering on the values in :current-user. If they’re valid, you’re logged in.

If you remember from the General Operations section, you can modify the low-level Ring response by associating a lambda
with your return value. If you were using Ring Session, then this might be how the query would be implemented on the server:
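A sketch of such a server query, assuming fulcro.server/augment-response and an invented authenticate function:

```clojure
(server/defquery-root :current-user
  (value [env {:keys [username password]}]
    (if-let [user (authenticate username password)]  ; invented credential check
      ;; Attach a transform that runs against the low-level Ring response,
      ;; causing Ring Session to establish a session cookie:
      (server/augment-response user
        (fn [ring-response]
          (assoc-in ring-response [:session :uid] (:db/id user))))
      {})))
```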

then the transaction will run as-if it were executed in the context of any live component on the screen that currently has
that ident. This will make the ident available in the mutation’s environment as :ref, and will focus refresh at that
component sub-tree(s). This can be useful when you have out-of-band data that causes you to want to run a
transaction outside of the UI using the reconciler.

Mutations generally need not expose their full-stack nature to the UI. For
example a next-page mutation might trigger a load for the next page of
data or simply swap in some already cached data. The UI need not be
aware of the logic of this distinction (though typically the UI will
want to include loading markers, so it is common for there to be some
kind of knowledge about lazy loading).

Instead of coding complex "do I need to load that?" logic in the UI
(where it most certainly does not belong) one should instead write
mutations that abstract it into a nice concept.

Fulcro handles loads by placing load markers into a special place in the
application database. Whenever a remote operation is triggered, the
networking layer will check this queue and process it.

The fulcro.client.data-fetch/load function simply runs a transact!
that does both (adds the load to the queue and triggers remote processing).

If you’d like to compose one or more loads into a mutation, there are helper
functions that will help you do just that: df/load-action and df/remote-load.

The remote-load call need only be done for any one of the remotes you’re talking to. It merely tells the
back-end code to process remote requests (which will hit all remote queues). Thus, the parameters to the load-action
calls are where you actually specify which remote a given load should actually talk to.
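A sketch of a mutation that abstracts a conditional load using these helpers (the keywords and paths are invented for illustration):

```clojure
(defmutation ensure-friends-loaded [_]
  (action [{:keys [state] :as env}]
    ;; Only queue the load if the data isn't already present:
    (when (empty? (get-in @state [:screens/by-type :friends :friends/list]))
      (df/load-action env :friends Person
        {:target [:screens/by-type :friends :friends/list]})))
  ;; Tell the back-end code to process the remote queues:
  (remote [env] (df/remote-load env)))
```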

Rendering is completely up to you, but Fulcro handles the networking (technically this is pluggable, but Fulcro still initiates
the interactions). That means that you’re going to need some help when it comes to showing the user that something is happening on the network.

The first and easiest thing to use is a global activity marker that is automatically maintained at the root node of
your client database.

Fulcro will automatically maintain a global network activity marker at the top level of the app state under the
keyword :ui/loading-data. This key will have a true value when there are network requests awaiting a response from
the server, and will have a false value when there are no network requests in process.

You can access this marker from any component that composes to root by including a link in your component’s query:
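From a nested component, a link query pulls the top-level marker (a sketch; the Header component is invented):

```clojure
(defsc Header [this {:keys [ui/loading-data]}]
  {:query [[:ui/loading-data '_]]}   ; link query: read from the root of app state
  (dom/div nil
    (when loading-data
      (dom/span nil "Working..."))))
```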

Because the global loading marker is at the top level of the application state, do not use the keyword as a follow-on
read to mutations because it may unnecessarily trigger a re-render of the entire application.

Mutations can be passed off silently to the server. You may choose to block the UI if you have reason to believe there
will be a problem, but there is usually no other reason to prevent the user from just continuing to use your application
while the server processes the mutation. Thus, only the global activity marker is available for mutations. See
Pessimistic Transactions for a method of controlling UI around the network activity of remote mutations.

Loads are a different story. It is very often the case that you might have a number of loads running to populate different
parts of your UI all at once. In this case it is quite useful to have some kind of load-specific marker that you
can use to show that activity.

In Fulcro 1.0+ this can be done as follows:

The target of each load is replaced by a load marker until the load completes

You can detect these load markers and show an alternate UI while they are loading.

The component that is to be loaded must include :ui/fetch-state in its query (this is the key under which the marker is placed)

The data-fetch namespace has utility functions for detecting the state of the marker, though usually just its presence is enough.

By default Fulcro places markers where the items will appear that are being loaded. These markers can be used to show progress indicators in
the UI.

In the demo below the first button triggers a load of a child’s data from the server. Use the server latency controls to slow things down so you
can see the markers. Once the child is loaded, a button appears indicating items can be loaded into that child.

Once the items are loaded, each has a refresh button. Again, use the server delay so you can watch the
markers.

This is still supported (and is still currently the default); however, it is deprecated because it was found to be less than ideal:

It caused the old data to disappear. There was no place else to put the targeted load marker except over the old data.
This caused flicker and workarounds (such as mis-targeting the data and using post-mutations to put it in place at the end).

The load markers are rather large. Looking at your component’s app state during a load is kind of ugly.

The load markers could not be queried from elsewhere, meaning activity indicators had to be local to the loaded data.

Worst: you have to add :ui/fetch-state to the query of the component representing the thing being loaded, or the load marker
isn’t available.

The steps are rather simple: Include the :marker parameter with load, and issue a query for the load marker on
the marker table. The table name for markers is stored in the data-fetch namespace in the var df/marker-table.

The most confusing part of normalized load markers is that the IDs are keywords, but you may need a marker for a specific
entity. Say you have a list of people, and you’d like to show an activity marker on the one you’re refreshing. You
have many on the screen, so you can’t just use a simple keyword as the marker ID or they might all show a loading
indicator when only one is updating.

In this case you will need to generate a marker ID based on the entity ID, and then use a link query to pull the
entire load marker table to see what is loading.

For example, you might define the marker IDs as (keyword "person-load-marker" (str person-id)). Your person
component could then find its load marker like this:
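A sketch (the table and ident names are illustrative; df is fulcro.client.data-fetch):

(defsc Person [this {:keys [db/id person/name] :as props}]
  {:query [:db/id :person/name [df/marker-table '_]] ; link query pulls the whole marker table
   :ident [:person/by-id :db/id]}
  (let [marker-id (keyword "person-load-marker" (str id))
        loading?  (get-in props [df/marker-table marker-id])]
    (dom/div nil name (when loading? " (refreshing...)"))))

;; the refresh would then pass the matching marker id:
(df/load this [:person/by-id id] Person
  {:marker (keyword "person-load-marker" (str id))})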

The server mutation is always allowed to return a value. Normally the only value that makes sense is the temporary ID
remapping as previously discussed in the main section of full-stack mutations. It is automatically processed by the client
and causes the tempid to be rewritten everywhere in your state and network backlog:

In some cases you’d like to return other details. However, remember that
any data merge needs a tree of data and a query. With a mutation there is no query!
As such, return values from mutations are ignored by default because there is no way to understand how to
merge the result into your database. Remember we’re trying to eliminate the vast majority of callback hell and keep
asynchrony out of the UI. The processing pipeline is always: update the database state, re-render the UI.

If you want to make use of the returned values from the server then you need to add something to remedy the
lack of a query.

The solution might be obvious to you: include the query with the mutation! This is called a mutation join.
The explicit syntax for a mutation join looks like this:

[{(f) [:x]}]

but you never write them this way because a manual query doesn’t have ident information and cannot aid normalization. Instead,
you write them just like you do when grabbing queries for anything else:

[{(f) ~(prim/get-query Item)}]

Running a mutation with this notation allows you to return a value from the server’s mutation that exactly matches the graph
of the item, and it will be automatically normalized and merged into your database. So, if the Item query ended up being
[:db/id :item/value] then the server mutation could just return a simple map like so:
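For example, a server-side sketch (assuming fulcro.server is aliased as server; the mutation name and values are illustrative):

(server/defmutation save-item [params]
  (action [env]
    ;; a tree of data matching (prim/get-query Item); the client
    ;; normalizes and merges it because of the mutation join
    {:db/id 42 :item/value "value computed on the server"}))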

At the time of this writing the query must come from a UI component that has an ident. Thus, mutation joins
essentially normalize things into a specific table in your database (determined by the ID(s) of the return
value and the ident on the query’s component). Newer versions may relax this restriction.

Writing transact! using mutation joins is a bit visually noisy. It turns out there is a better way.
If you remember: the remote section of client mutations can return a boolean or an AST. Fulcro comes with helper functions that can
rewrite the AST of the mutation to modify the parameters or convert it to a mutation join! This can simplify how the
mutations look in the UI.

Here’s the difference. With the manual syntactic technique we just described your UI and client mutation would look something like this:

If you use the AST with mutation joins, then Fulcro gives you an additional bonus:
A helper you can use with your mutation to indicate that the given mutation return value
should be further integrated into your app state. By default, you’re just returning an entity. The data gets
normalized, but there is no further linkage into your app state.

You will sometimes want to pepper idents around your app state as a result of the return. You can add this
kind of targeting through the AST in the remote (not available at the UI layer):
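A sketch of such a remote (m is fulcro.client.mutations; the target path is illustrative):

(defmutation save-item [params]
  (remote [{:keys [ast state]}]
    (-> ast
        (m/returning state Item)           ; convert to a mutation join on Item's query
        (m/with-target [:current-item])))) ; also place the resulting ident at [:current-item]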

There is also a sledge-hammer approach to return values: plug into Fulcro’s merge routines. This is an advanced technique
and is not recommended for most applications.

Fulcro gives a hook for a mutation-merge function that you can install when you’re creating the application. If you
use a multi-method, then it will make it easier to co-locate your return value logic near the client-local mutation itself:

Note that the API is a bit different between the two: mutations get the app state atom in an environment, and you
swap! on that atom to change state. The return merge function is called inside an already-running swap! during the state merge
of the networking layer. So, it is a function that takes the application state as a map and must
return a new state as a map.

This technique is fully general in terms of handling arbitrary return values, but is limited in that your only recourse
is to merge the data into your app state. Of course, since your rendering is a pure function of app state this means you
can, at the very least, visualize the result.

This works, but is not the recommended approach because it is very easy to make mistakes that affect your entire
application.

Note

Mutation merge happens after server return values have been merged; however, it does happen before tempid remapping.
Just work with the tempids, and they will be rewritten once your merge is complete.

In the example below the displayed volume is coming from the server’s mutation return value.
Use the server latency to convince yourself of this. Notice if you click too rapidly then the value doesn’t increase
any faster than the server can respond (since it computes the new volume based on what the client sends).

The merge function is most easily dealt with as a multimethod so you can dispatch on the mutation symbol:

(defmulti merge-return-value (fn [state sym return-value] sym))

We’re going to return a map with :new-volume in it from the server, so our merge can look like this:
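A sketch of such a method (the mutation symbol and app-state path are illustrative):

(defmethod merge-return-value 'app/change-volume
  [state _ {:keys [new-volume]}]
  ;; state is the app-state map (not an atom); return the new map
  (assoc-in state [:child :ui :volume] new-volume))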

It is often the case that a load results from user interaction with the UI. But it is also the case that the
load isn’t everything you want to do, or that you’d like to hide the load logic or base it on current state that
the triggering component does not know.

In reality load and load-field call prim/transact! under the hood, targeting Fulcro’s built-in fulcro/load
mutation, which is responsible for sending your request to the server.

There are similar functions: load-action and load-field-action
that do not call prim/transact!, but instead just push a load request into the load queue; these can be used
inside of one of your own client-side mutations.

Let’s look at an example of a standard load. Say you want to load a list of people from the server:
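A minimal sketch (the root keyword, Person component, and target path are illustrative):

;; e.g. in a button's :onClick handler
(df/load this :all-people Person
  {:target [:person-list/by-id :singleton :people]})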

Since we are in the UI and not inside of a mutation’s action thunk, we can use load-field to initiate the
call to prim/transact!.

The action-suffixed load functions are useful when performing an action in the user interface that must both modify
the client-side database and load data from the server.

Note

You must use the result of the
fulcro.client.data-fetch/remote-load function as the value of the remote in the mutation. The
action calls of load-action place the request on a queue. The remote-load returns the correct indicator to
Fulcro so that it knows you queued a load. If you forget it, then your load won’t be processed until the next
operation causes remote interactions.

This snippet defines a mutation that modifies the app state to display the view passed in via the mutation parameters
and loads the data for that view. A few important points:

If an action thunk calls one or more action-suffixed load functions (which do nothing but queue the load
request) then it MUST also call remote-load on the remote side.

The remote-load function changes the mutation’s dispatch key to fulcro/load, which in turn signals to
the networking layer that one or more loads are ready. IMPORTANT: Remote loading cannot be mixed with a mutation
that also needs to be sent remotely. I.e. one could not send change-view to the server in this example.

If you find yourself wanting to put a call to any load-* in a React Lifecycle method, try reworking
the code to use your own mutations (which can check if a load is really needed) and then use the action-suffixed
loads instead. Lifecycle methods are often misunderstood, leading to incorrect behaviors like triggering loads
over and over again.
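The pattern described above can be sketched as (the component and keywords are illustrative):

(defmutation change-view [{:keys [view]}]
  (action [{:keys [state] :as env}]
    (df/load-action env :view/data ViewData) ; queue the load; no transact! here
    (swap! state assoc :current-view view))
  (remote [env] (df/remote-load env)))       ; REQUIRED: signals that loads were queued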

Fulcro has a built-in mutation fulcro/load (also aliased as fulcro.client.data-fetch/load).

The mutation can be used from application startup or anywhere you’d run a mutation (transact!). This covers almost
all of the possible remote data integration needs!

The helper functions described above simply trigger this built-in Fulcro mutation
(the *-action variants do so by modifying the remote mutation AST via the remote-load helper function).

You are allowed to use this mutation directly in a call to transact!, but you should never need to.

The arguments to this mutation include most of the options that load can take, but you do
need to specify query. For most direct use-cases you’ll probably skip using the load-field specific parameters
described in the docstring (:field and :ident). You can read the source of load-field if you’d like to simulate
it by hand.

It is very common for your UI query to have a lot more in it than you want to load at any given time. In some cases,
even a specific entity asks for more than you’d like to load. A good example of this is a component that allows comments.
Perhaps you’d like the initial load of the component to not include the comments at all, then later load the comments
when the user, for example, opens (or scrolls to) that part of the UI.

Fulcro makes this quite easy. There are three basic steps:

Put the full query on the UI

When you use that UI query with load, prune out the parts you don’t want (using :without).

Later, load the pruned parts on demand (e.g. with load-field).

The :without parameter can be used to elide portions of the query (it works recursively). The query sent to the
server will not ask for :blog/comments. Of course, your server has to parse and honor the exact details
of the query for this to work (if the server decides it’s going to return the comments, you get them…​but this is why
we disliked REST, right?)
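The client-side load might be sketched as (assuming a Blog component whose query joins :blog/comments):

(df/load this :server/blog Blog
  {:params  {:id 1}
   :without #{:blog/comments}})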

(server/defquery-root :server/blog
  (value [{:keys [query]} {:keys [id]}]
    ;; query will be the query of Blog, without :blog/comments; use a parser on
    ;; query to get the proper blog result. See Server Interactions - Query Parsing
    (get-blog id query)))

Later, say when the user scrolls to the bottom of the screen or clicks on "show comments", we can load the rest
of this previously partially-loaded graph from within the Blog itself using load-field, which does the opposite
of :without on the query:

The load-field function prunes everything from the query except for the branch
joined through the given key. It also generates an entity rooted query based on the calling component’s ident:

[{[:table ID] subquery}]

where the [:table ID] are the ident of the invoking component, and subquery is (prim/get-query invoking-component), but
focused down to the one field. In the example above, this would end up something like this:
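As a sketch (assuming the Blog in question has ident [:blog/by-id 1] and a Comment component):

;; inside the Blog component, e.g. in a "show comments" click handler:
(df/load-field this :blog/comments)

;; which generates an entity-rooted query roughly like:
[{[:blog/by-id 1] [{:blog/comments (prim/get-query Comment)}]}]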

Another way to load part of a subgraph is to use the :focus option on load. :focus
allows you to specify a subquery to be loaded from the component query. To keep it simple, here
is how we can write the previous example using :focus:
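A sketch (the ident is illustrative, and the exact shape accepted by :focus should be checked against the load docstring):

(df/load this [:blog/by-id 1] Blog
  {:focus [:blog/comments]})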

A special case worth mentioning about focused sub-queries is how they handle unions. Outside of
unions, :focus will only use the attributes mentioned in the sub-query; but with
unions, any union branch you don’t express is pulled as-is. For example, let’s
say you have this query:

The first thing I want to challenge you to think about is this: why do errors happen, and what can we do about them?

In the early days of web apps, our UI was completely dumb: the server did all of the logic. The answer to these questions
was clear, because it wasn’t even a distributed app: it was a remote display of an app running on a remote machine.
In other words, the context of the error handling was available at the same time as our request to do the operation.

So we often block the UI so that the user cannot get ahead of things (like
submit a form and move on before the server has confirmed the submission).
Over the years we’ve gotten a little more clever with our error handling, but largely our users (and our ability
to reason about our programs) have kept us firmly rooted to the block-until-we-know method of error handling because
it is actually less like a distributed system. Unfortunately, such UI interactions are doomed to feel sluggish
in congested or bandwidth-limited environments.

More and more code is moving to the client machine. In the world of single-page apps we want
things to "make sense" and we also want them to be snappy. Unfortunately, we still also have security concerns
at the server, so we get confused by the following fact: the server has to be able to
validate a request for security reasons. There is no getting around this. You cannot trust a client.

However, I think many of us take this too far: security concerns are often a lot easier to enforce than the full
client-level interaction with these concerns. For example, we can say on a server that a field must be a number.
This is one line of code that can be done with an assertion.

The UI logic for this is much larger: we have to tell the user what we expected, why we expected it, constrain the UI
to keep them from typing letters, etc. In other words, almost all of the real logic is already on the client, and unless
there is a bug, our UI won’t cause a server error because it is pre-checking everything before sending it out.

So, in a modern UI, here are the scenarios for errors from the server:

You have a bug. Is there anything you can really do? No, because it is a bug. If you could predict it going wrong, you would
have already fixed it. Testing and user bug reports are your only recourse.

There is a security violation. There is nothing for your UI to do, because your UI didn’t do it! This is an attack.
Throw an exception on the server, and never expect it in the UI. If you get it, it is a bug. See (1).

There is a user perspective outage (LAN/WiFi/phone). These are possibly recoverable. You can block the UI, and allow the
user to continue once networking is re-established.

There is an infrastructure outage. You’re screwed. Things are just down. If you’re lucky, it is networking and your
user is seeing it as (3) and is just blocked. If you’re not lucky, your database crashed and you have no idea
if your data is even consistent.

So, I would assert that the only full-stack error handling worth doing in any detail is for case (3). If communications
are down, the client can retry. But in a distributed system this can be a little nuanced. Did that mutation partially complete?

If your application can assume reasonably reliable networking and you write your server operations to be atomic then
your error handling can be a relatively small amount of code. Unrecoverable problems will be rare and at worst
you throw up a dialog that says you’ve had an error and the user hits reload on their browser.
If this happens to users once or twice a year, it isn’t going to hurt you.

But of course there is more to the story, and the devil is in the details.

The general philosophy of a Fulcro application is that optimistic updates are not even triggered on the client
unless they expect to succeed on the server. In other words, you write the application in such a way that operations cannot be triggered
from the UI unless you’re certain that a functioning server will execute them. A server should not throw an exception
and trigger a need for error handling unless there is a real, non-recoverable situation.

If this is true, then a functioning server does need to do sanity checking for security reasons, but in general you
don’t need to give friendly errors when those checks fail: you should assume they are attempted hacks. Other serious
problems are similar: there is usually nothing you can do but throw an exception and let the user contact support.
Exceptions to this rule certainly exist, but they are few and far between.

There are some cases where the server has to be involved in a validation interaction non-optimistically. Login is a great
example of this. However, invalid credentials on login need not be treated as an error! Instead they can be treated as
a response to a question. "Can I log in with these credentials?". Yes or no. This is a query and response, not an
error handling interaction. Thus, something like login can be handled with a query (to get the answer) and post-mutation
(to update the screen with a message or change the UI route).

This philosophy eases the overhead in general application programming. You need not write a bunch of code in the UI
that gives a nice friendly message for every kind of error that can possibly occur (nor does anyone really do that
anywhere anyhow). If an error occurs, you can pretty much assume
it is either a bug or a real outage. In both cases, there isn’t a lot you can do that will work "well" for the user. If
it is a bug, then you really have no chance of predicting what will fix it, otherwise you would have already fixed the
bug. If it’s an outage you might be able to do retries, but in many cases you have no way of knowing what has gone wrong.

So, one approach is to treat most error conditions as a rare problem that needs fairly radical recovery.
One such method is to use a global error handler that is configured during
the setup of your client application (you have to explicitly configure networking). This function could update application
state to show some kind of top-level modal dialog that describes the problem, possibly allows the user to submit
application history (for support viewer) to your servers, and then re-initializes the application in some way.

You can, of course, get pretty creative with the re-initialization. For example, say you write screens so that they will
refresh their persistent data whenever it is older than some amount of time, and write it so all entities have a timestamp.
You could walk the state and "expire" all of the timestamps, and then close the dialog. Your retry
could be set up to check for the expiration, which in turn would trigger loads. If the server is really having
problems then the worst case is that the dialog pops back up telling them there is still a problem.

If your users are likely using your software from a phone on a subway then you have a completely different issue.

Fortunately, Fulcro actually makes handling this case relatively easy as well. Here is what you can do:

Write a custom networking implementation for the client that detects the kind of error, and retries recoverable ones
until they succeed. Possibly with exponential backoff. (If an infinite loop happens, the user will eventually hit reload.)

Make your server mutations idempotent so that a client can safely re-apply a transaction without causing data corruption.

The default fulcro networking does not do retries because it isn’t safe without the idempotent guarantee.

The optimistic updates of Fulcro and the in-order server execution means that "offline" operation is actually quite
tractable. If programmed this way, your error handling becomes isolated almost entirely to the networking layer. Of course,
if the user navigates to a screen that needs server data, they will just have to wait. Writing UI code that possibly has
lifecycle timers to show progress updates will improve the overall feel, but the correctness will be there with a fairly
small number of additions.

However, even with these fancy tricks that make our applications better, there are times when we’d just like to
block until something is complete.

This is only available if you use the built-in remote with the Fulcro client. If you write your own networking then
you can handle errors at the network layer any way you want. To install a network error handler on the default
remote support simply write a function like this:

;; this function is called on *every* network error, regardless of cause
(defn error-handler
  "To be used as network-error-callback"
  [state status-code error]
  (log/warn "Global callback:" error " with status code: " status-code))
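You then install it when creating the client. A sketch assuming the default networking (fc is fulcro.client and net is fulcro.client.network):

(fc/new-fulcro-client
  :networking (net/make-fulcro-network "/api"
                :global-error-callback error-handler))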

The live example below does various things to demonstrate various ways of reacting to errors.
There is a load that fails and uses a fallback to log a message.

The next button tries a mutation that fails (by throwing on the server in a way that propagates the
error back to the client). The final one tries a read that will fail, but does nothing with the error, though
you’ll still see that the global indicator updates.

Fulcro defaults to optimistic updates, which in turn encourages you to write a UI that is very responsive. However,
as soon as you start writing remote mutations you start worrying about the fact that your user submitted some data
but you want to let them go off and do other things (like leave the screen they’re on) before the server has responded.
In effect, we’ve told the user "success", but we know we’re kind of lying to them.

Another way of looking at it is: we’re letting them leave the visual context of the information, but we know that if a
server error happens then we need to inform them about that error. We’d like to be sure they understand the error
by still seeing that context when it arrives.

This is a rather complicated way of saying something like "if their email change didn’t work, then we’d like to
show the error next to the email input box".

There is nothing in Fulcro that prevents you from writing a blocking UI. You just have to remember that the UI is a
pure rendering of application state: meaning that if you want to block the UI, then you need a way to put a "block the ui"
marker in state (that renders in a way that prevents navigation), and remove that marker when the operation is complete.

Fulcro has a number of ways that you can accomplish this, but we’ll cover the simplest and most obvious.

We use prim/ptransact! to submit a transaction, which will run each mutation in pessimistic mode (each element
runs only after the prior element has completed a round-trip to the server).

The first call in the tx will block the UI, and do the remote operation. We’ll also leverage mutation return values
so the server can indicate success to us.

Once the first call finishes, the second call in the tx can choose to unblock the UI, or handle any problem it
sees. The mutation return value is merged (and visible) in app state.

Unlike normal mode, pessimistic transactions expect that you might have to nest another one within a mutation in order to retry a
prior call. This is a supported use, and you will find the reconciler in the mutation’s env parameter to facilitate it as
shown in the example below.

To show how this all works we’ll use an in-browser server emulation and show you a working example.

First, we need something to block our UI (which in the card measures 400x100 px). It is a simple div with some style
that will overlay the main UI and prevent further interactions while also showing some kind of feedback message. The CSS
sucks, but let’s ignore that for now.

We define it, along with some helper functions that can manipulate its state. It does not have an ident, and we
plan to just place it in root at :overlay:

The main UI is just a simple one-field form and submission button. Note, however, that it submits the form
with ptransact!, which will force each call to complete before the next one can start. Thus the second call can
check the result and run whatever logic is needed in response.
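The submission itself might be sketched as (the mutation names are illustrative):

(prim/ptransact! this
  `[(submit-form {:value ~value}) ; remote: blocks via the overlay, round-trips to the server
    (check-submission {})])       ; runs only after the prior call completes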

It just shows the overlay, and goes remote. Notice the remote part is using returning from the mutations namespace
to indicate a merge of the result value of the mutation. For that we’ve defined a singleton component (for its query only):

If you’re running a mutation that is likely to trigger server errors then you can explicitly encode a fallback behavior
with the mutation. Fallbacks are triggered when the mutation on the server throws an error that is
detectable, or when there is a network error.

The requirement for a server mutation to trigger fallbacks is for it to throw an ex-info exception and
include {:type :fulcro.client.primitives/abort} in the data. Otherwise the server-side parser will swallow
the exception and continue with the transaction.

Defining a fallback for a transaction is done by including a special mutation in the transaction that names the
mutation to invoke on error:

(require '[fulcro.client.mutations :refer [mutate defmutation]]
         '[fulcro.client.primitives :as prim])

(defmutation handle-failure [{:keys [error ::prim/ref] :as params}]
  ;; fallback mutations are designed to recover the client-side app state from server failures
  ;; THEY DO NOT CHECK FOR REMOTE. You cannot chain a remote interaction in a fallback.
  (action [{:keys [state]}]
    (swap! state undo-stuff error)))

Assuming that some-mutation is remote, then if the server throws a hard error (e.g. status code not 200)
then the fallback action’s mutation symbol (a dispatch key for mutate) is invoked on the
client with params that include an :error key that includes the details of the server exception (error type, message,
and ex-info’s data). Be sure to only include serializable data in the server exception!

If triggered due to a mutation fallback (not load), then the fallback will also receive the ident of the component
that invoked the original transaction in parameters under the key fulcro.client.primitives/ref.

You can have any number of fallbacks in a tx, and they will run in order if the transaction fails.

Note

It is not recommended that you rely on fallbacks for very much. They are provided for cases where you’d
like to code instance-targeted recovery, but we believe this to be a rarely useful feature.
You’re much better off preventing errors by coding your UI to validate,
authorize, and error check things on the client before sending them to the server. The server should still verify
sanity for security reasons, but optimistic systems like Fulcro put more burden on the client code in order to
provide a better experience under normal operation. See the earlier discussion about error handling.

If you do use fallback then you probably also need to clear the network queue so that additional queued
operations don’t continue to fail.

If the server sends back a failure it may be desirable to clear any pending network requests from the client
network queue. For example, if you’re adding an item to a list and get a server error you might have a mutation waiting
in your network queue that was some kind of modification to that (now failed) item. Continuing the network processing
might just cause more errors.

The FulcroApplication protocol (implemented by your client app) includes the protocol method
clear-pending-remote-requests! which will drain all pending network requests.

(fulcro.client/clear-pending-remote-requests! my-app)

A common recovery strategy from errors could be to clean the network queue and run a mutation that resets your application
to a known state, possibly loading sane state from the server.

The fallback mechanism described for error handling works in ptransact!. Fallbacks are clustered to the remote they follow
up until the next remote mutation (with one exception: fallbacks at the beginning of the entire tx are clustered to the first remote mutation):

In the Getting Started chapter you saw a little on how to build and use Fulcro’s easy server. That server is actually
flexible enough for many production needs, but Fulcro also
comes with code to help you very quickly get a custom server for your
application up and running. In this chapter we’ll give you more detail on these two main approaches to
the server-side of Fulcro:

If you’re integrating with an existing server then you probably just want to know how to get things working without
having to use a Component library, and all of the other stuff that comes along with it.

It turns out that the server API handling is relatively light. Most of the work goes into getting things set up
for easy server restart (e.g. making components stop/start) and getting those components into your parsing environment.

If you have an existing server then you’ve mostly figured out all of that stuff already and just want to plug
a Fulcro API handler into it.

You’re responsible for creating the parser environment. I’d recommend using
the fulcro.server/fulcro-parser because it is already hooked to the server multimethods
like defquery-root and defmutation. Those won’t work unless you use it, but any parser
that can deal with the query/mutation syntax is technically legal.

Here’s a crappy little server with no configuration support, no ability to hot code reload, and no external
integrations at all. But it shows how little you need:

Since you’ll still need to configure your web server, it might be useful to note that the configuration used
by the easy server is a component you can inject into your own server. A number of add-on components for Fulcro
assume a :config keyed component will be in your system, so if you choose to use such components you can
create a config component via fulcro.server/new-config.

It supports pulling in values from the system environment, overriding configs with a JVM option, and more. See
the section in easy server about configuration, and the docstrings on new-config for more details.

The pre-built easy server component for Fulcro uses Stuart Sierra’s Component library. The server has no
global state except for a debugging atom that holds the entire system, and can therefore be easily restarted
with a code refresh to avoid costly restarts of the JVM.

resources/config/defaults.edn: This file should contain a single EDN map that contains
defaults for everything that the application wants to configure.

/abs/path/of/choice: This file can be named what you want (you supply the name when
making the server). It can also contain an empty map, but is meant to be machine-local
overrides of the configuration in the defaults. This file is required. We chose to do this because
it keeps people from starting the app in an unconfigured production environment.

Any components in the server can be injected into the processing pipeline so they are
available when writing your mutations and query processing. Making them available is as simple
as putting their component keyword into the :parser-injections set when building the server:
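A sketch (assuming the easy server aliased as easy; the config path and component names are illustrative):

(easy/make-fulcro-server
  :config-path "/usr/local/etc/app.edn"
  :parser-injections #{:config :user-db} ; these components appear in the parsing env
  :components {:user-db (map->UserDatabase {})})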

The easy server has a hook in front of the API processing (pre-hook), and one at the end of the ring stack
after API processing and just before the not-found handler (post-hook). You can have a component join into
the stack by making it depend on :handler. Here is an example:

Remember that if you serve something like index.html for alternate paths you should use absolute paths for the
resources (e.g. scripts) in that file. If the requested resource wasn’t an HTML file, then you’ll also need to set the
content type on the response with something like (-> (ring.middleware.resource/resource-request …​) (ring.util.response/content-type "text/html")).

The easy server (and client) default to using /api as the URI on which to handle traffic. If you are proxying multiple
Fulcro applications to a single server, you may want to place them under different URI paths (e.g. /app-1/api and /app-2/api).

The :app-name option of the easy server will add such a prefix to the API route. If you do that, then the client
will also need to have manual configuration of networking to ensure that it tries to contact the correct URI for
API calls.

Fulcro fully supports dynamic queries: the ability to change the query of a component at runtime. This feature is fully
serializable (it works with the support viewer and other time-travel features), and is critical for features like code splitting,
where you may need to compose in the query of an as-yet-unloaded component tree of your application.

For dynamic queries to work right they have to be stored in your application database and every aspect of them must
be serializable. Additionally, the UI must be able to look them up at the component level in order to do optimal refresh.
The solution to this is query IDs. A query ID is a simple combination of the component’s fully-qualified class name
and a user-defined qualifier (which defaults to the empty string).

Since this qualifier is needed both in the code that obtains queries (get-query) and in the UI rendering (the factory
that draws that component), it is easiest to locate the qualifier in the UI factory itself. This allows you
to have instances of a class that can have different queries:
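For example (a sketch, assuming an ItemList component already exists), two factories for the same class can carry different qualifiers:

```clojure
;; Two factories of the same class with distinct query IDs. The dynamic
;; query for each instance is stored (and looked up) under its qualifier.
(def ui-items-left  (prim/factory ItemList {:qualifier :left}))
(def ui-items-right (prim/factory ItemList {:qualifier :right}))
```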

In the Loading and Incremental Loading sections we showed you the central entry
points for responding to server queries:
defquery-root and defquery-entity. These are fine for simple examples and for getting into your
processing; however, to be truly data-driven you need to change how the server responds based
on what the client actually asked for (in detail).

So far, we’ve sort of been spewing entire entities back from the server without taking care to prune them down
to the actual query of the client.

Unless you modify the network stack all client communication will be in the form of Query/Mutation expressions,
which of course are recursive in nature. There is no built-in recursive processing, since Fulcro does not know
anything about your server-side storage; however, there is a parsing mechanism that you
can use to build processing to interface with it, and there are a number of libraries that can also help.

A really nice library for building recursive Fulcro query parsers. It has a good model for building parsers that can
bridge everything from REST services to microservice architectures. In general if you need to interpret your UI queries,
this tool can be very useful.

A library that can run Fulcro graph queries against SQL databases. This library lets you define your joins in relation
to the Fulcro join notion. It can walk to-one, to-many, and many-to-many joins in an SQL database in response to a
Fulcro join. This allows it to handle many Fulcro queries as graph queries against your SQL database with just a little
configuration and invocation code.

The expression parser needs two things: A function to dispatch reads to, and a function to dispatch mutations. Since
we’re talking about query parsing we’ll only be talking about the read dispatch function.

The signature of the read function is (read [env dispatch-key params])
where the env contains the state of your application, a reference to your parser (so you can
call it recursively, if you wish), a query root marker, an AST node describing the exact
details of the element’s meaning, and anything else you want to put in there if
you call the parser recursively.

The most important item in the query processing is the received environment (env). On
the server it contains:

Any components you’ve asked to be injected. Perhaps database and config components.

ast: An AST representation of the item being parsed.

query: The subquery (e.g. of a join)

parser: The query expression parser itself (which allows you to do recursive calls). If you’re using the built-in
parser, this will be the same parser that is already hooked into your dispatch mechanism (e.g. defquery-root).

request: The full incoming Ring request, which will contain things like the headers, cookies, session, user agent info, etc.

The return value of your read must be the value for the dispatch-key. The parser assembles these back together and
returns a map containing those keys for all of the items for which you return a non-nil result.

If you understand that, you can probably already write a simple recursive parse of a query. If
you need a bit more hand-holding, then read on.

Note

When doing recursive parsing, you do not have to use the parser from env. Parsers are cheap. If you want to
make one to deal with a particular graph, go for it! The fulcro.client.primitives/parser function can make one.

Now, let’s get a feeling for the parser in general. The example below
runs a parser on an arbitrary query that you supply, records the calls to the read emitter,
and shows the trace of those calls in order.

See the source code comments for a full description of how it works.

Try some queries like these:

[:a :b]

[:a {:b [:c]}] (note that the AST is recursively built, but only the top keys are actually parsed to trigger reads)

In order to play with this on a server, you’ll want to have some kind of state
available. The most trivial thing you can do is just create a global top-level atom
that holds data. This is sufficient for testing, and we’ll assume we’ve done something
like this on our server:
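Something like this sketch:

```clojure
;; A trivial global store, fine for experimentation (not for production):
(defonce server-state
  (atom {:keyword 42
         :other   "Hello"}))
```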

When building your server there must be a read function that can
pull data to fulfill what the parser needs to fill in the result of a query. Fulcro supplies this by default
and gives you the defquery-* macros as helpers to hook into it, but really it is just a multi-method.

For educational purposes, we’re going to walk you through implementing this read function yourself.

The parser understands the grammar, and is written to work as follows:

The parser calls your read with the key that it parsed, along with some other helpful information.

Your read function returns a value for that key (possibly calling the parser recursively if it is a join).

The parser generates the result map by putting that key/value pair into
the result at the correct position (relative to the query).

Note that the parser only processes the query one level deep. Recursion (if you need it)
is controlled by you calling the parser again from within the read function.

The example below is similar to the prior one, but it has a read function that just records what keys it was
triggered for. Give it an arbitrary legal query, and see what happens.

In the example above you should have seen that only the top-level keys trigger reads.

So, the query:

[:kw {:j [:v]}]

would result in a call to your read function on :kw and :j. Two calls. No
automatic recursion. Done. The output value of the parser will be a map (that
parse creates) which contains the keys (from the query, copied over by the
parser) and values (obtained from your read):

{ :kw value-from-read-for-kw :j value-from-read-for-j }

Note that if your read accidentally returns a scalar for :j then you’ve not
done the right thing…​a join like { :j [:k] } expects a result that is a
vector of (zero or more) things or a singleton map that contains key
:k.

{ :kw 21 :j { :k 42 } }
; OR
{ :kw 21 :j [{ :k 42 } {:k 43}] }

Dealing with recursive queries is a natural fit for a recursive algorithm, and it
is perfectly fine to invoke the parser function to descend the query. In fact,
the parser is passed as part of your environment.

So, the read function you write will receive three arguments, as described below:

Your read function should return some value that makes sense for
that spot in the grammar. There are no real restrictions on what that data
value has to be in this case. You are reading a simple property.
There is no further shape implied by the grammar.
It could be a string, number, Entity Object, JS Date, nil, etc.

Due to additional features of the parser, your return value must be wrapped in a
map with the key :value. If you fail to do this, you will get nothing
in the result.

Thus, a very simple read for props (keywords) could be:

(defn read [env key params] { :value 42 })

Below is an example that implements exactly this read and plugs it into a
parser like this:
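A minimal sketch of that hookup:

```clojure
(require '[fulcro.client.primitives :as prim])

;; Every key parses to the same answer:
(defn read [env key params] {:value 42})

(def parser (prim/parser {:read read}))

(parser {} [:a :b :c])
;; => {:a 42, :b 42, :c 42}
```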

So now you have a read function that returns the meaning of life the universe and
everything in a single line of code! But now it is obvious that we need to build
an even bigger machine to understand the question.

If your server state is just a flat set of scalar values with unique keyword
identities, then a better read is similarly trivial:
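Such a read is little more than a map lookup (a sketch, assuming a :state atom was injected into env):

```clojure
(defn read [{:keys [state]} key params]
  {:value (get @state key)})
```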

It just assumes the property will be in the top-level of some injected state atom. Let’s try
that out. The database we’re emulating is shown at the bottom of the example.
Run some queries and see what you get. Some suggestions:

Your state probably has some more structure to it than just a flat
bag of properties. Joins are naturally recursive in syntax, and
those that are accustomed to writing parsers probably already see the solution.

First, let’s clarify what the read function will receive for a join. When
parsing:

But just to prove a point about the separation of database format and
query structure we’ll implement this next example
with a basic recursive parse, but use more flat data (the following is live code):
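A sketch of such a read:

```clojure
;; If the AST node is a join, recursively parse the subquery; otherwise
;; just look the keyword up in the (flat) state map.
(defn read [{:keys [state parser query ast] :as env} key params]
  (if (= :join (:type ast))
    {:value (parser env query)}
    {:value (get @state key)}))
```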

The important bit is the then part of the if. Return a value that is
the recursive parse of the sub-query. Otherwise, we just look up the keyword
in the state (which as you can see is a very flat map).

The first (possibly surprising thing) is that your result includes a nested
object. The parser creates the result, and the recursion naturally nested the
result correctly.

Next you should remember that a join implies there could be one OR many results.
The singleton case is fine (e.g. putting a single map there). If there are
multiple results it should be a vector.

In this case we’re just showing calling the parser recursively. Notice that
it in turn will call your read function again.
In a real application your data will not be this flat so you
will almost certainly not do things in quite this
way.

Let’s put a little better state in our application and write a more realistic parser.

Those of you paying close attention will notice that we have yet to need
recursion. We’ve also done something a bit naive: select-keys assumes
that query contains only keys! What if query followed an ident link to
:married-to:

Now things get interesting, and I’m sure more than one reader will have an
opinion on how to proceed. My aim is to show that the parser can be called
recursively to handle these things, not to find the perfect structure for the
parser in general, so I’m going to do something simple.

The primary trick I’m going to exploit is the fact that env is just a map, and
that we can add stuff to it. When we are in the context of a person, we’ll add
:person to the environment, and pass that to a recursive call to parser.
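A sketch of that idea (get-person and find-spouse are hypothetical lookup helpers for your own state):

```clojure
(defn read [{:keys [state parser query person] :as env} key params]
  (case key
    ;; entering the context of a person: put it in env and recurse
    :person     {:value (parser (assoc env :person (get-person @state)) query)}
    ;; an ident link: resolve it, swap the person in context, recurse
    :married-to {:value (parser (assoc env :person (find-spouse @state person)) query)}
    ;; plain prop: read it from the person currently in context
    {:value (get person key)}))
```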

It can be a little bit of work to build these parsers for the queries (which is why libraries exist so you don’t have to); however,
we hope you can see that it is actually pretty tractable to build them once you understand the basics.

Now we’ll move on to another thing that servers typically need in their queries: parameters!

The Fulcro query grammar accepts parameters on most elements. These are intended
to be combined with dynamic queries, allowing your UI to have some control
over what you want to read from the application state (think filtering, pagination,
sorting, and such).

As you might expect, the parameters on a particular expression in the query
are just passed into your read function as the third argument.
You are responsible for both defining and interpreting them.
They have no rules other than that they are maps.

To read the property :load/start-time with a parameter indicating a particular
time unit you might use a query like:

[(:load/start-time {:units :seconds})]

This will invoke read with:

(your-read env :load/start-time {:units :seconds})

The implication is clear. The code is up to you. Let’s add some quick support for this
in our read so you can try it out.
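A sketch of that support:

```clojure
;; Honor a {:units :seconds} parameter on :load/start-time; the stored
;; value is assumed to be in milliseconds.
(defn read [{:keys [state]} key params]
  (case key
    :load/start-time (let [ms (get @state :load/start-time)]
                       {:value (if (= :seconds (:units params))
                                 (quot ms 1000)
                                 ms)})
    {:value (get @state key)}))
```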

Most of this book has assumed you’ll be using Fulcro’s predefined server parser. Note that you can still do that
and switch out to an alternate (custom) parser at any phase of parsing. You can even install a custom parser in your server
(though then the macros for defining mutations and query handlers won’t work for you).
Parsers can be constructed using the prim/parser function.

Fulcro allows you to augment the built-in query parser for local reads on the client. It uses the exact same techniques
discussed above, and it is similar in that you must be able to start at the root of the query
(even though you may only want to augment something rather deep in the query tree).

Actually, there are two things that such a custom read must be able to do:

Handle the path from root to the point of interest (it can hand off uninteresting side branches to db→tree).

The first is a limitation of how queries are processed. Fulcro normally runs the query through db→tree, which attempts to
fill the entire result. If you supply a :read-local function during client construction, then your :read-local will
get first shot at each element of the query that was submitted. Note that the query can start at an ident.
If your read-local function returns nil for an element, then the normal Fulcro
processing takes place on that sub-query. If your read-local returns a value, then that is all of the processing that
is done for the sub-tree rooted at that key. Thus, custom client parsing always requires you to process sub-trees of the query, not just
individual elements. Of course, you can use db→tree at any time to "finish out" some subquery against real data.

This has the advantage of letting you dynamically fill queries without having to have a concrete representation of the
graph in your database. This can be helpful if you have some rapidly changing data (e.g. updated by a web worker) and
some views of that data that would otherwise be hard to keep up-to-date.

UI Routing is a very important task in Fulcro. It is the primary means by which you keep your application
running quickly. You see, in Fulcro your query is run from root. If your entire application’s query runs on every
render frame things can get slow indeed.

The solution is easy: use union queries with to-one relations to ensure only the portions of your
query that are active on the UI are processed.

Unfortunately, many people find hand-writing union components a little challenging. Fulcro provides a nice
pre-written facility that can write much of the code for you, making UI routing a more conceptual process.

The ident generator for the components and router must all work the same. The router uses the first element of the
ident to pick the screen.

The list of screens to route to in the router are keyed by that first element of the ident.

The components that act as screens should:

Have initial state that works with the ident

Use an ident function that returns the same thing that the router will extract.

The router itself then works like any other Fulcro component. You make a factory for it, join it into your query, and render it
by passing the queried props to it.

Warning

The defrouter macro allows you to specify a vector for the ident argument where both keys are looked up in
the component’s props; however, the defsc macro assumes that a vector shorthand for idents contains a table name and
something to look up. This is the most convenient behavior for each macro, but since the two interpretations do not match,
use this rule: if you’re using defsc to make a screen component that will be used with a router, then you always want
the lambda form, not the vector template, in that defsc.
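For example (a sketch):

```clojure
;; The lambda ident form returns the literal screen ident the router
;; matches on. A [:main :top] *template* would instead mean
;; "table :main, id taken from the :top prop", which is not what we want.
(defsc MainScreen [this props]
  {:query         [:screen]
   :initial-state {:screen :main}
   :ident         (fn [] [:main :top])}
  (dom/div "Main screen"))
```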

Be sure to look at the database view in the example above. Notice that all that has to happen is a change
of a single ident. This has the effect of switching the rendering and choosing the
subquery for the remainder of the visible UI.

It is very easy from here to compose together as many of these as you’d like in order to
build a more complicated UI. For example, the settings could have several subscreens as in this
example:

This allows you to build up a tree of routers that keeps your query minimal, and allows for very nice dynamic
structuring of the application at runtime.

If you have screens that could have different instances (for example, different reports), then each report could
have an ID, and routing would involve selecting the screen’s table, as well as a distinct ID.

The problem, of course, is that managing all of these routers in your application logic becomes
somewhat of a chore. Also, it is common to want to mix UI routing with HTML5 history, where only a single
"route" is spelled out, but you may need to logically "switch" any number of these UI routers to reach
the indicated screen.

For example, one could imagine wanting to go to /settings/colors as a URI for the previous example. That single
URI as a concept is a single route to a screen, but the mutation you’d trigger would be to set-route on
two different routers:

Fulcro includes some routing tree primitives to do the mapping from single conceptual "routes" like /settings/colors
to a set of instructions that you need to send to your UI routers. There is an additional concern as well: route
parameters. It is quite common to want to interpret URIs like /user/436 as a route that populates a given screen
with some data.

Thus, the tree support is based on the concept of a Route Handler and Route Parameters.

Define your routing tree. This is a data structure that gives instructions to one-or-more routers that are necessary to
display the screen with a given handler name. In the above example you need to tell both the top router and report router
to change what they are showing in order to get a :status or :graph onto the screen.

(def routing-tree
  "A map of route handling instructions. A given route has a handler name (e.g. `:main`) which is
   thought of as the target of a routing operation (i.e. interpretation of a URI). It also has a vector
   of `router-instruction`s, which say:
   1. which router should be changed
   2. what component instance that router should point to (by ident)
   The routing tree for the diagram above is therefore:"
  (r/routing-tree
    (r/make-route :main [(r/router-instruction :top-router [:main :top])])
    (r/make-route :login [(r/router-instruction :top-router [:login :top])])
    (r/make-route :new-user [(r/router-instruction :top-router [:new-user :top])])
    (r/make-route :graph [(r/router-instruction :top-router [:report :top])
                          (r/router-instruction :report-router [:graphing-report :param/report-id])])
    (r/make-route :status [(r/router-instruction :top-router [:report :top])
                           (r/router-instruction :report-router [:status-report :param/report-id])])))

Compose the application as normal, placing the routers as shown in the prior section.

Anything you pass in the :route-params map will get automatically plugged into the parameter
placeholders in your routing tree instructions. By default anything that looks like an integer (only digits) will
be coerced to an integer. Anything that contains only letters will map to a keyword.

If the default coercion isn’t sufficient then you can customize it using parameter coercion.

It is a very common task to need to convert incoming strings (e.g. from a URL) to elements of an ident. If you’d like
to use this support in your own code then use (r/set-ident-route-params ident params) which
supports the coercion and replacement:
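For example (a sketch):

```clojure
;; Replace the :param/ placeholder in an ident with a value from the
;; route params (default coercion turns digit-only strings into integers):
(r/set-ident-route-params [:graphing-report :param/report-id]
                          {:report-id "42"})
;; => [:graphing-report 42]  (assuming the default integer coercion)
```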

Your UI will often want to rely on knowing the "current" route of a given router in order to give
user navigation feedback. You cannot embed your router in the query, because that would often
make the query have a circular reference and blow the stack.

The only real bit of information in a router that is useful is the current route.
The current-route function can be used in a mutation or component (by querying for the router
table) to check the route:
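A sketch of using it from a mutation (:top-router is a hypothetical router ID):

```clojure
(require '[fulcro.client.routing :as r]
         '[fulcro.client.mutations :refer [defmutation]])

(defmutation show-status-report [{:keys [report-id]}]
  (action [{:keys [state]}]
    ;; only re-route if we're not already on the report screen
    (when-not (= [:report :top] (r/current-route @state :top-router))
      (swap! state r/update-routing-links
             {:handler :status :route-params {:report-id report-id}}))))
```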

Here we’re assuming that ensure-report-loaded is a mutation that ensures that there is at least placeholder data in
place (or the UI rendering might look a bit odd or otherwise fail from lack of data). It may also do things like trigger background
loads that will fulfill the graph’s needs, something like this:

Additional mutations might do things like garbage collect old data that is not in the view. You may also need to
trigger renders of things like your main screen with follow-on reads (e.g. of a keyword on the root
component of your UI). Of course, combining such things into functions adds a nice touch:

In some cases you will find it most convenient to do your routing within a mutation itself. This will let you
check state, trigger loads, etc. If you trigger loads, then you can also easily defer the routing until the
load completes. Of course, in that case you may want to do something in the state to cause your UI to indicate
the routing is in progress.

There is nothing special about this technique. There are several functions in the routing namespace
that can be used easily within your own mutations:

update-routing-links - For standard union-based defrouter (does not support dynamic code loading routers): Takes
the state map and a route match (map with :handler and :route-params) and returns a new state map with the routes updated.

route-to-impl! - For all kinds of routers (including dynamic): Takes the mutation env and a bidi-style match (a map with :handler and :params).
Works with dynamic routes. Does swaps against app state, but is safe to use within a mutation.

set-route - Changes the current route on a specific defrouter instance. Takes a state map, router ID, and a target ident.
Used if not using routing trees or dynamic routers.

Hooking HTML5 or hash-based routing up to this is relatively simple using, for example, pushy and bidi.

We do not provide direct support for this, since your application will need to make a number of decisions that
really are local to the specific app:

How to map URIs to your leaf screens. If you use bidi then bidi-match will return exactly what you need from
a URI route match (e.g. {:handler :x :route-params {:p v}}).

How to grab the URI bits you need. For example, pushy lets you hook up to HTML5 history events.

Whether a routing decision should be deferred or denied, e.g. navigation should be blocked until a form is saved.

How you want to update the URI on routing. You can define your own additional mutation to do this (e.g. via pushy/set-token!)
and possibly compose it into a new mutation with route-to. The function r/update-routing-links can be used for
such a composition:
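A sketch of such a composition (pushy's history object and the route->url helper are assumptions, not part of Fulcro):

```clojure
(defmutation route-to [{:keys [handler route-params] :as match}]
  (action [{:keys [state]}]
    ;; switch the UI routers to the new conceptual route
    (swap! state r/update-routing-links match)
    ;; reflect the new route in the browser URL (hypothetical helpers)
    (pushy/set-token! history (route->url handler route-params))))
```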

On the surface forms are trivial: you have DOM input fields, users put stuff in them, and you submit that to a server.
For really simple forms you already have sufficient tools and you can simply code them however you see fit.

The next most critical thing you’ll want is some help with managing the meta-state that goes with most form
interactions:

When is the content of a field valid?

When should you show a validation error message? E.g. you should not tell a user that they made a mistake
on a field they have yet to touch.

How do you "reset" the form if the user changes their mind or cancels an edit?

How do you ensure that just the data that has changed is sent to the server?

These more advanced interactions require that you track a few things:

The validation rules

Which fields are "complete" (ready for validation)?

What was the state of the form fields before the user started interacting with it?

How do you transition states (e.g. indicate that the updated form is now the new "real state")?

You will also commonly need a way to deal with the fact that a form may cross several entities in your database,
generating a more global top-level form concern: are all of the entities in this form valid?

Again, you can certainly code all of this by hand, but Fulcro includes two different namespaces of helpers that
can make dealing with these aspects of forms a little easier. The reason there are two is that the older
version was not easy to change without breaking existing code, so new functions were written in a new namespace
as an alternative.

The form state support concentrates just on providing utilities to manage the data, and has validation
that is based on Clojure Spec but is completely pluggable. The full form management support is the older
version that attempts to isolate your components a bit more from the event and state management, but at the expense of some
added complexity.

Both are fully supported, though the form state support is considered the cleaner implementation.

The namespace fulcro.ui.form-state (aliased to fs in this chapter) includes functions and mutations for
working with your entity as a form. This support brings functions for dealing with common state storage and form transitions
with minimal opinion or additional complexity.

Your UI is still built and rendered identically to what you’re already used to. The form state support simply
adds some additional state tracking that can help you manage things like field validation and minimal delta
submissions to the server.

This case occurs when you have either some initial state or a function on the client side that generates a new
entity (i.e. with a tempid) and you want to immediately use it with a form. Forms can be nested into
a group, and the functions automatically support initializing the configuration recursively for a given form set.

Say you have a person with multiple (normalized) phone numbers. You want to make a new person and set them up
with an initial phone number to fill out. The tree for that data might look like this:
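That tree, with form configuration added, might be built like this sketch (the Person class and property names are assumptions):

```clojure
;; prim/tempid generates client-side temporary IDs; add-form-config
;; decorates the denormalized tree with form state for Person and any
;; subforms declared on it (e.g. the phone numbers).
(def new-person
  (fs/add-form-config Person
    {:db/id                (prim/tempid)
     :person/name          ""
     :person/phone-numbers [{:db/id (prim/tempid) :phone/number ""}]}))
```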

add-form-config will look at the form (and all subforms reachable from it), but it will
only add config to the ones that are missing it. This means the current form state is not reset by this call.

The other very common case is this: You’ve loaded something from the server, and you’d like to use it as the basis
for form fields. In this case the data is already normalized in your state database, and you’ll need to work on
it via a mutation.

The add-form-config* function is your helper for that. The common pattern for using it is:
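A sketch of that pattern (the Person class and the :person/by-id table name are assumptions):

```clojure
(defmutation edit-person [{:keys [id]}]
  (action [{:keys [state]}]
    (swap! state
      (fn [s]
        (-> s
          ;; decorate the already-normalized entity with form config:
          (fs/add-form-config* Person [:person/by-id id])
          ;; point the editor UI at it:
          (assoc :root/current-person [:person/by-id id]))))))
```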

add-form-config* will look at the form (and all subforms reachable from it), but it will
only add config to the ones that are missing it. This means the current form state is not reset by this call.

Validation is completely customizable. There is built-in support for working with Clojure Spec as your validation
layer. In order for this to be effective you should be sure to namespace all of your properties in a globally-unique
way, and then simply write normal specs for them. The section on custom validators describes
the other supported mechanism for validation.

The central function for using specs is fs/get-spec-validity, which can be used on an entire form or a single field.
This function returns one of #{:valid, :invalid, :unchecked}.
Initially, a form’s fields are marked as incomplete. When in this state the validity will always be :unchecked.

Some additional helpers are useful for concise UI code:

(invalid-spec? form field) - Field is optional. Returns true if the form (field) is complete and invalid.

(valid-spec? form field) - Field is optional. Returns true if the form (field) is complete and valid.

(checked? form field) - Field is optional. Returns true if the form (field) is complete. This function works no matter
what validator you’re using.

(dirty? form field) - Field is optional. Returns true if the pristine copy of the form (field) doesn’t match the current entity.

Initially the form config will not consider any of the form fields to be complete. The idea of field "completion" is
so that you can prevent validation on a field until you feel it is a good time. No one wants to see error messages about
fields that they have yet to interact with!

However, depending on what you are editing, you may have different ideas about when fields should be considered complete.
For example, if you just loaded a saved entity from a server, then all of the fields are probably complete by definition,
meaning that you need a way to mark all fields (recursively) complete.

When you first add form config to an entity, all fields are "incomplete". You can iteratively mark fields
complete as the user interacts with them, or trigger a completion "event" all at once. There is a support function
and a mutation for this.

The mark-complete* helper is meant to be used from within mutations against the app state database. It requires the
state map (not the atom), the entity’s ident, and the field you want to mark complete. If you omit the field, then
it marks everything (recursively) complete from that form down.

So, in our earlier example of loading a person for editing, we’d augment that mutation like so:
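A sketch of that augmentation (names as before are assumptions):

```clojure
(defmutation edit-person [{:keys [id]}]
  (action [{:keys [state]}]
    (swap! state
      (fn [s]
        (-> s
          (fs/add-form-config* Person [:person/by-id id])
          ;; everything was just loaded, so it is complete by definition:
          (fs/mark-complete* [:person/by-id id]))))))
```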

The mark-complete! mutation can be used for the exact same purpose from the UI. Typically, it is used in things like
:onBlur handlers to indicate that a field is ready for validation. It takes an :entity-ident and :field, but the
:entity-ident is optional if the transaction is invoked on the form component of that field:

You may not wish to use the longer names of properties that are required in order to get stable Clojure Spec support
simply for form validation. In this case you’d still like to use the idea of field completion and validation, but
you’ll want to supply the mechanism whereby validity is determined.

The form traversal code for validation is already in the form state code, and a helper function is provided so
you can leverage it to create your own form validation system. It is quite simple:

Write a function (fn [form field] …​) that returns true if the given field (a keyword) is valid on the given form
(a map of the props for the form that contains that field).

Generate a validator with fs/make-validator

The returned validator works identically to get-spec-validity, but it uses your custom function instead of specs to
determine validity.

For example, you might want to make a simple new user form validation that looks something like this:
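Such a validator might look like this sketch (the :user/ property names and rules are assumptions):

```clojure
(def new-user-validator
  (fs/make-validator
    (fn [form field]
      (case field
        :user/email    (boolean (re-matches #".+@.+\..+" (or (:user/email form) "")))
        :user/password (>= (count (or (:user/password form) "")) 8)
        true))))

;; Works like get-spec-validity:
;;   (new-user-validator form-props)              ; whole form
;;   (new-user-validator form-props :user/email)  ; one field
;; returning :valid, :invalid, or :unchecked.
```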

As before: you won’t see the error message on an invalid entry until your code has marked the field complete. This moves
a decent amount of clutter out of the primary UI code and into the form support itself.

Once your form is valid and the user indicates a desire to save, the next step, of course, is to send that data to the
server. The dirty-fields function should be used from the UI in order to calculate this delta and pass it as a parameter
to a mutation. The mutation can then update the local pristine state of the form config and indicate a remote operation.
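A sketch of that submission pattern:

```clojure
;; In the UI:
;; (prim/transact! this
;;   `[(submit-person {:ident ~(prim/get-ident this)
;;                     :diff  ~(fs/dirty-fields props true)})])

(defmutation submit-person [{:keys [ident diff]}]
  (action [{:keys [state]}]
    ;; commit locally: the current values become the new pristine state
    (swap! state fs/entity->pristine* ident))
  (remote [env] true))
```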

The dirty-fields function returns a map from ident to fields that have changed. If the ident includes a temporary ID,
then all fields for that form will be included. If the ID is not a temp id, then it will only include those fields that
differ from the pristine copy of the original. This will include subform references as to-one or to-many idents (to
indicate the addition or removal of subform instances).

You can ask dirty-fields to either send the explicit new values (only), or a before/after picture of each field. The
latter is particularly useful for easily deriving the addition/removal of references, but is also quite useful if you
would like to do optimistic concurrency (e.g. not apply a change to a server where the old value wasn’t still in the
database).

If you’d like to wait until the server indicates everything is ok, then you can use ptransact! and returning to get back
some submission information, and move the entity→pristine* step to a later mutation:

This example shows the case where a graph of entities (a person and multiple phone numbers) are to be
created in a UI, or are to be loaded from a server. This is a full-stack example, though it doesn’t actually persist
the data (it just prints what the server receives in the Javascript console).

There are two buttons. One will load an existing entity into the editor, and of course submissions will send a minimal
delta. The other button will create a new person, and submissions will send all fields.

The load case, as you can see in the code, is very similar to the prior example, but just includes some extra
code to show you how to put it together with a load interaction.