An Archaeology-Inspired Database

Yoav Rubin

Yoav Rubin is a Senior Software Engineer at Microsoft, and prior to that was a Research Staff Member and a Master Inventor at IBM Research. He now works in the domain of data security in the cloud; in the past his work focused on developing cloud- and web-based development environments. Yoav holds an M.Sc. in Medical Research in the field of Neuroscience and a B.Sc. in Information Systems Engineering. He goes by @yoavrubin on Twitter, and occasionally blogs at http://yoavrubin.blogspot.com.

Introduction

Software development is often viewed as a rigorous process, where the inputs are requirements and the output is the working product. However, software developers are people, with their own perspectives and biases which color the outcome of their work.

In this chapter, we will explore how a change in a common perspective affects the design and implementation of a well-studied type of software: a database.

Database systems are designed to store and query data. This is something that all information workers do; however, the systems themselves were designed by computer scientists. As a result, modern database systems are highly influenced by computer scientists’ definition of what data is, and what can be done with it.

For example, most modern databases implement updates by overwriting old data in-place instead of appending the new data and keeping the old. This mechanism, nicknamed "place-oriented programming" by Rich Hickey, saves storage space but makes it impossible to retrieve the entire history of a particular record. This design decision reflects the computer scientist’s perspective that "history" is less important than the price of its storage.

If you were to instead ask an archaeologist where the old data can be found, the answer would be "hopefully, it's just buried underneath".

(Disclaimer: My understanding of the views of a typical archaeologist is based on visiting a few museums, reading several Wikipedia articles, and watching the entire Indiana Jones series.)

Designing a Database Like an Archaeologist

If we were to ask our friendly archaeologist to design a database, we might expect the requirements to reflect what would be found at an excavation site:

All data is found and catalogued at the site.

Digging deeper will expose the state of things in times past.

Artifacts found at the same layer are from the same period.

Each artifact will consist of state that it accumulated in different periods.

For example, a wall may have Roman symbols on it on one layer, and in a lower layer there may be Greek symbols. Both these observations are recorded as part of the wall's state.

Figure 10.1 - The Excavation Site. Each artifact has a 'symbol' attribute (a blank means that no update was made); solid arrows denote a change in symbol between layers; dotted arrows are arbitrary relationships of interest between artifacts (e.g., from 'E' to 'A').

If we translate the archaeologist's language into terms a database designer would use:

The excavation site is a database.

Each artifact is an entity with a corresponding ID.

Each entity has a set of attributes, which may change over time.

Each attribute has a specific value at a specific time.

This may look very different from the kinds of databases you are used to working with. This design is sometimes referred to as a "functional database", since it uses ideas from the domain of functional programming. The rest of the chapter describes how to implement such a database.

Since we are building a functional database, we will be using a functional programming language called Clojure.

Clojure has several qualities that make it a good implementation language for a functional database, such as out-of-the-box immutability, higher order functions, and metaprogramming facilities. But ultimately, the reason Clojure was chosen was its emphasis on clean, rigorous design, which few programming languages possess.

Laying the Foundation

Let’s start by declaring the core constructs that make up our database.

(defrecord Database [layers top-id curr-time])

A database consists of:

Layers of entities, each with its own unique timestamp (the rings in Figure 10.1).

A top-id value which is the next available unique ID.

The time at which the database was last updated.

(defrecord Layer [storage VAET AVET VEAT EAVT])

Each layer consists of:

A data store for entities.

Indexes that are used to speed up queries to the database. (These indexes and the meaning of their names will be explained later.)

In our design, a single conceptual ‘database’ may consist of many Database instances, each of which represents a snapshot of the database at curr-time. A Layer may share the exact same entity with another Layer if the entity’s state hasn’t changed between the times that they represent.

Entities

Our database wouldn't be of any use without entities to store, so we define those next. As discussed before, an entity has an ID and a list of attributes; we create them using the make-entity function.

Note that if no ID is given, the entity’s ID is set to be :db/no-id-yet, which means that something else is responsible for giving it an ID. We’ll see how that works later.
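A minimal sketch of the entity record and its constructor, following the behavior described above:

(defrecord Entity [id attrs])

(defn make-entity
  ([] (make-entity :db/no-id-yet)) ; no ID given; mark it for later assignment
  ([id] (Entity. id {})))          ; start with an empty attribute map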

Attributes

Each attribute consists of its name, value, and the timestamps of its most recent update as well as the one before that. Each attribute also has two fields that describe its type and cardinality. Attributes are created using the make-attr function.

In the case that an attribute is used to represent a relationship to another entity, its type will be :db/ref and its value will be the ID of the related entity. This simple type system also acts as an extension point. Users are free to define their own types and leverage them to provide additional semantics for their data.

An attribute's cardinality specifies whether the attribute represents a single value or a set of values. We use this field to determine the set of operations that are permitted on this attribute.

There are a couple of interesting patterns used in this constructor function (sketched after this list):

We use Clojure’s Design by Contract pattern to validate that the cardinality parameter is a permissible value.

We use Clojure’s destructuring mechanism to provide a default value of :db/single if one is not given.

We use Clojure’s metadata capabilities to distinguish between an attribute's data (name, value and timestamps) and its metadata (type and cardinality). In Clojure, metadata handling is done using the functions with-meta (to set) and meta (to read).
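A sketch of the attribute record and its constructor, illustrating all three patterns:

(defrecord Attr [name value ts prev-ts])

(defn make-attr
  ([name value type
    & {:keys [cardinality] :or {cardinality :db/single}}]       ; destructuring with a default
   {:pre [(contains? #{:db/single :db/multiple} cardinality)]}  ; design-by-contract check
   (with-meta (Attr. name value -1 -1)                          ; timestamps start at -1: not yet part of a layer
              {:type type :cardinality cardinality})))          ; type and cardinality live in metadata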

Attributes only have meaning if they are part of an entity. We make this connection with the add-attr function, which adds a given attribute to an entity's attribute map (called :attrs).

Note that instead of using the attribute’s name directly, we first convert it into a keyword to adhere to Clojure’s idiomatic usage of maps.
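A sketch of add-attr:

(defn add-attr [ent attr]
  (let [attr-id (keyword (:name attr))]    ; convert the attribute's name into a keyword
    (assoc-in ent [:attrs attr-id] attr)))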

Storage

So far we have talked a lot about what we are going to store, without thinking about where we are going to store it. In this chapter, we resort to the simplest storage mechanism: storing the data in memory. This is certainly not reliable, but it simplifies development and debugging and allows us to focus on more interesting parts of the program.

We will access the storage via a simple protocol, which will make it possible to define additional storage providers for a database owner to select from.
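A sketch of such a protocol, together with the in-memory provider (a record that acts as a map from entity ID to entity):

(defprotocol Storage
  (get-entity [storage e-id])
  (write-entity [storage entity])
  (drop-entity [storage entity]))

(defrecord InMemory [] Storage
  (get-entity [storage e-id] (e-id storage))
  (write-entity [storage entity] (assoc storage (:id entity) entity))
  (drop-entity [storage entity] (dissoc storage (:id entity))))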

Indexing the Data

Now that we've defined the basic elements of our database, we can start thinking about how we're going to query it. By virtue of how we've structured our data, any query is necessarily going to be interested in at least one of three things: an entity's ID, the name of one of its attributes, or an attribute's value. This triplet of (entity-id, attribute-name, attribute-value) is important enough to our query process that we give it an explicit name: a datom.

Datoms are important because they represent facts, and our database accumulates facts.

If you've used a database system before, you are probably already familiar with the concept of an index, which is a supporting data structure that consumes extra space in order to decrease the average query time. In our database, an index is a three-leveled structure which stores the components of a datom in a specific order. Each index derives its name from the order it stores the datom's components in.

For example, the index named EAVT holds Entity IDs at its top level, Attribute names at its second level, and Values at its leaves. The "T" comes from the fact that each layer in the database has its own indexes, hence each index is relevant for a specific Time.

In the AVET index (Figure 10.3), the ordering is reversed: the top level holds Attribute names, the second level holds Values, and the third-level set holds the IDs of the Entities that have that attribute with that value.

Figure 10.3 - AVET

Our indexes are implemented as a map of maps, where the keys of the root map act as the first level, each such key points to a map whose keys act as the index’s second-level and the values are the index’s third level. Each element in the third level is a set, holding the leaves of the index.

Each index stores the components of a datom as some permutation of its canonical 'EAV' ordering (entity-id, attribute-name, attribute-value). However, when we are working with datoms outside of the index, we expect them to be in canonical format. We thus provide each index with functions from-eav and to-eav to convert to and from these orderings.

In most database systems, indexes are an optional component; for example, in an RDBMS (Relational Database Management System) like PostgreSQL or MySQL, you will choose to add indexes only to certain columns in a table. We provide each index with a usage-pred function that determines for an attribute whether it should be included in this index or not.
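A sketch of an index constructor that attaches these three functions as metadata on an empty map, along with accessors for reading them back:

(defn make-index [from-eav to-eav usage-pred]
  (with-meta {} {:from-eav from-eav :to-eav to-eav :usage-pred usage-pred}))

(defn from-eav [index] (:from-eav (meta index)))
(defn to-eav [index] (:to-eav (meta index)))
(defn usage-pred [index] (:usage-pred (meta index)))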

There is one snag, though: all collections in Clojure are immutable. Since write operations are pretty critical in a database, we define our structure to be an Atom, which is a Clojure reference type that provides the capability of atomic writes.
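A sketch of a database constructor along these lines: each index is built with the argument-reordering functions that define it, and the initial Database (one empty layer, top-id 0, curr-time 0) is wrapped in an Atom:

(defn ref? [attr] (= :db/ref (:type (meta attr))))

(defn always [& more] true)

(defn make-db []
  (atom
   (Database. [(Layer.
                (InMemory.)                                                  ; storage
                (make-index #(vector %3 %2 %1) #(vector %3 %2 %1) #(ref? %)) ; VAET
                (make-index #(vector %2 %3 %1) #(vector %3 %1 %2) always)    ; AVET
                (make-index #(vector %3 %1 %2) #(vector %2 %3 %1) always)    ; VEAT
                (make-index #(vector %1 %2 %3) #(vector %1 %2 %3) always))]  ; EAVT
              0 0)))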

You may be wondering why we use the always function for the AVET, VEAT and EAVT indexes, and the ref? predicate for the VAET index. This is because these indexes are used in different scenarios, which we’ll see later when we explore queries in depth.

Basic Accessors

Before we can build complex querying facilities for our database, we need to provide a lower-level API that different parts of the system can use to retrieve the components we've built by their associated identifiers from any point in time. Consumers of the database can also use this API; however, it is more likely that they will be using the more fully-featured components built on top of it.

This lower-level API is composed of the following four accessor functions:
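A sketch of these four accessors; each has an arity that defaults to the database's current time:

(defn entity-at
  ([db ent-id] (entity-at db (:curr-time db) ent-id))
  ([db ts ent-id] (get-entity (get-in db [:layers ts :storage]) ent-id)))

(defn attr-at
  ([db ent-id attr-name] (attr-at db ent-id attr-name (:curr-time db)))
  ([db ent-id attr-name ts] (get-in (entity-at db ts ent-id) [:attrs attr-name])))

(defn value-of-at
  ([db ent-id attr-name] (:value (attr-at db ent-id attr-name)))
  ([db ent-id attr-name ts] (:value (attr-at db ent-id attr-name ts))))

(defn indx-at
  ([db kind] (indx-at db kind (:curr-time db)))
  ([db kind ts] (kind ((:layers db) ts)))) ; kind is one of :VAET, :AVET, :VEAT, :EAVT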

Since we treat our database just like any other value, each of these functions takes a database as an argument. Each element is retrieved by its associated identifier, and optionally the timestamp of interest. This timestamp is used to find the corresponding layer that our lookup should be applied to.

Evolution

A first usage of the basic accessors is to provide a "read-into-the-past" API. This is possible as, in our database, an update operation is done by appending a new layer (as opposed to overwriting). Therefore we can use the prev-ts property to look at the attribute at that layer, and continue looking deeper into history to observe how the attribute’s value evolved throughout time.

The function evolution-of does exactly that. It returns a sequence of pairs, each consisting of the timestamp and value of an attribute’s update.
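A sketch of evolution-of, walking the prev-ts chain until it reaches the initial timestamp of -1:

(defn evolution-of [db ent-id attr-name]
  (loop [res [] ts (:curr-time db)]
    (if (= -1 ts)
      (reverse res) ; oldest update first
      (let [attr (attr-at db ent-id attr-name ts)]
        (recur (conj res {ts (:value attr)}) (:prev-ts attr))))))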

Data Behavior and Life Cycle

So far, our discussion has focused on the structure of our data: what the core components are and how they are aggregated together. It's time to explore the dynamics of our system: how data is changed over time through the add, update, and remove operations of the data lifecycle.

As we've already discussed, data in an archaeologist's world never actually changes. Once it is created, it exists forever and can only be hidden from the world by data in a newer layer. The term "hidden" is crucial here. Older data does not "disappear"—it is buried, and can be revealed again by exposing an older layer. Conversely, updating data means obscuring the old by adding a new layer on top of it with something else. We can thus "delete" data by adding a layer of "nothing" on top of it.

This means that when we talk about data lifecycle, we are really talking about adding layers to our data over time.

The Bare Necessities

The data lifecycle consists of three basic operations:

adding an entity with the add-entity function

removing an entity with the remove-entity function

updating an entity with the update-entity function

Remember that, even though these functions provide the illusion of mutability, all that we are really doing in each case is adding another layer to the data. Also, since we are using Clojure's persistent data structures, from the caller's perspective we pay the same price for these operations as for an "in-place" change (i.e., negligible performance overhead), while maintaining immutability for all other users of the data structure.

Preparing an entity is done by calling the fix-new-entity function and its auxiliary functions next-id, next-ts and update-creation-ts. These latter two helper functions are responsible for finding the next timestamp of the database (done by next-ts), and updating the creation timestamp of the given entity (done by update-creation-ts). Updating the creation timestamp of an entity means going over the attributes of the entity and updating their :ts fields.
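A sketch of these preparation functions:

(defn- next-ts [db] (inc (:curr-time db)))

(defn- update-creation-ts [ent ts-val]
  (reduce #(assoc-in %1 [:attrs %2 :ts] ts-val) ent (keys (:attrs ent))))

(defn- next-id [db ent]
  (let [top-id (:top-id db)
        ent-id (:id ent)
        increased-id (inc top-id)]
    (if (= ent-id :db/no-id-yet)
      [(keyword (str increased-id)) increased-id] ; mint a fresh ID
      [ent-id top-id])))                          ; keep the ID that was given

(defn- fix-new-entity [db ent]
  (let [[ent-id next-top-id] (next-id db ent)
        new-ts (next-ts db)]
    [(update-creation-ts (assoc ent :id ent-id) new-ts) next-top-id]))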

All of these components are added as a new layer to the given database. All that’s left is to update the database’s timestamp and top-id fields. That last step occurs on the last line of add-entity, which also returns the updated database.
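A sketch of add-entity itself; here add-entity-to-index and indexes are assumed helpers that register the entity's attributes in each of the four indexes (subject to each index's usage-pred):

(defn add-entity [db ent]
  (let [[fixed-ent next-top-id] (fix-new-entity db ent)
        new-layer (update-in (last (:layers db)) [:storage] write-entity fixed-ent)
        indexed-layer (reduce (partial add-entity-to-index fixed-ent) ; assumed helper
                              new-layer
                              (indexes))]       ; assumed helper returning the four index names
    (assoc db :layers (conj (:layers db) indexed-layer) :top-id next-top-id)))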

We also provide an add-entities convenience function that adds multiple entities to the database in one call by iteratively applying add-entity.

(defn add-entities [db ents-seq] (reduce add-entity db ents-seq))

Removing an Entity

Removing an entity from our database means adding a layer in which it does not exist. To do this, we need to:

Remove the entity itself

Update any attributes of other entities that reference it

Clear the entity from our indexes

This "construct-without" process is executed by the remove-entity function, which looks very similar to add-entity. Updating the entities that reference the removed one is delegated to a helper function named remove-back-refs:

We begin by using reffing-datoms-to to find all entities that reference ours in the given layer; it returns a sequence of triplets that contain the ID of the referencing entity, as well as the attribute name and the ID of the removed entity.

We then apply update-entity to each triplet to update the attributes that reference our removed entity. (We'll explore how update-entity works in the next section.)

The last step of remove-back-refs is to clear the reference itself from our indexes, and more specifically from the VAET index, since it is the only index that stores reference information.
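A sketch of remove-entity; remove-entity-from-index is an assumed helper that is the mirror image of add-entity-to-index:

(defn remove-entity [db ent-id]
  (let [ent (entity-at db ent-id)
        layer (remove-back-refs db ent-id (last (:layers db))) ; fix referring attributes
        no-ref-layer (update-in layer [:VAET] dissoc ent-id)   ; drop its entry from VAET
        no-ent-layer (assoc no-ref-layer :storage
                            (drop-entity (:storage no-ref-layer) ent))
        new-layer (reduce (partial remove-entity-from-index ent) ; assumed helper
                          no-ent-layer (indexes))]
    (assoc db :layers (conj (:layers db) new-layer))))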

Updating an Entity

At its essence, an update is the modification of an entity’s attribute’s value. The modification process itself depends on the cardinality of the attribute: an attribute with cardinality :db/multiple holds a set of values, so we must allow items to be added to or removed from this set, or the set to be replaced entirely. An attribute with cardinality :db/single holds a single value, and thus only allows replacement.

Since we also have indexes that provide lookups directly on attributes and their values, these will also have to be updated.

As with add-entity and remove-entity, we won't actually be modifying our entity in place, but will instead add a new layer which contains the updated entity.

All that remains is to remove the old value from the indexes and add the new one to them, and then construct the new layer with all of our updated components. Luckily, we can leverage the code we wrote for adding and removing entities to do this.
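A sketch of update-entity; update-attr and update-layer are assumed helpers that produce the updated attribute (honoring its cardinality and the requested operation) and a layer whose storage and indexes reflect that update:

(defn update-entity
  ([db ent-id attr-name new-val]
   (update-entity db ent-id attr-name new-val :db/reset-to)) ; default operation: replace
  ([db ent-id attr-name new-val operation]
   (let [update-ts (next-ts db)
         layer (last (:layers db))
         attr (attr-at db ent-id attr-name)
         updated-attr (update-attr attr new-val update-ts operation) ; assumed helper
         new-layer (update-layer layer ent-id attr updated-attr new-val operation)] ; assumed helper
     (update-in db [:layers] conj new-layer))))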

Transactions

Each of the operations in our low-level API acts on a single entity. However, nearly all databases have a way for users to do multiple operations as a single transaction. This means:

The batch of operations is viewed as a single atomic operation, so all of the operations either succeed together or fail together.

The database is in a valid state before and after the transaction.

The batch update appears to be isolated; other queries should never see a database state in which only some of the operations have been applied.

We can fulfill these requirements through an interface that consumes a database and a set of operations to be performed, and produces a database whose state reflects the given changes. All of the changes submitted in the batch should be applied through the addition of a single layer. However, we have a problem: All of the functions we wrote in our low-level API add a new layer to the database. If we were to perform a batch with n operations, we would thus see n new layers added, when what we would really like is to have exactly one new layer.

The key here is that the layer we want is the top layer that would be produced by performing those updates in sequence. Therefore, the solution is to execute the user’s operations one after another, each creating a new layer. When the last layer is created, we take only that top layer and place it on the initial database (leaving all the intermediate layers to pine for the fjords). Only after we've done all this will we update the database's timestamp.

All this is done in the transact-on-db function, which receives the initial value of the database and the batch of operations to perform, and returns its updated value.
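A sketch of transact-on-db:

(defn transact-on-db [initial-db ops]
  (loop [[op & rst-ops] ops transacted initial-db]
    (if op
      (recur rst-ops (apply (first op) transacted (rest op))) ; run each op on the previous result
      (let [initial-layer (:layers initial-db)
            new-layer (last (:layers transacted))]            ; keep only the final layer
        (assoc initial-db :layers (conj initial-layer new-layer)
                          :curr-time (next-ts initial-db)
                          :top-id (:top-id transacted))))))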

Note here that we used the term value, meaning that only the caller of this function is exposed to the updated state; all other users of the database are unaware of this change (as a database is a value, and therefore cannot change). In order to have a system where users can observe state changes performed by others, users do not interact directly with the database, but rather refer to it using another level of indirection. This additional level is implemented using Clojure's Atom, a reference type. Here we leverage three key features of an Atom:

It references a value.

It is possible to make the Atom reference another value by executing a transaction (using Clojure's Software Transactional Memory capabilities). The transaction accepts an Atom and a function; that function operates on the value of the Atom and returns a new value. After the transaction executes, the Atom references the value that was returned from the function.

Getting to the value that is referenced by the Atom is done by dereferencing it, which returns the state of that Atom at that time.

In between Clojure's Atom and the work done in transact-on-db, there's still a gap to be bridged; namely, to invoke the transaction with the right inputs.

To have the simplest and clearest APIs, we would like users to just provide the Atom and the list of operations, and have the database transform the user input into a proper transaction.

That transformation occurs in the following transaction call chain:

transact → _transact → swap! → transact-on-db

Users call transact with the Atom (i.e., the connection) and the operations to perform; transact relays its input to _transact, adding to it the name of the function that updates the Atom (swap!).

(defmacro transact [db-conn & txs] `(_transact ~db-conn swap! ~@txs))

_transact prepares the call to swap!. It does so by creating a list that begins with swap!, followed by the Atom, then the transact-on-db symbol and the batch of operations.

swap! invokes transact-on-db within a transaction (with the previously prepared arguments), and transact-on-db creates the new state of the database and returns it.

At this point we can see that with a few minor tweaks, we can also provide a way to ask "what if" questions. This can be done by replacing swap! with a function that does not make any change to the system. This scenario is implemented with the what-if call chain:

what-if → _transact → _what-if → transact-on-db

The user calls what-if with the database value and the operations to perform. It then relays these inputs to _transact, adding to them a function called _what-if, which mimics swap!'s API without its effect.

(defmacro what-if [db & ops] `(_transact ~db _what-if ~@ops))

_transact prepares the call to _what-if. It does so by creating a list that begins with _what-if, followed by the database, then the transact-on-db symbol and the batch of operations. _what-if invokes transact-on-db, just like swap! does in the transaction scenario, but does not inflict any change on the system.

(defn- _what-if [db f txs] (f db txs))

Note that we are not using functions, but macros. The reason for using macros here is that arguments to macros do not get evaluated as the call happens; this allows us to offer a cleaner API design where the user provides the operations structured in the same way that any function call is structured in Clojure.

The above process can be seen in the following examples, in which the entity and attribute names are illustrative. For a transaction, the user call
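(transact db-conn (add-entity e1) (update-entity e2 atr2 val2))

expands into

(_transact db-conn swap! (add-entity e1) (update-entity e2 atr2 val2))

which becomes

(swap! db-conn transact-on-db [[add-entity e1] [update-entity e2 atr2 val2]])

For a "what if" scenario, the user call

(what-if my-db (add-entity e3) (remove-entity e4))

expands into

(_transact my-db _what-if (add-entity e3) (remove-entity e4))

which becomes

(_what-if my-db transact-on-db [[add-entity e3] [remove-entity e4]])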

Insight Extraction as Libraries

At this point we have the core functionality of the database in place, and it is time to add its raison d'être: insight extraction. The approach we took here is to add these capabilities as libraries, as different usages of the database would need different such mechanisms.

Graph Traversal

A reference connection between entities is created when an entity's attribute's type is :db/ref, which means that the value of that attribute is an ID of another entity. When a referring entity is added to the database, the reference is indexed in the VAET index. The information found in the VAET index can be leveraged to extract all the incoming links to an entity. This is done in the incoming-refs function, which collects all the leaves that are reachable from the entity in that index:
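A sketch of incoming-refs (if specific ref-names are given, only those attributes are inspected):

(defn incoming-refs [db ts ent-id & ref-names]
  (let [vaet (indx-at db :VAET ts)
        all-attr-map (vaet ent-id)  ; attr-name -> set of referring entity IDs
        filtered-map (if ref-names
                       (select-keys all-attr-map ref-names)
                       all-attr-map)]
    (reduce into #{} (vals filtered-map))))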

We can also go through all of a given entity’s attributes and collect all the values of attributes of type :db/ref, and by that extract all the outgoing references from that entity. This is done by the outgoing-refs function.
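A sketch of outgoing-refs; since a :db/ref attribute holds either a single ID or (with :db/multiple cardinality) a set of IDs, the last step normalizes both cases:

(defn outgoing-refs [db ts ent-id & needed-keys]
  (let [val-filter-fn (if needed-keys
                        #(vals (select-keys % needed-keys))
                        vals)]
    (if-not ent-id
      []
      (->> (entity-at db ts ent-id)
           (:attrs)
           (val-filter-fn)
           (filter ref?)
           (mapcat #(let [v (:value %)] (if (set? v) v [v])))))))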

These two functions act as the basic building blocks for any graph traversal operation, as they are the ones that raise the level of abstraction from entities and attributes to nodes and links in a graph. Once we have the ability to look at our database as a graph, we can provide various graph traversing and querying APIs. We leave this as a solved exercise to the reader; one solution can be found in the chapter's source code (see graph.clj).

Querying the Database

The second library we present provides querying capabilities, which is the main concern of this section. A database is not very useful to its users without a powerful query mechanism. This feature is usually exposed to users through a query language that is used to declaratively specify the set of data of interest.

Our data model is based on accumulation of facts (i.e., datoms) over time. For this model, a natural place to look for the right query language is logic programming. A commonly used query language influenced by logic programming is Datalog which, in addition to being well-suited for our data model, has a very elegant adaptation to Clojure’s syntax. Our query engine will implement a subset of the Datalog language from the Datomic database.

Query Language

Let's look at an example query in our proposed language. This query asks: "What are the names and birthdays of entities who like pizza, speak English, and who have a birthday this month?"
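In our syntax, this query reads as follows, where bday-mo? is assumed to be a user-supplied predicate that checks whether a date falls in the current month:

{:find [?nm ?bd]
 :where [[?e :likes "pizza"]
         [?e :name ?nm]
         [?e :speak "English"]
         [?e :bday (bday-mo? ?bd)]]}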

Syntax

We use the syntax of Clojure’s data literals directly to provide the basic syntax for our queries. This allows us to avoid having to write a specialized parser, while still providing a form that is familiar and easily readable to programmers familiar with Clojure.

A query is a map with two items:

An item with :where as a key, and with a rule as a value. A rule is a vector of clauses, and a clause is a vector composed of three predicates, each of which operates on a different component of a datom. In the example above, [?e :likes "pizza"] is a clause. This :where item defines a rule that acts as a filter on datoms in our database (like a SQL WHERE clause).

An item with :find as a key, and with a vector as a value. The vector defines which components of the selected datom should be projected into the results (like a SQL SELECT clause).

The description above omits a crucial requirement: how to make different clauses sync on a value (i.e., make a join operation between them), and how to structure the found values in the output (specified by the :find part).

We fulfill both of these requirements using variables, which are denoted with a leading ?. The only exception to this definition is the "don't care" variable _ (underscore).

A clause in a query is composed of three predicates; Table 10.2 defines what can act as a predicate in our query language.

| Name | Meaning | Example |
|------|---------|---------|
| Constant | Is the value of the item in the datom equal to the constant? | :likes |
| Variable | Binds the value of the item in the datom to the variable and returns true. | ?e |
| Don't-care | Always returns true. | _ |
| Unary operator | A unary operation that takes a variable as its operand. Binds the datom's item's value to the variable (unless it is an _), replaces the variable with that value, and returns the result of applying the operation. | (bday-mo? _) |
| Binary operator | A binary operation that must have a variable as one of its operands. Binds the datom's item's value to the variable (unless it is an _), replaces the variable with that value, and returns the result of the operation. | (> ?age 20) |

: Table 10.2 - Predicates

Limitations of our Query Language

Engineering is all about managing tradeoffs, and designing our query engine is no different. In our case, the main tradeoff we must address is feature-richness versus complexity. Resolving this tradeoff requires us to look at common use-cases of the system, and from there decide what limitations would be acceptable.

In our database, we decided to build a query engine with the following limitations:

Users cannot define logical operations between the clauses; they are always ‘ANDed’ together. (This can be worked around by using unary or binary predicates.)

If there is more than one clause in a query, there must be one variable that is found in all of the clauses of that query. This variable acts as a joining variable. This limitation simplifies the query optimizer.

A query is only executed on a single database.

While these design decisions result in a query language that is less rich than Datalog, we are still able to support many types of simple but useful queries.

Query Engine Design

While our query language allows the user to specify what they want to access, it hides the details of how this will be accomplished. The query engine is the database component responsible for yielding the data for a given query.

This involves four steps:

Transformation to internal representation: Transform the query from its textual form into a data structure that is consumed by the query planner.

Building a query plan: Determine an efficient plan for yielding the results of the given query. In our case, a query plan is a function to be invoked.

Executing the plan: Execute the plan and send its results to the next phase.

Unification and reporting: Extract only the results that need to be reported and format them as specified.

Phase 1: Transformation

In this phase, we transform the given query from a representation that is easy for the user to understand into a representation that can be consumed efficiently by the query planner.

The :find part of the query is transformed into a set of the given variable names:

(defmacro symbol-col-to-set [coll] (set (map str coll)))

The :where part of the query retains its nested vector structure. However, each of the terms in each of the clauses is replaced with a predicate according to Table 10.2.

We are once again relying on the fact that macros do not eagerly evaluate their arguments. This allows us to define a simpler API where users provide variable names as symbols (e.g., ?name) instead of asking the user to understand the internals of the engine by providing variable names as strings (e.g., "?name"), or even worse, quoting the variable name (e.g., '?name).

At the end of this phase, our example yields the following set for the :find part:

#{"?nm" "?bd"}

and the following structure in Table 10.3 for the :where part. (Each cell in the Predicate Clause column holds the metadata found in its neighbor at the Meta Clause column.)

| Query Clause | Predicate Clause | Meta Clause |
|--------------|------------------|-------------|
| [?e :likes "pizza"] | [#(= % %) #(= % :likes) #(= % "pizza")] | ["?e" nil nil] |
| [?e :name ?nm] | [#(= % %) #(= % :name) #(= % %)] | ["?e" nil "?nm"] |
| [?e :speak "English"] | [#(= % %) #(= % :speak) #(= % "English")] | ["?e" nil nil] |
| [?e :bday (bday-mo? ?bd)] | [#(= % %) #(= % :bday) #(bday-mo? %)] | ["?e" nil "?bd"] |

: Table 10.3 - Clauses

This structure acts as the query that is executed in a later phase, once the engine decides on the right plan of execution.

Phase 2: Making a Plan

In this phase, we inspect the query in order to construct a good plan to produce the result it describes.

In general, this will involve choosing the appropriate index (Table 10.4) and constructing a plan in the form of a function. We choose the index based on the joining variable, which can operate on only a single kind of element.

| Joining variable operates on | Index to use |
|------------------------------|--------------|
| Entity IDs | AVET |
| Attribute names | VEAT |
| Attribute values | EAVT |

: Table 10.4 - Index Selection

The reasoning behind this mapping will become clearer in the next section, when we actually execute the plan produced. For now, just note that the key here is to select an index whose leaves hold the elements that the joining variable operates on.

Locating the index of the joining variable is done by index-of-joining-variable:
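A sketch of this function; variable? is an assumed helper that decides whether a string is a variable name (i.e., starts with a ?):

(defn index-of-joining-variable [query-clauses]
  (let [metas-seq (map #(:db/variable (meta %)) query-clauses)        ; the meta clause of each clause
        collapsing-fn (fn [accV v] (map #(when (= %1 %2) %1) accV v)) ; keep names shared by all clauses
        collapsed (reduce collapsing-fn metas-seq)]
    (first (keep-indexed #(when (variable? %2) %1) collapsed))))      ; variable? is an assumed helper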

We begin by extracting the metadata of each clause in the query. This extracted metadata is a 3-element vector; each element is either a variable name or nil. (Note that there is no more than one variable name in that vector.) Once the vector is extracted, we produce from it (by reducing it) a single value, which is either a variable name or nil. If a variable name is produced, then it appeared in all of the metadata vectors at the same index; i.e., this is the joining variable. We can thus choose to use the index relevant for this joining variable based on the mapping described above.

Once the index is chosen, we construct our plan, which is a function that closes over the query and the index name and executes the operations necessary to return the query results.
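A sketch of the plan construction, mapping the joining variable's position (0 for entity ID, 1 for attribute name, 2 for attribute value) to an index per Table 10.4; single-index-query-plan is an assumed helper that runs the predicate clauses against the chosen index:

(defn build-query-plan [query]
  (let [term-ind (index-of-joining-variable query)
        ind-to-use (case term-ind 0 :AVET 1 :VEAT 2 :EAVT)]
    (partial single-index-query-plan query ind-to-use))) ; assumed helper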

Phase 3: Executing the Plan

Assuming the query was executed on July 4th against example data describing three entities (USA with ID 1, France with ID 2, and Canada with ID 3), the results of executing the plan are seen in Table 10.6.

| Result Clause | Result Meta |
|---------------|-------------|
| [:likes "pizza" #{1}] | ["?e" nil nil] |
| [:name "USA" #{1}] | ["?e" nil "?nm"] |
| [:speak "English" #{1 3}] | ["?e" nil nil] |
| [:bday "July 4, 1776" #{1}] | ["?e" nil "?bd"] |
| [:name "France" #{2}] | ["?e" nil "?nm"] |
| [:bday "July 14, 1789" #{2}] | ["?e" nil "?bd"] |
| [:name "Canada" #{3}] | ["?e" nil "?nm"] |
| [:bday "July 1, 1867" #{3}] | ["?e" nil "?bd"] |

: Table 10.6 - Query results

Once we have produced all of the result clauses, we need to perform an AND operation between them. This is done by finding all of the elements that passed all the predicate clauses:

(defn items-that-answer-all-conditions [items-seq num-of-conditions]
  (->> items-seq                                 ; take the items-seq
       (map vec)                                 ; make each collection (actually a set) into a vector
       (reduce into [])                          ; reduce all the vectors into one vector
       (frequencies)                             ; count for each item in how many collections (sets) it was
       (filter #(<= num-of-conditions (last %))) ; keep items that answered all conditions
       (map first)                               ; take from the pairs the items themselves
       (set)))                                   ; return it as a set

In our example, the result of this step is a set that holds the value 1 (which is the entity ID of USA).

We now have to remove the items that didn’t pass all of the conditions:
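A sketch of this filtering step, assuming each result clause is a vector whose third element is a set of entity IDs:

(defn mask-path-leaf-with-items [relevant-items path]
  (update-in path [2] clojure.set/intersection relevant-items))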

Finally, we remove all of the result clauses that are "empty" (i.e., their last item is empty). We do this in the last line of the query-index function. Our example leaves us with the items in Table 10.7.

| Result Clause | Result Meta |
|---------------|-------------|
| [:likes "pizza" #{1}] | ["?e" nil nil] |
| [:name "USA" #{1}] | ["?e" nil "?nm"] |
| [:bday "July 4, 1776" #{1}] | ["?e" nil "?bd"] |
| [:speak "English" #{1}] | ["?e" nil nil] |

: Table 10.7 - Filtered query results
: Table 10.7 - Filtered query results

We are now ready to report the results. The result clause structure is unwieldy for this purpose, so we will convert it into an index-like structure (map of maps), with a significant twist.

To understand the twist, we must first introduce the idea of a binding pair, which is a pair that matches a variable name to its value. The variable name is the one used at the predicate clauses, and the value is the value found in the result clauses.

The twist to the index structure is that we now hold a binding pair of the entity-id / attr-name / value in each location where an index holds an entity-id / attr-name / value.

Phase 4: Unify and Report

At this point, we’ve produced a superset of the results that the user initially asked for. In this phase, we'll extract the values that the user wants. This process is called unification: it is here that we will unify the binding pairs structure with the vector of variable names that the user defined in the :find clause of the query.

Each unification step is handled by locate-vars-in-query-result, which iterates over a query result (structured as an index entry, but with binding pairs) to detect all the variables and values that the user asked for.

Summary

Our journey started with a conception of a different kind of database, and ended with one that:

Supports ACI transactions (durability was lost when we decided to have the data stored in-memory).

Supports "what if" interactions.

Answers time-related questions.

Handles simple Datalog queries that are optimized with indexes.

Provides APIs for graph queries.

Introduces and implements the notion of evolutionary queries.

There are still many things that we could improve: We could add caching to several components to improve performance; support richer queries; and add real storage support to provide data durability, to name a few.

However, our final product can do a great many things, and was implemented in 488 lines of Clojure source code, 73 of which are blank lines and 55 of which are docstrings.

Finally, there's one thing that is still missing: a name. The only sensible option for an in-memory, index-optimized, query-supporting, library developer-friendly, time-aware functional database implemented in 360 lines of Clojure code is CircleDB.