Instructions for installing the QEngine can be found in README.md. If you use the QEngine in your research or software, please support us by citing,

The principles for this API are as follows

All functionality described in this API is supported and it will not change in any minor versions except for extremely serious bugs. However, some functionality may become deprecated in future versions. Functionality not discussed in this document is not considered part of the API and it may change even in minor versions.

The QEngine depends heavily on templates and class names are generally not part of the API. The exceptions to this rule are the basic mathematics types and the DataContainer. Objects are constructed from factory-functions starting with 'make' (e.g. makeOperator or makeState), which are either free functions or members. These factory-functions are part of the API, as is the use of the created objects.

QEngine has many debugging checks that may be numerically expensive. To disable them, set DISABLE_DEBUG_CHECKS to true in your settings.txt file:

set(DISABLE_DEBUG_CHECKS true)

Some of the checks happen in headers, so you will also have to build your own code with the macro QENGINE_DISABLE_DEBUG_CHECKS set to true. How you set this macro depends on the build-system for your code, but if you use cmake, this will work:

add_definitions(-DQENGINE_DISABLE_DEBUG_CHECKS)

All the API is in the namespace qengine, with a few sub-namespaces used mostly to distinguish the makeHilbertSpace-functions for different physics. For this reason all the examples assume,

using namespace qengine;

The code contains public functions and members that are not documented here. These are considered internal. To help distinguish them, they are generally kept in other namespaces, but not always. Member-functions that start with an underscore are always internal.

Fundamental Types

This section of the API discusses the fundamental classes for storing numbers.

Scalar

For scalars, several type-aliases exist to provide abstractions that are natural for physicists rather than programmers,

real is double

count_t is unsigned long long

complex is std::complex<double>

Note that the imaginary unit can be created using the C++ literal from the standard library,

using namespace std::literals::complex_literals;
auto z = 2.0+3.0i;

Vector

The vector-type in QEngine can contain either real (RVec) or complex (CVec) scalars. Both these types have the same constructors,

Misc. Matrix Members

Matrix Arithmetic

Basic Pointwise Arithmetic (element by element): +, -, *, /. This works in any combination of real/complex and matrix/scalar, so adding an RMat to a complex is possible.

Matrix-Vector and Matrix-Matrix Products (Mat*vec and Mat*Mat).

Kronecker Tensor-Product:

kron(mat, mat) -> mat

Tuple

A tuple is a collection that can hold a list of different types. The QEngine implementation is based on the tuple in the C++ standard library. In practice, this is used mostly to contain parameters for function-types (potential function, Hamiltonian function, operator function) and for time-stepping, where it is created implicitly from initializer-lists. For cases where it cannot be created implicitly, it can be created using the function makeTuple,

makeTuple(params...);

The standard-library tuple cannot be constructed implicitly, so this is just a small wrapper to add implicit constructors. The underlying std::tuple is a public member called data,

auto tup = makeTuple(...);
std::get<0>(tup.data);

Normalize

The normalize-enum is used to indicate to functions if their output should be normalized or not. It is either NORMALIZE or NORMALIZE_NOT.

Generic Physics Concepts

This section of the API discusses the fundamental concepts used in simulating the quantum dynamics.

Hilbert Space

The foundation of any simulation is the Hilbert space, as this is where states and operators exist. A state or an operator is assigned to a Hilbert space upon creation, and an operator can only be applied to a state in the same space.

State

For each physics there exists a type that represents states $|\psi\rangle$. For spatial physics, this is the generic f(x), while second quantized physics has a dedicated State-type.

Operator

The operator-interface covers a range of classes that can transform a state into another state $\hat{O}\cdot\left|\psi\right\rangle \rightarrow \left|\chi\right\rangle$. Operators exist for both spatial and second quantized physics, and in both cases they allow the following operations,

All of the above functions, except for the multiplication, assume the state to be normalized. If debug-checks are enabled, they will throw an error if the norm of the state differs from 1 by more than 1e-10.

Diagonalization

Most operators can be diagonalized to find eigenvalues and eigenvectors. This is done with the member-function makeSpectrum, which takes different arguments depending on the operator since the diagonalization uses different algorithms, and returns either a spatial spectrum or a second quantized spectrum. In all cases, the user must specify the index of the highest desired eigenvalue/eigenstate, as it is generally cheaper to find only the lowest N eigenstates. For instance,

.makeSpectrum(count_t largestEigenstate) -> spectrum

This returns a spectrum with the lowest-energy states up to largestEigenstate. With an input of 0, it will contain only the ground state, and with 1 it will contain the ground state and the first excited state.

EigenState Syntax

Generating the spectrum is useful when both eigenstates and -values or multiple eigenstates are needed. However, for generating linear combinations of eigenstates, a simpler syntax exists by using operator[index] on operators. The resulting object only makes sense when used inline in a makeState or makeWavefunction call, where additional parameters to the diagonalization may be passed in. The general syntax is,

Time Evolution

Time evolution is done with time-steppers that take the role of the time evolution operator
$$
U(t+\Delta t; t) \left| \Psi(t) \right\rangle = \left| \Psi(t+\Delta t) \right\rangle,
$$
which is found by solving the Schrödinger equation
$$
\frac{dU}{dt} = -\frac{i }{\hbar} H U.
$$
This equation is solved by assuming the Hamiltonian is piecewise constant over time-slices of $\Delta t$ where the Schrödinger equation has the solution $U(t+\Delta t;t) = \exp(\frac{-i H \Delta t}{\hbar})$. This exponential can be approximated over small time-steps using different algorithms.
Time-steppers are objects that apply the time evolution operator over a small time step. To this end they have a step member function, which has different parameters depending on the context.
The general interface for time-steppers is,

makeTimeStepper(...); // create timestepper with varying dt, different interfaces.
makeFixedTimeStepper(...); // create timestepper with fixed dt, different interfaces.
.step(...); // time-evolve over small time step, different interfaces.
.state(); // access the current value of the time-evolved state.
.reset(...); // reset the algorithm, different interfaces.

In general, there are four different cases for the use of the time-stepper, varying vs. fixed $\Delta t$, and constant vs. dynamic Hamiltonians. Fixed $\Delta t$ allows for optimizations and it should be used whenever possible.
In a similar manner constant Hamiltonians allow for additional optimizations. For dynamic Hamiltonians, we generally approximate the general solution with the expression $U(t+\Delta t;t) \approx \exp(-\frac{i}{\hbar} \int_t^{t+ \Delta t} H(t') dt' )$, which we solve with some form of midpoint-rule. This gives four make functions,

All these functions initialize the starting wave function and the initial parameters for dynamic Hamiltonians.
Note that $\Delta t$ for fixed timesteppers is initialized at the creation of the Alg-object representing the algorithm. This object may have additional parameters, depending on the specific algorithm to be used. The algorithms are specific to particular physics and are explained below.
These make functions have overloads using default algorithms for different kinds of physics (see default algorithms for spatial and second quantized physics).

These functions time-evolve the state while the Hamiltonian parameters move from their current values to the next over a time-interval of length $\Delta t$.
The to-parameters and initialParams are Tuples of whatever the dynamic Hamiltonian is callable with. Dynamic Hamiltonians are represented by either spatial Hamiltonian functions or second quantized operator functions.

In addition, there is a cstep-function, which is useful for stepping dynamic systems without updating the Hamiltonian,

.cstep(); // fixed dt.
.cstep(dt); // non-fixed dt.

For steppers with static Hamiltonians, cstep is identical to step.

Finally, there are two different interfaces for the reset-function, depending on whether the Hamiltonian is static or dynamic,

Spatial Physics

This section of the API discusses the fundamental concepts used in simulating spatial quantum dynamics. This relates to problems with a Schrödinger equation defined on a continuous spatial dimension. To allow numerical simulations, we discretize the spatial dimensions.

One-Particle Hilbert Space

The starting point for any one-particle simulation is setting up the associated Hilbert space,

This creates a numerical grid from xLower to xUpper with dimension steps. As a user you must specify a sufficiently large and finely spaced grid to ensure a proper simulation. The kinematicFactor defines the unit system as discussed below.

The Hilbert space contains the spatial dimension, which is the starting point for any one-particle simulation,

x() -> f_R(x) // returns the numerical grid as a function of x.
spatialDimension() -> f_R(x) // same as x().

The finite-difference approximation to the kinetic part of the Hamiltonian can also be accessed,

T() const -> H_1p // two different names for the same function.
makeKinetic() const -> H_1p

The one-particle Hilbert space also has a number of additional accessors,

This creates a numerical grid from xLower to xUpper with dimension steps. As a user you must specify a sufficiently large and finely spaced grid to ensure a proper simulation. The kinematicFactor defines the unit system as discussed below.

The Hilbert space contains the spatial dimension, which is the starting point for any GPE simulation,

x() -> f_R(x) // returns the numerical grid as a function of x.
spatialDimension() -> f_R(x) // same as x().

The finite-difference approximation to the kinetic part of the Hamiltonian can also be accessed,

T() const -> H_1p // two different names for the same function.
makeKinetic() const -> H_1p

The two-particle Hilbert space is the product of two one-particle Hilbert spaces: $H_{12} = H_1 \otimes H_2 $, but because these two Hilbert spaces are in fact identical it is possible to perform a number of optimizations. The kinematicFactor defines the unit system as discussed below.

The x-dimensions of the subspaces,

x1() -> f_R(x1) // numerical grid of the first subspace as a function of x.
spatialDimension1() -> f_R(x1) // same as above.
x2() -> f_R(x2) // numerical grid of the second subspace as a function of x.
spatialDimension2() -> f_R(x2) // same as above.

In general, the Schrödinger equation for two particles contains a term for the external potential of the form $V_1(x_1) + V_2(x_2)$. Because we only consider particles on the same spatial dimension with the same potential, we can do some optimizations by instead defining the potential on a "common" x-axis, $V(x_c)$. The x-dimension of this common x-space can be accessed by,

Unit System

The kinematicFactor defines the unit system used in the simulation. This section is taken from the QEngine article. The regular form of the time-dependent Schrödinger equation is
$$
i \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\psi.
$$
In order to ease the simulation this equation must be reformulated in rescaled units.
This is achieved using a process known as nondimensionalization where quantities in SI-units are written in product form e.g. $a$ becomes $a=\alpha \tilde{a}$ where $\alpha$ carries both the dimension of $a$ and a magnitude while $\tilde a$ is a non-dimensional scaling value.
Introducing nondimensionalized units,
$$
x = \chi \tilde x, \qquad t=\tau \tilde t,\qquad V=\epsilon \tilde V, \qquad \psi = \xi \tilde\psi.
$$
For instance $\chi$ may be on the order of $\mu \text{m}$. The kinematicFactor is then defined as $\kappa = \tau \hbar/(2 m \chi^2)$. If these choices are substituted into the time-dependent Schrödinger equation one obtains the dimensionless form,
$$
i \frac{\partial \tilde \psi}{\partial \tilde t} = -\kappa \frac{\partial^2 \tilde \psi}{\partial \tilde x^2} + \tilde V(\tilde x)\tilde \psi.
$$
The default choice is $\kappa=0.5$, which is equivalent to $\hbar=m=1$.

Function of x

A central type for spatial simulations in the QEngine is discretized functions of x $(f(x))$ belonging to a particular Hilbert space. Fundamentally, this is a vector containing the value of the function for each point in the x-grid. It can describe both real and complex functions, but cannot change between the two. Real and complex f(x) are used to describe wavefunctions, while real f(x) are used to describe potentials.
Any operations only work with entities living on the same Hilbert space.

Basic Pointwise Arithmetic (element by element): +, -, *, /. This works with functions of x and scalars.

Note that these functions are always defined on a grid, so the normalization condition is
$$
\sum_{x=a}^b |f(x)|^2 \Delta x = 1,
$$
where $a$ and $b$ are the minimal and maximal grid values. This carries a factor of $\Delta x$ so f.norm() != f.vec().norm().

It is also possible to replace these functions with softbound versions where the step function is replaced by a smooth sigmoid $\Theta_s=\frac{1}{1+\exp(-h (x-x_0))}$ where h is the hardness parameter.
For $h=0$ there is no jump, and the step becomes discontinuous as $h$ approaches infinity.

sstep(x, real left, real hardness);

It is also possible to instantiate a function of x directly from vectors

The initial values are necessary to deduce which types the function can be called with, so it is important to explicitly use floating-point numbers (5.0 instead of 5). To be safe, one can be explicit and write,

auto V = makePotentialFunction(V_func,real(5),real(0));

Potential functions are callable with the arguments put into them and also with Tuples of them.
If all arguments are reals the function is callable with RVec,

One-Particle Hamiltonian

This type of object represents a Hamiltonian $H = T + V$. It acts as an operator on f(x)'s within the same Hilbert space. A one-particle Hamiltonian is not created from make-functions or constructors but only through members of the one-particle Hilbert space and arithmetic.

Diagonalization
A one-particle Hamiltonian is a one-particle operator, so the eigenfunctions can be found using a spectrum (see operator for details):

makeSpectrum(count_t largestEigenvalue) -> spatial spectrum

GPE Hamiltonian

This is the non-linear Hamiltonian from the Gross-Pitaevskii equation $H = T + V + \beta |\psi|^2$. It also acts as an operator on f(x)'s from the same Hilbert space. This type of Hamiltonian must be constructed using the GPE term that represents the nonlinearity $\beta |\psi|^2$,

auto gpeTerm = GpeTerm{real beta};

As for the other Hamiltonians, the GPE Hamiltonian must be created from the Hilbert space combined with arithmetic. As an example,

GPE Diagonalization

Diagonalizing non-linear operators is quite challenging, and it requires more specialized algorithms with a number of fudge-factors. The QEngine implements an optimal damping algorithm, which has the interface,

The default-values for the fudge-factors are somewhat arbitrary, and they are chosen mostly because they work for the use-cases in the example-programs. We highly recommend checking that your states are true eigenfunctions by verifying that they have a low energy variance.

largestEigenvalue: is the largest eigenvalue to find as explained in operator. Currently, anything larger than 0 requires fine-tuning, while anything larger than 2 is unlikely to give correct results.

GROUNDSTATE_ALGORITHM: is the algorithm to be used to find the ground state. It can currently be either GROUNDSTATE_ALGORITHM::OPTIMAL_DAMPING or GROUNDSTATE_ALGORITHM::MIXING. In most cases, GROUNDSTATE_ALGORITHM::OPTIMAL_DAMPING is fastest, but in some cases it does not give correct results. For instance in a double-well, it has been known to find the ground state in only one of the wells.

convergenceCriterion: is the relative goodness of the resulting state that has to be reached for the algorithm to converge. Lower is typically better, but too low can also lead to incorrect results.

maxIterationsPerEigenstate: is the number of iterations the algorithm is allowed to spend on each eigenstate. Typically, more is better.

mixingvalues: is a list of values for the algorithm to use, one for each eigenstate. The algorithm 'mixes' the results of two previous iterations as $\psi^i = \psi^{i-1} - \tau \cdot \chi_i$ where $\tau$ is the mixing value. This means that a higher mixingvalue will be faster, but will also have a higher probability of being incorrect.

When using optimal damping as the ground state algorithm, the first value of the list will not be used.

When passing in fewer mixing-values than eigenfunctions to be calculated, the last value of the list will be used. So when calculating up to the second excited state, but passing in only a single mixing-value, this value will be used for all 3 states.

Two-Particle Hamiltonian

This object represents the Hamiltonian
$$
H = T_1 \otimes I + I \otimes T_2 + V(x_1) \otimes I + I \otimes V(x_2) + \delta(x_1-x_2) \cdot V_i(x),
$$
which is a little verbose but accurate. There is both a common shared potential and a point-interaction with space-dependent strength.
The shared potential is explained in the section on the Two-particle Hilbert space.
The point-interaction is a function of x on a special x-space for point-interactions. It is created through,

makePointInteraction(f(xPotential))

It is not compatible with other spaces, although it has all the same functionality as the other functions of x. So while this is possible,

Two-Particle Diagonalization

The two-particle Hamiltonian is represented by a general sparse matrix, which uses a different algorithm for diagonalization than the single-particle Hamiltonian. Often, it will be necessary to calculate more eigenvalues/eigenstates than are actually needed to get a correct result. You can always verify if your state is a true eigenstate by calculating the energy variance. The value nExtraStates allows for this. It can be considered a fudge-factor, and will not affect the number of eigenstates in the generated spectrum.
Raising the tolerance may give faster results, but with less accuracy. It is normally not recommended.

Two-Particle Subspace Arithmetic

The entities belonging to the subspaces of the two-particle Hilbert space can be used individually to perform regular one-particle physics. For instance,

In both cases we create new functions such that respectively
$\psi(x_1, x_2) = \psi_1(x_1)\cdot \psi_2(x_2)$
and
$\psi_a(x_1, x_2) = \psi_1(x_1)+\psi_2(x_2)$.
Note that the order of the elements is irrelevant; it only matters which Hilbert space they come from. The resulting objects are still functions of x, just defined on the combined Hilbert space, meaning that all functions, such as overlap, still work.

It is even possible to create two-dimensional Hamiltonians by summing two single-particle Hamiltonians,

The resulting Hamiltonian can be diagonalized and applied to two-dimensional wave functions, but it cannot currently be used in time evolution. Results of the diagonalization can be used in two-particle simulation, though.

Hamiltonian Function

Hamiltonian functions are constructed by adding a potential function to a static Hamiltonian. They are callable in the same way as potential functions but return Hamiltonians.
They take their initial values from the potential functions. This allows for creating time-dependent Hamiltonians. As an example,

The example creates a Hamiltonian with an oscillating external harmonic oscillator and an interaction that is stronger the further the particles are from the trap center.

Spatial Spectrum

This type is returned from the makeSpectrum functions for spatial operators. It is a collection of the N lowest eigenvalues and eigenfunctions of the operator. Here, we use the terminology eigenfunction to make it clear that these eigenstates are functions of x and not vectors. It has the following member functions,

These functions work with all three spatial Hamiltonians (One-Particle, Two-Particle, and GPE), and can be passed into the corresponding makeTimeStepper/makeFixedTimeStepper-function.

The splitstep-algorithm can also be modified in three ways,

Normalizing the wavefunction after each time-step

normalizing(splitstep(...));

Renormalizing is similar to normalizing, but it normalizes to the norm the state had before the step rather than to unit norm,

renormalizing(splitstep(...));

Adding imaginary boundaries to the simulation. Practically, the wave function will be multiplied with $\exp(V |dt|)$, which creates a decay in the regions with a non-zero and negative imaginary potential.
This can be used to create absorbing boundary conditions instead of the default circular boundaries from the split-step Fourier algorithm.

Default Spatial Time Evolution Algorithms

The default algorithm for all three spatial physics is,

addImagBounds(splitstep(...), defaultImagPot);

The default boundary-potential is zero in the center 3/4 of the x-space, then gets stronger as $x^2$. It is continuous and once differentiable.
This gives the following overloads for timeSteppers using spatial physics,

The simplicity of these types allows their Hilbert space to be inferred, which is an n-level Hilbert space with $n=2$.

Bose-Hubbard Operators

The Bose-Hubbard Hilbert space is a special case, since the states are ordered for a more efficient representation. Due to this specialization the only operators provided are those for building the Bose-Hubbard Hamiltonian.
It is possible to create additional operators, but due to the large size of Bose-Hubbard Hilbert spaces it does not make sense to create them directly from matrices. Creation from Fock operators is possible but not very efficient,

Bose-Hubbard operators are generally represented by sparse matrices. For small systems, dense operators are more efficiently created by the makeDenseOperator function.
Due to the sparsity, makeSpectrum has two extra parameters,

Generally, if the diagonalization fails then extraStates should be increased, typically by several states. This will not increase the number of states in the resulting spectrum.
The same parameters are also added to the makeState-function,

Operator Function

makeOperatorFunction(Callable, initParams...);

This creates a callable object returning an operator, which is initialized with initParams.

Second Quantized Spectrum

This type is returned from the makeSpectrum functions for second quantized operators. It is a collection of the N lowest eigenvalues and eigenstates of the operator. It has the following member functions,

Fock States and Fock Operators

For second quantized problems it often makes sense to consider abstract, Hilbert space-free states and operators in an abstract fock-basis. These can be created and manipulated before being applied to an actual physical context.
The fock-state can be created from:

fock::state{count_t... quanta};

So e.g. the state $\left| 0, 3, 1 \right\rangle$ can be created from

auto state = fock::state{0,3,1};

These fock-states allow addition and subtraction, and multiplication with scalars. This allows us to create linear combinations of fock-states:

These operators can be added, subtracted, and multiplied (though not divided) with each other and scalars. This allows construction of complex operators from more simple ones, which can then be applied to a Hilbert space to create a matrix representation,

Note that fock-operators do not currently allow the functions expectationvalue, variance, or standardDeviation.

DataContainer

The datacontainer is a class to store, save, and load data. It can store the basic data types (scalars, vectors, and matrices) as well as strings and std::vectors of any of these types. It can save to json-format and .mat-format if built with matio.
Creation:

DataContainer(); // creates empty datacontainer.

The DataContainer uses map-syntax to set and get values,

dataContainer["vec"] = vec;
vec = dataContainer["vec"];

If the field "vec" does not exist in the datacontainer then it is automatically created. It is also possible to append to existing fields,

dataContainer["vec"].append(val);

If val is a function of x then it is necessary to get the underlying vector by calling .vec(). As an example,

dataContainer["psi"].append(psi.vec());

When appending a scalar to a vector it will place it at the end. When appending a vector to a matrix (or a vector) with the same number of rows, it will add the vector as a column.
In all other cases, it will append it to what corresponds to a cell-array in MATLAB. The data will also be saved to cell-arrays when saving to mat-file.

It is also possible to add strings to the datacontainer using the syntax,

dataContainer["string"] = std::string("exampleString");

Besides assignment, the DataContainer can return serialized data in three ways,

Optimal Control

Control

The Control class represents a discretized multi-valued function of time $\mathbf{u}(t)$. It is used to represent the physical parameters that control a system. A multi-valued control is used for scenarios with multiple control functions, and paramCount is the number of controls.

Control Creation

This section describes different functions for creating controls.

Directly from a list of RVec:

makeControl(std::vector<RVec> params, real dt);

Each entry in the std::vector is one of the control parameters as functions of time.

Linear control:

makeLinearControl(real from, real to, count_t size, real dt);

This creates a linear control [from, to] with size elements, time step dt, and paramCount 1.

It is also possible to create controls using mathematical functions similarly to a function of x.

Control for time axis:

makeTimeControl(count_t size, real dt, real t0 = 0.0);

Creates a linspace-control starting at $t_0$ with size points spaced dt apart and paramCount 1. This is useful in combination with mathematical expressions.
There are also a number of specialized functions for creating functions of x, which are useful for defining potentials.

It is also possible to replace these functions with softbound versions where the step function is replaced by a smooth sigmoid $\Theta_s=\frac{1}{1+\exp(-h (t-t_0))}$ where h is the hardness parameter.
For $h=0$ there is no jump, and the step becomes discontinuous as $h$ approaches infinity.

sstep(Control t, real left, real hardness);

Control Arithmetic

Controls support a number of standard mathematical operations.

Basic Pointwise Arithmetic (element by element): +, -, *, /. This works with other controls and scalars.

Combining Controls

Like Vectors, controls can be glued and appended. Append will simply place the back after the front, while glue will do the same but ignore the first element of the back, asserting that the first element of the back is the same as the last element of the front.

Norms

It is also possible to compute the discretized $L^2$ and $H^1$ norms of controls. These are given as
$$
L^2 : \| f \|^2 = \sum_i |f_i|^2 \, dt,
$$
$$
H^1 : \| f \|^2 = \sum_i |\dot{f}_i|^2 \, dt.
$$
The $H^1$-norm is approximated by sum(diff(f)^2)/dt, where diff is the differences between neighbouring elements in the vector describing f.

This accessor returns an expression-template that can be used similarly to an RVec-reference. However, it does not work well with auto.

Raw matrix reference

.mat() -> RMat&

Different parameters are the rows and different time-points are the columns.

MetaData

A control also has additional information known as metaData,

.dt() -> real // value of the time step used in this control.
.setDt(real dt); // redefine the time step.
.size() -> count_t // number of time-points.
.paramCount() -> count_t // number of different control parameters.
.metaData() -> ControlMetadata // struct with members paramCount, controlSize, and dt.
.endTime() -> real // value of time at the last point of the control.

Control Creation Example

The power of creating controls comes when all the above functionality is combined. Starting with a time-control one can create controls in a mathematical manner,

Basis Control

For some algorithms we do not optimize directly on the control, but parametrize it in a basis of smooth functions as $u(t) = u_0(t) + \sum_m c_m f_m(t)$ where $c_m$ are the basis coefficients. Instead of optimizing the full control, we optimize the basis coefficients, which are contained in a BasisControl class. The basis functions $f_m(t)$ are controls collected in a basis object. A basis control has a number of member functions,

.size() -> count_t // number of time steps.
.paramCount() -> count_t // number of controls.
.mat() -> RMat & // get underlying matrix.
dot(BasisControl, BasisControl) -> real // sum of point-wise products of the two controls.
.at(count_t index) -> Expression template // same as .at for Control; can be used as RVec but not with auto.

Multiplying a basis control with a basis evaluates the expression $\sum_m c_m f_m(t)$ without $u_0$,

BasisControl*Basis -> Control

Basis

A Basis is a collection of controls. This can be used for parametrizing the control as,
$$
u(t) = u_0(t) + \sum_m c_m f_m(t).
$$
Here $\{f_m(t)\}$ is the set of basis controls. A Basis can turn a BasisControl into a regular control. A Basis is constructed from a list of controls,

Basis{std::vector<Control>};
Basis{Control1, Control2, ...};

All controls must have the same size, paramCount, and dt. Basically, a basis is only meant to be constructed and then passed into an algorithm, but it does have a few accessors and manipulators,

Problem

Problem Summing

Problems connect the physics simulation with the optimization algorithms, so the first step to optimal control is to create an optimization problem. In the current version of the QEngine, this means a state-transfer problem possibly with regularization and/or bounds added by using the operator +,

The resulting problem will have the same member functions as a vanilla state-transfer problem, but the cost and gradient-functions will be the sum of the costs and gradients from each problem.

State-Transfer Problem

The state-transfer problem represents the cost-function,
$$
J = \frac{1}{2} \left(1- |\langle \psi_t | \psi(T) \rangle|^2 \right) = \frac{1}{2} \left(1- | \langle \psi_t | U[H(\mathbf{u})] |\psi_0 \rangle|^2 \right),
$$
where $\psi_t$ is the target state, $\psi(T)$ is the final state, $\psi_0$ is the initial state, and $U$ is the time evolution operator. Similar to time evolution, there is a generic way of creating state-transfer problems and a set of defaults for each type of physics. The generic syntax is,

Here, most arguments are the same, but the time-stepping algorithms are defaulted based on the Hamiltonian. These defaults are the same as the defaults for the time evolution (For spatial physics and second quantized).
The exception to this is GPE-physics, where the nonlinearity requires a special backward-stepping algorithm.

In the second function, the dHdu is defaulted to a numerically calculated dHdu.

State-Transfer Problem Members

Before the cost or gradient of the cost function can be calculated, the state-transfer problem must first be updated with the current value of the control.

.update(control)

Accessors used for physics and optimal control algorithms,

.fidelity() -> real // The current value of the fidelity.
.cost() -> real // The current value of the cost functional.
.gradient() -> Control // The gradient of the cost functional.

All the quantities are computed for the current value of the control set by .update. The fidelity is between the target state and the final state, i.e. the initial state time-evolved from the beginning to the end of the control.
The cost and gradient are defined by the problem sum.

Some additional accessors are,

Initialization values:

.control() -> Control // the last control passed to update, or the initial control if update has not been called.
.initialControl() -> Control // initial value of the control.
.dt() -> real // time step.
.dHdu() -> dHdu; // Derivative of the Hamiltonian

Total number of steps taken:

.nPropagationSteps() -> count_t;
.resetPropagationCount();

The number of propagation steps provides a machine-independent measure of speed.

GpeAdjointSplitStep

The gpeAdjointSplitStep is a time evolution algorithm specifically for the backwards propagation needed for control of the Gross-Pitaevskii equation. Its interface is the same as the normal splitstep, but it only works for time-dependent Hamiltonians with fixed dt,

gpeAdjointSplitStep(Hamiltonian H, real dt);

Regularization

The regularization limits how quickly the control can change. It is introduced by adding a term to the cost function that penalizes fast changes,
$$
J_r = \frac{\gamma}{2} \int_0^T \dot{u}^2 dt.
$$
The front factor $\gamma$ determines the strength of the regularization. The interface is,

makeRegularization(Control initialControl, real factor = 1.0)

Note: The regularization is not meant to be used on its own; it has to be added to a state-transfer problem (see problem summing). The Regularization has the same basic member functions as a state-transfer problem,
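
A discretized version of $J_r$ can be sketched in plain C++ (illustrative only, not the QEngine API), approximating $\dot{u}$ by finite differences on a control sampled with time step dt.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch (not the QEngine API) of the discretized regularization
// cost J_r = (gamma/2) * integral of u'(t)^2 dt, with u'(t) approximated by
// forward finite differences.
double regularizationCost(const std::vector<double>& u, double dt, double gamma)
{
    double cost = 0.0;
    for (std::size_t i = 0; i + 1 < u.size(); ++i) {
        const double du = (u[i + 1] - u[i]) / dt;  // finite-difference u'(t)
        cost += du * du * dt;
    }
    return 0.5 * gamma * cost;
}
```

A constant control incurs no regularization cost; the cost grows with the square of the rate of change.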

Boundary Term

Experimental setups usually have some kind of minimally or maximally allowed control values. For instance, there might be a maximally allowed laser intensity. These bounds can be introduced in the QEngine by adding a soft boundary term to the cost function,
$$
J_b=\frac{\sigma}{2} \int_0^T \Theta(u_a-u)(u-u_a)^2 + \Theta(u-u_b)(u-u_b)^2 dt,
$$
where $\Theta$ is the Heaviside step function and the bounds on the control are $u_a \leq u(t) \leq u_b$. The value $\sigma$ sets the hardness of the bounds, and it typically has to be on the order of several thousand. The interface is,

Note: The boundary term is not meant to be used on its own, but to be added to a state-transfer problem (see problem summing). The boundary term has the same basic member functions as the state-transfer problem:
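
A discretized version of $J_b$ can be sketched in plain C++ (illustrative only, not the QEngine API): the Heaviside factors simply mean that only control values outside $[u_a, u_b]$ are penalized.

```cpp
#include <vector>

// Illustrative sketch (not the QEngine API) of the discretized soft-bound cost
// J_b = (sigma/2) * integral of penalty terms for control values outside
// [ua, ub]; values inside the bounds contribute nothing.
double boundaryCost(const std::vector<double>& u, double dt,
                    double ua, double ub, double sigma)
{
    double cost = 0.0;
    for (double ui : u) {
        if (ui < ua) cost += (ui - ua) * (ui - ua) * dt;  // below lower bound
        if (ui > ub) cost += (ui - ub) * (ui - ub) * dt;  // above upper bound
    }
    return 0.5 * sigma * cost;
}
```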

Differentiated Hamiltonian

To calculate the gradient of a cost function, we need to differentiate the Hamiltonian with respect to the different control parameters. For spatial physics, this reduces to differentiating the potential, as the kinetic part of the spatial Hamiltonian is always constant with respect to the control parameters.
This differentiation can be performed analytically or numerically. The analytic result must be supplied by the user,

The analytic approach is typically a factor of two faster than using the numeric approximation.
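
The numeric approximation can be sketched as a central finite difference in plain C++ (illustrative only, not the QEngine API). It needs two potential evaluations per control parameter, which is roughly where the factor-of-two advantage of the analytic derivative comes from.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative sketch (not the QEngine API) of a numeric dV/du:
// central difference dV/du ~ (V(u + h) - V(u - h)) / (2h), evaluated
// point-wise on the spatial grid. Two potential evaluations are needed.
std::vector<double> numericDVdu(
    const std::function<std::vector<double>(double)>& potential,  // V(u) on a grid
    double u, double h = 1e-7)
{
    const std::vector<double> plus = potential(u + h);
    const std::vector<double> minus = potential(u - h);
    std::vector<double> d(plus.size());
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] = (plus[i] - minus[i]) / (2.0 * h);
    return d;
}
```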

Algorithms

Algorithms are created from make-functions, just as in the rest of the QEngine. Most optimization algorithms have many places where you may want to adjust specific behavior, so the algorithms have many hooks to provide this flexibility. For your convenience, these can be passed into the make-function in any order. See details in hooks. Once an algorithm has been created, a call to .optimize() starts the optimization. This function has no arguments, as all modifications of the algorithm happen through the hooks.

optimize can be called repeatedly, which just continues optimization. This is useful for stopping-criteria that depend on external cues, e.g. button-presses in a real-time program.
The BFGS-algorithm has some internal state, which can be reset by calling

.resetBfgs();

GRAPE

The GRAPE-algorithm applies standard mathematical optimization techniques, in this case steepest descent or BFGS, to optimal control problems,

GROUP

The GROUP-algorithm applies the same standard optimization techniques as GRAPE to optimal control problems, but it does so in a parametrized basis.
So instead of optimizing the control $u(t)$ directly, it optimizes the parameters $c_m$ such that $u(t) = u_0(t) + \sum_m c_m u_m(t)$, where $u_0$ is the initial control in the problem.
The values $c_m$ are contained in a BasisControl, which is assumed to be initialized to zero, implying $u(t) = u_0(t)$ in the first iteration.

bfgsRestarter: only used for BFGS; the default is stepTimesYRestart(1e-10).

dGROUP

This is the dressed version of the GROUP-algorithm. Dressing means that the basis for the algorithm is not fixed but is reshuffled on the fly, in principle allowing the algorithm to escape artificial traps. The interface is,

Accessors for GRAPE, GROUP and dGROUP

The accessors for the algorithm-objects for GRAPE, GROUP and dGROUP are very similar, which makes it easier to reuse hooks. Most of them are useful when writing stoppers and collectors. The accessors are:

Hooks: You can access all hooks through .hookName().

Problem: You can access the problem through .problem(). This is probably the most important accessor. It is not recommended to call .update(...) on it, since that will interfere with the algorithm.

Misc. accessors

.stepSize() -> real // stepsize used in the last finished iteration.
.stepDirection() -> Control/BasisControl // stepdirection in the last finished iteration.
.iteration() -> count_t // index of current iteration.

BFGS type algorithms have a couple of extra accessors,

.stepTimesY() -> real;

The inner product of the step and the difference in gradients between consecutive iterations. This must be positive in order to satisfy the curvature condition, which is needed for a guaranteed descent direction in quasi-Newton methods. If this number is too close to zero, the Hessian approximation may have low quality. If it becomes zero, the algorithm will become NaN-infested.

.stepsSinceLastRestart() -> count_t;

This counts the number of iterations since the last restart. If it equals zero, the algorithm restarted at the end of the current iteration.

For GROUP and dGROUP, the algorithm is wrapped in another class that slightly changes the interface.

The gradient-function returns a BasisControl.
For these algorithms there are also additional accessors,

.coefficients() -> BasisControl // current value of the coefficients defining the control.
.basis() -> Basis // the basis object used in the parametrization.
.baseControl() -> Control // u0, the static part of the expression generating the control.

dGROUP has one specialized accessor,

.numberOfResets() -> count_t;

This counts the number of superiterations used in the optimization.

Hooks

All hooks are callable objects (e.g. lambdas) that are called at particular times during the algorithm. Each hook has a particular interface, allowing the user to modify the behavior of the algorithm, both for simple tasks (e.g. stopping criteria and data collection) and for more algorithmic ones such as calculating the stepsize. They are created by calling the corresponding make-function with a callable object (functor) of the correct interface. None of the make-functions take any other parameters.

In general, the algorithm passes a const reference to itself to the hook, so using generic lambdas is recommended.

Stopper

A policy that allows deciding when the algorithm should stop. It is always called at the end of an iteration, after the collector.

makeStopper(Callable stopper_functor);

Must be callable as (const Alg&) -> boolean. When it returns true, the optimization stops.
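
As an illustration of the stopper interface, here is a plain-C++ sketch. MockAlg is a hypothetical stand-in for the algorithm object; in actual QEngine code you would pass a generic lambda of the same shape to makeStopper.

```cpp
// Illustrative sketch (not the QEngine API): a stopper is a callable
// (const Alg&) -> bool checked at the end of every iteration. MockAlg is a
// hypothetical stand-in exposing the accessors a stopper might inspect.
struct MockAlg {
    double fidelity() const { return fidelity_; }
    unsigned iteration() const { return iteration_; }
    double fidelity_ = 0.0;
    unsigned iteration_ = 0;
};

// Stop once the fidelity is high enough or an iteration budget is exhausted.
bool shouldStop(const MockAlg& alg)
{
    return alg.fidelity() > 0.999 || alg.iteration() >= 1000;
}
```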

This uses an interpolating linesearch-algorithm that fulfills the strong Wolfe-conditions based on the MINPACK subroutine cvsrch. The parameters are,

minStepSize and maxStepSize give bounds for the allowed size of the step, which must be non-negative.

On each call to the stepSizeFinder, an initial guess must be given. Generally, the result from the last iteration is reused for this, but when that is not available, e.g. in the first iteration, defaultInitialStepSizeGuess is used.

The algorithm works by bracketing an interval containing the minimum. If the length of this interval is below xtol the algorithm exits. This value must be non-negative.

The algorithm must evaluate the cost at several different places. It will do this no more than maxFunctionEvaluations times per call to the stepSizeFinder after which it returns the best value found so far.

BfgsRestarter

BfgsRestarters are similar to stoppers but give a condition for BFGS-algorithms to restart, meaning that the Hessian approximation is reset to the identity. It is called at the end of each iteration, right after the stopper.

makeBfgsRestarter(Callable restart_functor)

It has the same interface as a stopper: (const Alg&) -> bool. If it returns true, the approximation of the Hessian is reset and BFGS restarts.

The first restarter checks the BFGS-accessor .stepTimesY() and restarts if it is below tol. The second restarts BFGS every N iterations. If verbose is true, they will output "restarted bfgs" to the console when triggered.
BfgsRestarters can be summed just like stoppers. The default is probably sufficient in most cases.

DressedRestarter

DressedRestarters are similar to stoppers and BfgsRestarters, but give a condition for dressed algorithms to generate new bases.

makeDressedRestarter(Callable restart_functor)

It has the same interface as a stopper: (const Alg&) -> bool. If it returns true, a new basis is generated for the algorithm, starting a new superiteration.

The first will restart if the cost between consecutive iterations has decreased by less than minimalCostDecrease. The second will restart if the stepSize is less than stepSizeTolerance. In either case, verbose will cause a message to be written to standard output when triggered.

BasisMaker

BasisMakers are specific hooks for dressed algorithms. They are called when the dressedRestarter triggers a restart and they return the new basis.

makeBasisMaker(Callable maker_functor)

The interface is () -> Basis. Unlike the other hooks, the basisMaker works without any parameters, as it builds a new basis independently of the algorithm state.

This creates a basis with basisSize elements of the form $S(t) \sin\left((n+r) \frac{\pi t}{T}\right)$, where $n$ increases for each basis element starting from $n = 1$, $r$ is a random factor in the range [-maxRand, maxRand], and $S(t)$ is a shape function that ensures the basis controls vanish continuously at $t=0$ and $t=T$. An example is,
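
The construction can be sketched in plain C++ (illustrative only, not the QEngine API). Here the shape function is taken to be $S(t) = \sin(\pi t / T)$, which is one simple choice that vanishes at both ends; the actual shape function used by the QEngine may differ.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Illustrative sketch (not the QEngine API) of a randomized sine basis:
// element n is S(t) * sin((n + r) * pi * t / T), n = 1, 2, ..., with r drawn
// uniformly from [-maxRand, maxRand]. S(t) = sin(pi t / T) is an assumed
// shape function making every basis control vanish at t = 0 and t = T.
std::vector<std::vector<double>> makeSineBasis(
    std::size_t basisSize, std::size_t nSteps, double T,
    double maxRand, unsigned seed)
{
    const double pi = std::acos(-1.0);
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> randomShift(-maxRand, maxRand);
    const double dt = T / static_cast<double>(nSteps - 1);
    std::vector<std::vector<double>> basis(basisSize,
                                           std::vector<double>(nSteps));
    for (std::size_t n = 0; n < basisSize; ++n) {
        const double r = randomShift(gen);
        for (std::size_t i = 0; i < nSteps; ++i) {
            const double t = i * dt;
            const double shape = std::sin(pi * t / T);  // S(t): zero at t=0, t=T
            basis[n][i] = shape * std::sin((n + 1 + r) * pi * t / T);
        }
    }
    return basis;
}
```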