This class represents the orientation between two arbitrary frames A and D associated with a Space-fixed (extrinsic) X-Y-Z rotation by "roll-pitch-yaw" angles [r, p, y], which is equivalent to a Body-fixed (intrinsic) Z-Y-X rotation by "yaw-pitch-roll" angles [y, p, r].
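A small usage sketch, assuming this documents Drake's drake::math::RollPitchYaw class (the header paths here are assumptions):

    #include "drake/math/roll_pitch_yaw.h"   // assumed path
    #include "drake/math/rotation_matrix.h"  // assumed path

    int main() {
      // Orientation of a frame D in a frame A from angles [r, p, y].
      const drake::math::RollPitchYaw<double> rpy(0.1, 0.2, 0.3);
      // Equivalent rotation matrix (extrinsic X-Y-Z == intrinsic Z-Y-X).
      const drake::math::RotationMatrix<double> R_AD = rpy.ToRotationMatrix();
      return 0;
    }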


Given a column vector containing the stacked columns of the lower triangular part of a square matrix, returns a symmetric matrix whose lower triangular part is the same as the original matrix.

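A minimal Eigen sketch of this reconstruction (not the library's implementation); it assumes the input length is a triangular number n(n+1)/2:

    #include <cmath>
    #include <Eigen/Dense>

    Eigen::MatrixXd ToSymmetricFromLowerTriangularColumns(
        const Eigen::VectorXd& lower) {
      // Recover n from lower.size() == n * (n + 1) / 2.
      const int n = static_cast<int>(
          std::lround((std::sqrt(1.0 + 8.0 * lower.size()) - 1.0) / 2.0));
      Eigen::MatrixXd S(n, n);
      int k = 0;
      for (int j = 0; j < n; ++j) {    // stacked column by column
        for (int i = j; i < n; ++i) {  // lower-triangular entries only
          S(i, j) = lower(k);
          S(j, i) = lower(k);          // mirror into the upper triangle
          ++k;
        }
      }
      return S;
    }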

Given ᴮd/dt(v) (the time derivative in frame B of an arbitrary 3D vector v) and given ᴬωᴮ (frame B's angular velocity in another frame A), this method computes ᴬd/dt(v) (the time derivative in frame A of v) by: ᴬd/dt(v) = ᴮd/dt(v) + ᴬωᴮ x v.

This mathematical operation is known as the "Transport Theorem" or the "Golden Rule for Vector Differentiation" [Mitiguy 2016, §7.3]. It was discovered by Euler in 1758. Its explicit notation with superscript frames was invented by Thomas Kane in 1950, and its use as the defining property of angular velocity was introduced by Mitiguy in 1993.

In source code and comments, we use the following monogram notations: DtA_v = ᴬd/dt(v) denotes the time derivative in frame A of the vector v. DtA_v_E = [ᴬd/dt(v)]_E denotes the time derivative in frame A of vector v, with the resulting new vector quantity expressed in a frame E.

In source code, this mathematical operation is performed with all vectors expressed in the same frame E as [ᴬd/dt(v)]ₑ = [ᴮd/dt(v)]ₑ + [ᴬωᴮ]ₑ x [v]ₑ, which in monogram notation is: DtA_v_E = DtB_v_E + w_AB_E x v_E.
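A minimal standalone sketch of this operation with Eigen, with all vectors expressed in a common frame E:

    #include <Eigen/Dense>

    // Transport Theorem: DtA_v_E = DtB_v_E + w_AB_E x v_E.
    // All three arguments must be expressed in the same frame E.
    Eigen::Vector3d ConvertTimeDerivativeToOtherFrame(
        const Eigen::Vector3d& v_E,       // [v]_E
        const Eigen::Vector3d& DtB_v_E,   // [ᴮd/dt(v)]_E
        const Eigen::Vector3d& w_AB_E) {  // [ᴬωᴮ]_E
      return DtB_v_E + w_AB_E.cross(v_E);
    }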

For a matrix A of type, e.g., MatrixX<AutoDiffXd>, the comparable operation B = A.cast<double>() should (and does) fail to compile. Use DiscardGradient(A) if you want to force the cast (and explicitly declare that information is lost).

This method is overloaded to permit the user to call it for double types and AutoDiffScalar types (to avoid the calling function having to handle the two cases differently).
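A hedged usage sketch (header paths are assumptions):

    #include <Eigen/Dense>
    #include "drake/common/autodiff.h"  // assumed: defines drake::AutoDiffXd
    #include "drake/math/autodiff.h"    // assumed: declares DiscardGradient

    void Example(const Eigen::Matrix<drake::AutoDiffXd, Eigen::Dynamic,
                                     Eigen::Dynamic>& A) {
      // A.cast<double>() would fail to compile; DiscardGradient explicitly
      // drops the derivative information and returns just the values.
      const Eigen::MatrixXd B = drake::math::DiscardGradient(A);
    }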

B = DiscardZeroGradient(A, precision) enables casting from a matrix of AutoDiffScalars to AutoDiffScalar::Scalar type, but first checks that the gradient matrix is empty or zero.

For a matrix of type, e.g. MatrixX<AutoDiffXd> A, the comparable operation B = A.cast<double>() should (and does) fail to compile. Use DiscardZeroGradient(A) if you want to force the cast (and the check).

This method is overloaded to permit the user to call it for double types and AutoDiffScalar types (to avoid the calling function having to handle the two cases differently).

Parameters
    precision    is passed to Eigen's isZero(precision) to evaluate whether the gradients are zero.
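A hedged usage sketch (header paths are assumptions):

    #include <Eigen/Dense>
    #include "drake/common/autodiff.h"         // assumed: drake::AutoDiffXd
    #include "drake/math/autodiff_gradient.h"  // assumed: DiscardZeroGradient

    void Example(const Eigen::Matrix<drake::AutoDiffXd, Eigen::Dynamic,
                                     Eigen::Dynamic>& A) {
      // Throws if any gradient in A is nonzero (to within the precision);
      // otherwise returns the values, certifying nothing was lost.
      const Eigen::MatrixXd B = drake::math::DiscardZeroGradient(A, 1e-12);
    }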

Note: When, for example, n = 100, m = 80, and the entries of A, B, Q_half, and R_half are sampled from standard normal distributions (where Q = Q_half'*Q_half and similarly for R), the absolute error of the solution is 10^{-6}, while the absolute error of the solution computed by Matlab is 10^{-8}.

TODO(weiqiao.han): I may overwrite the RealQZ function to improve the accuracy, together with more thorough tests.

\[ A'XA - X - A'XB(B'XB+R)^{-1}B'XA + Q = 0 \]

Exceptions
    std::runtime_error    if Q is not positive semi-definite.
    std::runtime_error    if R is not positive definite.

Based on the Schur Vector approach outlined in this paper: "On the Numerical Solution of the Discrete-Time Algebraic Riccati Equation" by Thrasyvoulos Pappas, Alan J. Laub, and Nils R. Sandell.
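A hedged usage sketch, assuming these are the docs for Drake's drake::math::DiscreteAlgebraicRiccatiEquation (name and header path assumed):

    #include <Eigen/Dense>
    #include "drake/math/discrete_algebraic_riccati_equation.h"  // assumed

    int main() {
      // Discrete-time double integrator: x[k+1] = A x[k] + B u[k].
      Eigen::MatrixXd A(2, 2), B(2, 1), Q(2, 2), R(1, 1);
      A << 1, 1,
           0, 1;
      B << 0,
           1;
      Q = Eigen::MatrixXd::Identity(2, 2);  // must be positive semi-definite
      R << 1;                               // must be positive definite
      // X solves A'XA - X - A'XB(B'XB + R)^{-1}B'XA + Q = 0.
      const Eigen::MatrixXd X =
          drake::math::DiscreteAlgebraicRiccatiEquation(A, B, Q, R);
      return 0;
    }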

Computes the gradient of the function that converts a rotation matrix to a quaternion.

Parameters
    R     A 3 x 3 rotation matrix.
    dR    A 9 x N matrix; dR(i,j) is the gradient of R(i) w.r.t. x_var(j).

Returns
    The gradient G, a 4 x N matrix:
        G(0,j) is the gradient of w w.r.t. x_var(j)
        G(1,j) is the gradient of x w.r.t. x_var(j)
        G(2,j) is the gradient of y w.r.t. x_var(j)
        G(3,j) is the gradient of z w.r.t. x_var(j)
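The exact name of this overload isn't shown above, so rather than guess the API, this self-contained sketch illustrates numerically what G contains when x_var is taken to be the 9 entries of R itself (column-major), using central differences; it is not the analytic implementation being documented:

    #include <Eigen/Dense>

    // G(i, j) ≈ ∂[w, x, y, z](i) / ∂R(j), with R(j) the j-th entry of R in
    // column-major order. Finite differences stand in for the documented
    // analytic gradient.
    Eigen::Matrix<double, 4, 9> NumericalQuatGradient(
        const Eigen::Matrix3d& R) {
      auto quat = [](const Eigen::Matrix3d& M) {
        const Eigen::Quaterniond q(M);
        return Eigen::Vector4d(q.w(), q.x(), q.y(), q.z());
      };
      Eigen::Matrix<double, 4, 9> G;
      const double h = 1e-7;
      for (int j = 0; j < 9; ++j) {
        Eigen::Matrix3d Rp = R, Rm = R;
        Rp(j % 3, j / 3) += h;  // perturb column-major entry j
        Rm(j % 3, j / 3) -= h;
        G.col(j) = (quat(Rp) - quat(Rm)) / (2 * h);
      }
      return G;
    }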

Initialize a single autodiff matrix given the corresponding value matrix.

Set the values of auto_diff_matrix to be equal to val, and for each element i of auto_diff_matrix, resize the derivatives vector to num_derivatives, and set derivative number deriv_num_start + i to one (all other elements of the derivative vector set to zero).

Initialize a single autodiff matrix given the corresponding value matrix.

Create an autodiff matrix that matches mat in size, with derivatives of compile-time size Nq and runtime size num_derivatives. Set its values equal to val, and for each element i of auto_diff_matrix, set derivative number deriv_num_start + i to one (all other derivatives set to zero).
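A hedged usage sketch, assuming the documented function is drake::math::InitializeAutoDiff taking (value, num_derivatives, deriv_num_start) arguments (header path assumed):

    #include <Eigen/Dense>
    #include "drake/math/autodiff.h"  // assumed: declares InitializeAutoDiff

    int main() {
      const Eigen::Vector2d val(1.0, 2.0);
      // Seed a 2-element value into a 5-long gradient starting at index 3:
      //   x(0).derivatives() == [0, 0, 0, 1, 0]
      //   x(1).derivatives() == [0, 0, 0, 0, 1]
      const auto x = drake::math::InitializeAutoDiff(val, 5, 3);
      return 0;
    }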

Given a series of Eigen matrices, create a tuple of corresponding AutoDiff matrices with values equal to the input matrices and properly initialized derivative vectors.

The size of the derivative vector of each element of the matrices in the output tuple will be the same, and will equal the sum of the number of elements of the matrices in args. If all of the matrices in args have fixed size, then the derivative vectors will also have fixed size (being the sum of the sizes at compile time of all of the input arguments), otherwise the derivative vectors will have dynamic size. The 0th element of the derivative vectors will correspond to the derivative with respect to the 0th element of the first argument. Subsequent derivative vector elements correspond first to subsequent elements of the first input argument (traversed first by row, then by column), and so on for subsequent arguments.
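A hedged usage sketch, assuming the documented function is drake::math::InitializeAutoDiffTuple (header path assumed):

    #include <Eigen/Dense>
    #include "drake/math/autodiff.h"  // assumed: InitializeAutoDiffTuple

    int main() {
      Eigen::Matrix2d A;
      A << 1, 2,
           3, 4;
      const Eigen::Vector2d b(5, 6);
      // Every element of both outputs gets a 6-long derivative vector
      // (4 entries of A + 2 entries of b).
      auto [A_ad, b_ad] = drake::math::InitializeAutoDiffTuple(A, b);
      // Derivative index 0 corresponds to A(0,0), index 1 to A(1,0)
      // (row index varies first), and index 4 to b(0).
      return 0;
    }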

Computes a matrix of AutoDiffScalars from which both the value and the Jacobian of a function

\[ f:\mathbb{R}^{n\times m}\rightarrow\mathbb{R}^{p\times q} \]

can be extracted.

The derivative vector for each AutoDiffScalar in the output contains the derivatives with respect to all components of the argument \( x \).

The return type of this function is a matrix with the "best" possible AutoDiffScalar scalar type, in the following sense:

- If the number of derivatives can be determined at compile time, the AutoDiffScalar derivative vector will have that fixed size.
- If the maximum number of derivatives can be determined at compile time, the AutoDiffScalar derivative vector will have that maximum fixed size.
- If neither the number nor the maximum number of derivatives can be determined at compile time, the output AutoDiffScalar derivative vector will be dynamically sized.

f should have a templated call operator that maps an Eigen matrix argument to another Eigen matrix. The scalar type of the output of \( f \) need not match the scalar type of the input (useful in recursive calls to the function to determine higher-order derivatives). The easiest way to create such an f is with a C++14 generic lambda (a sketch follows the parameter list below).

The algorithm computes the Jacobian in chunks of up to MaxChunkSize derivatives at a time. This has three purposes:

1. It allows derivative vectors to be allocated on the stack, eliminating dynamic allocations and improving performance when the maximum number of derivatives cannot be determined at compile time.

2. It gives control over, and limits, the number of required instantiations of the call operator of f and of all the functions it calls.

3. Excessively large derivative vectors can cause cache capacity misses; even if the number of derivatives is fixed at compile time, it may be better to break the computation into chunks if doing so prevents those misses.

Parameters
    f    function
    x    function argument value at which Jacobian will be evaluated

Returns
    AutoDiffScalar matrix corresponding to the Jacobian of f evaluated at x.
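A sketch of the generic-lambda pattern described above, assuming the documented function is drake::math::jacobian (header path assumed):

    #include <type_traits>
    #include <Eigen/Dense>
    #include "drake/math/jacobian.h"  // assumed: drake::math::jacobian

    int main() {
      // f(x) = [x0*x1, x0^2], written as a generic lambda so it can be
      // instantiated with both double and AutoDiffScalar arguments.
      auto f = [](const auto& x) {
        using S = typename std::decay_t<decltype(x)>::Scalar;
        Eigen::Matrix<S, 2, 1> y;
        y << x(0) * x(1),
             x(0) * x(0);
        return y;
      };
      const Eigen::Vector2d x(3.0, 4.0);
      const auto y_ad = drake::math::jacobian(f, x);
      // Value of element i:    y_ad(i).value().
      // Row i of the Jacobian: y_ad(i).derivatives().transpose().
      return 0;
    }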

Returns the canonical form of quat: either the original quat or a quaternion that represents the same orientation with [w, x, y, z] negated, so that the returned quaternion has a positive w.
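A hedged usage sketch, assuming this documents drake::math::QuaternionToCanonicalForm (header path assumed):

    #include <Eigen/Dense>
    #include "drake/math/quaternion.h"  // assumed path

    int main() {
      // w is negative, so the canonical form negates all four components.
      const Eigen::Quaterniond q(-0.5, 0.5, 0.5, 0.5);  // (w, x, y, z)
      const Eigen::Quaterniond q_canonical =
          drake::math::QuaternionToCanonicalForm(q);
      // q_canonical is (0.5, -0.5, -0.5, -0.5): same orientation, positive w.
      return 0;
    }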

Resize the derivatives vector of each element of a matrix to match the size of the derivatives vector of a given scalar.

If the mat and scalar inputs are AutoDiffScalars, resize the derivatives vector of each element of the matrix mat to match the number of derivatives of the scalar. This is useful in functions that return matrices that do not depend on an AutoDiffScalar argument (e.g. a function with a constant output), while it is desired that information about the number of derivatives is preserved.

Parameters
    mat    matrix for which the derivative vectors of the elements will be resized

For values that are meant to be periodic (e.g., an angle defined over a 2π interval), wraps value into the interval [low, high). Precisely, wrap_to returns value + k*(high-low) for the unique integer k that lands the output in the desired interval. low and high must be finite, and low < high.
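A hedged usage sketch, assuming this documents drake::math::wrap_to (header path assumed):

    #include <cmath>
    #include "drake/math/wrap_to.h"  // assumed path

    int main() {
      // Wrap 7.0 rad into [-pi, pi): returns 7.0 - 2*pi ≈ 0.717.
      const double wrapped = drake::math::wrap_to(7.0, -M_PI, M_PI);
      (void)wrapped;
      return 0;
    }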