edrumwri edited this page Oct 4, 2015 · 28 revisions

Some of the functions below are static; some require a LinAlg object because temporary storage is necessary. For real-time applications, call the relevant linear algebra functions (for example, solve_LS_fast(.)) on your LinAlg object with dummy inputs allocated to the largest anticipated size before real-time execution begins; this pre-allocates the temporary storage.

On the remainder of this page, MatrixNx refers to either MatrixNd or MatrixNf; VectorNx refers to either VectorNd or VectorNf; LinAlgx refers to either LinAlgd or LinAlgf.

Include files

#include <Ravelin/LinAlgd.h>
#include <Ravelin/LinAlgf.h>

General matrices

Inverts a square matrix (it is generally a bad idea to invert matrices, with respect to both computational efficiency and numerical robustness; see most references on numerical linear algebra):

template <typename X> X& LinAlgx::invert(X& A);

Inversion example:

MatrixNd A(5,5);
...
LinAlgd _LA;
_LA.invert(A);  // inverse of A will be returned in A

Solves a system of linear equations AX=B for X (X can be a vector or a matrix):

template <typename X, typename Y> Y& LinAlgx::solve_fast(X& A, Y& XB);

Solution example:

MatrixNd A(5,5);
MatrixNd B(5,3);
...
LinAlgd _LA;
_LA.solve_fast(A, B);  // solution will be in B

LU factorization, inversion, and solving

LU factorization requires an integer vector as workspace.

template <typename X> static bool LinAlgx::factor_LU(X& A, std::vector<int>& pivwork);
template <typename X> static X& LinAlgx::inverse_LU(X& M, const std::vector<int>& pivwork);
template <typename X, typename Y> static X& LinAlgx::solve_LU_fast(const Y& M, bool transpose, const std::vector<int>& pivwork, X& XB);

Again, it is recommended that you use the solve_LU_fast(.) function, if possible, instead of inverse_LU(.): the former is faster and more stable numerically.

Solution example:

std::vector<int> pivwork;
MatrixNd A(5,5);
MatrixNd XB(5,3);
...
LinAlgd _LA;
_LA.factor_LU(A, pivwork);
_LA.solve_LU_fast(A, true, pivwork, XB);  // solve transpose(A)*X=B
...
_LA.inverse_LU(A, pivwork);  // inverse of A will be returned in A
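The idea behind factor_LU(.) can be sketched in plain C++. The following is an illustrative implementation of LU factorization with partial pivoting, not Ravelin's actual code; the function and variable names are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative LU factorization with partial pivoting of a dense n x n
// matrix stored row-major in A; piv records the row swap chosen at each
// step (analogous to the pivwork vector above). Returns false if the
// matrix is numerically singular.
bool lu_factor(std::vector<double>& A, int n, std::vector<int>& piv)
{
  piv.resize(n);
  for (int k = 0; k < n; k++)
  {
    // find the pivot row: largest magnitude entry in column k at or below k
    int p = k;
    for (int i = k+1; i < n; i++)
      if (std::fabs(A[i*n+k]) > std::fabs(A[p*n+k])) p = i;
    if (A[p*n+k] == 0.0) return false;  // singular
    piv[k] = p;
    if (p != k)
      for (int j = 0; j < n; j++) std::swap(A[k*n+j], A[p*n+j]);
    // eliminate entries below the pivot, storing the multipliers in L
    for (int i = k+1; i < n; i++)
    {
      A[i*n+k] /= A[k*n+k];
      for (int j = k+1; j < n; j++)
        A[i*n+j] -= A[i*n+k]*A[k*n+j];
    }
  }
  return true;
}
```

Once the factorization exists, each solve is a cheap pair of triangular substitutions, which is why factoring once and calling solve_LU_fast(.) repeatedly beats repeated inversion.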

Tridiagonal solving

Solves a tridiagonal system of linear equations:

// d is the diagonal of the matrix, dl is the sub-diagonal, and du is the super diagonal
// X can be a vector or matrix
template <typename X> static X& LinAlgx::solve_tridiagonal_fast(VectorNx& dl, VectorNx& d, VectorNx& du, X& XB);
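The specialized tridiagonal solver runs in O(n) rather than O(n^3). The underlying recurrence (the Thomas algorithm) can be sketched as follows; this is an illustrative implementation, not Ravelin's code, and the names are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Illustrative Thomas algorithm: solves a tridiagonal system in O(n).
// dl = sub-diagonal (n-1 entries), d = diagonal (n entries),
// du = super-diagonal (n-1 entries); b holds the right-hand side on
// entry and the solution on exit, mirroring solve_tridiagonal_fast(.).
// No pivoting is performed, so the matrix is assumed to be well
// conditioned (e.g., diagonally dominant).
void solve_tridiagonal(std::vector<double> dl, std::vector<double> d,
                       std::vector<double> du, std::vector<double>& b)
{
  const int n = (int) d.size();
  // forward elimination: zero the sub-diagonal
  for (int i = 1; i < n; i++)
  {
    double w = dl[i-1] / d[i-1];
    d[i] -= w * du[i-1];
    b[i] -= w * b[i-1];
  }
  // back substitution
  b[n-1] /= d[n-1];
  for (int i = n-2; i >= 0; i--)
    b[i] = (b[i] - du[i] * b[i+1]) / d[i];
}
```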

Triangular matrix solving

Systems of linear equations with either upper or lower triangular matrices are solved with the following method:

// utri indicates whether the matrix is upper triangular or lower triangular
// if transpose_A is set to 'true', then the system A'X = B is solved 
template <typename X, typename Y> static X& LinAlgx::solve_tri_fast(const Y& A, bool utri, bool transpose_A, X& XB);

Note that the matrix storage is not compact (approximately half of the space is wasted).

Example:

MatrixNd U(5,5);
VectorNd xb(5);
...
LinAlgd::solve_tri_fast(U, true, false, xb); // solves Ux = b with an upper triangular matrix 
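Triangular solves cost only O(n^2) because each unknown follows directly from the ones already computed. Back substitution for the upper-triangular case can be sketched as follows (illustrative only; not Ravelin's implementation):

```cpp
#include <cassert>
#include <vector>

// Illustrative back substitution: solves U x = b for an n x n upper
// triangular matrix U stored row-major; b holds the solution on exit,
// as in solve_tri_fast(.). Only entries on or above the diagonal are
// read, which is why compact triangular storage is possible in principle.
void back_substitute(const std::vector<double>& U, int n,
                     std::vector<double>& b)
{
  for (int i = n-1; i >= 0; i--)
  {
    // subtract the contribution of the already-solved unknowns
    for (int j = i+1; j < n; j++)
      b[i] -= U[i*n+j] * b[j];
    b[i] /= U[i*n+i];
  }
}
```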

Symmetric positive-definite factorization and systems

Cholesky factorization can be applied to symmetric, positive-definite matrices; the factor_chol(.) function is also an effective O(n^3) method for determining whether a matrix is positive definite, since the factorization fails (and the function returns false) exactly when it is not.

template <typename X> static bool LinAlgx::factor_chol(X& A)
template <typename X, typename Y> static X& LinAlgx::solve_chol_fast(const Y& M, X& XB)
template <typename X> static X& LinAlgx::inverse_chol(X& A)
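Cholesky factorization computes a lower-triangular L with A = L*L^T; a non-positive value appears under the square root exactly when A is not positive definite, which is why the factorization doubles as a PD test. A sketch (illustrative, not Ravelin's code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative in-place Cholesky factorization of a symmetric n x n
// matrix stored row-major; on success the lower triangle holds L with
// A = L*L^T. Returns false if A is not positive definite, mirroring
// the boolean result of factor_chol(.).
bool cholesky(std::vector<double>& A, int n)
{
  for (int j = 0; j < n; j++)
  {
    // compute the diagonal entry; non-positive => not positive definite
    for (int k = 0; k < j; k++)
      A[j*n+j] -= A[j*n+k] * A[j*n+k];
    if (A[j*n+j] <= 0.0) return false;
    A[j*n+j] = std::sqrt(A[j*n+j]);
    // compute the rest of column j of L
    for (int i = j+1; i < n; i++)
    {
      for (int k = 0; k < j; k++)
        A[i*n+j] -= A[i*n+k] * A[j*n+k];
      A[i*n+j] /= A[j*n+j];
    }
  }
  return true;
}
```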

The following functions check whether a matrix is symmetric, positive-semi-definite (PSD) or symmetric, positive-definite (PD). If the tolerance is negative, the tolerance will be set automatically.

template <typename X> bool LinAlgx::is_SPSD(X& m, REAL tol = -1.0)
template <typename X> bool LinAlgx::is_SPD(X& m, REAL tol = -1.0)

The following functions combine the Cholesky factorization and inverse/solve into a single operation:

template <typename X, typename Y> static X& LinAlgx::solve_SPD_fast(Y& A, X& XB)
template <typename X> static X& LinAlgx::inverse_SPD(X& A)

LDL^T factorization, inversion, and solving

LDL^T factorization can be applied to general symmetric matrices.

static void LinAlgx::factor_LDL(MatrixNx& M, std::vector<int>& pivwork);
template <typename X> static X& LinAlgx::solve_LDL_fast(const MatrixNx& M, const std::vector<int>& pivwork, X& XB)

The following functions combine the LDL^T factorization and inverse/solve into a single operation:

template <typename X> X& LinAlgx::solve_symmetric_fast(MatrixNx& A, X& XB)
template <typename X> X& LinAlgx::inverse_symmetric(X& A)

Eigenvalue and eigenvector computation

These functions compute an eigendecomposition of a symmetric matrix. eig_symm(.) computes only the eigenvalues; eig_symm_plus(.) computes the eigenvectors as well (returned in A_evecs, overwriting the input matrix).

template <typename X, typename Y> void LinAlgx::eig_symm(X& A, Y& evals)
template <typename X, typename Y> void LinAlgx::eig_symm_plus(X& A_evecs, Y& evals)
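For intuition: in the 2x2 symmetric case the eigenvalues have a closed form in terms of the trace and the off-diagonal entry, while larger matrices require the iterative methods that eig_symm(.) wraps. An illustrative sketch (not part of Ravelin's API):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Illustrative closed-form eigenvalues of the 2x2 symmetric matrix
// [[a, b], [b, c]], returned in ascending order. Both eigenvalues are
// always real, a property that eig_symm(.) relies on for general
// symmetric matrices.
std::pair<double,double> eig_symm_2x2(double a, double b, double c)
{
  double mean = 0.5 * (a + c);                    // half the trace
  double r = std::sqrt(0.25*(a-c)*(a-c) + b*b);   // spread about the mean
  return std::make_pair(mean - r, mean + r);
}
```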

QR factorization and solving

void LinAlgx::factor_QR(MatrixNx& AR, MatrixNx& Q);
void LinAlgx::factor_QR(MatrixNx& AR, MatrixNx& Q, std::vector<int>& PI);
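QR factorization writes A = Q*R with Q orthonormal and R upper triangular. The idea can be sketched with classical Gram-Schmidt (illustrative only; production implementations, presumably including factor_QR(.), use Householder reflections for better numerical stability):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative Gram-Schmidt QR of a matrix given as n columns of length
// m: A = Q*R, where Q's columns are orthonormal and R (n x n) is upper
// triangular. Assumes the columns of A are linearly independent.
void qr_gram_schmidt(const std::vector<std::vector<double> >& A,
                     std::vector<std::vector<double> >& Q,
                     std::vector<std::vector<double> >& R)
{
  const int n = (int) A.size();      // number of columns
  const int m = (int) A[0].size();   // number of rows
  Q.assign(n, std::vector<double>(m, 0.0));
  R.assign(n, std::vector<double>(n, 0.0));
  for (int j = 0; j < n; j++)
  {
    std::vector<double> v = A[j];
    // subtract projections onto the previously computed Q columns
    for (int i = 0; i < j; i++)
    {
      for (int k = 0; k < m; k++) R[i][j] += Q[i][k] * A[j][k];
      for (int k = 0; k < m; k++) v[k] -= R[i][j] * Q[i][k];
    }
    // normalize the remainder to get the next Q column
    double nrm = 0.0;
    for (int k = 0; k < m; k++) nrm += v[k]*v[k];
    R[j][j] = std::sqrt(nrm);
    for (int k = 0; k < m; k++) Q[j][k] = v[k] / R[j][j];
  }
}
```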

SVD, least squares, and pseudo-inversion

Two algorithms provide robust computation of the singular value decomposition (SVD) and robust solution of least-squares problems via the SVD. If a tolerance is not provided to the least-squares algorithms, they select one automatically.

template <typename X> void LinAlgx::svd1(X& A, MatrixNx& U, VectorNx& S, MatrixNx& V)
template <typename X> void LinAlgx::svd2(X& A, MatrixNx& U, VectorNx& S, MatrixNx& V)
template <typename X, typename Y> X& LinAlgx::solve_LS_fast1(Y& A, X& XB, REAL tol = (REAL) -1.0)
template <typename X, typename Y> X& LinAlgx::solve_LS_fast2(Y& A, X& XB, REAL tol = (REAL) -1.0)
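A least-squares solver finds the x minimizing ||Ax - b|| even when A is non-square. For a tiny, well-conditioned two-column problem (e.g., fitting a line) the same minimizer comes from the normal equations A^T A x = A^T b; this sketch is illustrative only, since the SVD route the library uses also handles rank-deficient A, which the normal equations do not:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative least squares for an m x 2 problem: forms the 2x2 normal
// equations A^T A x = A^T b and solves them by Cramer's rule. a0 and a1
// are the two columns of A; assumes they are linearly independent.
std::vector<double> least_squares_2(const std::vector<double>& a0,
                                    const std::vector<double>& a1,
                                    const std::vector<double>& b)
{
  double s00 = 0, s01 = 0, s11 = 0, t0 = 0, t1 = 0;
  for (size_t i = 0; i < b.size(); i++)
  {
    s00 += a0[i]*a0[i];  s01 += a0[i]*a1[i];  s11 += a1[i]*a1[i];
    t0  += a0[i]*b[i];   t1  += a1[i]*b[i];
  }
  double det = s00*s11 - s01*s01;   // nonzero when A has full column rank
  std::vector<double> x(2);
  x[0] = (t0*s11 - t1*s01) / det;
  x[1] = (s00*t1 - s01*t0) / det;
  return x;
}
```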

The desired SVD algorithm can be passed to the pseudo_invert(.) function, which computes an SVD-based pseudo-inverse, and to the least-squares solver.

MatrixNx& LinAlgx::pseudo_invert(MatrixNx& A, void (*svd)(MatrixNx&, MatrixNx&, VectorNx&, MatrixNx&), REAL tol=(REAL) -1.0);
template <typename X, Y> X& LinAlgx::solve_LS_fast(Y& A, X& XB, SVD svd_algo, REAL tol)

A generic SVD and least-squares function is available, which attempts to use the first SVD algorithm and falls back to the second SVD algorithm in case the first fails.

template <typename X> void LinAlgx::svd(X& A, MatrixNx& U, VectorNx& S, MatrixNx& V)
template <typename X, typename Y, typename Z, typename Vec> X& LinAlgx::solve_LS_fast(const Y& U, const Vec& S, const Z& V, X& XB, REAL tol = (REAL) -1.0)

Computing matrix rank and condition number

Computing the rank or condition number of a matrix requires a singular value decomposition, which is an O(n^3) operation. The tolerance is computed automatically by the algorithm if the tolerance is negative.

template <typename X> unsigned LinAlgx::calc_rank(X& A, REAL tol = (REAL) -1.0);
template <typename X> REAL LinAlgx::cond(X& A);

Nullspace computation

Nullspace computation is performed using the singular value decomposition (an O(n^3) operation). A tolerance can be used to determine whether a singular value is effectively zero; if the tolerance is negative, a tolerance will be computed.

template <typename Y> MatrixNx& LinAlgx::nullspace(Y& A, MatrixNx& nullspace, REAL tol = -1.0)
