namespace Eigen {

/** \eigenManualPage TopicLinearAlgebraDecompositions Catalogue of dense decompositions

This page presents a catalogue of the dense matrix decompositions offered by Eigen.
For an introduction to linear solvers and decompositions, check this \link TutorialLinearAlgebra page \endlink.
To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink.

\section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen
<table class="manual">
  <tr>
    <th></th>
    <th colspan="5">Generic information, not Eigen-specific</th>
    <th colspan="3">Eigen-specific</th>
  </tr>
  <tr>
    <th>Decomposition</th>
    <th>Requirements on the matrix</th>
    <th>Speed</th>
    <th>Algorithm reliability and accuracy</th>
    <th>Rank-revealing</th>
    <th>Allows to compute (besides linear solving)</th>
    <th>Linear solver provided by Eigen</th>
    <th>Maturity of Eigen's implementation</th>
    <th>Optimizations</th>
  </tr>
  <tr>
    <td>PartialPivLU</td><td>Invertible</td><td>Fast</td><td>Depends on condition number</td><td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>Blocking, Implicit MT</td>
  </tr>
  <tr>
    <td>FullPivLU</td><td>-</td><td>Slow (no blocking)</td><td>Proven</td><td>Yes</td><td>Rank, kernel, image</td><td>Yes</td><td>Excellent</td><td>-</td>
  </tr>
  <tr>
    <td>HouseholderQR</td><td>-</td><td>Fast</td><td>Depends on condition number</td><td>-</td><td>Orthogonalization</td><td>Yes</td><td>Excellent</td><td>Blocking</td>
  </tr>
  <tr>
    <td>ColPivHouseholderQR</td><td>-</td><td>Fast</td><td>Good</td><td>Yes</td><td>Orthogonalization</td><td>Yes</td><td>Excellent</td><td>-</td>
  </tr>
  <tr>
    <td>FullPivHouseholderQR</td><td>-</td><td>Slow (no blocking)</td><td>Proven</td><td>Yes</td><td>Orthogonalization</td><td>Yes</td><td>Average</td><td>-</td>
  </tr>
  <tr>
    <td>CompleteOrthogonalDecomposition</td><td>-</td><td>Fast</td><td>Good</td><td>Yes</td><td>Orthogonalization</td><td>Yes</td><td>Excellent</td><td>-</td>
  </tr>
  <tr>
    <td>LLT</td><td>Positive definite</td><td>Very fast</td><td>Depends on condition number</td><td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>Blocking</td>
  </tr>
  <tr>
    <td>LDLT</td><td>Positive or negative semidefinite<sup>1</sup></td><td>Very fast</td><td>Good</td><td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>-</td>
  </tr>
  <tr>
    <th colspan="9">Singular values and eigenvalues decompositions</th>
  </tr>
  <tr>
    <td>BDCSVD (divide &amp; conquer)</td><td>-</td><td>One of the fastest SVD algorithms</td><td>Excellent</td><td>Yes</td><td>Singular values/vectors, least squares</td><td>Yes (and does least squares)</td><td>Excellent</td><td>Blocked bidiagonalization</td>
  </tr>
  <tr>
    <td>JacobiSVD (two-sided)</td><td>-</td><td>Slow (but fast for small matrices)</td><td>Proven<sup>3</sup></td><td>Yes</td><td>Singular values/vectors, least squares</td><td>Yes (and does least squares)</td><td>Excellent</td><td>R-SVD</td>
  </tr>
  <tr>
    <td>SelfAdjointEigenSolver</td><td>Self-adjoint</td><td>Fast-average<sup>2</sup></td><td>Good</td><td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Excellent</td><td>Closed forms for 2x2 and 3x3</td>
  </tr>
  <tr>
    <td>ComplexEigenSolver</td><td>Square</td><td>Slow-very slow<sup>2</sup></td><td>Depends on condition number</td><td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Average</td><td>-</td>
  </tr>
  <tr>
    <td>EigenSolver</td><td>Square and real</td><td>Average-slow<sup>2</sup></td><td>Depends on condition number</td><td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Average</td><td>-</td>
  </tr>
  <tr>
    <td>GeneralizedSelfAdjointEigenSolver</td><td>Square</td><td>Fast-average<sup>2</sup></td><td>Depends on condition number</td><td>-</td><td>Generalized eigenvalues/vectors</td><td>-</td><td>Good</td><td>-</td>
  </tr>
  <tr>
    <th colspan="9">Helper decompositions</th>
  </tr>
  <tr>
    <td>RealSchur</td><td>Square and real</td><td>Average-slow<sup>2</sup></td><td>Depends on condition number</td><td>Yes</td><td>-</td><td>-</td><td>Average</td><td>-</td>
  </tr>
  <tr>
    <td>ComplexSchur</td><td>Square</td><td>Slow-very slow<sup>2</sup></td><td>Depends on condition number</td><td>Yes</td><td>-</td><td>-</td><td>Average</td><td>-</td>
  </tr>
  <tr>
    <td>Tridiagonalization</td><td>Self-adjoint</td><td>Fast</td><td>Good</td><td>-</td><td>-</td><td>-</td><td>Good</td><td>-</td>
  </tr>
  <tr>
    <td>HessenbergDecomposition</td><td>Square</td><td>Average</td><td>Good</td><td>-</td><td>-</td><td>-</td><td>Good</td><td>-</td>
  </tr>
</table>
\b Notes:
<ul>
  <li>\b 1: The pivoting LDLT decomposition can also handle some indefinite matrices, but its stability is only guaranteed for positive or negative semidefinite ones.</li>
  <li>\b 2: Eigenvalue, SVD, and Schur decompositions rely on iterative algorithms, so their actual speed depends on how quickly the iterations converge for the matrix at hand.</li>
  <li>\b 3: The two-sided Jacobi SVD has proven accuracy for square matrices; non-square matrices are first reduced by a QR preconditioner, whose reliability depends on the QR decomposition chosen.</li>
</ul>

\section TopicLinAlgPracticalGuidance Practical guidance

The following recommendations apply to the most common use cases:

\li Symmetric positive definite systems: Use \b LLT. It is the fastest solver and has excellent numerical properties for this class of problems. For semidefinite or nearly singular symmetric systems, use \b LDLT.
\li General invertible systems: Use \b PartialPivLU. It uses cache-friendly blocking and implicit multi-threading, making it the fastest general-purpose solver. Partial pivoting is sufficient for virtually all practical problems.
\li Least squares (over- or under-determined systems): Use \b CompleteOrthogonalDecomposition as the default. Like the SVD, it robustly computes the minimum-norm solution for rank-deficient and under-determined problems, but at QR-like speed. Use \b BDCSVD when you also need singular values or vectors, not just the least-squares solution.
\li Full-rank least squares (overdetermined systems): When the matrix is known to be full rank, \b HouseholderQR is the fastest option. For very tall and skinny well-conditioned matrices, solving via the normal equations with \b LLT can be faster still.
\li \b FullPivLU and \b FullPivHouseholderQR use complete pivoting, which prevents the use of cache-friendly blocking algorithms and makes them significantly slower than their partial/column-pivoting counterparts. In practice, complete pivoting rarely provides meaningful accuracy benefits. These decompositions are primarily useful for debugging, pedagogy, or the very rare case where column pivoting is insufficient.

\section TopicLinAlgTerminology Terminology
<b>Selfadjoint</b>

For a real matrix, selfadjoint is a synonym for \em symmetric. For a complex matrix, selfadjoint is a synonym for \em hermitian. More generally, a matrix \f$ A \f$ is selfadjoint if and only if it is equal to its adjoint \f$ A^* \f$. The adjoint is also called the \em conjugate \em transpose.
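For instance (a made-up 2x2 example), the complex matrix

```latex
A = \begin{pmatrix} 2 & 1 - i \\ 1 + i & 5 \end{pmatrix},
\qquad
A^* = \overline{A}^{\mathsf{T}}
    = \begin{pmatrix} 2 & 1 - i \\ 1 + i & 5 \end{pmatrix} = A
```

equals its own conjugate transpose, so it is selfadjoint (hermitian); note that the diagonal entries of a selfadjoint matrix are necessarily real.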
<b>Positive/negative definite</b>

A selfadjoint matrix \f$ A \f$ is positive definite if \f$ v^* A v > 0 \f$ for any nonzero vector \f$ v \f$. In the same vein, it is negative definite if \f$ v^* A v < 0 \f$ for any nonzero vector \f$ v \f$.
<b>Positive/negative semidefinite</b>

A selfadjoint matrix \f$ A \f$ is positive semidefinite if \f$ v^* A v \ge 0 \f$ for any nonzero vector \f$ v \f$. In the same vein, it is negative semidefinite if \f$ v^* A v \le 0 \f$ for any nonzero vector \f$ v \f$.
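To make the distinction concrete, consider two toy 2x2 real symmetric matrices:

```latex
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}:\quad
v^{\mathsf{T}} A v = 2v_1^2 + 2v_1 v_2 + 2v_2^2
                   = v_1^2 + (v_1 + v_2)^2 + v_2^2 > 0
\quad (v \neq 0),

B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}:\quad
v^{\mathsf{T}} B v = (v_1 + v_2)^2 \ge 0,
\text{ with equality at } v = (1, -1)^{\mathsf{T}}.
```

So \f$ A \f$ is positive definite (suitable for LLT), while \f$ B \f$ is only positive semidefinite (LDLT territory).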
<b>Blocking</b>

Means the algorithm can work per block, which guarantees good performance scaling for large matrices.
<b>Implicit Multi Threading (MT)</b>

Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but relies on parallelized matrix-matrix product routines.
<b>Explicit Multi Threading (MT)</b>

Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.
<b>Meta-unroller</b>

Means the algorithm is automatically and explicitly unrolled for very small fixed-size matrices.
*/ }