Singular Value Decomposition (SVD)

Formula

\[ A = U\,\Sigma\,V^T \]

Parameters

  • \(A\in\mathbb{R}^{m\times n}\): the matrix being decomposed
  • \(U\in\mathbb{R}^{m\times m}\): orthogonal; its columns are the left singular vectors
  • \(\Sigma\in\mathbb{R}^{m\times n}\): rectangular diagonal, holding the nonnegative singular values \(\sigma_1\ge\sigma_2\ge\dots\ge 0\)
  • \(V\in\mathbb{R}^{n\times n}\): orthogonal; its columns are the right singular vectors
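The shapes above can be checked directly; a minimal sketch, assuming NumPy (`np.linalg.svd` returns the diagonal of \(\Sigma\) as a 1-D array, so it must be embedded back into an \(m\times n\) matrix):

```python
import numpy as np

# Illustrative check of the parameter shapes for the full SVD.
m, n = 4, 3
A = np.arange(m * n, dtype=float).reshape(m, n)

U, s, Vt = np.linalg.svd(A, full_matrices=True)  # s holds the diagonal of Sigma

assert U.shape == (m, m) and Vt.shape == (n, n)
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)             # embed singular values in an m x n matrix
assert np.allclose(U @ Sigma @ Vt, A)  # A = U Sigma V^T
```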

What it means

Factors any real matrix into a rotation/reflection (\(V^T\)), an axis-aligned scaling (\(\Sigma\)), and another rotation/reflection (\(U\)).
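The rotation/scaling reading can be verified numerically: both orthogonal factors satisfy \(Q^TQ=I\) with \(|\det Q|=1\), and \(\Sigma\) carries all the stretching. A sketch with NumPy (the matrix is an arbitrary illustrative choice):

```python
import numpy as np

# U and V are orthogonal (rotations/reflections); Sigma scales along the axes.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)

assert np.allclose(U.T @ U, np.eye(2))           # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))         # V orthogonal
assert np.isclose(abs(np.linalg.det(U)), 1.0)    # rotation or reflection

# The unit circle maps to an ellipse whose semi-axes are the singular values.
assert np.allclose(s, [4.0, 2.0])
```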

What it's used for

  • Low-rank approximation and PCA.
  • Solving least squares with rank deficiency.
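For the rank-deficient least-squares use case, the SVD gives the minimum-norm solution by inverting only the significant singular values; this is essentially what `np.linalg.pinv` does. A sketch under an artificial rank-1 example:

```python
import numpy as np

# Minimum-norm least squares for a rank-deficient A via the SVD.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])   # rank 1: second column = 2 * first
b = np.array([1.0, 2.0, 3.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # invert only significant singular values
x = Vt.T @ (s_inv * (U.T @ b))            # minimum-norm solution

assert np.allclose(x, np.linalg.pinv(A) @ b)
```

Zeroing (rather than inverting) the tiny singular values is what keeps the solution stable when \(A\) is rank deficient.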

Key properties

  • Singular values are \(\sigma_i=\sqrt{\lambda_i(A^T A)}\), conventionally sorted in descending order
  • Truncating to the top \(k\) singular triplets gives the best rank-\(k\) approximation in both spectral and Frobenius norm (Eckart-Young)
  • Exists for any real matrix: non-square and non-symmetric included
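Both of the first two properties can be checked numerically. A sketch with NumPy on a random matrix: the singular values match \(\sqrt{\lambda_i(A^TA)}\), and the spectral-norm error of the rank-\(k\) truncation equals the first dropped singular value \(\sigma_{k+1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Property 1: singular values are the square roots of the eigenvalues of A^T A.
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # eigvalsh sorts ascending
assert np.allclose(s, np.sqrt(eig))

# Property 2 (Eckart-Young): the rank-k truncation's spectral-norm error
# equals the first singular value that was dropped.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```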

Common gotchas

  • SVD is not unique: each pair \((u_i, v_i)\) can flip sign together, and singular subspaces are determined only up to rotation when singular values repeat.
  • In the thin (economy) SVD, \(U\) and \(V\) have orthonormal columns but are no longer square.

Example

For \(A=\begin{bmatrix}1&0\\0&0\end{bmatrix}\), \(U=I,\;\Sigma=\operatorname{diag}(1,0),\;V=I\).
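The worked example can be confirmed numerically; since singular-vector signs are arbitrary, the factors are compared up to sign:

```python
import numpy as np

# Numerical check of the example: A = diag(1, 0).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A)

assert np.allclose(s, [1.0, 0.0])          # Sigma = diag(1, 0)
assert np.allclose(np.abs(U), np.eye(2))   # U = I up to sign choices
assert np.allclose(np.abs(Vt), np.eye(2))  # V = I up to sign choices
```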

How to Compute (Pseudocode)

Input: matrix A (m x n)
Output: singular vectors/values U, Sigma, V^T

# High-level dense workflow (Golub-Kahan), used by LAPACK and most libraries
reduce A to bidiagonal form B using orthogonal (Householder) transforms
compute singular values/vectors of the bidiagonal B with an iterative solver
  (QR iteration or divide-and-conquer)
accumulate the orthogonal transforms to recover U and V
return U, Sigma, V^T
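In practice this workflow is rarely implemented by hand; it is what LAPACK's dense SVD drivers do, and NumPy wraps them. A minimal sketch of calling the library routine (thin SVD) and verifying the factorization:

```python
import numpy as np

# The bidiagonalization + iteration workflow above, as exposed by NumPy/LAPACK.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

# Thin SVD: U is m x r and V^T is r x n with r = min(m, n).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert U.shape == (4, 3) and Vt.shape == (3, 3)
assert np.all(s[:-1] >= s[1:])              # returned in descending order
assert np.allclose(U @ np.diag(s) @ Vt, A)  # factorization reconstructs A
```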

Complexity

  • Time: Typically \(O(mn\min(m,n))\) for dense full SVD (often summarized as \(O(n^3)\) for square \(n \times n\) matrices)
  • Space: \(O(mn)\) for dense storage, plus workspaces/factors
  • Assumptions: Dense direct full SVD; truncated/iterative sparse SVD can be much cheaper and has different complexity
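To illustrate the cheaper truncated route, here is a hedged sketch of a randomized truncated SVD (in the style of Halko et al.): cost is roughly \(O(mnk)\) for target rank \(k \ll \min(m,n)\), versus \(O(mn\min(m,n))\) for the full dense SVD. The function name and parameters are illustrative, not a library API:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Truncated SVD via random range sampling (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix, then orthonormalize.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))
    # SVD of the small projected matrix, then lift back to the original space.
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k, :]

# Exact rank-3 matrix: the truncated SVD recovers it (up to roundoff).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = randomized_svd(A, k=3)
assert np.allclose(U @ np.diag(s) @ Vt, A)
```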

See also