Householder transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder.<ref>Template:Cite journal</ref>
Definition
Operator and transformation
The Householder operator<ref>Template:Harvnb</ref> may be defined over any finite-dimensional inner product space <math> V</math> with inner product <math> \langle \cdot, \cdot \rangle </math> and unit vector <math> u\in V</math> as
- <math> H_u(x) := x - 2\,\langle x,u \rangle\,u\,.</math><ref>Template:Cite book</ref>
It is also common to choose a non-unit vector <math>q \in V</math>, and normalize it directly in the Householder operator's expression:<ref>Template:Harvnb</ref>
- <math>H_q \left ( x \right ) = x - 2\, \frac{\langle x, q \rangle}{\langle q, q \rangle}\, q \,.</math>
Such an operator is linear and self-adjoint.
If <math>V=\mathbb{C}^n</math>, note that the reflection hyperplane can be defined by its normal vector, a unit vector <math display="inline">\vec v\in V</math> (a vector with length <math display="inline">1</math>) that is orthogonal to the hyperplane. The reflection of a point <math display="inline">x</math> about this hyperplane is the Householder transformation:
- <math>\vec x - 2\langle \vec x, \vec v\rangle \vec v = \vec x - 2\vec v\left(\vec v^* \vec x\right), </math>
where <math>\vec x</math> is the vector from the origin to the point <math>x</math>, and <math display="inline">\vec v^*</math> is the conjugate transpose of <math display="inline">\vec v</math>.
Householder matrix
The matrix constructed from this transformation can be expressed in terms of an outer product as
- <math>P = I - 2\vec v\vec v^*.</math>
This matrix is known as the Householder matrix, where <math display="inline">I</math> is the identity matrix.
Properties
The Householder matrix has the following properties:
- it is Hermitian: <math display="inline">P = P^*</math>,
- it is unitary: <math display="inline">P^{-1} = P^*</math> (via the Sherman-Morrison formula),
- hence it is involutory: <math display="inline">P = P^{-1}</math>.
- A Householder matrix has eigenvalues <math display="inline">\pm 1</math>. To see this, notice that if <math display="inline">\vec x</math> is orthogonal to the vector <math display="inline">\vec v</math> which was used to create the reflector, then <math display="inline">P_v\vec x = (I-2\vec v\vec v^*)\vec x = \vec x-2\langle\vec v,\vec x\rangle\vec v = \vec x</math>, i.e., <math display="inline">1</math> is an eigenvalue of multiplicity <math display="inline">n - 1</math>, since there are <math display="inline">n - 1</math> independent vectors orthogonal to <math display="inline">\vec v</math>. Also, notice <math display="inline">P_v\vec v = (I-2\vec v\vec v^*)\vec v = \vec v - 2\langle\vec v,\vec v\rangle\vec v = -\vec v</math> (since <math>\vec v</math> is by definition a unit vector), and so <math display="inline">-1</math> is an eigenvalue with multiplicity <math display="inline">1</math>.
- The determinant of a Householder reflector is <math display="inline">-1</math>, since the determinant of a matrix is the product of its eigenvalues, in this case one of which is <math display="inline">-1</math> with the remainder being <math display="inline">1</math> (as in the previous point), or via the Matrix determinant lemma.
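These properties can be checked numerically; the following is a minimal NumPy sketch using a random complex unit vector (variable names are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = (v / np.linalg.norm(v)).reshape(-1, 1)      # random complex unit column vector

P = np.eye(5) - 2 * v @ v.conj().T              # Householder matrix

print(np.allclose(P, P.conj().T))               # Hermitian:  True
print(np.allclose(P @ P.conj().T, np.eye(5)))   # unitary:    True
print(np.allclose(P @ P, np.eye(5)))            # involutory: True
print(np.round(np.linalg.det(P).real, 8))       # determinant: -1.0
```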
Example
Consider the unit vector <math>\vec v</math> obtained by normalizing the vector with <math>1</math> in each entry,
- <math>\vec v=\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\1 \end{bmatrix}.</math>
Then the Householder matrix corresponding to the vector <math>\vec v</math> is
- <math>P_v=\begin{bmatrix}1&0\\0&1\end{bmatrix}-2\left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\1 \end{bmatrix}\right)\left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1&1 \end{bmatrix}\right)</math>
- <math>\quad=\begin{bmatrix}1&0\\0&1\end{bmatrix}-\begin{bmatrix} 1\\1 \end{bmatrix}\begin{bmatrix} 1&1 \end{bmatrix}</math>
- <math>\quad=\begin{bmatrix}1&0\\0&1\end{bmatrix}-\begin{bmatrix}1&1\\1&1\end{bmatrix}</math>
- <math>\quad=\begin{bmatrix}0&-1\\-1&0\end{bmatrix}.</math>
Note that if we have another vector <math>\vec q</math> representing a coordinate in the 2D plane
- <math>\vec q = \begin{bmatrix}x\\y\end{bmatrix},</math>
then in this case <math>P_v</math> swaps and negates the <math>x</math> and <math>y</math> coordinates; in other words, we have
- <math>P_v\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}-y\\-x\end{bmatrix},</math>
which corresponds to reflecting the vector across the line <math>y=-x</math>, to which our original vector <math>\vec v</math> is normal.
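A quick numerical check of this example, as a NumPy sketch (variable names are illustrative):
```python
import numpy as np

v = np.array([[1.0], [1.0]]) / np.sqrt(2)   # the unit vector from the example
P = np.eye(2) - 2 * v @ v.T                 # P = I - 2 v v^T
print(P)                                    # [[ 0. -1.]
                                            #  [-1.  0.]]
q = np.array([3.0, 5.0])                    # an arbitrary point (x, y)
print(P @ q)                                # [-5. -3.] (coordinates swapped and negated)
```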
Applications
Geometric optics
In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see Template:Section link).
Numerical linear algebra
Note that representing a Householder matrix requires only the entries of a single vector rather than a full matrix (which in most algorithms is never explicitly formed), thereby minimizing the storage and memory references needed to use it.
Further, multiplying a vector by a Householder matrix does not require a full matrix-vector multiplication, but only one dot product followed by one axpy operation, as sketched below. Its arithmetic complexity is therefore of the same order as two low-level BLAS-1 operations, making Householder matrices extremely arithmetically efficient.<ref name="saad">Template:Cite book</ref>
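A minimal sketch of this implicit application, assuming a real unit vector stored as a NumPy array (the function name is illustrative):
```python
import numpy as np

def apply_householder(v, x):
    """Apply P = I - 2 v v^T to x without forming P:
    one dot product (BLAS-1) followed by one axpy (BLAS-1)."""
    return x - (2.0 * np.dot(v, x)) * v
```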
Finally, using <math>\hat{\cdot}</math> to denote a computed value and <math>\cdot</math> the mathematically exact value, for a given Householder matrix <math>P</math>,
<math>\widehat{P b}=(P+\Delta P)b,</math>
where <math>\|\Delta P\|_F\leq\tilde{\gamma}_n:=\frac{cnu}{1-cnu}</math> (with <math>u</math> the unit roundoff, <math>n</math> the size of the matrix <math>P</math>, and <math>c</math> some small constant). In other words, multiplication by a Householder matrix is extremely backward stable.<ref name="Higham">Template:Cite book</ref>
Since Householder transformations minimize storage, memory references, and arithmetic complexity while remaining numerically stable, they are widely used in numerical linear algebra: for example, to annihilate the entries below the main diagonal of a matrix,<ref name=taboga>Template:Cite web</ref> to perform QR decompositions, and in the first step of the QR algorithm. They are also widely used for transforming a matrix to Hessenberg form. For symmetric or Hermitian matrices, the symmetry can be preserved, resulting in tridiagonalization.<ref>Template:Cite journal</ref><ref name="G&VL">Template:Cite book</ref>
QR decomposition
Householder transformations can be used to calculate a QR decomposition. Consider a matrix already triangularized up to column <math>i</math>; the goal is then to construct Householder matrices that act upon the trailing principal submatrix of the matrix
<math> \begin{bmatrix} a_{11} & a_{12} & \cdots & & & a_{1n} \\ 0 & a_{22} & \cdots & & & a_{2n} \\ \vdots & & \ddots & & & \vdots \\ 0 & \cdots & 0 & x_{1}=a_{ii} & \cdots & a_{in} \\ 0 & \cdots & 0 & \vdots & & \vdots \\ 0 & \cdots & 0 & x_{n-i+1}=a_{ni} & \cdots & a_{nn} \end{bmatrix} </math>
via the matrix
<math> \begin{bmatrix} I_{i-1}&0\\ 0&P_v \end{bmatrix} </math>.
(Note that we already established that Householder transformations are unitary matrices; since a product of unitary matrices is itself unitary, accumulating these transformations yields the unitary matrix of the QR decomposition.)
If we can find a <math>\vec v</math> so that <math display="block">
P_v \vec{x} = \alpha\vec{e_1}
</math> we can accomplish this. Thinking geometrically, we are looking for a hyperplane such that the reflection of <math>\vec x</math> about it lands exactly on a multiple of the first standard basis vector. In other words, Template:NumBlk for some constant <math>\alpha</math>. However, for this to happen, we must have <math display="block">
\vec v\propto\vec x-\alpha\vec e_1 \text{.}
</math> And since <math>\vec v</math> is a unit vector, we must have Template:NumBlk Substituting equation (Template:EquationNote) back into equation (Template:EquationNote), we get <math display="block">
\vec x-\alpha\vec e_1 =
2 \left\langle \vec{x}, \frac{ \vec{x}-\alpha\vec{e}_1 }{ \|\vec{x}-\alpha\vec{e}_1\|_2 } \right\rangle
\frac{ \vec{x}-\alpha\vec{e}_1 }{ \|\vec x-\alpha\vec e_1\|_2 }
</math> Or, in other words, by comparing the scalars in front of the vector <math>\vec x - \alpha\vec e_1</math> we must have <math display="block">
\|\vec x-\alpha\vec e_1\|_2^2 = 2\langle\vec x,\vec x-\alpha\vec e_1\rangle \text{.}
</math> Expanding both sides, we get <math display="block">
\|\vec x\|_2^2-2\alpha x_1+\alpha^2 = 2(\| \vec x\|_2^2-\alpha x_1)
</math> which means that we can solve for <math>\alpha</math> as <math display="block">
\alpha = \pm\|\vec x\|_2
</math> This completes the construction; however, in practice we want to avoid catastrophic cancellation in equation (Template:EquationNote). To do so, we choose<ref name="saad"/> the sign of <math>\alpha</math> as <math display="block">
\alpha=-\operatorname{sgn}\left(\operatorname{Re}\left(x_1\right)\right)\|\vec x\|_2
</math>
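The construction above, including this sign choice, might be sketched in NumPy as follows (real case only; it assumes the subcolumns encountered are nonzero, and the function names are illustrative):
```python
import numpy as np

def householder_vector(x):
    """Return (v, alpha) with (I - 2 v v^T) x = alpha e_1 (real x, x != 0)."""
    sign = 1.0 if x[0] >= 0 else -1.0     # convention sgn(0) = 1
    alpha = -sign * np.linalg.norm(x)     # sign chosen to avoid cancellation
    v = x.astype(float).copy()
    v[0] -= alpha                         # v is proportional to x - alpha e_1
    return v / np.linalg.norm(v), alpha

def householder_qr(A):
    """Triangularize A with successive reflectors, returning Q, R with A = Q R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for i in range(min(m, n)):
        v, _ = householder_vector(R[i:, i])            # reflector for column i
        R[i:, i:] -= 2.0 * np.outer(v, v @ R[i:, i:])  # left-multiply by P_v
        Q[:, i:] -= 2.0 * np.outer(Q[:, i:] @ v, v)    # accumulate Q = P_1 P_2 ...
    return Q, R
```
Note that each reflector is applied implicitly, as in the previous section, rather than by forming the Householder matrix explicitly.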
Tridiagonalization (Hessenberg)
This procedure is presented in Numerical Analysis by Burden and Faires and applies when the matrix is symmetric. In the non-symmetric case, a similar procedure yields a Hessenberg matrix, which is still useful.
It uses a slightly altered <math>\operatorname{sgn}</math> function with <math>\operatorname{sgn}(0) = 1</math>.<ref name='burden'>Template:Cite book</ref> In the first step, to form the Householder matrix we need to determine <math display="inline">\alpha</math> and <math display="inline">r</math>, which are:
- <math>\begin{align}
\alpha &= -\operatorname{sgn}\left(a_{21}\right)\sqrt{\sum_{j=2}^n a_{j1}^2}; \\
r &= \sqrt{\frac{1}{2}\left(\alpha^2 - a_{21}\alpha\right)};
\end{align}</math>
From <math display="inline">\alpha</math> and <math display="inline">r</math>, construct the vector <math display="inline">\vec v^{(1)}</math>:
- <math>\vec v^{(1)} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix},</math>
where <math display="inline">v_1 = 0</math>, <math display="inline">v_2 = \frac{a_{21} - \alpha}{2r}</math>, and
- <math>v_k = \frac{a_{k1}}{2r}</math> for each <math>k = 3, 4, \ldots, n.</math>
Then compute:
- <math>\begin{align}
P^1 &= I - 2\vec v^{(1)} \left(\vec v^{(1)}\right)^\textsf{T} \\
A^{(2)} &= P^1 AP^1
\end{align}</math>
Having found <math display="inline">P^1</math> and computed <math display="inline">A^{(2)}</math>, the process is repeated for <math display="inline">k = 2, 3, \ldots, n - 2</math> as follows:
- <math>\begin{align}
\alpha &= -\operatorname{sgn}\left(a^k_{k+1,k}\right)\sqrt{\sum_{j=k+1}^n \left(a^k_{jk}\right)^2} \\[2pt]
r &= \sqrt{\frac{1}{2}\left(\alpha^2 - a^k_{k+1,k}\alpha\right)} \\[2pt]
v^k_1 &= v^k_2 = \cdots = v^k_k = 0 \\[2pt]
v^k_{k+1} &= \frac{a^k_{k+1,k} - \alpha}{2r} \\
v^k_j &= \frac{a^k_{jk}}{2r} \text{ for } j = k + 2,\ k + 3,\ \ldots,\ n \\
P^k &= I - 2\vec v^{(k)} \left(\vec v^{(k)}\right)^\textsf{T} \\
A^{(k+1)} &= P^k A^{(k)}P^k
\end{align}</math>
Continuing in this manner, a tridiagonal symmetric matrix is formed.
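A direct transcription of this procedure might look as follows (a NumPy sketch for real symmetric input, assuming the relevant subdiagonal columns are not already zero; the function name is illustrative):
```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form via A <- P A P."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        sign = 1.0 if A[k + 1, k] >= 0 else -1.0          # sgn with sgn(0) = 1
        alpha = -sign * np.sqrt(np.sum(A[k + 1:, k] ** 2))
        r = np.sqrt(0.5 * (alpha ** 2 - A[k + 1, k] * alpha))
        v = np.zeros(n)
        v[k + 1] = (A[k + 1, k] - alpha) / (2.0 * r)
        v[k + 2:] = A[k + 2:, k] / (2.0 * r)
        P = np.eye(n) - 2.0 * np.outer(v, v)              # Householder matrix P^(k)
        A = P @ A @ P
    return A
```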
Examples
In this example, also from Burden and Faires,<ref name="burden" /> the given matrix is transformed into the similar tridiagonal matrix <math>A_3</math> using the Householder method.
- <math>\mathbf{A} = \begin{bmatrix}
4 & 1 & -2 & 2 \\ 1 & 2 & 0 & 1 \\ -2 & 0 & 3 & -2 \\ 2 & 1 & -2 & -1
\end{bmatrix}.</math>
Following those steps in the Householder method, we have:
The first Householder matrix:
- <math>\begin{align}
Q_1 &= \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -\frac{1}{3} & \frac{2}{3} & -\frac{2}{3} \\
0 & \frac{2}{3} & \frac{2}{3} & \frac{1}{3} \\
0 & -\frac{2}{3} & \frac{1}{3} & \frac{2}{3}
\end{bmatrix}, \\
A_2 = Q_1 A Q_1 &= \begin{bmatrix}
4 & -3 & 0 & 0 \\
-3 & \frac{10}{3} & 1 & \frac{4}{3} \\
0 & 1 & \frac{5}{3} & -\frac{4}{3} \\
0 & \frac{4}{3} & -\frac{4}{3} & -1
\end{bmatrix}.
\end{align}</math>
Then <math display="inline">A_2</math> is used to form
- <math>\begin{align}
Q_2 &= \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -\frac{3}{5} & -\frac{4}{5} \\
0 & 0 & -\frac{4}{5} & \frac{3}{5}
\end{bmatrix}, \\
A_3 = Q_2 A_2 Q_2 &= \begin{bmatrix}
4 & -3 & 0 & 0 \\
-3 & \frac{10}{3} & -\frac{5}{3} & 0 \\
0 & -\frac{5}{3} & -\frac{33}{25} & \frac{68}{75} \\
0 & 0 & \frac{68}{75} & \frac{149}{75}
\end{bmatrix}.
\end{align}</math>
As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps.
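Running the tridiagonalization sketch from the previous section on this matrix should reproduce <math>A_3</math> up to floating-point roundoff:
```python
import numpy as np

A = np.array([[ 4.,  1., -2.,  2.],
              [ 1.,  2.,  0.,  1.],
              [-2.,  0.,  3., -2.],
              [ 2.,  1., -2., -1.]])

# householder_tridiagonalize is the sketch from the previous section
print(np.round(householder_tridiagonalize(A), 4))
```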
Quantum computation
Since Householder transformations are unitary, and unitary matrices are fundamental in quantum computation, they are very useful in quantum computing. One of the central algorithms where they appear is Grover's algorithm, in which the oracle function is represented by what turns out to be a Householder transformation:
<math>\begin{cases}
U_\omega |x\rang = -|x\rang & \text{for } x = \omega \text{, that is, } f(x) = 1, \\
U_\omega |x\rang = |x\rang & \text{for } x \ne \omega \text{, that is, } f(x) = 0.
\end{cases}</math>
(Here <math>|x\rangle</math> is bra-ket notation, analogous to the vector <math>\vec x</math> used previously.)
This is done via an algorithm that iterates applications of the oracle <math>U_\omega</math> and of another operator <math>U_s</math>, known as the Grover diffusion operator, defined by
<math>|s\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle</math> and <math>U_s = 2 \left|s\right\rangle\!\! \left\langle s\right| - I</math>.
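To illustrate, both operators are (up to sign) Householder reflections and can be formed in a few lines; the following NumPy sketch uses illustrative values for <math>N</math> and <math>\omega</math>:
```python
import numpy as np

N, omega = 8, 3                               # illustrative: 8 items, marked item 3

e = np.zeros(N); e[omega] = 1.0
U_omega = np.eye(N) - 2.0 * np.outer(e, e)    # oracle: a Householder reflection about |omega>

s = np.full(N, 1.0 / np.sqrt(N))              # uniform superposition |s>
U_s = 2.0 * np.outer(s, s) - np.eye(N)        # diffusion: minus a Householder reflection

state = s.copy()
for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):   # ~ (pi/4) sqrt(N) Grover iterations
    state = U_s @ (U_omega @ state)

print(np.argmax(state**2))                    # 3: the marked item, with high probability
```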
Computational and theoretical relationship to other unitary transformations
Template:See also The Householder transformation is a reflection about a hyperplane with unit normal vector <math display="inline">v</math>, as stated earlier. An <math display="inline">N</math>-by-<math display="inline">N</math> unitary transformation <math display="inline">U</math> satisfies <math display="inline">UU^* = I</math>. Taking the determinant (<math display="inline">N</math>-th power of the geometric mean) and trace (proportional to arithmetic mean) of a unitary matrix reveals that its eigenvalues <math display="inline">\lambda_i</math> have unit modulus. This can be seen directly and swiftly:
- <math>\begin{align}
\frac{\operatorname{Trace}\left(UU^*\right)}{N} &=
\frac{\sum_{j=1}^N\left|\lambda_j\right|^2}{N} = 1, &
\operatorname{det}\left(UU^*\right) &=
\prod_{j=1}^N \left|\lambda_j\right|^2 = 1.
\end{align}</math>
Since arithmetic and geometric means are equal if the variables are constant (see inequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real-valued unitary matrices we obtain orthogonal matrices, <math display="inline">UU^\textsf{T} = I</math>. It follows rather readily (see Orthogonal matrix) that any orthogonal matrix can be decomposed into a product of 2-by-2 rotations, called Givens rotations, and Householder reflections. This is appealing intuitively since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real-valued) geometric operations that render invariant a vector's length.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.<ref>Template:Cite journal</ref>
Finally, we note that a single Householder transformation, unlike a solitary Givens rotation, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or efficiently parallelized. As such, Householder transformations are preferred for dense matrices on sequential machines, while Givens rotations are preferred for sparse matrices and/or parallel machines.
See also
Notes
<references />
References
- Template:Cite journal
- Template:Cite journal
- Template:Cite journal (Herein the Householder transformation is cited as a top 10 algorithm of this century.)
- Template:Cite book
- Template:Citation