Polar decomposition






In mathematics, the polar decomposition of a square real or complex matrix $A$ is a factorization of the form $A = UP$, where $U$ is a unitary matrix and $P$ is a positive semi-definite Hermitian matrix ($U$ is an orthogonal matrix and $P$ is a positive semi-definite symmetric matrix in the real case), both square and of the same size.[1]

If a real $n \times n$ matrix $A$ is interpreted as a linear transformation of $n$-dimensional space $\mathbb{R}^n$, the polar decomposition separates it into a rotation or reflection $U$ of $\mathbb{R}^n$, and a scaling of the space along a set of $n$ orthogonal axes.

The polar decomposition of a square matrix $A$ always exists. If $A$ is invertible, the decomposition is unique, and the factor $P$ will be positive-definite. In that case, $A$ can be written uniquely in the form $A = U e^{X}$, where $U$ is unitary and $X$ is the unique self-adjoint logarithm of the matrix $P$.[2] This decomposition is useful in computing the fundamental group of (matrix) Lie groups.[3]

The polar decomposition can also be defined as $A = P'U$, where $P'$ is a symmetric positive-definite matrix with the same eigenvalues as $P$ but different eigenvectors.

The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number $z$, namely $z = u r$, where $r$ is its absolute value (a non-negative real number), and $u$ is a complex number with unit norm (an element of the circle group).

The definition may be extended to rectangular matrices $A \in \mathbb{C}^{m \times n}$ by requiring $U$ to be a semi-unitary matrix and $P$ to be a positive-semidefinite Hermitian matrix. The decomposition always exists, and $P$ is always unique. The matrix $U$ is unique if and only if $A$ has full rank.[4]
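
As a minimal numerical sketch, assuming SciPy is available, the factors of an arbitrary example matrix can be computed with the built-in routine scipy.linalg.polar:

```python
# Minimal sketch: computing the right polar decomposition A = UP numerically.
# The matrix A below is an arbitrary example, not taken from the article.
import numpy as np
from scipy.linalg import polar

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
U, P = polar(A, side='right')   # A = U @ P, U orthogonal/unitary, P symmetric PSD

print(np.allclose(A, U @ P))                # True: the factorization reproduces A
print(np.allclose(U.T @ U, np.eye(2)))      # True: U is orthogonal (real case)
print(np.allclose(P, P.T),                  # P is symmetric ...
      np.all(np.linalg.eigvalsh(P) >= -1e-12))  # ... and positive semi-definite
```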

Geometric interpretation


A real square $n \times n$ matrix $A$ can be interpreted as the linear transformation of $\mathbb{R}^n$ that takes a column vector $x$ to $Ax$. Then, in the polar decomposition $A = RP$, the factor $R$ is an $n \times n$ real orthogonal matrix. The polar decomposition can then be seen as expressing the linear transformation defined by $A$ as a scaling of the space $\mathbb{R}^n$ along each eigenvector $e_i$ of $P$ by a scale factor $\sigma_i$ (the action of $P$), followed by a rotation of $\mathbb{R}^n$ (the action of $R$).

Alternatively, the decomposition $A = P'R$ expresses the transformation defined by $A$ as a rotation ($R$) followed by a scaling ($P'$) along certain orthogonal directions. The scale factors are the same, but the directions are different.
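
A small numerical illustration of this reading, with an example matrix chosen for the purpose, is the following sketch: the factor $P$ scales along its orthogonal eigenvectors, and $R$ then rotates (or reflects) the result.

```python
# Sketch (example matrix assumed): P scales along its orthogonal eigenvectors,
# and the orthogonal factor R is applied afterwards.
import numpy as np
from scipy.linalg import polar

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
R, P = polar(A)                          # right polar decomposition A = R @ P

sigma, e = np.linalg.eigh(P)             # scale factors sigma_i and orthogonal axes e_i
x = np.array([1.0, 1.0])
scaled = e @ np.diag(sigma) @ e.T @ x    # scaling step (action of P)
print(np.allclose(A @ x, R @ scaled))    # True: the rotation R is applied after the scaling
print(np.isclose(abs(np.linalg.det(R)), 1.0))  # R is orthogonal (rotation or reflection)
```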

Properties


The polar decomposition of the complex conjugate of $A$ is given by $\overline{A} = \overline{U}\,\overline{P}$. Note that $\det A = \det U \det P = e^{i\theta} r$ gives the corresponding polar decomposition of the determinant of $A$, since $\det U = e^{i\theta}$ and $\det P = r = |\det A|$. In particular, if $A$ has determinant 1 then both $U$ and $P$ have determinant 1.

The positive-semidefinite matrix $P$ is always unique, even if $A$ is singular, and is denoted as
$$P = (A^* A)^{1/2},$$
where $A^*$ denotes the conjugate transpose of $A$. The uniqueness of $P$ ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that $A^* A$ is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root.[5] If $A$ is invertible, then $P$ is positive-definite, thus also invertible, and the matrix $U$ is uniquely determined by
$$U = A P^{-1}.$$
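
A brief sketch of these formulas, on an assumed invertible example matrix, computes $P$ as a matrix square root of $A^* A$ and recovers $U = A P^{-1}$:

```python
# Sketch (example matrix assumed): P = (A*A)^{1/2} via a matrix square root,
# and U = A P^{-1}, which is well-defined because this A is invertible.
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[0.0, -2.0],
              [1.0,  1.0]])
P = sqrtm(A.conj().T @ A)        # unique positive-semidefinite square root of A*A
U = A @ np.linalg.inv(P)         # unitary factor, determined uniquely here

print(np.allclose(A, U @ P))                   # True
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```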

Relation to the SVD


In terms of the singular value decomposition (SVD) of $A$, $A = W \Sigma V^*$, one has
$$P = V \Sigma V^*, \qquad U = W V^*,$$
where $U$, $V$, and $W$ are unitary matrices (called orthogonal matrices if the field is the reals $\mathbb{R}$). This confirms that $P$ is positive-definite and $U$ is unitary. Thus, the existence of the SVD is equivalent to the existence of the polar decomposition.

One can also decompose $A$ in the form
$$A = P' U.$$
Here $U$ is the same as before and $P'$ is given by
$$P' = U P U^{-1} = (A A^*)^{1/2} = W \Sigma W^*.$$
This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. The left polar decomposition is also known as the reverse polar decomposition.
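
The construction above can be sketched numerically as follows, with a random example matrix; both the right factorization $A = UP$ and the left factorization $A = P'U$ are assembled from the same SVD:

```python
# Sketch (random example matrix assumed): right and left polar decompositions
# built from the SVD A = W Sigma V*, sharing the same unitary factor U = W V*.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
W, s, Vh = np.linalg.svd(A)            # A = W @ diag(s) @ Vh, with Vh = V^*

U  = W @ Vh                            # unitary factor
P  = Vh.conj().T @ np.diag(s) @ Vh     # right factor  P  = V Sigma V^*
Pp = W @ np.diag(s) @ W.conj().T       # left factor   P' = W Sigma W^*

print(np.allclose(A, U @ P))           # True: right polar decomposition
print(np.allclose(A, Pp @ U))          # True: left (reverse) polar decomposition
```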

The polar decomposition of a square invertible real matrix $A$ is of the form
$$A = |A| R,$$
where $|A| = (A A^{\mathsf{T}})^{1/2}$ is a positive-definite matrix and $R = |A|^{-1} A$ is an orthogonal matrix.

Relation to normal matrices


The matrix $A$ with polar decomposition $A = UP$ is normal if and only if $U$ and $P$ commute: $UP = PU$, or equivalently, they are simultaneously diagonalizable.
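
A quick numerical check of this characterization, on example matrices chosen for illustration, is:

```python
# Sketch (example matrices assumed): the polar factors of a normal matrix commute,
# while those of a non-normal matrix generally do not.
import numpy as np
from scipy.linalg import polar

N = np.array([[1.0, 2.0],
              [2.0, 1.0]])          # symmetric, hence normal
U, P = polar(N)
print(np.allclose(U @ P, P @ U))    # True

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # a shear, not normal
U, P = polar(M)
print(np.allclose(U @ P, P @ U))    # False
```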

Construction and proofs of existence


The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.

Derivation for normal matrices


If $A$ is normal, then it is unitarily equivalent to a diagonal matrix: $A = V \Lambda V^*$ for some unitary matrix $V$ and some diagonal matrix $\Lambda$. This makes the derivation of its polar decomposition particularly straightforward, as we can then write
$$A = V \Phi_\Lambda |\Lambda| V^* = \underbrace{(V \Phi_\Lambda V^*)}_{\equiv U}\, \underbrace{(V |\Lambda| V^*)}_{\equiv P},$$
where $\Phi_\Lambda$ is a diagonal matrix containing the phases of the elements of $\Lambda$, that is, $(\Phi_\Lambda)_{ii} \equiv \Lambda_{ii} / |\Lambda_{ii}|$ when $\Lambda_{ii} \neq 0$, and $(\Phi_\Lambda)_{ii} = 0$ when $\Lambda_{ii} = 0$.

The polar decomposition is thus $A = UP$, with $U$ and $P$ diagonal in the eigenbasis of $A$ and having eigenvalues equal to the phases and absolute values of those of $A$, respectively.
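
The following sketch mirrors this derivation on a normal matrix that is assembled, for illustration, from a chosen unitary $V$ and diagonal $\Lambda$; the polar factors are then read off from the phases and absolute values of the eigenvalues:

```python
# Sketch (unitary V and eigenvalues chosen for illustration): for a normal matrix
# A = V diag(lam) V*, the factors U and P are built from the phases and moduli of lam.
import numpy as np

theta = 0.3
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]).astype(complex)   # a unitary matrix
lam = np.array([2.0 + 1.0j, -3.0])                                # eigenvalues
A = V @ np.diag(lam) @ V.conj().T                                 # a normal matrix

phases = np.where(lam != 0, lam / np.abs(lam), 0.0)
U = V @ np.diag(phases) @ V.conj().T           # phases of the eigenvalues
P = V @ np.diag(np.abs(lam)) @ V.conj().T      # absolute values of the eigenvalues

print(np.allclose(A, U @ P))                   # True: A = U P
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```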

Derivation for invertible matrices


From the singular-value decomposition, it can be shown that a matrix $A$ is invertible if and only if $A^* A$ (equivalently, $A A^*$) is. Moreover, this is true if and only if the eigenvalues of $A^* A$ are all nonzero.[6]

In this case, the polar decomposition is directly obtained by writing
$$A = A (A^* A)^{-1/2} (A^* A)^{1/2},$$
and observing that $A (A^* A)^{-1/2}$ is unitary. To see this, we can exploit the spectral decomposition of $A^* A$ to write $A (A^* A)^{-1/2} = A V D^{-1/2} V^*$.

In this expression, $V^*$ is unitary because $V$ is. To show that also $A V D^{-1/2}$ is unitary, we can use the SVD to write $A = W D^{1/2} V^*$, so that
$$A V D^{-1/2} = W D^{1/2} V^* V D^{-1/2} = W,$$
where again $W$ is unitary by construction.

Yet another way to directly show the unitarity of $A (A^* A)^{-1/2}$ is to note that, writing the SVD of $A$ in terms of rank-1 matrices as $A = \sum_k s_k v_k w_k^*$, where $s_k$ are the singular values of $A$, we have
$$A (A^* A)^{-1/2} = \left(\sum_j s_j v_j w_j^*\right)\left(\sum_k s_k^{-1} w_k w_k^*\right) = \sum_k v_k w_k^*,$$
which directly implies the unitarity of $A (A^* A)^{-1/2}$, because a matrix is unitary if and only if its singular values all have unit absolute value.

Note how, from the above construction, it follows that the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined.

General derivation


The SVD of a square matrix $A$ reads $A = W D^{1/2} V^*$, with $W$, $V$ unitary matrices, and $D$ a diagonal, positive semi-definite matrix. By simply inserting an additional pair of $W$s or $V$s, we obtain the two forms of the polar decomposition of $A$:
$$A = W D^{1/2} V^* = \underbrace{(W D^{1/2} W^*)}_{P'}\, \underbrace{(W V^*)}_{U} = \underbrace{(W V^*)}_{U}\, \underbrace{(V D^{1/2} V^*)}_{P}.$$
More generally, if $A$ is some rectangular $n \times m$ matrix, its SVD can be written as $A = W D^{1/2} V^*$, where now $W$ and $V$ are isometries with dimensions $n \times r$ and $m \times r$, respectively, where $r \equiv \operatorname{rank}(A)$, and $D$ is again a diagonal positive semi-definite square matrix with dimensions $r \times r$. We can now apply the same reasoning used in the above equation to write $A = P'U = UP$, but now $U \equiv W V^*$ is not in general unitary. Nonetheless, $U$ has the same support and range as $A$, and it satisfies $U^* U = V V^*$ and $U U^* = W W^*$. This makes $U$ into an isometry when its action is restricted onto the support of $A$, that is, it means that $U$ is a partial isometry.

As an explicit example of this more general case, one can take the SVD of a rectangular matrix of full column rank: the resulting factor $U = W V^*$ is an isometry, but not unitary. If instead the matrix is rank-deficient, the resulting $U = W V^*$ is a partial isometry (but not an isometry).
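
A minimal numerical sketch of the rank-deficient situation, with an example matrix chosen here for illustration, builds $U = W V^*$ from the thin SVD and checks that it is a partial isometry rather than a unitary:

```python
# Sketch (example matrix chosen for illustration): for a rank-deficient rectangular A,
# the factor U = W V* built from the truncated thin SVD is a partial isometry.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])                      # rank 1, shape 3 x 2
W, s, Vh = np.linalg.svd(A, full_matrices=False)
r = np.sum(s > 1e-12)                           # numerical rank
W, s, Vh = W[:, :r], s[:r], Vh[:r, :]           # keep only the nonzero singular values

U = W @ Vh                                      # partial isometry (3 x 2 here)
P = Vh.conj().T @ np.diag(s) @ Vh               # positive semi-definite factor (2 x 2)

print(np.allclose(A, U @ P))                    # True: A = U P
print(np.allclose(U @ U.conj().T @ U, U))       # True: U is a partial isometry
print(np.allclose(U.conj().T @ U, np.eye(2)))   # False: U*U is only a projection onto the support
```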

Bounded operators on Hilbert space


The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.

The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.

The operator U must be weakened to a partial isometry, rather than unitary, because of the following issues. If A is the one-sided shift on l²(N), then |A| = (A*A)^{1/2} = I. So if A = U |A|, U must be A, which is not unitary.

The existence of a polar decomposition is a consequence of Douglas' lemma:

Lemma — If A, B are bounded operators on a Hilbert space H, and A*A ≤ B*B, then there exists a contraction C such that A = CB. Furthermore, C is unique if ker(B*) ⊂ ker(C).

The operator C can be defined by C(Bh) := Ah for all h in H, extended by continuity to the closure of Ran(B), and by zero on the orthogonal complement of Ran(B), so that C is defined on all of H. The lemma then follows since A*A ≤ B*B implies ker(B) ⊂ ker(A).

In particular, if A*A = B*B, then C is a partial isometry, which is unique if ker(B*) ⊂ ker(C). In general, for any bounded operator A,
A*A = (A*A)^{1/2} (A*A)^{1/2},
where (A*A)^{1/2} is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have
A = U (A*A)^{1/2}
for some partial isometry U, which is unique if ker(A*) ⊂ ker(U). Take P to be (A*A)^{1/2} and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P'U', where P' is positive and U' a partial isometry.

When H is finite-dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.

By property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: U is in the von Neumann algebra generated by A. If A is invertible, the polar part U will be in the C*-algebra as well.

Unbounded operators


If A is a closed, densely defined unbounded operator between complex Hilbert spaces then it still has a (unique) polar decomposition A = U |A|, where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as A, and U is a partial isometry vanishing on the orthogonal complement of the range ran(|A|).

The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B) and A*Ah = B*Bh for all h ∈ dom(A*A), then there exists a partial isometry U such that A = UB. U is unique if ran(B)⊥ ⊂ ker(U). The operator A being closed and densely defined ensures that the operator A*A is self-adjoint (with dense domain) and therefore allows one to define (A*A)^{1/2}. Applying the lemma gives the polar decomposition.

If an unbounded operator A is affiliated to a von Neumann algebra M, and A = UP is its polar decomposition, then U is in M and so is the spectral projection of P, 1_B(P), for any Borel set B in [0, ∞).

Quaternion polar decomposition


The polar decomposition of quaternions $\mathbb{H}$, with orthonormal basis quaternions $1, \mathbf{i}, \mathbf{j}, \mathbf{k}$, depends on the unit 2-dimensional sphere of square roots of minus one, $\{x\mathbf{i} + y\mathbf{j} + z\mathbf{k} \in \mathbb{H} : x^2 + y^2 + z^2 = 1\}$, whose elements are known as right versors. Given any $r$ on this sphere, and an angle $-\pi < a \leq \pi$, the versor $e^{ar} = \cos a + r \sin a$ is on the unit 3-sphere of $\mathbb{H}$. For $a = 0$ and $a = \pi$, the versor is 1 or −1, regardless of which $r$ is selected. The norm $t$ of a quaternion $q$ is the Euclidean distance from the origin to $q$. When a quaternion is not just a real number, then there is a unique polar decomposition:
$$q = t\, e^{ar}.$$
Here $r$, $a$, $t$ are all uniquely determined such that $r$ is a right versor ($r^2 = -1$), $a$ satisfies $0 < a < \pi$, and $t > 0$.
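
A small sketch of this quaternion polar form, operating directly on the components $(w, x, y, z)$ of $q = w + x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$ (example values assumed), is:

```python
# Sketch (example components assumed): compute (t, a, r) with q = t * (cos a + r sin a),
# where r is the unit pure quaternion (right versor) in the direction of the vector part.
import math

def quaternion_polar(w, x, y, z):
    """Return (t, a, r) for the polar form of q = w + x i + y j + z k (q not real)."""
    t = math.sqrt(w*w + x*x + y*y + z*z)     # norm of q
    v = math.sqrt(x*x + y*y + z*z)           # norm of the vector (imaginary) part
    if v == 0:
        raise ValueError("q is real: the right versor r is not determined")
    a = math.atan2(v, w)                     # angle with 0 < a < pi
    r = (x / v, y / v, z / v)                # unit vector, so r**2 = -1 as a quaternion
    return t, a, r

t, a, r = quaternion_polar(1.0, 2.0, 2.0, 1.0)
# Reconstruct q from the polar form and compare with the original components.
w_rec = t * math.cos(a)
vec_rec = tuple(t * math.sin(a) * ri for ri in r)
print(round(w_rec, 12), tuple(round(c, 12) for c in vec_rec))   # ~ (1.0, (2.0, 2.0, 1.0))
```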

Alternative planar decompositions


In the Cartesian plane, alternative planar ring decompositions arise as follows: in the plane of dual numbers the role of the unit circle is played by the line x = 1, and in the plane of split-complex numbers it is played by the unit hyperbola, giving analogous polar forms for those number systems.[7]

Numerical determination of the matrix polar decomposition


To compute an approximation of the polar decomposition A = UP, usually the unitary factor U is approximated.[8][9] The iteration is based on Heron's method for the square root of 1 and computes, starting from $U_0 = A$, the sequence
$$U_{k+1} = \tfrac{1}{2}\left(U_k + (U_k^*)^{-1}\right), \qquad k = 0, 1, 2, \ldots$$

The combination of inversion and Hermite conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.

This basic iteration may be refined to speed up the process, for example by rescaling the iterates $U_k$ so that their singular values approach 1 more quickly.[8][9] A minimal sketch of the basic (unscaled) iteration is given below.
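
The sketch below assumes a tolerance, an iteration cap, and an example invertible matrix; it returns an approximation of the unitary factor, from which the Hermitian factor follows as $P = U^* A$.

```python
# Sketch (tolerance, iteration cap, and example matrix assumed): Newton's iteration
# U_{k+1} = (U_k + (U_k^*)^{-1}) / 2, started at U_0 = A, converging to the unitary factor.
import numpy as np

def polar_unitary_newton(A, tol=1e-12, max_iter=100):
    """Approximate the unitary factor U of A = U P for a square invertible A."""
    U = A.astype(complex)
    for _ in range(max_iter):
        U_next = 0.5 * (U + np.linalg.inv(U).conj().T)   # average U with its inverse adjoint
        if np.linalg.norm(U_next - U, ord='fro') < tol:
            return U_next
        U = U_next
    return U

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
U = polar_unitary_newton(A)
P = U.conj().T @ A                               # Hermitian positive-definite factor
print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: U is unitary
print(np.allclose(A, U @ P))                     # True: A = U P
```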


References

  1. ^ Hall 2015, Section 2.5
  2. ^ Hall 2015, Theorem 2.17
  3. ^ Hall 2015, Section 13.3
  4. ^ Higham, Nicholas J.; Schreiber, Robert S. (1990). "Fast polar decomposition of an arbitrary matrix". SIAM J. Sci. Stat. Comput. 11 (4). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 648–655. CiteSeerX 10.1.1.111.9239. doi:10.1137/0911038. ISSN 0196-5204. S2CID 14268409.
  5. ^ Hall 2015, Lemma 2.18
  6. ^ Note how this implies, by the positivity of $A^* A$, that the eigenvalues are all real and strictly positive.
  7. ^ Sobczyk, G. (1995). "Hyperbolic Number Plane". College Mathematics Journal. 26: 268–280.
  8. ^ Higham, Nicholas J. (1986). "Computing the polar decomposition with applications". SIAM J. Sci. Stat. Comput. 7 (4). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 1160–1174. CiteSeerX 10.1.1.137.7354. doi:10.1137/0907079. ISSN 0196-5204.
  9. ^ Byers, Ralph; Hongguo Xu (2008). "A New Scaling for Newton's Iteration for the Polar Decomposition and its Backward Stability". SIAM J. Matrix Anal. Appl. 30 (2). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics: 822–843. CiteSeerX 10.1.1.378.6737. doi:10.1137/070699895. ISSN 0895-4798.
