import copy

from sympy.core import S
from sympy.core.function import expand_mul
from sympy.functions.elementary.miscellaneous import Min, sqrt
from sympy.functions.elementary.complexes import sign

from .exceptions import NonSquareMatrixError, NonPositiveDefiniteMatrixError
from .utilities import _get_intermediate_simp, _iszero
from .determinant import _find_reasonable_pivot_naive


def _rank_decomposition(M, iszerofunc=_iszero, simplify=False):
    r"""Returns a pair of matrices (`C`, `F`) with matching rank
    such that `A = C F`.

    Parameters
    ==========

    iszerofunc : Function, optional
        A function used for detecting whether an element can
        act as a pivot.  ``lambda x: x.is_zero`` is used by default.

    simplify : Bool or Function, optional
        A function used to simplify elements when looking for a
        pivot. By default SymPy's ``simplify`` is used.

    Returns
    =======

    (C, F) : Matrices
        `C` and `F` are full-rank matrices with the same rank as `A`,
        whose product gives `A`.

        See Notes for additional mathematical details.

    Examples
    ========

    >>> from sympy import Matrix
    >>> A = Matrix([
    ...     [1, 3, 1, 4],
    ...     [2, 7, 3, 9],
    ...     [1, 5, 3, 1],
    ...     [1, 2, 0, 8]
    ... ])
    >>> C, F = A.rank_decomposition()
    >>> C
    Matrix([
    [1, 3, 4],
    [2, 7, 9],
    [1, 5, 1],
    [1, 2, 8]])
    >>> F
    Matrix([
    [1, 0, -2, 0],
    [0, 1,  1, 0],
    [0, 0,  0, 1]])
    >>> C * F == A
    True

    Notes
    =====

    Obtaining `F`, an RREF of `A`, is equivalent to creating a
    product

    .. math::
        E_n E_{n-1} ... E_1 A = F

    where `E_n, E_{n-1}, \dots, E_1` are the elimination matrices or
    permutation matrices equivalent to each row-reduction step.

    The inverse of the same product of elimination matrices gives
    `C`:

    .. math::
        C = \left(E_n E_{n-1} \dots E_1\right)^{-1}

    It is not necessary, however, to actually compute the inverse:
    the columns of `C` are those from the original matrix with the
    same column indices as the indices of the pivot columns of `F`.

    References
    ==========

    .. [1] https://en.wikipedia.org/wiki/Rank_factorization

    .. [2] Piziak, R.; Odell, P. L. (1 June 1999).
        "Full Rank Factorization of Matrices".
        Mathematics Magazine. 72 (3): 193. doi:10.2307/2690882

    See Also
    ========

    sympy.matrices.matrixbase.MatrixBase.rref
    """

    F, pivot_cols = M.rref(simplify=simplify, iszerofunc=iszerofunc,
                           pivots=True)
    rank = len(pivot_cols)

    C = M.extract(range(M.rows), pivot_cols)
    F = F[:rank, :]

    return C, F


def _liupc(M):
    """Liu's algorithm, for pre-determination of the Elimination Tree of
    the given matrix, used in row-based symbolic Cholesky factorization.

    Examples
    ========

    >>> from sympy import SparseMatrix
    >>> S = SparseMatrix([
    ... [1, 0, 3, 2],
    ... [0, 0, 1, 0],
    ... [4, 0, 0, 5],
    ... [0, 6, 7, 0]])
    >>> S.liupc()
    ([[0], [], [0], [1, 2]], [4, 3, 4, 4])

    References
    ==========

    .. [1] Symbolic Sparse Cholesky Factorization using Elimination Trees,
           Jeroen Van Grondelle (1999)
           https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.7582
    """
    # get the indices of the elements that are non-zero on or below diagonal
    R = [[] for _ in range(M.rows)]

    for r, c, _ in M.row_list():
        if c <= r:
            R[r].append(c)

    inf     = len(R)  # nothing will be this large
    parent  = [inf]*M.rows
    virtual = [inf]*M.rows

    for r in range(M.rows):
        for c in R[r][:-1]:
            while virtual[c] < r:
                t          = virtual[c]
                virtual[c] = r
                c          = t

            if virtual[c] == inf:
                parent[c] = virtual[c] = r

    return R, parent
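# Illustrative usage of the public ``rank_decomposition`` wrapper around the
# helper above; this demo mirrors the docstring example and is not part of
# the module itself.

```python
from sympy import Matrix

A = Matrix([[1, 3, 1, 4],
            [2, 7, 3, 9],
            [1, 5, 3, 1],
            [1, 2, 0, 8]])
C, F = A.rank_decomposition()

# C consists of the pivot columns of A; F is the nonzero part of rref(A).
assert C * F == A
assert C.rank() == F.rank() == A.rank() == 3
```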
def _row_structure_symbolic_cholesky(M):
    """Symbolic cholesky factorization, for pre-determination of the
    non-zero structure of the Cholesky factorization.

    Examples
    ========

    >>> from sympy import SparseMatrix
    >>> S = SparseMatrix([
    ... [1, 0, 3, 2],
    ... [0, 0, 1, 0],
    ... [4, 0, 0, 5],
    ... [0, 6, 7, 0]])
    >>> S.row_structure_symbolic_cholesky()
    [[0], [], [0], [1, 2]]

    References
    ==========

    .. [1] Symbolic Sparse Cholesky Factorization using Elimination Trees,
           Jeroen Van Grondelle (1999)
           https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.7582
    """

    R, parent = M.liupc()
    inf       = len(R)  # this acts as infinity
    Lrow      = copy.deepcopy(R)  # we modify it in place

    for k in range(M.rows):
        for j in R[k]:
            while j != inf and j != k:
                Lrow[k].append(j)
                j = parent[j]

        Lrow[k] = sorted(set(Lrow[k]))

    return Lrow
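# A short demo of the two symbolic-structure helpers above, mirroring the
# docstring examples (illustrative only, not part of the module):

```python
from sympy import SparseMatrix

S = SparseMatrix([[1, 0, 3, 2],
                  [0, 0, 1, 0],
                  [4, 0, 0, 5],
                  [0, 6, 7, 0]])

# liupc returns the lower-triangular nonzero pattern and the elimination tree.
assert S.liupc() == ([[0], [], [0], [1, 2]], [4, 3, 4, 4])

# The symbolic Cholesky row structure predicts where fill-in can occur.
assert S.row_structure_symbolic_cholesky() == [[0], [], [0], [1, 2]]
```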
def _cholesky(M, hermitian=True):
    """Returns the Cholesky-type decomposition L of a matrix A
    such that L * L.H == A if hermitian flag is True,
    or L * L.T == A if hermitian is False.

    A must be a Hermitian positive-definite matrix if hermitian is True,
    or a symmetric matrix if it is False.

    Examples
    ========

    >>> from sympy import Matrix
    >>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
    >>> A.cholesky()
    Matrix([
    [ 5, 0, 0],
    [ 3, 3, 0],
    [-1, 1, 3]])
    >>> A.cholesky() * A.cholesky().T
    Matrix([
    [25, 15, -5],
    [15, 18,  0],
    [-5,  0, 11]])

    The matrix can have complex entries:

    >>> from sympy import I
    >>> A = Matrix(((9, 3*I), (-3*I, 5)))
    >>> A.cholesky()
    Matrix([
    [ 3, 0],
    [-I, 2]])
    >>> A.cholesky() * A.cholesky().H
    Matrix([
    [   9, 3*I],
    [-3*I,   5]])

    Non-hermitian Cholesky-type decomposition may be useful when the
    matrix is not positive-definite.

    >>> A = Matrix([[1, 2], [2, 1]])
    >>> L = A.cholesky(hermitian=False)
    >>> L
    Matrix([
    [1,         0],
    [2, sqrt(3)*I]])
    >>> L*L.T == A
    True

    See Also
    ========

    sympy.matrices.dense.DenseMatrix.LDLdecomposition
    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    QRdecomposition
    """

    from .dense import MutableDenseMatrix

    if not M.is_square:
        raise NonSquareMatrixError("Matrix must be square.")
    if hermitian and not M.is_hermitian:
        raise ValueError("Matrix must be Hermitian.")
    if not hermitian and not M.is_symmetric():
        raise ValueError("Matrix must be symmetric.")

    L = MutableDenseMatrix.zeros(M.rows, M.rows)

    if hermitian:
        for i in range(M.rows):
            for j in range(i):
                L[i, j] = ((1 / L[j, j])*(M[i, j] -
                    sum(L[i, k]*L[j, k].conjugate() for k in range(j))))

            Lii2 = (M[i, i] -
                sum(L[i, k]*L[i, k].conjugate() for k in range(i)))

            if Lii2.is_positive is False:
                raise NonPositiveDefiniteMatrixError(
                    "Matrix must be positive-definite")

            L[i, i] = sqrt(Lii2)

    else:
        for i in range(M.rows):
            for j in range(i):
                L[i, j] = ((1 / L[j, j])*(M[i, j] -
                    sum(L[i, k]*L[j, k] for k in range(j))))

            L[i, i] = sqrt(M[i, i] -
                sum(L[i, k]**2 for k in range(i)))

    return M._new(L)
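# A quick check of the dense Cholesky routine above, using the docstring's
# real and complex examples (illustrative only, not part of the module):

```python
from sympy import Matrix, I

# Real symmetric positive-definite case: L * L.T reconstructs A.
A = Matrix([[25, 15, -5], [15, 18, 0], [-5, 0, 11]])
L = A.cholesky()
assert L * L.T == A
assert L.is_lower

# Hermitian case: the conjugate transpose L.H is needed instead of L.T.
B = Matrix([[9, 3*I], [-3*I, 5]])
Lb = B.cholesky()
assert Lb * Lb.H == B
```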
def _cholesky_sparse(M, hermitian=True):
    """
    Returns the Cholesky decomposition L of a matrix A
    such that L * L.T = A

    A must be a square, symmetric, positive-definite
    and non-singular matrix

    Examples
    ========

    >>> from sympy import SparseMatrix
    >>> A = SparseMatrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
    >>> A.cholesky()
    Matrix([
    [ 5, 0, 0],
    [ 3, 3, 0],
    [-1, 1, 3]])
    >>> A.cholesky() * A.cholesky().T == A
    True

    The matrix can have complex entries:

    >>> from sympy import I
    >>> A = SparseMatrix(((9, 3*I), (-3*I, 5)))
    >>> A.cholesky()
    Matrix([
    [ 3, 0],
    [-I, 2]])
    >>> A.cholesky() * A.cholesky().H
    Matrix([
    [   9, 3*I],
    [-3*I,   5]])

    Non-hermitian Cholesky-type decomposition may be useful when the
    matrix is not positive-definite.

    >>> A = SparseMatrix([[1, 2], [2, 1]])
    >>> L = A.cholesky(hermitian=False)
    >>> L
    Matrix([
    [1,         0],
    [2, sqrt(3)*I]])
    >>> L*L.T == A
    True

    See Also
    ========

    sympy.matrices.sparse.SparseMatrix.LDLdecomposition
    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    QRdecomposition
    """

    from .dense import MutableDenseMatrix

    if not M.is_square:
        raise NonSquareMatrixError("Matrix must be square.")
    if hermitian and not M.is_hermitian:
        raise ValueError("Matrix must be Hermitian.")
    if not hermitian and not M.is_symmetric():
        raise ValueError("Matrix must be symmetric.")

    dps       = _get_intermediate_simp(expand_mul, expand_mul)
    Crowstruc = M.row_structure_symbolic_cholesky()
    C         = MutableDenseMatrix.zeros(M.rows)

    for i in range(len(Crowstruc)):
        for j in Crowstruc[i]:
            if i != j:
                C[i, j] = M[i, j]
                summ    = 0

                for p1 in Crowstruc[i]:
                    if p1 < j:
                        for p2 in Crowstruc[j]:
                            if p2 < j:
                                if p1 == p2:
                                    if hermitian:
                                        summ += C[i, p1]*C[j, p1].conjugate()
                                    else:
                                        summ += C[i, p1]*C[j, p1]
                            else:
                                break
                    else:
                        break

                C[i, j] = dps((C[i, j] - summ) / C[j, j])

            else:  # i == j
                C[j, j] = M[j, j]
                summ    = 0

                for k in Crowstruc[j]:
                    if k < j:
                        if hermitian:
                            summ += C[j, k]*C[j, k].conjugate()
                        else:
                            summ += C[j, k]**2
                    else:
                        break

                Cjj2 = dps(C[j, j] - summ)

                if hermitian and Cjj2.is_positive is False:
                    raise NonPositiveDefiniteMatrixError(
                        "Matrix must be positive-definite")

                C[j, j] = sqrt(Cjj2)

    return M._new(C)
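# The non-hermitian branch of the sparse Cholesky routine above, taken from
# the docstring: for an indefinite symmetric matrix, L gets complex entries
# but L * L.T still reconstructs A (illustrative only, not part of the module):

```python
from sympy import SparseMatrix, sqrt, I

A = SparseMatrix([[1, 2], [2, 1]])
L = A.cholesky(hermitian=False)

# L[1, 1] is sqrt(1 - 4) = sqrt(3)*I, yet the factorization identity holds.
assert L * L.T == A
assert L[1, 1] == sqrt(3)*I
```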
def _LDLdecomposition(M, hermitian=True):
    """Returns the LDL Decomposition (L, D) of matrix A,
    such that L * D * L.H == A if hermitian flag is True,
    or L * D * L.T == A if hermitian is False.
    This method eliminates the use of square root.
    Further, it ensures that all the diagonal entries of L are 1.
    A must be a Hermitian positive-definite matrix if hermitian is True,
    or a symmetric matrix otherwise.

    Examples
    ========

    >>> from sympy import Matrix, eye
    >>> A = Matrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
    >>> L, D = A.LDLdecomposition()
    >>> L
    Matrix([
    [   1,   0, 0],
    [ 3/5,   1, 0],
    [-1/5, 1/3, 1]])
    >>> D
    Matrix([
    [25, 0, 0],
    [ 0, 9, 0],
    [ 0, 0, 9]])
    >>> L * D * L.T * A.inv() == eye(A.rows)
    True

    The matrix can have complex entries:

    >>> from sympy import I
    >>> A = Matrix(((9, 3*I), (-3*I, 5)))
    >>> L, D = A.LDLdecomposition()
    >>> L
    Matrix([
    [   1, 0],
    [-I/3, 1]])
    >>> D
    Matrix([
    [9, 0],
    [0, 4]])
    >>> L*D*L.H == A
    True

    See Also
    ========

    sympy.matrices.dense.DenseMatrix.cholesky
    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    QRdecomposition
    """

    from .dense import MutableDenseMatrix

    if not M.is_square:
        raise NonSquareMatrixError("Matrix must be square.")
    if hermitian and not M.is_hermitian:
        raise ValueError("Matrix must be Hermitian.")
    if not hermitian and not M.is_symmetric():
        raise ValueError("Matrix must be symmetric.")

    D = MutableDenseMatrix.zeros(M.rows, M.rows)
    L = MutableDenseMatrix.eye(M.rows)

    if hermitian:
        for i in range(M.rows):
            for j in range(i):
                L[i, j] = (1 / D[j, j])*(M[i, j] - sum(
                    L[i, k]*L[j, k].conjugate()*D[k, k] for k in range(j)))

            D[i, i] = (M[i, i] -
                sum(L[i, k]*L[i, k].conjugate()*D[k, k] for k in range(i)))

            if D[i, i].is_positive is False:
                raise NonPositiveDefiniteMatrixError(
                    "Matrix must be positive-definite")

    else:
        for i in range(M.rows):
            for j in range(i):
                L[i, j] = (1 / D[j, j])*(M[i, j] - sum(
                    L[i, k]*L[j, k]*D[k, k] for k in range(j)))

            D[i, i] = M[i, i] - sum(L[i, k]**2*D[k, k] for k in range(i))

    return M._new(L), M._new(D)
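# A quick check of the dense LDL routine above, mirroring the docstring
# example (illustrative only, not part of the module):

```python
from sympy import Matrix

A = Matrix([[25, 15, -5], [15, 18, 0], [-5, 0, 11]])
L, D = A.LDLdecomposition()

# No square roots appear: the factorization identity holds, D is diagonal,
# and every diagonal entry of L is 1.
assert L * D * L.T == A
assert D.is_diagonal()
assert all(L[i, i] == 1 for i in range(L.rows))
```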
def _LDLdecomposition_sparse(M, hermitian=True):
    """
    Returns the LDL Decomposition (matrices ``L`` and ``D``) of matrix
    ``A``, such that ``L * D * L.T == A``. ``A`` must be a square,
    symmetric, positive-definite and non-singular matrix.

    This method eliminates the use of square root and ensures that all
    the diagonal entries of L are 1.

    Examples
    ========

    >>> from sympy import SparseMatrix
    >>> A = SparseMatrix(((25, 15, -5), (15, 18, 0), (-5, 0, 11)))
    >>> L, D = A.LDLdecomposition()
    >>> L
    Matrix([
    [   1,   0, 0],
    [ 3/5,   1, 0],
    [-1/5, 1/3, 1]])
    >>> D
    Matrix([
    [25, 0, 0],
    [ 0, 9, 0],
    [ 0, 0, 9]])
    >>> L * D * L.T == A
    True
    """

    from .dense import MutableDenseMatrix

    if not M.is_square:
        raise NonSquareMatrixError("Matrix must be square.")
    if hermitian and not M.is_hermitian:
        raise ValueError("Matrix must be Hermitian.")
    if not hermitian and not M.is_symmetric():
        raise ValueError("Matrix must be symmetric.")

    dps       = _get_intermediate_simp(expand_mul, expand_mul)
    Lrowstruc = M.row_structure_symbolic_cholesky()
    L         = MutableDenseMatrix.eye(M.rows)
    D         = MutableDenseMatrix.zeros(M.rows, M.cols)

    for i in range(len(Lrowstruc)):
        for j in Lrowstruc[i]:
            if i != j:
                L[i, j] = M[i, j]
                summ    = 0

                for p1 in Lrowstruc[i]:
                    if p1 < j:
                        for p2 in Lrowstruc[j]:
                            if p2 < j:
                                if p1 == p2:
                                    if hermitian:
                                        summ += L[i, p1]*L[j, p1].conjugate()*D[p1, p1]
                                    else:
                                        summ += L[i, p1]*L[j, p1]*D[p1, p1]
                            else:
                                break
                    else:
                        break

                L[i, j] = dps((L[i, j] - summ) / D[j, j])

            else:  # i == j
                D[i, i] = M[i, i]
                summ    = 0

                for k in Lrowstruc[i]:
                    if k < i:
                        if hermitian:
                            summ += L[i, k]*L[i, k].conjugate()*D[k, k]
                        else:
                            summ += L[i, k]**2*D[k, k]
                    else:
                        break

                D[i, i] = dps(D[i, i] - summ)

                if hermitian and D[i, i].is_positive is False:
                    raise NonPositiveDefiniteMatrixError(
                        "Matrix must be positive-definite")

    return M._new(L), M._new(D)


def _LUdecomposition(M, iszerofunc=_iszero, simpfunc=None, rankcheck=False):
    """Returns (L, U, perm) where L is a lower triangular matrix with unit
    diagonal, U is an upper triangular matrix, and perm is a list of row
    swap index pairs. If A is the original matrix, then
    ``A = (L*U).permute_backward(perm)``, and the row permutation matrix P
    such that $P A = L U$ can be computed by
    ``P = eye(A.rows).permute_forward(perm)``.

    See documentation for LUdecomposition_Simple for details about the
    keyword arguments rankcheck, iszerofunc, and simpfunc.

    Examples
    ========

    >>> from sympy import Matrix
    >>> a = Matrix([[4, 3], [6, 3]])
    >>> L, U, _ = a.LUdecomposition()
    >>> L
    Matrix([
    [  1, 0],
    [3/2, 1]])
    >>> U
    Matrix([
    [4,    3],
    [0, -3/2]])

    See Also
    ========

    sympy.matrices.dense.DenseMatrix.cholesky
    sympy.matrices.dense.DenseMatrix.LDLdecomposition
    QRdecomposition
    LUdecomposition_Simple
    LUdecompositionFF
    LUsolve
    """

    combined, p = M.LUdecomposition_Simple(iszerofunc=iszerofunc,
        simpfunc=simpfunc, rankcheck=rankcheck)

    # L is lower triangular ``M.rows x M.rows`` with unit diagonal.
    # For each column of ``combined``, the subcolumn below the diagonal
    # is shared with L; any remaining subcolumns of L are zero.
    def entry_L(i, j):
        if i < j:
            # Super diagonal entry
            return M.zero
        elif i == j:
            return M.one
        elif j < combined.cols:
            return combined[i, j]

        # Subdiagonal entry of L with no corresponding entry in combined
        return M.zero

    # U is upper triangular ``M.rows x M.cols``.
    def entry_U(i, j):
        return M.zero if i > j else combined[i, j]

    L = M._new(combined.rows, combined.rows, entry_L)
    U = M._new(combined.rows, combined.cols, entry_U)

    return L, U, p
def _LUdecomposition_Simple(M, iszerofunc=_iszero, simpfunc=None,
        rankcheck=False):
    r"""Compute the PLU decomposition of the matrix.

    Parameters
    ==========

    rankcheck : bool, optional
        Determines if this function should detect the rank
        deficiency of the matrix and should raise a
        ``ValueError``.

    iszerofunc : function, optional
        A function which determines if a given expression is zero.

        The function should be a callable that takes a single
        SymPy expression and returns a 3-valued boolean value
        ``True``, ``False``, or ``None``.

        It is internally used by the pivot searching algorithm.
        See the notes section for more information about the
        pivot searching algorithm.

    simpfunc : function or None, optional
        A function that simplifies the input.

        If this is specified as a function, this function should be
        a callable that takes a single SymPy expression and returns
        another SymPy expression that is algebraically equivalent.

        If ``None``, it indicates that the pivot search algorithm
        should not attempt to simplify any candidate pivots.

        It is internally used by the pivot searching algorithm.
        See the notes section for more information about the
        pivot searching algorithm.

    Returns
    =======

    (lu, row_swaps) : (Matrix, list)
        If the original matrix is a $m, n$ matrix:

        *lu* is a $m, n$ matrix, which contains the result of the
        decomposition in a compressed form. See the notes section
        to see how the matrix is compressed.

        *row_swaps* is a $m$-element list where each element is a
        pair of row exchange indices.

        ``A = (L*U).permute_backward(perm)``, and the row
        permutation matrix $P$ from the formula $P A = L U$ can be
        computed by ``P = eye(A.rows).permute_forward(perm)``.

    Raises
    ======

    ValueError
        Raised if ``rankcheck=True`` and the matrix is found to be
        rank deficient during the computation.

    Notes
    =====

    About the PLU decomposition:

    PLU decomposition is a generalization of an LU decomposition
    which can be extended for rank-deficient matrices.

    It can further be generalized for non-square matrices, and this
    is the notation that SymPy is using.

    PLU decomposition is a decomposition of a $m, n$ matrix $A$ in
    the form of $P A = L U$ where

    * $L$ is a $m, m$ lower triangular matrix with unit diagonal
        entries.
    * $U$ is a $m, n$ upper triangular matrix.
    * $P$ is a $m, m$ permutation matrix.

    So, for a square matrix, the decomposition would look like:

    .. math::
        L = \begin{bmatrix}
        1 & 0 & 0 & \cdots & 0 \\
        L_{1, 0} & 1 & 0 & \cdots & 0 \\
        L_{2, 0} & L_{2, 1} & 1 & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        L_{n-1, 0} & L_{n-1, 1} & L_{n-1, 2} & \cdots & 1
        \end{bmatrix}

    .. math::
        U = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, n-1} \\
        0 & U_{1, 1} & U_{1, 2} & \cdots & U_{1, n-1} \\
        0 & 0 & U_{2, 2} & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & 0 & \cdots & U_{n-1, n-1}
        \end{bmatrix}

    And for a matrix with more rows than the columns, the
    decomposition would look like:

    .. math::
        L = \begin{bmatrix}
        1 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
        L_{1, 0} & 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
        L_{2, 0} & L_{2, 1} & 1 & \cdots & 0 & 0 & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots & \vdots
        & \ddots & \vdots \\
        L_{n-1, 0} & L_{n-1, 1} & L_{n-1, 2} & \cdots & 1 & 0
        & \cdots & 0 \\
        L_{n, 0} & L_{n, 1} & L_{n, 2} & \cdots & L_{n, n-1} & 1
        & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots & \vdots
        & \ddots & \vdots \\
        L_{m-1, 0} & L_{m-1, 1} & L_{m-1, 2} & \cdots & L_{m-1, n-1}
        & 0 & \cdots & 1 \\
        \end{bmatrix}

    .. math::
        U = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, n-1} \\
        0 & U_{1, 1} & U_{1, 2} & \cdots & U_{1, n-1} \\
        0 & 0 & U_{2, 2} & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & 0 & \cdots & U_{n-1, n-1} \\
        0 & 0 & 0 & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & 0 & \cdots & 0
        \end{bmatrix}

    Finally, for a matrix with more columns than the rows, the
    decomposition would look like:

    .. math::
        L = \begin{bmatrix}
        1 & 0 & 0 & \cdots & 0 \\
        L_{1, 0} & 1 & 0 & \cdots & 0 \\
        L_{2, 0} & L_{2, 1} & 1 & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        L_{m-1, 0} & L_{m-1, 1} & L_{m-1, 2} & \cdots & 1
        \end{bmatrix}

    .. math::
        U = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, m-1}
        & \cdots & U_{0, n-1} \\
        0 & U_{1, 1} & U_{1, 2} & \cdots & U_{1, m-1}
        & \cdots & U_{1, n-1} \\
        0 & 0 & U_{2, 2} & \cdots & U_{2, m-1}
        & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots
        & \cdots & \vdots \\
        0 & 0 & 0 & \cdots & U_{m-1, m-1}
        & \cdots & U_{m-1, n-1} \\
        \end{bmatrix}

    About the compressed LU storage:

    The results of the decomposition are often stored in compressed
    forms rather than returning $L$ and $U$ matrices individually.

    It may be less intuitive, but it is commonly used for a lot of
    numeric libraries because of the efficiency.

    The storage matrix is defined as follows for this specific
    method:

    * The subdiagonal elements of $L$ are stored in the subdiagonal
        portion of $LU$, that is $LU_{i, j} = L_{i, j}$ whenever
        $i > j$.
    * The elements on the diagonal of $L$ are all 1, and are not
        explicitly stored.
    * $U$ is stored in the upper triangular portion of $LU$, that is
        $LU_{i, j} = U_{i, j}$ whenever $i <= j$.
    * For a case of $m > n$, the right side of the $L$ matrix is
        trivial to store.
    * For a case of $m < n$, the below side of the $U$ matrix is
        trivial to store.

    So, for a square matrix, the compressed output matrix would be:

    .. math::
        LU = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, n-1} \\
        L_{1, 0} & U_{1, 1} & U_{1, 2} & \cdots & U_{1, n-1} \\
        L_{2, 0} & L_{2, 1} & U_{2, 2} & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        L_{n-1, 0} & L_{n-1, 1} & L_{n-1, 2} & \cdots & U_{n-1, n-1}
        \end{bmatrix}

    For a matrix with more rows than the columns, the compressed
    output matrix would be:

    .. math::
        LU = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, n-1} \\
        L_{1, 0} & U_{1, 1} & U_{1, 2} & \cdots & U_{1, n-1} \\
        L_{2, 0} & L_{2, 1} & U_{2, 2} & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        L_{n-1, 0} & L_{n-1, 1} & L_{n-1, 2} & \cdots
        & U_{n-1, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        L_{m-1, 0} & L_{m-1, 1} & L_{m-1, 2} & \cdots
        & L_{m-1, n-1} \\
        \end{bmatrix}

    For a matrix with more columns than the rows, the compressed
    output matrix would be:

    .. math::
        LU = \begin{bmatrix}
        U_{0, 0} & U_{0, 1} & U_{0, 2} & \cdots & U_{0, m-1}
        & \cdots & U_{0, n-1} \\
        L_{1, 0} & U_{1, 1} & U_{1, 2} & \cdots & U_{1, m-1}
        & \cdots & U_{1, n-1} \\
        L_{2, 0} & L_{2, 1} & U_{2, 2} & \cdots & U_{2, m-1}
        & \cdots & U_{2, n-1} \\
        \vdots & \vdots & \vdots & \ddots & \vdots
        & \cdots & \vdots \\
        L_{m-1, 0} & L_{m-1, 1} & L_{m-1, 2} & \cdots & U_{m-1, m-1}
        & \cdots & U_{m-1, n-1} \\
        \end{bmatrix}

    About the pivot searching algorithm:

    When a matrix contains symbolic entries, the pivot search algorithm
    differs from the case where every entry can be categorized as zero
    or nonzero.

    The algorithm searches column by column through the submatrix whose
    top left entry coincides with the pivot position.

    If it exists, the pivot is the first entry in the current search
    column that iszerofunc guarantees is nonzero.

    If no such candidate exists, then each candidate pivot is
    simplified if simpfunc is not None.

    The search is repeated, with the difference that a candidate may be
    the pivot if ``iszerofunc()`` cannot guarantee that it is nonzero.

    In the second search the pivot is the first candidate that
    iszerofunc can guarantee is nonzero.

    If no such candidate exists, then the pivot is the first candidate
    for which iszerofunc returns None.

    If no such candidate exists, then the search is repeated in the
    next column to the right.

    The pivot search algorithm differs from the one in ``rref()``,
    which relies on ``_find_reasonable_pivot()``.
    Future versions of ``LUdecomposition_simple()`` may use
    ``_find_reasonable_pivot()``.

    See Also
    ========

    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    LUdecompositionFF
    LUsolve
    """

    if S.Zero in M.shape:
        # Define the LU decomposition of a matrix with no entries as a
        # matrix of the same dimensions with all zero entries.
        return M.zeros(M.rows, M.cols), []

    dps       = _get_intermediate_simp()
    lu        = M.as_mutable()
    row_swaps = []

    pivot_col = 0

    for pivot_row in range(0, lu.rows - 1):
        # Search for a pivot. Prefer an entry that iszerofunc determines
        # is nonzero over an entry that iszerofunc cannot guarantee is
        # zero.
        iszeropivot = True

        while pivot_col != M.cols and iszeropivot:
            sub_col = (lu[r, pivot_col] for r in range(pivot_row, M.rows))

            pivot_row_offset, pivot_value, is_assumed_non_zero, \
                ind_simplified_pairs = _find_reasonable_pivot_naive(
                    sub_col, iszerofunc, simpfunc)

            iszeropivot = pivot_value is None

            if iszeropivot:
                # All candidate pivots in this column are zero.
                # Proceed to the next column.
                pivot_col += 1

        if rankcheck and pivot_col != pivot_row:
            # All entries including and below the pivot position are
            # zero, which indicates that the rank of the matrix is
            # strictly less than min(num rows, num cols).
            raise ValueError("Rank of matrix is strictly less than"
                             " number of rows or columns."
                             " Pass keyword argument"
                             " rankcheck=False to compute"
                             " the LU decomposition of this matrix.")

        candidate_pivot_row = (None if pivot_row_offset is None
                               else pivot_row + pivot_row_offset)

        if candidate_pivot_row is None and iszeropivot:
            # If candidate_pivot_row is None and iszeropivot is True
            # after the pivot search has completed, then the submatrix
            # with rows and columns past pivot_row and pivot_col is
            # zero. The decomposition is complete.
            return lu, row_swaps

        # Update entries simplified during the pivot search.
        for offset, val in ind_simplified_pairs:
            lu[pivot_row + offset, pivot_col] = val

        if pivot_row != candidate_pivot_row:
            # Row swap book-keeping: record which rows were swapped,
            # swap the stored portion of the L factor, and swap the
            # pivot row of U with the candidate pivot row.
            row_swaps.append([pivot_row, candidate_pivot_row])

            lu[pivot_row, 0:pivot_row], lu[candidate_pivot_row, 0:pivot_row] = \
                lu[candidate_pivot_row, 0:pivot_row], lu[pivot_row, 0:pivot_row]

            lu[pivot_row, pivot_col:lu.cols], \
                lu[candidate_pivot_row, pivot_col:lu.cols] = \
                lu[candidate_pivot_row, pivot_col:lu.cols], \
                lu[pivot_row, pivot_col:lu.cols]

        # Introduce zeros below the pivot by adding a multiple of the
        # pivot row to each row under it, storing the result in place.
        # Only entries in the target row whose index is greater than
        # start_col may be nonzero.
        start_col = pivot_col + 1

        for row in range(pivot_row + 1, lu.rows):
            # Store the factors of L in the subcolumn below
            # (pivot_row, pivot_row).
            lu[row, pivot_row] = \
                dps(lu[row, pivot_col]/lu[pivot_row, pivot_col])

            # Form the linear combination of the pivot row and the
            # current row that zeros the entries below the pivot.
            for c in range(start_col, lu.cols):
                lu[row, c] = dps(lu[row, c] -
                                 lu[row, pivot_row]*lu[pivot_row, c])

        if pivot_row != pivot_col:
            # matrix rank < min(num rows, num cols), so factors of L
            # are not stored directly below the pivot. These entries
            # are zero by construction, so don't bother computing them.
            for row in range(pivot_row + 1, lu.rows):
                lu[row, pivot_col] = M.zero

        pivot_col += 1

        if pivot_col == lu.cols:
            # All candidate pivots are zero implies that Gaussian
            # elimination is complete.
            return lu, row_swaps

    if rankcheck:
        if iszerofunc(
                lu[Min(lu.rows, lu.cols) - 1, Min(lu.rows, lu.cols) - 1]):
            raise ValueError("Rank of matrix is strictly less than"
                             " number of rows or columns."
                             " Pass keyword argument"
                             " rankcheck=False to compute"
                             " the LU decomposition of this matrix.")

    return lu, row_swaps
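# A small check of the compressed storage described in the notes above:
# U sits on and above the diagonal of ``lu``, the subdiagonal factors of L
# sit below it (illustrative only, not part of the module):

```python
from sympy import Matrix, Rational

A = Matrix([[4, 3], [6, 3]])
lu, row_swaps = A.LUdecomposition_Simple()

# No row swaps are needed for this matrix.
assert row_swaps == []

# lu[1, 0] = 6/4 is the L factor; lu[0, :] and lu[1, 1] hold U.
assert lu == Matrix([[4, 3], [Rational(3, 2), Rational(-3, 2)]])
```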
def _LUdecompositionFF(M):
    """Compute a fraction-free LU decomposition.

    Returns 4 matrices P, L, D, U such that PA = L D**-1 U.
    If the elements of the matrix belong to some integral domain I, then all
    elements of L, D and U are guaranteed to belong to I.

    See Also
    ========

    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    LUdecomposition_Simple
    LUsolve

    References
    ==========

    .. [1] W. Zhou & D.J. Jeffrey, "Fraction-free matrix factors: new forms
        for LU and QR factors". Frontiers in Computer Science in China,
        Vol 2, no. 1, pp. 67-80, 2008.
    """

    from sympy.matrices import SparseMatrix

    zeros    = SparseMatrix.zeros
    eye      = SparseMatrix.eye
    n, m     = M.rows, M.cols
    U, L, P  = M.as_mutable(), eye(n), eye(n)
    DD       = zeros(n, n)
    oldpivot = 1

    for k in range(n - 1):
        if U[k, k] == 0:
            for kpivot in range(k + 1, n):
                if U[kpivot, k]:
                    break
            else:
                raise ValueError("Matrix is not full rank")

            U[k, k:], U[kpivot, k:] = U[kpivot, k:], U[k, k:]
            L[k, :k], L[kpivot, :k] = L[kpivot, :k], L[k, :k]
            P[k, :], P[kpivot, :]   = P[kpivot, :], P[k, :]

        L[k, k] = Ukk = U[k, k]
        DD[k, k] = oldpivot * Ukk

        for i in range(k + 1, n):
            L[i, k] = Uik = U[i, k]

            for j in range(k + 1, m):
                U[i, j] = (Ukk * U[i, j] - U[k, j]*Uik) / oldpivot

            U[i, k] = 0

        oldpivot = Ukk

    DD[n - 1, n - 1] = oldpivot * U[n - 1, n - 1]

    return P, L, DD, U
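# A quick check of the fraction-free identity ``PA = L D**-1 U`` stated in
# the docstring above, on an arbitrary nonsingular integer matrix
# (illustrative only, not part of the module):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])  # det = -3, full rank
P, L, D, U = A.LUdecompositionFF()

# The defining identity of the fraction-free decomposition.
assert P * A == L * D.inv() * U
```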
def _singular_value_decomposition(A):
    r"""Returns a Condensed Singular Value decomposition.

    Explanation
    ===========

    A Singular Value decomposition is a decomposition in the form
    $A = U \Sigma V^H$ where

    - $U, V$ are column orthogonal matrices.
    - $\Sigma$ is a diagonal matrix, where the main diagonal contains
      singular values of matrix A.

    A column orthogonal matrix satisfies
    $\mathbb{I} = U^H U$ while a full orthogonal matrix satisfies
    relation $\mathbb{I} = U U^H = U^H U$ where $\mathbb{I}$ is an
    identity matrix with matching dimensions.

    For matrices which are not square or are rank-deficient, it is
    sufficient to return a column orthogonal matrix because augmenting
    them may introduce redundant computations.
    For this reason, the condensed Singular Value decomposition returns
    only column orthogonal matrices.

    If you want to augment the results to return a full orthogonal
    decomposition, you should use the following procedures.

    - Augment the $U, V$ matrices with columns that are orthogonal to
      every other column and make them square.
    - Augment the $\Sigma$ matrix with zero rows to make it have the
      same shape as the original matrix.

    The procedure will be illustrated in the examples section.

    Examples
    ========

    We take a full rank matrix first:

    >>> from sympy import Matrix
    >>> A = Matrix([[1, 2], [2, 1]])
    >>> U, S, V = A.singular_value_decomposition()
    >>> U
    Matrix([
    [ sqrt(2)/2, sqrt(2)/2],
    [-sqrt(2)/2, sqrt(2)/2]])
    >>> S
    Matrix([
    [1, 0],
    [0, 3]])
    >>> V
    Matrix([
    [-sqrt(2)/2, sqrt(2)/2],
    [ sqrt(2)/2, sqrt(2)/2]])

    If a matrix is square and has full rank, both U and V are orthogonal
    in both directions:

    >>> U * U.H
    Matrix([
    [1, 0],
    [0, 1]])
    >>> U.H * U
    Matrix([
    [1, 0],
    [0, 1]])
    >>> V * V.H
    Matrix([
    [1, 0],
    [0, 1]])
    >>> V.H * V
    Matrix([
    [1, 0],
    [0, 1]])
    >>> A == U * S * V.H
    True

    >>> C = Matrix([
    ...         [1, 0, 0, 0, 2],
    ...         [0, 0, 3, 0, 0],
    ...         [0, 0, 0, 0, 0],
    ...         [0, 2, 0, 0, 0],
    ... ])
    >>> U, S, V = C.singular_value_decomposition()
    >>> V.H * V
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])
    >>> V * V.H
    Matrix([
    [1/5, 0, 0, 0, 2/5],
    [  0, 1, 0, 0,   0],
    [  0, 0, 1, 0,   0],
    [  0, 0, 0, 0,   0],
    [2/5, 0, 0, 0, 4/5]])

    If you want to augment the results to be a full orthogonal
    decomposition, you should augment $V$ with another orthogonal
    column.

    You can append standard basis vectors that are linearly
    independent of every other column and run the Gram-Schmidt process
    to turn the augmented set into an orthogonal basis.

    >>> V_aug = V.row_join(Matrix([[0, 0, 0, 0, 1],
    ... [0, 0, 0, 1, 0]]).H)
    >>> V_aug = V_aug.QRdecomposition()[0]
    >>> V_aug
    Matrix([
    [0,   sqrt(5)/5, 0, -2*sqrt(5)/5, 0],
    [1,           0, 0,            0, 0],
    [0,           0, 1,            0, 0],
    [0,           0, 0,            0, 1],
    [0, 2*sqrt(5)/5, 0,    sqrt(5)/5, 0]])
    >>> V_aug.H * V_aug
    Matrix([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1]])
    >>> V_aug * V_aug.H
    Matrix([
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1]])

    Similarly we augment U:

    >>> U_aug = U.row_join(Matrix([0, 0, 1, 0]))
    >>> U_aug = U_aug.QRdecomposition()[0]
    >>> U_aug
    Matrix([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0]])
    >>> U_aug.H * U_aug
    Matrix([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]])
    >>> U_aug * U_aug.H
    Matrix([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]])

    We add 2 zero columns and one row to S:

    >>> S_aug = S.col_join(Matrix([[0, 0, 0]]))
    >>> S_aug = S_aug.row_join(Matrix([[0, 0, 0, 0],
    ... [0, 0, 0, 0]]).H)
    >>> S_aug
    Matrix([
    [2,       0, 0, 0, 0],
    [0, sqrt(5), 0, 0, 0],
    [0,       0, 3, 0, 0],
    [0,       0, 0, 0, 0]])

    >>> U_aug * S_aug * V_aug.H == C
    True
    """

    AH = A.H
    m, n = A.shape

    if m >= n:
        V, S = (AH * A).diagonalize()

        ranked = []
        for i, x in enumerate(S.diagonal()):
            if not x.is_zero:
                ranked.append(i)

        V = V[:, ranked]

        Singular_vals = [sqrt(S[i, i]) for i in range(S.rows) if i in ranked]

        S = S.diag(*Singular_vals)

        V, _ = V.QRdecomposition()
        U = A * V * S.inv()
    else:
        U, S = (A * AH).diagonalize()

        ranked = []
        for i, x in enumerate(S.diagonal()):
            if not x.is_zero:
                ranked.append(i)

        U = U[:, ranked]

        Singular_vals = [sqrt(S[i, i]) for i in range(S.rows) if i in ranked]

        S = S.diag(*Singular_vals)

        U, _ = U.QRdecomposition()
        V = AH * U * S.inv()

    return U, S, V


def _QRdecomposition_optional(M, normalize=True):
    def dot(u, v):
        return u.dot(v, hermitian=True)

    dps = _get_intermediate_simp(expand_mul, expand_mul)

    A = M.as_mutable()
    ranked = []

    Q = A
    R = A.zeros(A.cols)

    for j in range(A.cols):
        for i in range(j):
            if Q[:, i].is_zero_matrix:
                continue

            R[i, j] = dot(Q[:, i], Q[:, j]) / dot(Q[:, i], Q[:, i])
            R[i, j] = dps(R[i, j])
            Q[:, j] -= Q[:, i] * R[i, j]

        Q[:, j] = dps(Q[:, j])

        if Q[:, j].is_zero_matrix is not True:
            ranked.append(j)
            R[j, j] = M.one

    Q = Q.extract(range(Q.rows), ranked)
    R = R.extract(ranked, range(R.cols))

    if normalize:
        # Normalization
        for i in range(Q.cols):
            norm = Q[:, i].norm()
            Q[:, i] /= norm
            R[i, :] *= norm

    return M.__class__(Q), M.__class__(R)
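# A quick check of ``singular_value_decomposition`` above, mirroring the
# docstring's full-rank example (illustrative only, not part of the module):

```python
from sympy import Matrix

A = Matrix([[1, 2], [2, 1]])
U, S, V = A.singular_value_decomposition()

# S is diagonal and the factorization reconstructs A.
assert S.is_diagonal()
assert A == U * S * V.H

# U and V are column orthogonal.
assert U.H * U == Matrix.eye(2)
assert V.H * V == Matrix.eye(2)
```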
def _QRdecomposition(M):
    r"""Returns a QR decomposition.

    Explanation
    ===========

    A QR decomposition is a decomposition in the form $A = Q R$
    where

    - $Q$ is a column orthogonal matrix.
    - $R$ is an upper triangular (trapezoidal) matrix.

    A column orthogonal matrix satisfies
    $\mathbb{I} = Q^H Q$ while a full orthogonal matrix satisfies
    relation $\mathbb{I} = Q Q^H = Q^H Q$ where $I$ is an identity
    matrix with matching dimensions.

    For matrices which are not square or are rank-deficient, it is
    sufficient to return a column orthogonal matrix because augmenting
    them may introduce redundant computations.
    Another advantage of this is that you can easily inspect the
    matrix rank by counting the number of columns of $Q$.

    If you want to augment the results to return a full orthogonal
    decomposition, you should use the following procedures.

    - Augment the $Q$ matrix with columns that are orthogonal to every
      other column and make it square.
    - Augment the $R$ matrix with zero rows to make it have the same
      shape as the original matrix.

    The procedure will be illustrated in the examples section.

    Examples
    ========

    A full rank matrix example:

    >>> from sympy import Matrix
    >>> A = Matrix([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
    >>> Q, R = A.QRdecomposition()
    >>> Q
    Matrix([
    [ 6/7, -69/175, -58/175],
    [ 3/7, 158/175,   6/175],
    [-2/7,    6/35,  -33/35]])
    >>> R
    Matrix([
    [14,  21, -14],
    [ 0, 175, -70],
    [ 0,   0,  35]])

    If the matrix is square and full rank, the $Q$ matrix becomes
    orthogonal in both directions, and needs no augmentation.

    >>> Q * Q.H
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])
    >>> Q.H * Q
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])

    >>> A == Q*R
    True

    A rank-deficient matrix example:

    >>> A = Matrix([[12, -51, 0], [6, 167, 0], [-4, 24, 0]])
    >>> Q, R = A.QRdecomposition()
    >>> Q
    Matrix([
    [ 6/7, -69/175],
    [ 3/7, 158/175],
    [-2/7,    6/35]])
    >>> R
    Matrix([
    [14,  21, 0],
    [ 0, 175, 0]])

    QRdecomposition might return a matrix Q that is rectangular.
    In this case the orthogonality condition might be satisfied as
    $\mathbb{I} = Q.H*Q$ but not in the reversed product
    $\mathbb{I} = Q * Q.H$.

    >>> Q.H * Q
    Matrix([
    [1, 0],
    [0, 1]])
    >>> Q * Q.H
    Matrix([
    [27261/30625,   348/30625, -1914/6125],
    [  348/30625, 30589/30625,   198/6125],
    [ -1914/6125,    198/6125,   136/1225]])

    If you want to augment the results to be a full orthogonal
    decomposition, you should augment $Q$ with another orthogonal
    column.

    You can append columns of an identity matrix and run the
    Gram-Schmidt process to turn the augmented set into an orthogonal
    basis.

    >>> Q_aug = Q.row_join(Matrix.eye(3))
    >>> Q_aug = Q_aug.QRdecomposition()[0]
    >>> Q_aug
    Matrix([
    [ 6/7, -69/175, 58/175],
    [ 3/7, 158/175, -6/175],
    [-2/7,    6/35,  33/35]])
    >>> Q_aug.H * Q_aug
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])
    >>> Q_aug * Q_aug.H
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])

    Augmenting the $R$ matrix with zero rows is straightforward.

    >>> R_aug = R.col_join(Matrix([[0, 0, 0]]))
    >>> R_aug
    Matrix([
    [14,  21, 0],
    [ 0, 175, 0],
    [ 0,   0, 0]])
    >>> Q_aug * R_aug == A
    True

    A zero matrix example:

    >>> from sympy import Matrix
    >>> A = Matrix.zeros(3, 4)
    >>> Q, R = A.QRdecomposition()

    They may return matrices with zero rows and columns.

    >>> Q
    Matrix(3, 0, [])
    >>> R
    Matrix(0, 4, [])
    >>> Q*R
    Matrix([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0]])

    Following the same augmentation rule described above, $Q$ can be
    augmented with columns of an identity matrix and $R$ can be
    augmented with rows of a zero matrix.

    >>> Q_aug = Q.row_join(Matrix.eye(3))
    >>> R_aug = R.col_join(Matrix.zeros(3, 4))
    >>> Q_aug * Q_aug.T
    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])
    >>> R_aug
    Matrix([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0]])
    >>> Q_aug * R_aug == A
    True

    See Also
    ========

    sympy.matrices.dense.DenseMatrix.cholesky
    sympy.matrices.dense.DenseMatrix.LDLdecomposition
    sympy.matrices.matrixbase.MatrixBase.LUdecomposition
    QRsolve
    """

    return _QRdecomposition_optional(M, normalize=True)


def _upper_hessenberg_decomposition(A):
    """Converts a matrix into Hessenberg matrix H.

    Returns two matrices H, P s.t.
    $P H P^{T} = A$, where H is an upper hessenberg matrix
    and P is an orthogonal matrix.

    Examples
    ========

    >>> from sympy import Matrix
    >>> A = Matrix([
    ...     [1,2,3],
    ...     [-3,5,6],
    ...     [4,-8,9],
    ... ])
    >>> H, P = A.upper_hessenberg_decomposition()
    >>> H
    Matrix([
    [1,    6/5,    17/5],
    [5, 213/25, -134/25],
    [0, 216/25,  137/25]])
    >>> P
    Matrix([
    [1,    0,   0],
    [0, -3/5, 4/5],
    [0,  4/5, 3/5]])
    >>> P * H * P.H == A
    True

    References
    ==========

    .. [#] https://mathworld.wolfram.com/HessenbergDecomposition.html
    """

    M = A.as_mutable()

    if not M.is_square:
        raise NonSquareMatrixError("Matrix must be square.")

    n = M.cols
    P = M.eye(n)
    H = M

    for j in range(n - 2):

        u = H[j + 1:, j]

        if u[1:, :].is_zero_matrix:
            continue

        if sign(u[0]) != 0:
            u[0] = u[0] + sign(u[0]) * u.norm()
        else:
            u[0] = u[0] + u.norm()

        v = u / u.norm()

        H[j + 1:, :] = H[j + 1:, :] - 2 * v * (v.H * H[j + 1:, :])
        H[:, j + 1:] = H[:, j + 1:] - (H[:, j + 1:] * (2 * v)) * v.H
        P[:, j + 1:] = P[:, j + 1:] - (P[:, j + 1:] * (2 * v)) * v.H

    return H, P
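# Quick checks of the two routines above, mirroring their docstring
# examples (illustrative only, not part of the module):

```python
from sympy import Matrix

# QR: Q is column orthogonal and Q*R reconstructs A.
A = Matrix([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
Q, R = A.QRdecomposition()
assert Q * R == A
assert Q.H * Q == Matrix.eye(3)
assert R.is_upper

# Upper Hessenberg: H has zeros below the first subdiagonal and
# P H P^H reconstructs the original matrix.
B = Matrix([[1, 2, 3], [-3, 5, 6], [4, -8, 9]])
H, P = B.upper_hessenberg_decomposition()
assert P * H * P.H == B
assert H[2, 0] == 0
```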