"""Implement various linear algebra algorithms for low rank matrices."""

__all__ = ["svd_lowrank", "pca_lowrank"]

from typing import Optional

import torch
from torch import _linalg_utils as _utils, Tensor
from torch.overrides import handle_torch_function, has_torch_function


def get_approximate_basis(
    A: Tensor,
    q: int,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> Tensor:
    """Return tensor :math:`Q` with :math:`q` orthonormal columns such
    that :math:`Q Q^H A` approximates :math:`A`. If :math:`M` is
    specified, then :math:`Q` is such that :math:`Q Q^H (A - M)`
    approximates :math:`A - M`, without instantiating any tensors
    of the size of :math:`A` or :math:`M`.

    .. note:: The implementation is based on the Algorithm 4.4 from
              Halko et al., 2009.

    .. note:: For an adequate approximation of a k-rank matrix
              :math:`A`, where k is not known in advance but could be
              estimated, the number of :math:`Q` columns, q, can be
              chosen according to the following criteria: in general,
              :math:`k <= q <= min(2*k, m, n)`. For large low-rank
              matrices, take :math:`q = k + 5..10`. If k is
              relatively small compared to :math:`min(m, n)`, choosing
              :math:`q = k + 0..2` may be sufficient.

    .. note:: To obtain repeatable results, reset the seed for the
              pseudorandom number generator.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`

        q (int): the dimension of subspace spanned by :math:`Q`
                 columns.

        niter (int, optional): the number of subspace iterations to
                               conduct; ``niter`` must be a
                               nonnegative integer. In most cases, the
                               default value 2 is more than enough.

        M (Tensor, optional): the input tensor's mean of size
                              :math:`(*, m, n)`.

    References::
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <https://arxiv.org/abs/0909.4061>`_).
    """
    niter = 2 if niter is None else niter
    dtype = _utils.get_floating_dtype(A) if not A.is_complex() else A.dtype
    matmul = _utils.matmul

    R = torch.randn(A.shape[-1], q, dtype=dtype, device=A.device)

    # Subspace iteration: alternate multiplication by A and A^H,
    # re-orthonormalizing with a QR decomposition at each step.
    X = matmul(A, R)
    if M is not None:
        X = X - matmul(M, R)
    Q = torch.linalg.qr(X).Q
    for _ in range(niter):
        X = matmul(A.mH, Q)
        if M is not None:
            X = X - matmul(M.mH, Q)
        Q = torch.linalg.qr(X).Q
        X = matmul(A, Q)
        if M is not None:
            X = X - matmul(M, Q)
        Q = torch.linalg.qr(X).Q
    return Q


def svd_lowrank(
    A: Tensor,
    q: Optional[int] = 6,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> tuple[Tensor, Tensor, Tensor]:
    r"""Return the singular value decomposition ``(U, S, V)`` of a matrix,
    batches of matrices, or a sparse matrix :math:`A` such that
    :math:`A \approx U \operatorname{diag}(S) V^{\text{H}}`. In case
    :math:`M` is given, then SVD is computed for the matrix
    :math:`A - M`.

    .. note:: The implementation is based on the Algorithm 5.1 from
              Halko et al., 2009.

    .. note:: For an adequate approximation of a k-rank matrix
              :math:`A`, where k is not known in advance but could be
              estimated, the number of :math:`Q` columns, q, can be
              chosen according to the following criteria: in general,
              :math:`k <= q <= min(2*k, m, n)`. For large low-rank
              matrices, take :math:`q = k + 5..10`. If k is
              relatively small compared to :math:`min(m, n)`, choosing
              :math:`q = k + 0..2` may be sufficient.

    .. note:: This is a randomized method. To obtain repeatable results,
              set the seed for the pseudorandom number generator.

    .. note:: In general, use the full-rank SVD implementation
              :func:`torch.linalg.svd` for dense matrices due to its 10x
              higher performance characteristics. The low-rank SVD
              will be useful for huge sparse matrices that
              :func:`torch.linalg.svd` cannot handle.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`

        q (int, optional): a slightly overestimated rank of A.

        niter (int, optional): the number of subspace iterations to
                               conduct; niter must be a nonnegative
                               integer, and defaults to 2

        M (Tensor, optional): the input tensor's mean of size
                              :math:`(*, m, n)`, which will be
                              broadcasted to the size of A in this
                              function.

    References::
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <https://arxiv.org/abs/0909.4061>`_).
    """
    if not torch.jit.is_scripting():
        tensor_ops = (A, M)
        if not set(map(type, tensor_ops)).issubset(
            (torch.Tensor, type(None))
        ) and has_torch_function(tensor_ops):
            return handle_torch_function(
                svd_lowrank, tensor_ops, A, q=q, niter=niter, M=M
            )
    return _svd_lowrank(A, q=q, niter=niter, M=M)


def _svd_lowrank(
    A: Tensor,
    q: Optional[int] = 6,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> tuple[Tensor, Tensor, Tensor]:
    # Algorithm 5.1 in Halko et al., 2009

    q = 6 if q is None else q
    m, n = A.shape[-2:]
    matmul = _utils.matmul
    if M is not None:
        M = M.broadcast_to(A.size())

    # Assume that A is tall; if not, operate on the conjugate transpose
    # and swap U and V at the end.
    if m < n:
        A = A.mH
        if M is not None:
            M = M.mH

    Q = get_approximate_basis(A, q, niter=niter, M=M)
    B = matmul(Q.mH, A)
    if M is not None:
        B = B - matmul(Q.mH, M)
    U, S, Vh = torch.linalg.svd(B, full_matrices=False)
    V = Vh.mH
    U = Q.matmul(U)

    if m < n:
        U, V = V, U

    return U, S, V


def pca_lowrank(
    A: Tensor,
    q: Optional[int] = None,
    center: bool = True,
    niter: int = 2,
) -> tuple[Tensor, Tensor, Tensor]:
    r"""Performs linear Principal Component Analysis (PCA) on a low-rank
    matrix, batches of such matrices, or sparse matrix.

    This function returns a tuple ``(U, S, V)`` which is the nearly
    optimal approximation of a singular value decomposition of the
    centered matrix :math:`A` such that
    :math:`A \approx U \operatorname{diag}(S) V^{\text{H}}`.

    .. note:: The implementation is based on the Algorithm 5.1 from
              Halko et al., 2009.

    .. note:: This is a randomized method. To obtain repeatable results,
              set the seed for the pseudorandom number generator.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`

        q (int, optional): a slightly overestimated rank of
                           :math:`A`. By default, ``q = min(6, m, n)``.

        center (bool, optional): if True, center the input tensor;
                                 otherwise, assume that the input is
                                 centered.

        niter (int, optional): the number of subspace iterations to
                               conduct; niter must be a nonnegative
                               integer, and defaults to 2.

    References::
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <https://arxiv.org/abs/0909.4061>`_).
    """
    if not torch.jit.is_scripting():
        if type(A) is not torch.Tensor and has_torch_function((A,)):
            return handle_torch_function(
                pca_lowrank, (A,), A, q=q, center=center, niter=niter
            )

    (m, n) = A.shape[-2:]

    if q is None:
        q = min(6, m, n)
    elif not (q >= 0 and q <= min(m, n)):
        raise ValueError(
            f"q(={q}) must be non-negative integer "
            f"and not greater than min(m, n)={min(m, n)}"
        )
    if not (niter >= 0):
        raise ValueError(f"niter(={niter}) must be non-negative integer")

    dtype = _utils.get_floating_dtype(A)

    if not center:
        return _svd_lowrank(A, q, niter=niter, M=None)

    if _utils.is_sparse(A):
        if len(A.shape) != 2:
            raise ValueError("pca_lowrank input is expected to be 2-dimensional tensor")
        # For a sparse input, pass the (rank-1) mean matrix separately as M
        # instead of densifying A - M.
        c = torch.sparse.sum(A, dim=(-2,)) / m
        # Reshape the column means c into a sparse (n, 1) tensor C_t.
        column_indices = c.indices()[0]
        indices = torch.zeros(
            2,
            len(column_indices),
            dtype=column_indices.dtype,
            device=column_indices.device,
        )
        indices[0] = column_indices
        C_t = torch.sparse_coo_tensor(
            indices, c.values(), (n, 1), dtype=dtype, device=A.device
        )

        ones_m1_t = torch.ones(A.shape[:-2] + (1, m), dtype=dtype, device=A.device)
        M = torch.sparse.mm(C_t, ones_m1_t).mT
        return _svd_lowrank(A, q, niter=niter, M=M)
    else:
        C = A.mean(dim=(-2,), keepdim=True)
        return _svd_lowrank(A - C, q, niter=niter, M=None)
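A usage sketch of the low-rank SVD (not part of this module; the matrix sizes, dtype, and tolerance are illustrative): build an exactly rank-3 matrix and check that ``torch.svd_lowrank`` with a slightly overestimated ``q`` recovers it to working precision.

```python
import torch

torch.manual_seed(0)
# Construct an exactly rank-3 matrix A = L @ R of size 100 x 40.
L = torch.randn(100, 3, dtype=torch.float64)
R = torch.randn(3, 40, dtype=torch.float64)
A = L @ R

# q slightly overestimates the true rank, as the docstring recommends.
U, S, V = torch.svd_lowrank(A, q=6, niter=2)

# Reconstruct A ~= U diag(S) V^H and measure the relative error; for an
# exactly rank-3 matrix this should be near machine precision in float64.
A_hat = U @ torch.diag(S) @ V.mH
rel_err = (torch.linalg.norm(A - A_hat) / torch.linalg.norm(A)).item()
print(f"relative reconstruction error: {rel_err:.2e}")
```

Because the method is randomized, the error varies with the seed; for a matrix whose true rank does not exceed ``q`` it stays at the level of floating-point round-off.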