from __future__ import annotations

import functools
from typing import Callable, TYPE_CHECKING, Union

import torch
from functorch._C import dim as _C
from ._parsing import (
    _ellipsis,
    AnonymousAxis,
    comma_separate,
    parse_pattern,
    validate_rearrange_expressions,
)

if TYPE_CHECKING:
    from collections.abc import Sequence

__all__ = ["rearrange"]

dims = _C.dims


@functools.lru_cache(256)
def _create_rearrange_callable(
    tensor_ndim: int, pattern: str, **axes_lengths: int
) -> Callable[[torch.Tensor], torch.Tensor]:
    r"""Translate an `einops`-style pattern into a callable that performs the rearrange using first-class dimensions.

    Since an equivalent result is computed for tensors with the same number of dimensions, with the same pattern and
    specified axes lengths, this function can be memoized.

    Args:
        tensor_ndim (int): the number of dimensions in the tensor to rearrange
        pattern (str): the `einops`-style rearrangement pattern
        axes_lengths (int): any additional length specifications for dimensions

    Returns:
        Callable[[torch.Tensor], torch.Tensor]: a callable that performs the rearrangement
    """
    left, right = parse_pattern(pattern, axes_lengths)
    validate_rearrange_expressions(left, right, axes_lengths)

    n_anon_dims = sum(not dim for dim in left.composition)
    if left.has_ellipsis:
        n_ellipsis_dims = tensor_ndim - (len(left.composition) - 1)
        n_named_dims = len(left.identifiers) - 1

        if (pattern_ndim := n_anon_dims + n_named_dims) > tensor_ndim:
            raise ValueError(
                f"Number of dimensions in pattern ({pattern_ndim}) must be less than or equal to the number of "
                f"dimensions in the tensor ({tensor_ndim})"
            )
    else:
        n_ellipsis_dims = 0
        n_named_dims = len(left.identifiers)

        if (pattern_ndim := len(left.composition)) != tensor_ndim:
            raise ValueError(
                f"Number of dimensions in pattern ({pattern_ndim}) must be equal to the number of dimensions in "
                f"the tensor ({tensor_ndim})"
            )
    n_dims = n_named_dims + n_ellipsis_dims + n_anon_dims

    if n_dims == 0:
        # an identity rearrangement on a 0-dimensional tensor
        return lambda tensor: tensor

    first_class_dims: tuple[str, ...] = tuple(f"d{i}" for i in range(n_dims))
    identifier_dim_map: dict[Union[str, AnonymousAxis], tuple[str, ...]] = {}
    anon_axes: list[AnonymousAxis] = []

    # map the left-hand side identifiers to strings representing first class dims
    dims_i = 0
    for dimension in left.composition:
        if isinstance(dimension, list):
            for identifier in dimension:
                # non-unitary anonymous axes are not allowed in rearrange & unitary anonymous axes are
                # represented as empty lists
                assert isinstance(identifier, str)
                identifier_dim_map[identifier] = (first_class_dims[dims_i],)
                dims_i += 1
            if not dimension:
                # unitary anonymous axis
                anon_axis = AnonymousAxis("1")
                identifier_dim_map[anon_axis] = (first_class_dims[dims_i],)
                anon_axes.append(anon_axis)
                dimension.append(anon_axis)
                dims_i += 1
        elif dimension == _ellipsis:
            identifier = _ellipsis
            identifier_dim_map[identifier] = tuple(
                first_class_dims[dims_i + j] for j in range(n_ellipsis_dims)
            )
            dims_i += n_ellipsis_dims
        else:
            raise ValueError(f"Unexpected dimension: {dimension}")

    def composition_to_dims(
        composition: Sequence[Union[list[Union[str, AnonymousAxis]], str]],
    ) -> list[Union[str, tuple[str, ...]]]:
        """Convert a `ParsedExpression.composition` into a `Tensor.__getitem__` index of strings representing first
        class dims."""
        dim_composition: list[Union[str, tuple[str, ...]]] = []
        for dimension in composition:
            if isinstance(dimension, list):
                dim_composition.append(
                    tuple(
                        dim
                        for identifier in dimension
                        for dim in identifier_dim_map[identifier]
                    )
                )
            elif dimension == _ellipsis:
                dim_composition.extend(identifier_dim_map[_ellipsis])
            else:
                raise ValueError(f"Unexpected dimension: {dimension}")
        return dim_composition

    left_dims = composition_to_dims(left.composition)
    right_dims = composition_to_dims(right.composition)
    anon_dims = tuple(identifier_dim_map[axis][0] for axis in anon_axes)
    specified_lengths = tuple(
        (identifier_dim_map[axis][0], length) for axis, length in axes_lengths.items()
    )

    custom_rearrange_callable_name = "do_rearrange"
    custom_rearrange_callable_code = (
        (
            f"def {custom_rearrange_callable_name}(tensor):\n"
            f"    {comma_separate(first_class_dims)} = dims({n_dims})\n"
        )
        + (
            "".join(
                f"    {dim}.size = {length}\n" for (dim, length) in specified_lengths
            )
            if specified_lengths
            else ""
        )
        + f"    tensor = tensor[{comma_separate(left_dims)}].order({comma_separate(right_dims)})\n"
        + (
            f"    return tensor.sum({comma_separate([anon_dims])}, keepdim=False)\n"
            if anon_dims
            else "    return tensor\n"
        )
    )

    exec(custom_rearrange_callable_code)
    return locals()[custom_rearrange_callable_name]


def rearrange(
    tensor: Union[torch.Tensor, list[torch.Tensor], tuple[torch.Tensor, ...]],
    pattern: str,
    **axes_lengths: int,
) -> torch.Tensor:
    r"""A native implementation of `einops.rearrange`, a reader-friendly smart element reordering for multidimensional
    tensors. This operation includes functionality of transpose (axes permutation), reshape (view), squeeze, unsqueeze,
    stack, concatenate and other operations.

    See: https://einops.rocks/api/rearrange/

    Args:
        tensor (Tensor or sequence of Tensor): the tensor(s) to rearrange
        pattern (str): the rearrangement pattern
        axes_lengths (int): any additional length specifications for dimensions

    Returns:
        Tensor: the rearranged tensor

    Examples:
        >>> # suppose we have a set of 32 images in "h w c" format (height-width-channel)
        >>> images = torch.randn((32, 30, 40, 3))

        >>> # stack along first (batch) axis, output is a single array
        >>> rearrange(images, "b h w c -> b h w c").shape
        torch.Size([32, 30, 40, 3])

        >>> # concatenate images along height (vertical axis), 960 = 32 * 30
        >>> rearrange(images, "b h w c -> (b h) w c").shape
        torch.Size([960, 40, 3])

        >>> # concatenated images along horizontal axis, 1280 = 32 * 40
        >>> rearrange(images, "b h w c -> h (b w) c").shape
        torch.Size([30, 1280, 3])

        >>> # reordered axes to "b c h w" format for deep learning
        >>> rearrange(images, "b h w c -> b c h w").shape
        torch.Size([32, 3, 30, 40])

        >>> # flattened each image into a vector, 3600 = 30 * 40 * 3
        >>> rearrange(images, "b h w c -> b (c h w)").shape
        torch.Size([32, 3600])

        >>> # split each image into 4 smaller (top-left, top-right, bottom-left, bottom-right), 128 = 32 * 2 * 2
        >>> rearrange(images, "b (h1 h) (w1 w) c -> (b h1 w1) h w c", h1=2, w1=2).shape
        torch.Size([128, 15, 20, 3])

        >>> # space-to-depth operation
        >>> rearrange(images, "b (h h1) (w w1) c -> b h w (c h1 w1)", h1=2, w1=2).shape
        torch.Size([32, 15, 20, 12])
    """
    if not isinstance(tensor, torch.Tensor):
        tensor = torch.stack(tensor)

    rearrange_callable = _create_rearrange_callable(
        tensor.ndim, pattern, **axes_lengths
    )

    return rearrange_callable(tensor)