L i*4 ddlZddlZddlmZddlmZddlmZmZm Z m Z m Z ddl Z ddl mZddl mZddlmZgZdeded eeed feed fffd Zd e e j.deded eeeffd Zdeed fdeed fd eeed feeefffdZe deeeZe de j:eZe dede j>de d eefdZ!e dede j>de d eefdZ!dZ!d=dedede d dfdZ"de j:de jFd dfdZ$de j:fd Z%e d!Z&e d"eeee'eeeZ(e d#ee j:ge&fd$e j:d e&fd%Z)e d#ee j:gefd$e(d e(fd&Z)d'Z)deed fde eeefde j>de d eeed feeeefd fff d(Z* d>d)ejVd*ee j:d+e d,fd-Z, d=d.ejZd)ejVd/e.d0e.d1eed2e d dfd3Z/d)ejVd4ee j:d/e.d0e.d df d5Z0d6eeefd7ed8ed dfd9Z1de j:d e fd:Z2d;eejZd eejZfd<Z3y)?N) OrderedDict) Container)AnyCallableOptionaloverloadTypeVar)nn)PackedSequenceargskwargsreturn.cg}t|}|jD]'\}}|j||j|)t|t|fS)a Turn argument list into separate key list and value list (unpack_kwargs does the opposite). Inspiration: https://github.com/facebookresearch/fairscale/blob/eeb6684/fairscale/internal/containers.py#L70 Usage:: kwarg_keys, flat_args = pack_kwargs(1, 2, a=3, b=4) assert kwarg_keys == ("a", "b") assert flat_args == (1, 2, 3, 4) args, kwargs = unpack_kwargs(kwarg_keys, flat_args) assert args == (1, 2) assert kwargs == {"a": 3, "b": 4} Returns: Tuple[Tuple[Any, ...], Tuple[str, ...]]: The first tuple element gives gives both positional args and kwarg values, where the positional args proceed kwarg values and kwarg values are ordered consistently with the kwarg keys. The second tuple element gives the kwarg keys. The second tuple element's length is at most the first tuple element's length. )listitemsappendtuple)r r kwarg_keys flat_argskvs ]/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/torch/distributed/utils.py _pack_kwargsrs^(J:I 1!  U:. ..dtypec||fSdtjdtjffd }t||t||fS)z Cast floating point tensors in ``args`` and ``kwargs`` to ``input_dtype``. This respects the existing ``requires_grad`` on the tensors. xrcrtj|r|jk(r|S|jSN)torchis_floating_pointrto)rrs rcast_fnz%_cast_forward_inputs..cast_fn;s/&&q)QWW-=HttE{r)r Tensor_apply_to_tensors)rr r r#s` r_cast_forward_inputsr&.sK }V|5<<ELL gt ,.?.P QQrrrc t|t|ksJdt|dt|t|dk(r|ifS|dt| }tt||t| d}||fS)zSee _pack_kwargs.ztoo many keys z vs. rN)lendictzip)rrr r s r_unpack_kwargsr+Cs z?c)n , Z)s9~.>? , :!"} 'J' (D #j)S_,<,>"?@ AF <rSTinputs target_device!use_side_stream_for_tensor_copiescyrr.r/r0s r _recursive_tor4Usrcyrr2r3s rr4r4[src>fd |}d|S#dwxYw)z-Recursively moves input to the target_device.cvt|tjtfrVt|tr|jj n |j }| k(r|fS s|j fS|jdk(r|j fSddlm }| }|5|j }dddtjj j5tjj}|j|t|trjj|n-ttjsJ|j|ddd|fSddlm}||r,t%t' |Dcgc]}t||c}St|t(r)t+|dkDrt-t%t' |St|t,r7t+|dkDr)t%t' |Dcgc] }t-|c}St|t.rKt+|dkDr=t%t' |j1Dcgc]}t||c}S|gS#1swYxYw#1swYfSxYwcc}wcc}wcc}w)Ncpur) _get_stream_is_namedtuple) isinstancer r$r datadevicer"typetorch.nn.parallel._functionsr9 accelerator device_indexindexcurrent_stream wait_stream record_stream torch.nn.parallel.scatter_gatherr;r*maprr(rr)r) objr>r9streamoutputrDr;r ir/to_mapr0s rrMz_recursive_to..to_mapds5 cELL.9 :(23(GSXX__SZZF&v 4}-//;;%'FF=133D%]33 VVM2F3&&33M4G4GH =%*%6%6%E%E%GN"..v6"#~6 11.A)&%,,???,,^< =y C # 14cF1GHIDIt$H H c5 !c#hlS-./ / c4 SX\%(#fc*:%;<DG< < c4 SX\*-s6399;/G*HIQIDIaLI Iu 533 =y I=Is+"J%B JJ,J13J6JJ)Nr2)r.r/r0resrMs `` @rr4r4as*+\Vn Jscondsraise_assertion_errorcb|s-t|tj|r t|yy)zwAlternate to ``assert`` when in the backward context to print the error message ``s`` since otherwise, it is swallowed.N)print traceback print_stackAssertionError)rOrPrQs r _p_assertrWs0  a  # # ! rtensorsizectj5tjjj s|j j |jk(}|sZ|j j }t|dk(d|j j|jdddy#1swYyxYw)z Allocate storage for ``tensor`` with the given size. Returns: bool: ``True`` if this method allocated storage and ``False`` if the storage was already allocated. rzCTensor storage should have been resized to be 0 but got PLACEHOLDErN) r no_grad distributed_functional_collectivesis_torchdynamo_compiling_typed_storage_sizenumelrW_resize_)rXrYalready_allocatedtensor_storage_sizes r_alloc_storageres  ?  88QQS & 5 5 7 = = ?4::< O $&,&;&;&=&C&C&E#'1,Y%%'00> ? ? ?s B4CCc tj5tjjj s|j j dk(}|syt|jdk(d|jd|j j d|j|j jddddy#1swYyxYw)z Frees the underlying storage of ``tensor``. Returns: bool: ``True`` if the method freed the storage and ``False`` if the storage was already freed. rzVFreeing a tensor's storage is unsafe when it is not the sole occupant storage offset: z storage size: z tensor shape: N) r r[r\r]r^r_r`rWstorage_offsetshaperb)rX already_freeds r _free_storagerjs  4  88QQS"11399;q@M ))+q0''-'<'<'>&?@%%+%:%:%<%B%B%D$EF%%+\\N4%%'003 4 4 4s CC##C,QRfn containercyrr2rmrns rr%r%s rcyrr2rps rr%r%sMPrc"fd|S)zFRecursively apply to all tensor in different kinds of container types.c \ddlm}t|tjr |St |drrt j|}t j|Dcic])}|j t||j+}}t j|fi|St|tr5|j}|jD]\}} |||<|St|tr |j|St|t r*|jDcic]\}}| |c}}S||r fd|D}t#||St|t$t&t(frt#| fd|DS|Scc}wcc}}w)Nrr:__dataclass_fields__c3.K|] }|ywrr2.0elapplys r z3_apply_to_tensors..apply..s)59)c3.K|] }|ywrr2rvs rrzz3_apply_to_tensors..apply..s1591r{)rGr;r<r r$hasattr dataclassesreplacefieldsnamegetattrr __class__rr r=r)r?rrset) rr;dcfchangesodkeyvaluerNryrms rryz _apply_to_tensors..applyspC a &a5L Q. /$$Q'B * !&&MH 4 89 B*#uCu%B B A )q)C47C= D%- .471q11 1H)Cs ".F#4F(r2)rmrnrys` @rr%r%s:  rc|r t|||ng}|r t|||ng}t|t|kr>|jtt|t|z Dcgc]}dc}nTt|t|kr=|jtt|t|z Dcgc]}ic}t |t |fScc}wcc}w)Nr2)r4r(extendranger)r.r r/r0 moved_inputs moved_kwargs_s r _to_kwargsrs  fm-NO   fm-NO   <3|,,sboolr4rWSizererjrkrrlr%r ProcessGrouprModuleintrrrrrr2rrrs-#%== - //s/uU38_eCQTHo5U/V/:R EKK R RR 38_ R* S#X ,1#s(O  5c?DcN *+  CtU# C~.  #llOS !W    #llOS 1X  5p$C$C$$$?5<<?uzz?d?(4%,,4, CL CtUCncJ  %,,"# 05      P(ELL>3#67PAP!P P F4 #s(O4 T#s(^ $4<<4(, 4  5c?E$sCx.#"56 67 44'+P$$P %,, P ] #P# W IIW$$WW W #,C. W  W W:  $$   %        S#X 03 33tBII4 ?r