# mypy: allow-untyped-defs
import dataclasses
import inspect
import sys
from collections.abc import Iterable, Iterator
from typing import Any, Callable, Literal, Optional, overload, Union

import torch
import torch.utils._pytree as pytree
from torch import _C, _utils_internal
from torch._ops import OpOverload


@dataclasses.dataclass
class Kernel:
    """Models a (function, source location)"""

    func: Callable
    source: str

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)


class RegistrationHandle:
    """Does something when someone calls .destroy() on it"""

    def __init__(self, on_destroy: Callable):
        self._on_destroy = on_destroy

    def destroy(self) -> None:
        self._on_destroy()


def get_source(stacklevel: int) -> str:
    """Get a string that represents the caller.

    Example: "/path/to/foo.py:42"

    Use stacklevel=1 to get the caller's source
    Use stacklevel=2 to get the caller's caller's source
    etc.
    """
    frame = inspect.getframeinfo(sys._getframe(stacklevel))
    source = f"{frame.filename}:{frame.lineno}"
    return source


def parse_namespace(qualname: str) -> tuple[str, str]:
    splits = qualname.split("::")
    if len(splits) != 2:
        raise ValueError(
            f"Expected `qualname` to be of the form "
            f'"namespace::name", but got {qualname}. '
            f"The qualname passed to the torch.library APIs must consist "
            f"of a namespace and a name, e.g. aten::sin"
        )
    return splits[0], splits[1]


def lookup_op(qualname: str) -> OpOverload:
    namespace, name = parse_namespace(qualname)
    if "." in name:
        name, overload = name.split(".")
    else:
        overload = "default"
    ns = getattr(torch.ops, namespace)
    packet = getattr(ns, name)
    return getattr(packet, overload)


def is_builtin(op: OpOverload) -> bool:
    assert isinstance(op, OpOverload)
    return op.namespace in {"aten", "prim", "prims"}


def is_functional_schema(schema: Any) -> bool:
    """Check if the schema is functional.

    An operator is functional if:
    - it does not mutate any of its inputs
    - it does not return a view on any of its inputs
    - it has at least one return
    """

    def is_functional(schema):
        if schema.is_mutable:
            return False
        rets = schema.returns
        is_non_mutating_view = len(rets) > 0 and any(
            r.alias_info is not None and not r.alias_info.is_write for r in rets
        )
        if is_non_mutating_view:
            return False
        if not schema.returns:
            return False
        return True

    if isinstance(schema, torch._C.FunctionSchema):
        return is_functional(schema)

    # Lazy import because not all PyTorch builds have torchgen
    from torchgen.model import FunctionSchema

    if isinstance(schema, str):
        schema = FunctionSchema.parse(schema)
    assert isinstance(schema, FunctionSchema)
    return is_functional(schema)


def is_tensorlist_like_type(typ: Any) -> bool:
    return (
        typ == _C.ListType(_C.TensorType.get())
        or typ == _C.ListType(_C.OptionalType(_C.TensorType.get()))
        or typ == _C.OptionalType(_C.ListType(_C.TensorType.get()))
        or typ == _C.OptionalType(_C.ListType(_C.OptionalType(_C.TensorType.get())))
    )


def is_tensor_like_type(typ: Any) -> bool:
    return typ == _C.TensorType.get() or typ == _C.OptionalType(_C.TensorType.get())


def mutates_and_returns_first_arg(op: OpOverload):
    """Check if an op is an inplace aten op, i.e. it mutates and returns the first arg.

    TODO: torchgen/model.py's FunctionSchema.parse is the source of truth for this,
    but not all PyTorch builds have torchgen (due to the yaml dependency being weird).
    Figure this out.

    Example: add_(Tensor(a!) x, Tensor y) -> Tensor(a)
    """
    if op.namespace != "aten":
        return False
    schema = op._schema
    if not len(schema.returns) == 1:
        return False
    if schema.returns[0].alias_info is None:
        return False
    alias_set = schema.returns[0].alias_info.after_set
    if len(alias_set) != 1:
        return False
    loc = next(iter(alias_set))
    if len(schema.arguments) < 1:
        return False
    first_arg = schema.arguments[0]
    if first_arg.alias_info is None:
        return False
    if not first_arg.alias_info.is_write:
        return False
    alias_set = first_arg.alias_info.after_set
    if len(alias_set) != 1:
        return False
    if loc != next(iter(alias_set)):
        return False
    for arg in schema.arguments[1:]:
        if arg.alias_info is not None:
            return False
    return True
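

# Usage sketch for the name-resolution helpers above (illustrative only, not
# part of the upstream module; "aten::sin" is just an example of the
# "namespace::name" form these functions expect):
#
#     ns, name = parse_namespace("aten::sin")    # ("aten", "sin")
#     op = lookup_op("aten::sin")                # torch.ops.aten.sin.default
#     assert is_builtin(op)                      # "aten" is a builtin namespace
#     assert is_functional_schema(op._schema)    # no mutation, no view, one return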


def fill_defaults(schema, args, kwargs):
    new_args = []
    new_kwargs = {}
    for i in range(len(schema.arguments)):
        info = schema.arguments[i]
        if info.kwarg_only:
            if info.name in kwargs:
                new_kwargs[info.name] = kwargs[info.name]
            else:
                new_kwargs[info.name] = info.default_value
        else:
            if i < len(args):
                new_args.append(args[i])
            else:
                new_args.append(info.default_value)
    return tuple(new_args), new_kwargs


def zip_schema(
    schema: _C.FunctionSchema, args: tuple[Any, ...], kwargs: dict[str, Any]
) -> Iterable[tuple[_C.Argument, Any]]:
    """zips schema.arguments and (args, kwargs) together.

    Assumes that (args, kwargs) were the inputs to some torch._ops.OpOverload:
    that is, (args, kwargs) must be bindable to the schema (args, kwargs).
    """
    assert len(schema.arguments) >= len(args) + len(kwargs)
    for i in range(len(schema.arguments)):
        info = schema.arguments[i]
        if info.kwarg_only:
            if info.name in kwargs:
                yield info, kwargs[info.name]
            continue
        if i >= len(args):
            # args that are equal to their default values are not populated
            # if they are followed by args that are equal to their defaults.
            # Skip these.
            if not info.kwarg_only and info.name in kwargs:
                yield info, kwargs[info.name]
            continue
        yield info, args[i]


def hop_schema_from_fx_node(node):
    from torchgen.gen_schema_utils import FunctionSchemaGen

    hop = node.target
    if not isinstance(hop, torch._ops.HigherOrderOperator):
        raise RuntimeError("fx_node's target must be a hop.")

    def _collect_example_val(node):
        meta_val = node.meta.get("val", None)
        if meta_val is None:
            assert node.op == "get_attr"
            meta_val = getattr(node.graph.owning_module, node.target)
        return meta_val

    example_inputs = []
    for arg in node.args:
        if isinstance(arg, (torch.fx.Node, torch.fx.node.Node)):
            example_inputs.append(_collect_example_val(arg))
        elif isinstance(
            arg, (torch.fx.immutable_collections.immutable_list, list, tuple)
        ):
            example_inputs.append([_collect_example_val(x) for x in arg])
        else:
            raise RuntimeError(f"Unsupported arg type {type(arg)}")

    # Bind the arguments to make sure the number of inputs is correct
    bound_args: inspect.BoundArguments = inspect.signature(hop.__call__).bind(
        *example_inputs
    )

    example_output = _collect_example_val(node)
    return FunctionSchemaGen.from_example(
        hop._name, tuple(bound_args.arguments.items()), (list(example_output),)
    )


def can_generate_trivial_fake_impl(op: OpOverload) -> bool:
    assert isinstance(op, OpOverload)
    if is_builtin(op):
        # We control the built-ins. These may (in rare cases)
        # do input metadata mutation (which we have banned on custom ops)
        return False
    schema = op._schema
    # It's suspicious if the op is not mutable but returns nothing, so we
    # return False out of an abundance of caution
    if not schema.is_mutable:
        return False
    if len(schema.returns) > 0:
        return False
    # If the op returns nothing, then it has a trivial fake impl.
    return True


def requires_set_python_module() -> bool:
    """If an op was defined in C++ and extended from Python using the
    torch.library APIs, returns if we require that there have been a
    m.set_python_module("mylib.ops") call from C++ that associates
    the C++ op with a python module.
    """
    return getattr(_utils_internal, "REQUIRES_SET_PYTHON_MODULE", True)


def handle_dispatch_mode(curr_mode, op_overload, *args, **kwargs):
    assert isinstance(curr_mode, torch.utils._python_dispatch.TorchDispatchMode)
    args_flattened, _ = torch.utils._pytree.tree_flatten((args, kwargs.values()))
    # Only Tensors carrying the Python dispatch key participate in
    # __torch_dispatch__ type resolution.
    overload_types = [
        type(a)
        for a in args_flattened
        if isinstance(a, torch.Tensor)
        and torch._C._dispatch_keys(a).has(torch._C.DispatchKey.Python)
    ]
    return curr_mode.__torch_dispatch__(op_overload, overload_types, args, kwargs)


def has_kwarg_only_args(schema: _C.FunctionSchema):
    return any(a.kwarg_only for a in schema.arguments)


def has_kwarg_only_tensors(schema: _C.FunctionSchema):
    for a in schema.arguments:
        if not (is_tensor_like_type(a.type) or is_tensorlist_like_type(a.type)):
            continue
        if not a.kwarg_only:
            continue
        return True
    return False


def has_tensor_arg(schema: _C.FunctionSchema) -> bool:
    """
    Given a schema, returns True if the schema has a Tensor arg.
    A Tensor arg is any arg with a type annotation that might involve Tensor.
    """
    return any(
        is_tensor_like_type(a.type) or is_tensorlist_like_type(a.type)
        for a in schema.arguments
    )


def get_device_arg_index(schema: _C.FunctionSchema) -> Optional[int]:
    """
    Given a schema, returns the id of the `device: torch.device` argument.
    If it does not exist, returns None.
    """
    for index, arg in enumerate(schema.arguments):
        if arg.type is _C.DeviceObjType.get() and arg.name == "device":
            return index
    return None


def iter_tensors(
    args: tuple[Any], kwargs: dict[str, Any], allowed_nesting: int = 1
) -> Iterator[torch.Tensor]:
    def check(arg):
        if isinstance(arg, torch.Tensor):
            yield arg
        elif allowed_nesting > 0 and isinstance(arg, (tuple, list)):
            yield from iter_tensors(tuple(arg), {}, allowed_nesting - 1)

    for arg in args:
        yield from check(arg)
    for kwarg in kwargs.values():
        yield from check(kwarg)


def check_aliasing_constraint(name, prev, result, get_module=lambda: "???"):
    """
    custom operators' outputs must not alias any inputs or other outputs
    """
    storages = {
        id(t.untyped_storage()) for t in prev if isinstance(t, torch.Tensor)
    }
    tuple_result = result
    if not isinstance(result, tuple):
        tuple_result = (result,)
    for tensor in iter_tensors(tuple_result, {}):
        key = id(tensor.untyped_storage())
        if key in storages:
            raise RuntimeError(
                f"{name} (with implementation in {get_module()}): "
                f"The output of this custom operator (1) must not also be an "
                f"input to this custom operator and (2) may not alias any "
                f"inputs to this custom operator or other returns. The most "
                f"common way to trigger this error is if we have y = custom_op(x) "
                f"and y and x are the same Tensor. Please instead return a clone "
                f"of the offending output tensor(s) (e.g. return x.clone()) or "
                f"refactor the custom operator to not return y."
            )
        storages.add(key)
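

# Usage sketch for zip_schema (illustrative only, not part of the upstream
# module). aten::add.Tensor has schema
# "add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor";
# the kwarg-only, defaulted `alpha` is skipped when absent from kwargs:
#
#     op = lookup_op("aten::add.Tensor")
#     args, kwargs = (torch.randn(3), torch.randn(3)), {}
#     for info, value in zip_schema(op._schema, args, kwargs):
#         print(info.name)    # prints "self", then "other"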


def _c_check_aliasing_constraint(name, args, kwargs, result, get_module=lambda: "???"):
    """
    custom operators' outputs must not have any aliases
    This version uses C++ implementation for perf.
    Only List container is supported.
    Tensors in Lists with not only Tensors are checked.
    """
    tuple_result = result
    if not isinstance(result, tuple):
        tuple_result = (result,)
    if _C._any_output_is_alias_to_input_or_output(args, kwargs, tuple_result):
        raise RuntimeError(
            f"{name} (with implementation in {get_module()}): "
            f"The output of this custom operator (1) must not also be an input "
            f"to this custom operator and (2) may not alias any inputs to this "
            f"custom operator or other returns. Please instead return a clone of "
            f"the offending output tensor(s) (e.g. return x.clone()) or refactor "
            f"the custom operator to not return the offending tensor."
        )


class MutationChecker:
    """
    Check if an operator mutated its arguments.
    Usage:

    checker = MutationChecker(op, flat_args, args_spec)
    op(*args, **kwargs)
    checker.check()
    """

    def __init__(self, op, flat_args, args_spec):
        self.op = op
        self.args_spec = args_spec
        self.flat_args = flat_args
        self.real_pre_hashes = [
            hash_tensor(a) if isinstance(a, torch.Tensor) else None for a in flat_args
        ]

    def check(self):
        real_post_hashes = [
            hash_tensor(a) if isinstance(a, torch.Tensor) else None
            for a in self.flat_args
        ]
        was_mutated = [
            not torch.equal(pre, post)
            and not (pre.isnan().all() and post.isnan().all())
            if isinstance(pre, torch.Tensor) and isinstance(post, torch.Tensor)
            else None
            for pre, post in zip(self.real_pre_hashes, real_post_hashes)
        ]
        was_mutated_args, was_mutated_kwargs = pytree.tree_unflatten(
            was_mutated, self.args_spec
        )
        for info, was_mutated_arg in zip_schema(
            self.op._schema, was_mutated_args, was_mutated_kwargs
        ):

            def check_one(info, was_mutated):
                schema_says_mutates = (
                    info.alias_info is not None and info.alias_info.is_write
                )
                if schema_says_mutates == was_mutated:
                    return
                raise RuntimeError(
                    f"{self.op._name}: for argument '{info.name}': the operator's "
                    f"schema {self.op._schema} specified that the operator "
                    f"{'mutates' if schema_says_mutates else 'does not mutate'} "
                    f"the argument, but this seems to be empirically wrong. "
                    f"Please make the schema and operator behavior consistent. "
                    f"You can specify that an operator mutates a Tensor by e.g. "
                    f"changing its schema type from 'Tensor name' to "
                    f"'Tensor(a!) name' (use different identifiers (a, b, c, ...) "
                    f"for different Tensors)"
                )

            if is_tensor_like_type(info.type):
                check_one(info, was_mutated_arg)
            elif is_tensorlist_like_type(info.type):
                was_any_mutated = (
                    False if was_mutated_arg is None else any(was_mutated_arg)
                )
                check_one(info, was_any_mutated)


def hash_tensor(t: torch.Tensor) -> torch.Tensor:
    """Some inexpensive hash. Used as a quick and dirty indicator for tensor mutation"""
    return t.detach().float().mean()


def has_fake_kernel(op: torch._ops.OpOverload) -> bool:
    """If an operator (that stays alive until FakeTensorMode) has a Fake kernel.
    Don't use this if the operator decomposes before FakeTensorMode.
    """
    if can_generate_trivial_fake_impl(op):
        return True
    name = op._name
    if torch._C._dispatch_has_kernel_for_dispatch_key(
        name, "CompositeImplicitAutograd"
    ):
        return True
    opdef = torch._library.custom_ops._maybe_get_opdef(name)
    if opdef is None:
        if torch._C._dispatch_has_kernel_for_dispatch_key(
            name, "CompositeExplicitAutograd"
        ):
            return True
        entry = torch._library.simple_registry.singleton.find(name)
        if entry.fake_impl.kernel is not None:
            return True
        if torch._C._dispatch_has_kernel_for_dispatch_key(name, "Meta"):
            return True
    else:
        if opdef._abstract_fn is not None:
            return True
    return False


def mutated_args_kwargs(schema: _C.FunctionSchema) -> tuple[list[int], list[str]]:
    idxs = []
    keys = []
    for i, info in enumerate(schema.arguments):
        if info.alias_info is not None and info.alias_info.is_write:
            if info.kwarg_only:
                keys.append(info.name)
            else:
                idxs.append(i)
    return idxs, keys


tags_by_priority = [
    _C.Tag.needs_exact_strides,
    _C.Tag.needs_fixed_stride_order,
    _C.Tag.needs_contiguous_strides,
    _C.Tag.flexible_layout,
]


@overload
def get_layout_constraint_tag(
    fn: Any, *, with_default: Literal[True] = True
) -> _C.Tag: ...


@overload
def get_layout_constraint_tag(
    fn: Any, *, with_default: Literal[False]
) -> Optional[_C.Tag]: ...


def get_layout_constraint_tag(fn, *, with_default=True):
    for tag in tags_by_priority:
        if tag in fn.tags:
            return tag
    if with_default:
        if is_builtin(fn):
            return _C.Tag.flexible_layout
        from torch._inductor import config

        return getattr(_C.Tag, config.custom_op_default_layout_constraint)
    return None
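

# Usage sketch for MutationChecker (illustrative only, not part of the
# upstream module). It compares cheap per-tensor hashes (detached float
# means) taken before and after the call against what the schema claims.
# aten::add.Tensor mutates nothing, so this check passes; an op that mutated
# an input without a (a!) schema annotation would raise:
#
#     op = lookup_op("aten::add.Tensor")
#     args, kwargs = (torch.randn(3), torch.randn(3)), {}
#     flat_args, args_spec = pytree.tree_flatten((args, kwargs))
#     checker = MutationChecker(op, flat_args, args_spec)
#     op(*args, **kwargs)
#     checker.check()    # raises RuntimeError on a schema/behavior mismatch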