# mypy: allow-untyped-defs
import contextlib
import functools
import inspect
import re
import sys
import traceback
import weakref
from collections.abc import Sequence
from typing import (
    Any,
    Callable,
    Literal,
    Optional,
    overload,
    TYPE_CHECKING,
    TypeVar,
    Union,
)

from typing_extensions import deprecated, ParamSpec

import torch
import torch._library as _library  # noqa: F401
from torch._library.custom_ops import (
    _cast,
    _maybe_get_opdef,
    custom_op,
    CustomOpDef,
    device_types_t,
)
from torch._library.infer_schema import infer_schema  # noqa: F401
from torch._library.triton import triton_op, wrap_triton
from torch._ops import OpOverload
from torch.types import _dtype


__all__ = [
    "Library",
    "impl",
    "define",
    "fallthrough_kernel",
    "impl_abstract",
    "register_autocast",
    "register_fake",
    "register_torch_dispatch",
    "register_vmap",
    "get_ctx",
    "get_kernel",
    "custom_op",
    "triton_op",
    "wrap_triton",
    "infer_schema",
]

_T = TypeVar("_T")
_P = ParamSpec("_P")

# Set containing the combination of (namespace, operator, DispatchKey) for which
# a new kernel has been registered from Python. Keys are of the form
# `namespace + "/" + op_name + "/" + dispatch_key`. The set is maintained so
# that two libraries don't try to override the same operator behavior for the
# same dispatch key.
_impls: set[str] = set()

# Set containing the combination of (namespace, operator) for which a new
# operator schema has been defined from Python.
_defs: set[str] = set()

# prim is reserved by the TorchScript interpreter
_reserved_namespaces = ["prim"]


def fallthrough_kernel():
    """
    A dummy function to pass to ``Library.impl`` in order to register a fallthrough.
    """
    raise NotImplementedError("fallthrough_kernel() should never be called.")


class Library:
    """
    A class to create libraries that can be used to register new operators or
    override operators in existing libraries from Python.
    A user can optionally pass in a dispatch keyname if they only want to register
    kernels corresponding to only one specific dispatch key.

    To create a library to override operators in an existing library (with name ns), set the kind to "IMPL".
    To create a new library (with name ns) to register new operators, set the kind to "DEF".
    To create a fragment of a possibly existing library to register operators (and bypass
    the limitation that there is only one library for a given namespace), set the kind to
    "FRAGMENT".

    Args:
        ns: library name
        kind: "DEF", "IMPL", "FRAGMENT"
        dispatch_key: PyTorch dispatch key (default: "")
    """

    def __init__(self, ns, kind, dispatch_key=""):
        from torch.fx.operator_schemas import _SCHEMA_TO_SIGNATURE_CACHE

        if kind not in ("IMPL", "DEF", "FRAGMENT"):
            raise ValueError("Unsupported kind: ", kind)

        if ns in _reserved_namespaces and (kind == "DEF" or kind == "FRAGMENT"):
            raise ValueError(
                ns,
                " is a reserved namespace. Please try creating a library with another name.",
            )

        frame = traceback.extract_stack(limit=3)[0]
        filename, lineno = frame.filename, frame.lineno
        self.m = torch._C._dispatch_library(kind, ns, dispatch_key, filename, lineno)
        self.ns = ns
        self._op_defs: set[str] = set()
        self._op_impls: set[str] = set()
        self._registration_handles: list = []
        self.kind = kind
        self.dispatch_key = dispatch_key
        # Use a finalizer to set up the "destructor" instead of __del__.
        # Python __del__ can lead to weird things (globals and locals may
        # already be gone when __del__ actually gets called!). A finalizer
        # lets us capture the references we need and keeps them alive.
        weakref.finalize(
            self,
            _del_library,
            _impls,
            self._op_impls,
            _defs,
            self._op_defs,
            self._registration_handles,
            _SCHEMA_TO_SIGNATURE_CACHE,
        )

    def __repr__(self):
        return f"Library(kind={self.kind}, ns={self.ns}, dispatch_key={self.dispatch_key})>"

    def define(self, schema, alias_analysis="", *, tags=()):
        r"""Defines a new operator and its semantics in the ns namespace.

        Args:
            schema: function schema to define a new operator.
            alias_analysis (optional): Indicates if the aliasing properties of the operator arguments can be
                                       inferred from the schema (default behavior) or not ("CONSERVATIVE").
            tags (Tag | Sequence[Tag]): one or more torch.Tag to apply to this
                                       operator. Tagging an operator changes the operator's behavior
                                       under various PyTorch subsystems; please read the docs for the
                                       torch.Tag carefully before applying it.

        Returns:
            name of the operator as inferred from the schema.

        Example::
            >>> my_lib = Library("mylib", "DEF")
            >>> my_lib.define("sum(Tensor self) -> Tensor")
        """
        # This also disallows the PURE_FUNCTION alias analysis, which is a
        # valid AliasAnalysis type in C++ but not exposed here.
        if alias_analysis not in ["", "FROM_SCHEMA", "CONSERVATIVE"]:
            raise RuntimeError(f"Invalid alias_analysis type {alias_analysis}")
        assert self.m is not None
        if isinstance(tags, torch.Tag):
            tags = (tags,)

        name = schema.split("(")[0]
        packet_name = name.split(".")[0] if "." in name else name
        has_preexisting_packet = hasattr(torch.ops, self.ns) and hasattr(
            getattr(torch.ops, self.ns), packet_name
        )

        result = self.m.define(schema, alias_analysis, tuple(tags))
        name = schema.split("(")[0]
        qualname = self.ns + "::" + name

        # If the OpOverloadPacket exists already, then this means we're adding
        # a new OpOverload for it. Refresh the packet to include the new
        # OpOverload.
        if has_preexisting_packet:
            ns = getattr(torch.ops, self.ns)
            packet = getattr(ns, packet_name)
            torch._ops._refresh_packet(packet)

        self._op_defs.add(qualname)
        _defs.add(qualname)
        return result

    def _register_fake(self, op_name, fn, _stacklevel=1, *, allow_override=False):
        r"""Registers the fake impl for an operator defined in the library."""
        source = torch._library.utils.get_source(_stacklevel + 1)
        frame = sys._getframe(_stacklevel)
        caller_module = inspect.getmodule(frame)
        # Can be none if the caller is exec/eval
        caller_module_name = None if caller_module is None else caller_module.__name__

        # TODO(rzou): We're gonna need to stop relying on the caller module
        # name for torchvision; their fake impls live elsewhere.
        if caller_module_name is not None and caller_module_name.startswith(
            "torchvision."
        ):
            caller_module_name = None

        qualname = f"{self.ns}::{op_name}"
        entry = torch._library.simple_registry.singleton.find(qualname)
        if caller_module_name is not None:
            func_to_register = _check_pystubs_once(fn, qualname, caller_module_name)
        else:
            func_to_register = fn

        handle = entry.fake_impl.register(
            func_to_register, source, allow_override=allow_override
        )
        self._registration_handles.append(handle)

    def _register_torch_dispatch_rule(self, op_name, torch_dispatch_class, fn):
        r"""Registers a torch_dispatch rule for the given operator and torch_dispatch_class.

        This allows for open registration to specify the behavior between the operator
        and the torch_dispatch_class without needing to modify the torch_dispatch_class
        or the operator directly.

        The torch_dispatch_class is either a Tensor subclass with `__torch_dispatch__` or a
        TorchDispatchMode.

        If it is a Tensor subclass, we expect fn to have the following signature:
        (cls, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any

        If it is a TorchDispatchMode, we expect fn to have the following signature:
        (mode, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any
        """
        qualname = f"{self.ns}::{op_name}"
        entry = torch._library.simple_registry.singleton.find(qualname)
        handle = entry.torch_dispatch_rules.register(torch_dispatch_class, fn)
        self._registration_handles.append(handle)

    def _impl_with_aoti_compile(self, op_name, dispatch_key=""):
        r"""Register the operator to use the AOTI-compiled implementation.

        Args:
            op_name: operator name (along with the overload) or OpOverload object.
            dispatch_key: dispatch key that the input function should be registered for. By default, it uses
                          the dispatch key that the library was created with.

        Example::
            >>> my_lib = Library("aten", "IMPL")
            >>> my_lib._impl_with_aoti_compile("div.Tensor", "CPU")
        """
        if dispatch_key == "":
            dispatch_key = self.dispatch_key
        assert torch._C.DispatchKeySet(dispatch_key).has(torch._C.DispatchKey.Dense)

        if isinstance(op_name, str):
            name = op_name
        elif isinstance(op_name, OpOverload):
            name = op_name._schema.name
            overload_name = op_name._schema.overload_name
            if overload_name != "":
                name = name + "." + overload_name
        else:
            raise RuntimeError(
                "_impl_with_aoti_compile should be passed either a name or an "
                "OpOverload object as the first argument"
            )

        key = self.ns + "/" + name.split("::")[-1] + "/" + dispatch_key
        if key in _impls:
            # TODO: in future, add more info about where the existing function is registered (this info is
            # today already returned by the C++ warning when impl is called but we error out before that)
            raise RuntimeError(
                "This is not allowed since there's already a kernel registered from python overriding {}"
                "'s behavior for {} dispatch key and {} namespace.".format(
                    name.split("::")[-1], dispatch_key, self.ns
                )
            )

        assert self.m is not None
        impl_fn: Callable = self.m.impl_with_aoti_compile
        impl_fn(self.ns, name.split("::")[-1], dispatch_key)

        _impls.add(key)
        self._op_impls.add(key)

    def impl(
        self, op_name, fn, dispatch_key="", *, with_keyset=False, allow_override=False
    ):
        r"""Registers the function implementation for an operator defined in the library.

        Args:
            op_name: operator name (along with the overload) or OpOverload object.
            fn: function that's the operator implementation for the input dispatch key or :func:`~fallthrough_kernel`
                to register a fallthrough.
            dispatch_key: dispatch key that the input function should be registered for. By default, it uses
                          the dispatch key that the library was created with.
            with_keyset: flag controlling if the current dispatcher call keyset should be passed as the first argument
                         to :attr:`fn` when calling. This should be used to create the appropriate keyset for redispatch calls.
            allow_override: flag controlling if we want to override an existing registered kernel implementation.
                            This is off by default, and an error is raised if you try to register a kernel to a
                            dispatch key that already has a kernel registered.

        Example::
            >>> my_lib = Library("aten", "IMPL")
            >>> def div_cpu(self, other):
            >>>     return self * (1 / other)
            >>> my_lib.impl("div.Tensor", div_cpu, "CPU")
        """
        if not callable(fn):
            raise TypeError(
                f"Input function is required to be a callable but found type {type(fn)}"
            )
        if dispatch_key == "":
            dispatch_key = self.dispatch_key

        if isinstance(op_name, str):
            name = op_name
        elif isinstance(op_name, OpOverload):
            name = op_name._schema.name
            overload_name = op_name._schema.overload_name
            if overload_name != "":
                name = name + "." + overload_name
        else:
            raise RuntimeError(
                "impl should be passed either a name or an OpOverload object "
                "as the first argument"
            )

        key = self.ns + "/" + name.split("::")[-1] + "/" + dispatch_key
        if not allow_override and key in _impls:
            # TODO: in future, add more info about where the existing function is registered (this info is
            # today already returned by the C++ warning when impl is called but we error out before that)
            raise RuntimeError(
                "This is not allowed since there's already a kernel registered from python overriding {}"
                "'s behavior for {} dispatch key and {} namespace.".format(
                    name.split("::")[-1], dispatch_key, self.ns
                )
            )

        if dispatch_key == "Meta":
            dispatcher_op_name = name
            if "::" not in dispatcher_op_name:
                dispatcher_op_name = f"{self.ns}::{dispatcher_op_name}"

            # Internally, we shouldn't be registering meta kernels for any
            # operators that have CompositeImplicitAutograd kernels. Instead,
            # we should let those decompositions run and write meta kernels
            # only for the base operators.
            if torch._C._dispatch_has_kernel_for_dispatch_key(
                dispatcher_op_name, "CompositeImplicitAutograd"
            ):
                raise RuntimeError(
                    f"We should not register a meta kernel directly to the operator '{name}',"
                    " because it has a CompositeImplicitAutograd kernel in core."
                    " Instead we should let the operator decompose, and ensure that we have meta kernels"
                    " for the base ops that it decomposes into."
                )

        assert self.m is not None
        self.m.impl(
            name,
            dispatch_key if dispatch_key != "" else "CompositeImplicitAutograd",
            fn,
            with_keyset,
        )

        _impls.add(key)
        self._op_impls.add(key)

    def fallback(self, fn, dispatch_key="", *, with_keyset=False):
        r"""Registers the function implementation as the fallback for the given key.

        This function only works for a library with global namespace ("_").

        Args:
            fn: function used as fallback for the given dispatch key or :func:`~fallthrough_kernel`
                to register a fallthrough.
            dispatch_key: dispatch key that the input function should be registered for. By default, it uses
                          the dispatch key that the library was created with.
            with_keyset: flag controlling if the current dispatcher call keyset should be passed as the first argument
                         to :attr:`fn` when calling. This should be used to create the appropriate keyset for redispatch calls.

        Example::
            >>> my_lib = Library("_", "IMPL")
            >>> def fallback_kernel(op, *args, **kwargs):
            >>>     # Handle all autocast ops generically
            >>>     # ...
            >>> my_lib.fallback(fallback_kernel, "Autocast")
        """
        if dispatch_key == "":
            dispatch_key = self.dispatch_key

        if self.ns != "_":
            raise RuntimeError(
                f'Fallback can only be registered using library fragment on the '
                f'global namespace "_" but it is {self.ns}'
            )

        assert dispatch_key != ""
        assert self.m is not None

        self.m.fallback(dispatch_key, fn, with_keyset)

    def _destroy(self):
        if self.m is not None:
            self.m.reset()
        self.m = None
        for handle in self._registration_handles:
            handle.destroy()
        self._registration_handles.clear()
        global _impls
        _impls -= self._op_impls
        for name in self._op_defs:
            # Delete the cached torch.ops.ns.foo if it was registered.
            # Otherwise, accessing it leads to a segfault.
            # It's possible that we only registered an overload in this Library
            # and another library owns an alive overload.
            # That's OK - the next time torch.ops.ns.foo gets called, it'll be
            # recomputed to point at the right collection of overloads.
            ns, name_with_overload = name.split("::")
            name = name_with_overload.split(".")[0]
            if not hasattr(torch.ops, ns):
                continue
            namespace = getattr(torch.ops, ns)
            if not hasattr(namespace, name):
                continue
            delattr(namespace, name)
            namespace._dir.remove(name)


def _del_library(
    captured_impls,
    op_impls,
    captured_defs,
    op_defs,
    registration_handles,
    schema_to_signature_cache,
):
    for op_def in op_defs:
        name = op_def
        overload_name = ""
        if "." in op_def:
            name, overload_name = op_def.split(".")
        if (name, overload_name) in schema_to_signature_cache:
            del schema_to_signature_cache[(name, overload_name)]
    captured_impls -= op_impls
    captured_defs -= op_defs
    for handle in registration_handles:
        handle.destroy()


@contextlib.contextmanager
def _scoped_library(*args, **kwargs):
    try:
        lib = Library(*args, **kwargs)
        yield lib
    finally:
        lib._destroy()


_keep_alive: list[Library] = []


NAMELESS_SCHEMA = re.compile(r"\(.*\) -> .*")
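
# Illustrative sketch (not part of the upstream module): how the pieces above
# are typically combined in tests. ``_scoped_library`` tears all registrations
# down on exit, and ``fallthrough_kernel`` asks the dispatcher to fall through
# a dispatch key. The "mylib::mysum" op and its CPU kernel are hypothetical.
def _example_scoped_library_usage():
    # On exit from the ``with`` block, everything registered through ``lib``
    # is removed again, which is why this helper is handy in tests.
    with _scoped_library("mylib", "FRAGMENT") as lib:
        lib.define("mysum(Tensor self) -> Tensor")
        lib.impl("mysum", lambda t: t.sum(), "CPU")
        # Fall through the Autograd key instead of requiring an autograd
        # kernel; calls with requires_grad inputs dispatch on to CPU.
        lib.impl("mysum", fallthrough_kernel, "Autograd")
        return torch.ops.mylib.mysum(torch.ones(3))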
@functools.singledispatch
def define(qualname, schema, *, lib=None, tags=()):
    r"""Defines a new operator.

    In PyTorch, defining an op (short for "operator") is a two step-process:
    - we need to define the op (by providing an operator name and schema)
    - we need to implement behavior for how the operator interacts with
      various PyTorch subsystems, like CPU/CUDA Tensors, Autograd, etc.

    This entrypoint defines the custom operator (the first step);
    you must then perform the second step by calling various
    ``impl_*`` APIs, like :func:`torch.library.impl` or
    :func:`torch.library.register_fake`.

    Args:
        qualname (str): The qualified name for the operator. Should be
            a string that looks like "namespace::name", e.g. "aten::sin".
            Operators in PyTorch need a namespace to
            avoid name collisions; a given operator may only be created once.
            If you are writing a Python library, we recommend the namespace to
            be the name of your top-level module.
        schema (str): The schema of the operator. E.g. "(Tensor x) -> Tensor"
            for an op that accepts one Tensor and returns one Tensor. It does
            not contain the operator name (that is passed in ``qualname``).
        lib (Optional[Library]): If provided, the lifetime of this operator
            will be tied to the lifetime of the Library object.
        tags (Tag | Sequence[Tag]): one or more torch.Tag to apply to this
            operator. Tagging an operator changes the operator's behavior
            under various PyTorch subsystems; please read the docs for the
            torch.Tag carefully before applying it.

    Example::
        >>> import torch
        >>> import numpy as np
        >>>
        >>> # Define the operator
        >>> torch.library.define("mylib::sin", "(Tensor x) -> Tensor")
        >>>
        >>> # Add implementations for the operator
        >>> @torch.library.impl("mylib::sin", "cpu")
        >>> def f(x):
        >>>     return torch.from_numpy(np.sin(x.numpy()))
        >>>
        >>> # Call the new operator from torch.ops.
        >>> x = torch.randn(3)
        >>> y = torch.ops.mylib.sin(x)
        >>> assert torch.allclose(y, x.sin())

    """
    if not isinstance(qualname, str):
        raise ValueError(
            f"define(qualname, schema): expected qualname "
            f"to be instance of str, got {type(qualname)}"
        )
    namespace, name = torch._library.utils.parse_namespace(qualname)
    if lib is None:
        lib = Library(namespace, "FRAGMENT")
        _keep_alive.append(lib)
    if not NAMELESS_SCHEMA.fullmatch(schema):
        raise ValueError(
            f"define(qualname, schema, ...): expected schema "
            f'to look like e.g. "(Tensor x) -> Tensor" but '
            f'got "{schema}"'
        )
    lib.define(name + schema, alias_analysis="", tags=tags)


@define.register
def _(lib: Library, schema, alias_analysis=""):
    """The old torch.library.define.
    We're keeping this around for BC reasons.
    """

    def wrap(f):
        name = lib.define(schema, alias_analysis)
        lib.impl(name, f)
        return f

    return wrap


@overload
def impl(
    qualname: str,
    types: Union[str, Sequence[str]],
    func: Literal[None] = None,
    *,
    lib: Optional[Library] = None,
) -> Callable[[Callable[_P, _T]], Callable[_P, _T]]: ...


@overload
def impl(
    qualname: str,
    types: Union[str, Sequence[str]],
    func: Callable[_P, _T],
    *,
    lib: Optional[Library] = None,
) -> Callable[_P, _T]: ...


# Deprecated BC API
@overload
def impl(
    lib: Library, name: str, dispatch_key: str = ""
) -> Callable[[Callable[_P, _T]], Callable[_P, _T]]: ...


@functools.singledispatch
def impl(qualname, types, func=None, *, lib=None):
    """Register an implementation for a device type for this operator.

    You may pass "default" for ``types`` to register this implementation as the
    default implementation for ALL device types.
    Please only use this if the implementation truly supports all device types;
    for example, this is true if it is a composition of built-in PyTorch operators.

    This API may be used as a decorator. You can use nested decorators with
    this API provided they return a function and are placed inside this API
    (see Example 2).

    Some valid types are: "cpu", "cuda", "xla", "mps", "ipu", "xpu".

    Args:
        qualname (str): Should be a string that looks like "namespace::operator_name".
        types (str | Sequence[str]): The device types to register an impl to.
        lib (Optional[Library]): If provided, the lifetime of this registration
            will be tied to the lifetime of the Library object.

    Examples:
        >>> import torch
        >>> import numpy as np
        >>> # Example 1: Register function.
        >>> # Define the operator
        >>> torch.library.define("mylib::mysin", "(Tensor x) -> Tensor")
        >>>
        >>> # Add implementations for the cpu device
        >>> @torch.library.impl("mylib::mysin", "cpu")
        >>> def f(x):
        >>>     return torch.from_numpy(np.sin(x.numpy()))
        >>>
        >>> x = torch.randn(3)
        >>> y = torch.ops.mylib.mysin(x)
        >>> assert torch.allclose(y, x.sin())
        >>>
        >>> # Example 2: Register function with decorator.
        >>> def custom_decorator(func):
        >>>     def wrapper(*args, **kwargs):
        >>>         return func(*args, **kwargs) + 1
        >>>     return wrapper
        >>>
        >>> # Define the operator
        >>> torch.library.define("mylib::sin_plus_one", "(Tensor x) -> Tensor")
        >>>
        >>> # Add implementations for the operator
        >>> @torch.library.impl("mylib::sin_plus_one", "cpu")
        >>> @custom_decorator
        >>> def f(x):
        >>>     return torch.from_numpy(np.sin(x.numpy()))
        >>>
        >>> # Call the new operator from torch.ops.
        >>> x = torch.randn(3)
        >>>
        >>> y1 = torch.ops.mylib.sin_plus_one(x)
        >>> y2 = torch.sin(x) + 1
        >>> assert torch.allclose(y1, y2)
    """
    return _impl(qualname, types, func, lib=lib, disable_dynamo=False)


@impl.register
def _(lib: Library, name, dispatch_key=""):
    """Legacy torch.library.impl API. Kept around for BC."""

    def wrap(f: Callable[_P, _T]) -> Callable[_P, _T]:
        lib.impl(name, f, dispatch_key)
        return f

    return wrap


@overload
def _impl(
    qualname: str,
    types: Union[str, Sequence[str]],
    func: Literal[None] = None,
    *,
    lib: Optional[Library] = None,
    disable_dynamo: bool = False,
) -> Callable[[Callable[..., object]], None]: ...


@overload
def _impl(
    qualname: str,
    types: Union[str, Sequence[str]],
    func: Callable[..., object],
    *,
    lib: Optional[Library] = None,
    disable_dynamo: bool = False,
) -> None: ...


def _impl(qualname, types, func=None, *, lib=None, disable_dynamo=False):
    if isinstance(types, str):
        types = (types,)
    keys = set({})
    for typ in types:
        is_dispatch_key = torch._C._parse_dispatch_key(typ)
        if is_dispatch_key:
            # We also support passing a DispatchKey to impl. Please prefer
            # using the higher-level torch.library APIs and only pass
            # DispatchKey to torch.library.impl with caution (or even better,
            # don't use this option and file an issue on GitHub for what you
            # need). We don't advertise this to users because it is very easy
            # to shoot yourself in the foot.
            keys.add(typ)
        else:
            keys.add(_device_type_to_key(typ))

    def register_(func: Callable) -> None:
        namespace, op_name = torch._library.utils.parse_namespace(qualname)
        if lib is None:
            use_lib = Library(namespace, "FRAGMENT")
            _keep_alive.append(use_lib)
        else:
            use_lib = lib
        if disable_dynamo:

            @torch._disable_dynamo
            def func_no_dynamo(*args, **kwargs):
                return func(*args, **kwargs)

            for key in keys:
                use_lib.impl(op_name, func_no_dynamo, key)
        else:
            for key in keys:
                use_lib.impl(op_name, func, key)

    if func is None:
        return register_
    else:
        register_(func)


def _device_type_to_key(device_type: str) -> str:
    if device_type == "default":
        # This is technically not correct, because although all device_type
        # DispatchKeys are included in CompositeExplicitAutograd, not
        # everything in CompositeExplicitAutograd is associated with a
        # device_type.
        return "CompositeExplicitAutograd"
    return torch._C._dispatch_key_for_device(device_type)


@deprecated(
    "`torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that "
    "instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.",
    category=FutureWarning,
)
def impl_abstract(qualname, func=None, *, lib=None, _stacklevel=1):
    r"""This API was renamed to :func:`torch.library.register_fake` in PyTorch 2.4.
    Please use that instead.
    """
    if func is not None:
        _stacklevel = _stacklevel + 1
    return register_fake(qualname, func, lib=lib, _stacklevel=_stacklevel)


_op_identifier = Union[
    str, "torch._ops.OpOverload", "torch._library.custom_ops.CustomOpDef"
]
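
# Illustrative sketch (not part of the upstream module): the deprecated
# Library-first calling convention that ``@define.register`` and
# ``@impl.register`` above keep working for backward compatibility. The
# "mylib::legacy_twice" op is hypothetical.
def _example_legacy_bc_api():
    lib = Library("mylib", "FRAGMENT")

    # Old-style define: pass the Library first and get back a decorator that
    # both defines the schema and registers ``legacy_twice`` as the
    # implementation.
    @define(lib, "legacy_twice(Tensor self) -> Tensor")
    def legacy_twice(t):
        return t * 2

    return torch.ops.mylib.legacy_twice(torch.ones(3))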
def register_kernel(
    op: _op_identifier,
    device_types: device_types_t,
    func: Optional[Callable] = None,
    /,
    *,
    lib: Optional[Library] = None,
):
    """Register an implementation for a device type for this operator.

    Some valid device_types are: "cpu", "cuda", "xla", "mps", "ipu", "xpu".
    This API may be used as a decorator.

    Args:
        op (str | OpOverload): The operator to register an impl to.
        device_types (None | str | Sequence[str]): The device_types to register an impl to.
            If None, we will register to all device types -- please only use
            this option if your implementation is truly device-type-agnostic.
        func (Callable): The function to register as the implementation for
            the given device types.
        lib (Optional[Library]): If provided, the lifetime of this registration
            will be tied to the lifetime of the Library object.

    Examples::
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA)
        >>> import torch
        >>> from torch import Tensor
        >>> from torch.library import custom_op
        >>> import numpy as np
        >>>
        >>> # Create a custom op that works on cpu
        >>> @custom_op("mylib::numpy_sin", mutates_args=(), device_types="cpu")
        >>> def numpy_sin(x: Tensor) -> Tensor:
        >>>     x_np = x.numpy()
        >>>     y_np = np.sin(x_np)
        >>>     return torch.from_numpy(y_np)
        >>>
        >>> # Add implementations for the cuda device
        >>> @torch.library.register_kernel("mylib::numpy_sin", "cuda")
        >>> def _(x):
        >>>     x_np = x.cpu().numpy()
        >>>     y_np = np.sin(x_np)
        >>>     return torch.from_numpy(y_np).to(device=x.device)
        >>>
        >>> x_cpu = torch.randn(3)
        >>> x_cuda = x_cpu.cuda()
        >>> assert torch.allclose(numpy_sin(x_cpu), x_cpu.sin())
        >>> assert torch.allclose(numpy_sin(x_cuda), x_cuda.sin())

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(f"register_kernel(op): got unexpected type for op: {type(op)}")
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        return opdef.register_kernel(device_types, func)
    assert isinstance(op, str)
    if device_types is None:
        device_types = "CompositeExplicitAutograd"
    return _impl(op, device_types, func, lib=lib, disable_dynamo=True)


def register_autocast(
    op: _op_identifier,
    device_type: str,
    cast_inputs: _dtype,
    /,
    *,
    lib: Optional[Library] = None,
):
    r"""Register an autocast dispatch rule for this custom op.

    Valid `device_type` values include: "cpu" and "cuda".

    Args:
        op (str | OpOverload): The operator to register an autocast dispatch rule to.
        device_type (str): Device type to use. 'cuda' or 'cpu'.
            The type is the same as the `type` attribute of a :class:`torch.device`.
            Thus, you may obtain the device type of a tensor using `Tensor.device.type`.
        cast_inputs (:class:`torch.dtype`): When the custom op runs in an
            autocast-enabled region, casts incoming floating-point Tensors to
            the target dtype (non-floating-point Tensors are not affected),
            then executes the custom op with autocast disabled.
        lib (Optional[Library]): If provided, the lifetime of this registration
            will be tied to the lifetime of the Library object.

    Examples::
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA)
        >>> import torch
        >>> from torch import Tensor
        >>> from torch.library import custom_op
        >>>
        >>> # Create a custom op that works on cuda
        >>> @torch.library.custom_op("mylib::my_sin", mutates_args=())
        >>> def my_sin(x: Tensor) -> Tensor:
        >>>     return torch.sin(x)
        >>>
        >>> # Register autocast dispatch rule for the cuda device
        >>> torch.library.register_autocast("mylib::my_sin", "cuda", torch.float16)
        >>>
        >>> x = torch.randn(3, dtype=torch.float32, device="cuda")
        >>> with torch.autocast("cuda", dtype=torch.float16):
        >>>     y = torch.ops.mylib.my_sin(x)
        >>> assert y.dtype == torch.float16

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(
            f"register_autocast(op): got unexpected type for op: {type(op)}"
        )
    if device_type not in ["cpu", "cuda"]:
        raise ValueError(f"Unknown device type: {device_type}")
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        return opdef.register_autocast(device_type, cast_inputs)
    assert isinstance(op, str)
    qualname = op
    _op = torch._library.utils.lookup_op(qualname)
    namespace, opname = torch._library.utils.parse_namespace(qualname)
    if lib is None:
        lib = Library(namespace, "FRAGMENT")
        _keep_alive.append(lib)

    def _maybe_override_py_impl(op: OpOverload, dispatch_key):
        def inner(kernel):
            # If a python kernel was already installed on this OpOverload for
            # the autocast key, drop it so the new rule takes effect.
            if op.has_kernel_for_dispatch_key(dispatch_key):
                op.py_kernels.pop(dispatch_key, None)
            return op.py_impl(dispatch_key)(kernel)

        return inner

    if device_type == "cuda":
        autocast_key = torch._C.DispatchKey.AutocastCUDA
    else:
        # device_type is "cpu"
        autocast_key = torch._C.DispatchKey.AutocastCPU

    def _autocast_py_impl(*args, **kwargs):
        assert len(kwargs) == 0, "Custom ops do not support kwargs yet."
        autocast_keyset = torch._C.DispatchKeySet(
            torch._C.DispatchKey.AutocastCPU
        ) | torch._C.DispatchKeySet(torch._C.DispatchKey.AutocastCUDA)
        with torch._C._ExcludeDispatchKeyGuard(autocast_keyset):
            return _op(*_cast(args, device_type, cast_inputs))

    @_maybe_override_py_impl(_op, autocast_key)
    def kernel(_, *args, **kwargs):
        assert len(kwargs) == 0, "Custom ops do not support kwargs yet."
        return _autocast_py_impl(*args, **kwargs)

    if device_type == "cuda":
        return lib.impl(opname, kernel, "AutocastCUDA", with_keyset=True)
    else:
        # device_type is "cpu"
        return lib.impl(opname, kernel, "AutocastCPU", with_keyset=True)


def register_fake(
    op: _op_identifier,
    func: Optional[Callable] = None,
    /,
    *,
    lib: Optional[Library] = None,
    _stacklevel: int = 1,
    allow_override: bool = False,
):
    r"""Register a FakeTensor implementation ("fake impl") for this operator.

    Also sometimes known as a "meta kernel" or "abstract impl".

    A "FakeTensor implementation" specifies the behavior of this operator on
    Tensors that carry no data ("FakeTensor"). Given some input Tensors with
    certain properties (sizes/strides/storage_offset/device), it specifies
    what the properties of the output Tensors are.

    The FakeTensor implementation has the same signature as the operator.
    It is run for both FakeTensors and meta tensors. To write a FakeTensor
    implementation, assume that all Tensor inputs to the operator are
    regular CPU/CUDA/Meta tensors, but they do not have storage, and
    you are trying to return regular CPU/CUDA/Meta tensor(s) as output.
    The FakeTensor implementation must consist of only PyTorch operations
    (and may not directly access the storage or data of any input or
    intermediate Tensors).

    This API may be used as a decorator (see examples).

    Examples:
        >>> import torch
        >>> import numpy as np
        >>> from torch import Tensor
        >>>
        >>> # Example 1: an operator without data-dependent output shape
        >>> @torch.library.custom_op("mylib::custom_linear", mutates_args=())
        >>> def custom_linear(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor:
        >>>     raise NotImplementedError("Implementation goes here")
        >>>
        >>> @torch.library.register_fake("mylib::custom_linear")
        >>> def _(x, weight, bias):
        >>>     assert x.dim() == 2
        >>>     assert weight.dim() == 2
        >>>     assert bias.dim() == 1
        >>>     assert x.shape[1] == weight.shape[1]
        >>>     assert weight.shape[0] == bias.shape[0]
        >>>     assert x.device == weight.device
        >>>
        >>>     return (x @ weight.t()) + bias
        >>>
        >>> with torch._subclasses.fake_tensor.FakeTensorMode():
        >>>     x = torch.randn(2, 3)
        >>>     w = torch.randn(3, 3)
        >>>     b = torch.randn(3)
        >>>     y = torch.ops.mylib.custom_linear(x, w, b)
        >>>
        >>> assert y.shape == (2, 3)
        >>>
        >>> # Example 2: an operator with data-dependent output shape
        >>> @torch.library.custom_op("mylib::custom_nonzero", mutates_args=())
        >>> def custom_nonzero(x: Tensor) -> Tensor:
        >>>     x_np = x.numpy(force=True)
        >>>     res = np.stack(np.nonzero(x_np), axis=1)
        >>>     return torch.tensor(res, device=x.device)
        >>>
        >>> @torch.library.register_fake("mylib::custom_nonzero")
        >>> def _(x):
        >>>     # Number of nonzero-elements is data-dependent.
        >>>     # Since we cannot peek at the data in a fake impl,
        >>>     # we use the ctx object to construct a new symint that
        >>>     # represents the data-dependent size.
        >>>     ctx = torch.library.get_ctx()
        >>>     nnz = ctx.new_dynamic_size()
        >>>     shape = [nnz, x.dim()]
        >>>     result = x.new_empty(shape, dtype=torch.int64)
        >>>     return result
        >>>
        >>> from torch.fx.experimental.proxy_tensor import make_fx
        >>>
        >>> x = torch.tensor([0, 1, 2, 3, 4, 0])
        >>> trace = make_fx(torch.ops.mylib.custom_nonzero, tracing_mode="symbolic")(x)
        >>> trace.print_readable()
        >>>
        >>> assert torch.allclose(trace(x), torch.ops.mylib.custom_nonzero(x))

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(f"register_fake(op): got unexpected type for op: {type(op)}")
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        if func is None:
            return opdef.register_fake
        else:
            return opdef.register_fake(func)
    assert isinstance(op, str)

    stacklevel = _stacklevel

    def register(func):
        namespace, op_name = torch._library.utils.parse_namespace(op)
        if lib is None:
            use_lib = Library(namespace, "FRAGMENT")
            _keep_alive.append(use_lib)
        else:
            use_lib = lib
        use_lib._register_fake(
            op_name, func, _stacklevel=stacklevel + 1, allow_override=allow_override
        )
        return func

    if func is None:
        return register
    else:
        stacklevel += 1
        return register(func)
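
# Conceptual sketch (not part of the upstream module) of what the autocast
# wrapper installed by ``register_autocast`` above does for a single call. The
# real implementation delegates the downcasting to ``_cast``; the standalone
# ``_downcast_fp_tensors`` helper here is a simplified, hypothetical stand-in
# that only handles a flat tuple of positional arguments.
def _downcast_fp_tensors(args, device_type, dtype):
    def maybe_cast(a):
        # Only floating-point tensors on the autocast device are converted;
        # everything else passes through untouched.
        if (
            isinstance(a, torch.Tensor)
            and a.is_floating_point()
            and a.device.type == device_type
        ):
            return a.to(dtype)
        return a

    return tuple(maybe_cast(a) for a in args)


def _example_autocast_wrapper(op, device_type, cast_inputs, *args):
    # Mirror of the registered kernel: cast the inputs once, then run the op
    # with both autocast keys excluded so the cast is not applied again on
    # redispatch.
    keyset = torch._C.DispatchKeySet(
        torch._C.DispatchKey.AutocastCPU
    ) | torch._C.DispatchKeySet(torch._C.DispatchKey.AutocastCUDA)
    with torch._C._ExcludeDispatchKeyGuard(keyset):
        return op(*_downcast_fp_tensors(args, device_type, cast_inputs))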
def register_autograd(
    op: _op_identifier,
    backward: Callable,
    /,
    *,
    setup_context: Optional[Callable] = None,
    lib: Optional[Library] = None,
) -> None:
    r"""Register a backward formula for this custom op.

    In order for an operator to work with autograd, you need to register
    a backward formula:
    1. You must tell us how to compute gradients during the backward pass
    by providing us a "backward" function.
    2. If you need any values from the forward to compute gradients, you can
    use `setup_context` to save values for backward.

    ``backward`` runs during the backward pass. It accepts ``(ctx, *grads)``:
    - ``grads`` is one or more gradients. The number of gradients matches
    the number of outputs of the operator.
    The ``ctx`` object is `the same ctx object <context_method_mixins>`_ used by
    :class:`torch.autograd.Function`. The semantics of ``backward_fn`` are the
    same as :meth:`torch.autograd.Function.backward`.

    ``setup_context(ctx, inputs, output)`` runs during the forward pass.
    Please save quantities needed for backward onto the ``ctx`` object via
    either :meth:`torch.autograd.function.FunctionCtx.save_for_backward`
    or assigning them as attributes of ``ctx``. If your custom op has
    kwarg-only arguments, we expect the signature of ``setup_context``
    to be ``setup_context(ctx, inputs, keyword_only_inputs, output)``.

    Both ``setup_context_fn`` and ``backward_fn`` must be traceable. That is,
    they may not directly access :meth:`torch.Tensor.data_ptr` and they must
    not depend on or mutate global state. If you need a non-traceable backward,
    you can make it a separate custom_op that you call inside ``backward_fn``.

    If you need different autograd behavior on different devices, then we
    recommend creating two different custom operators, one for each device
    that needs different behavior, and switching between them at runtime.

    Examples:
        >>> import torch
        >>> import numpy as np
        >>> from torch import Tensor
        >>>
        >>> @torch.library.custom_op("mylib::numpy_sin", mutates_args=())
        >>> def numpy_sin(x: Tensor) -> Tensor:
        >>>     x_np = x.cpu().numpy()
        >>>     y_np = np.sin(x_np)
        >>>     return torch.from_numpy(y_np).to(device=x.device)
        >>>
        >>> def setup_context(ctx, inputs, output) -> Tensor:
        >>>     x, = inputs
        >>>     ctx.save_for_backward(x)
        >>>
        >>> def backward(ctx, grad):
        >>>     x, = ctx.saved_tensors
        >>>     return grad * x.cos()
        >>>
        >>> torch.library.register_autograd(
        ...     "mylib::numpy_sin", backward, setup_context=setup_context
        ... )
        >>>
        >>> x = torch.randn(3, requires_grad=True)
        >>> y = numpy_sin(x)
        >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y))
        >>> assert torch.allclose(grad_x, x.cos())
        >>>
        >>> # Example with a keyword-only arg
        >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
        >>> def numpy_mul(x: Tensor, *, val: float) -> Tensor:
        >>>     x_np = x.cpu().numpy()
        >>>     y_np = x_np * val
        >>>     return torch.from_numpy(y_np).to(device=x.device)
        >>>
        >>> def setup_context(ctx, inputs, keyword_only_inputs, output) -> Tensor:
        >>>     ctx.val = keyword_only_inputs["val"]
        >>>
        >>> def backward(ctx, grad):
        >>>     return grad * ctx.val
        >>>
        >>> torch.library.register_autograd(
        ...     "mylib::numpy_mul", backward, setup_context=setup_context
        ... )
        >>>
        >>> x = torch.randn(3, requires_grad=True)
        >>> y = numpy_mul(x, val=3.14)
        >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y))
        >>> assert torch.allclose(grad_x, torch.full_like(x, 3.14))

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(
            f"register_autograd(op): got unexpected type for op: {type(op)}"
        )
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        opdef.register_autograd(backward, setup_context=setup_context)
        return
    assert isinstance(op, str)
    qualname = op
    op = torch._library.utils.lookup_op(qualname)
    schema = op._schema
    if not torch._library.utils.is_functional_schema(schema):
        raise RuntimeError(
            f"Cannot register autograd formula for non-functional operator "
            f"{op} with schema {schema}. Please create "
            f"a functional operator and register an autograd formula for that."
        )
    if torch._library.utils.has_kwarg_only_tensors(schema):
        raise NotImplementedError(
            f"register_autograd with kwarg-only Tensor args. In the original "
            f"definition of the op, please make your tensors not kwarg-only. "
            f"Got: {schema}"
        )

    info = torch._library.autograd.Info(backward, setup_context)
    autograd_kernel = torch._library.autograd.make_autograd_impl(op, info)
    namespace, opname = torch._library.utils.parse_namespace(qualname)
    if lib is None:
        lib = Library(namespace, "FRAGMENT")
        _keep_alive.append(lib)
    lib.impl(opname, autograd_kernel, "Autograd", with_keyset=True)
def register_torch_dispatch(
    op: _op_identifier,
    torch_dispatch_class: Any,
    func: Optional[Callable] = None,
    /,
    *,
    lib: Optional[Library] = None,
):
    r"""Registers a torch_dispatch rule for the given operator and ``torch_dispatch_class``.

    This allows for open registration to specify the behavior between the operator
    and the ``torch_dispatch_class`` without needing to modify the ``torch_dispatch_class``
    or the operator directly.

    The ``torch_dispatch_class`` is either a Tensor subclass with ``__torch_dispatch__`` or a
    TorchDispatchMode.

    If it is a Tensor subclass, we expect ``func`` to have the following signature:
    ``(cls, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any``

    If it is a TorchDispatchMode, we expect ``func`` to have the following signature:
    ``(mode, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any``

    ``args`` and ``kwargs`` will have been normalized the same way they are
    in ``__torch_dispatch__`` (see :ref:`torch-dispatch-calling-convention`).

    Examples:
        >>> import torch
        >>>
        >>> @torch.library.custom_op("mylib::foo", mutates_args={})
        >>> def foo(x: torch.Tensor) -> torch.Tensor:
        >>>     return x.clone()
        >>>
        >>> class MyMode(torch.utils._python_dispatch.TorchDispatchMode):
        >>>     def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        >>>         return func(*args, **kwargs)
        >>>
        >>> @torch.library.register_torch_dispatch("mylib::foo", MyMode)
        >>> def _(mode, func, types, args, kwargs):
        >>>     x, = args
        >>>     return x + 1
        >>>
        >>> x = torch.randn(3)
        >>> y = foo(x)
        >>> assert torch.allclose(y, x)
        >>>
        >>> with MyMode():
        >>>     y = foo(x)
        >>> assert torch.allclose(y, x + 1)

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(
            f"register_torch_dispatch(op): got unexpected type for op: {type(op)}"
        )
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        return opdef.register_torch_dispatch(torch_dispatch_class, func)
    assert isinstance(op, str)

    def register(func):
        namespace, op_name = torch._library.utils.parse_namespace(op)
        if lib is None:
            use_lib = Library(namespace, "FRAGMENT")
            _keep_alive.append(use_lib)
        else:
            use_lib = lib
        use_lib._register_torch_dispatch_rule(op_name, torch_dispatch_class, func)
        return func

    if func is None:
        return register
    else:
        return register(func)
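
# Illustrative sketch (not part of the upstream module): the Tensor-subclass
# flavor of ``register_torch_dispatch`` described in the docstring above. The
# rule receives ``cls`` first instead of a mode instance. ``_LoggingTensor``
# and the "mylib::foo" op are hypothetical names.
def _example_subclass_torch_dispatch():
    class _LoggingTensor(torch.Tensor):
        @classmethod
        def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            return func(*args, **kwargs)

    @register_torch_dispatch("mylib::foo", _LoggingTensor)
    def _(cls, func, types, args, kwargs):
        # Invoked whenever mylib::foo sees a _LoggingTensor argument.
        (x,) = args
        return x + 1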
def register_vmap(
    op: _op_identifier,
    func: Optional[Callable] = None,
    /,
    *,
    lib: Optional[Library] = None,
):
    r"""Register a vmap implementation to support :func:`torch.vmap` for this custom op.

    This API may be used as a decorator (see examples).

    In order for an operator to work with :func:`torch.vmap`, you may need to register a
    vmap implementation in the following signature:

        ``vmap_func(info, in_dims: Tuple[Optional[int]], *args, **kwargs)``,

    where ``*args`` and ``**kwargs`` are the arguments and kwargs for ``op``.
    We do not support kwarg-only Tensor args.

    It specifies how we compute the batched version of ``op`` given inputs with an additional
    dimension (specified by ``in_dims``).

    For each arg in ``args``, ``in_dims`` has a corresponding ``Optional[int]``. It is ``None``
    if the arg is not a Tensor or if the arg is not being vmapped over, otherwise, it is an integer
    specifying what dimension of the Tensor is being vmapped over.

    ``info`` is a collection of additional metadata that may be helpful:
    ``info.batch_size`` specifies the size of the dimension being vmapped over, while
    ``info.randomness`` is the ``randomness`` option that was passed to :func:`torch.vmap`.

    The return of the function ``func`` is a tuple of ``(output, out_dims)``. Similar to ``in_dims``,
    ``out_dims`` should be of the same structure as ``output`` and contain one ``out_dim``
    per output that specifies if the output has the vmapped dimension and what index it is in.

    Examples:
        >>> import torch
        >>> import numpy as np
        >>> from torch import Tensor
        >>> from typing import Tuple
        >>>
        >>> def to_numpy(tensor):
        >>>     return tensor.cpu().numpy()
        >>>
        >>> lib = torch.library.Library("mylib", "FRAGMENT")
        >>> @torch.library.custom_op("mylib::numpy_cube", mutates_args=())
        >>> def numpy_cube(x: Tensor) -> Tuple[Tensor, Tensor]:
        >>>     x_np = to_numpy(x)
        >>>     dx = torch.tensor(3 * x_np ** 2, device=x.device)
        >>>     return torch.tensor(x_np ** 3, device=x.device), dx
        >>>
        >>> def numpy_cube_vmap(info, in_dims, x):
        >>>     result = numpy_cube(x)
        >>>     return result, (in_dims[0], in_dims[0])
        >>>
        >>> torch.library.register_vmap(numpy_cube, numpy_cube_vmap)
        >>>
        >>> x = torch.randn(3)
        >>> torch.vmap(numpy_cube)(x)
        >>>
        >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
        >>> def numpy_mul(x: Tensor, y: Tensor) -> Tensor:
        >>>     return torch.tensor(to_numpy(x) * to_numpy(y), device=x.device)
        >>>
        >>> @torch.library.register_vmap("mylib::numpy_mul")
        >>> def numpy_mul_vmap(info, in_dims, x, y):
        >>>     x_bdim, y_bdim = in_dims
        >>>     x = x.movedim(x_bdim, -1) if x_bdim is not None else x.unsqueeze(-1)
        >>>     y = y.movedim(y_bdim, -1) if y_bdim is not None else y.unsqueeze(-1)
        >>>     result = x * y
        >>>     result = result.movedim(-1, 0)
        >>>     return result, 0
        >>>
        >>>
        >>> x = torch.randn(3)
        >>> y = torch.randn(3)
        >>> torch.vmap(numpy_mul)(x, y)

    .. note::
        The vmap function should aim to preserve the semantics of the entire custom operator.
        That is, ``grad(vmap(op))`` should be replaceable with a ``grad(map(op))``.

        If your custom operator has any custom behavior in the backward pass, please
        keep this in mind.

    """
    if not isinstance(
        op, (str, torch._ops.OpOverload, torch._library.custom_ops.CustomOpDef)
    ):
        raise ValueError(f"register_vmap(op): got unexpected type for op: {type(op)}")
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    opdef = _maybe_get_opdef(op)
    if opdef is not None:
        return opdef.register_vmap(func)
    assert isinstance(op, str)
    qualname = op
    op = torch._library.utils.lookup_op(qualname)
    schema = op._schema
    if torch._library.utils.has_kwarg_only_tensors(schema):
        raise NotImplementedError(
            f"register_vmap with kwarg-only Tensor args. In the original "
            f"definition of the op, please make your tensors not kwarg-only. "
            f"Got: {schema}"
        )

    def register(func):
        nonlocal op, lib

        namespace, opname = torch._library.utils.parse_namespace(qualname)
        if lib is None:
            lib = Library(namespace, "FRAGMENT")
            _keep_alive.append(lib)

        from torch._functorch.autograd_function import custom_function_call_vmap_helper
        from torch._functorch.pyfunctorch import retrieve_current_functorch_interpreter

        def wrapped_func(keyset, *args, **kwargs):
            interpreter = retrieve_current_functorch_interpreter()
            return custom_function_call_vmap_helper(
                interpreter, func, op, *args, **kwargs
            )

        lib.impl(opname, wrapped_func, "FuncTorchBatched", with_keyset=True)
        return func

    if func is None:
        return register
    else:
        return register(func)


def _check_pystubs_once(func, qualname, actual_module_name):
    checked = False

    def inner(*args, **kwargs):
        nonlocal checked
        if checked:
            return func(*args, **kwargs)

        op = torch._library.utils.lookup_op(qualname)
        if op._defined_in_python:
            checked = True
            return func(*args, **kwargs)

        maybe_pystub = torch._C._dispatch_pystub(
            op._schema.name, op._schema.overload_name
        )
        if maybe_pystub is None:
            if torch._library.utils.requires_set_python_module():
                namespace = op.namespace
                cpp_filename = op._handle.debug()
                raise RuntimeError(
                    f"Operator '{qualname}' was defined in C++ and has a Python "
                    f"fake impl. In this situation, we require there to also be a "
                    f'companion C++ `m.set_python_module("{actual_module_name}")` '
                    f"call, but we could not find one. Please add that to "
                    f"the top of the C++ TORCH_LIBRARY({namespace}, ...) block the "
                    f"operator was registered in ({cpp_filename})"
                )
        else:
            pystub_module = maybe_pystub[0]
            if actual_module_name != pystub_module:
                cpp_filename = op._handle.debug()
                raise RuntimeError(
                    f"operator '{qualname}' specified that its python fake impl "
                    f"is in the Python module '{pystub_module}' but it was actually found "
                    f"in '{actual_module_name}'. Please either move the fake impl or "
                    f"correct the m.set_python_module call ({cpp_filename})"
                )
        checked = True
        return func(*args, **kwargs)

    return inner
def get_ctx() -> "torch._library.fake_impl.FakeImplCtx":
    """get_ctx() returns the current AbstractImplCtx object.

    Calling ``get_ctx()`` is only valid inside of a fake impl
    (see :func:`torch.library.register_fake` for more usage details).
    """
    return torch._library.fake_impl.global_ctx_getter()


def get_kernel(
    op: Union[str, OpOverload], dispatch_key: Union[str, torch.DispatchKey]
) -> torch._C._SafeKernelFunction:
    """
    Returns the computed kernel for a given operator and dispatch key.

    This function retrieves the kernel that would be executed for a given
    operator and dispatch key combination. The returned SafeKernelFunction
    can be used to call the kernel in a boxed fashion.

    The intended use case for this function is to retrieve the original kernel
    for a given dispatch key and then register another kernel to the same
    dispatch key that calls into the original kernel for certain cases.

    Args:
        op: Operator name (along with the overload) or OpOverload object.
            Can be a string (e.g., "aten::add.Tensor") or an OpOverload.
        dispatch_key (str | torch.DispatchKey): The dispatch key to get the kernel for.
            Can be a string (e.g., "CPU", "CUDA") or a DispatchKey enum value.

    Returns:
        torch._C._SafeKernelFunction: A safe kernel function that can be used
            to call the kernel.

    Raises:
        RuntimeError: If the operator does not exist.

    Example:
        >>> # Get the CPU kernel for torch.add
        >>> kernel = torch.library.get_kernel("aten::add.Tensor", "CPU")
        >>>
        >>> # You can also use DispatchKey enum
        >>> kernel = torch.library.get_kernel("aten::add.Tensor", torch.DispatchKey.CPU)
        >>>
        >>> # Or use an OpOverload directly
        >>> kernel = torch.library.get_kernel(torch.ops.aten.add.Tensor, "CPU")
        >>>
        >>> # Example: Using get_kernel in a custom op with conditional dispatch
        >>> # Get the original kernel for torch.sin
        >>> original_sin_kernel = torch.library.get_kernel("aten::sin", "CPU")
        >>>
        >>> # If input has negative values, use original sin, otherwise return zeros
        >>> def conditional_sin_impl(dispatch_keys, x):
        >>>     if (x < 0).any():
        >>>         return original_sin_kernel.call_boxed(dispatch_keys, x)
        >>>     else:
        >>>         return torch.zeros_like(x)
        >>>
        >>> lib = torch.library.Library("aten", "IMPL")
        >>> # with_keyset=True so the first argument to the impl is the current
        >>> # DispatchKeySet, which needs to be the first argument to
        >>> # ``kernel.call_boxed``
        >>> lib.impl("sin", conditional_sin_impl, "CPU", with_keyset=True)
        >>>
        >>> # Test the conditional behavior
        >>> x_positive = torch.tensor([1.0, 2.0])
        >>> x_mixed = torch.tensor([-1.0, 2.0])
        >>> torch.sin(x_positive)
        tensor([0., 0.])
        >>> torch.sin(x_mixed)
        tensor([-0.8415, 0.9093])
    """
    if not isinstance(op, (str, torch._ops.OpOverload)):
        raise ValueError(f"get_kernel(op): got unexpected type for op: {type(op)}")
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    if isinstance(dispatch_key, str):
        try:
            dispatch_key = torch._C.DispatchKey.__members__[dispatch_key]
        except KeyError:
            raise ValueError(f"Invalid dispatch key: {dispatch_key}") from None
    return torch._C._dispatch_get_computed_kernel_for_dispatch_key(op, dispatch_key)


_OPCHECK_DEFAULT_UTILS = (
    "test_schema",
    "test_autograd_registration",
    "test_faketensor",
    "test_aot_dispatch_dynamic",
)


def opcheck(
    op: Union[torch._ops.OpOverload, torch._ops.OpOverloadPacket, CustomOpDef],
    args: tuple[Any, ...],
    kwargs: Optional[dict[str, Any]] = None,
    *,
    test_utils: Union[str, Sequence[str]] = _OPCHECK_DEFAULT_UTILS,
    raise_exception: bool = True,
    atol: Optional[float] = None,
    rtol: Optional[float] = None,
) -> dict[str, str]:
    """Given an operator and some sample arguments, tests if the operator is
    registered correctly.

    That is, when you use the torch.library/TORCH_LIBRARY APIs to create a
    custom op, you specified metadata (e.g. mutability info) about the custom op
    and these APIs require that the functions you pass them satisfy certain
    properties (e.g. no data pointer access in the fake/meta/abstract kernel).
    ``opcheck`` tests these metadata and properties.

    Concretely, we test the following:

    - test_schema: If the schema matches the implementation of
      the operator. For example: if the schema specifies a Tensor is mutated,
      then we check the implementation mutates the Tensor. If the schema
      specifies that we return a new Tensor, then we check that the
      implementation returns a new Tensor (instead of an existing one or
      a view of an existing one).
    - test_autograd_registration: If the operator supports training
      (autograd): we check that its autograd formula is registered via
      torch.library.register_autograd or a manual registration to one
      or more DispatchKey::Autograd keys. Any other DispatchKey-based
      registrations may lead to undefined behavior.
    - test_faketensor: If the operator has a FakeTensor kernel
      (and if it is correct). The FakeTensor kernel is necessary
      (but not sufficient) for the operator to work with PyTorch compilation
      APIs (torch.compile/export/FX). We check that a FakeTensor kernel
      (also sometimes known as a meta kernel) was registered for the
      operator and that it is correct. This test takes the result of
      running the operator on real tensors and the result of running
      the operator on FakeTensors and checks that they have the same
      Tensor metadata (sizes/strides/dtype/device/etc).
    - test_aot_dispatch_dynamic: If the operator has correct behavior
      with PyTorch compilation APIs (torch.compile/export/FX).
      This checks that the outputs (and gradients, if applicable) are the
      same under eager-mode PyTorch and torch.compile.
      This test is a superset of ``test_faketensor`` and is an e2e test;
      other things it tests are that the operator supports
      functionalization and that the backward pass (if it exists) also
      supports FakeTensor and functionalization.

    For best results, please call ``opcheck`` multiple times with a
    representative set of inputs. If your operator supports
    autograd, please use ``opcheck`` with inputs with ``requires_grad = True``;
    if your operator supports multiple devices (e.g. CPU and CUDA), please
    use ``opcheck`` with inputs on all supported devices.

    Args:
        op: The operator. Must either be a function decorated with
            :func:`torch.library.custom_op` or an OpOverload/OpOverloadPacket
            found in torch.ops.* (e.g. torch.ops.aten.sin, torch.ops.mylib.foo)
        args: The args to the operator
        kwargs: The kwargs to the operator
        test_utils: Tests that we should run. Default: all of them.
            Example: ("test_schema", "test_faketensor")
        raise_exception: If we should raise an exception on the first
            error. If False, we will return a dict with information
            on if each test passed or not.
        rtol (Optional[float]): Relative tolerance for floating point comparisons.
            If specified ``atol`` must also be specified.
            If omitted, default values based on the ``dtype`` are selected
            (see the table in :func:`torch.testing.assert_close`).
        atol (Optional[float]): Absolute tolerance for floating point comparisons.
            If specified ``rtol`` must also be specified.
            If omitted, default values based on the ``dtype`` are selected
            (see the table in :func:`torch.testing.assert_close`).

    .. warning::

        opcheck and :func:`torch.autograd.gradcheck` test different things;
        opcheck tests if your usage of torch.library APIs is correct while
        :func:`torch.autograd.gradcheck` tests if your autograd formula is
        mathematically correct. Use both to test custom ops that support
        gradient computation.

    Example:

        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA)
        >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
        >>> def numpy_mul(x: Tensor, y: float) -> Tensor:
        >>>     x_np = x.numpy(force=True)
        >>>     z_np = x_np * y
        >>>     return torch.from_numpy(z_np).to(x.device)
        >>>
        >>> @numpy_mul.register_fake
        >>> def _(x, y):
        >>>     return torch.empty_like(x)
        >>>
        >>> def setup_context(ctx, inputs, output):
        >>>     y, = inputs
        >>>     ctx.y = y
        >>>
        >>> def backward(ctx, grad):
        >>>     return grad * ctx.y, None
        >>>
        >>> numpy_mul.register_autograd(backward, setup_context=setup_context)
        >>>
        >>> sample_inputs = [
        >>>     (torch.randn(3), 3.14),
        >>>     (torch.randn(2, 3, device='cuda'), 2.718),
        >>>     (torch.randn(1, 10, requires_grad=True), 1.234),
        >>>     (torch.randn(64, 64, device='cuda', requires_grad=True), 90.18),
        >>> ]
        >>>
        >>> for args in sample_inputs:
        >>>     torch.library.opcheck(numpy_mul, args)

    """
    import torch.testing._internal.optests as optests

    return optests.opcheck(
        op,
        args,
        kwargs,
        test_utils=test_utils,
        raise_exception=raise_exception,
        rtol=rtol,
        atol=atol,
    )
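
# Illustrative end-to-end sketch (not part of the upstream module) tying the
# APIs in this file together: define a schema, register a CPU kernel and a
# fake impl, then validate the registrations with ``opcheck``. The
# "mylib::scale" op and both helpers are hypothetical.
def _example_end_to_end():
    define("mylib::scale", "(Tensor x, float factor) -> Tensor")

    @impl("mylib::scale", "cpu")
    def _scale_cpu(x, factor):
        return x * factor

    @register_fake("mylib::scale")
    def _scale_fake(x, factor):
        # Output metadata only: same shape/dtype/device as the input.
        return torch.empty_like(x)

    sample_args = (torch.randn(3), 2.0)
    return opcheck(torch.ops.mylib.scale.default, sample_args)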