from typing import Optional, Union

import torch
from torch._C import _get_privateuse1_backend_name, _rename_privateuse1_backend
from torch.overrides import handle_torch_function, has_torch_function_unary

__all__ = ["rename_privateuse1_backend", "generate_methods_for_privateuse1_backend"]

# TODO: Should use `torch._C._get_privateuse1_backend_name()` to get the
# renamed backend name for `privateuse1`, but that function does not work
# under torch.jit.script, so we keep the name in this global variable instead.
_privateuse1_backend_name = "privateuseone"


def rename_privateuse1_backend(backend_name: str) -> None:
    r"""
    Rename the privateuse1 backend device to make it more convenient to use as a device name within PyTorch APIs.

    The steps are:

    (1) (In C++) implement kernels for various torch operations, and register them
        to the PrivateUse1 dispatch key.
    (2) (In python) call torch.utils.rename_privateuse1_backend("foo")

    You can now use "foo" as an ordinary device string in python.

    Note: this API can only be called once per process. Attempting to change
    the external backend after it's already been set will result in an error.

    Note(AMP): If you want to support AMP on your device, you can register a custom backend module.
    The backend must register a custom backend module with ``torch._register_device_module("foo", BackendModule)``.
    BackendModule needs to have the following API's:

    (1) ``get_amp_supported_dtype() -> List[torch.dtype]``
        get the supported dtypes on your "foo" device in AMP, since the "foo" device may support additional dtypes.

    Note(random): If you want to support setting the seed for your device, BackendModule needs to have the following API's:

    (1) ``_is_in_bad_fork() -> bool``
        Return ``True`` if the process is currently in a bad fork state, else return ``False``.

    (2) ``manual_seed_all(seed: int) -> None``
        Sets the seed for generating random numbers for your devices.

    (3) ``device_count() -> int``
        Returns the number of "foo"s available.

    (4) ``get_rng_state(device: Union[int, str, torch.device] = 'foo') -> Tensor``
        Returns a ByteTensor representing the random number state of the specified device.

    (5) ``set_rng_state(new_state: Tensor, device: Union[int, str, torch.device] = 'foo') -> None``
        Sets the random number generator state of the specified "foo" device.

    And there are some common funcs:

    (1) ``is_available() -> bool``
        Returns a bool indicating if "foo" is currently available.

    (2) ``current_device() -> int``
        Returns the index of the currently selected device.

    For more details, see https://pytorch.org/tutorials/advanced/extend_dispatcher.html#get-a-dispatch-key-for-your-backend
    For an existing example, see https://github.com/bdhirsh/pytorch_open_registration_example

    Example::

        >>> # xdoctest: +SKIP("failing")
        >>> torch.utils.rename_privateuse1_backend("foo")
        # This will work, assuming that you've implemented the right C++ kernels
        # to implement torch.ones.
        >>> a = torch.ones(2, device="foo")

    """
    _rename_privateuse1_backend(backend_name)
    global _privateuse1_backend_name
    _privateuse1_backend_name = backend_name


def _check_register_once(module, attr):
    if hasattr(module, attr):
        raise RuntimeError(
            f"The custom device module of {module} has already been registered with {attr}"
        )
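
# A minimal sketch (illustrative only, not part of the public API) of the
# BackendModule protocol described in the docstring above. The "foo" name and
# _DummyFooModule are assumptions for demonstration; a real backend must also
# implement and register C++ kernels under the PrivateUse1 dispatch key before
# tensors can actually live on the device.
def _example_register_foo_backend() -> None:
    class _DummyFooModule:
        @staticmethod
        def is_available() -> bool:
            return True

        @staticmethod
        def current_device() -> int:
            return 0

    rename_privateuse1_backend("foo")
    torch._register_device_module("foo", _DummyFooModule)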

def _normalization_device(custom_backend_name: str,
                          device: Optional[Union[int, str, torch.device]] = None) -> int:
    def _get_current_device_index():
        _get_device_index = "current_device"
        if hasattr(torch, custom_backend_name) and \
                hasattr(getattr(torch, custom_backend_name), _get_device_index):
            return getattr(getattr(torch, custom_backend_name), _get_device_index)()
        # The default device index is 0.
        return 0

    if device is None:
        return _get_current_device_index()
    # If the device is given as a string such as "foo:0", convert it to a
    # torch.device object first so that all cases are handled uniformly.
    elif isinstance(device, str):
        device = torch.device(device)

    # At this point the device can only be a torch.device object or an int.
    if isinstance(device, torch.device):
        if device.type != custom_backend_name:
            raise RuntimeError(f"Invalid device, must be {custom_backend_name} device")
        elif device.index is None:
            device_idx = _get_current_device_index()
        else:
            device_idx = device.index
    # If the device is an int, it is used as the device index directly.
    else:
        device_idx = device
    return device_idx


def _generate_tensor_methods_for_privateuse1_backend(custom_backend_name: str) -> None:
    @property  # type: ignore[misc]
    def wrap_tensor_backend(self: torch.Tensor) -> bool:
        if has_torch_function_unary(self):
            # TODO mypy doesn't support @property, see: https://github.com/python/mypy/issues/6185
            return handle_torch_function(wrap_tensor_backend.__get__, (self,), self)  # type: ignore[attr-defined]
        return self.device.type == custom_backend_name

    _check_register_once(torch.Tensor, f'is_{custom_backend_name}')
    wrap_tensor_backend.fget.__name__ = f'is_{custom_backend_name}'  # type: ignore[attr-defined]
    setattr(torch.Tensor, f'is_{custom_backend_name}', wrap_tensor_backend)

    def wrap_tensor_to(self: torch.Tensor, device: Optional[Union[int, torch.device]] = None,
                       non_blocking=False, **kwargs) -> torch.Tensor:
        r"""Perform Tensor device conversion. Call the to operator implementation.

        .. note::
            If the ``self`` Tensor already has the correct :class:`torch.device`,
            then ``self`` is returned. Otherwise, the returned tensor is a copy
            of ``self`` with the desired :class:`torch.device`.

        Args:
            device (int, optional): if specified, all parameters will be copied to that device
            non_blocking (bool): If ``True`` and the source is in pinned memory,
                the copy will be asynchronous with respect to the host. Otherwise,
                the argument has no effect.
            **kwargs (dict): For compatibility, may contain the key ``memory_format`` argument.
        """
        if has_torch_function_unary(self):
            return handle_torch_function(wrap_tensor_to, (self,), self, device=device,
                                         non_blocking=non_blocking, **kwargs)
        device_idx = _normalization_device(custom_backend_name, device)
        return self.to(device=torch.device(f'{custom_backend_name}:{device_idx}'),
                       non_blocking=non_blocking, **kwargs)

    _check_register_once(torch.Tensor, custom_backend_name)
    wrap_tensor_to.__name__ = custom_backend_name
    setattr(torch.Tensor, custom_backend_name, wrap_tensor_to)


def _generate_module_methods_for_privateuse1_backend(custom_backend_name: str) -> None:
    # Generating the Module methods depends on the Tensor methods, so check
    # that the Tensor methods have already been registered.
    if not hasattr(torch.Tensor, custom_backend_name):
        raise RuntimeError(
            f"Can not automatically generate {custom_backend_name}() method for torch.nn.Module, "
            f"because torch.Tensor doesn't have the method {custom_backend_name}(). "
            f"For this error, you can try setting for_tensor=True.")

    def wrap_module_to(self: torch.nn.Module,
                       device: Optional[Union[int, torch.device]] = None) -> torch.nn.Module:
        r"""Move all model parameters and buffers to the custom device.

        This also makes associated parameters and buffers different objects. So
        it should be called before constructing the optimizer if the module will
        live on the device while being optimized.

        .. note::
            This method modifies the module in-place.

        Args:
            device (int, optional): if specified, all parameters will be copied to that device
        """
        return self._apply(lambda t: getattr(t, custom_backend_name)(device))

    _check_register_once(torch.nn.Module, custom_backend_name)
    setattr(torch.nn.Module, custom_backend_name, wrap_module_to)


def _generate_packed_sequence_methods_for_privateuse1_backend(custom_backend_name: str) -> None:
    # Generating the PackedSequence methods depends on the Tensor methods, so
    # check that the Tensor methods have already been registered.
    if not hasattr(torch.Tensor, f'is_{custom_backend_name}') or \
            not hasattr(torch.Tensor, custom_backend_name):
        raise RuntimeError(
            f"Can not automatically generate is_{custom_backend_name}() or "
            f"{custom_backend_name}() method for torch.nn.utils.rnn.PackedSequence, "
            f"because torch.Tensor doesn't have the method is_{custom_backend_name}() "
            f"or {custom_backend_name}(). "
            f"For this error, you can try setting for_tensor=True.")

    @property  # type: ignore[misc]
    def wrap_tensor_backend(self: torch.nn.utils.rnn.PackedSequence) -> bool:
        return self.data.device.type == custom_backend_name

    _check_register_once(torch.nn.utils.rnn.PackedSequence, f'is_{custom_backend_name}')
    setattr(torch.nn.utils.rnn.PackedSequence, f'is_{custom_backend_name}', wrap_tensor_backend)

    def wrap_module_to(self: torch.nn.utils.rnn.PackedSequence,
                       *args, **kwargs) -> torch.nn.utils.rnn.PackedSequence:
        r"""Move all model parameters and buffers to the custom device.

        This also makes associated parameters and buffers different objects. So
        it should be called before constructing the optimizer if the module will
        live on the device while being optimized.

        .. note::
            This method modifies the module in-place.

        Args:
            device (int, optional): if specified, all parameters will be copied to that device
        """
        # Probe where a plain `.to(*args, **kwargs)` call would place the data;
        # if that is not the custom device, retarget the call to it.
        ex = torch.tensor((), dtype=self.data.dtype, device=self.data.device).to(*args, **kwargs)
        if ex.device.type == custom_backend_name:
            return self.to(*args, **kwargs)
        kwargs.update({'device': custom_backend_name})
        return self.to(*args, **kwargs)

    _check_register_once(torch.nn.utils.rnn.PackedSequence, custom_backend_name)
    setattr(torch.nn.utils.rnn.PackedSequence, custom_backend_name, wrap_module_to)
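
# A minimal sketch (illustrative only, not part of the public API) of what
# the generated Tensor and Module methods look like in use. The "foo" name is
# an assumption; calling this requires a real backend with C++ kernels
# registered under the PrivateUse1 dispatch key.
def _example_use_generated_methods() -> None:
    rename_privateuse1_backend("foo")
    generate_methods_for_privateuse1_backend()
    t = torch.empty(4).foo(0)        # equivalent to t.to(device="foo:0")
    assert t.is_foo                  # the generated is_foo property
    m = torch.nn.Linear(4, 4).foo()  # moves parameters/buffers via _apply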

def _generate_storage_methods_for_privateuse1_backend(
        custom_backend_name: str,
        unsupported_dtype: Optional[list[torch.dtype]] = None) -> None:
    # The attribute is registered on the _StorageBase class, and
    # UntypedStorage obtains it through inheritance.
    @property  # type: ignore[misc]
    def wrap_storage_backend(self: torch.storage._StorageBase) -> bool:
        r"""Return whether the storage lives on the custom device."""
        return self.device.type == custom_backend_name

    _check_register_once(torch.storage._StorageBase, f'is_{custom_backend_name}')
    setattr(torch.storage._StorageBase, f'is_{custom_backend_name}', wrap_storage_backend)

    def wrap_storage_to(self, device=None, non_blocking=False):
        r"""Return a copy of this object in custom device memory.

        If this object is already in device memory and on the correct device, then
        no copy is performed and the original object is returned.

        Args:
            device (int): The destination device id. Defaults to the current device.
            non_blocking (bool): If ``True`` and the source is in pinned memory,
                the copy will be asynchronous with respect to the host. Otherwise,
                the argument has no effect.
        """
        device_idx = _normalization_device(custom_backend_name, device)
        if getattr(self, f'is_{custom_backend_name}') and self.get_device() == device_idx:
            # The storage is already on the expected device.
            return self
        # Sparse storage is not supported here; backends that need it must
        # extend the implementation themselves.
        if self.is_sparse:
            raise RuntimeError(f"Can not support a sparse storage move to {custom_backend_name} backend")
        # Create an untyped storage on the target device and copy the data.
        untyped_storage = torch.UntypedStorage(
            self.size(), device=torch.device(f'{custom_backend_name}:{device_idx}'))
        untyped_storage.copy_(self, non_blocking)
        return untyped_storage

    _check_register_once(torch.storage._StorageBase, custom_backend_name)
    setattr(torch.storage._StorageBase, custom_backend_name, wrap_storage_to)

    # Register the corresponding attribute for the TypedStorage class.
    # When the TypedStorage class is removed, this registration should be
    # removed as well.
    @property  # type: ignore[misc]
    def wrap_typed_storage_backend(self: torch.storage.TypedStorage) -> bool:
        torch.storage._warn_typed_storage_removal()
        return self._untyped_storage.device.type == custom_backend_name

    _check_register_once(torch.storage.TypedStorage, f'is_{custom_backend_name}')
    setattr(torch.storage.TypedStorage, f'is_{custom_backend_name}', wrap_typed_storage_backend)

    def wrap_typed_storage_to(self: torch.storage.TypedStorage,
                              device=None, non_blocking=False, **kwargs) -> torch.storage.TypedStorage:
        torch.storage._warn_typed_storage_removal()
        if unsupported_dtype and self.dtype in unsupported_dtype:
            raise RuntimeError(f"Cannot create {custom_backend_name} storage "
                               f"as {self.dtype} dtype is not supported by this backend")
        custom_backend_storage: torch.UntypedStorage = getattr(
            self._untyped_storage, custom_backend_name)(device, non_blocking, **kwargs)
        return self._new_wrapped_storage(custom_backend_storage)

    _check_register_once(torch.storage.TypedStorage, custom_backend_name)
    setattr(torch.storage.TypedStorage, custom_backend_name, wrap_typed_storage_to)


def generate_methods_for_privateuse1_backend(for_tensor: bool = True, for_module: bool = True,
                                             for_packed_sequence: bool = True,
                                             for_storage: bool = False,
                                             unsupported_dtype: Optional[list[torch.dtype]] = None) -> None:
    r"""
    Automatically generate attributes and methods for the custom backend after renaming the privateuse1 backend.

    In the default scenario, storage-related methods will not be generated automatically.

    Once you have implemented kernels for various torch operations, registered them to the PrivateUse1
    dispatch key, and called torch.utils.rename_privateuse1_backend("foo") to rename your backend,
    you can easily register backend-specific methods and attributes by calling this function,
    such as torch.Tensor.foo(), torch.Tensor.is_foo, torch.Storage.foo(), and torch.Storage.is_foo.

    Note: We recommend using generic functions (checking device equality or calling ``to(device=...)``).
    We provide these methods for convenience only; they will be "monkey patched" onto the objects
    and so will not be properly typed. For the generated Storage methods, if you need to support
    sparse data storage, you need to extend the implementation yourself.

    Args:
        for_tensor (bool): whether to register related methods for the torch.Tensor class.
        for_module (bool): whether to register related methods for the torch.nn.Module class.
        for_packed_sequence (bool): whether to register related methods for the
            torch.nn.utils.rnn.PackedSequence class.
        for_storage (bool): whether to register related methods for the torch.Storage class.
        unsupported_dtype (List[torch.dtype]): takes effect only when the storage methods are generated,
            indicating dtypes that the storage does not support.

    Example::

        >>> # xdoctest: +SKIP("failing")
        >>> torch.utils.rename_privateuse1_backend("foo")
        >>> torch.utils.generate_methods_for_privateuse1_backend()
        # Then automatically generate backend-related attributes and methods.
        >>> a = torch.tensor(2).foo()
        >>> a.is_foo
        >>> hasattr(torch.nn.Module, 'foo')
    """
    custom_backend_name = _get_privateuse1_backend_name()

    if for_tensor:
        _generate_tensor_methods_for_privateuse1_backend(custom_backend_name)

    if for_module:
        _generate_module_methods_for_privateuse1_backend(custom_backend_name)

    if for_storage:
        _generate_storage_methods_for_privateuse1_backend(custom_backend_name, unsupported_dtype)

    if for_packed_sequence:
        _generate_packed_sequence_methods_for_privateuse1_backend(custom_backend_name)
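
# A minimal sketch (illustrative only, not part of the public API) of opting
# in to the storage methods with an unsupported dtype list. The "foo" name and
# the torch.quint8 choice are assumptions for demonstration, and a real
# backend with C++ kernels is required for the copy to succeed.
def _example_generate_storage_methods() -> None:
    rename_privateuse1_backend("foo")
    generate_methods_for_privateuse1_backend(for_storage=True,
                                             unsupported_dtype=[torch.quint8])
    s = torch.empty(4).untyped_storage().foo()  # copies the storage to "foo"
    assert s.is_foo                             # the generated is_foo property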

def _get_custom_mod_func(func_name: str):
    r"""
    Return the func named `func_name` defined in the custom device module. The module is
    registered with `torch.utils.rename_privateuse1_backend('foo')` and
    `torch._register_device_module('foo', BackendModule)`. If the custom device module or
    the func is not defined, a detailed ``RuntimeError`` is raised.

    Args:
        func_name (str): the name of the callable func defined in the custom device module.

    Example::

        class DummyfooModule:
            @staticmethod
            def is_available():
                return True
            @staticmethod
            def func_name(*args, **kwargs):
                ...

        torch.utils.rename_privateuse1_backend("foo")
        torch._register_device_module("foo", DummyfooModule)
        foo_is_available_func = torch.utils.backend_registration._get_custom_mod_func("is_available")
        if foo_is_available_func:
            foo_is_available = foo_is_available_func()
        func_ = torch.utils.backend_registration._get_custom_mod_func("func_name")
        if func_:
            result = func_(*args, **kwargs)

    Attention: This function is not meant to be used directly by users, which is why
    it is marked as private. It is a convenience function for backend implementers to
    more easily call the hooks into their backend extensions.
    """
    assert isinstance(func_name, str), f"func_name must be `str`, but got `{type(func_name)}`."
    backend_name = _get_privateuse1_backend_name()
    custom_device_mod = getattr(torch, backend_name, None)  # type: ignore[arg-type]
    function = getattr(custom_device_mod, func_name, None)  # type: ignore[arg-type]
    if custom_device_mod is None or function is None:
        message = f'Try to call torch.{backend_name}.{func_name}. The backend must register a custom backend '
        message += f"module with `torch._register_device_module('{backend_name}', BackendModule)`. And "
        message += f"BackendModule needs to have the following API's:\n `{func_name}(*args, **kwargs)`. \n"
        raise RuntimeError(message)
    return function
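
# A minimal sketch (illustrative only, not part of the public API) of the
# guarded-call pattern backend implementers can use with _get_custom_mod_func.
# The "manual_seed_all" hook name follows the BackendModule API's listed in
# the rename_privateuse1_backend docstring and is an assumption here.
def _example_call_backend_hook(seed: int = 0) -> None:
    try:
        manual_seed_all = _get_custom_mod_func("manual_seed_all")
    except RuntimeError:
        # The backend module or the hook is not registered; nothing to do.
        return
    manual_seed_all(seed)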