from typing import Any, Optional
import enum

import torch
from torch._C import _to_dlpack as to_dlpack
from torch.types import Device as _Device

__all__ = [
    "DLDeviceType",
    "from_dlpack",
    "to_dlpack",
]


class DLDeviceType(enum.IntEnum):
    # Device type codes as in the DLPack specification (aten/src/ATen/dlpack.h).
    kDLCPU = 1
    kDLCUDA = 2
    kDLCUDAHost = 3
    kDLOpenCL = 4
    kDLVulkan = 7
    kDLMetal = 8
    kDLVPI = 9
    kDLROCM = 10
    kDLROCMHost = 11
    kDLExtDev = 12
    kDLCUDAManaged = 13
    kDLOneAPI = 14
    kDLWebGPU = 15
    kDLHexagon = 16
    kDLMAIA = 17


torch._C._add_docstr(
    to_dlpack,
    r"""to_dlpack(tensor) -> PyCapsule

Returns an opaque object (a "DLPack capsule") representing the tensor.

.. note::
  ``to_dlpack`` is a legacy DLPack interface. The capsule it returns
  cannot be used for anything in Python other than as input to
  ``from_dlpack``. The more idiomatic use of DLPack is to call
  ``from_dlpack`` directly on the tensor object - this works when that
  object has a ``__dlpack__`` method, which PyTorch and most other
  libraries indeed have now.

.. warning::
  Only call ``from_dlpack`` once per capsule produced with ``to_dlpack``.
  Behavior when a capsule is consumed multiple times is undefined.

Args:
    tensor: a tensor to be exported

The DLPack capsule shares the tensor's memory.
""",
)


def from_dlpack(
    ext_tensor: Any,
    *,
    device: Optional[_Device] = None,
    copy: Optional[bool] = None,
) -> "torch.Tensor":
    """from_dlpack(ext_tensor) -> Tensor

    Converts a tensor from an external library into a ``torch.Tensor``.

    The returned PyTorch tensor will share the memory with the input tensor
    (which may have come from another library). Note that in-place operations
    will therefore also affect the data of the input tensor. This may lead to
    unexpected issues (e.g., other libraries may have read-only flags or
    immutable data structures), so the user should only do this if they know
    for sure that this is fine.

    Args:
        ext_tensor (object with ``__dlpack__`` attribute, or a DLPack capsule):
            The tensor or DLPack capsule to convert.

            If ``ext_tensor`` is a tensor (or ndarray) object, it must support
            the ``__dlpack__`` protocol (i.e., have a ``ext_tensor.__dlpack__``
            method). Otherwise ``ext_tensor`` may be a DLPack capsule, which is
            an opaque ``PyCapsule`` instance, typically produced by a
            ``to_dlpack`` function or method.

        device (torch.device or str or None): An optional PyTorch device on
            which to place the new tensor. If None (default), the new tensor
            is placed on the same device as ``ext_tensor``.

        copy (bool or None): An optional boolean indicating whether or not to
            copy ``ext_tensor``. If None (default), PyTorch copies only if
            necessary.

    Examples::

        >>> import torch.utils.dlpack
        >>> t = torch.arange(4)

        # Convert a tensor directly (supported in PyTorch >= 1.10)
        >>> t2 = torch.from_dlpack(t)
        >>> t2[:2] = -1  # show that memory is shared
        >>> t2
        tensor([-1, -1,  2,  3])
        >>> t
        tensor([-1, -1,  2,  3])

        # The old-style DLPack usage, with an intermediate capsule object
        >>> capsule = torch.utils.dlpack.to_dlpack(t)
        >>> capsule
        <capsule object "dltensor" at ...>
        >>> t3 = torch.from_dlpack(capsule)
        >>> t3
        tensor([-1, -1,  2,  3])
        >>> t3[0] = -9  # now we're sharing memory between 3 tensors
        >>> t3
        tensor([-9, -1,  2,  3])
        >>> t2
        tensor([-9, -1,  2,  3])
        >>> t
        tensor([-9, -1,  2,  3])
    """
    if hasattr(ext_tensor, "__dlpack__"):
        # The producer implements the __dlpack__ protocol: build the keyword
        # arguments for the call, then ask it for a DLPack capsule.
        kwargs: dict[str, Any] = {}
        kwargs["max_version"] = (1, 0)

        if copy is not None:
            kwargs["copy"] = copy

        if device is not None:
            if isinstance(device, str):
                device = torch.device(device)
            assert isinstance(device, torch.device), (
                f"from_dlpack: unsupported device type: {type(device)}"
            )
            # Translate the torch.device into the (device_type, device_id)
            # pair expected by the DLPack 1.0 `dl_device` keyword.
            kwargs["dl_device"] = torch._C._torchDeviceToDLDevice(device)

        ext_device = ext_tensor.__dlpack_device__()
        # If the producer lives on CUDA or ROCm, pass the current stream so
        # that the exchange is synchronized with outstanding work.
        if ext_device[0] in (DLDeviceType.kDLCUDA, DLDeviceType.kDLROCM):
            stream = torch.cuda.current_stream(f"cuda:{ext_device[1]}")
            # `cuda_stream` is the raw stream pointer. The array API requires
            # the CUDA legacy default stream to be signalled with the value 1,
            # so translate a null pointer accordingly.
            is_cuda = ext_device[0] == DLDeviceType.kDLCUDA
            stream_ptr = (
                1 if is_cuda and stream.cuda_stream == 0 else stream.cuda_stream
            )
            kwargs["stream"] = stream_ptr

        try:
            # Try the versioned (DLPack 1.0) protocol first.
            dlpack = ext_tensor.__dlpack__(**kwargs)
        except TypeError:
            # Older producers do not understand `max_version`; retry without it.
            kwargs.pop("max_version")
            dlpack = ext_tensor.__dlpack__(**kwargs)
    else:
        # `ext_tensor` is already a DLPack capsule (the legacy path).
        assert device is None and copy is None, (
            "device and copy kwargs not supported when ext_tensor is "
            "already a DLPack capsule."
        )
        dlpack = ext_tensor
    return torch._C._from_dlpack(dlpack)
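
# ---------------------------------------------------------------------------
# Usage sketch: an illustrative addition, not part of the original
# torch.utils.dlpack module. It demonstrates the two exchange paths described
# in the docstrings above (the idiomatic ``__dlpack__`` protocol and the
# legacy ``to_dlpack`` capsule), assuming NumPy >= 1.22 for ``np.from_dlpack``
# and ``ndarray.__dlpack__``, and CPU tensors only.
if __name__ == "__main__":
    import numpy as np

    # Idiomatic path: from_dlpack accepts any producer that implements
    # __dlpack__, such as a NumPy array. No data is copied.
    arr = np.arange(4, dtype=np.int64)
    t = from_dlpack(arr)
    t[0] = -1  # visible through `arr` as well, since the memory is shared
    assert arr[0] == -1

    # The protocol also works in the other direction: NumPy consumes the
    # torch.Tensor through the tensor's __dlpack__ method.
    shared = np.from_dlpack(torch.arange(3))
    print("shared with torch:", arr, "| shared with numpy:", shared)

    # Legacy path: a capsule from to_dlpack may be consumed exactly once.
    capsule = to_dlpack(torch.ones(2))
    print("from capsule:", from_dlpack(capsule))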