from typing import Any, Union

import torch

from torch.utils._contextlib import (
    _DecoratorContextManager,
    _NoParamDecoratorContextManager,
    F,
)

__all__ = [
    "no_grad",
    "enable_grad",
    "set_grad_enabled",
    "inference_mode",
    "set_multithreading_enabled",
]


class no_grad(_NoParamDecoratorContextManager):
    r"""Context-manager that disables gradient calculation.

    Disabling gradient calculation is useful for inference, when you are sure
    that you will not call :meth:`Tensor.backward()`. It will reduce memory
    consumption for computations that would otherwise have
    `requires_grad=True`.

    In this mode, the result of every computation will have
    `requires_grad=False`, even when the inputs have `requires_grad=True`.
    There is an exception! All factory functions, or functions that create
    a new Tensor and take a requires_grad kwarg, will NOT be affected by
    this mode.

    This context manager is thread local; it will not affect computation
    in other threads.

    Also functions as a decorator.

    .. note::
        No-grad is one of several mechanisms that can enable or disable
        gradients locally; see :ref:`locally-disable-grad-doc` for more
        information on how they compare.

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.
        If you want to disable forward AD for a computation, you can unpack
        your dual tensors.

    Example::
        >>> # xdoctest: +SKIP
        >>> x = torch.tensor([1.], requires_grad=True)
        >>> with torch.no_grad():
        ...     y = x * 2
        >>> y.requires_grad
        False
        >>> @torch.no_grad()
        ... def doubler(x):
        ...     return x * 2
        >>> z = doubler(x)
        >>> z.requires_grad
        False
        >>> @torch.no_grad()
        ... def tripler(x):
        ...     return x * 3
        >>> z = tripler(x)
        >>> z.requires_grad
        False
        >>> # factory function exception
        >>> with torch.no_grad():
        ...     a = torch.nn.Parameter(torch.rand(10))
        >>> a.requires_grad
        True
    """

    def __init__(self) -> None:
        if not torch._jit_internal.is_scripting():
            super().__init__()
        self.prev = False

    def __enter__(self) -> None:
        self.prev = torch.is_grad_enabled()
        torch.set_grad_enabled(False)

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch.set_grad_enabled(self.prev)


class enable_grad(_NoParamDecoratorContextManager):
    r"""Context-manager that enables gradient calculation.

    Enables gradient calculation, if it has been disabled via :class:`~no_grad`
    or :class:`~set_grad_enabled`.

    This context manager is thread local; it will not affect computation in
    other threads.

    Also functions as a decorator.

    .. note::
        enable_grad is one of several mechanisms that can enable or disable
        gradients locally; see :ref:`locally-disable-grad-doc` for more
        information on how they compare.

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.

    Example::
        >>> # xdoctest: +SKIP
        >>> x = torch.tensor([1.], requires_grad=True)
        >>> with torch.no_grad():
        ...     with torch.enable_grad():
        ...         y = x * 2
        >>> y.requires_grad
        True
        >>> y.backward()
        >>> x.grad
        tensor([2.])
        >>> @torch.enable_grad()
        ... def doubler(x):
        ...     return x * 2
        >>> with torch.no_grad():
        ...     z = doubler(x)
        >>> z.requires_grad
        True
        >>> @torch.enable_grad()
        ... def tripler(x):
        ...     return x * 3
        >>> with torch.no_grad():
        ...     z = tripler(x)
        >>> z.requires_grad
        True
    """

    def __enter__(self) -> None:
        self.prev = torch.is_grad_enabled()
        torch._C._set_grad_enabled(True)

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch._C._set_grad_enabled(self.prev)


class set_grad_enabled(_DecoratorContextManager):
    r"""Context-manager that sets gradient calculation on or off.

    ``set_grad_enabled`` will enable or disable grads based on its argument
    :attr:`mode`. It can be used as a context-manager or as a function.

    This context manager is thread local; it will not affect computation
    in other threads.

    Args:
        mode (bool): Flag whether to enable grad (``True``), or disable
                     (``False``). This can be used to conditionally enable
                     gradients.

    .. note::
        set_grad_enabled is one of several mechanisms that can enable or
        disable gradients locally; see :ref:`locally-disable-grad-doc` for more
        information on how they compare.

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.

    Example::
        >>> # xdoctest: +SKIP
        >>> x = torch.tensor([1.], requires_grad=True)
        >>> is_train = False
        >>> with torch.set_grad_enabled(is_train):
        ...     y = x * 2
        >>> y.requires_grad
        False
        >>> _ = torch.set_grad_enabled(True)
        >>> y = x * 2
        >>> y.requires_grad
        True
        >>> _ = torch.set_grad_enabled(False)
        >>> y = x * 2
        >>> y.requires_grad
        False
    """

    def __init__(self, mode: bool) -> None:
        self.prev = torch.is_grad_enabled()
        self.mode = mode
        torch._C._set_grad_enabled(mode)

    def __call__(self, orig_func: F) -> F:
        # Restore the surrounding grad mode; the decorator machinery will
        # re-apply ``self.mode`` around each call of ``orig_func``.
        torch._C._set_grad_enabled(self.prev)
        return super().__call__(orig_func)

    def __enter__(self) -> None:
        torch._C._set_grad_enabled(self.mode)

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch._C._set_grad_enabled(self.prev)

    def __str__(self) -> str:
        return f"{torch.typename(self)}(mode={self.mode})"

    def __repr__(self) -> str:
        return str(self)

    def clone(self) -> "set_grad_enabled":
        r"""
        Create a copy of this class
        """
        return self.__class__(self.mode)


class inference_mode(_DecoratorContextManager):
    r"""Context manager that enables or disables inference mode.

    InferenceMode is analogous to :class:`~no_grad` and should be used when you
    are certain your operations will not interact with autograd (e.g., during
    data loading or model evaluation). Compared to :class:`~no_grad`, it
    removes additional overhead by disabling view tracking and version counter
    bumps. It is also more restrictive, in that tensors created in this mode
    cannot be used in computations recorded by autograd.

    This context manager is thread-local; it does not affect computation in
    other threads.

    Also functions as a decorator.

    .. note::
        Inference mode is one of several mechanisms that can locally enable or
        disable gradients. See :ref:`locally-disable-grad-doc` for a
        comparison. If avoiding the use of tensors created in inference mode in
        autograd-tracked regions is difficult, consider benchmarking your code
        with and without inference mode to weigh the performance benefits
        against the trade-offs. You can always use :class:`~no_grad` instead.

    .. note::
        Unlike some other mechanisms that locally enable or disable grad,
        entering inference_mode also disables
        :ref:`forward-mode AD <forward-mode-ad>`.

    Args:
        mode (bool or function): Either a boolean flag to enable or disable
            inference mode, or a Python function to decorate with inference
            mode enabled.

    Example::
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_AUTOGRAD)
        >>> import torch
        >>> x = torch.ones(1, 2, 3, requires_grad=True)
        >>> with torch.inference_mode():
        ...     y = x * x
        >>> y.requires_grad
        False
        >>> # xdoctest: +SKIP("want string isn't quite right")
        >>> y._version
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        RuntimeError: Inference tensors do not track version counter.
        >>> @torch.inference_mode()
        ... def func(x):
        ...     return x * x
        >>> out = func(x)
        >>> out.requires_grad
        False
        >>> @torch.inference_mode()
        ... def doubler(x):
        ...     return x * 2
        >>> out = doubler(x)
        >>> out.requires_grad
        False
    """

    def __init__(self, mode: bool = True) -> None:
        if not torch._jit_internal.is_scripting():
            super().__init__()
        self.mode = mode

    def __new__(cls, mode=True):
        # ``mode`` may be a bool (use as a context manager or parameterized
        # decorator) or a function (use directly as a bare decorator).
        if isinstance(mode, bool):
            return super().__new__(cls)
        return cls()(mode)

    def __enter__(self) -> None:
        self._inference_mode_context = torch._C._InferenceMode(self.mode)
        self._inference_mode_context.__enter__()

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        self._inference_mode_context.__exit__(exc_type, exc_value, traceback)

    def clone(self) -> "inference_mode":
        r"""
        Create a copy of this class
        """
        return self.__class__(self.mode)


def _enter_inference_mode(mode):
    mode_context = torch._C._InferenceMode(mode)
    mode_context.__enter__()
    return mode_context


def _exit_inference_mode(mode):
    mode.__exit__(None, None, None)


class set_multithreading_enabled(_DecoratorContextManager):
    r"""Context-manager that sets multithreaded backwards on or off.

    ``set_multithreading_enabled`` will enable or disable multithreaded
    backwards based on its argument :attr:`mode`. It can be used as a
    context-manager or as a function.

    This context manager is thread local; it will not affect computation in
    other threads.

    Args:
        mode (bool): Flag whether to enable multithreaded backwards
                     (``True``), or disable (``False``).

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.
    """

    def __init__(self, mode: bool) -> None:
        self.prev = torch._C._is_multithreading_enabled()
        torch._C._set_multithreading_enabled(mode)
        self.mode = mode

    def __enter__(self) -> None:
        pass

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch._C._set_multithreading_enabled(self.prev)

    def clone(self) -> "set_multithreading_enabled":
        r"""
        Create a copy of this class
        """
        return self.__class__(self.mode)


class _force_original_view_tracking(_DecoratorContextManager):
    r"""Context-manager that sets whether or not to always enable view-replay in autograd.

    ``set_view_replay_enabled`` will enable or disable view-replay based on its
    argument :attr:`mode`. It can be used as a context-manager or as a
    function.

    This context manager is thread local; it will not affect computation in
    other threads.

    When a tensor view is mutated, the autograd engine needs to decide whether
    or not to regenerate the "updated view" by either replaying the chain of
    views from the updated base, or with a single call to as_strided. If
    set_view_replay_enabled is set to True, then autograd will always use view
    replay. Otherwise, it will fall back to its existing logic.

    Args:
        mode (bool): Flag whether to enable view-replay (``True``), or disable
                     (``False``).
    """

    def __init__(self, mode: bool) -> None:
        self.prev = torch._C._is_view_replay_enabled()
        torch._C._set_view_replay_enabled(mode)
        self.mode = mode

    def __enter__(self) -> None:
        pass

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch._C._set_view_replay_enabled(self.prev)

    def clone(self):
        return self.__class__(self.mode)


class _unsafe_preserve_version_counter(_DecoratorContextManager):
    r"""DO NOT USE THIS UNLESS YOU KNOW EXACTLY WHAT YOU'RE DOING.

    This context manager can lead to arbitrary silent-correctness issues in any
    other part of your code (even the ones not touched directly by the context
    manager)!

    Ordinarily, autograd will track mutations to tensors by incrementing its
    `._version` attribute. This is generally important for correctness, as,
    for example, mutating a tensor that autograd has saved for the backwards
    pass can result in incorrect gradients, and autograd uses the version
    counter to detect and error out in this situation.

    However, there are rare instances where it might be useful to hide
    mutations from autograd.
    For example: if a tensor is very large, and you'd like to free its memory
    by storing it elsewhere, and re-populate the tensor right before it is
    needed by autograd.

    Args:
        tensors (torch.Tensor or tuple of torch.Tensor): the tensor(s) whose
            version counters you would like to preserve.

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.
    """

    def __init__(
        self, tensors: Union[torch.Tensor, tuple[torch.Tensor, ...]]
    ) -> None:
        self.tensors = (tensors,) if isinstance(tensors, torch.Tensor) else tensors
        assert isinstance(self.tensors, tuple)
        self.prev_versions = tuple(t._version for t in self.tensors)

    def __enter__(self) -> None:
        pass

    def __exit__(self, *args) -> None:
        # Roll the version counters back to the values recorded on entry, so
        # mutations performed inside the block stay invisible to autograd.
        torch._C._autograd._unsafe_set_version_counter(
            self.tensors, self.prev_versions
        )
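

# ---------------------------------------------------------------------------
# Illustrative usage sketch (an addition, not part of the upstream module):
# a minimal, hedged demonstration of how the context managers above compose,
# mirroring the docstring examples. It only runs when this file is executed
# directly, so importing the module is unaffected.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    x = torch.ones(3, requires_grad=True)

    # Nested managers: the innermost one decides the grad mode.
    with no_grad():
        with enable_grad():
            y = x * 2
    print(y.requires_grad)  # True

    # set_grad_enabled also works as a plain function call; save and restore
    # the previous state manually in that case.
    prev = torch.is_grad_enabled()
    set_grad_enabled(False)
    z = x * 2
    print(z.requires_grad)  # False
    set_grad_enabled(prev)

    # Tensors created under inference_mode do not require grad and cannot be
    # used later in autograd-recorded computations.
    with inference_mode():
        w = x * 2
    print(w.requires_grad)  # False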