"""Autograd anomaly mode."""
import warnings

import torch

__all__ = ["detect_anomaly", "set_detect_anomaly"]


class detect_anomaly:
    r"""Context-manager that enables anomaly detection for the autograd engine.

    This does two things:

    - Running the forward pass with detection enabled allows the backward
      pass to print the traceback of the forward operation that created the
      failing backward function.
    - If ``check_nan`` is ``True``, any backward computation that generates
      "nan" values will raise an error. Default ``True``.

    .. warning::
        This mode should be enabled only for debugging, as the extra checks
        will slow down your program execution.

    Example:
        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_ANOMALY)
        >>> import torch
        >>> from torch import autograd
        >>> class MyFunc(autograd.Function):
        ...     @staticmethod
        ...     def forward(ctx, inp):
        ...         return inp.clone()
        ...
        ...     @staticmethod
        ...     def backward(ctx, gO):
        ...         # Error during the backward pass
        ...         raise RuntimeError("Some error in backward")
        ...         return gO.clone()
        >>> def run_fn(a):
        ...     out = MyFunc.apply(a)
        ...     return out.sum()
        >>> inp = torch.rand(10, 10, requires_grad=True)
        >>> out = run_fn(inp)
        >>> out.backward()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/your/pytorch/install/torch/_tensor.py", line 93, in backward
            torch.autograd.backward(self, gradient, retain_graph, create_graph)
          File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
            allow_unreachable=True)  # allow_unreachable flag
          File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
            return self._forward_cls.backward(self, *args)
          File "<stdin>", line 8, in backward
        RuntimeError: Some error in backward
        >>> with autograd.detect_anomaly():
        ...     inp = torch.rand(10, 10, requires_grad=True)
        ...     out = run_fn(inp)
        ...     out.backward()
        Traceback of forward call that caused the error:
          File "tmp.py", line 53, in <module>
            out = run_fn(inp)
          File "tmp.py", line 44, in run_fn
            out = MyFunc.apply(a)
        Traceback (most recent call last):
          File "<stdin>", line 4, in <module>
          File "/your/pytorch/install/torch/_tensor.py", line 93, in backward
            torch.autograd.backward(self, gradient, retain_graph, create_graph)
          File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
            allow_unreachable=True)  # allow_unreachable flag
          File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
            return self._forward_cls.backward(self, *args)
          File "<stdin>", line 8, in backward
        RuntimeError: Some error in backward

    """

    def __init__(self, check_nan: bool = True) -> None:
        # Save the current global state so __exit__ can restore it.
        self.prev = torch.is_anomaly_enabled()
        self.check_nan = check_nan
        self.prev_check_nan = torch.is_anomaly_check_nan_enabled()
        warnings.warn(
            "Anomaly Detection has been enabled. "
            "This mode will increase the runtime "
            "and should only be enabled for debugging.",
            stacklevel=2,
        )

    def __enter__(self) -> None:
        torch.set_anomaly_enabled(True, self.check_nan)

    def __exit__(self, *args: object) -> None:
        torch.set_anomaly_enabled(self.prev, self.prev_check_nan)


class set_detect_anomaly:
    r"""Context-manager that sets the anomaly detection for the autograd engine on or off.

    ``set_detect_anomaly`` will enable or disable the autograd anomaly
    detection based on its argument :attr:`mode`. It can be used as a
    context-manager or as a function.

    See ``detect_anomaly`` above for details of the anomaly detection
    behaviour.

    Args:
        mode (bool): Flag whether to enable anomaly detection (``True``),
                     or disable (``False``).
        check_nan (bool): Flag whether to raise an error when the backward
                          pass generates "nan" values. Default ``True``.

    """

    def __init__(self, mode: bool, check_nan: bool = True) -> None:
        # Unlike ``detect_anomaly``, the state change happens at construction
        # time, which is what makes plain-function usage possible.
        self.prev = torch.is_anomaly_enabled()
        self.prev_check_nan = torch.is_anomaly_check_nan_enabled()
        torch.set_anomaly_enabled(mode, check_nan)

    def __enter__(self) -> None:
        pass

    def __exit__(self, *args: object) -> None:
        torch.set_anomaly_enabled(self.prev, self.prev_check_nan)
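
# ---------------------------------------------------------------------------
# Minimal usage sketch (an illustrative addition, not part of the upstream
# module): demonstrates both the context-manager and plain-function forms of
# the classes above. Assumes a working PyTorch install and that anomaly
# detection starts out disabled, as it does in a fresh interpreter.
if __name__ == "__main__":
    # Context-manager form: detection is active only inside the block, and
    # the previous global state is restored on exit.
    with detect_anomaly():
        x = torch.rand(3, requires_grad=True)
        x.sum().backward()
    assert not torch.is_anomaly_enabled()  # restored (assuming it started off)

    # Function form: set_detect_anomaly flips the global flag in __init__, so
    # it takes effect immediately and stays set until changed again.
    set_detect_anomaly(True)
    assert torch.is_anomaly_enabled()
    set_detect_anomaly(False)
    assert not torch.is_anomaly_enabled()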