from typing import Any, Optional

import torch
from torch import nn

from torch.ao.quantization import QConfig

__all__ = ["QuantStub", "DeQuantStub", "QuantWrapper"]


class QuantStub(nn.Module):
    r"""Quantize stub module. Before calibration this is the same as an observer;
    it will be swapped for `nnq.Quantize` in `convert`.

    Args:
        qconfig: quantization configuration for the tensor;
            if qconfig is not provided, the qconfig of the parent modules is used
    """

    def __init__(self, qconfig: Optional[QConfig] = None) -> None:
        super().__init__()
        if qconfig:
            self.qconfig = qconfig

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x


class DeQuantStub(nn.Module):
    r"""Dequantize stub module. Before calibration this is the same as identity;
    it will be swapped for `nnq.DeQuantize` in `convert`.

    Args:
        qconfig: quantization configuration for the tensor;
            if qconfig is not provided, the qconfig of the parent modules is used
    """

    def __init__(self, qconfig: Optional[Any] = None) -> None:
        super().__init__()
        if qconfig:
            self.qconfig = qconfig

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x


class QuantWrapper(nn.Module):
    r"""A wrapper class that wraps the input module, adds QuantStub and
    DeQuantStub, and surrounds the call to the module with calls to the quant
    and dequant modules.

    This is used by the `quantization` utility functions to add the quant and
    dequant modules. Before `convert`, `QuantStub` is just an observer: it
    observes the input tensor. After `convert`, `QuantStub` is swapped for
    `nnq.Quantize`, which does the actual quantization. Similarly for
    `DeQuantStub`.
    """

    quant: QuantStub
    dequant: DeQuantStub
    module: nn.Module

    def __init__(self, module: nn.Module) -> None:
        super().__init__()
        qconfig = getattr(module, "qconfig", None)
        self.add_module("quant", QuantStub(qconfig))
        self.add_module("dequant", DeQuantStub(qconfig))
        self.add_module("module", module)
        self.train(module.training)

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        X = self.quant(X)
        X = self.module(X)
        return self.dequant(X)
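

# ---------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the upstream module). It shows
# the typical eager-mode pattern: QuantStub/DeQuantStub mark where tensors
# cross the float/quantized boundary inside a model's forward, and QuantWrapper
# wraps an existing float module so quant/dequant surround its forward call.
# The toy model, layer sizes, and tensor shapes below are assumptions made for
# this example.
if __name__ == "__main__":

    class _ToyModel(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.quant = QuantStub()  # pass-through now, nnq.Quantize after convert
            self.conv = nn.Conv2d(1, 1, 1)
            self.dequant = DeQuantStub()  # pass-through now, nnq.DeQuantize after convert

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.quant(x)
            x = self.conv(x)
            return self.dequant(x)

    # Before calibration/convert the stubs are identity ops, so the model
    # behaves exactly like its float counterpart.
    model = _ToyModel()
    out = model(torch.randn(1, 1, 4, 4))

    # Alternatively, wrap an existing float module; QuantWrapper picks up the
    # module's qconfig (if any) and routes its input and output through the
    # quant/dequant stubs.
    wrapped = QuantWrapper(nn.Conv2d(1, 1, 1))
    out_wrapped = wrapped(torch.randn(1, 1, 4, 4))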