# torch/nn/modules/linear.py
import math
from typing import Any

import torch
from torch import Tensor
from torch.nn import functional as F, init
from torch.nn.parameter import Parameter, UninitializedParameter

from .lazy import LazyModuleMixin
from .module import Module

__all__ = [
    "Bilinear",
    "Identity",
    "LazyLinear",
    "Linear",
]


class Identity(Module):
    r"""A placeholder identity operator that is argument-insensitive.

    Args:
        args: any argument (unused)
        kwargs: any keyword argument (unused)

    Shape:
        - Input: :math:`(*)`, where :math:`*` means any number of dimensions.
        - Output: :math:`(*)`, same shape as the input.

    Examples::

        >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 20])
    """

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__()

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return input


class Linear(Module):
    r"""Applies an affine linear transformation to the incoming data: :math:`y = xA^T + b`.

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    On certain ROCm devices, when using float16 inputs this module will use
    :ref:`different precision<fp16_on_mi200>` for backward.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_\text{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_\text{in} = \text{in\_features}`.
        - Output: :math:`(*, H_\text{out})` where all but the last dimension
          are the same shape as the input and
          :math:`H_\text{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """

    __constants__ = ["in_features", "out_features"]
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(
        self,
        in_features: int,
        out_features: int,
        bias: bool = True,
        device=None,
        dtype=None,
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(
            torch.empty((out_features, in_features), **factory_kwargs)
        )
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter("bias", None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        """Resets parameters based on their initialization used in ``__init__``."""
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return (
            f"in_features={self.in_features}, "
            f"out_features={self.out_features}, bias={self.bias is not None}"
        )
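# --------------------------------------------------------------------------- #
# Illustrative sketch (an addition for this reconstruction, not part of the
# upstream file): ``weight`` is stored as ``(out_features, in_features)``, so
# ``forward`` is equivalent to a plain matmul against ``weight.T`` plus
# ``bias``:
#
#     >>> m = Linear(20, 30)
#     >>> x = torch.randn(128, 20)
#     >>> torch.allclose(m(x), x @ m.weight.T + m.bias, atol=1e-6)
#     True
# --------------------------------------------------------------------------- #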
# This class exists solely to avoid triggering an obscure error when scripting
# an improperly quantized attention layer. See this issue for details:
# https://github.com/pytorch/pytorch/issues/58969
# TODO: fail fast on quantization API usage error, then remove this class
# and replace uses of it with plain Linear
class NonDynamicallyQuantizableLinear(Linear):
    def __init__(
        self,
        in_features: int,
        out_features: int,
        bias: bool = True,
        device=None,
        dtype=None,
    ) -> None:
        super().__init__(
            in_features, out_features, bias=bias, device=device, dtype=dtype
        )


class Bilinear(Module):
    r"""Applies a bilinear transformation to the incoming data: :math:`y = x_1^T A x_2 + b`.

    Args:
        in1_features: size of each first input sample, must be > 0
        in2_features: size of each second input sample, must be > 0
        out_features: size of each output sample, must be > 0
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input1: :math:`(*, H_\text{in1})` where :math:`H_\text{in1}=\text{in1\_features}` and
          :math:`*` means any number of additional dimensions including none. All but the last
          dimension of the inputs should be the same.
        - Input2: :math:`(*, H_\text{in2})` where :math:`H_\text{in2}=\text{in2\_features}`.
        - Output: :math:`(*, H_\text{out})` where :math:`H_\text{out}=\text{out\_features}`
          and all but the last dimension are the same shape as the input.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in1\_features}, \text{in2\_features})`.
            The values are initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`

    Examples::

        >>> m = nn.Bilinear(20, 30, 40)
        >>> input1 = torch.randn(128, 20)
        >>> input2 = torch.randn(128, 30)
        >>> output = m(input1, input2)
        >>> print(output.size())
        torch.Size([128, 40])
    """

    __constants__ = ["in1_features", "in2_features", "out_features"]
    in1_features: int
    in2_features: int
    out_features: int
    weight: Tensor

    def __init__(
        self,
        in1_features: int,
        in2_features: int,
        out_features: int,
        bias: bool = True,
        device=None,
        dtype=None,
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        if in1_features <= 0:
            raise ValueError(f"in1_features must be > 0, but got {in1_features}")
        self.in1_features = in1_features
        self.in2_features = in2_features
        self.out_features = out_features
        self.weight = Parameter(
            torch.empty((out_features, in1_features, in2_features), **factory_kwargs)
        )

        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter("bias", None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        """Resets parameters based on their initialization used in ``__init__``."""
        bound = 1 / math.sqrt(self.weight.size(1))
        init.uniform_(self.weight, -bound, bound)
        if self.bias is not None:
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input1: Tensor, input2: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.bilinear(input1, input2, self.weight, self.bias)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return (
            f"in1_features={self.in1_features}, in2_features={self.in2_features}, "
            f"out_features={self.out_features}, bias={self.bias is not None}"
        )
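# --------------------------------------------------------------------------- #
# Illustrative sketch (an addition for this reconstruction, not part of the
# upstream file): per output feature ``k``, ``Bilinear`` computes the quadratic
# form ``y[b, k] = x1[b] @ weight[k] @ x2[b] + bias[k]``, which an einsum over
# the (out, in1, in2) weight reproduces:
#
#     >>> m = Bilinear(20, 30, 40)
#     >>> x1, x2 = torch.randn(8, 20), torch.randn(8, 30)
#     >>> manual = torch.einsum("bi,kij,bj->bk", x1, m.weight, x2) + m.bias
#     >>> torch.allclose(m(x1, x2), manual, atol=1e-4)
#     True
# --------------------------------------------------------------------------- #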
class LazyLinear(LazyModuleMixin, Linear):
    r"""A :class:`torch.nn.Linear` module where `in_features` is inferred.

    In this module, the `weight` and `bias` are of :class:`torch.nn.UninitializedParameter`
    class. They will be initialized after the first call to ``forward`` is done and the
    module will become a regular :class:`torch.nn.Linear` module. The ``in_features``
    argument of the :class:`Linear` is inferred from the ``input.shape[-1]``.

    Check the :class:`torch.nn.modules.lazy.LazyModuleMixin` for further
    documentation on lazy modules and their limitations.

    Args:
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`
    """

    cls_to_become = Linear  # type: ignore[assignment]
    weight: UninitializedParameter
    bias: UninitializedParameter  # type: ignore[assignment]

    def __init__(
        self, out_features: int, bias: bool = True, device=None, dtype=None
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        # bias is hardcoded to False to avoid creating tensor
        # that will soon be overwritten.
        super().__init__(0, 0, False)
        self.weight = UninitializedParameter(**factory_kwargs)
        self.out_features = out_features
        if bias:
            self.bias = UninitializedParameter(**factory_kwargs)

    def reset_parameters(self) -> None:
        """Resets parameters based on their initialization used in ``__init__``."""
        if not self.has_uninitialized_params() and self.in_features != 0:
            super().reset_parameters()

    def initialize_parameters(self, input) -> None:  # type: ignore[override]
        """Infers ``in_features`` based on ``input`` and initializes parameters."""
        if self.has_uninitialized_params():
            with torch.no_grad():
                self.in_features = input.shape[-1]
                self.weight.materialize((self.out_features, self.in_features))
                if self.bias is not None:
                    self.bias.materialize((self.out_features,))
                self.reset_parameters()
        if self.in_features == 0:
            assert input.shape[-1] == self.weight.shape[-1], (
                f"The in_features inferred from input: {input.shape[-1]}"
                f" is not equal to in_features from self.weight: {self.weight.shape[-1]}"
            )
            self.in_features = input.shape[-1]
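# --------------------------------------------------------------------------- #
# Minimal smoke-test sketch, added for this reconstruction (not part of the
# upstream file). Because of the relative imports above it cannot be run as a
# plain script, but it should run in package context via
# ``python -m torch.nn.modules.linear``.
# --------------------------------------------------------------------------- #
if __name__ == "__main__":
    x = torch.randn(128, 20)

    identity = Identity(54, unused_argument1=0.1)
    assert identity(x) is x  # Identity returns its input unchanged.

    linear = Linear(20, 30)
    out = linear(x)
    assert out.shape == (128, 30)
    # F.linear computes x @ weight.T + bias.
    assert torch.allclose(out, x @ linear.weight.T + linear.bias, atol=1e-5)

    bilinear = Bilinear(20, 30, 40)
    assert bilinear(x, torch.randn(128, 30)).shape == (128, 40)

    lazy = LazyLinear(30)
    out = lazy(x)  # The first call materializes weight with in_features=20.
    assert lazy.in_features == 20 and out.shape == (128, 30)
    assert type(lazy) is Linear  # cls_to_become swapped the instance's class.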