from collections.abc import Sequence

import torch.nn.functional as F
from torch import Tensor
from torch.nn.common_types import _size_2_t, _size_4_t, _size_6_t

from .module import Module
from .utils import _ntuple, _pair, _quadruple


__all__ = [
    "CircularPad1d",
    "CircularPad2d",
    "CircularPad3d",
    "ConstantPad1d",
    "ConstantPad2d",
    "ConstantPad3d",
    "ReflectionPad1d",
    "ReflectionPad2d",
    "ReflectionPad3d",
    "ReplicationPad1d",
    "ReplicationPad2d",
    "ReplicationPad3d",
    "ZeroPad1d",
    "ZeroPad2d",
    "ZeroPad3d",
]


class _CircularPadNd(Module):
    __constants__ = ["padding"]
    padding: Sequence[int]

    def _check_input_dim(self, input):
        raise NotImplementedError

    def forward(self, input: Tensor) -> Tensor:
        self._check_input_dim(input)
        return F.pad(input, self.padding, "circular")

    def extra_repr(self) -> str:
        return f"{self.padding}"


class CircularPad1d(_CircularPadNd):
    r"""Pads the input tensor using circular padding of the input boundary.

    Tensor values at the beginning of the dimension are used to pad the end,
    and values at the end are used to pad the beginning. If negative padding is
    applied then the ends of the tensor get removed.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 2-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`)

    Note that padding size should be less than or equal to the corresponding
    input dimension.

    Shape:
        - Input: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.
        - Output: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`, where

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("not sure why xdoctest is choking on this")
        >>> m = nn.CircularPad1d(2)
        >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
        >>> input
        tensor([[[0., 1., 2., 3.],
                 [4., 5., 6., 7.]]])
        >>> m(input)
        tensor([[[2., 3., 0., 1., 2., 3., 0., 1.],
                 [6., 7., 4., 5., 6., 7., 4., 5.]]])
        >>> # using different paddings for different sides
        >>> m = nn.CircularPad1d((3, 1))
        >>> m(input)
        tensor([[[1., 2., 3., 0., 1., 2., 3., 0.],
                 [5., 6., 7., 4., 5., 6., 7., 4.]]])
    """

    padding: tuple[int, int]

    def __init__(self, padding: _size_2_t) -> None:
        super().__init__()
        self.padding = _pair(padding)

    def _check_input_dim(self, input):
        if input.dim() != 2 and input.dim() != 3:
            raise ValueError(f"expected 2D or 3D input (got {input.dim()}D input)")


class CircularPad2d(_CircularPadNd):
    r"""Pads the input tensor using circular padding of the input boundary.

    Tensor values at the beginning of the dimension are used to pad the end,
    and values at the end are used to pad the beginning. If negative padding is
    applied then the ends of the tensor get removed.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 4-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`)

    Note that padding size should be less than or equal to the corresponding
    input dimension.

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("not sure why xdoctest is choking on this")
        >>> m = nn.CircularPad2d(2)
        >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
        >>> input
        tensor([[[[0., 1., 2.],
                  [3., 4., 5.],
                  [6., 7., 8.]]]])
        >>> m(input)
        tensor([[[[4., 5., 3., 4., 5., 3., 4.],
                  [7., 8., 6., 7., 8., 6., 7.],
                  [1., 2., 0., 1., 2., 0., 1.],
                  [4., 5., 3., 4., 5., 3., 4.],
                  [7., 8., 6., 7., 8., 6., 7.],
                  [1., 2., 0., 1., 2., 0., 1.],
                  [4., 5., 3., 4., 5., 3., 4.]]]])
        >>> # using different paddings for different sides
        >>> m = nn.CircularPad2d((1, 1, 2, 0))
        >>> m(input)
        tensor([[[[5., 3., 4., 5., 3.],
                  [8., 6., 7., 8., 6.],
                  [2., 0., 1., 2., 0.],
                  [5., 3., 4., 5., 3.],
                  [8., 6., 7., 8., 6.]]]])
    """

    padding: tuple[int, int, int, int]

    def __init__(self, padding: _size_4_t) -> None:
        super().__init__()
        self.padding = _quadruple(padding)

    def _check_input_dim(self, input):
        if input.dim() != 3 and input.dim() != 4:
            raise ValueError(f"expected 3D or 4D input (got {input.dim()}D input)")

class CircularPad3d(_CircularPadNd):
    r"""Pads the input tensor using circular padding of the input boundary.

    Tensor values at the beginning of the dimension are used to pad the end,
    and values at the end are used to pad the beginning. If negative padding is
    applied then the ends of the tensor get removed.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 6-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`,
            :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)

    Note that padding size should be less than or equal to the corresponding
    input dimension.

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}`

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.CircularPad3d(3)
        >>> input = torch.randn(16, 3, 8, 320, 480)
        >>> output = m(input)
        >>> # using different paddings for different sides
        >>> m = nn.CircularPad3d((3, 3, 6, 6, 1, 1))
        >>> output = m(input)
    """

    padding: tuple[int, int, int, int, int, int]

    def __init__(self, padding: _size_6_t) -> None:
        super().__init__()
        self.padding = _ntuple(6)(padding)

    def _check_input_dim(self, input):
        if input.dim() != 4 and input.dim() != 5:
            raise ValueError(f"expected 4D or 5D input (got {input.dim()}D input)")


class _ConstantPadNd(Module):
    __constants__ = ["padding", "value"]
    value: float
    padding: Sequence[int]

    def __init__(self, value: float) -> None:
        super().__init__()
        self.value = value

    def forward(self, input: Tensor) -> Tensor:
        return F.pad(input, self.padding, "constant", self.value)

    def extra_repr(self) -> str:
        return f"padding={self.padding}, value={self.value}"


class ConstantPad1d(_ConstantPadNd):
    r"""Pads the input tensor boundaries with a constant value.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in both boundaries. If a 2-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`)

    Shape:
        - Input: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.
        - Output: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`, where

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.ConstantPad1d(2, 3.5)
        >>> input = torch.randn(1, 2, 4)
        >>> input
        tensor([[[-1.0491, -0.7152, -0.0749,  0.8530],
                 [-1.3287,  1.8966,  0.1466, -0.2771]]])
        >>> m(input)
        tensor([[[ 3.5000,  3.5000, -1.0491, -0.7152, -0.0749,  0.8530,  3.5000,
                   3.5000],
                 [ 3.5000,  3.5000, -1.3287,  1.8966,  0.1466, -0.2771,  3.5000,
                   3.5000]]])
        >>> m = nn.ConstantPad1d(2, 3.5)
        >>> input = torch.randn(1, 2, 3)
        >>> input
        tensor([[[ 1.6616,  1.4523, -1.1255],
                 [-3.6372,  0.1182, -1.8652]]])
        >>> m(input)
        tensor([[[ 3.5000,  3.5000,  1.6616,  1.4523, -1.1255,  3.5000,  3.5000],
                 [ 3.5000,  3.5000, -3.6372,  0.1182, -1.8652,  3.5000,  3.5000]]])
        >>> # using different paddings for different sides
        >>> m = nn.ConstantPad1d((3, 1), 3.5)
        >>> m(input)
        tensor([[[ 3.5000,  3.5000,  3.5000,  1.6616,  1.4523, -1.1255,  3.5000],
                 [ 3.5000,  3.5000,  3.5000, -3.6372,  0.1182, -1.8652,  3.5000]]])
    """

    padding: tuple[int, int]

    def __init__(self, padding: _size_2_t, value: float):
        super().__init__(value)
        self.padding = _pair(padding)

class ConstantPad2d(_ConstantPadNd):
    r"""Pads the input tensor boundaries with a constant value.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 4-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`)

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.ConstantPad2d(2, 3.5)
        >>> input = torch.randn(1, 2, 2)
        >>> input
        tensor([[[ 1.6585,  0.4320],
                 [-0.8701, -0.4649]]])
        >>> m(input)
        tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  1.6585,  0.4320,  3.5000,  3.5000],
                 [ 3.5000,  3.5000, -0.8701, -0.4649,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
        >>> # using different paddings for different sides
        >>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
        >>> m(input)
        tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
                 [ 3.5000,  3.5000,  3.5000,  1.6585,  0.4320],
                 [ 3.5000,  3.5000,  3.5000, -0.8701, -0.4649],
                 [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
    """

    __constants__ = ["padding", "value"]
    padding: tuple[int, int, int, int]

    def __init__(self, padding: _size_4_t, value: float) -> None:
        super().__init__(value)
        self.padding = _quadruple(padding)


class ConstantPad3d(_ConstantPadNd):
    r"""Pads the input tensor boundaries with a constant value.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 6-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`,
            :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}`

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> m = nn.ConstantPad3d(3, 3.5)
        >>> input = torch.randn(16, 3, 10, 20, 30)
        >>> output = m(input)
        >>> # using different paddings for different sides
        >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)
        >>> output = m(input)
    """

    padding: tuple[int, int, int, int, int, int]

    def __init__(self, padding: _size_6_t, value: float) -> None:
        super().__init__(value)
        self.padding = _ntuple(6)(padding)


class _ReflectionPadNd(Module):
    __constants__ = ["padding"]
    padding: Sequence[int]

    def forward(self, input: Tensor) -> Tensor:
        return F.pad(input, self.padding, "reflect")

    def extra_repr(self) -> str:
        return f"{self.padding}"

class ReflectionPad1d(_ReflectionPadNd):
    r"""Pads the input tensor using the reflection of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 2-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`)

    Note that padding size should be less than the corresponding input dimension.

    Shape:
        - Input: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.
        - Output: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`, where

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> m = nn.ReflectionPad1d(2)
        >>> # xdoctest: +IGNORE_WANT("other tests seem to modify printing styles")
        >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
        >>> input
        tensor([[[0., 1., 2., 3.],
                 [4., 5., 6., 7.]]])
        >>> m(input)
        tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],
                 [6., 5., 4., 5., 6., 7., 6., 5.]]])
        >>> # using different paddings for different sides
        >>> m = nn.ReflectionPad1d((3, 1))
        >>> m(input)
        tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],
                 [7., 6., 5., 4., 5., 6., 7., 6.]]])
    """

    padding: tuple[int, int]

    def __init__(self, padding: _size_2_t) -> None:
        super().__init__()
        self.padding = _pair(padding)


class ReflectionPad2d(_ReflectionPadNd):
    r"""Pads the input tensor using the reflection of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 4-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`)

    Note that padding size should be less than the corresponding input dimension.

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("not sure why xdoctest is choking on this")
        >>> m = nn.ReflectionPad2d(2)
        >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
        >>> input
        tensor([[[[0., 1., 2.],
                  [3., 4., 5.],
                  [6., 7., 8.]]]])
        >>> m(input)
        tensor([[[[8., 7., 6., 7., 8., 7., 6.],
                  [5., 4., 3., 4., 5., 4., 3.],
                  [2., 1., 0., 1., 2., 1., 0.],
                  [5., 4., 3., 4., 5., 4., 3.],
                  [8., 7., 6., 7., 8., 7., 6.],
                  [5., 4., 3., 4., 5., 4., 3.],
                  [2., 1., 0., 1., 2., 1., 0.]]]])
        >>> # using different paddings for different sides
        >>> m = nn.ReflectionPad2d((1, 1, 2, 0))
        >>> m(input)
        tensor([[[[7., 6., 7., 8., 7.],
                  [4., 3., 4., 5., 4.],
                  [1., 0., 1., 2., 1.],
                  [4., 3., 4., 5., 4.],
                  [7., 6., 7., 8., 7.]]]])
    """

    padding: tuple[int, int, int, int]

    def __init__(self, padding: _size_4_t) -> None:
        super().__init__()
        self.padding = _quadruple(padding)

class ReflectionPad3d(_ReflectionPadNd):
    r"""Pads the input tensor using the reflection of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 6-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`,
            :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)

    Note that padding size should be less than the corresponding input dimension.

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}`

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("not sure why xdoctest is choking on this")
        >>> m = nn.ReflectionPad3d(1)
        >>> input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2)
        >>> m(input)
        tensor([[[[[7., 6., 7., 6.],
                   [5., 4., 5., 4.],
                   [7., 6., 7., 6.],
                   [5., 4., 5., 4.]],
                  [[3., 2., 3., 2.],
                   [1., 0., 1., 0.],
                   [3., 2., 3., 2.],
                   [1., 0., 1., 0.]],
                  [[7., 6., 7., 6.],
                   [5., 4., 5., 4.],
                   [7., 6., 7., 6.],
                   [5., 4., 5., 4.]],
                  [[3., 2., 3., 2.],
                   [1., 0., 1., 0.],
                   [3., 2., 3., 2.],
                   [1., 0., 1., 0.]]]]])
    """

    padding: tuple[int, int, int, int, int, int]

    def __init__(self, padding: _size_6_t) -> None:
        super().__init__()
        self.padding = _ntuple(6)(padding)


class _ReplicationPadNd(Module):
    __constants__ = ["padding"]
    padding: Sequence[int]

    def forward(self, input: Tensor) -> Tensor:
        return F.pad(input, self.padding, "replicate")

    def extra_repr(self) -> str:
        return f"{self.padding}"


class ReplicationPad1d(_ReplicationPadNd):
    r"""Pads the input tensor using replication of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 2-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`)

    Shape:
        - Input: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.
        - Output: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`, where

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("not sure why xdoctest is choking on this")
        >>> m = nn.ReplicationPad1d(2)
        >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
        >>> input
        tensor([[[0., 1., 2., 3.],
                 [4., 5., 6., 7.]]])
        >>> m(input)
        tensor([[[0., 0., 0., 1., 2., 3., 3., 3.],
                 [4., 4., 4., 5., 6., 7., 7., 7.]]])
        >>> # using different paddings for different sides
        >>> m = nn.ReplicationPad1d((3, 1))
        >>> m(input)
        tensor([[[0., 0., 0., 0., 1., 2., 3., 3.],
                 [4., 4., 4., 4., 5., 6., 7., 7.]]])
    """

    padding: tuple[int, int]

    def __init__(self, padding: _size_2_t) -> None:
        super().__init__()
        self.padding = _pair(padding)

class ReplicationPad2d(_ReplicationPadNd):
    r"""Pads the input tensor using replication of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 4-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`)

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> m = nn.ReplicationPad2d(2)
        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
        >>> input
        tensor([[[[0., 1., 2.],
                  [3., 4., 5.],
                  [6., 7., 8.]]]])
        >>> m(input)
        tensor([[[[0., 0., 0., 1., 2., 2., 2.],
                  [0., 0., 0., 1., 2., 2., 2.],
                  [0., 0., 0., 1., 2., 2., 2.],
                  [3., 3., 3., 4., 5., 5., 5.],
                  [6., 6., 6., 7., 8., 8., 8.],
                  [6., 6., 6., 7., 8., 8., 8.],
                  [6., 6., 6., 7., 8., 8., 8.]]]])
        >>> # using different paddings for different sides
        >>> m = nn.ReplicationPad2d((1, 1, 2, 0))
        >>> m(input)
        tensor([[[[0., 0., 1., 2., 2.],
                  [0., 0., 1., 2., 2.],
                  [0., 0., 1., 2., 2.],
                  [3., 3., 4., 5., 5.],
                  [6., 6., 7., 8., 8.]]]])
    """

    padding: tuple[int, int, int, int]

    def __init__(self, padding: _size_4_t) -> None:
        super().__init__()
        self.padding = _quadruple(padding)


class ReplicationPad3d(_ReplicationPadNd):
    r"""Pads the input tensor using replication of the input boundary.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 6-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`,
            :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}`

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.ReplicationPad3d(3)
        >>> input = torch.randn(16, 3, 8, 320, 480)
        >>> output = m(input)
        >>> # using different paddings for different sides
        >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))
        >>> output = m(input)
    """

    padding: tuple[int, int, int, int, int, int]

    def __init__(self, padding: _size_6_t) -> None:
        super().__init__()
        self.padding = _ntuple(6)(padding)

class ZeroPad1d(ConstantPad1d):
    r"""Pads the input tensor boundaries with zero.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in both boundaries. If a 2-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`)

    Shape:
        - Input: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.
        - Output: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`, where

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.ZeroPad1d(2)
        >>> input = torch.randn(1, 2, 4)
        >>> input
        tensor([[[-1.0491, -0.7152, -0.0749,  0.8530],
                 [-1.3287,  1.8966,  0.1466, -0.2771]]])
        >>> m(input)
        tensor([[[ 0.0000,  0.0000, -1.0491, -0.7152, -0.0749,  0.8530,  0.0000,
                   0.0000],
                 [ 0.0000,  0.0000, -1.3287,  1.8966,  0.1466, -0.2771,  0.0000,
                   0.0000]]])
        >>> m = nn.ZeroPad1d(2)
        >>> input = torch.randn(1, 2, 3)
        >>> input
        tensor([[[ 1.6616,  1.4523, -1.1255],
                 [-3.6372,  0.1182, -1.8652]]])
        >>> m(input)
        tensor([[[ 0.0000,  0.0000,  1.6616,  1.4523, -1.1255,  0.0000,  0.0000],
                 [ 0.0000,  0.0000, -3.6372,  0.1182, -1.8652,  0.0000,  0.0000]]])
        >>> # using different paddings for different sides
        >>> m = nn.ZeroPad1d((3, 1))
        >>> m(input)
        tensor([[[ 0.0000,  0.0000,  0.0000,  1.6616,  1.4523, -1.1255,  0.0000],
                 [ 0.0000,  0.0000,  0.0000, -3.6372,  0.1182, -1.8652,  0.0000]]])
    """

    padding: tuple[int, int]

    def __init__(self, padding: _size_2_t) -> None:
        super().__init__(padding, 0.0)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return f"{self.padding}"


class ZeroPad2d(ConstantPad2d):
    r"""Pads the input tensor boundaries with zero.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 4-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`)

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> m = nn.ZeroPad2d(2)
        >>> input = torch.randn(1, 1, 3, 3)
        >>> input
        tensor([[[[-0.1678, -0.4418,  1.9466],
                  [ 0.9604, -0.4219, -0.5241],
                  [-0.9162, -0.5436, -0.6446]]]])
        >>> m(input)
        tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
                  [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
                  [ 0.0000,  0.0000, -0.1678, -0.4418,  1.9466,  0.0000,  0.0000],
                  [ 0.0000,  0.0000,  0.9604, -0.4219, -0.5241,  0.0000,  0.0000],
                  [ 0.0000,  0.0000, -0.9162, -0.5436, -0.6446,  0.0000,  0.0000],
                  [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
                  [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
        >>> # using different paddings for different sides
        >>> m = nn.ZeroPad2d((1, 1, 2, 0))
        >>> m(input)
        tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
                  [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
                  [ 0.0000, -0.1678, -0.4418,  1.9466,  0.0000],
                  [ 0.0000,  0.9604, -0.4219, -0.5241,  0.0000],
                  [ 0.0000, -0.9162, -0.5436, -0.6446,  0.0000]]]])
    """

    padding: tuple[int, int, int, int]

    def __init__(self, padding: _size_4_t) -> None:
        super().__init__(padding, 0.0)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return f"{self.padding}"

class ZeroPad3d(ConstantPad3d):
    r"""Pads the input tensor boundaries with zero.

    For `N`-dimensional padding, use :func:`torch.nn.functional.pad()`.

    Args:
        padding (int, tuple): the size of the padding. If is `int`, uses the same
            padding in all boundaries. If a 6-`tuple`, uses
            (:math:`\text{padding\_left}`, :math:`\text{padding\_right}`,
            :math:`\text{padding\_top}`, :math:`\text{padding\_bottom}`,
            :math:`\text{padding\_front}`, :math:`\text{padding\_back}`)

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          :math:`D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}`

          :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`

          :math:`W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}`

    Examples::

        >>> m = nn.ZeroPad3d(3)
        >>> input = torch.randn(16, 3, 10, 20, 30)
        >>> output = m(input)
        >>> # using different paddings for different sides
        >>> m = nn.ZeroPad3d((3, 3, 6, 6, 0, 1))
        >>> output = m(input)
    """

    padding: tuple[int, int, int, int, int, int]

    def __init__(self, padding: _size_6_t) -> None:
        super().__init__(padding, 0.0)

    def extra_repr(self) -> str:
        """Return the extra representation of the module."""
        return f"{self.padding}"
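

# Illustrative usage sketch (an addition for this edit, not part of the upstream
# module). It spot-checks three properties documented above. Because this file
# uses relative imports, run it as `python -m torch.nn.modules.padding` rather
# than executing the file directly.
if __name__ == "__main__":
    import torch

    # Circular padding wraps values around, matching modular indexing:
    # CircularPad1d((3, 1)) on [0, 1, 2, 3] yields [1, 2, 3, 0, 1, 2, 3, 0].
    w = torch.arange(4, dtype=torch.float).reshape(1, 1, 4)
    assert torch.equal(CircularPad1d((3, 1))(w), w[..., torch.arange(-3, 5) % 4])

    # ZeroPad2d is ConstantPad2d specialized to value=0.
    x = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
    assert torch.equal(ZeroPad2d(1)(x), ConstantPad2d(1, 0.0)(x))

    # Each module here is a thin wrapper around torch.nn.functional.pad.
    assert torch.equal(
        ReflectionPad2d((1, 1, 2, 0))(x), F.pad(x, (1, 1, 2, 0), mode="reflect")
    )
    print("padding sanity checks passed")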