L i{ddlmZddlmcmZddlmZddlm Z m Z m Z m Z m Z mZmZmZmZddlmZddlmZmZmZgdZGd d eZGd d eZGd deZGddeZGddeZGddeZGddeZGddeZ GddeZ!Gdde!Z"Gdde!Z#Gdd e!Z$Gd!d"eZ%Gd#d$eZ&Gd%d&eZ'Gd'd(e'Z(Gd)d*e'Z)Gd+d,e'Z*Gd-d.eZ+Gd/d0e+Z,Gd1d2e+Z-Gd3d4e+Z.Gd5d6eZ/Gd7d8e/Z0Gd9d:e/Z1Gd;d __classcell__r5s@r6r(r(-sMO )- !$# #%# #  #  ## #" C r7r(cFeZdZUdZeed<eed<eed<eed<defdZy) ra Applies a 1D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size :math:`(N, C, L)` and output :math:`(N, C, L_{out})` can be precisely described as: .. math:: out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size} - 1} input(N_i, C_j, stride \times k + m) If :attr:`padding` is non-zero, then the input is implicitly padded with negative infinity on both sides for :attr:`padding` number of points. :attr:`dilation` is the stride between the elements within the sliding window. This `link`_ has a nice visualization of the pooling parameters. Note: When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored. Args: kernel_size: The size of the sliding window, must be > 0. stride: The stride of the sliding window, must be > 0. Default value is :attr:`kernel_size`. padding: Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2. dilation: The stride between elements within a sliding window, must be > 0. return_indices: If ``True``, will return the argmax along with the max values. Useful for :class:`torch.nn.MaxUnpool1d` later ceil_mode: If ``True``, will use `ceil` instead of `floor` to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window. Shape: - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`. - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where ``ceil_mode = False`` .. math:: L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}}\right\rfloor + 1 where ``ceil_mode = True`` .. math:: L_{out} = \left\lceil \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1 + (stride - 1)}{\text{stride}}\right\rceil + 1 - Ensure that the last pooling starts inside the image, make :math:`L_{out} = L_{out} - 1` when :math:`(L_{out} - 1) * \text{stride} >= L_{in} + \text{padding}`. Examples:: >>> # pool of size=3, stride=2 >>> m = nn.MaxPool1d(3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input) .. _link: https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md r)r*r+r,inputc tj||j|j|j|j |j |jSRuns the forward pass.)r.r-)F max_pool1dr)r*r+r,r.r-r4rIs r6forwardzMaxPool1d.forwardD||     KK LL MMnn..  r7Nr?r@rA__doc__rrDrrPr9r7r6rrQs.7r    V  r7rcFeZdZUdZeed<eed<eed<eed<defdZy) ra Applies a 2D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size :math:`(N, C, H, W)`, output :math:`(N, C, H_{out}, W_{out})` and :attr:`kernel_size` :math:`(kH, kW)` can be precisely described as: .. math:: \begin{aligned} out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times h + m, \text{stride[1]} \times w + n) \end{aligned} If :attr:`padding` is non-zero, then the input is implicitly padded with negative infinity on both sides for :attr:`padding` number of points. 
class MaxPool2d(_MaxPoolNd):
    r"""Applies a 2D max pooling over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size :math:`(N, C, H, W)`,
    output :math:`(N, C, H_{out}, W_{out})` and :attr:`kernel_size` :math:`(kH, kW)`
    can be precisely described as:

    .. math::
        \begin{aligned}
            out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\
                                    & \text{input}(N_i, C_j, \text{stride[0]} \times h + m,
                                                   \text{stride[1]} \times w + n)
        \end{aligned}

    If :attr:`padding` is non-zero, then the input is implicitly padded with negative infinity
    on both sides for :attr:`padding` number of points. :attr:`dilation` controls the spacing
    between the kernel points. It is harder to describe, but this `link`_ has a nice
    visualization of what :attr:`dilation` does.

    Note:
        When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within
        the left padding or the input. Sliding windows that would start in the right padded
        region are ignored.

    The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can
    either be:

        - a single ``int`` -- in which case the same value is used for the height and width
          dimension
        - a ``tuple`` of two ints -- in which case, the first `int` is used for the height
          dimension, and the second `int` for the width dimension

    Args:
        kernel_size: the size of the window to take a max over
        stride: the stride of the window. Default value is :attr:`kernel_size`
        padding: Implicit negative infinity padding to be added on both sides
        dilation: a parameter that controls the stride of elements in the window
        return_indices: if ``True``, will return the max indices along with the outputs.
            Useful for :class:`torch.nn.MaxUnpool2d` later
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]} - \text{dilation[0]}
                    \times (\text{kernel\_size[0]} - 1) - 1}{\text{stride[0]}} + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]} - \text{dilation[1]}
                    \times (\text{kernel\_size[1]} - 1) - 1}{\text{stride[1]}} + 1\right\rfloor

    Examples::

        >>> # pool of square window of size=3, stride=2
        >>> m = nn.MaxPool2d(3, stride=2)
        >>> # pool of non-square window
        >>> m = nn.MaxPool2d((3, 2), stride=(2, 1))
        >>> input = torch.randn(20, 16, 50, 32)
        >>> output = m(input)

    .. _link:
        https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
    """

    kernel_size: _size_2_t
    stride: _size_2_t
    padding: _size_2_t
    dilation: _size_2_t

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.max_pool2d(
            input,
            self.kernel_size,
            self.stride,
            self.padding,
            self.dilation,
            ceil_mode=self.ceil_mode,
            return_indices=self.return_indices,
        )
class MaxPool3d(_MaxPoolNd):
    r"""Applies a 3D max pooling over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size :math:`(N, C, D, H, W)`,
    output :math:`(N, C, D_{out}, H_{out}, W_{out})` and :attr:`kernel_size`
    :math:`(kD, kH, kW)` can be precisely described as:

    .. math::
        \begin{aligned}
            \text{out}(N_i, C_j, d, h, w) ={} & \max_{k=0, \ldots, kD-1}
                \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\
                & \text{input}(N_i, C_j, \text{stride[0]} \times d + k,
                               \text{stride[1]} \times h + m, \text{stride[2]} \times w + n)
        \end{aligned}

    If :attr:`padding` is non-zero, then the input is implicitly padded with negative infinity
    on both sides for :attr:`padding` number of points. :attr:`dilation` controls the spacing
    between the kernel points. It is harder to describe, but this `link`_ has a nice
    visualization of what :attr:`dilation` does.

    Note:
        When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within
        the left padding or the input. Sliding windows that would start in the right padded
        region are ignored.

    The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can
    either be:

        - a single ``int`` -- in which case the same value is used for the depth, height and
          width dimension
        - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth
          dimension, the second `int` for the height dimension and the third `int` for the
          width dimension

    Args:
        kernel_size: the size of the window to take a max over
        stride: the stride of the window. Default value is :attr:`kernel_size`
        padding: Implicit negative infinity padding to be added on all three sides
        dilation: a parameter that controls the stride of elements in the window
        return_indices: if ``True``, will return the max indices along with the outputs.
            Useful for :class:`torch.nn.MaxUnpool3d` later
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          .. math::
              D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0]
                    \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1]
                    \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2]
                    \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor

    Examples::

        >>> # pool of square window of size=3, stride=2
        >>> m = nn.MaxPool3d(3, stride=2)
        >>> # pool of non-square window
        >>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))
        >>> input = torch.randn(20, 16, 50, 44, 31)
        >>> output = m(input)

    .. _link:
        https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
    """

    kernel_size: _size_3_t
    stride: _size_3_t
    padding: _size_3_t
    dilation: _size_3_t

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.max_pool3d(
            input,
            self.kernel_size,
            self.stride,
            self.padding,
            self.dilation,
            ceil_mode=self.ceil_mode,
            return_indices=self.return_indices,
        )
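
# --- Illustrative sketch (not part of the original module) -----------------
# In the shape formulas above, dilation enlarges a window's footprint
# without adding elements: the span covered by a dilated kernel along one
# dimension is dilation * (kernel_size - 1) + 1. `_example_effective_kernel`
# is a hypothetical helper shown only to make that relationship explicit.
def _example_effective_kernel(kernel_size: int, dilation: int = 1) -> int:
    # A kernel of size 3 with dilation 2 touches positions 0, 2, 4 -> span 5.
    return dilation * (kernel_size - 1) + 1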
class _MaxUnpoolNd(Module):
    def extra_repr(self) -> str:
        return f"kernel_size={self.kernel_size}, stride={self.stride}, padding={self.padding}"


class MaxUnpool1d(_MaxUnpoolNd):
    r"""Computes a partial inverse of :class:`MaxPool1d`.

    :class:`MaxPool1d` is not fully invertible, since the non-maximal values are lost.

    :class:`MaxUnpool1d` takes in as input the output of :class:`MaxPool1d`
    including the indices of the maximal values and computes a partial inverse
    in which all non-maximal values are set to zero.

    Note:
        This operation may behave nondeterministically when the input indices has repeat values.
        See https://github.com/pytorch/pytorch/issues/80827 and :doc:`/notes/randomness` for
        more information.

    .. note:: :class:`MaxPool1d` can map several input sizes to the same output
        sizes. Hence, the inversion process can get ambiguous.
        To accommodate this, you can provide the needed output size
        as an additional argument :attr:`output_size` in the forward call.
        See the Inputs and Example below.

    Args:
        kernel_size (int or tuple): Size of the max pooling window.
        stride (int or tuple): Stride of the max pooling window.
            It is set to :attr:`kernel_size` by default.
        padding (int or tuple): Padding that was added to the input

    Inputs:
        - `input`: the input Tensor to invert
        - `indices`: the indices given out by :class:`~torch.nn.MaxPool1d`
        - `output_size` (optional): the targeted output size

    Shape:
        - Input: :math:`(N, C, H_{in})` or :math:`(C, H_{in})`.
        - Output: :math:`(N, C, H_{out})` or :math:`(C, H_{out})`, where

          .. math::
              H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0]
                    + \text{kernel\_size}[0]

          or as given by :attr:`output_size` in the call operator

    Example::

        >>> # xdoctest: +IGNORE_WANT("do other tests modify the global state?")
        >>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)
        >>> unpool = nn.MaxUnpool1d(2, stride=2)
        >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]])
        >>> output, indices = pool(input)
        >>> unpool(output, indices)
        tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0., 8.]]])

        >>> # Example showcasing the use of output_size
        >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]])
        >>> output, indices = pool(input)
        >>> unpool(output, indices, output_size=input.size())
        tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0., 8.,  0.]]])

        >>> unpool(output, indices)
        tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0., 8.]]])
    """

    kernel_size: _size_1_t
    stride: _size_1_t
    padding: _size_1_t

    def __init__(
        self,
        kernel_size: _size_1_t,
        stride: Optional[_size_1_t] = None,
        padding: _size_1_t = 0,
    ) -> None:
        super().__init__()
        self.kernel_size = _single(kernel_size)
        self.stride = _single(stride if (stride is not None) else kernel_size)
        self.padding = _single(padding)

    def forward(
        self, input: Tensor, indices: Tensor, output_size: Optional[list[int]] = None
    ) -> Tensor:
        """Runs the forward pass."""
        return F.max_unpool1d(
            input, indices, self.kernel_size, self.stride, self.padding, output_size
        )
class MaxUnpool2d(_MaxUnpoolNd):
    r"""Computes a partial inverse of :class:`MaxPool2d`.

    :class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost.

    :class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d`
    including the indices of the maximal values and computes a partial inverse
    in which all non-maximal values are set to zero.

    Note:
        This operation may behave nondeterministically when the input indices has repeat values.
        See https://github.com/pytorch/pytorch/issues/80827 and :doc:`/notes/randomness` for
        more information.

    .. note:: :class:`MaxPool2d` can map several input sizes to the same output
        sizes. Hence, the inversion process can get ambiguous.
        To accommodate this, you can provide the needed output size
        as an additional argument :attr:`output_size` in the forward call.
        See the Inputs and Example below.

    Args:
        kernel_size (int or tuple): Size of the max pooling window.
        stride (int or tuple): Stride of the max pooling window.
            It is set to :attr:`kernel_size` by default.
        padding (int or tuple): Padding that was added to the input

    Inputs:
        - `input`: the input Tensor to invert
        - `indices`: the indices given out by :class:`~torch.nn.MaxPool2d`
        - `output_size` (optional): the targeted output size

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          .. math::
              H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]}
                    + \text{kernel\_size[0]}

          .. math::
              W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]}
                    + \text{kernel\_size[1]}

          or as given by :attr:`output_size` in the call operator

    Example::

        >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        >>> unpool = nn.MaxUnpool2d(2, stride=2)
        >>> input = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                                    [ 5.,  6.,  7.,  8.],
                                    [ 9., 10., 11., 12.],
                                    [13., 14., 15., 16.]]]])
        >>> output, indices = pool(input)
        >>> unpool(output, indices)
        tensor([[[[  0.,   0.,   0.,   0.],
                  [  0.,   6.,   0.,   8.],
                  [  0.,   0.,   0.,   0.],
                  [  0.,  14.,   0.,  16.]]]])
        >>> # Now using output_size to resolve an ambiguous size for the inverse
        >>> input = torch.tensor([[[[ 1.,  2.,  3.,  4.,  5.],
                                    [ 6.,  7.,  8.,  9., 10.],
                                    [11., 12., 13., 14., 15.],
                                    [16., 17., 18., 19., 20.]]]])
        >>> output, indices = pool(input)
        >>> # This call will not work without specifying output_size
        >>> unpool(output, indices, output_size=input.size())
        tensor([[[[ 0.,  0.,  0.,  0.,  0.],
                  [ 0.,  7.,  0.,  9.,  0.],
                  [ 0.,  0.,  0.,  0.,  0.],
                  [ 0., 17.,  0., 19.,  0.]]]])
    """

    kernel_size: _size_2_t
    stride: _size_2_t
    padding: _size_2_t

    def __init__(
        self,
        kernel_size: _size_2_t,
        stride: Optional[_size_2_t] = None,
        padding: _size_2_t = 0,
    ) -> None:
        super().__init__()
        self.kernel_size = _pair(kernel_size)
        self.stride = _pair(stride if (stride is not None) else kernel_size)
        self.padding = _pair(padding)

    def forward(
        self, input: Tensor, indices: Tensor, output_size: Optional[list[int]] = None
    ) -> Tensor:
        """Runs the forward pass."""
        return F.max_unpool2d(
            input, indices, self.kernel_size, self.stride, self.padding, output_size
        )
class MaxUnpool3d(_MaxUnpoolNd):
    r"""Computes a partial inverse of :class:`MaxPool3d`.

    :class:`MaxPool3d` is not fully invertible, since the non-maximal values are lost.

    :class:`MaxUnpool3d` takes in as input the output of :class:`MaxPool3d`
    including the indices of the maximal values and computes a partial inverse
    in which all non-maximal values are set to zero.

    Note:
        This operation may behave nondeterministically when the input indices has repeat values.
        See https://github.com/pytorch/pytorch/issues/80827 and :doc:`/notes/randomness` for
        more information.

    .. note:: :class:`MaxPool3d` can map several input sizes to the same output
        sizes. Hence, the inversion process can get ambiguous.
        To accommodate this, you can provide the needed output size
        as an additional argument :attr:`output_size` in the forward call.
        See the Inputs section below.

    Args:
        kernel_size (int or tuple): Size of the max pooling window.
        stride (int or tuple): Stride of the max pooling window.
            It is set to :attr:`kernel_size` by default.
        padding (int or tuple): Padding that was added to the input

    Inputs:
        - `input`: the input Tensor to invert
        - `indices`: the indices given out by :class:`~torch.nn.MaxPool3d`
        - `output_size` (optional): the targeted output size

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          .. math::
              D_{out} = (D_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]}
                    + \text{kernel\_size[0]}

          .. math::
              H_{out} = (H_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]}
                    + \text{kernel\_size[1]}

          .. math::
              W_{out} = (W_{in} - 1) \times \text{stride[2]} - 2 \times \text{padding[2]}
                    + \text{kernel\_size[2]}

          or as given by :attr:`output_size` in the call operator

    Example::

        >>> # pool of square window of size=3, stride=2
        >>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)
        >>> unpool = nn.MaxUnpool3d(3, stride=2)
        >>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))
        >>> unpooled_output = unpool(output, indices)
        >>> unpooled_output.size()
        torch.Size([20, 16, 51, 33, 15])
    """

    kernel_size: _size_3_t
    stride: _size_3_t
    padding: _size_3_t

    def __init__(
        self,
        kernel_size: _size_3_t,
        stride: Optional[_size_3_t] = None,
        padding: _size_3_t = 0,
    ) -> None:
        super().__init__()
        self.kernel_size = _triple(kernel_size)
        self.stride = _triple(stride if (stride is not None) else kernel_size)
        self.padding = _triple(padding)

    def forward(
        self, input: Tensor, indices: Tensor, output_size: Optional[list[int]] = None
    ) -> Tensor:
        """Runs the forward pass."""
        return F.max_unpool3d(
            input, indices, self.kernel_size, self.stride, self.padding, output_size
        )
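
# --- Illustrative sketch (not part of the original module) -----------------
# The default (unambiguous) output length used by the MaxUnpool modules when
# no `output_size` is passed, per the formula in the docstrings above; apply
# it per dimension for the 2d/3d cases. `_example_max_unpool_out_len` is a
# hypothetical helper added for illustration only.
def _example_max_unpool_out_len(
    d_in: int, kernel_size: int, stride: int, padding: int = 0
) -> int:
    return (d_in - 1) * stride - 2 * padding + kernel_size


# e.g. unpooling a length-4 output of MaxPool1d(2, stride=2) recovers
# _example_max_unpool_out_len(4, 2, 2) == 8 elements, as in the
# MaxUnpool1d docstring example.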
class _AvgPoolNd(Module):
    __constants__ = [
        "kernel_size",
        "stride",
        "padding",
        "ceil_mode",
        "count_include_pad",
    ]

    def extra_repr(self) -> str:
        return f"kernel_size={self.kernel_size}, stride={self.stride}, padding={self.padding}"


class AvgPool1d(_AvgPoolNd):
    r"""Applies a 1D average pooling over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size :math:`(N, C, L)`,
    output :math:`(N, C, L_{out})` and :attr:`kernel_size` :math:`k` can be precisely
    described as:

    .. math::
        \text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1}
                \text{input}(N_i, C_j, \text{stride} \times l + m)

    If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides
    for :attr:`padding` number of points.

    Note:
        When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within
        the left padding or the input. Sliding windows that would start in the right padded
        region are ignored.

    .. note::
        pad should be at most half of effective kernel size.

    The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding` can each be an ``int``
    or a one-element tuple.

    Args:
        kernel_size: the size of the window
        stride: the stride of the window. Default value is :attr:`kernel_size`
        padding: implicit zero padding to be added on both sides
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape
        count_include_pad: when True, will include the zero-padding in the averaging calculation

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where

          .. math::
              L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} -
                    \text{kernel\_size}}{\text{stride}} + 1\right\rfloor

          Per the note above, if ``ceil_mode`` is True and
          :math:`(L_{out} - 1) \times \text{stride} \geq L_{in} + \text{padding}`, we skip the
          last window as it would start in the right padded region, resulting in
          :math:`L_{out}` being reduced by one.

    Examples::

        >>> # pool with window of size=3, stride=2
        >>> m = nn.AvgPool1d(3, stride=2)
        >>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))
        tensor([[[2., 4., 6.]]])
    """

    kernel_size: _size_1_t
    stride: _size_1_t
    padding: _size_1_t
    ceil_mode: bool
    count_include_pad: bool

    def __init__(
        self,
        kernel_size: _size_1_t,
        stride: Optional[_size_1_t] = None,
        padding: _size_1_t = 0,
        ceil_mode: bool = False,
        count_include_pad: bool = True,
    ) -> None:
        super().__init__()
        self.kernel_size = _single(kernel_size)
        self.stride = _single(stride if stride is not None else kernel_size)
        self.padding = _single(padding)
        self.ceil_mode = ceil_mode
        self.count_include_pad = count_include_pad

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.avg_pool1d(
            input,
            self.kernel_size,
            self.stride,
            self.padding,
            self.ceil_mode,
            self.count_include_pad,
        )


class AvgPool2d(_AvgPoolNd):
    r"""Applies a 2D average pooling over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size :math:`(N, C, H, W)`,
    output :math:`(N, C, H_{out}, W_{out})` and :attr:`kernel_size` :math:`(kH, kW)`
    can be precisely described as:

    .. math::
        out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1}
                input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)

    If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides
    for :attr:`padding` number of points.

    Note:
        When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within
        the left padding or the input. Sliding windows that would start in the right padded
        region are ignored.

    .. note::
        pad should be at most half of effective kernel size.

    The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding` can either be:

        - a single ``int`` or a single-element tuple -- in which case the same value is used
          for the height and width dimension
        - a ``tuple`` of two ints -- in which case, the first `int` is used for the height
          dimension, and the second `int` for the width dimension

    Args:
        kernel_size: the size of the window
        stride: the stride of the window. Default value is :attr:`kernel_size`
        padding: implicit zero padding to be added on both sides
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape
        count_include_pad: when True, will include the zero-padding in the averaging calculation
        divisor_override: if specified, it will be used as divisor, otherwise size of the
            pooling region will be used.

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] -
                    \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] -
                    \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor

          Per the note above, if ``ceil_mode`` is True and
          :math:`(H_{out} - 1)\times \text{stride}[0]\geq H_{in} + \text{padding}[0]`, we skip
          the last window as it would start in the bottom padded region, resulting in
          :math:`H_{out}` being reduced by one. The same applies for :math:`W_{out}`.

    Examples::

        >>> # pool of square window of size=3, stride=2
        >>> m = nn.AvgPool2d(3, stride=2)
        >>> # pool of non-square window
        >>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
        >>> input = torch.randn(20, 16, 50, 32)
        >>> output = m(input)
    """

    __constants__ = [
        "kernel_size",
        "stride",
        "padding",
        "ceil_mode",
        "count_include_pad",
        "divisor_override",
    ]

    kernel_size: _size_2_t
    stride: _size_2_t
    padding: _size_2_t
    ceil_mode: bool
    count_include_pad: bool

    def __init__(
        self,
        kernel_size: _size_2_t,
        stride: Optional[_size_2_t] = None,
        padding: _size_2_t = 0,
        ceil_mode: bool = False,
        count_include_pad: bool = True,
        divisor_override: Optional[int] = None,
    ) -> None:
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride if (stride is not None) else kernel_size
        self.padding = padding
        self.ceil_mode = ceil_mode
        self.count_include_pad = count_include_pad
        self.divisor_override = divisor_override

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.avg_pool2d(
            input,
            self.kernel_size,
            self.stride,
            self.padding,
            self.ceil_mode,
            self.count_include_pad,
            self.divisor_override,
        )
class AvgPool3d(_AvgPoolNd):
    r"""Applies a 3D average pooling over an input signal composed of several input planes.

    In the simplest case, the output value of the layer with input size :math:`(N, C, D, H, W)`,
    output :math:`(N, C, D_{out}, H_{out}, W_{out})` and :attr:`kernel_size`
    :math:`(kD, kH, kW)` can be precisely described as:

    .. math::
        \begin{aligned}
            \text{out}(N_i, C_j, d, h, w) ={} & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1}
                \sum_{n=0}^{kW-1} \\
                & \frac{\text{input}(N_i, C_j, \text{stride}[0] \times d + k,
                        \text{stride}[1] \times h + m, \text{stride}[2] \times w + n)}
                       {kD \times kH \times kW}
        \end{aligned}

    If :attr:`padding` is non-zero, then the input is implicitly zero-padded on all three
    sides for :attr:`padding` number of points.

    Note:
        When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within
        the left padding or the input. Sliding windows that would start in the right padded
        region are ignored.

    .. note::
        pad should be at most half of effective kernel size.

    The parameters :attr:`kernel_size`, :attr:`stride` can either be:

        - a single ``int`` -- in which case the same value is used for the depth, height and
          width dimension
        - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth
          dimension, the second `int` for the height dimension and the third `int` for the
          width dimension

    Args:
        kernel_size: the size of the window
        stride: the stride of the window. Default value is :attr:`kernel_size`
        padding: implicit zero padding to be added on all three sides
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape
        count_include_pad: when True, will include the zero-padding in the averaging calculation
        divisor_override: if specified, it will be used as divisor, otherwise
            :attr:`kernel_size` will be used

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          .. math::
              D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] -
                    \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] -
                    \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] -
                    \text{kernel\_size}[2]}{\text{stride}[2]} + 1\right\rfloor

          Per the note above, if ``ceil_mode`` is True and
          :math:`(D_{out} - 1)\times \text{stride}[0]\geq D_{in} + \text{padding}[0]`, we skip
          the last window as it would start in the padded region, resulting in :math:`D_{out}`
          being reduced by one. The same applies for :math:`W_{out}` and :math:`H_{out}`.

    Examples::

        >>> # pool of square window of size=3, stride=2
        >>> m = nn.AvgPool3d(3, stride=2)
        >>> # pool of non-square window
        >>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
        >>> input = torch.randn(20, 16, 50, 44, 31)
        >>> output = m(input)
    """

    __constants__ = [
        "kernel_size",
        "stride",
        "padding",
        "ceil_mode",
        "count_include_pad",
        "divisor_override",
    ]

    kernel_size: _size_3_t
    stride: _size_3_t
    padding: _size_3_t
    ceil_mode: bool
    count_include_pad: bool

    def __init__(
        self,
        kernel_size: _size_3_t,
        stride: Optional[_size_3_t] = None,
        padding: _size_3_t = 0,
        ceil_mode: bool = False,
        count_include_pad: bool = True,
        divisor_override: Optional[int] = None,
    ) -> None:
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride if (stride is not None) else kernel_size
        self.padding = padding
        self.ceil_mode = ceil_mode
        self.count_include_pad = count_include_pad
        self.divisor_override = divisor_override

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.avg_pool3d(
            input,
            self.kernel_size,
            self.stride,
            self.padding,
            self.ceil_mode,
            self.count_include_pad,
            self.divisor_override,
        )

    def __setstate__(self, d):
        super().__setstate__(d)
        self.__dict__.setdefault("padding", 0)
        self.__dict__.setdefault("ceil_mode", False)
        self.__dict__.setdefault("count_include_pad", True)
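
# --- Illustrative sketch (not part of the original module) -----------------
# How the denominator of one averaging window is chosen, per the
# `count_include_pad` and `divisor_override` arguments documented above.
# `_example_avg_divisor` is a hypothetical helper: `window_volume` is the
# full kernel volume (including zero padding) and `window_elems` is the
# number of in-bounds input elements under the current window.
def _example_avg_divisor(
    window_volume: int,
    window_elems: int,
    count_include_pad: bool = True,
    divisor_override: Optional[int] = None,
) -> int:
    if divisor_override is not None:
        return divisor_override  # fixed divisor takes precedence
    return window_volume if count_include_pad else window_elems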
class FractionalMaxPool2d(Module):
    r"""Applies a 2D fractional max pooling over an input signal composed of several input planes.

    Fractional MaxPooling is described in detail in the paper `Fractional MaxPooling`_
    by Ben Graham

    The max-pooling operation is applied in :math:`kH \times kW` regions by a stochastic
    step size determined by the target output size.
    The number of output features is equal to the number of input planes.

    .. note:: Exactly one of ``output_size`` or ``output_ratio`` must be defined.

    Args:
        kernel_size: the size of the window to take a max over.
            Can be a single number k (for a square kernel of k x k) or a tuple `(kh, kw)`
        output_size: the target output size of the image of the form `oH x oW`.
            Can be a tuple `(oH, oW)` or a single number oH for a square image `oH x oH`.
        output_ratio: If one wants to have an output size as a ratio of the input size,
            this option can be given. This has to be a number or tuple in the range (0, 1)
        return_indices: if ``True``, will return the indices along with the outputs.
            Useful to pass to :meth:`nn.MaxUnpool2d`. Default: ``False``

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where
          :math:`(H_{out}, W_{out})=\text{output\_size}` or
          :math:`(H_{out}, W_{out})=\text{output\_ratio} \times (H_{in}, W_{in})`.

    Examples:
        >>> # pool of square window of size=3, and target output size 13x12
        >>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12))
        >>> # pool of square window and target output size being half of input image size
        >>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
        >>> input = torch.randn(20, 16, 50, 32)
        >>> output = m(input)

    .. _Fractional MaxPooling:
        https://arxiv.org/abs/1412.6071
    """

    __constants__ = ["kernel_size", "return_indices", "output_size", "output_ratio"]

    kernel_size: _size_2_t
    return_indices: bool
    output_size: _size_2_t
    output_ratio: _ratio_2_t

    def __init__(
        self,
        kernel_size: _size_2_t,
        output_size: Optional[_size_2_t] = None,
        output_ratio: Optional[_ratio_2_t] = None,
        return_indices: bool = False,
        _random_samples=None,
    ) -> None:
        super().__init__()
        self.kernel_size = _pair(kernel_size)
        self.return_indices = return_indices
        self.register_buffer("_random_samples", _random_samples)
        self.output_size = _pair(output_size) if output_size is not None else None
        self.output_ratio = _pair(output_ratio) if output_ratio is not None else None
        if output_size is None and output_ratio is None:
            raise ValueError(
                "FractionalMaxPool2d requires specifying either an output size, or a pooling ratio"
            )
        if output_size is not None and output_ratio is not None:
            raise ValueError("only one of output_size and output_ratio may be specified")
        if self.output_ratio is not None:
            if not (0 < self.output_ratio[0] < 1 and 0 < self.output_ratio[1] < 1):
                raise ValueError(
                    f"output_ratio must be between 0 and 1 (got {output_ratio})"
                )

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.fractional_max_pool2d(
            input,
            self.kernel_size,
            self.output_size,
            self.output_ratio,
            self.return_indices,
            _random_samples=self._random_samples,
        )
class FractionalMaxPool3d(Module):
    r"""Applies a 3D fractional max pooling over an input signal composed of several input planes.

    Fractional MaxPooling is described in detail in the paper `Fractional MaxPooling`_
    by Ben Graham

    The max-pooling operation is applied in :math:`kT \times kH \times kW` regions by a
    stochastic step size determined by the target output size.
    The number of output features is equal to the number of input planes.

    .. note:: Exactly one of ``output_size`` or ``output_ratio`` must be defined.

    Args:
        kernel_size: the size of the window to take a max over.
            Can be a single number `k` (for a square kernel of `k x k x k`) or a tuple
            `(kt x kh x kw)`, `k` must be greater than 0.
        output_size: the target output size of the image of the form `oT x oH x oW`.
            Can be a tuple `(oT, oH, oW)` or a single number oH for a square image
            `oH x oH x oH`
        output_ratio: If one wants to have an output size as a ratio of the input size,
            this option can be given. This has to be a number or tuple in the range (0, 1)
        return_indices: if ``True``, will return the indices along with the outputs.
            Useful to pass to :meth:`nn.MaxUnpool3d`. Default: ``False``

    Shape:
        - Input: :math:`(N, C, T_{in}, H_{in}, W_{in})` or :math:`(C, T_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, T_{out}, H_{out}, W_{out})` or
          :math:`(C, T_{out}, H_{out}, W_{out})`, where
          :math:`(T_{out}, H_{out}, W_{out})=\text{output\_size}` or
          :math:`(T_{out}, H_{out}, W_{out})=\text{output\_ratio} \times (T_{in}, H_{in}, W_{in})`

    Examples:
        >>> # pool of cubic window of size=3, and target output size 13x12x11
        >>> m = nn.FractionalMaxPool3d(3, output_size=(13, 12, 11))
        >>> # pool of cubic window and target output size being half of input size
        >>> m = nn.FractionalMaxPool3d(3, output_ratio=(0.5, 0.5, 0.5))
        >>> input = torch.randn(20, 16, 50, 32, 16)
        >>> output = m(input)

    .. _Fractional MaxPooling:
        https://arxiv.org/abs/1412.6071
    """

    __constants__ = ["kernel_size", "return_indices", "output_size", "output_ratio"]

    kernel_size: _size_3_t
    return_indices: bool
    output_size: _size_3_t
    output_ratio: _ratio_3_t

    def __init__(
        self,
        kernel_size: _size_3_t,
        output_size: Optional[_size_3_t] = None,
        output_ratio: Optional[_ratio_3_t] = None,
        return_indices: bool = False,
        _random_samples=None,
    ) -> None:
        if (isinstance(kernel_size, int) and kernel_size <= 0) or (
            isinstance(kernel_size, (tuple, list))
            and not all(k > 0 for k in kernel_size)
        ):
            raise ValueError(f"kernel_size must be greater than 0, but got {kernel_size}")
        super().__init__()
        self.kernel_size = _triple(kernel_size)
        self.return_indices = return_indices
        self.register_buffer("_random_samples", _random_samples)
        self.output_size = _triple(output_size) if output_size is not None else None
        self.output_ratio = _triple(output_ratio) if output_ratio is not None else None
        if output_size is None and output_ratio is None:
            raise ValueError(
                "FractionalMaxPool3d requires specifying either an output size, or a pooling ratio"
            )
        if output_size is not None and output_ratio is not None:
            raise ValueError("only one of output_size and output_ratio may be specified")
        if self.output_ratio is not None:
            if not (
                0 < self.output_ratio[0] < 1
                and 0 < self.output_ratio[1] < 1
                and 0 < self.output_ratio[2] < 1
            ):
                raise ValueError(
                    f"output_ratio must be between 0 and 1 (got {output_ratio})"
                )

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.fractional_max_pool3d(
            input,
            self.kernel_size,
            self.output_size,
            self.output_ratio,
            self.return_indices,
            _random_samples=self._random_samples,
        )
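
# --- Illustrative sketch (not part of the original module) -----------------
# When `output_ratio` is given instead of `output_size`, each output extent
# is derived from the corresponding input extent; a ratio r in (0, 1) maps an
# input length L to int(L * r), assuming the usual truncation toward zero.
# `_example_ratio_to_size` is a hypothetical helper illustrating that
# mapping, not the library's internal code path.
def _example_ratio_to_size(
    input_size: tuple, output_ratio: tuple
) -> tuple:
    return tuple(int(s * r) for s, r in zip(input_size, output_ratio))


# e.g. _example_ratio_to_size((50, 32), (0.5, 0.5)) == (25, 16), matching
# the "half of input image size" example in the docstring above.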
class _LPPoolNd(Module):
    __constants__ = ["norm_type", "kernel_size", "stride", "ceil_mode"]

    norm_type: float
    ceil_mode: bool

    def __init__(
        self,
        norm_type: float,
        kernel_size: _size_any_t,
        stride: Optional[_size_any_t] = None,
        ceil_mode: bool = False,
    ) -> None:
        super().__init__()
        self.norm_type = norm_type
        self.kernel_size = kernel_size
        self.stride = stride
        self.ceil_mode = ceil_mode

    def extra_repr(self) -> str:
        return (
            "norm_type={norm_type}, kernel_size={kernel_size}, stride={stride}, "
            "ceil_mode={ceil_mode}".format(**self.__dict__)
        )


class LPPool1d(_LPPoolNd):
    r"""Applies a 1D power-average pooling over an input signal composed of several input planes.

    On each window, the function computed is:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    - At p = :math:`\infty`, one gets Max Pooling
    - At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

    .. note:: If the sum to the power of `p` is zero, the gradient of this function is
              not defined. This implementation will set the gradient to zero in this case.

    Args:
        kernel_size: a single int, the size of the window
        stride: a single int, the stride of the window. Default value is :attr:`kernel_size`
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where

          .. math::
              L_{out} = \left\lfloor\frac{L_{in} - \text{kernel\_size}}{\text{stride}}
                    + 1\right\rfloor

    Examples::

        >>> # power-2 pool of window of length 3, with stride 2.
        >>> m = nn.LPPool1d(2, 3, stride=2)
        >>> input = torch.randn(20, 16, 50)
        >>> output = m(input)
    """

    kernel_size: _size_1_t
    stride: _size_1_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.lp_pool1d(
            input, float(self.norm_type), self.kernel_size, self.stride, self.ceil_mode
        )


class LPPool2d(_LPPoolNd):
    r"""Applies a 2D power-average pooling over an input signal composed of several input planes.

    On each window, the function computed is:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    - At p = :math:`\infty`, one gets Max Pooling
    - At p = 1, one gets Sum Pooling (which is proportional to average pooling)

    The parameters :attr:`kernel_size`, :attr:`stride` can either be:

        - a single ``int`` -- in which case the same value is used for the height and width
          dimension
        - a ``tuple`` of two ints -- in which case, the first `int` is used for the height
          dimension, and the second `int` for the width dimension

    .. note:: If the sum to the power of `p` is zero, the gradient of this function is
              not defined. This implementation will set the gradient to zero in this case.

    Args:
        kernel_size: the size of the window
        stride: the stride of the window. Default value is :attr:`kernel_size`
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} - \text{kernel\_size}[0]}{\text{stride}[0]}
                    + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} - \text{kernel\_size}[1]}{\text{stride}[1]}
                    + 1\right\rfloor

    Examples::

        >>> # power-2 pool of square window of size=3, stride=2
        >>> m = nn.LPPool2d(2, 3, stride=2)
        >>> # pool of non-square window of power 1.2
        >>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))
        >>> input = torch.randn(20, 16, 50, 32)
        >>> output = m(input)
    """

    kernel_size: _size_2_t
    stride: _size_2_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.lp_pool2d(
            input, float(self.norm_type), self.kernel_size, self.stride, self.ceil_mode
        )


class LPPool3d(_LPPoolNd):
    r"""Applies a 3D power-average pooling over an input signal composed of several input planes.

    On each window, the function computed is:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    - At p = :math:`\infty`, one gets Max Pooling
    - At p = 1, one gets Sum Pooling (which is proportional to average pooling)

    The parameters :attr:`kernel_size`, :attr:`stride` can either be:

        - a single ``int`` -- in which case the same value is used for the height, width and
          depth dimension
        - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth
          dimension, the second `int` for the height dimension and the third `int` for the
          width dimension

    .. note:: If the sum to the power of `p` is zero, the gradient of this function is
              not defined. This implementation will set the gradient to zero in this case.

    Args:
        kernel_size: the size of the window
        stride: the stride of the window. Default value is :attr:`kernel_size`
        ceil_mode: when True, will use `ceil` instead of `floor` to compute the output shape

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where

          .. math::
              D_{out} = \left\lfloor\frac{D_{in} - \text{kernel\_size}[0]}{\text{stride}[0]}
                    + 1\right\rfloor

          .. math::
              H_{out} = \left\lfloor\frac{H_{in} - \text{kernel\_size}[1]}{\text{stride}[1]}
                    + 1\right\rfloor

          .. math::
              W_{out} = \left\lfloor\frac{W_{in} - \text{kernel\_size}[2]}{\text{stride}[2]}
                    + 1\right\rfloor

    Examples::

        >>> # power-2 pool of square window of size=3, stride=2
        >>> m = nn.LPPool3d(2, 3, stride=2)
        >>> # pool of non-square window of power 1.2
        >>> m = nn.LPPool3d(1.2, (3, 2, 2), stride=(2, 1, 2))
        >>> input = torch.randn(20, 16, 50, 44, 31)
        >>> output = m(input)
    """

    kernel_size: _size_3_t
    stride: _size_3_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.lp_pool3d(
            input, float(self.norm_type), self.kernel_size, self.stride, self.ceil_mode
        )
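
# --- Illustrative sketch (not part of the original module) -----------------
# For finite norm_type p, power-average pooling relates to average pooling:
# the sum of x^p over a window equals avg(x^p) times the window size. A
# minimal sketch assuming non-negative inputs and a 1d window; the
# hypothetical `_example_lp_pool1d` below illustrates the identity and is
# not the library implementation (which also handles signs and gradients).
def _example_lp_pool1d(
    x: Tensor, norm_type: float, kernel_size: int, stride: int
) -> Tensor:
    # avg_pool1d computes the mean of x^p; multiplying by kernel_size
    # recovers the windowed sum before taking the p-th root.
    out = F.avg_pool1d(x.pow(norm_type), kernel_size, stride)
    return (out * kernel_size).pow(1.0 / norm_type)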
class _AdaptiveMaxPoolNd(Module):
    __constants__ = ["output_size", "return_indices"]

    return_indices: bool

    def __init__(
        self, output_size: _size_any_opt_t, return_indices: bool = False
    ) -> None:
        super().__init__()
        self.output_size = output_size
        self.return_indices = return_indices

    def extra_repr(self) -> str:
        return f"output_size={self.output_size}"


class AdaptiveMaxPool1d(_AdaptiveMaxPoolNd):
    r"""Applies a 1D adaptive max pooling over an input signal composed of several input planes.

    The output size is :math:`L_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size :math:`L_{out}`.
        return_indices: if ``True``, will return the indices along with the outputs.
            Useful to pass to nn.MaxUnpool1d. Default: ``False``

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
          :math:`L_{out}=\text{output\_size}`.

    Examples:
        >>> # target output size of 5
        >>> m = nn.AdaptiveMaxPool1d(5)
        >>> input = torch.randn(1, 64, 8)
        >>> output = m(input)
    """

    output_size: _size_1_t

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.adaptive_max_pool1d(input, self.output_size, self.return_indices)


class AdaptiveMaxPool2d(_AdaptiveMaxPoolNd):
    r"""Applies a 2D adaptive max pooling over an input signal composed of several input planes.

    The output is of size :math:`H_{out} \times W_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size of the image of the form
            :math:`H_{out} \times W_{out}`.
            Can be a tuple :math:`(H_{out}, W_{out})` or a single :math:`H_{out}` for a
            square image :math:`H_{out} \times H_{out}`. :math:`H_{out}` and :math:`W_{out}`
            can be either an ``int``, or ``None`` which means the size will be the same as
            that of the input.
        return_indices: if ``True``, will return the indices along with the outputs.
            Useful to pass to nn.MaxUnpool2d. Default: ``False``

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where
          :math:`(H_{out}, W_{out})=\text{output\_size}`.

    Examples:
        >>> # target output size of 5x7
        >>> m = nn.AdaptiveMaxPool2d((5, 7))
        >>> input = torch.randn(1, 64, 8, 9)
        >>> output = m(input)
        >>> # target output size of 7x7 (square)
        >>> m = nn.AdaptiveMaxPool2d(7)
        >>> input = torch.randn(1, 64, 10, 9)
        >>> output = m(input)
        >>> # target output size of 10x7
        >>> m = nn.AdaptiveMaxPool2d((None, 7))
        >>> input = torch.randn(1, 64, 10, 9)
        >>> output = m(input)
    """

    output_size: _size_2_opt_t

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.adaptive_max_pool2d(input, self.output_size, self.return_indices)
class AdaptiveMaxPool3d(_AdaptiveMaxPoolNd):
    r"""Applies a 3D adaptive max pooling over an input signal composed of several input planes.

    The output is of size :math:`D_{out} \times H_{out} \times W_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size of the image of the form
            :math:`D_{out} \times H_{out} \times W_{out}`.
            Can be a tuple :math:`(D_{out}, H_{out}, W_{out})` or a single :math:`D_{out}`
            for a cube :math:`D_{out} \times D_{out} \times D_{out}`. :math:`D_{out}`,
            :math:`H_{out}` and :math:`W_{out}` can be either an ``int``, or ``None`` which
            means the size will be the same as that of the input.
        return_indices: if ``True``, will return the indices along with the outputs.
            Useful to pass to nn.MaxUnpool3d. Default: ``False``

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` or
          :math:`(C, D_{out}, H_{out}, W_{out})`, where
          :math:`(D_{out}, H_{out}, W_{out})=\text{output\_size}`.

    Examples:
        >>> # target output size of 5x7x9
        >>> m = nn.AdaptiveMaxPool3d((5, 7, 9))
        >>> input = torch.randn(1, 64, 8, 9, 10)
        >>> output = m(input)
        >>> # target output size of 7x7x7 (cube)
        >>> m = nn.AdaptiveMaxPool3d(7)
        >>> input = torch.randn(1, 64, 10, 9, 8)
        >>> output = m(input)
        >>> # target output size of 7x9x8
        >>> m = nn.AdaptiveMaxPool3d((7, None, None))
        >>> input = torch.randn(1, 64, 10, 9, 8)
        >>> output = m(input)
    """

    output_size: _size_3_opt_t

    def forward(self, input: Tensor):
        """Runs the forward pass."""
        return F.adaptive_max_pool3d(input, self.output_size, self.return_indices)
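
# --- Illustrative sketch (not part of the original module) -----------------
# Adaptive pooling derives per-output windows from the input/output sizes
# rather than from fixed kernel/stride values. A common formulation of the
# window covering output index i (assumed here for illustration, not quoted
# from the library) is given by the hypothetical helper below; adjacent
# windows may overlap when in_size is not a multiple of out_size.
def _example_adaptive_window(i: int, in_size: int, out_size: int) -> tuple:
    import math

    start = math.floor(i * in_size / out_size)
    end = math.ceil((i + 1) * in_size / out_size)
    return start, end  # the window is input[start:end]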
class _AdaptiveAvgPoolNd(Module):
    __constants__ = ["output_size"]

    def __init__(self, output_size: _size_any_opt_t) -> None:
        super().__init__()
        self.output_size = output_size

    def extra_repr(self) -> str:
        return f"output_size={self.output_size}"


class AdaptiveAvgPool1d(_AdaptiveAvgPoolNd):
    r"""Applies a 1D adaptive average pooling over an input signal composed of several input planes.

    The output size is :math:`L_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size :math:`L_{out}`.

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
          :math:`L_{out}=\text{output\_size}`.

    Examples:
        >>> # target output size of 5
        >>> m = nn.AdaptiveAvgPool1d(5)
        >>> input = torch.randn(1, 64, 8)
        >>> output = m(input)
    """

    output_size: _size_1_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.adaptive_avg_pool1d(input, self.output_size)


class AdaptiveAvgPool2d(_AdaptiveAvgPoolNd):
    r"""Applies a 2D adaptive average pooling over an input signal composed of several input planes.

    The output is of size H x W, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size of the image of the form H x W.
            Can be a tuple (H, W) or a single H for a square image H x H.
            H and W can be either an ``int``, or ``None`` which means the size
            will be the same as that of the input.

    Shape:
        - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`.
        - Output: :math:`(N, C, S_{0}, S_{1})` or :math:`(C, S_{0}, S_{1})`, where
          :math:`S=\text{output\_size}`.

    Examples:
        >>> # target output size of 5x7
        >>> m = nn.AdaptiveAvgPool2d((5, 7))
        >>> input = torch.randn(1, 64, 8, 9)
        >>> output = m(input)
        >>> # target output size of 7x7 (square)
        >>> m = nn.AdaptiveAvgPool2d(7)
        >>> input = torch.randn(1, 64, 10, 9)
        >>> output = m(input)
        >>> # target output size of 10x7
        >>> m = nn.AdaptiveAvgPool2d((None, 7))
        >>> input = torch.randn(1, 64, 10, 9)
        >>> output = m(input)
    """

    output_size: _size_2_opt_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.adaptive_avg_pool2d(input, self.output_size)


class AdaptiveAvgPool3d(_AdaptiveAvgPoolNd):
    r"""Applies a 3D adaptive average pooling over an input signal composed of several input planes.

    The output is of size D x H x W, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size of the form D x H x W.
            Can be a tuple (D, H, W) or a single number D for a cube D x D x D.
            D, H and W can be either an ``int``, or ``None`` which means the size
            will be the same as that of the input.

    Shape:
        - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` or :math:`(C, D_{in}, H_{in}, W_{in})`.
        - Output: :math:`(N, C, S_{0}, S_{1}, S_{2})` or :math:`(C, S_{0}, S_{1}, S_{2})`,
          where :math:`S=\text{output\_size}`.

    Examples:
        >>> # target output size of 5x7x9
        >>> m = nn.AdaptiveAvgPool3d((5, 7, 9))
        >>> input = torch.randn(1, 64, 8, 9, 10)
        >>> output = m(input)
        >>> # target output size of 7x7x7 (cube)
        >>> m = nn.AdaptiveAvgPool3d(7)
        >>> input = torch.randn(1, 64, 10, 9, 8)
        >>> output = m(input)
        >>> # target output size of 7x9x8
        >>> m = nn.AdaptiveAvgPool3d((7, None, None))
        >>> input = torch.randn(1, 64, 10, 9, 8)
        >>> output = m(input)
    """

    output_size: _size_3_opt_t

    def forward(self, input: Tensor) -> Tensor:
        """Runs the forward pass."""
        return F.adaptive_avg_pool3d(input, self.output_size)
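
# --- Illustrative sketch (not part of the original module) -----------------
# A minimal smoke test exercising a few of the modules above, guarded so it
# only runs when this file is executed directly. The expected shapes follow
# the docstring formulas (e.g. floor((50 - 3) / 2) + 1 == 24).
if __name__ == "__main__":
    import torch

    x = torch.randn(1, 16, 50)
    assert MaxPool1d(3, stride=2)(x).shape == (1, 16, 24)
    assert AvgPool1d(3, stride=2)(x).shape == (1, 16, 24)
    assert AdaptiveAvgPool1d(5)(x).shape == (1, 16, 5)
    # Pool with indices, then invert with MaxUnpool1d to recover the shape.
    pooled, idx = MaxPool1d(2, stride=2, return_indices=True)(x)
    assert MaxUnpool1d(2, stride=2)(pooled, idx).shape == x.shape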