from typing import Optional

import torch
from torch import Tensor
from torch.nn import functional as F, init
from torch.nn.parameter import Parameter

from .module import Module


__all__ = ["Embedding", "EmbeddingBag"]


class Embedding(Module):
    r"""A simple lookup table that stores embeddings of a fixed dictionary and size.

    This module is often used to store word embeddings and retrieve them using indices.
    The input to the module is a list of indices, and the output is the corresponding
    word embeddings.

    Args:
        num_embeddings (int): size of the dictionary of embeddings
        embedding_dim (int): the size of each embedding vector
        padding_idx (int, optional): If specified, the entries at :attr:`padding_idx` do not contribute to the gradient;
                                     therefore, the embedding vector at :attr:`padding_idx` is not updated during training,
                                     i.e. it remains as a fixed "pad". For a newly constructed Embedding,
                                     the embedding vector at :attr:`padding_idx` will default to all zeros,
                                     but can be updated to another value to be used as the padding vector.
        max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm`
                                    is renormalized to have norm :attr:`max_norm`.
        norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``.
        scale_grad_by_freq (bool, optional): If given, this will scale gradients by the inverse of frequency of
                                             the words in the mini-batch. Default ``False``.
        sparse (bool, optional): If ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor.
                                 See Notes for more details regarding sparse gradients.

    Attributes:
        weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
                         initialized from :math:`\mathcal{N}(0, 1)`

    Shape:
        - Input: :math:`(*)`, IntTensor or LongTensor of arbitrary shape containing the indices to extract
        - Output: :math:`(*, H)`, where `*` is the input shape and :math:`H=\text{embedding\_dim}`

    .. note::
        Keep in mind that only a limited number of optimizers support
        sparse gradients: currently it's :class:`optim.SGD` (`CUDA` and `CPU`),
        :class:`optim.SparseAdam` (`CUDA` and `CPU`) and :class:`optim.Adagrad` (`CPU`)

    .. note::
        When :attr:`max_norm` is not ``None``, :class:`Embedding`'s forward method will modify the
        :attr:`weight` tensor in-place. Since tensors needed for gradient computations cannot be
        modified in-place, performing a differentiable operation on ``Embedding.weight`` before
        calling :class:`Embedding`'s forward method requires cloning ``Embedding.weight`` when
        :attr:`max_norm` is not ``None``.

    For example::

        n, d, m = 3, 5, 7
        embedding = nn.Embedding(n, d, max_norm=1.0)
        W = torch.randn((m, d), requires_grad=True)
        idx = torch.tensor([1, 2])
        a = (
            embedding.weight.clone() @ W.t()
        )  # weight must be cloned for this to be differentiable
        b = embedding(idx) @ W.t()  # modifies weight in-place
        out = a.unsqueeze(0) + b.unsqueeze(1)
        loss = out.sigmoid().prod()
        loss.backward()

    Examples::

        >>> # an Embedding module containing 10 tensors of size 3
        >>> embedding = nn.Embedding(10, 3)
        >>> # a batch of 2 samples of 4 indices each
        >>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> embedding(input)
        tensor([[[-0.0251, -1.6902,  0.7172],
                 [-0.6431,  0.0748,  0.6969],
                 [ 1.4970,  1.3448, -0.9685],
                 [-0.3677, -2.7265, -0.1685]],

                [[ 1.4970,  1.3448, -0.9685],
                 [ 0.4362, -0.4004,  0.9400],
                 [-0.6431,  0.0748,  0.6969],
                 [ 0.9124, -2.3616,  1.1151]]])


        >>> # example with padding_idx
        >>> embedding = nn.Embedding(10, 3, padding_idx=0)
        >>> input = torch.LongTensor([[0, 2, 0, 5]])
        >>> embedding(input)
        tensor([[[ 0.0000,  0.0000,  0.0000],
                 [ 0.1535, -2.0309,  0.9315],
                 [ 0.0000,  0.0000,  0.0000],
                 [-0.1655,  0.9897,  0.0635]]])

        >>> # example of changing `pad` vector
        >>> padding_idx = 0
        >>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)
        >>> embedding.weight
        Parameter containing:
        tensor([[ 0.0000,  0.0000,  0.0000],
                [-0.7895, -0.7089, -0.0364],
                [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
        >>> with torch.no_grad():
        ...     embedding.weight[padding_idx] = torch.ones(3)
        >>> embedding.weight
        Parameter containing:
        tensor([[ 1.0000,  1.0000,  1.0000],
                [-0.7895, -0.7089, -0.0364],
                [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
    """

    __constants__ = [
        "num_embeddings",
        "embedding_dim",
        "padding_idx",
        "max_norm",
        "norm_type",
        "scale_grad_by_freq",
        "sparse",
    ]

    num_embeddings: int
    embedding_dim: int
    padding_idx: Optional[int]
    max_norm: Optional[float]
    norm_type: float
    scale_grad_by_freq: bool
    weight: Tensor
    freeze: bool
    sparse: bool

    def __init__(
        self,
        num_embeddings: int,
        embedding_dim: int,
        padding_idx: Optional[int] = None,
        max_norm: Optional[float] = None,
        norm_type: float = 2.0,
        scale_grad_by_freq: bool = False,
        sparse: bool = False,
        _weight: Optional[Tensor] = None,
        _freeze: bool = False,
        device=None,
        dtype=None,
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        self.num_embeddings = num_embeddings
        self.embedding_dim = embedding_dim
        if padding_idx is not None:
            # Check that padding_idx indexes a valid row, and normalize a
            # negative index to its positive equivalent.
            if padding_idx > 0:
                assert (
                    padding_idx < self.num_embeddings
                ), "Padding_idx must be within num_embeddings"
            elif padding_idx < 0:
                assert (
                    padding_idx >= -self.num_embeddings
                ), "Padding_idx must be within num_embeddings"
                padding_idx = self.num_embeddings + padding_idx
        self.padding_idx = padding_idx
        self.max_norm = max_norm
        self.norm_type = norm_type
        self.scale_grad_by_freq = scale_grad_by_freq
        if _weight is None:
            # No pretrained weight given: allocate and initialize from N(0, 1).
            self.weight = Parameter(
                torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
                requires_grad=not _freeze,
            )
            self.reset_parameters()
        else:
            assert list(_weight.shape) == [
                num_embeddings,
                embedding_dim,
            ], "Shape of weight does not match num_embeddings and embedding_dim"
            self.weight = Parameter(_weight, requires_grad=not _freeze)

        self.sparse = sparse

    def reset_parameters(self) -> None:
        init.normal_(self.weight)
        self._fill_padding_idx_with_zero()

    def _fill_padding_idx_with_zero(self) -> None:
        if self.padding_idx is not None:
            with torch.no_grad():
                self.weight[self.padding_idx].fill_(0)

    def forward(self, input: Tensor) -> Tensor:
        return F.embedding(
            input,
            self.weight,
            self.padding_idx,
            self.max_norm,
            self.norm_type,
            self.scale_grad_by_freq,
            self.sparse,
        )

    def extra_repr(self) -> str:
        s = "{num_embeddings}, {embedding_dim}"
        if self.padding_idx is not None:
            s += ", padding_idx={padding_idx}"
        if self.max_norm is not None:
            s += ", max_norm={max_norm}"
        if self.norm_type != 2:
            s += ", norm_type={norm_type}"
        if self.scale_grad_by_freq is not False:
            s += ", scale_grad_by_freq={scale_grad_by_freq}"
        if self.sparse is not False:
            s += ", sparse=True"
        return s.format(**self.__dict__)

    @classmethod
    def from_pretrained(
        cls,
        embeddings,
        freeze=True,
        padding_idx=None,
        max_norm=None,
        norm_type=2.0,
        scale_grad_by_freq=False,
        sparse=False,
    ):
        r"""Create Embedding instance from given 2-dimensional FloatTensor.

        Args:
            embeddings (Tensor): FloatTensor containing weights for the Embedding.
                First dimension is being passed to Embedding as ``num_embeddings``, second as ``embedding_dim``.
            freeze (bool, optional): If ``True``, the tensor does not get updated in the learning process.
                Equivalent to ``embedding.weight.requires_grad = False``. Default: ``True``
            padding_idx (int, optional): See module initialization documentation.
            max_norm (float, optional): See module initialization documentation.
            norm_type (float, optional): See module initialization documentation. Default ``2``.
            scale_grad_by_freq (bool, optional): See module initialization documentation. Default ``False``.
            sparse (bool, optional): See module initialization documentation.

        Examples::

            >>> # FloatTensor containing pretrained weights
            >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
            >>> embedding = nn.Embedding.from_pretrained(weight)
            >>> # Get embeddings for index 1
            >>> input = torch.LongTensor([1])
            >>> # xdoctest: +IGNORE_WANT("non-deterministic")
            >>> embedding(input)
            tensor([[ 4.0000,  5.1000,  6.3000]])
        """
        assert (
            embeddings.dim() == 2
        ), "Embeddings parameter is expected to be 2-dimensional"
        rows, cols = embeddings.shape
        embedding = cls(
            num_embeddings=rows,
            embedding_dim=cols,
            _weight=embeddings,
            _freeze=freeze,
            padding_idx=padding_idx,
            max_norm=max_norm,
            norm_type=norm_type,
            scale_grad_by_freq=scale_grad_by_freq,
            sparse=sparse,
        )
        return embedding
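
# A minimal, hedged usage sketch (illustrative only, not part of this module's
# public API; the helper name ``_embedding_lookup_demo`` is hypothetical). It
# demonstrates the claim in the class docstring above: with ``max_norm=None``,
# ``Embedding.forward`` is a plain row lookup into the ``weight`` matrix.
def _embedding_lookup_demo() -> None:
    emb = Embedding(10, 3)
    idx = torch.tensor([[1, 2, 4], [4, 3, 9]])
    # forward() gathers rows of ``weight``; the output has shape (*idx.shape, embedding_dim)
    assert torch.equal(emb(idx), emb.weight[idx])
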

class EmbeddingBag(Module):
    r"""Compute sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings.

    For bags of constant length, no :attr:`per_sample_weights`, no indices equal to :attr:`padding_idx`,
    and with 2D inputs, this class

        * with ``mode="sum"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=1)``,
        * with ``mode="mean"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=1)``,
        * with ``mode="max"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=1)``.

    However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these
    operations.

    EmbeddingBag also supports per-sample weights as an argument to the forward
    pass. This scales the output of the Embedding before performing a weighted
    reduction as specified by ``mode``. If :attr:`per_sample_weights` is passed, the
    only supported ``mode`` is ``"sum"``, which computes a weighted sum according to
    :attr:`per_sample_weights`.

    Args:
        num_embeddings (int): size of the dictionary of embeddings
        embedding_dim (int): the size of each embedding vector
        max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm`
                                    is renormalized to have norm :attr:`max_norm`.
        norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``.
        scale_grad_by_freq (bool, optional): if given, this will scale gradients by the inverse of frequency of
                                             the words in the mini-batch. Default ``False``.
                                             Note: this option is not supported when ``mode="max"``.
        mode (str, optional): ``"sum"``, ``"mean"`` or ``"max"``. Specifies the way to reduce the bag.
                              ``"sum"`` computes the weighted sum, taking :attr:`per_sample_weights`
                              into consideration. ``"mean"`` computes the average of the values
                              in the bag, ``"max"`` computes the max value over each bag.
                              Default: ``"mean"``
        sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See
                                 Notes for more details regarding sparse gradients. Note: this option is not
                                 supported when ``mode="max"``.
        include_last_offset (bool, optional): if ``True``, :attr:`offsets` has one additional element, where the last element
                                              is equivalent to the size of `indices`. This matches the CSR format.
        padding_idx (int, optional): If specified, the entries at :attr:`padding_idx` do not contribute to the
                                     gradient; therefore, the embedding vector at :attr:`padding_idx` is not updated
                                     during training, i.e. it remains as a fixed "pad". For a newly constructed
                                     EmbeddingBag, the embedding vector at :attr:`padding_idx` will default to all
                                     zeros, but can be updated to another value to be used as the padding vector.
                                     Note that the embedding vector at :attr:`padding_idx` is excluded from the
                                     reduction.

    Attributes:
        weight (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)`
                         initialized from :math:`\mathcal{N}(0, 1)`.

    Examples::

        >>> # an EmbeddingBag module containing 10 tensors of size 3
        >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
        >>> # a batch of 2 samples of 4 indices each
        >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long)
        >>> offsets = torch.tensor([0, 4], dtype=torch.long)
        >>> # xdoctest: +IGNORE_WANT("non-deterministic")
        >>> embedding_sum(input, offsets)
        tensor([[-0.8861, -5.4350, -0.0523],
                [ 1.1306, -2.5798, -1.0044]])

        >>> # Example with padding_idx
        >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2)
        >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long)
        >>> offsets = torch.tensor([0, 4], dtype=torch.long)
        >>> embedding_sum(input, offsets)
        tensor([[ 0.0000,  0.0000,  0.0000],
                [-0.7082,  3.2145, -2.6251]])

        >>> # An EmbeddingBag can be loaded from an Embedding like so
        >>> embedding = nn.Embedding(10, 3, padding_idx=2)
        >>> embedding_sum = nn.EmbeddingBag.from_pretrained(
        ...     embedding.weight,
        ...     padding_idx=embedding.padding_idx,
        ...     mode='sum')
    """

    __constants__ = [
        "num_embeddings",
        "embedding_dim",
        "max_norm",
        "norm_type",
        "scale_grad_by_freq",
        "mode",
        "sparse",
        "include_last_offset",
        "padding_idx",
    ]

    num_embeddings: int
    embedding_dim: int
    max_norm: Optional[float]
    norm_type: float
    scale_grad_by_freq: bool
    weight: Tensor
    mode: str
    sparse: bool
    include_last_offset: bool
    padding_idx: Optional[int]

    def __init__(
        self,
        num_embeddings: int,
        embedding_dim: int,
        max_norm: Optional[float] = None,
        norm_type: float = 2.0,
        scale_grad_by_freq: bool = False,
        mode: str = "mean",
        sparse: bool = False,
        _weight: Optional[Tensor] = None,
        include_last_offset: bool = False,
        padding_idx: Optional[int] = None,
        device=None,
        dtype=None,
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        self.num_embeddings = num_embeddings
        self.embedding_dim = embedding_dim
        self.max_norm = max_norm
        self.norm_type = norm_type
        self.scale_grad_by_freq = scale_grad_by_freq
        if padding_idx is not None:
            # Check that padding_idx indexes a valid row, and normalize a
            # negative index to its positive equivalent.
            if padding_idx > 0:
                assert (
                    padding_idx < self.num_embeddings
                ), "padding_idx must be within num_embeddings"
            elif padding_idx < 0:
                assert (
                    padding_idx >= -self.num_embeddings
                ), "padding_idx must be within num_embeddings"
                padding_idx = self.num_embeddings + padding_idx
        self.padding_idx = padding_idx
        if _weight is None:
            self.weight = Parameter(
                torch.empty((num_embeddings, embedding_dim), **factory_kwargs)
            )
            self.reset_parameters()
        else:
            assert list(_weight.shape) == [
                num_embeddings,
                embedding_dim,
            ], "Shape of weight does not match num_embeddings and embedding_dim"
            self.weight = Parameter(_weight)
        self.mode = mode
        self.sparse = sparse
        self.include_last_offset = include_last_offset

    def reset_parameters(self) -> None:
        init.normal_(self.weight)
        self._fill_padding_idx_with_zero()

    def _fill_padding_idx_with_zero(self) -> None:
        if self.padding_idx is not None:
            with torch.no_grad():
                self.weight[self.padding_idx].fill_(0)

    def forward(
        self,
        input: Tensor,
        offsets: Optional[Tensor] = None,
        per_sample_weights: Optional[Tensor] = None,
    ) -> Tensor:
        """Forward pass of EmbeddingBag.

        Args:
            input (Tensor): Tensor containing bags of indices into the embedding matrix.
            offsets (Tensor, optional): Only used when :attr:`input` is 1D. :attr:`offsets` determines
                the starting index position of each bag (sequence) in :attr:`input`.
            per_sample_weights (Tensor, optional): a tensor of float / double weights, or None
                to indicate all weights should be taken to be ``1``. If specified, :attr:`per_sample_weights`
                must have exactly the same shape as input and is treated as having the same
                :attr:`offsets`, if those are not ``None``. Only supported for ``mode='sum'``.

        Returns:
            Tensor output shape of `(B, embedding_dim)`.

        .. note::

            A few notes about ``input`` and ``offsets``:

            - :attr:`input` and :attr:`offsets` have to be of the same type, either int or long

            - If :attr:`input` is 2D of shape `(B, N)`, it will be treated as ``B`` bags (sequences)
              each of fixed length ``N``, and this will return ``B`` values aggregated in a way
              depending on the :attr:`mode`. :attr:`offsets` is ignored and required to be ``None`` in this case.

            - If :attr:`input` is 1D of shape `(N)`, it will be treated as a concatenation of
              multiple bags (sequences). :attr:`offsets` is required to be a 1D tensor containing the
              starting index positions of each bag in :attr:`input`. Therefore, for :attr:`offsets`
              of shape `(B)`, :attr:`input` will be viewed as having ``B`` bags. Empty bags (i.e.,
              having 0-length) will have returned vectors filled by zeros.
        """
        return F.embedding_bag(
            input,
            self.weight,
            offsets,
            self.max_norm,
            self.norm_type,
            self.scale_grad_by_freq,
            self.mode,
            self.sparse,
            per_sample_weights,
            self.include_last_offset,
            self.padding_idx,
        )

    def extra_repr(self) -> str:
        s = "{num_embeddings}, {embedding_dim}"
        if self.max_norm is not None:
            s += ", max_norm={max_norm}"
        if self.norm_type != 2:
            s += ", norm_type={norm_type}"
        if self.scale_grad_by_freq is not False:
            s += ", scale_grad_by_freq={scale_grad_by_freq}"
        s += ", mode={mode}"
        if self.padding_idx is not None:
            s += ", padding_idx={padding_idx}"
        return s.format(**{k: repr(v) for k, v in self.__dict__.items()})

    @classmethod
    def from_pretrained(
        cls,
        embeddings: Tensor,
        freeze: bool = True,
        max_norm: Optional[float] = None,
        norm_type: float = 2.0,
        scale_grad_by_freq: bool = False,
        mode: str = "mean",
        sparse: bool = False,
        include_last_offset: bool = False,
        padding_idx: Optional[int] = None,
    ) -> "EmbeddingBag":
        r"""Create EmbeddingBag instance from given 2-dimensional FloatTensor.

        Args:
            embeddings (Tensor): FloatTensor containing weights for the EmbeddingBag.
                First dimension is being passed to EmbeddingBag as ``num_embeddings``, second as ``embedding_dim``.
            freeze (bool, optional): If ``True``, the tensor does not get updated in the learning process.
                Equivalent to ``embeddingbag.weight.requires_grad = False``. Default: ``True``
            max_norm (float, optional): See module initialization documentation. Default: ``None``
            norm_type (float, optional): See module initialization documentation. Default ``2``.
            scale_grad_by_freq (bool, optional): See module initialization documentation. Default ``False``.
            mode (str, optional): See module initialization documentation. Default: ``"mean"``
            sparse (bool, optional): See module initialization documentation. Default: ``False``.
            include_last_offset (bool, optional): See module initialization documentation. Default: ``False``.
            padding_idx (int, optional): See module initialization documentation. Default: ``None``.

        Examples::

            >>> # FloatTensor containing pretrained weights
            >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
            >>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)
            >>> # Get embeddings for index 1
            >>> input = torch.LongTensor([[1, 0]])
            >>> # xdoctest: +IGNORE_WANT("non-deterministic")
            >>> embeddingbag(input)
            tensor([[ 2.5000,  3.7000,  4.6500]])
        """
        assert (
            embeddings.dim() == 2
        ), "Embeddings parameter is expected to be 2-dimensional"
        rows, cols = embeddings.shape
        embeddingbag = cls(
            num_embeddings=rows,
            embedding_dim=cols,
            _weight=embeddings,
            max_norm=max_norm,
            norm_type=norm_type,
            scale_grad_by_freq=scale_grad_by_freq,
            mode=mode,
            sparse=sparse,
            include_last_offset=include_last_offset,
            padding_idx=padding_idx,
        )
        embeddingbag.weight.requires_grad = not freeze
        return embeddingbag
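
# A minimal, hedged sketch (illustrative only, not part of this module's public
# API; the helper name ``_embedding_bag_equivalence_demo`` is hypothetical). It
# checks the equivalences stated in the ``EmbeddingBag`` docstring for 2D inputs
# with constant-length bags, and shows ``per_sample_weights`` with ``mode="sum"``.
def _embedding_bag_equivalence_demo() -> None:
    weight = torch.randn(10, 3)
    emb = Embedding.from_pretrained(weight)
    idx = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])

    # Each mode matches Embedding followed by the same reduction over dim=1.
    for mode, reduce in (
        ("sum", lambda t: t.sum(dim=1)),
        ("mean", lambda t: t.mean(dim=1)),
        ("max", lambda t: t.max(dim=1).values),
    ):
        bag = EmbeddingBag.from_pretrained(weight, mode=mode)
        assert torch.allclose(bag(idx), reduce(emb(idx)))

    # per_sample_weights scales each looked-up row before the "sum" reduction.
    bag = EmbeddingBag.from_pretrained(weight, mode="sum")
    w = torch.rand_like(idx, dtype=torch.float)
    expected = (emb(idx) * w.unsqueeze(-1)).sum(dim=1)
    assert torch.allclose(bag(idx, per_sample_weights=w), expected)


if __name__ == "__main__":
    # Run the illustrative sketches only when this file is executed directly.
    _embedding_lookup_demo()
    _embedding_bag_equivalence_demo()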