from typing import Optional

import flax
import jax.numpy as jnp

from .utils import ModelOutput


@flax.struct.dataclass
class FlaxBaseModelOutput(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states and attentions.

    Args:
        last_hidden_state (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    last_hidden_state: Optional[jnp.ndarray] = None
    hidden_states: Optional[tuple[jnp.ndarray]] = None
    attentions: Optional[tuple[jnp.ndarray]] = None


@flax.struct.dataclass
class FlaxBaseModelOutputWithNoAttention(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states.

    Args:
        last_hidden_state (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `jnp.ndarray` (one for the output of the embeddings, if the model has an embedding layer, + one
            for the output of each layer) of shape `(batch_size, num_channels, height, width)`. Hidden-states of the
            model at the output of each layer plus the optional initial embedding outputs.
    """

    last_hidden_state: Optional[jnp.ndarray] = None
    hidden_states: Optional[tuple[jnp.ndarray]] = None


@flax.struct.dataclass
class FlaxBaseModelOutputWithPoolingAndNoAttention(ModelOutput):
    """
    Base class for model's outputs that also contains a pooling of the last hidden states.

    Args:
        last_hidden_state (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`):
            Sequence of hidden-states at the output of the last layer of the model.
        pooler_output (`jnp.ndarray` of shape `(batch_size, hidden_size)`):
            Last layer hidden-state after a pooling operation on the spatial dimensions.
        hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `jnp.ndarray` (one for the output of the embeddings, if the model has an embedding layer, + one
            for the output of each layer) of shape `(batch_size, num_channels, height, width)`. Hidden-states of the
            model at the output of each layer plus the optional initial embedding outputs.
    """

    last_hidden_state: Optional[jnp.ndarray] = None
    pooler_output: Optional[jnp.ndarray] = None
    hidden_states: Optional[tuple[jnp.ndarray]] = None


@flax.struct.dataclass
class FlaxImageClassifierOutputWithNoAttention(ModelOutput):
    """
    Base class for outputs of image classification models.

    Args:
        logits (`jnp.ndarray` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `jnp.ndarray` (one for the output of the embeddings, if the model has an embedding layer, + one
            for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also
            called feature maps) of the model at the output of each stage.
    """

    logits: Optional[jnp.ndarray] = None
    hidden_states: Optional[tuple[jnp.ndarray]] = None


@flax.struct.dataclass
class FlaxBaseModelOutputWithPast(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states and attentions.

    Args:
        last_hidden_state (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        past_key_values (`dict[str, jnp.ndarray]`):
            Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
            auto-regressive decoding. Pre-computed key and value hidden-states are of shape *[batch_size, max_length]*.
        hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    last_hidden_state: Optional[jnp.ndarray] = None
    past_key_values: Optional[dict[str, jnp.ndarray]] = None
    hidden_states: Optional[tuple[jnp.ndarray]] = None
    attentions: Optional[tuple[jnp.ndarray]] = None
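

# Usage sketch (illustration only, not part of the library API): because the
# output classes above are declared with `flax.struct.dataclass`, instances
# are immutable JAX pytrees and compose with `jax.tree_util` / `jax.jit`.
# `DemoOutput` below is a hypothetical stand-in mirroring the field layout of
# `FlaxBaseModelOutput`; it is defined locally so the sketch is self-contained.
import flax
import jax
import jax.numpy as jnp


@flax.struct.dataclass
class DemoOutput:
    last_hidden_state: jnp.ndarray = None
    hidden_states: tuple = None


_demo = DemoOutput(last_hidden_state=jnp.ones((2, 4, 8)))
# `tree_map` traverses the array leaves and rebuilds the dataclass; a field
# left as `None` is treated as an empty subtree and passes through unchanged.
_doubled = jax.tree_util.tree_map(lambda x: x * 2, _demo)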