import gc
import json
import os
import warnings
from functools import partial
from pickle import UnpicklingError
from typing import Any, Optional, Union

import flax.linen as nn
import jax
import jax.numpy as jnp
import msgpack.exceptions
from flax.core.frozen_dict import FrozenDict, unfreeze
from flax.serialization import from_bytes, to_bytes
from flax.traverse_util import flatten_dict, unflatten_dict
from jax.random import PRNGKey

from .configuration_utils import PretrainedConfig
from .dynamic_module_utils import custom_object_save
from .generation import FlaxGenerationMixin, GenerationConfig
from .modeling_flax_pytorch_utils import load_pytorch_checkpoint_in_flax_state_dict
from .utils import (
    FLAX_WEIGHTS_INDEX_NAME,
    FLAX_WEIGHTS_NAME,
    SAFE_WEIGHTS_INDEX_NAME,
    SAFE_WEIGHTS_NAME,
    WEIGHTS_INDEX_NAME,
    WEIGHTS_NAME,
    PushToHubMixin,
    add_code_sample_docstrings,
    add_start_docstrings_to_model_forward,
    cached_file,
    copy_func,
    download_url,
    has_file,
    is_offline_mode,
    is_remote_url,
    logging,
    replace_return_docstrings,
)
from .utils.hub import convert_file_size_to_int, get_checkpoint_shard_files
from .utils.import_utils import is_safetensors_available


if is_safetensors_available():
    from safetensors import safe_open
    from safetensors.flax import load_file as safe_load_file
    from safetensors.flax import save_file as safe_save_file


logger = logging.get_logger(__name__)


def quick_gelu(x):
    return x * jax.nn.sigmoid(1.702 * x)


ACT2FN = {
    "gelu": partial(nn.gelu, approximate=False),
    "relu": nn.relu,
    "silu": nn.swish,
    "swish": nn.swish,
    "gelu_new": partial(nn.gelu, approximate=True),
    "quick_gelu": quick_gelu,
    "gelu_pytorch_tanh": partial(nn.gelu, approximate=True),
    "tanh": nn.tanh,
}


def flax_shard_checkpoint(params, max_shard_size="10GB"):
    """
    Splits a model state dictionary into sub-checkpoints so that the final size of each sub-checkpoint does not
    exceed a given size. The sub-checkpoints are determined by iterating through the `state_dict` in the order of its
    keys, so there is no optimization made to make each sub-checkpoint as close as possible to the maximum size
    passed. For example, if the limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will
    get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

    If one of the model's weights is bigger than `max_shard_size`, it will end up in its own sub-checkpoint which
    will have a size greater than `max_shard_size`.

    Args:
        params (`Union[Dict, FrozenDict]`): A `PyTree` of model parameters.
        max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`):
            The maximum size of each sub-checkpoint. If expressed as a string, needs to be digits followed by a unit
            (like `"5MB"`).
    """
    max_shard_size = convert_file_size_to_int(max_shard_size)

    sharded_state_dicts = []
    current_block = {}
    current_block_size = 0
    total_size = 0

    # flatten the weights to chunk them
    weights = flatten_dict(params, sep="/")
    for item in weights:
        weight_size = weights[item].size * weights[item].dtype.itemsize

        # If this weight is going to tip up over the maximal size, we split.
        if current_block_size + weight_size > max_shard_size:
            sharded_state_dicts.append(current_block)
            current_block = {}
            current_block_size = 0

        current_block[item] = weights[item]
        current_block_size += weight_size
        total_size += weight_size

    # Add the last block
    sharded_state_dicts.append(current_block)

    # If we only have one shard, we return it
    if len(sharded_state_dicts) == 1:
        return {FLAX_WEIGHTS_NAME: sharded_state_dicts[0]}, None

    # Otherwise, build the index
    weight_map = {}
    shards = {}
    for idx, shard in enumerate(sharded_state_dicts):
        shard_file = FLAX_WEIGHTS_NAME.replace(
            ".msgpack", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.msgpack"
        )
        shards[shard_file] = shard
        for weight_name in shard.keys():
            weight_map[weight_name] = shard_file

    # Add the metadata
    metadata = {"total_size": total_size}
    index = {"metadata": metadata, "weight_map": weight_map}
    return shards, index
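# A small illustrative sketch (not part of the original module): what `flax_shard_checkpoint`
# returns for a toy parameter tree. The parameter names and sizes below are invented.
#
#     import jax.numpy as jnp
#
#     toy_params = {"dense": {"kernel": jnp.zeros((16, 16)), "bias": jnp.zeros((16,))}}
#     shards, index = flax_shard_checkpoint(toy_params)  # default max_shard_size="10GB"
#     # Everything fits into a single shard, so `index` is None and `shards` maps
#     # FLAX_WEIGHTS_NAME to the flattened weights ({"dense/kernel": ..., "dense/bias": ...}).
#     # Once the total size exceeds `max_shard_size`, `index["weight_map"]` maps every
#     # flattened weight name to the "-00001-of-000NN"-style shard file that stores it.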
class FlaxPreTrainedModel(PushToHubMixin, FlaxGenerationMixin):
    r"""
    Base class for all Flax models. Handles storing the model configuration and provides the common methods for
    loading, downloading and saving models.
    """

    config_class = None
    base_model_prefix = ""
    main_input_name = "input_ids"
    _auto_class = None
    _missing_keys = set()

    def __init__(
        self,
        config: PretrainedConfig,
        module: nn.Module,
        input_shape: tuple = (1, 1),
        seed: int = 0,
        dtype: jnp.dtype = jnp.float32,
        _do_init: bool = True,
    ):
        if config is None:
            raise ValueError("config cannot be None")

        if module is None:
            raise ValueError("module cannot be None")

        # Those are private to be exposed as typed property on derived classes.
        self._config = config
        self._module = module

        # Those are public as their type is generic to every derived classes.
        self.key = PRNGKey(seed)
        self.dtype = dtype
        self.input_shape = input_shape
        self.generation_config = GenerationConfig.from_model_config(config) if self.can_generate() else None

        # To check if the model was initialized automatically.
        self._is_initialized = _do_init

        if _do_init:
            # randomly initialized parameters
            random_params = self.init_weights(self.key, input_shape)
            params_shape_tree = jax.eval_shape(lambda params: params, random_params)
        else:
            init_fn = partial(self.init_weights, input_shape=input_shape)
            params_shape_tree = jax.eval_shape(init_fn, self.key)

            logger.info(
                "Model weights are not initialized as `_do_init` is set to `False`. "
                f"Make sure to call `{self.__class__.__name__}.init_weights` manually to initialize the weights."
            )

        # get the shape of the parameters
        self._params_shape_tree = params_shape_tree

        # save required_params as set
        self._required_params = set(flatten_dict(unfreeze(params_shape_tree)).keys())

        # initialize the parameters
        if _do_init:
            self.params = random_params

    def init_weights(self, rng: jax.random.PRNGKey, input_shape: tuple, params: FrozenDict = None) -> dict:
        raise NotImplementedError(f"init method has to be implemented for {self}")

    def enable_gradient_checkpointing(self):
        raise NotImplementedError(f"gradient checkpointing method has to be implemented for {self}")

    @classmethod
    def _from_config(cls, config, **kwargs):
        """
        All context managers that the model should be initialized under go here.
        """
        return cls(config, **kwargs)

    @property
    def framework(self) -> str:
        """
        :str: Identifies that this is a Flax model.
        """
        return "flax"

    @property
    def config(self) -> PretrainedConfig:
        return self._config

    @property
    def module(self) -> nn.Module:
        return self._module

    @property
    def params(self) -> Union[dict, FrozenDict]:
        if not self._is_initialized:
            raise ValueError(
                "`params` cannot be accessed from model when the model is created with `_do_init=False`. "
                "You must call `init_weights` manually and store the params outside of the model and "
                "pass it explicitly where needed."
            )
        return self._params

    @property
    def required_params(self) -> set:
        return self._required_params

    @property
    def params_shape_tree(self) -> dict:
        return self._params_shape_tree

    @params.setter
    def params(self, params: Union[dict, FrozenDict]):
        # don't set params if the model is not initialized
        if not self._is_initialized:
            raise ValueError(
                "`params` cannot be set from model when the model is created with `_do_init=False`. "
                "You store the params outside of the model."
            )

        if isinstance(params, FrozenDict):
            params = unfreeze(params)
        param_keys = set(flatten_dict(params).keys())
        if len(self.required_params - param_keys) > 0:
            raise ValueError(
                "Some parameters are missing. Make sure that `params` include the following "
                f"parameters {self.required_params - param_keys}"
            )
        self._params = params

    def _cast_floating_to(self, params: Union[dict, FrozenDict], dtype: jnp.dtype, mask: Any = None) -> Any:
        """
        Helper method to cast floating-point values of a given parameter `PyTree` to the given `dtype`.
        """

        def conditional_cast(param):
            if isinstance(param, jnp.ndarray) and jnp.issubdtype(param.dtype, jnp.floating):
                param = param.astype(dtype)
            return param

        if mask is None:
            return jax.tree_util.tree_map(conditional_cast, params)

        flat_params = flatten_dict(params)
        flat_mask, _ = jax.tree_util.tree_flatten(mask)

        for masked, key in zip(flat_mask, sorted(flat_params.keys())):
            if masked:
                flat_params[key] = conditional_cast(flat_params[key])

        return unflatten_dict(flat_params)

    def to_bf16(self, params: Union[dict, FrozenDict], mask: Any = None):
        r"""
        Cast the floating-point `params` to `jax.numpy.bfloat16`. This returns a new `params` tree and does not cast
        the `params` in place.

        This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full
        half-precision training, or to save weights in bfloat16 for inference in order to save memory and improve
        speed.

        Arguments:
            params (`Union[Dict, FrozenDict]`):
                A `PyTree` of model parameters.
            mask (`Union[Dict, FrozenDict]`):
                A `PyTree` with the same structure as the `params` tree. The leaves should be booleans: `True` for
                params you want to cast, `False` for those you want to skip.
        Examples:

        ```python
        >>> from transformers import FlaxBertModel

        >>> # load model
        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision
        >>> model.params = model.to_bf16(model.params)
        >>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
        >>> # then pass the mask as follows
        >>> from flax import traverse_util

        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> flat_params = traverse_util.flatten_dict(model.params)
        >>> mask = {
        ...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
        ...     for path in flat_params
        ... }
        >>> mask = traverse_util.unflatten_dict(mask)
        >>> model.params = model.to_bf16(model.params, mask)
        ```"""
        return self._cast_floating_to(params, jnp.bfloat16, mask)

    def to_fp32(self, params: Union[dict, FrozenDict], mask: Any = None):
        r"""
        Cast the floating-point `params` to `jax.numpy.float32`. This method can be used to explicitly convert the
        model parameters to fp32 precision. This returns a new `params` tree and does not cast the `params` in place.

        Arguments:
            params (`Union[Dict, FrozenDict]`):
                A `PyTree` of model parameters.
            mask (`Union[Dict, FrozenDict]`):
                A `PyTree` with the same structure as the `params` tree. The leaves should be booleans: `True` for
                params you want to cast, `False` for those you want to skip.

        Examples:

        ```python
        >>> from transformers import FlaxBertModel

        >>> # Download model and configuration from huggingface.co
        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> # By default, the model params will be in fp32, to illustrate the use of this method,
        >>> # we'll first cast to fp16 and back to fp32
        >>> model.params = model.to_fp16(model.params)
        >>> # now cast back to fp32
        >>> model.params = model.to_fp32(model.params)
        ```"""
        return self._cast_floating_to(params, jnp.float32, mask)

    def to_fp16(self, params: Union[dict, FrozenDict], mask: Any = None):
        r"""
        Cast the floating-point `params` to `jax.numpy.float16`. This returns a new `params` tree and does not cast
        the `params` in place.

        This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full
        half-precision training, or to save weights in float16 for inference in order to save memory and improve
        speed.

        Arguments:
            params (`Union[Dict, FrozenDict]`):
                A `PyTree` of model parameters.
            mask (`Union[Dict, FrozenDict]`):
                A `PyTree` with the same structure as the `params` tree. The leaves should be booleans: `True` for
                params you want to cast, `False` for those you want to skip.

        Examples:

        ```python
        >>> from transformers import FlaxBertModel

        >>> # load model
        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> # By default, the model params will be in fp32, to cast these to float16
        >>> model.params = model.to_fp16(model.params)
        >>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
        >>> # then pass the mask as follows
        >>> from flax import traverse_util

        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> flat_params = traverse_util.flatten_dict(model.params)
        >>> mask = {
        ...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
        ...     for path in flat_params
        ... }
        >>> mask = traverse_util.unflatten_dict(mask)
        >>> model.params = model.to_fp16(model.params, mask)
        ```"""
        return self._cast_floating_to(params, jnp.float16, mask)

    @classmethod
    def load_flax_weights(cls, resolved_archive_file):
        try:
            if resolved_archive_file.endswith(".safetensors"):
                state = safe_load_file(resolved_archive_file)
                state = unflatten_dict(state, sep=".")
            else:
                with open(resolved_archive_file, "rb") as state_f:
                    state = from_bytes(cls, state_f.read())
        except (UnpicklingError, msgpack.exceptions.ExtraData) as e:
            try:
                with open(resolved_archive_file) as f:
                    if f.read().startswith("version"):
                        raise OSError(
                            "You seem to have cloned a repository without having git-lfs installed. Please"
                            " install git-lfs and run `git lfs install` followed by `git lfs pull` in the"
                            " folder you cloned."
                        )
                    else:
                        raise ValueError from e
            except (UnicodeDecodeError, ValueError):
                raise OSError(f"Unable to convert {resolved_archive_file} to Flax deserializable object. ")

        return state

    @classmethod
    def load_flax_sharded_weights(cls, shard_files):
        """
        This is the same as [`flax.serialization.from_bytes`]
        (https://flax.readthedocs.io/en/latest/_modules/flax/serialization.html#from_bytes) but for a sharded
        checkpoint.

        This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being
        loaded in the model.

        Args:
            shard_files (`list[str]`):
                The list of shard files to load.

        Returns:
            `Dict`: A nested dictionary of the model parameters, in the expected format for flax models: `{'model':
            {'params': {'...'}}}`.
        """
        # Load the index
        state_sharded_dict = {}

        for shard_file in shard_files:
            # load using msgpack utils
            try:
                with open(shard_file, "rb") as state_f:
                    state = from_bytes(cls, state_f.read())
            except (UnpicklingError, msgpack.exceptions.ExtraData) as e:
                with open(shard_file) as f:
                    if f.read().startswith("version"):
                        raise OSError(
                            "You seem to have cloned a repository without having git-lfs installed. Please"
                            " install git-lfs and run `git lfs install` followed by `git lfs pull` in the"
                            " folder you cloned."
                        )
                    else:
                        raise ValueError from e
            except (UnicodeDecodeError, ValueError):
                raise OSError(f"Unable to convert {shard_file} to Flax deserializable object. ")

            state = flatten_dict(state, sep="/")
            state_sharded_dict.update(state)
            del state
            gc.collect()

        # the state dict is unflattened to match the format of model.params
        return unflatten_dict(state_sharded_dict, sep="/")
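    # Hedged usage sketch (not part of the library source): how the two loaders above are
    # typically driven. The file names here are illustrative.
    #
    #     # single-file checkpoint, msgpack or safetensors
    #     state = FlaxPreTrainedModel.load_flax_weights("path/to/flax_model.msgpack")
    #
    #     # sharded checkpoint: pass the list of shard files, e.g. the values of the
    #     # "weight_map" of the accompanying index file; shards are loaded one by one
    #     # and freed after being merged into a single nested dict
    #     state = FlaxPreTrainedModel.load_flax_sharded_weights(
    #         ["flax_model-00001-of-00002.msgpack", "flax_model-00002-of-00002.msgpack"]
    #     )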
    @classmethod
    def can_generate(cls) -> bool:
        """
        Returns whether this model can generate sequences with `.generate()`.

        Returns:
            `bool`: Whether this model can generate sequences with `.generate()`.
        """
        # Detects whether `prepare_inputs_for_generation` has been overwritten, which is a requirement for generation.
        if "GenerationMixin" in str(cls.prepare_inputs_for_generation) and "GenerationMixin" in str(cls.generate):
            return False
        return True

    @classmethod
    def from_pretrained(
        cls,
        pretrained_model_name_or_path: Union[str, os.PathLike],
        dtype: jnp.dtype = jnp.float32,
        *model_args,
        config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None,
        cache_dir: Optional[Union[str, os.PathLike]] = None,
        ignore_mismatched_sizes: bool = False,
        force_download: bool = False,
        local_files_only: bool = False,
        token: Optional[Union[str, bool]] = None,
        revision: str = "main",
        **kwargs,
    ):
        r"""
        Instantiate a pretrained Flax model from a pre-trained model configuration.

        The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come
        pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning
        task.

        The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those
        weights are discarded.

        Parameters:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                Can be either:

                    - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
                    - A path to a *directory* containing model weights saved using
                      [`~FlaxPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
                    - A path or url to a *pt index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this
                      case, `from_pt` should be set to `True`.
            dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`):
                The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and
                `jax.numpy.bfloat16` (on TPUs).

                This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
                specified, all the computation will be performed with the given `dtype`.

                **Note that this only specifies the dtype of the computation and does not influence the dtype of model
                parameters.**

                If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and
                [`~FlaxPreTrainedModel.to_bf16`].
            model_args (sequence of positional arguments, *optional*):
                All remaining positional arguments will be passed to the underlying model's `__init__` method.
            config (`Union[PretrainedConfig, str, os.PathLike]`, *optional*):
                Can be either:

                    - an instance of a class derived from [`PretrainedConfig`],
                    - a string or path valid as input to [`~PretrainedConfig.from_pretrained`].

                Configuration for the model to use instead of an automatically loaded configuration. Configuration can
                be automatically loaded when:

                    - The model is a model provided by the library (loaded with the *model id* string of a pretrained
                      model).
                    - The model was saved using [`~PreTrainedModel.save_pretrained`] and is reloaded by supplying the
                      save directory.
                    - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a
                      configuration JSON file named *config.json* is found in the directory.
            cache_dir (`Union[str, os.PathLike]`, *optional*):
                Path to a directory in which a downloaded pretrained model configuration should be cached if the
                standard cache should not be used.
            from_pt (`bool`, *optional*, defaults to `False`):
                Load the model weights from a PyTorch checkpoint save file (see docstring of
                `pretrained_model_name_or_path` argument).
            ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`):
                Whether or not to raise an error if some of the weights from the checkpoint do not have the same size
                as the weights of the model (if for instance, you are instantiating a model with 10 labels from a
                checkpoint with 3 labels).
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
                cached versions if they exist.
            resume_download:
                Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5
                of Transformers.
            proxies (`dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            local_files_only (`bool`, *optional*, defaults to `False`):
                Whether or not to only look at local files (i.e., do not try to download the model).
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `hf auth login` (stored in `~/.huggingface`).
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
                git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
                identifier allowed by git. To test a pull request you made on the Hub, you can pass
                `revision="refs/pr/<pr_number>"`.
            subfolder (`str`, *optional*, defaults to `""`):
                In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
                specify the folder name here.
            kwargs (remaining dictionary of keyword arguments, *optional*):
                Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
                `output_attentions=True`).
                Behaves differently depending on whether a `config` is provided or automatically loaded:

                    - If a configuration is provided with `config`, `**kwargs` will be directly passed to the
                      underlying model's `__init__` method (we assume all relevant updates to the configuration have
                      already been done).
                    - If a configuration is not provided, `kwargs` will be first passed to the configuration class
                      initialization function ([`~PretrainedConfig.from_pretrained`]). Each key of `kwargs` that
                      corresponds to a configuration attribute will be used to override said attribute with the
                      supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute
                      will be passed to the underlying model's `__init__` function.

        Examples:

        ```python
        >>> from transformers import BertConfig, FlaxBertModel

        >>> # Download model and configuration from huggingface.co and cache.
        >>> model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
        >>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
        >>> model = FlaxBertModel.from_pretrained("./test/saved_model/")
        >>> # Loading from a PyTorch checkpoint file instead of a PyTorch model (slower, for example purposes, not runnable).
        >>> config = BertConfig.from_json_file("./pt_model/config.json")
        >>> model = FlaxBertModel.from_pretrained("./pt_model/pytorch_model.bin", from_pt=True, config=config)
        ```"""
        from_pt = kwargs.pop("from_pt", False)
        resume_download = kwargs.pop("resume_download", None)
        proxies = kwargs.pop("proxies", None)
        use_auth_token = kwargs.pop("use_auth_token", None)
        trust_remote_code = kwargs.pop("trust_remote_code", None)
        from_pipeline = kwargs.pop("_from_pipeline", None)
        from_auto_class = kwargs.pop("_from_auto", False)
        _do_init = kwargs.pop("_do_init", True)
        subfolder = kwargs.pop("subfolder", "")
        commit_hash = kwargs.pop("_commit_hash", None)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use"
                " `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if trust_remote_code is True:
            logger.warning(
                "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is"
                " ignored."
            )

        user_agent = {"file_type": "model", "framework": "flax", "from_auto_class": from_auto_class}
        if from_pipeline is not None:
            user_agent["using_pipeline"] = from_pipeline

        if is_offline_mode() and not local_files_only:
            logger.info("Offline mode: forcing local_files_only=True")
            local_files_only = True

        # Load config if we don't provide a configuration
        if not isinstance(config, PretrainedConfig):
            config_path = config if config is not None else pretrained_model_name_or_path
            config, model_kwargs = cls.config_class.from_pretrained(
                config_path,
                cache_dir=cache_dir,
                return_unused_kwargs=True,
                force_download=force_download,
                resume_download=resume_download,
                proxies=proxies,
                local_files_only=local_files_only,
                token=token,
                revision=revision,
                subfolder=subfolder,
                _from_auto=from_auto_class,
                _from_pipeline=from_pipeline,
                _commit_hash=commit_hash,
                **kwargs,
            )
        else:
            model_kwargs = kwargs.copy()

        if commit_hash is None:
            commit_hash = getattr(config, "_commit_hash", None)

        # Add the dtype to model_kwargs
        model_kwargs["dtype"] = dtype

        # Resolve the checkpoint file(s) to load, either from a local folder/file or from the Hub.
        is_sharded = False
        pretrained_model_name_or_path = str(pretrained_model_name_or_path)
        is_local = os.path.isdir(pretrained_model_name_or_path)
        if is_local:
            if os.path.isfile(os.path.join(pretrained_model_name_or_path, subfolder, FLAX_WEIGHTS_NAME)):
                # Load from a Flax checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, subfolder, FLAX_WEIGHTS_NAME)
            elif os.path.isfile(os.path.join(pretrained_model_name_or_path, subfolder, FLAX_WEIGHTS_INDEX_NAME)):
                # Load from a sharded Flax checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, subfolder, FLAX_WEIGHTS_INDEX_NAME)
                is_sharded = True
            elif is_safetensors_available() and os.path.isfile(
                os.path.join(pretrained_model_name_or_path, subfolder, SAFE_WEIGHTS_NAME)
            ):
                # Load from a safetensors checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, subfolder, SAFE_WEIGHTS_NAME)
            elif from_pt and os.path.isfile(os.path.join(pretrained_model_name_or_path, subfolder, WEIGHTS_NAME)):
                # Load from a PyTorch checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, subfolder, WEIGHTS_NAME)
            elif from_pt and os.path.isfile(
                os.path.join(pretrained_model_name_or_path, subfolder, WEIGHTS_INDEX_NAME)
            ):
                # Load from a sharded PyTorch checkpoint
                archive_file = os.path.join(pretrained_model_name_or_path, subfolder, WEIGHTS_INDEX_NAME)
                is_sharded = True
            elif is_safetensors_available() and os.path.isfile(
                os.path.join(pretrained_model_name_or_path, subfolder, SAFE_WEIGHTS_INDEX_NAME)
            ):
                raise NotImplementedError("Support for sharded checkpoints using safetensors is coming soon!")
            elif os.path.isfile(os.path.join(pretrained_model_name_or_path, subfolder, WEIGHTS_NAME)):
                raise OSError(
                    f"Error no file named {FLAX_WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} "
                    "but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those "
                    "weights."
                )
            else:
                raise OSError(
                    f"Error no file named {FLAX_WEIGHTS_NAME} or {WEIGHTS_NAME} found in directory "
                    f"{pretrained_model_name_or_path}."
                )
        elif os.path.isfile(pretrained_model_name_or_path):
            archive_file = pretrained_model_name_or_path
            is_local = True
        elif is_remote_url(pretrained_model_name_or_path):
            filename = pretrained_model_name_or_path
            resolved_archive_file = download_url(pretrained_model_name_or_path)
        else:
            if from_pt:
                filename = WEIGHTS_NAME
            else:
                filename = FLAX_WEIGHTS_NAME

            try:
                # Load from URL or cache if already cached
                cached_file_kwargs = {
                    "cache_dir": cache_dir,
                    "force_download": force_download,
                    "proxies": proxies,
                    "resume_download": resume_download,
                    "local_files_only": local_files_only,
                    "token": token,
                    "user_agent": user_agent,
                    "revision": revision,
                    "subfolder": subfolder,
                    "_raise_exceptions_for_gated_repo": False,
                    "_raise_exceptions_for_missing_entries": False,
                    "_commit_hash": commit_hash,
                }
                resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs)

                # Maybe the checkpoint is sharded, we try to grab the index name in this case.
                if resolved_archive_file is None and filename == FLAX_WEIGHTS_NAME:
                    resolved_archive_file = cached_file(
                        pretrained_model_name_or_path, FLAX_WEIGHTS_INDEX_NAME, **cached_file_kwargs
                    )
                    if resolved_archive_file is not None:
                        is_sharded = True

                # Maybe the checkpoint is pytorch sharded, we try to grab the pytorch index name in this case.
                if resolved_archive_file is None and from_pt:
                    resolved_archive_file = cached_file(
                        pretrained_model_name_or_path, WEIGHTS_INDEX_NAME, **cached_file_kwargs
                    )
                    if resolved_archive_file is not None:
                        is_sharded = True

                # If we still haven't found anything, look for a safetensors checkpoint.
                if resolved_archive_file is None and is_safetensors_available():
                    filename = SAFE_WEIGHTS_NAME
                    resolved_archive_file = cached_file(
                        pretrained_model_name_or_path, SAFE_WEIGHTS_NAME, **cached_file_kwargs
                    )

                # Since we set _raise_exceptions_for_missing_entries=False, we don't get an exception but a None
                # result when internet is up, the repo and revision exist, but the file does not.
                if resolved_archive_file is None:
                    # Otherwise, maybe there is a PyTorch model file; we try those to give a helpful error message.
                    has_file_kwargs = {
                        "revision": revision,
                        "proxies": proxies,
                        "token": token,
                        "cache_dir": cache_dir,
                        "local_files_only": local_files_only,
                    }
                    if has_file(pretrained_model_name_or_path, SAFE_WEIGHTS_INDEX_NAME, **has_file_kwargs):
                        is_sharded = True
                        raise NotImplementedError(
                            "Support for sharded checkpoints using safetensors is coming soon!"
                        )
                    elif has_file(pretrained_model_name_or_path, WEIGHTS_NAME, **has_file_kwargs):
                        raise OSError(
                            f"{pretrained_model_name_or_path} does not appear to have a file named"
                            f" {FLAX_WEIGHTS_NAME} but there is a file for PyTorch weights. Use `from_pt=True` to"
                            " load this model from those weights."
                        )
                    elif has_file(pretrained_model_name_or_path, WEIGHTS_INDEX_NAME, **has_file_kwargs):
                        raise OSError(
                            f"{pretrained_model_name_or_path} does not appear to have a file named"
                            f" {FLAX_WEIGHTS_NAME} but there is a sharded file for PyTorch weights. Use"
                            " `from_pt=True` to load this model from those weights."
                        )
                    else:
                        raise OSError(
                            f"{pretrained_model_name_or_path} does not appear to have a file named"
                            f" {FLAX_WEIGHTS_NAME} or {WEIGHTS_NAME}."
                        )
            except OSError:
                # Raise any environment error raised in the previous sections with the same error message.
                raise
            except Exception as e:
                # For any other exception, we throw a generic error.
                raise OSError(
                    f"Can't load the model for '{pretrained_model_name_or_path}'. If you were trying to load it"
                    " from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                    f" same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a"
                    f" directory containing a file named {FLAX_WEIGHTS_NAME} or {WEIGHTS_NAME}."
                ) from e

        if is_local:
            logger.info(f"loading weights file {archive_file}")
            resolved_archive_file = archive_file
            filename = resolved_archive_file.split(os.path.sep)[-1]
        else:
            logger.info(f"loading weights file {filename} from cache at {resolved_archive_file}")

        # We now download and resolve all checkpoint shards if the checkpoint is sharded.
        if is_sharded:
            resolved_archive_file, _ = get_checkpoint_shard_files(
                pretrained_model_name_or_path,
                resolved_archive_file,
                cache_dir=cache_dir,
                force_download=force_download,
                proxies=proxies,
                resume_download=resume_download,
                local_files_only=local_files_only,
                token=token,
                user_agent=user_agent,
                revision=revision,
                subfolder=subfolder,
                _commit_hash=commit_hash,
            )

        safetensors_from_pt = False
        if filename == SAFE_WEIGHTS_NAME:
            with safe_open(resolved_archive_file, framework="flax") as f:
                safetensors_metadata = f.metadata()
            if safetensors_metadata is None or safetensors_metadata.get("format") not in ["pt", "tf", "flax"]:
                raise OSError(
                    f"The safetensors archive passed at {resolved_archive_file} does not contain the valid metadata."
                    " Make sure you save your model with the `save_pretrained` method."
                )
            safetensors_from_pt = safetensors_metadata.get("format") == "pt"

        # init random models
        model = cls(config, *model_args, _do_init=_do_init, **model_kwargs)

        if from_pt or safetensors_from_pt:
            state = load_pytorch_checkpoint_in_flax_state_dict(model, resolved_archive_file, is_sharded)
        else:
            if is_sharded:
                state = cls.load_flax_sharded_weights(resolved_archive_file)
            else:
                state = cls.load_flax_weights(resolved_archive_file)
            # make sure all arrays are stored as jnp.arrays
            if _do_init:
                state = jax.tree_util.tree_map(jnp.array, state)
            else:
                # keep the params on CPU if we don't want to initialize
                state = jax.tree_util.tree_map(
                    lambda x: jax.device_put(x, jax.local_devices(backend="cpu")[0]), state
                )
        if "batch_stats" in state:  # if the flax model contains batch norm layers
            # if model is base model, only use the model_prefix key
            if (
                cls.base_model_prefix not in dict(model.params_shape_tree["params"])
                and cls.base_model_prefix in state["params"]
            ):
                state["params"] = state["params"][cls.base_model_prefix]
                state["batch_stats"] = state["batch_stats"][cls.base_model_prefix]

            # if model is head model and we are loading weights from base model,
            # we initialize new params dict with base_model_prefix
            if (
                cls.base_model_prefix in dict(model.params_shape_tree["params"])
                and cls.base_model_prefix not in state["params"]
            ):
                state = {
                    "params": {cls.base_model_prefix: state["params"]},
                    "batch_stats": {cls.base_model_prefix: state["batch_stats"]},
                }
        else:
            # if model is base model, only use the model_prefix key
            if cls.base_model_prefix not in dict(model.params_shape_tree) and cls.base_model_prefix in state:
                state = state[cls.base_model_prefix]

            # if model is head model and we are loading weights from base model,
            # we initialize new params dict with base_model_prefix
            if cls.base_model_prefix in dict(model.params_shape_tree) and cls.base_model_prefix not in state:
                state = {cls.base_model_prefix: state}

        # flatten dicts
        state = flatten_dict(state)

        random_state = flatten_dict(unfreeze(model.params if _do_init else model.params_shape_tree))

        missing_keys = model.required_params - set(state.keys())
        unexpected_keys = set(state.keys()) - model.required_params

        # Disabling this warning when porting PyTorch weights to Flax: Flax does not use num_batches_tracked
        for unexpected_key in unexpected_keys.copy():
            if "num_batches_tracked" in unexpected_key[-1]:
                unexpected_keys.remove(unexpected_key)

        if missing_keys and not _do_init:
            logger.warning(
                f"The checkpoint {pretrained_model_name_or_path} is missing required keys: {missing_keys}. "
                "Make sure to call model.init_weights to initialize the missing weights."
            )
            model._missing_keys = missing_keys

        # Mismatched keys contains tuples key/shape1/shape2 of weights in the checkpoint that have a shape not
        # matching the weights in the model.
        mismatched_keys = []
        for key in state.keys():
            if key in random_state and state[key].shape != random_state[key].shape:
                if ignore_mismatched_sizes:
                    mismatched_keys.append((key, state[key].shape, random_state[key].shape))
                    state[key] = random_state[key]
                else:
                    raise ValueError(
                        f"Trying to load the pretrained weight for {key} failed: checkpoint has shape "
                        f"{state[key].shape} which is incompatible with the model shape {random_state[key].shape}. "
                        "Use `ignore_mismatched_sizes=True` if you really want to load this checkpoint inside this "
                        "model."
                    )

        # add missing keys as random parameters if we are initializing
        if missing_keys and _do_init:
            for missing_key in missing_keys:
                state[missing_key] = random_state[missing_key]

        # remove unexpected keys to not be saved again
        for unexpected_key in unexpected_keys:
            del state[unexpected_key]

        if len(unexpected_keys) > 0:
            logger.warning(
                f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when"
                f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are"
                f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task or"
                " with another architecture (e.g. initializing a BertForSequenceClassification model from a"
                " BertForPreTraining model).\n- This IS NOT expected if you are initializing"
                f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly identical"
                " (initializing a BertForSequenceClassification model from a BertForSequenceClassification model)."
            )
        else:
            logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n")

        if len(missing_keys) > 0:
            logger.warning(
                f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably"
                " TRAIN this model on a down-stream task to be able to use it for predictions and inference."
            )
        elif len(mismatched_keys) == 0:
            logger.info(
                f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the checkpoint"
                f" was trained on, you can already use {model.__class__.__name__} for predictions without further"
                " training."
            )
        if len(mismatched_keys) > 0:
            mismatched_warning = "\n".join(
                [
                    f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated"
                    for key, shape1, shape2 in mismatched_keys
                ]
            )
            logger.warning(
                f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not"
                f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be"
                " able to use it for predictions and inference."
            )

        # dictionary of key: dtypes for the model params
        param_dtypes = jax.tree_util.tree_map(lambda x: x.dtype, state)
        # extract keys of parameters not in jnp.float32
        fp16_params = [k for k in param_dtypes if param_dtypes[k] == jnp.float16]
        bf16_params = [k for k in param_dtypes if param_dtypes[k] == jnp.bfloat16]

        # raise a warning if any of the parameters are not in jnp.float32
        if len(fp16_params) > 0:
            logger.warning(
                f"Some of the weights of {model.__class__.__name__} were initialized in float16 precision from"
                f" the model checkpoint at {pretrained_model_name_or_path}:\n{fp16_params}\nYou should probably"
                " UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`]"
                " for further information on how to do this."
            )

        if len(bf16_params) > 0:
            logger.warning(
                f"Some of the weights of {model.__class__.__name__} were initialized in bfloat16 precision from"
                f" the model checkpoint at {pretrained_model_name_or_path}:\n{bf16_params}\nYou should probably"
                " UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`]"
                " for further information on how to do this."
            )

        # If it is a model with generation capabilities, attempt to load the generation config
        if model.can_generate():
            try:
                model.generation_config = GenerationConfig.from_pretrained(
                    pretrained_model_name_or_path,
                    cache_dir=cache_dir,
                    force_download=force_download,
                    resume_download=resume_download,
                    proxies=proxies,
                    local_files_only=local_files_only,
                    token=token,
                    revision=revision,
                    subfolder=subfolder,
                    _from_auto=from_auto_class,
                    _from_pipeline=from_pipeline,
                    **kwargs,
                )
            except OSError:
                logger.info(
                    "Generation config file not found, using a generation config created from the model config."
                )

        if _do_init:
            # set correct parameters
            model.params = unflatten_dict(state)
            return model
        else:
            return model, unflatten_dict(state)

    def save_pretrained(
        self,
        save_directory: Union[str, os.PathLike],
        params=None,
        push_to_hub=False,
        max_shard_size="10GB",
        token: Optional[Union[bool, str]] = None,
        safe_serialization: bool = False,
        **kwargs,
    ):
        """
        Save a model and its configuration file to a directory, so that it can be re-loaded using the
        [`~FlaxPreTrainedModel.from_pretrained`] class method.

        Arguments:
            save_directory (`str` or `os.PathLike`):
                Directory to which to save. Will be created if it doesn't exist.
            push_to_hub (`bool`, *optional*, defaults to `False`):
                Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
                repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
                namespace).
            max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`):
                The maximum size for a checkpoint before being sharded. Checkpoint shards will then each be of a size
                lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`).

                If a single weight of the model is bigger than `max_shard_size`, it will be in its own checkpoint
                shard which will be bigger than `max_shard_size`.
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `hf auth login` (stored in `~/.huggingface`).
            kwargs (`dict[str, Any]`, *optional*):
                Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
            safe_serialization (`bool`, *optional*, defaults to `False`):
                Whether to save the model using `safetensors` or through msgpack.
        """
        use_auth_token = kwargs.pop("use_auth_token", None)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use"
                " `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if token is not None:
            kwargs["token"] = token

        if os.path.isfile(save_directory):
            logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
            return

        os.makedirs(save_directory, exist_ok=True)

        if push_to_hub:
            commit_message = kwargs.pop("commit_message", None)
            repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
            repo_id = self._create_repo(repo_id, **kwargs)
            files_timestamps = self._get_files_timestamps(save_directory)

        # get abs dir
        save_directory = os.path.abspath(save_directory)
        # save config as well
        self.config.architectures = [self.__class__.__name__[4:]]

        # If we have a custom model, we copy the file defining it in the folder and set the attributes so it can be
        # loaded from the Hub.
        if self._auto_class is not None:
            custom_object_save(self, save_directory, config=self.config)

        self.config.save_pretrained(save_directory)
        if self.can_generate():
            self.generation_config.save_pretrained(save_directory)

        # save model
        weights_name = SAFE_WEIGHTS_NAME if safe_serialization else FLAX_WEIGHTS_NAME
        output_model_file = os.path.join(save_directory, weights_name)

        shards, index = flax_shard_checkpoint(params if params is not None else self.params, max_shard_size)
        # Clean the folder from a previous save
        for filename in os.listdir(save_directory):
            full_filename = os.path.join(save_directory, filename)
            weights_no_suffix = weights_name.replace(".bin", "").replace(".safetensors", "")
            if (
                filename.startswith(weights_no_suffix)
                and os.path.isfile(full_filename)
                and filename not in shards.keys()
            ):
                os.remove(full_filename)

        if index is None:
            if safe_serialization:
                params = params if params is not None else self.params
                flat_dict = flatten_dict(params, sep=".")
                safe_save_file(flat_dict, output_model_file, metadata={"format": "flax"})
            else:
                with open(output_model_file, "wb") as f:
                    params = params if params is not None else self.params
                    model_bytes = to_bytes(params)
                    f.write(model_bytes)
        else:
            save_index_file = os.path.join(save_directory, FLAX_WEIGHTS_INDEX_NAME)
            # Save the index as well
            with open(save_index_file, "w", encoding="utf-8") as f:
                content = json.dumps(index, indent=2, sort_keys=True) + "\n"
                f.write(content)
            logger.info(
                f"The model is bigger than the maximum size per checkpoint ({max_shard_size}) and is going to be"
                f" split in {len(shards)} checkpoint shards. You can find where each parameter has been saved in the"
                f" index located at {save_index_file}."
            )
            for shard_file, shard in shards.items():
                # the shard items are unflattened, because the top-level key is the pt/tf/flax attribute
                with open(os.path.join(save_directory, shard_file), mode="wb") as f:
                    params = unflatten_dict(shard, sep="/")
                    shard_bytes = to_bytes(params)
                    f.write(shard_bytes)

        logger.info(f"Model weights saved in {output_model_file}")

        if push_to_hub:
            self._upload_modified_files(
                save_directory,
                repo_id,
                files_timestamps,
                commit_message=commit_message,
                token=token,
            )
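    # Hedged round-trip sketch (not part of the library source); the model class, repo id and
    # local directory below are illustrative:
    #
    #     model = FlaxBertModel.from_pretrained("google-bert/bert-base-cased")
    #     model.save_pretrained("./local-bert", max_shard_size="500MB")  # msgpack shards + index if needed
    #     reloaded = FlaxBertModel.from_pretrained("./local-bert")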
    @classmethod
    def register_for_auto_class(cls, auto_class="FlaxAutoModel"):
        """
        Register this class with a given auto class. This should only be used for custom models as the ones in the
        library are already mapped with an auto class.

        Args:
            auto_class (`str` or `type`, *optional*, defaults to `"FlaxAutoModel"`):
                The auto class to register this new model with.
        """
        if not isinstance(auto_class, str):
            auto_class = auto_class.__name__

        import transformers.models.auto as auto_module

        if not hasattr(auto_module, auto_class):
            raise ValueError(f"{auto_class} is not a valid auto class.")

        cls._auto_class = auto_class


# To update the docstring, we need to copy the method, otherwise we change the original docstring.
FlaxPreTrainedModel.push_to_hub = copy_func(FlaxPreTrainedModel.push_to_hub)
if FlaxPreTrainedModel.push_to_hub.__doc__ is not None:
    FlaxPreTrainedModel.push_to_hub.__doc__ = FlaxPreTrainedModel.push_to_hub.__doc__.format(
        object="model", object_class="FlaxAutoModel", object_files="model checkpoint"
    )


def overwrite_call_docstring(model_class, docstring):
    # copy __call__ function to be sure docstring is changed only for this function
    model_class.__call__ = copy_func(model_class.__call__)
    # delete existing docstring
    model_class.__call__.__doc__ = None
    # set correct docstring
    model_class.__call__ = add_start_docstrings_to_model_forward(docstring)(model_class.__call__)


def append_call_sample_docstring(
    model_class, checkpoint, output_type, config_class, mask=None, revision=None, real_checkpoint=None
):
    model_class.__call__ = copy_func(model_class.__call__)
    model_class.__call__ = add_code_sample_docstrings(
        checkpoint=checkpoint,
        output_type=output_type,
        config_class=config_class,
        model_cls=model_class.__name__,
        revision=revision,
        real_checkpoint=real_checkpoint,
    )(model_class.__call__)


def append_replace_return_docstrings(model_class, output_type, config_class):
    model_class.__call__ = copy_func(model_class.__call__)
    model_class.__call__ = replace_return_docstrings(output_type=output_type, config_class=config_class)(
        model_class.__call__
    )
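# Hedged sketch (not part of the original module) of how the three helpers above are typically
# used from a `modeling_flax_*.py` file; the docstring constant and output classes are placeholders:
#
#     overwrite_call_docstring(
#         FlaxBertForSequenceClassification, BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")
#     )
#     append_call_sample_docstring(
#         FlaxBertModel, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutputWithPooling, _CONFIG_FOR_DOC
#     )
#     append_replace_return_docstrings(
#         FlaxBertForPreTraining, output_type=FlaxBertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC
#     )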