import logging
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional, Union

from .generation.configuration_utils import GenerationConfig
from .training_args import TrainingArguments
from .utils import add_start_docstrings


logger = logging.getLogger(__name__)


@dataclass
@add_start_docstrings(TrainingArguments.__doc__)
class Seq2SeqTrainingArguments(TrainingArguments):
    """
    Args:
        predict_with_generate (`bool`, *optional*, defaults to `False`):
            Whether to use generate to calculate generative metrics (ROUGE, BLEU).
        generation_max_length (`int`, *optional*):
            The `max_length` to use on each evaluation loop when `predict_with_generate=True`. Will default to the
            `max_length` value of the model configuration.
        generation_num_beams (`int`, *optional*):
            The `num_beams` to use on each evaluation loop when `predict_with_generate=True`. Will default to the
            `num_beams` value of the model configuration.
        generation_config (`str` or `Path` or [`~generation.GenerationConfig`], *optional*):
            Allows to load a [`~generation.GenerationConfig`] from the `from_pretrained` method. This can be either:

            - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
              huggingface.co.
            - a path to a *directory* containing a configuration file saved using the
              [`~GenerationConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
            - a [`~generation.GenerationConfig`] object.
    """

    sortish_sampler: bool = field(default=False, metadata={"help": "Whether to use SortishSampler or not."})
    predict_with_generate: bool = field(
        default=False,
        metadata={"help": "Whether to use generate to calculate generative metrics (ROUGE, BLEU)."},
    )
    generation_max_length: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "The `max_length` to use on each evaluation loop when `predict_with_generate=True`. Will default"
                " to the `max_length` value of the model configuration."
            )
        },
    )
    generation_num_beams: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "The `num_beams` to use on each evaluation loop when `predict_with_generate=True`. Will default"
                " to the `num_beams` value of the model configuration."
            )
        },
    )
    generation_config: Optional[Union[str, Path, GenerationConfig]] = field(
        default=None,
        metadata={
            "help": "Model id, file path or url pointing to a GenerationConfig json file, to use during prediction."
        },
    )

    def to_dict(self):
        """
        Serializes this instance while replacing `Enum` members by their values and `GenerationConfig` by
        dictionaries (for JSON serialization support). Token values are obfuscated by removing their value.
        """
        d = super().to_dict()
        for k, v in d.items():
            if isinstance(v, GenerationConfig):
                d[k] = v.to_dict()
        return d
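# A minimal, self-contained sketch of the `to_dict` pattern used above. The
# stand-in classes (`DemoGenerationConfig`, `DemoTrainingArguments`,
# `DemoSeq2SeqArguments`) are hypothetical names for illustration, not
# transformers APIs: the subclass serializes via the parent, then replaces any
# nested config object with its dictionary form so the result stays
# JSON-serializable.

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class DemoGenerationConfig:
    max_length: int = 20
    num_beams: int = 1

    def to_dict(self):
        return asdict(self)


@dataclass
class DemoTrainingArguments:
    output_dir: str = "out"

    def to_dict(self):
        # Shallow copy: nested objects are kept as-is at this level.
        return dict(self.__dict__)


@dataclass
class DemoSeq2SeqArguments(DemoTrainingArguments):
    predict_with_generate: bool = False
    generation_config: Optional[DemoGenerationConfig] = None

    def to_dict(self):
        # Same shape as Seq2SeqTrainingArguments.to_dict: delegate to the
        # parent, then convert config objects to plain dicts.
        d = super().to_dict()
        for k, v in d.items():
            if isinstance(v, DemoGenerationConfig):
                d[k] = v.to_dict()
        return d


args = DemoSeq2SeqArguments(generation_config=DemoGenerationConfig(num_beams=4))
print(args.to_dict())
```

# Note the design choice mirrored here: the parent's `to_dict` is deliberately
# shallow, so only the types the subclass knows about (the generation config)
# are expanded, leaving every other field untouched.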