"""Processing saving/loading class for common processors."""

import bisect
import copy
import inspect
import json
import os
import sys
import typing
import warnings
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Optional, TypedDict, TypeVar, Union

import numpy as np
import typing_extensions
from huggingface_hub.errors import EntryNotFoundError

from .audio_utils import AudioInput, load_audio
from .dynamic_module_utils import custom_object_save
from .feature_extraction_utils import BatchFeature
from .image_utils import ChannelDimension, ImageInput, is_vision_available
from .utils.chat_template_utils import render_jinja_template
from .video_utils import VideoInput, VideoMetadata

if is_vision_available():
    from .image_utils import PILImageResampling

from .tokenization_utils_base import (
    PaddingStrategy,
    PreTokenizedInput,
    PreTrainedTokenizerBase,
    TextInput,
    TruncationStrategy,
)
from .utils import (
    AUDIO_TOKENIZER_NAME,
    CHAT_TEMPLATE_DIR,
    CHAT_TEMPLATE_FILE,
    LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE,
    PROCESSOR_NAME,
    PushToHubMixin,
    TensorType,
    cached_file,
    copy_func,
    direct_transformers_import,
    download_url,
    is_offline_mode,
    is_remote_url,
    is_torch_available,
    list_repo_templates,
    logging,
)
from .utils.deprecation import deprecate_kwarg

if is_torch_available():
    from .modeling_utils import PreTrainedAudioTokenizerBase

logger = logging.get_logger(__name__)

SpecificProcessorType = TypeVar("SpecificProcessorType", bound="ProcessorMixin")

# Dynamically import the Transformers module to grab the attribute classes of the processor from their names.
transformers_module = direct_transformers_import(Path(__file__).parent)

AUTO_TO_BASE_CLASS_MAPPING = {
    "AutoTokenizer": "PreTrainedTokenizerBase",
    "AutoFeatureExtractor": "FeatureExtractionMixin",
    "AutoImageProcessor": "ImageProcessingMixin",
    "AutoVideoProcessor": "BaseVideoProcessor",
}

if sys.version_info >= (3, 11):
    Unpack = typing.Unpack
else:
    Unpack = typing_extensions.Unpack


class TextKwargs(TypedDict, total=False):
    """
    Keyword arguments for text processing. For extended documentation, check out tokenization_utils_base methods and
    docstrings associated.

    Attributes:
        add_special_tokens (`bool`, *optional*):
            Whether or not to add special tokens when encoding the sequences.
        padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
            Activates and controls padding.
        truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*):
            Activates and controls truncation.
        max_length (`int`, *optional*):
            Controls the maximum length to use by one of the truncation/padding parameters.
        stride (`int`, *optional*):
            If set, the overflowing tokens will contain some tokens from the end of the truncated sequence.
        is_split_into_words (`bool`, *optional*):
            Whether or not the input is already pre-tokenized.
        pad_to_multiple_of (`int`, *optional*):
            If set, will pad the sequence to a multiple of the provided value.
        return_token_type_ids (`bool`, *optional*):
            Whether to return token type IDs.
        return_attention_mask (`bool`, *optional*):
            Whether to return the attention mask.
        return_overflowing_tokens (`bool`, *optional*):
            Whether or not to return overflowing token sequences.
        return_special_tokens_mask (`bool`, *optional*):
            Whether or not to return special tokens mask information.
        return_offsets_mapping (`bool`, *optional*):
            Whether or not to return `(char_start, char_end)` for each token.
        return_length (`bool`, *optional*):
            Whether or not to return the lengths of the encoded inputs.
        verbose (`bool`, *optional*):
            Whether or not to print more information and warnings.
        padding_side (`str`, *optional*):
            The side on which padding will be applied.
        return_mm_token_type_ids (`bool`, *optional*):
            Whether to return multimodal token type ids indicating mm placeholder token positions.
    """

    text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]
    text_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]
    text_pair_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]
    add_special_tokens: Optional[bool]
    padding: Optional[Union[bool, str, PaddingStrategy]]
    truncation: Optional[Union[bool, str, TruncationStrategy]]
    max_length: Optional[int]
    stride: Optional[int]
    is_split_into_words: Optional[bool]
    pad_to_multiple_of: Optional[int]
    return_token_type_ids: Optional[bool]
    return_attention_mask: Optional[bool]
    return_overflowing_tokens: Optional[bool]
    return_special_tokens_mask: Optional[bool]
    return_offsets_mapping: Optional[bool]
    return_length: Optional[bool]
    verbose: Optional[bool]
    padding_side: Optional[str]
    return_mm_token_type_ids: Optional[bool]
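# Illustrative usage sketch (not part of the library API): since these kwargs classes are TypedDicts,
# a plain dict with matching keys can be unpacked straight into a processor call. The checkpoint name
# below is hypothetical.
#
#   text_kwargs: TextKwargs = {"padding": "max_length", "max_length": 32, "truncation": True}
#   processor = AutoProcessor.from_pretrained("org/some-multimodal-checkpoint")
#   batch = processor(text=["hello world"], **text_kwargs)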
class ImagesKwargs(TypedDict, total=False):
    """
    Keyword arguments for image processing. For extended documentation, check the appropriate ImageProcessor class
    methods and docstrings.

    Attributes:
        do_resize (`bool`, *optional*):
            Whether to resize the image.
        size (`dict[str, int]`, *optional*):
            Resize the shorter side of the input to `size["shortest_edge"]`.
        crop_size (`dict[str, int]`, *optional*):
            Desired output size when applying center-cropping.
        resample (`PILImageResampling`, *optional*):
            Resampling filter to use if resizing the image.
        do_rescale (`bool`, *optional*):
            Whether to rescale the image by the specified scale `rescale_factor`.
        rescale_factor (`int` or `float`, *optional*):
            Scale factor to use if rescaling the image.
        do_normalize (`bool`, *optional*):
            Whether to normalize the image.
        image_mean (`float` or `list[float]`, *optional*):
            Mean to use if normalizing the image.
        image_std (`float` or `list[float]`, *optional*):
            Standard deviation to use if normalizing the image.
        do_pad (`bool`, *optional*):
            Whether to pad the image to the `(max_height, max_width)` of the images in the batch.
        pad_size (`dict[str, int]`, *optional*):
            The size `{"height": int, "width": int}` to pad the images to.
        do_center_crop (`bool`, *optional*):
            Whether to center crop the image.
        data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the output image.
        input_data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the input image.
        device (`str`, *optional*):
            The device to use for processing (e.g. "cpu", "cuda"), only relevant for fast image processing.
    """

    do_resize: Optional[bool]
    size: Optional[dict[str, int]]
    crop_size: Optional[dict[str, int]]
    resample: Optional[Union["PILImageResampling", int]]
    do_rescale: Optional[bool]
    rescale_factor: Optional[float]
    do_normalize: Optional[bool]
    image_mean: Optional[Union[float, list[float]]]
    image_std: Optional[Union[float, list[float]]]
    do_pad: Optional[bool]
    pad_size: Optional[dict[str, int]]
    do_center_crop: Optional[bool]
    data_format: Optional[Union[str, ChannelDimension]]
    input_data_format: Optional[Union[str, ChannelDimension]]
    device: Optional[str]
class VideosKwargs(TypedDict, total=False):
    """
    Keyword arguments for video processing.

    Attributes:
        do_convert_rgb (`bool`):
            Whether to convert the video to RGB format.
        do_resize (`bool`):
            Whether to resize the video.
        size (`dict[str, int]`, *optional*):
            Resize the shorter side of the input to `size["shortest_edge"]`.
        default_to_square (`bool`, *optional*, defaults to `self.default_to_square`):
            Whether to default to a square when resizing, if size is an int.
        resample (`PILImageResampling`, *optional*):
            Resampling filter to use if resizing the video.
        do_rescale (`bool`, *optional*):
            Whether to rescale the video by the specified scale `rescale_factor`.
        rescale_factor (`int` or `float`, *optional*):
            Scale factor to use if rescaling the video.
        do_normalize (`bool`, *optional*):
            Whether to normalize the video.
        image_mean (`float` or `list[float]`, *optional*):
            Mean to use if normalizing the video.
        image_std (`float` or `list[float]`, *optional*):
            Standard deviation to use if normalizing the video.
        do_center_crop (`bool`, *optional*):
            Whether to center crop the video.
        do_sample_frames (`bool`, *optional*):
            Whether to sample frames from the video before processing or to process the whole video.
        video_metadata (`Union[VideoMetadata, dict]`, *optional*):
            Metadata of the video containing information about total duration, fps and total number of frames.
        num_frames (`int`, *optional*):
            Maximum number of frames to sample when `do_sample_frames=True`.
        fps (`int` or `float`, *optional*):
            Target frames to sample per second when `do_sample_frames=True`.
        crop_size (`dict[str, int]`, *optional*):
            Desired output size when applying center-cropping.
        data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the output video.
        input_data_format (`ChannelDimension` or `str`, *optional*):
            The channel dimension format for the input video.
        return_metadata (`bool`, *optional*):
            Whether to return video metadata or not.
    """

    do_convert_rgb: Optional[bool]
    do_resize: Optional[bool]
    size: Optional[dict[str, int]]
    default_to_square: Optional[bool]
    resample: Optional["PILImageResampling"]
    do_rescale: Optional[bool]
    rescale_factor: Optional[float]
    do_normalize: Optional[bool]
    image_mean: Optional[Union[float, list[float]]]
    image_std: Optional[Union[float, list[float]]]
    do_center_crop: Optional[bool]
    crop_size: Optional[dict[str, int]]
    data_format: Optional[Union[str, ChannelDimension]]
    input_data_format: Optional[Union[str, ChannelDimension]]
    device: Optional[str]
    do_sample_frames: Optional[bool]
    video_metadata: Optional[Union[VideoMetadata, dict]]
    fps: Optional[Union[int, float]]
    num_frames: Optional[int]
    return_metadata: Optional[bool]


class AudioKwargs(TypedDict, total=False):
    """
    Keyword arguments for audio processing.

    Attributes:
        sampling_rate (`int`, *optional*):
            The sampling rate at which the `raw_speech` input was sampled.
        raw_speech (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`):
            The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
            values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
            stereo, i.e. single float per timestep.
        padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
            Select a strategy to pad the returned sequences (according to the model's padding side and padding
            index) among:

            - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
              sequence is provided).
            - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
              acceptable input length for the model if that argument is not provided.
            - `False` or `'do_not_pad'`
        max_length (`int`, *optional*):
            Maximum length of the returned list and optionally padding length (see above).
        truncation (`bool`, *optional*):
            Activates truncation to cut input sequences longer than *max_length* to *max_length*.
        pad_to_multiple_of (`int`, *optional*):
            If set, will pad the sequence to a multiple of the provided value.
        return_attention_mask (`bool`, *optional*):
            Whether or not [`~ASTFeatureExtractor.__call__`] should return `attention_mask`.
    """

    sampling_rate: Optional[int]
    raw_speech: Optional[Union["np.ndarray", list[float], list["np.ndarray"], list[list[float]]]]
    padding: Optional[Union[bool, str, PaddingStrategy]]
    max_length: Optional[int]
    truncation: Optional[bool]
    pad_to_multiple_of: Optional[int]
    return_attention_mask: Optional[bool]


class CommonKwargs(TypedDict, total=False):
    return_tensors: Optional[Union[str, TensorType]]
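# Illustrative usage sketch (not part of the library API): each kwargs group above can also be passed
# as a nested dict named after the modality, which `ProcessorMixin._merge_kwargs` dispatches to the
# matching sub-processor. The checkpoint name is hypothetical.
#
#   processor = AutoProcessor.from_pretrained("org/some-multimodal-checkpoint")
#   batch = processor(
#       images=image,
#       text="Describe this image.",
#       images_kwargs={"size": {"height": 224, "width": 224}},
#       text_kwargs={"padding": "max_length"},
#       common_kwargs={"return_tensors": "pt"},
#   )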
class ProcessingKwargs(TypedDict, total=False):
    """
    Base class for kwargs passing to processors.
    In case a model has specific kwargs that are not present in the base class or default values for existing keys,
    it should have its own `ModelProcessorKwargs` class that inherits from `ProcessingKwargs` to provide:
        1) Additional typed keys that this model requires to process inputs.
        2) Default values for existing keys under a `_defaults` attribute.
    New keys have to be defined as follows to ensure type hinting is done correctly.

    ```python
    # adding a new image kwarg for this model
    class ModelImagesKwargs(ImagesKwargs, total=False):
        new_image_kwarg: Optional[bool]


    class ModelProcessorKwargs(ProcessingKwargs, total=False):
        images_kwargs: ModelImagesKwargs
        _defaults = {
            "images_kwargs": {
                "new_image_kwarg": False,
            },
            "text_kwargs": {
                "padding": "max_length",
            },
        }
    ```

    For Python 3.8 compatibility, when inheriting from this class and overriding one of the kwargs, you need to
    manually update the __annotations__ dictionary. This can be done as follows:

    ```python
    class CustomProcessorKwargs(ProcessingKwargs, total=False):
        images_kwargs: CustomImagesKwargs


    CustomProcessorKwargs.__annotations__["images_kwargs"] = CustomImagesKwargs  # python 3.8 compatibility
    ```
    """

    _defaults = {}

    common_kwargs: CommonKwargs = {
        **CommonKwargs.__annotations__,
    }
    text_kwargs: TextKwargs = {
        **TextKwargs.__annotations__,
    }
    images_kwargs: ImagesKwargs = {
        **ImagesKwargs.__annotations__,
    }
    videos_kwargs: VideosKwargs = {
        **VideosKwargs.__annotations__,
    }
    audio_kwargs: AudioKwargs = {
        **AudioKwargs.__annotations__,
    }


class TokenizerChatTemplateKwargs(TypedDict, total=False):
    """
    Keyword arguments for tokenizer's `apply_chat_template`, when it is called from within a processor.

    tools (`list[Dict]`, *optional*):
        A list of tools (callable functions) that will be accessible to the model. If the template does not support
        function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the
        name, description and argument types for the tool. See our [chat templating
        guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use)
        for more information.
    documents (`list[dict[str, str]]`, *optional*):
        A list of dicts representing documents that will be accessible to the model if it is performing RAG
        (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect.
        We recommend that each document should be a dict containing "title" and "text" keys. Please see the RAG
        section of the [chat templating
        guide](https://huggingface.co/docs/transformers/main/en/chat_templating#arguments-for-RAG) for examples of
        passing documents with chat templates.
    add_generation_prompt (bool, *optional*):
        If this is set, a prompt with the token(s) that indicate the start of an assistant message will be appended
        to the formatted output. This is useful when you want to generate a response from the model. Note that this
        argument will be passed to the chat template, and so it must be supported in the template for this argument
        to have any effect.
    continue_final_message (bool, *optional*):
        If this is set, the chat will be formatted so that the final message in the chat is open-ended, without any
        EOS tokens. The model will continue this message rather than starting a new one. This allows you to
        "prefill" part of the model's response for it. Cannot be used at the same time as
        `add_generation_prompt`.
    return_assistant_tokens_mask (`bool`, defaults to `False`):
        Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant, the mask
        will contain 1. For user and system tokens, the mask will contain 0. This functionality is only available
        for chat templates that support it via the `{% generation %}` keyword.
    """

    tools: Optional[list[dict]] = None
    documents: Optional[list[dict[str, str]]] = None
    add_generation_prompt: Optional[bool] = False
    continue_final_message: Optional[bool] = False
    return_assistant_tokens_mask: Optional[bool] = False
class ChatTemplateLoadKwargs(TypedDict, total=False):
    """
    Keyword arguments used to load multimodal data in processor chat templates.

    sampling_rate (`int`, *optional*, defaults to 16000):
        The sampling rate at which audio inputs are loaded.
    load_audio_from_video (`bool`, *optional*):
        Whether to use the audio track of input video. If `True` the audio track will be loaded and passed to the
        processor. This flag has no effect if the model doesn't support audio modality.
    """

    sampling_rate: Optional[int] = 16_000
    load_audio_from_video: Optional[bool] = False


class ProcessorChatTemplateKwargs(ChatTemplateLoadKwargs, TokenizerChatTemplateKwargs, total=False):
    """
    Keyword arguments for processor's `apply_chat_template`.

    tokenize (`bool`, *optional*, defaults to `False`):
        Whether to tokenize the output or not.
    return_dict (`bool`, defaults to `False`):
        Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
    """

    tokenize: Optional[bool] = False
    return_dict: Optional[bool] = False


class AllKwargsForChatTemplate(TypedDict):
    processor_kwargs: ProcessingKwargs
    mm_load_kwargs: ChatTemplateLoadKwargs
    template_kwargs: ProcessorChatTemplateKwargs


@dataclass
class MultiModalData:
    """
    Dataclass that holds extra useful data for processing multimodal data. Processors currently cannot return keys,
    unless it is used in model's forward. Thus we have helper methods that calculate and return useful data from
    processing input multimodals (images/videos).

    Note that this dataclass is aimed to be used only in vLLM and we might change its API in the future.
    """

    num_image_tokens: Optional[list[int]] = None
    num_video_tokens: Optional[list[int]] = None
    num_audio_tokens: Optional[list[int]] = None
    num_image_patches: Optional[list[int]] = None

    def __contains__(self, key):
        return hasattr(self, key) and getattr(self, key) is not None

    def __getitem__(self, key):
        if hasattr(self, key):
            return getattr(self, key)
        raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
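# Illustrative sketch (not part of the library API): the dunder helpers above make the dataclass
# behave like a read-only mapping over the fields that were actually populated.
#
#   data = MultiModalData(num_image_tokens=[576], num_image_patches=[1])
#   "num_image_tokens" in data   # True: the field is set and not None (`__contains__`)
#   "num_video_tokens" in data   # False: the field exists but is None
#   data["num_image_tokens"]     # [576] (`__getitem__`)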
class ProcessorMixin(PushToHubMixin):
    """
    This is a mixin used to provide saving/loading functionality for all processor classes.
    """

    attributes = ["feature_extractor", "tokenizer"]
    optional_attributes = ["chat_template", "audio_tokenizer"]
    optional_call_args: list[str] = []
    # Names need to be attr_class for attr in attributes
    feature_extractor_class = None
    tokenizer_class = None
    _auto_class = None
    valid_processor_kwargs = ProcessingKwargs

    def __init__(self, *args, **kwargs):
        # First, extract optional attributes from kwargs if present. Optional attributes can never be positional.
        for optional_attribute in self.optional_attributes:
            optional_attribute_value = kwargs.pop(optional_attribute, None)
            setattr(self, optional_attribute, optional_attribute_value)
            if optional_attribute == "audio_tokenizer" and optional_attribute_value is not None:
                proper_class = self.check_argument_for_proper_class(optional_attribute, optional_attribute_value)
                if not (is_torch_available() and isinstance(optional_attribute_value, PreTrainedAudioTokenizerBase)):
                    raise ValueError(
                        f"Tried to use `{proper_class}` for audio tokenization. However, this class is not"
                        " registered for audio tokenization."
                    )

        # Sanitize args and kwargs
        for key in kwargs:
            if key not in self.attributes:
                raise TypeError(f"Unexpected keyword argument {key}.")
        for arg, attribute_name in zip(args, self.attributes):
            if attribute_name in kwargs:
                raise TypeError(f"Got multiple values for argument {attribute_name}.")
            kwargs[attribute_name] = arg

        if len(kwargs) != len(self.attributes):
            raise ValueError(
                f"This processor requires {len(self.attributes)} arguments: {', '.join(self.attributes)}. Got "
                f"{len(args)} arguments instead."
            )

        # Check each arg is of the proper class (this will also catch a user initializing in the wrong order)
        for attribute_name, arg in kwargs.items():
            self.check_argument_for_proper_class(attribute_name, arg)
            setattr(self, attribute_name, arg)

    def __call__(
        self,
        images: Optional[ImageInput] = None,
        text: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]] = None,
        videos: Optional[VideoInput] = None,
        audio: Optional[AudioInput] = None,
        **kwargs: Unpack[ProcessingKwargs],
    ) -> BatchFeature:
        """
        Main method to prepare for model inputs. This method forwards each modality argument to its own processor
        along with `kwargs`. Please refer to the docstring of the corresponding processor attributes for more
        information.

        Args:
            images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `list[PIL.Image.Image]`, `list[np.ndarray]`, `list[torch.Tensor]`):
                The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
                tensor. Both channels-first and channels-last formats are supported.
            text (`TextInput`, `PreTokenizedInput`, `list[TextInput]`, `list[PreTokenizedInput]`, *optional*):
                The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
                (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
                `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
            videos (`np.ndarray`, `torch.Tensor`, `list[np.ndarray]`, `list[torch.Tensor]`):
                The video or batch of videos to be prepared. Each video can be a 4D NumPy array or PyTorch tensor, or
                a nested list of 3D frames. Both channels-first and channels-last formats are supported.
            audio (`np.ndarray`, `torch.Tensor`, `list[np.ndarray]`, `list[torch.Tensor]`):
                The audio or batch of audio to be prepared. Each audio can be a NumPy array or PyTorch tensor.
            return_tensors (`str` or [`~utils.TensorType`], *optional*):
                If set, will return tensors of a particular framework. Acceptable values are:

                - `'tf'`: Return TensorFlow `tf.constant` objects.
                - `'pt'`: Return PyTorch `torch.Tensor` objects.
                - `'np'`: Return NumPy `np.ndarray` objects.
                - `'jax'`: Return JAX `jnp.ndarray` objects.

        Returns:
            [`BatchFeature`]: A [`BatchFeature`] object with processed inputs in a dict format.
        """
        if images is None and text is None and videos is None and audio is None:
            raise ValueError(f"You need to provide at least one input to call {self.__class__.__name__}")

        kwargs = self._merge_kwargs(
            self.valid_processor_kwargs,
            tokenizer_init_kwargs=self.tokenizer.init_kwargs if hasattr(self, "tokenizer") else {},
            **kwargs,
        )

        attribute_to_kwargs = {
            "tokenizer": (text, "text_kwargs"),
            "image_processor": (images, "images_kwargs"),
            "video_processor": (videos, "videos_kwargs"),
            "feature_extractor": (audio, "audio_kwargs"),
        }

        outputs = {}
        for attribute_name in self.attributes:
            attribute = getattr(self, attribute_name, None)
            input_data, input_kwargs = attribute_to_kwargs[attribute_name]
            if attribute is not None and input_data is not None:
                attribute_output = attribute(input_data, **kwargs[input_kwargs])
                outputs.update(attribute_output)

        return BatchFeature(outputs)

    def check_argument_for_proper_class(self, argument_name, argument):
        """
        Checks the passed argument's class against the expected transformers class. In case of an unexpected mismatch
        between expected and actual class, an error is raised. Otherwise, the proper retrieved class is returned.
        """
        class_name = getattr(self, f"{argument_name}_class")
        # Nothing is ever going to be an instance of "AutoXxx", in that case we check the base class.
        class_name = AUTO_TO_BASE_CLASS_MAPPING.get(class_name, class_name)
        if isinstance(class_name, tuple):
            proper_class = tuple(self.get_possibly_dynamic_module(n) for n in class_name if n is not None)
        else:
            proper_class = self.get_possibly_dynamic_module(class_name)

        if not isinstance(argument, proper_class):
            raise TypeError(
                f"Received a {type(argument).__name__} for argument {argument_name}, but a {class_name} was expected."
            )
        return proper_class
    def to_dict(self, legacy_serialization=True) -> dict[str, Any]:
        """
        Serializes this instance to a Python dictionary.

        Returns:
            `dict[str, Any]`: Dictionary of all the attributes that make up this processor instance.
        """
        output = copy.deepcopy(self.__dict__)
        sig = inspect.signature(self.__init__)
        # Only save attributes that are expected by the init signature, plus `auto_map`
        attrs_to_save = list(sig.parameters)
        attrs_to_save += ["auto_map"]
        if legacy_serialization:
            # In legacy mode, the attribute processors are saved in their own files, not in the processor config
            attrs_to_save = [x for x in attrs_to_save if x not in self.__class__.attributes]
        # Tokenizers are never serialized as part of the processor config
        if "tokenizer" in output:
            del output["tokenizer"]
        if "qformer_tokenizer" in output:
            del output["qformer_tokenizer"]
        if "protein_tokenizer" in output:
            del output["protein_tokenizer"]
        if "chat_template" in output:
            del output["chat_template"]

        def cast_array_to_list(dictionary):
            """
            Numpy arrays are not serializable but can be in pre-processing dicts.
            This function casts arrays to list, recursing through the nested configs as well.
            """
            for key, value in dictionary.items():
                if isinstance(value, np.ndarray):
                    dictionary[key] = value.tolist()
                elif isinstance(value, dict):
                    dictionary[key] = cast_array_to_list(value)
            return dictionary

        output = {
            k: (v.to_dict() if isinstance(v, PushToHubMixin) and hasattr(v, "to_dict") else v)
            for k, v in output.items()
            if (
                k in attrs_to_save
                and v.__class__.__name__ != "BeamSearchDecoderCTC"
                and (not legacy_serialization or not isinstance(v, PushToHubMixin))
            )
        }
        output = cast_array_to_list(output)

        if not legacy_serialization and "audio_tokenizer" in output:
            # The audio tokenizer is a full model that cannot be serialized inline: save a pointer to its checkpoint
            audio_tokenizer_dict = {
                "audio_tokenizer_class": self.audio_tokenizer.__class__.__name__,
                "audio_tokenizer_name_or_path": self.audio_tokenizer.name_or_path,
            }
            output["audio_tokenizer"] = audio_tokenizer_dict
        output["processor_class"] = self.__class__.__name__

        return output

    def to_json_string(self, legacy_serialization=True) -> str:
        """
        Serializes this instance to a JSON string.

        Returns:
            `str`: String containing all the attributes that make up this processor instance in JSON format.
        """
        dictionary = self.to_dict(legacy_serialization=legacy_serialization)
        return json.dumps(dictionary, indent=2, sort_keys=True) + "\n"

    def to_json_file(self, json_file_path: Union[str, os.PathLike], legacy_serialization=True):
        """
        Save this instance to a JSON file.

        Args:
            json_file_path (`str` or `os.PathLike`):
                Path to the JSON file in which this processor instance's parameters will be saved.
        """
        with open(json_file_path, "w", encoding="utf-8") as writer:
            writer.write(self.to_json_string(legacy_serialization=legacy_serialization))

    def __repr__(self):
        attributes_repr = [f"- {name}: {repr(getattr(self, name))}" for name in self.attributes]
        attributes_repr = "\n".join(attributes_repr)
        return f"{self.__class__.__name__}:\n{attributes_repr}\n\n{self.to_json_string()}"
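    # Illustrative sketch (not part of the library API): `to_dict`/`to_json_string` produce plain
    # JSON, so a processor configuration can be inspected without re-instantiating its attributes.
    #
    #   config = json.loads(processor.to_json_string())
    #   config["processor_class"]   # e.g. "CLIPProcessor" (the concrete class name varies)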
    def save_pretrained(self, save_directory, push_to_hub: bool = False, legacy_serialization: bool = True, **kwargs):
        """
        Saves the attributes of this processor (feature extractor, tokenizer...) in the specified directory so that it
        can be reloaded using the [`~ProcessorMixin.from_pretrained`] method.

        This class method is simply calling [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] and
        [`~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`]. Please refer to the docstrings of the
        methods above for more information.

        Args:
            save_directory (`str` or `os.PathLike`):
                Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
                be created if it does not exist).
            push_to_hub (`bool`, *optional*, defaults to `False`):
                Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
                repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
                namespace).
            legacy_serialization (`bool`, *optional*, defaults to `True`):
                Whether or not to save processor attributes in separate config files (legacy) or in processor's config
                file as a nested dict. Saving all attributes in a single dict will become the default in future
                versions. Set to `legacy_serialization=True` until then.
            kwargs (`dict[str, Any]`, *optional*):
                Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
        """
        use_auth_token = kwargs.pop("use_auth_token", None)
        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use "
                "`token` instead.",
                FutureWarning,
            )
            if kwargs.get("token") is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            kwargs["token"] = use_auth_token

        os.makedirs(save_directory, exist_ok=True)

        if push_to_hub:
            commit_message = kwargs.pop("commit_message", None)
            repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
            repo_id = self._create_repo(repo_id, **kwargs)
            files_timestamps = self._get_files_timestamps(save_directory)

        # If we have a custom config, we copy the file defining it in the folder and set the attributes so it can be
        # loaded from the Hub.
        if self._auto_class is not None:
            attrs = [getattr(self, attribute_name) for attribute_name in self.attributes]
            configs = [(a.init_kwargs if isinstance(a, PreTrainedTokenizerBase) else a) for a in attrs]
            configs.append(self)
            custom_object_save(self, save_directory, config=configs)

        save_jinja_files = kwargs.pop("save_jinja_files", True)

        for attribute_name in self.attributes:
            if attribute_name == "tokenizer":
                attribute = getattr(self, attribute_name)
                if hasattr(attribute, "_set_processor_class"):
                    attribute._set_processor_class(self.__class__.__name__)
                attribute.save_pretrained(save_directory, save_jinja_files=save_jinja_files)
            elif legacy_serialization:
                attribute = getattr(self, attribute_name)
                if hasattr(attribute, "_set_processor_class"):
                    attribute._set_processor_class(self.__class__.__name__)
                attribute.save_pretrained(save_directory)

        if self._auto_class is not None:
            # We added an attribute to the init_kwargs of the tokenizers, which needs to be cleaned up.
            for attribute_name in self.attributes:
                attribute = getattr(self, attribute_name)
                if isinstance(attribute, PreTrainedTokenizerBase):
                    del attribute.init_kwargs["auto_map"]

        output_processor_file = os.path.join(save_directory, PROCESSOR_NAME)
        output_chat_template_file_jinja = os.path.join(save_directory, CHAT_TEMPLATE_FILE)
        output_chat_template_file_legacy = os.path.join(save_directory, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE)
        chat_template_dir = os.path.join(save_directory, CHAT_TEMPLATE_DIR)

        if self.chat_template is not None:
            is_single_template = isinstance(self.chat_template, str)
            if save_jinja_files and is_single_template:
                # New format for single templates: save as chat_template.jinja
                with open(output_chat_template_file_jinja, "w", encoding="utf-8") as f:
                    f.write(self.chat_template)
                logger.info(f"chat template saved in {output_chat_template_file_jinja}")
            elif save_jinja_files:
                # New format for multiple templates: the default template goes in chat_template.jinja and the other
                # templates go in the additional chat templates directory, one file per template
                for template_name, template in self.chat_template.items():
                    if template_name == "default":
                        with open(output_chat_template_file_jinja, "w", encoding="utf-8") as f:
                            f.write(self.chat_template["default"])
                        logger.info(f"chat template saved in {output_chat_template_file_jinja}")
                    else:
                        os.makedirs(chat_template_dir, exist_ok=True)
                        template_filepath = os.path.join(chat_template_dir, f"{template_name}.jinja")
                        with open(template_filepath, "w", encoding="utf-8") as f:
                            f.write(template)
                        logger.info(f"chat template saved in {template_filepath}")
            elif is_single_template:
                # Legacy format for single templates: {"chat_template": template} in chat_template.json
                chat_template_json_string = (
                    json.dumps({"chat_template": self.chat_template}, indent=2, sort_keys=True) + "\n"
                )
                with open(output_chat_template_file_legacy, "w", encoding="utf-8") as f:
                    f.write(chat_template_json_string)
                logger.info(f"chat template saved in {output_chat_template_file_legacy}")
            else:
                raise ValueError(
                    "Multiple chat templates are not supported in the legacy format. Please save them as separate "
                    "files using the `save_jinja_files` argument."
                )

        if legacy_serialization:
            output_audio_tokenizer_file = os.path.join(save_directory, AUDIO_TOKENIZER_NAME)
            processor_dict = self.to_dict()
            # If this processor config contains nothing beyond the class name, don't save an empty config file
            if set(processor_dict.keys()) != {"processor_class"}:
                self.to_json_file(output_processor_file)
                logger.info(f"processor saved in {output_processor_file}")
            if set(processor_dict.keys()) == {"processor_class"}:
                return_files = []
            else:
                return_files = [output_processor_file]
            if self.audio_tokenizer is not None:
                audio_tokenizer_class = self.audio_tokenizer.__class__.__name__
                audio_tokenizer_path = self.audio_tokenizer.name_or_path
                audio_tokenizer_dict = {
                    "audio_tokenizer_class": audio_tokenizer_class,
                    "audio_tokenizer_name_or_path": audio_tokenizer_path,
                }
                audio_tokenizer_json = json.dumps(audio_tokenizer_dict, indent=2, sort_keys=True) + "\n"
                with open(output_audio_tokenizer_file, "w", encoding="utf-8") as f:
                    f.write(audio_tokenizer_json)
        else:
            self.to_json_file(output_processor_file, legacy_serialization=False)
            logger.info(f"processor saved in {output_processor_file}")
            return_files = [output_processor_file]

        if push_to_hub:
            self._upload_modified_files(
                save_directory,
                repo_id,
                files_timestamps,
                commit_message=commit_message,
                token=kwargs.get("token"),
            )

        return return_files
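    # Illustrative sketch (not part of the library API): a local save/load round trip. The directory
    # and processor class are hypothetical.
    #
    #   processor.save_pretrained("./my_processor")               # writes sub-processor configs (legacy mode)
    #   reloaded = MyProcessor.from_pretrained("./my_processor")  # re-assembles all attributes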
    @classmethod
    def get_processor_dict(
        cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
    ) -> tuple[dict[str, Any], dict[str, Any]]:
        """
        From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
        processor of type [`~processing_utils.ProcessingMixin`] using `from_args_and_dict`.

        Parameters:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.
            subfolder (`str`, *optional*, defaults to `""`):
                In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you
                can specify the folder name here.

        Returns:
            `tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the processor object.
        """
        # The audio tokenizer is itself loaded with `from_pretrained`, so keep a copy of the loading kwargs for it
        audio_tokenizer_kwargs = copy.deepcopy(kwargs)

        cache_dir = kwargs.pop("cache_dir", None)
        force_download = kwargs.pop("force_download", False)
        resume_download = kwargs.pop("resume_download", None)
        proxies = kwargs.pop("proxies", None)
        token = kwargs.pop("token", None)
        local_files_only = kwargs.pop("local_files_only", False)
        revision = kwargs.pop("revision", None)
        subfolder = kwargs.pop("subfolder", "")

        from_pipeline = kwargs.pop("_from_pipeline", None)
        from_auto_class = kwargs.pop("_from_auto", False)

        user_agent = {"file_type": "processor", "from_auto_class": from_auto_class}
        if from_pipeline is not None:
            user_agent["using_pipeline"] = from_pipeline

        if is_offline_mode() and not local_files_only:
            logger.info("Offline mode: forcing local_files_only=True")
            local_files_only = True

        pretrained_model_name_or_path = str(pretrained_model_name_or_path)
        is_local = os.path.isdir(pretrained_model_name_or_path)
        additional_chat_template_files = {}
        resolved_additional_chat_template_files = {}
        if os.path.isfile(pretrained_model_name_or_path):
            resolved_processor_file = pretrained_model_name_or_path
            # chat templates cannot be resolved when the path points to a single file
            resolved_chat_template_file = None
            resolved_raw_chat_template_file = None
            resolved_audio_tokenizer_file = None
            is_local = True
        elif is_remote_url(pretrained_model_name_or_path):
            resolved_processor_file = download_url(pretrained_model_name_or_path)
            resolved_chat_template_file = None
            resolved_raw_chat_template_file = None
            resolved_audio_tokenizer_file = None
        else:
            if is_local:
                template_dir = Path(pretrained_model_name_or_path, CHAT_TEMPLATE_DIR)
                if template_dir.is_dir():
                    for template_file in template_dir.glob("*.jinja"):
                        template_name = template_file.stem
                        additional_chat_template_files[template_name] = f"{CHAT_TEMPLATE_DIR}/{template_name}.jinja"
            else:
                try:
                    for template_name in list_repo_templates(
                        pretrained_model_name_or_path,
                        local_files_only=local_files_only,
                        revision=revision,
                        cache_dir=cache_dir,
                    ):
                        additional_chat_template_files[template_name] = f"{CHAT_TEMPLATE_DIR}/{template_name}.jinja"
                except EntryNotFoundError:
                    pass  # No template directory means no additional templates

            cached_file_kwargs = {
                "cache_dir": cache_dir,
                "force_download": force_download,
                "proxies": proxies,
                "resume_download": resume_download,
                "local_files_only": local_files_only,
                "token": token,
                "user_agent": user_agent,
                "revision": revision,
                "subfolder": subfolder,
                "_raise_exceptions_for_missing_entries": False,
            }
            try:
                # Load from local folder or from cache or download from model Hub and cache
                resolved_processor_file = cached_file(
                    pretrained_model_name_or_path, PROCESSOR_NAME, **cached_file_kwargs
                )
                # The chat template is stored in a separate jinja file (preferred), possibly alongside additional
                # named templates, or in a legacy chat_template.json file
                resolved_chat_template_file = cached_file(
                    pretrained_model_name_or_path, CHAT_TEMPLATE_FILE, **cached_file_kwargs
                )
                resolved_additional_chat_template_files = {
                    template_name: cached_file(pretrained_model_name_or_path, template_file, **cached_file_kwargs)
                    for template_name, template_file in additional_chat_template_files.items()
                }
                resolved_raw_chat_template_file = cached_file(
                    pretrained_model_name_or_path, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE, **cached_file_kwargs
                )
                resolved_audio_tokenizer_file = cached_file(
                    pretrained_model_name_or_path, AUDIO_TOKENIZER_NAME, **cached_file_kwargs
                )
            except OSError:
                # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted
                # to the original exception.
                raise
            except Exception:
                # For any other exception, we throw a generic error.
                raise OSError(
                    f"Can't load processor for '{pretrained_model_name_or_path}'. If you were trying to load"
                    " it from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                    f" same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a"
                    f" directory containing a {PROCESSOR_NAME} file"
                )

        if resolved_raw_chat_template_file is not None and (
            resolved_chat_template_file is not None or resolved_additional_chat_template_files
        ):
            raise ValueError(
                "Cannot load chat template due to conflicting files - this checkpoint combines a legacy "
                "chat_template.json file with separate template files, which is not supported. To resolve this "
                "error, replace the legacy chat_template.json file with a modern chat_template.jinja file."
            )

        chat_templates = {}
        if resolved_chat_template_file is not None:
            with open(resolved_chat_template_file, encoding="utf-8") as reader:
                chat_templates["default"] = reader.read()
        elif resolved_raw_chat_template_file is not None:
            with open(resolved_raw_chat_template_file, encoding="utf-8") as reader:
                chat_templates["default"] = json.loads(reader.read())["chat_template"]
        for template_name, template_file in resolved_additional_chat_template_files.items():
            with open(template_file, encoding="utf-8") as reader:
                chat_templates[template_name] = reader.read()

        if set(chat_templates) == {"default"}:
            kwargs["chat_template"] = chat_templates["default"]
        elif chat_templates:
            kwargs["chat_template"] = chat_templates

        if resolved_processor_file is None:
            # Many processors have no dedicated config file; in that case we still return the chat template
            processor_dict = {}
            if "chat_template" in kwargs:
                processor_dict["chat_template"] = kwargs.pop("chat_template")
            return processor_dict, kwargs

        try:
            # Load processor dict
            with open(resolved_processor_file, encoding="utf-8") as reader:
                text = reader.read()
            processor_dict = json.loads(text)
        except json.JSONDecodeError:
            raise OSError(f"It looks like the config file at '{resolved_processor_file}' is not a valid JSON file.")

        if is_local:
            logger.info(f"loading configuration file {resolved_processor_file}")
        else:
            logger.info(f"loading configuration file {PROCESSOR_NAME} from cache at {resolved_processor_file}")

        if "chat_template" in processor_dict and processor_dict["chat_template"] is not None:
            logger.warning_once(
                "Chat templates should be in a 'chat_template.jinja' file but found key='chat_template' in the "
                "processor's config. Make sure to move your template to its own file."
            )
        if "chat_template" in kwargs:
            processor_dict["chat_template"] = kwargs.pop("chat_template")

        # The audio tokenizer is saved as a pointer to a full model checkpoint, so it has to be loaded here
        if resolved_audio_tokenizer_file is not None:
            with open(resolved_audio_tokenizer_file, encoding="utf-8") as reader:
                audio_tokenizer_dict = json.loads(reader.read())
            audio_tokenizer_class = getattr(transformers_module, audio_tokenizer_dict["audio_tokenizer_class"])
            audio_tokenizer_path = audio_tokenizer_dict["audio_tokenizer_name_or_path"]
            processor_dict["audio_tokenizer"] = audio_tokenizer_class.from_pretrained(
                audio_tokenizer_path, **audio_tokenizer_kwargs
            )

        return processor_dict, kwargs
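    # Illustrative sketch (not part of the library API): `get_processor_dict` only resolves and parses
    # files, it instantiates nothing (except a possible audio tokenizer), which makes it handy for
    # inspecting a checkpoint. The names are hypothetical.
    #
    #   processor_dict, remaining_kwargs = MyProcessor.get_processor_dict("org/some-checkpoint")
    #   sorted(processor_dict)   # e.g. ["chat_template", "processor_class", ...]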
    @classmethod
    def from_args_and_dict(cls, args, processor_dict: dict[str, Any], **kwargs):
        """
        Instantiates a type of [`~processing_utils.ProcessingMixin`] from a Python dictionary of parameters.

        Args:
            processor_dict (`dict[str, Any]`):
                Dictionary that will be used to instantiate the processor object. Such a dictionary can be retrieved
                from a pretrained checkpoint by leveraging the [`~processing_utils.ProcessingMixin.to_dict`] method.
            kwargs (`dict[str, Any]`):
                Additional parameters from which to initialize the processor object.

        Returns:
            [`~processing_utils.ProcessingMixin`]: The processor object instantiated from those parameters.
        """
        processor_dict = processor_dict.copy()
        return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)

        # We have to pop up some unused (but specific) kwargs and then validate that it doesn't contain unused kwargs.
        # If we don't pop, some specific kwargs will raise a warning.
        if "processor_class" in processor_dict:
            del processor_dict["processor_class"]

        if "auto_map" in processor_dict:
            del processor_dict["auto_map"]

        # override processor_dict with given kwargs
        processor_dict.update(kwargs)

        accepted_args_and_kwargs = cls.__init__.__code__.co_varnames[: cls.__init__.__code__.co_argcount][1:]

        # validate both processor_dict and given kwargs
        unused_kwargs, valid_kwargs = cls.validate_init_kwargs(
            processor_config=processor_dict, valid_kwargs=accepted_args_and_kwargs
        )

        # update args that are already in processor_dict to avoid duplicate arguments
        args_to_update = {
            i: valid_kwargs.pop(arg)
            for i, arg in enumerate(accepted_args_and_kwargs)
            if arg in valid_kwargs and i < len(args)
        }
        args = [args_to_update.get(i, arg) for i, arg in enumerate(args)]

        # instantiate processor with used (and valid) kwargs only
        processor = cls(*args, **valid_kwargs)
        logger.info(f"Processor {processor}")
        if return_unused_kwargs:
            return processor, unused_kwargs
        else:
            return processor

    def _merge_kwargs(
        self,
        ModelProcessorKwargs: ProcessingKwargs,
        tokenizer_init_kwargs: Optional[dict] = None,
        **kwargs,
    ) -> dict[str, dict]:
        """
        Method to merge dictionaries of kwargs cleanly separated by modality within a Processor instance.
        The order of operations is as follows:
            1) kwargs passed as before have highest priority to preserve BC.
                ```python
                high_priority_kwargs = {"crop_size": {"height": 222, "width": 222}, "padding": "max_length"}
                processor(..., **high_priority_kwargs)
                ```
            2) kwargs passed as modality-specific kwargs have second priority. This is the recommended API.
                ```python
                processor(..., text_kwargs={"padding": "max_length"}, images_kwargs={"crop_size": {"height": 222, "width": 222}})
                ```
            3) kwargs passed during instantiation of a modality processor have third priority.
                ```python
                tokenizer = tokenizer_class(..., {"padding": "max_length"})
                image_processor = image_processor_class(...)
                processor(tokenizer, image_processor)  # will pass max_length unless overridden by kwargs at call
                ```
            4) defaults kwargs specified at processor level have lowest priority.
                ```python
                class MyProcessingKwargs(ProcessingKwargs, CommonKwargs, TextKwargs, ImagesKwargs, total=False):
                    _defaults = {
                        "text_kwargs": {
                            "padding": "max_length",
                            "max_length": 64,
                        },
                    }
                ```

        Args:
            ModelProcessorKwargs (`ProcessingKwargs`):
                Typed dictionary of kwargs specifically required by the model passed.
            tokenizer_init_kwargs (`Dict`, *optional*):
                Dictionary of kwargs the tokenizer was instantiated with and need to take precedence over defaults.

        Returns:
            output_kwargs (`Dict`):
                Dictionary of per-modality kwargs to be passed to each modality-specific processor.
        """
        # Initialize dictionaries
        output_kwargs = {
            "text_kwargs": {},
            "images_kwargs": {},
            "audio_kwargs": {},
            "videos_kwargs": {},
            "common_kwargs": {},
        }

        default_kwargs = {
            "text_kwargs": {},
            "images_kwargs": {},
            "audio_kwargs": {},
            "videos_kwargs": {},
            "common_kwargs": {},
        }

        possible_modality_keywords = {"text", "audio", "videos", "images"}
        used_keys = set()

        # get defaults from the model processor kwargs if they exist, then update them with tokenizer init kwargs
        for modality in default_kwargs:
            default_kwargs[modality] = ModelProcessorKwargs._defaults.get(modality, {}).copy()
            for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__:
                if tokenizer_init_kwargs is not None and modality_key in tokenizer_init_kwargs:
                    value = (
                        getattr(self.tokenizer, modality_key)
                        if hasattr(self.tokenizer, modality_key)
                        else tokenizer_init_kwargs[modality_key]
                    )
                    default_kwargs[modality][modality_key] = value
        # now defaults are updated with the tokenizer's defaults: pass them to the output dictionary
        output_kwargs.update(default_kwargs)

        # update modality kwargs with passed kwargs
        non_modality_kwargs = set(kwargs) - set(output_kwargs)
        for modality, output_kwarg in output_kwargs.items():
            for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__:
                # check if we received a structured kwarg dict or not to handle it correctly
                if modality in kwargs:
                    kwarg_value = kwargs[modality].pop(modality_key, "__empty__")
                    # check if this key was also passed as a flat kwarg
                    if kwarg_value != "__empty__" and modality_key in non_modality_kwargs:
                        raise ValueError(
                            f"Keyword argument {modality_key} was passed two times:\n"
                            f"in a dictionary for {modality} and as a **kwarg."
                        )
                elif modality_key in kwargs:
                    # we get a modality_key instead of popping it because modality-specific processors
                    # can have overlapping kwargs
                    kwarg_value = kwargs.get(modality_key, "__empty__")
                else:
                    kwarg_value = "__empty__"
                if not isinstance(kwarg_value, str) or kwarg_value != "__empty__":
                    output_kwarg[modality_key] = kwarg_value
                    used_keys.add(modality_key)

        # Determine if kwargs is a flat dictionary or contains nested dictionaries
        if any(key in default_kwargs for key in kwargs):
            # kwargs is dictionary-based, and some keys match modality names
            for modality, subdict in kwargs.items():
                if modality in default_kwargs:
                    for subkey, subvalue in subdict.items():
                        if subkey not in used_keys:
                            output_kwargs[modality][subkey] = subvalue
                            used_keys.add(subkey)
        else:
            # kwargs is a flat dictionary
            for key in kwargs:
                if key not in used_keys:
                    if key in ModelProcessorKwargs.__annotations__["common_kwargs"].__annotations__:
                        output_kwargs["common_kwargs"][key] = kwargs[key]
                    elif key not in possible_modality_keywords:
                        logger.warning_once(
                            f"Keyword argument `{key}` is not a valid argument for this processor and will be ignored."
                        )

        # all modality-specific kwargs are updated with common kwargs
        for kwarg in output_kwargs.values():
            kwarg.update(output_kwargs["common_kwargs"])

        return output_kwargs
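    # Illustrative sketch (not part of the library API): a flat kwarg lands in its modality dict, and
    # common kwargs are copied into every modality dict. `MyProcessorKwargs` is a hypothetical
    # `ProcessingKwargs` subclass.
    #
    #   merged = processor._merge_kwargs(
    #       MyProcessorKwargs,
    #       tokenizer_init_kwargs=processor.tokenizer.init_kwargs,
    #       max_length=128,       # flat kwarg  -> merged["text_kwargs"]["max_length"]
    #       return_tensors="pt",  # common kwarg -> copied into every modality dict
    #   )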
    @classmethod
    def from_pretrained(
        cls: type[SpecificProcessorType],
        pretrained_model_name_or_path: Union[str, os.PathLike],
        cache_dir: Optional[Union[str, os.PathLike]] = None,
        force_download: bool = False,
        local_files_only: bool = False,
        token: Optional[Union[str, bool]] = None,
        revision: str = "main",
        **kwargs,
    ) -> SpecificProcessorType:
        """
        Instantiate a processor associated with a pretrained model.

        This class method is simply calling the feature extractor
        [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`], image processor
        [`~image_processing_utils.ImageProcessingMixin`] and the tokenizer
        [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`] methods. Please refer to the docstrings of
        the methods above for more information.

        Args:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                This can be either:

                - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on
                  huggingface.co.
                - a path to a *directory* containing a feature extractor file saved using the
                  [`~SequenceFeatureExtractor.save_pretrained`] method, e.g., `./my_model_directory/`.
                - a path or url to a saved feature extractor JSON *file*, e.g.,
                  `./my_model_directory/preprocessor_config.json`.
            **kwargs
                Additional keyword arguments passed along to both
                [`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] and
                [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`].
        """
        kwargs["cache_dir"] = cache_dir
        kwargs["force_download"] = force_download
        kwargs["local_files_only"] = local_files_only
        kwargs["revision"] = revision

        use_auth_token = kwargs.pop("use_auth_token", None)
        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use "
                "`token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if token is not None:
            kwargs["token"] = token

        args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
        processor_dict, kwargs = cls.get_processor_dict(pretrained_model_name_or_path, **kwargs)

        return cls.from_args_and_dict(args, processor_dict, **kwargs)

    @classmethod
    def register_for_auto_class(cls, auto_class="AutoProcessor"):
        """
        Register this class with a given auto class. This should only be used for custom feature extractors as the
        ones in the library are already mapped with `AutoProcessor`.

        Args:
            auto_class (`str` or `type`, *optional*, defaults to `"AutoProcessor"`):
                The auto class to register this new feature extractor with.
        """
        if not isinstance(auto_class, str):
            auto_class = auto_class.__name__

        import transformers.models.auto as auto_module

        if not hasattr(auto_module, auto_class):
            raise ValueError(f"{auto_class} is not a valid auto class.")

        cls._auto_class = auto_class

    @classmethod
    def _get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        """
        Identify and instantiate the subcomponents of Processor classes, like image processors and tokenizers.
        This method uses the Processor attributes like `tokenizer_class` to figure out what class those
        subcomponents should be. Note that any subcomponents must either be library classes that are accessible in
        the `transformers` root, or they must be custom code that has been registered with the relevant autoclass,
        via methods like `AutoTokenizer.register()`. If neither of these conditions are fulfilled, this method will
        be unable to find the relevant subcomponent class and will raise an error.
        """
        args = []
        for attribute_name in cls.attributes:
            class_name = getattr(cls, f"{attribute_name}_class")
            if isinstance(class_name, tuple):
                classes = tuple(cls.get_possibly_dynamic_module(n) if n is not None else None for n in class_name)
                if attribute_name == "image_processor":
                    use_fast = kwargs.get("use_fast")
                    if use_fast is None:
                        logger.warning_once(
                            "Using a slow image processor as `use_fast` is unset and a slow processor was saved "
                            "with this model. `use_fast=True` will be the default behavior in v4.52, even if the "
                            "model was saved with a slow processor. This will result in minor differences in "
                            "outputs. You'll still be able to use a slow processor with `use_fast=False`."
                        )
                else:
                    use_fast = kwargs.get("use_fast", True)
                if use_fast and classes[1] is not None:
                    attribute_class = classes[1]
                else:
                    attribute_class = classes[0]
            else:
                attribute_class = cls.get_possibly_dynamic_module(class_name)

            args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
        return args

    @staticmethod
    def get_possibly_dynamic_module(module_name):
        if hasattr(transformers_module, module_name):
            return getattr(transformers_module, module_name)
        lookup_locations = [
            transformers_module.IMAGE_PROCESSOR_MAPPING,
            transformers_module.VIDEO_PROCESSOR_MAPPING,
            transformers_module.TOKENIZER_MAPPING,
            transformers_module.FEATURE_EXTRACTOR_MAPPING,
            transformers_module.MODEL_FOR_AUDIO_TOKENIZATION_MAPPING,
        ]
        for lookup_location in lookup_locations:
            for custom_class in lookup_location._extra_content.values():
                if isinstance(custom_class, tuple):
                    for custom_subclass in custom_class:
                        if custom_subclass is not None and custom_subclass.__name__ == module_name:
                            return custom_subclass
                elif custom_class is not None and custom_class.__name__ == module_name:
                    return custom_class
        raise ValueError(
            f"Could not find module {module_name} in `transformers`. If this is a custom class, it should be "
            "registered using the relevant `AutoClass.register()` function so that other functions can find it!"
        )

    def batch_decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
        refer to the docstring of this method for more information.
        """
        if not hasattr(self, "tokenizer"):
            raise AttributeError(f"Cannot batch decode text: {self.__class__.__name__} has no tokenizer.")
        return self.tokenizer.batch_decode(*args, **kwargs)

    def decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer
        to the docstring of this method for more information.
        """
        if not hasattr(self, "tokenizer"):
            raise AttributeError(f"Cannot decode text: {self.__class__.__name__} has no tokenizer.")
        return self.tokenizer.decode(*args, **kwargs)

    @property
    def model_input_names(self):
        model_input_names = []
        for attribute_name in self.attributes:
            attribute = getattr(self, attribute_name, None)
            attr_input_names = getattr(attribute, "model_input_names", [])
            model_input_names.extend(attr_input_names)
        return model_input_names

    @staticmethod
    def validate_init_kwargs(processor_config, valid_kwargs):
        kwargs_from_config = set(processor_config.keys())
        valid_kwargs_set = set(valid_kwargs)

        unused_keys = kwargs_from_config - valid_kwargs_set
        valid_keys = kwargs_from_config & valid_kwargs_set

        unused_kwargs = {k: processor_config[k] for k in unused_keys} if unused_keys else {}
        valid_kwargs = {k: processor_config[k] for k in valid_keys} if valid_keys else {}
        return unused_kwargs, valid_kwargs
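    # Illustrative sketch (not part of the library API): a minimal custom processor wired up so that
    # `from_pretrained` can resolve its subcomponents; all names are hypothetical.
    #
    #   class MyProcessor(ProcessorMixin):
    #       attributes = ["image_processor", "tokenizer"]
    #       image_processor_class = "AutoImageProcessor"
    #       tokenizer_class = "AutoTokenizer"
    #
    #   MyProcessor.register_for_auto_class()   # registers with "AutoProcessor" by default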
    @deprecate_kwarg("video_fps", version="4.58", new_name="fps")
    @deprecate_kwarg(
        "video_load_backend",
        version="4.59",
        additional_message=(
            ". This function will use `torchcodec` by default, or `torchvision` if `torchcodec` is not installed."
        ),
    )
    def apply_chat_template(
        self,
        conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]],
        chat_template: Optional[str] = None,
        **kwargs: Unpack[AllKwargsForChatTemplate],
    ) -> str:
        """
        Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
        conversations to turn them into a single tokenizable string.

        The input is expected to be in the following format, where each message content is a list consisting of text
        and optionally image or video inputs. One can also provide an image, video, URL or local path which will be
        used to form `pixel_values` when `return_dict=True`. If not provided, one will get only the formatted text,
        optionally tokenized text.

        conversation = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
                    {"type": "text", "text": "Please describe this image in detail."},
                ],
            },
        ]

        Args:
            conversation (`Union[list[dict[str, str]], list[list[dict[str, str]]]]`):
                The conversation to format.
            chat_template (`Optional[str]`, *optional*):
                The Jinja template to use for formatting the conversation. If not provided, the tokenizer's
                chat template is used.
        """
        if chat_template is None:
            if isinstance(self.chat_template, dict) and "default" in self.chat_template:
                chat_template = self.chat_template["default"]
            elif isinstance(self.chat_template, dict):
                raise ValueError(
                    'The processor has multiple chat templates but none of them are named "default". You need to '
                    "specify which one to use by passing the `chat_template` argument. Available templates are: "
                    f"{', '.join(self.chat_template.keys())}"
                )
            elif self.chat_template is not None:
                chat_template = self.chat_template
            else:
                raise ValueError(
                    "Cannot use apply_chat_template because this processor does not have a chat template."
                )
        else:
            if isinstance(self.chat_template, dict) and chat_template in self.chat_template:
                # It's the name of a saved template, not a full template string
                chat_template = self.chat_template[chat_template]

        is_tokenizers_fast = hasattr(self, "tokenizer") and self.tokenizer.__class__.__name__.endswith("Fast")

        if kwargs.get("continue_final_message", False):
            if kwargs.get("add_generation_prompt", False):
                raise ValueError(
                    "continue_final_message and add_generation_prompt are not compatible. Use continue_final_message "
                    "when you want the model to continue the final message, and add_generation_prompt when you want "
                    "to add a header that will prompt it to start a new assistant message instead."
                )
            if kwargs.get("return_assistant_tokens_mask", False):
                raise ValueError("continue_final_message is not compatible with return_assistant_tokens_mask.")

        if kwargs.get("return_assistant_tokens_mask", False):
            if not is_tokenizers_fast:
                raise ValueError(
                    "`return_assistant_tokens_mask` is not possible with slow tokenizers. Make sure you have "
                    "`tokenizers` installed. If the error persists, open an issue to support a Fast tokenizer for "
                    "your model."
                )
            # offsets are needed to map assistant spans of the rendered text back to token positions
            kwargs["return_offsets_mapping"] = True

        # Fill the sets of kwargs that are used by different parts of the templating process
        processed_kwargs = {
            "mm_load_kwargs": {},
            "template_kwargs": {},
        }
        for kwarg_type in processed_kwargs:
            for key in AllKwargsForChatTemplate.__annotations__[kwarg_type].__annotations__:
                kwarg_type_defaults = AllKwargsForChatTemplate.__annotations__[kwarg_type]
                default_value = getattr(kwarg_type_defaults, key, None)
                value = kwargs.pop(key, default_value)
                if value is not None and not isinstance(value, dict):
                    processed_kwargs[kwarg_type][key] = value

        # Pass unprocessed custom kwargs along to the template
        processed_kwargs["template_kwargs"].update(kwargs)

        if isinstance(conversation, (list, tuple)) and (
            isinstance(conversation[0], (list, tuple)) or hasattr(conversation[0], "content")
        ):
            is_batched = True
            conversations = conversation
        else:
            is_batched = False
            conversations = [conversation]

        tokenize = processed_kwargs["template_kwargs"].pop("tokenize", False)
        return_dict = processed_kwargs["template_kwargs"].pop("return_dict", False)
        mm_load_kwargs = processed_kwargs["mm_load_kwargs"]

        if tokenize:
            batch_images, batch_videos = [], []
            batch_audios = []
            for conversation in conversations:
                images, videos = [], []
                for message in conversation:
                    visuals = [content for content in message["content"] if content["type"] in ["image", "video"]]
                    audio_fnames = [
                        content[key]
                        for content in message["content"]
                        for key in ["audio", "url", "path"]
                        if key in content and content["type"] == "audio"
                    ]
                    image_fnames = [
                        vision_info[key]
                        for vision_info in visuals
                        for key in ["image", "url", "path", "base64"]
                        if key in vision_info and vision_info["type"] == "image"
                    ]
                    video_fnames = [
                        vision_info[key]
                        for vision_info in visuals
                        for key in ["video", "url", "path"]
                        if key in vision_info and vision_info["type"] == "video"
                    ]

                    # Audio models do not accept nested lists of audios (yet!) so we construct a flat input list
                    if not mm_load_kwargs["load_audio_from_video"]:
                        for fname in audio_fnames:
                            batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs["sampling_rate"]))
                    else:
                        for fname in video_fnames:
                            batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs["sampling_rate"]))

                    # Visuals are passed on as paths/URLs and loaded by the respective modality processors
                    if image_fnames:
                        images.append(image_fnames)
                    if video_fnames:
                        videos.append(video_fnames)
                batch_images.append(images)
                batch_videos.append(videos)

        prompt, generation_indices = render_jinja_template(
            conversations=conversations,
            chat_template=chat_template,
            **processed_kwargs["template_kwargs"],  # different flags such as `return_assistant_tokens_mask`
            **self.tokenizer.special_tokens_map,  # tokenizer special tokens are used by some templates
        )

        if not is_batched:
            prompt = prompt[0]

        if tokenize:
            # Tokenizer's `apply_chat_template` never adds special tokens when tokenizing because the templates
            # already include them. The processor's `__call__` would add them again, so we disable that here.
            single_prompt = prompt[0] if is_batched else prompt
            if self.tokenizer.bos_token is not None and single_prompt.startswith(self.tokenizer.bos_token):
                kwargs["add_special_tokens"] = False

            # Enable frame sampling when sampling-related kwargs are passed, unless the user disabled it explicitly
            if "do_sample_frames" not in kwargs and ("fps" in kwargs or "num_frames" in kwargs):
                kwargs["do_sample_frames"] = True

            images_exist = any(im is not None for im_list in batch_images for im in im_list)
            videos_exist = any(vid is not None for vid_list in batch_videos for vid in vid_list)

            out = self(
                text=prompt,
                images=batch_images if images_exist else None,
                videos=batch_videos if videos_exist else None,
                audio=batch_audios if batch_audios else None,
                **kwargs,
            )
            if return_dict:
                if processed_kwargs["template_kwargs"].get("return_assistant_tokens_mask", False):
                    assistant_masks = []
                    offset_mapping = out.pop("offset_mapping")
                    input_ids = out["input_ids"]
                    for i in range(len(input_ids)):
                        current_mask = [0] * len(input_ids[i])
                        offsets = offset_mapping[i]
                        offset_starts = [start for start, end in offsets]
                        for assistant_start_char, assistant_end_char in generation_indices[i]:
                            start_pos = bisect.bisect_left(offset_starts, assistant_start_char)
                            end_pos = bisect.bisect_left(offset_starts, assistant_end_char)
                            if (
                                start_pos >= 1
                                and offsets[start_pos - 1][0] <= assistant_start_char < offsets[start_pos - 1][1]
                            ):
                                # the previous token overlaps the start of the assistant span
                                start_pos -= 1
                            for token_id in range(start_pos, end_pos if end_pos else len(input_ids[i])):
                                current_mask[token_id] = 1
                        assistant_masks.append(current_mask)
                    out["assistant_masks"] = assistant_masks
                out.convert_to_tensors(tensor_type=kwargs.get("return_tensors"))
                return out
            return out["input_ids"]
        return prompt


ProcessorMixin.push_to_hub = copy_func(ProcessorMixin.push_to_hub)
if ProcessorMixin.push_to_hub.__doc__ is not None:
    ProcessorMixin.push_to_hub.__doc__ = ProcessorMixin.push_to_hub.__doc__.format(
        object="processor", object_class="AutoProcessor", object_files="processor files"
    )
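# Illustrative sketch (not part of the library API): a typical multimodal `apply_chat_template` call,
# following the conversation format documented above. The image URL is hypothetical.
#
#   messages = [
#       {
#           "role": "user",
#           "content": [
#               {"type": "image", "url": "https://example.com/cat.png"},
#               {"type": "text", "text": "Please describe this image."},
#           ],
#       }
#   ]
#   inputs = processor.apply_chat_template(
#       messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
#   )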