import copy
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple, Union

from huggingface_hub.utils import logging, yaml_dump

logger = logging.get_logger(__name__)


@dataclass
class EvalResult:
    """
    Flattened representation of individual evaluation results found in model-index of Model Cards.

    For more information on the model-index spec, see
    https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1.

    Args:
        task_type (`str`):
            The task identifier. Example: "image-classification".
        dataset_type (`str`):
            The dataset identifier. Example: "common_voice". Use dataset id from https://hf.co/datasets.
        dataset_name (`str`):
            A pretty name for the dataset. Example: "Common Voice (French)".
        metric_type (`str`):
            The metric identifier. Example: "wer". Use metric id from https://hf.co/metrics.
        metric_value (`Any`):
            The metric value. Example: 0.9 or "20.0 ± 1.2".
        task_name (`str`, *optional*):
            A pretty name for the task. Example: "Speech Recognition".
        dataset_config (`str`, *optional*):
            The name of the dataset configuration used in `load_dataset()`.
            Example: fr in `load_dataset("common_voice", "fr")`. See the `datasets` docs for more info:
            https://hf.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
        dataset_split (`str`, *optional*):
            The split used in `load_dataset()`. Example: "test".
        dataset_revision (`str`, *optional*):
            The revision (AKA Git Sha) of the dataset used in `load_dataset()`.
            Example: 5503434ddd753f426f4b38109466949a1217c2bb
        dataset_args (`Dict[str, Any]`, *optional*):
            The arguments passed during `Metric.compute()`. Example for `bleu`: `{"max_order": 4}`
        metric_name (`str`, *optional*):
            A pretty name for the metric. Example: "Test WER".
        metric_config (`str`, *optional*):
            The name of the metric configuration used in `load_metric()`.
            Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`.
            See the `datasets` docs for more info:
            https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
        metric_args (`Dict[str, Any]`, *optional*):
            The arguments passed during `Metric.compute()`. Example for `bleu`: max_order: 4
        verified (`bool`, *optional*):
            Indicates whether the metrics originate from Hugging Face's [evaluation
            service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not. Automatically computed
            by Hugging Face, do not set.
        verify_token (`str`, *optional*):
            A JSON Web Token that is used to verify whether the metrics originate from Hugging Face's [evaluation
            service](https://huggingface.co/spaces/autoevaluate/model-evaluator) or not.
        source_name (`str`, *optional*):
            The name of the source of the evaluation result. Example: "Open LLM Leaderboard".
        source_url (`str`, *optional*):
            The URL of the source of the evaluation result.
            Example: "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard".
    """

    # Required fields
    task_type: str
    dataset_type: str
    dataset_name: str
    metric_type: str
    metric_value: Any

    # Optional fields
    task_name: Optional[str] = None
    dataset_config: Optional[str] = None
    dataset_split: Optional[str] = None
    dataset_revision: Optional[str] = None
    dataset_args: Optional[Dict[str, Any]] = None
    metric_name: Optional[str] = None
    metric_config: Optional[str] = None
    metric_args: Optional[Dict[str, Any]] = None
    verified: Optional[bool] = None
    verify_token: Optional[str] = None
    source_name: Optional[str] = None
    source_url: Optional[str] = None

    @property
    def unique_identifier(self) -> tuple:
        """Returns a tuple that uniquely identifies this evaluation."""
        return (
            self.task_type,
            self.dataset_type,
            self.dataset_config,
            self.dataset_split,
            self.dataset_revision,
        )

    def is_equal_except_value(self, other: "EvalResult") -> bool:
        """
        Return True if `self` and `other` describe exactly the same metric but with a
        different value.
        """
        for key, _ in self.__dict__.items():
            if key == "metric_value":
                continue
            # For metrics computed by Hugging Face's evaluation service, `verify_token` is
            # derived from `metric_value`, so we exclude it from the comparison as well.
            if key != "verify_token" and getattr(self, key) != getattr(other, key):
                return False
        return True

    def __post_init__(self) -> None:
        if self.source_name is not None and self.source_url is None:
            raise ValueError("If `source_name` is provided, `source_url` must also be provided.")


class CardData:
    """Structure containing metadata from a RepoCard.

    [`CardData`] is the parent class of [`ModelCardData`] and [`DatasetCardData`].

    Metadata can be exported as a dictionary or YAML. Export can be customized to alter the
    representation of the data (example: flatten evaluation results). `CardData` behaves as a
    dictionary (can get, pop, set values) but does not inherit from `dict` to allow this export step.
    """

    def __init__(self, ignore_metadata_errors: bool = False, **kwargs):
        self.__dict__.update(kwargs)

    def to_dict(self) -> Dict[str, Any]:
        """Converts CardData to a dict.

        Returns:
            `dict`: CardData represented as a dictionary ready to be dumped to a YAML block
            for inclusion in a README.md file.
        """
        data_dict = copy.deepcopy(self.__dict__)
        self._to_dict(data_dict)
        return {key: value for key, value in data_dict.items() if value is not None}

    def _to_dict(self, data_dict):
        """Use this method in child classes to alter the dict representation of the data.
        Alter the dict in-place.

        Args:
            data_dict (`dict`): The raw dict representation of the card data.
        """
        pass

    def to_yaml(self, line_break=None, original_order: Optional[List[str]] = None) -> str:
        """Dumps CardData to a YAML block for inclusion in a README.md file.

        Args:
            line_break (str, *optional*):
                The line break to use when dumping to yaml.

        Returns:
            `str`: CardData represented as a YAML block.
        """
        if original_order:
            # Preserve the key order found in the original metadata block, appending new keys.
            self.__dict__ = {
                k: self.__dict__[k]
                for k in original_order + list(set(self.__dict__.keys()) - set(original_order))
                if k in self.__dict__
            }
        return yaml_dump(self.to_dict(), sort_keys=False, line_break=line_break).strip()

    def __repr__(self):
        return repr(self.__dict__)

    def __str__(self):
        return self.to_yaml()

    def get(self, key: str, default: Any = None) -> Any:
        """Get value for a given metadata key."""
        value = self.__dict__.get(key)
        return default if value is None else value

    def pop(self, key: str, default: Any = None) -> Any:
        """Pop value for a given metadata key."""
        return self.__dict__.pop(key, default)

    def __getitem__(self, key: str) -> Any:
        """Get value for a given metadata key."""
        return self.__dict__[key]

    def __setitem__(self, key: str, value: Any) -> None:
        """Set value for a given metadata key."""
        self.__dict__[key] = value

    def __contains__(self, key: str) -> bool:
        """Check if a given metadata key is set."""
        return key in self.__dict__

    def __len__(self) -> int:
        """Return the number of metadata keys set."""
        return len(self.__dict__)


def _validate_eval_results(
    eval_results: Optional[Union[EvalResult, List[EvalResult]]],
    model_name: Optional[str],
) -> List[EvalResult]:
    if eval_results is None:
        return []
    if isinstance(eval_results, EvalResult):
        eval_results = [eval_results]
    if not isinstance(eval_results, list) or not all(isinstance(r, EvalResult) for r in eval_results):
        raise ValueError(
            f"`eval_results` should be of type `EvalResult` or a list of `EvalResult`, got {type(eval_results)}."
        )
    if model_name is None:
        raise ValueError("Passing `eval_results` requires `model_name` to be set.")
    return eval_results


class ModelCardData(CardData):
    """Model Card Metadata that is used by Hugging Face Hub when included at the top of your README.md

    Args:
        base_model (`str` or `List[str]`, *optional*):
            The identifier of the base model from which the model derives. This is applicable for example if your
            model is a fine-tune or adapter of an existing model. The value must be the ID of a model on the Hub (or
            a list of IDs if your model derives from multiple models). Defaults to None.
        datasets (`Union[str, List[str]]`, *optional*):
            Dataset or list of datasets that were used to train this model. Should be a dataset ID found on
            https://hf.co/datasets. Defaults to None.
        eval_results (`Union[List[EvalResult], EvalResult]`, *optional*):
            List of `huggingface_hub.EvalResult` that define evaluation results of the model. If provided,
            `model_name` is used as a name on PapersWithCode's leaderboards. Defaults to `None`.
        language (`Union[str, List[str]]`, *optional*):
            Language of model's training data or metadata. It must be an ISO 639-1, 639-2 or 639-3 code (two/three
            letters), or a special value like "code", "multilingual". Defaults to `None`.
        library_name (`str`, *optional*):
            Name of library used by this model. Example: keras or any library from
            https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/model-libraries.ts.
            Defaults to None.
        license (`str`, *optional*):
            License of this model. Example: apache-2.0 or any license from
            https://huggingface.co/docs/hub/repositories-licenses. Defaults to None.
        license_name (`str`, *optional*):
            Name of the license of this model. Defaults to None. To be used in conjunction with `license_link`.
            Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a name. In that case, use `license` instead.
        license_link (`str`, *optional*):
            Link to the license of this model. Defaults to None. To be used in conjunction with `license_name`.
            Common licenses (Apache-2.0, MIT, CC-BY-SA-4.0) do not need a link. In that case, use `license` instead.
        metrics (`List[str]`, *optional*):
            List of metrics used to evaluate this model. Should be a metric name that can be found at
            https://hf.co/metrics. Example: 'accuracy'. Defaults to None.
        model_name (`str`, *optional*):
            A name for this model. It is used along with `eval_results` to construct the `model-index` within the
            card's metadata. The name you supply here is what will be used on PapersWithCode's leaderboards. If None
            is provided then the repo name is used as a default. Defaults to None.
        pipeline_tag (`str`, *optional*):
            The pipeline tag associated with the model. Example: "text-classification".
        tags (`List[str]`, *optional*):
            List of tags to add to your model that can be used when filtering on the Hugging Face Hub.
            Defaults to None.
        ignore_metadata_errors (`str`):
            If True, errors while parsing the metadata section will be ignored. Some information might be lost
            during the process. Use it at your own risk.
        kwargs (`dict`, *optional*):
            Additional metadata that will be added to the model card. Defaults to None.

    Example:
        ```python
        >>> from huggingface_hub import ModelCardData
        >>> card_data = ModelCardData(
        ...     language="en",
        ...     license="mit",
        ...     library_name="timm",
        ...     tags=['image-classification', 'resnet'],
        ... )
        >>> card_data.to_dict()
        {'language': 'en', 'license': 'mit', 'library_name': 'timm', 'tags': ['image-classification', 'resnet']}
        ```
    """

    def __init__(
        self,
        *,
        base_model: Optional[Union[str, List[str]]] = None,
        datasets: Optional[Union[str, List[str]]] = None,
        eval_results: Optional[List[EvalResult]] = None,
        language: Optional[Union[str, List[str]]] = None,
        library_name: Optional[str] = None,
        license: Optional[str] = None,
        license_name: Optional[str] = None,
        license_link: Optional[str] = None,
        metrics: Optional[List[str]] = None,
        model_name: Optional[str] = None,
        pipeline_tag: Optional[str] = None,
        tags: Optional[List[str]] = None,
        ignore_metadata_errors: bool = False,
        **kwargs,
    ):
        self.base_model = base_model
        self.datasets = datasets
        self.eval_results = eval_results
        self.language = language
        self.library_name = library_name
        self.license = license
        self.license_name = license_name
        self.license_link = license_link
        self.metrics = metrics
        self.model_name = model_name
        self.pipeline_tag = pipeline_tag
        self.tags = _to_unique_list(tags)

        model_index = kwargs.pop("model-index", None)
        if model_index:
            try:
                model_name, eval_results = model_index_to_eval_results(model_index)
                self.model_name = model_name
                self.eval_results = eval_results
            except (KeyError, TypeError) as error:
                if ignore_metadata_errors:
                    logger.warning("Invalid model-index. Not loading eval results into CardData.")
                else:
                    raise ValueError(
                        f"Invalid `model_index` in metadata cannot be parsed: {error.__class__} {error}. Pass"
                        " `ignore_metadata_errors=True` to ignore this error while loading a Model Card. Warning:"
                        " some information will be lost. Use it at your own risk."
                    )

        super().__init__(**kwargs)

        if self.eval_results:
            try:
                self.eval_results = _validate_eval_results(self.eval_results, self.model_name)
            except Exception as e:
                if ignore_metadata_errors:
                    logger.warning(f"Failed to validate eval_results: {e}. Not loading eval results into CardData.")
                else:
                    raise ValueError(f"Failed to validate eval_results: {e}") from e

    def _to_dict(self, data_dict):
        """Format the internal data dict. In this case, we convert eval results to a valid model index."""
        if self.eval_results is not None:
            data_dict["model-index"] = eval_results_to_model_index(self.model_name, self.eval_results)
            del data_dict["eval_results"], data_dict["model_name"]


class DatasetCardData(CardData):
    """Dataset Card Metadata that is used by Hugging Face Hub when included at the top of your README.md

    Args:
        language (`List[str]`, *optional*):
            Language of dataset's data or metadata. It must be an ISO 639-1, 639-2 or 639-3 code (two/three
            letters), or a special value like "code", "multilingual".
        license (`Union[str, List[str]]`, *optional*):
            License(s) of this dataset. Example: apache-2.0 or any license from
            https://huggingface.co/docs/hub/repositories-licenses.
        annotations_creators (`Union[str, List[str]]`, *optional*):
            How the annotations for the dataset were created.
            Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'no-annotation', 'other'.
        language_creators (`Union[str, List[str]]`, *optional*):
            How the text-based data in the dataset was created.
            Options are: 'found', 'crowdsourced', 'expert-generated', 'machine-generated', 'other'
        multilinguality (`Union[str, List[str]]`, *optional*):
            Whether the dataset is multilingual.
            Options are: 'monolingual', 'multilingual', 'translation', 'other'.
        size_categories (`Union[str, List[str]]`, *optional*):
            The number of examples in the dataset. Options range from 'n<1K' up to 'n>1T', plus 'other'.
        source_datasets (`List[str]]`, *optional*):
            Indicates whether the dataset is an original dataset or extended from another existing dataset.
            Options are: 'original' and 'extended'.
        task_categories (`Union[str, List[str]]`, *optional*):
            What categories of task does the dataset support?
        task_ids (`Union[str, List[str]]`, *optional*):
            What specific tasks does the dataset support?
        paperswithcode_id (`str`, *optional*):
            ID of the dataset on PapersWithCode.
        pretty_name (`str`, *optional*):
            A more human-readable name for the dataset. (ex. "Cats vs. Dogs")
        train_eval_index (`Dict`, *optional*):
            A dictionary that describes the necessary spec for doing evaluation on the Hub.
            If not provided, it will be gathered from the 'train-eval-index' key of the kwargs.
        config_names (`Union[str, List[str]]`, *optional*):
            A list of the available dataset configs for the dataset.
    """

    def __init__(
        self,
        *,
        language: Optional[Union[str, List[str]]] = None,
        license: Optional[Union[str, List[str]]] = None,
        annotations_creators: Optional[Union[str, List[str]]] = None,
        language_creators: Optional[Union[str, List[str]]] = None,
        multilinguality: Optional[Union[str, List[str]]] = None,
        size_categories: Optional[Union[str, List[str]]] = None,
        source_datasets: Optional[List[str]] = None,
        task_categories: Optional[Union[str, List[str]]] = None,
        task_ids: Optional[Union[str, List[str]]] = None,
        paperswithcode_id: Optional[str] = None,
        pretty_name: Optional[str] = None,
        train_eval_index: Optional[Dict] = None,
        config_names: Optional[Union[str, List[str]]] = None,
        ignore_metadata_errors: bool = False,
        **kwargs,
    ):
        self.annotations_creators = annotations_creators
        self.language_creators = language_creators
        self.language = language
        self.license = license
        self.multilinguality = multilinguality
        self.size_categories = size_categories
        self.source_datasets = source_datasets
        self.task_categories = task_categories
        self.task_ids = task_ids
        self.paperswithcode_id = paperswithcode_id
        self.pretty_name = pretty_name
        self.config_names = config_names

        # The metadata key is dashed ("train-eval-index"); accept it from kwargs as a fallback.
        self.train_eval_index = train_eval_index or kwargs.pop("train-eval-index", None)
        super().__init__(**kwargs)

    def _to_dict(self, data_dict):
        data_dict["train-eval-index"] = data_dict.pop("train_eval_index")


class SpaceCardData(CardData):
    """Space Card Metadata that is used by Hugging Face Hub when included at the top of your README.md

    To get an exhaustive reference of Spaces configuration, please visit
    https://huggingface.co/docs/hub/spaces-config-reference#spaces-configuration-reference.

    Args:
        title (`str`, *optional*)
            Title of the Space.
        sdk (`str`, *optional*)
            SDK of the Space (one of `gradio`, `streamlit`, `docker`, or `static`).
        sdk_version (`str`, *optional*)
            Version of the used SDK (if Gradio/Streamlit sdk).
        python_version (`str`, *optional*)
            Python version used in the Space (if Gradio/Streamlit sdk).
        app_file (`str`, *optional*)
            Path to your main application file (which contains either gradio or streamlit Python code, or static
            html code). Path is relative to the root of the repository.
        app_port (`int`, *optional*)
            Port on which your application is running. Used only if sdk is `docker`.
        license (`str`, *optional*)
            License of this model. Example: apache-2.0 or any license from
            https://huggingface.co/docs/hub/repositories-licenses.
        duplicated_from (`str`, *optional*)
            ID of the original Space if this is a duplicated Space.
        models (List[`str`], *optional*)
            List of models related to this Space. Should be a model ID found on https://hf.co/models.
        datasets (`List[str]`, *optional*)
            List of datasets related to this Space. Should be a dataset ID found on https://hf.co/datasets.
        tags (`List[str]`, *optional*)
            List of tags to add to your Space that can be used when filtering on the Hub.
        ignore_metadata_errors (`str`):
            If True, errors while parsing the metadata section will be ignored. Some information might be lost
            during the process. Use it at your own risk.
        kwargs (`dict`, *optional*):
            Additional metadata that will be added to the space card.

    Example:
        ```python
        >>> from huggingface_hub import SpaceCardData
        >>> card_data = SpaceCardData(
        ...     title="Dreambooth Training",
        ...     license="mit",
        ...     sdk="gradio",
        ...     duplicated_from="multimodalart/dreambooth-training"
        ... )
        >>> card_data.to_dict()
        {'title': 'Dreambooth Training', 'sdk': 'gradio', 'license': 'mit', 'duplicated_from': 'multimodalart/dreambooth-training'}
        ```
    """

    def __init__(
        self,
        *,
        title: Optional[str] = None,
        sdk: Optional[str] = None,
        sdk_version: Optional[str] = None,
        python_version: Optional[str] = None,
        app_file: Optional[str] = None,
        app_port: Optional[int] = None,
        license: Optional[str] = None,
        duplicated_from: Optional[str] = None,
        models: Optional[List[str]] = None,
        datasets: Optional[List[str]] = None,
        tags: Optional[List[str]] = None,
        ignore_metadata_errors: bool = False,
        **kwargs,
    ):
        self.title = title
        self.sdk = sdk
        self.sdk_version = sdk_version
        self.python_version = python_version
        self.app_file = app_file
        self.app_port = app_port
        self.license = license
        self.duplicated_from = duplicated_from
        self.models = models
        self.datasets = datasets
        self.tags = _to_unique_list(tags)
        super().__init__(**kwargs)


def model_index_to_eval_results(model_index: List[Dict[str, Any]]) -> Tuple[str, List[EvalResult]]:
    """Takes in a model index and returns the model name and a list of
    `huggingface_hub.EvalResult` objects.

    A detailed spec of the model index can be found here:
    https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1

    Args:
        model_index (`List[Dict[str, Any]]`):
            A model index data structure, likely coming from a README.md file on the
            Hugging Face Hub.

    Returns:
        model_name (`str`):
            The name of the model as found in the model index. This is used as the
            identifier for the model on leaderboards like PapersWithCode.
        eval_results (`List[EvalResult]`):
            A list of `huggingface_hub.EvalResult` objects containing the metrics
            reported in the provided model_index.

    Example:
        ```python
        >>> from huggingface_hub.repocard_data import model_index_to_eval_results
        >>> # Define a minimal model index
        >>> model_index = [
        ...     {
        ...         "name": "my-cool-model",
        ...         "results": [
        ...             {
        ...                 "task": {
        ...                     "type": "image-classification"
        ...                 },
        ...                 "dataset": {
        ...                     "type": "beans",
        ...                     "name": "Beans"
        ...                 },
        ...                 "metrics": [
        ...                     {
        ...                         "type": "accuracy",
        ...                         "value": 0.9
        ...                     }
        ...                 ]
        ...             }
        ...         ]
        ...     }
        ... ]
        >>> model_name, eval_results = model_index_to_eval_results(model_index)
        >>> model_name
        'my-cool-model'
        >>> eval_results[0].task_type
        'image-classification'
        >>> eval_results[0].metric_type
        'accuracy'
        ```
    """
    eval_results = []
    for elem in model_index:
        name = elem["name"]
        results = elem["results"]
        for result in results:
            task_type = result["task"]["type"]
            task_name = result["task"].get("name")
            dataset_type = result["dataset"]["type"]
            dataset_name = result["dataset"]["name"]
            dataset_config = result["dataset"].get("config")
            dataset_split = result["dataset"].get("split")
            dataset_revision = result["dataset"].get("revision")
            dataset_args = result["dataset"].get("args")
            source_name = result.get("source", {}).get("name")
            source_url = result.get("source", {}).get("url")

            for metric in result["metrics"]:
                metric_type = metric["type"]
                metric_value = metric["value"]
                metric_name = metric.get("name")
                metric_args = metric.get("args")
                metric_config = metric.get("config")
                verified = metric.get("verified")
                verify_token = metric.get("verifyToken")

                eval_result = EvalResult(
                    task_type=task_type,  # Required
                    dataset_type=dataset_type,  # Required
                    dataset_name=dataset_name,  # Required
                    metric_type=metric_type,  # Required
                    metric_value=metric_value,  # Required
                    task_name=task_name,
                    dataset_config=dataset_config,
                    dataset_split=dataset_split,
                    dataset_revision=dataset_revision,
                    dataset_args=dataset_args,
                    metric_name=metric_name,
                    metric_args=metric_args,
                    metric_config=metric_config,
                    verified=verified,
                    verify_token=verify_token,
                    source_name=source_name,
                    source_url=source_url,
                )
                eval_results.append(eval_result)
    return name, eval_results


def _remove_none(obj):
    """
    Recursively remove `None` values from a dict. Borrowed from: https://stackoverflow.com/a/20558778
    """
    if isinstance(obj, (list, tuple, set)):
        return type(obj)(_remove_none(x) for x in obj if x is not None)
    elif isinstance(obj, dict):
        return type(obj)(
            (_remove_none(k), _remove_none(v)) for k, v in obj.items() if k is not None and v is not None
        )
    else:
        return obj


def eval_results_to_model_index(model_name: str, eval_results: List[EvalResult]) -> List[Dict[str, Any]]:
    """Takes in given model name and list of `huggingface_hub.EvalResult` and returns a
    valid model-index that will be compatible with the format expected by the
    Hugging Face Hub.

    Args:
        model_name (`str`):
            Name of the model (ex. "my-cool-model"). This is used as the identifier
            for the model on leaderboards like PapersWithCode.
        eval_results (`List[EvalResult]`):
            List of `huggingface_hub.EvalResult` objects containing the metrics to be
            reported in the model-index.

    Returns:
        model_index (`List[Dict[str, Any]]`): The eval_results converted to a model-index.

    Example:
        ```python
        >>> from huggingface_hub.repocard_data import eval_results_to_model_index, EvalResult
        >>> # Define minimal eval_results
        >>> eval_results = [
        ...     EvalResult(
        ...         task_type="image-classification",  # Required
        ...         dataset_type="beans",  # Required
        ...         dataset_name="Beans",  # Required
        ...         metric_type="accuracy",  # Required
        ...         metric_value=0.9,  # Required
        ...     )
        ... ]
        >>> eval_results_to_model_index("my-cool-model", eval_results)
        [{'name': 'my-cool-model', 'results': [{'task': {'type': 'image-classification'}, 'dataset': {'name': 'Beans', 'type': 'beans'}, 'metrics': [{'type': 'accuracy', 'value': 0.9}]}]}]
        ```
    """
    # Metrics are reported on a unique task-and-dataset basis. Here, we map each unique
    # (task, dataset) identifier to the EvalResults that share it.
    task_and_ds_types_map: Dict[Any, List[EvalResult]] = defaultdict(list)
    for eval_result in eval_results:
        task_and_ds_types_map[eval_result.unique_identifier].append(eval_result)

    # Use the map from above to generate the model index data.
    model_index_data = []
    for results in task_and_ds_types_map.values():
        # All items in `results` share the same task/dataset metadata.
        sample_result = results[0]
        data = {
            "task": {
                "type": sample_result.task_type,
                "name": sample_result.task_name,
            },
            "dataset": {
                "name": sample_result.dataset_name,
                "type": sample_result.dataset_type,
                "config": sample_result.dataset_config,
                "split": sample_result.dataset_split,
                "revision": sample_result.dataset_revision,
                "args": sample_result.dataset_args,
            },
            "metrics": [
                {
                    "type": result.metric_type,
                    "value": result.metric_value,
                    "name": result.metric_name,
                    "config": result.metric_config,
                    "args": result.metric_args,
                    "verified": result.verified,
                    "verifyToken": result.verify_token,
                }
                for result in results
            ],
        }
        if sample_result.source_url is not None:
            source = {"url": sample_result.source_url}
            if sample_result.source_name is not None:
                source["name"] = sample_result.source_name
            data["source"] = source
        model_index_data.append(data)

    # Finally, the model index itself is a list of dicts.
    model_index = [
        {
            "name": model_name,
            "results": model_index_data,
        }
    ]
    return _remove_none(model_index)


def _to_unique_list(tags: Optional[List[str]]) -> Optional[List[str]]:
    if tags is None:
        return tags
    unique_tags = []  # make tags unique while keeping order
    for tag in tags:
        if tag not in unique_tags:
            unique_tags.append(tag)
    return unique_tags