import io
from typing import Any, Dict, List, Optional, Union

from . import constants
from .hf_api import HfApi
from .utils import build_hf_headers, get_session, is_pillow_available, logging, validate_hf_hub_args
from .utils._deprecation import _deprecate_method


logger = logging.get_logger(__name__)


ALL_TASKS = [
    # NLP
    "text-classification",
    "token-classification",
    "table-question-answering",
    "question-answering",
    "zero-shot-classification",
    "translation",
    "summarization",
    "conversational",
    "feature-extraction",
    "text-generation",
    "text2text-generation",
    "fill-mask",
    "sentence-similarity",
    # Audio
    "text-to-speech",
    "automatic-speech-recognition",
    "audio-to-audio",
    "audio-classification",
    "voice-activity-detection",
    # Computer vision
    "image-classification",
    "object-detection",
    "image-segmentation",
    "text-to-image",
    "image-to-image",
    # Others
    "tabular-classification",
    "tabular-regression",
]


class InferenceApi:
    """Client to configure requests and make calls to the HuggingFace Inference API.

    Example:

    ```python
    >>> from huggingface_hub.inference_api import InferenceApi

    >>> # Mask-fill example
    >>> inference = InferenceApi("bert-base-uncased")
    >>> inference(inputs="The goal of life is [MASK].")
    [{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}]

    >>> # Question Answering example
    >>> inference = InferenceApi("deepset/roberta-base-squad2")
    >>> inputs = {
    ...     "question": "What's my name?",
    ...     "context": "My name is Clara and I live in Berkeley.",
    ... }
    >>> inference(inputs)
    {'score': 0.9326569437980652, 'start': 11, 'end': 16, 'answer': 'Clara'}

    >>> # Zero-shot example
    >>> inference = InferenceApi("typeform/distilbert-base-uncased-mnli")
    >>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
    >>> params = {"candidate_labels": ["refund", "legal", "faq"]}
    >>> inference(inputs, params)
    {'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!',
    'labels': ['refund', 'faq', 'legal'],
    'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]}

    >>> # Overriding configured task
    >>> inference = InferenceApi("bert-base-uncased", task="feature-extraction")

    >>> # Text-to-image
    >>> inference = InferenceApi("stabilityai/stable-diffusion-2-1")
    >>> inference("cat")

    >>> # Return as raw response to parse the output yourself
    >>> inference = InferenceApi("mio/amadeus")
    >>> response = inference("hello world", raw_response=True)
    >>> response.headers
    {"Content-Type": "audio/flac", ...}
    >>> response.content  # raw bytes from server
    b'(...)'
    ```
    """

    @validate_hf_hub_args
    @_deprecate_method(
        version="1.0",
        message=(
            "`InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out"
            " this guide to learn how to convert your script to use it:"
            " https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client."
        ),
    )
    def __init__(
        self,
        repo_id: str,
        task: Optional[str] = None,
        token: Optional[str] = None,
        gpu: bool = False,
    ):
        """Inits headers and API call information.

        Args:
            repo_id (``str``):
                Id of repository (e.g. `user/bert-base-uncased`).
            task (``str``, `optional`, defaults ``None``):
                Whether to force a task instead of using the task specified in the
                repository.
            token (`str`, `optional`):
                The API token to use as HTTP bearer authorization. This is not
                the authentication token. You can find the token in
                https://huggingface.co/settings/token. Alternatively, you can
                find both your organizations and personal API tokens using
                `HfApi().whoami(token)`.
            gpu (`bool`, `optional`, defaults `False`):
                Whether to use GPU instead of CPU for inference (requires Startup
                plan at least).
        """
        self.options = {"wait_for_model": True, "use_gpu": gpu}
        self.headers = build_hf_headers(token=token)

        # Configure task: either forced by the caller or read from the repo's pipeline tag.
        model_info = HfApi(token=token).model_info(repo_id=repo_id)
        if not model_info.pipeline_tag and not task:
            raise ValueError(
                "Task not specified in the repository. Please add it to the model card"
                " using pipeline_tag"
                " (https://huggingface.co/docs#how-is-a-models-type-of-inference-api-and-widget-determined)"
            )

        if task and task != model_info.pipeline_tag:
            if task not in ALL_TASKS:
                raise ValueError(f"Invalid task {task}. Make sure it's valid.")

            logger.warning(
                "You're using a different task than the one specified in the repository. Be sure to know what you're"
                " doing :)"
            )
            self.task = task
        else:
            assert model_info.pipeline_tag is not None, "Pipeline tag cannot be None"
            self.task = model_info.pipeline_tag

        self.api_url = f"{constants.INFERENCE_ENDPOINT}/pipeline/{self.task}/{repo_id}"

    def __repr__(self):
        # Headers are deliberately left out of the repr: they contain the user token.
        return f"InferenceAPI(api_url='{self.api_url}', task='{self.task}', options={self.options})"

    def __call__(
        self,
        inputs: Optional[Union[str, Dict, List[str], List[List[str]]]] = None,
        params: Optional[Dict] = None,
        data: Optional[bytes] = None,
        raw_response: bool = False,
    ) -> Any:
        """Make a call to the Inference API.

        Args:
            inputs (`str` or `Dict` or `List[str]` or `List[List[str]]`, *optional*):
                Inputs for the prediction.
            params (`Dict`, *optional*):
                Additional parameters for the models. Will be sent as `parameters` in the
                payload.
            data (`bytes`, *optional*):
                Bytes content of the request. In this case, leave `inputs` and `params` empty.
            raw_response (`bool`, defaults to `False`):
                If `True`, the raw `Response` object is returned. You can parse its content
                as preferred. By default, the content is parsed into a more practical format
                (json dictionary or PIL Image for example).
        """
        # Build payload
        payload: Dict[str, Any] = {
            "options": self.options,
        }
        if inputs:
            payload["inputs"] = inputs
        if params:
            payload["parameters"] = params

        # Make API call
        response = get_session().post(self.api_url, headers=self.headers, json=payload, data=data)

        # Let the user handle the response
        if raw_response:
            return response

        # By default, parse the response for the user.
        content_type = response.headers.get("Content-Type") or ""
        if content_type.startswith("image"):
            if not is_pillow_available():
                raise ImportError(
                    f"Task '{self.task}' returned as image but Pillow is not installed."
                    " Please install it (`pip install Pillow`) or pass"
                    " `raw_response=True` to get the raw `Response` object and parse"
                    " the image by yourself."
                )

            from PIL import Image

            return Image.open(io.BytesIO(response.content))
        elif content_type == "application/json":
            return response.json()
        else:
            raise NotImplementedError(
                f"{content_type} output type is not implemented yet. You can pass"
                " `raw_response=True` to get the raw `Response` object and parse the"
                " output by yourself."
            )
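

# --- Usage sketch (not part of the library) ---------------------------------
# A minimal demo, assuming network access to the Inference API and, for gated
# models, a token configured via `huggingface-cli login` or the HF_TOKEN
# environment variable. The model id is illustrative; any fill-mask model
# works. The second half shows the `InferenceClient` replacement that the
# deprecation message above points to.
if __name__ == "__main__":
    # Legacy client (deprecated): the task is resolved from the repo's
    # `pipeline_tag` and the JSON response is parsed before being returned.
    inference = InferenceApi("bert-base-uncased")
    print(inference(inputs="The goal of life is [MASK]."))

    # Modern replacement: `InferenceClient` exposes one method per task.
    from huggingface_hub import InferenceClient

    client = InferenceClient()
    print(client.fill_mask("The goal of life is [MASK].", model="bert-base-uncased"))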