import enum
import itertools
import types
from typing import Any, overload

from ..generation import GenerationConfig
from ..utils import ModelOutput, add_end_docstrings, is_tf_available, is_torch_available
from .base import Pipeline, build_pipeline_init_args


if is_torch_available():
    import torch

    from ..models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
    from .pt_utils import KeyDataset

if is_tf_available():
    import tensorflow as tf

    from ..models.auto.modeling_tf_auto import TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES


# A chat is a list of {"role": ..., "content": ...} message dicts.
ChatType = list[dict[str, str]]


class ReturnType(enum.Enum):
    TENSORS = 0
    NEW_TEXT = 1
    FULL_TEXT = 2


class Chat:
    """This class is intended to just be used internally in this pipeline and not exposed to users. We convert chats
    to this format because the rest of the pipeline code tends to assume that lists of messages are
    actually a batch of samples rather than messages in the same conversation."""

    def __init__(self, messages: ChatType):
        for message in messages:
            if not ("role" in message and "content" in message):
                raise ValueError("When passing chat dicts as input, each dict must have a 'role' and 'content' key.")
        self.messages = messages
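
# Illustrative sketch of the validation performed above (not executed on import):
#
#     Chat([{"role": "user", "content": "Hi"}])   # ok
#     Chat([{"content": "Hi"}])                   # raises ValueError (missing "role")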
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TextGenerationPipeline(Pipeline):
    """
    Language generation pipeline using any `ModelWithLMHead` or `ModelForCausalLM`. This pipeline predicts the words
    that will follow a specified text prompt. When the underlying model is a conversational model, it can also accept
    one or more chats, in which case the pipeline will operate in chat mode and will continue the chat(s) by adding
    its response(s). Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys.

    Unless the model you're using explicitly sets these generation parameters in its configuration files
    (`generation_config.json`), the following default values will be used:
    - max_new_tokens: 256
    - do_sample: True
    - temperature: 0.7

    Examples:

    ```python
    >>> from transformers import pipeline

    >>> generator = pipeline(model="openai-community/gpt2")
    >>> generator("I can't believe you did such a ", do_sample=False)
    [{'generated_text': "I can't believe you did such a icky thing to me. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I"}]

    >>> # These parameters will return suggestions, and only the newly created text making it easier for prompting suggestions.
    >>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)
    ```

    ```python
    >>> from transformers import pipeline

    >>> generator = pipeline(model="HuggingFaceH4/zephyr-7b-beta")
    >>> # Zephyr-beta is a conversational model, so let's pass it a chat instead of a single string
    >>> generator([{"role": "user", "content": "What is the capital of France? Answer in one word."}], do_sample=False, max_new_tokens=2)
    [{'generated_text': [{'role': 'user', 'content': 'What is the capital of France? Answer in one word.'}, {'role': 'assistant', 'content': 'Paris'}]}]
    ```

    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass
    text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more
    about text generation parameters in [Text generation strategies](../generation_strategies) and
    [Text generation](text_generation).

    This language generation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
    `"text-generation"`.

    The models that this pipeline can use are models that have been trained with an autoregressive language modeling
    objective. See the list of available [text completion models](https://huggingface.co/models?filter=text-generation)
    and the list of [conversational models](https://huggingface.co/models?other=conversational) on
    [huggingface.co/models].
    """

    # Padding text to help XLNet and Transformer-XL with short prompts, as proposed by Aman Rusia
    # in https://github.com/rusiaaman/XLNet-gen#methodology
    XL_PREFIX = """
    In 1991, the remains of Russian Tsar Nicholas II and his family (except for Alexei and Maria) are discovered. The
    voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the remainder of the story. 1883 Western
    Siberia, a young Grigori Rasputin is asked by his father and a group of men to perform magic. Rasputin has a
    vision and denounces one of the men as a horse thief. Although his father initially slaps him for making such an
    accusation, Rasputin watches as the man is chased outside and beaten. Twenty years later, Rasputin sees a vision
    of the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous, with people, even a bishop,
    begging for his blessing. """

    _pipeline_calls_generate = True
    _load_processor = False
    _load_image_processor = False
    _load_feature_extractor = False
    _load_tokenizer = True

    # Make sure the docstring above stays in sync with these defaults.
    _default_generation_config = GenerationConfig(
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.check_model_type(
            TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES if self.framework == "tf" else MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
        )
        if "prefix" not in self._preprocess_params:
            # This is done as a "default": XLNet and Transformer-XL need a long article prepended to
            # the prompt to behave well on short inputs. It defines both preprocess and generate
            # kwargs, which is why it cannot live in either method alone.
            prefix = None
            if self.model.config.prefix is not None:
                prefix = self.model.config.prefix
            if prefix is None and self.model.__class__.__name__ in [
                "XLNetLMHeadModel",
                "TransfoXLLMHeadModel",
                "TFXLNetLMHeadModel",
                "TFTransfoXLLMHeadModel",
            ]:
                prefix = self.XL_PREFIX
            if prefix is not None:
                # Recalculate some generate_kwargs linked to prefix.
                preprocess_params, forward_params, _ = self._sanitize_parameters(
                    prefix=prefix, **self._forward_params
                )
                self._preprocess_params = {**self._preprocess_params, **preprocess_params}
                self._forward_params = {**self._forward_params, **forward_params}

    def _sanitize_parameters(
        self,
        return_full_text=None,
        return_tensors=None,
        return_text=None,
        return_type=None,
        clean_up_tokenization_spaces=None,
        prefix=None,
        handle_long_generation=None,
        stop_sequence=None,
        truncation=None,
        max_length=None,
        continue_final_message=None,
        skip_special_tokens=None,
        tokenizer_encode_kwargs=None,
        **generate_kwargs,
    ):
        preprocess_params = {}

        add_special_tokens = False
        if "add_special_tokens" in generate_kwargs:
            add_special_tokens = preprocess_params["add_special_tokens"] = generate_kwargs.pop("add_special_tokens")

        if "padding" in generate_kwargs:
            preprocess_params["padding"] = generate_kwargs.pop("padding")

        if truncation is not None:
            preprocess_params["truncation"] = truncation

        if max_length is not None:
            preprocess_params["max_length"] = max_length
            generate_kwargs["max_length"] = max_length

        if prefix is not None:
            preprocess_params["prefix"] = prefix
        if prefix:
            prefix_inputs = self.tokenizer(
                prefix, padding=False, add_special_tokens=add_special_tokens, return_tensors=self.framework
            )
            generate_kwargs["prefix_length"] = prefix_inputs["input_ids"].shape[-1]

        if handle_long_generation is not None:
            if handle_long_generation != "hole":
                raise ValueError(
                    f"{handle_long_generation} is not a valid value for `handle_long_generation` parameter expected"
                    " [None, 'hole']"
                )
            preprocess_params["handle_long_generation"] = handle_long_generation

        if continue_final_message is not None:
            preprocess_params["continue_final_message"] = continue_final_message

        if tokenizer_encode_kwargs is not None:
            preprocess_params["tokenizer_encode_kwargs"] = tokenizer_encode_kwargs

        # `preprocess` also receives the generate kwargs, e.g. so that `handle_long_generation`
        # can reserve room for `max_new_tokens`.
        preprocess_params.update(generate_kwargs)

        if stop_sequence is not None:
            stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
            generate_kwargs["eos_token_id"] = stop_sequence_ids

        forward_params = generate_kwargs
        if self.assistant_model is not None:
            forward_params["assistant_model"] = self.assistant_model
        if self.assistant_tokenizer is not None:
            forward_params["tokenizer"] = self.tokenizer
            forward_params["assistant_tokenizer"] = self.assistant_tokenizer

        postprocess_params = {}
        if return_full_text is not None and return_type is None:
            if return_text is not None:
                raise ValueError("`return_text` is mutually exclusive with `return_full_text`")
            if return_tensors is not None:
                raise ValueError("`return_full_text` is mutually exclusive with `return_tensors`")
            return_type = ReturnType.FULL_TEXT if return_full_text else ReturnType.NEW_TEXT
        if return_tensors is not None and return_type is None:
            if return_text is not None:
                raise ValueError("`return_text` is mutually exclusive with `return_tensors`")
            return_type = ReturnType.TENSORS
        if return_type is not None:
            postprocess_params["return_type"] = return_type
        if clean_up_tokenization_spaces is not None:
            postprocess_params["clean_up_tokenization_spaces"] = clean_up_tokenization_spaces
        if skip_special_tokens is not None:
            postprocess_params["skip_special_tokens"] = skip_special_tokens

        return preprocess_params, forward_params, postprocess_params

    def _parse_and_tokenize(self, *args, **kwargs):
        """
        Parse arguments and tokenize
        """
        # Parse arguments
        if self.model.__class__.__name__ == "TransfoXLLMHeadModel":
            kwargs.update({"add_space_before_punct_symbol": True})

        return super()._parse_and_tokenize(*args, **kwargs)

    @overload
    def __call__(self, text_inputs: str, **kwargs: Any) -> list[dict[str, str]]: ...

    @overload
    def __call__(self, text_inputs: list[str], **kwargs: Any) -> list[list[dict[str, str]]]: ...

    @overload
    def __call__(self, text_inputs: ChatType, **kwargs: Any) -> list[dict[str, ChatType]]: ...

    @overload
    def __call__(self, text_inputs: list[ChatType], **kwargs: Any) -> list[list[dict[str, ChatType]]]: ...
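
    # Note on the overloads above: a single prompt (one string or one chat) returns a list of
    # generation dicts, while a batch (a list of strings or a list of chats) returns one such
    # list per input.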
    def __call__(self, text_inputs, **kwargs):
        """
        Complete the prompt(s) given as inputs.

        Args:
            text_inputs (`str`, `list[str]`, `list[dict[str, str]]`, or `list[list[dict[str, str]]]`):
                One or several prompts (or one list of prompts) to complete. If strings or a list of strings are
                passed, this pipeline will continue each prompt. Alternatively, a "chat", in the form of a list of
                dicts with "role" and "content" keys, can be passed, or a list of such chats. When chats are passed,
                the model's chat template will be used to format them before passing them to the model.
            return_tensors (`bool`, *optional*, defaults to `False`):
                Returns the tensors of predictions (as token indices) in the outputs. If set to `True`, the decoded
                text is not returned.
            return_text (`bool`, *optional*):
                Returns the decoded texts in the outputs.
            return_full_text (`bool`, *optional*, defaults to `True`):
                If set to `False`, only the added text is returned; otherwise the full text is returned. Cannot be
                specified at the same time as `return_text`.
            clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
                Whether or not to clean up the potential extra spaces in the text output.
            continue_final_message (`bool`, *optional*):
                This indicates that you want the model to continue the last message in the input chat rather than
                starting a new one, allowing you to "prefill" its response. By default this is `True` when the final
                message in the input chat has the `assistant` role and `False` otherwise, but you can manually
                override that behaviour by setting this flag; see the example below.
            prefix (`str`, *optional*):
                Prefix added to the prompt.
            handle_long_generation (`str`, *optional*):
                By default, this pipeline does not handle long generation (generation that exceeds, in one form or
                another, the model's maximum length). There is no perfect way to address this (more info:
                https://github.com/huggingface/transformers/issues/14033#issuecomment-948385227). This provides
                common strategies to work around the problem depending on your use case:

                - `None`: default strategy, where nothing in particular happens
                - `"hole"`: Truncates the left of the input and leaves a gap wide enough to let generation happen
                  (this might truncate a lot of the prompt, and is not suitable when the desired generation exceeds
                  the model capacity)
            tokenizer_encode_kwargs (`dict`, *optional*):
                Additional keyword arguments to pass along to the encoding step of the tokenizer. If the text input
                is a chat, it is passed to `apply_chat_template`. Otherwise, it is passed to `__call__`.
            generate_kwargs (`dict`, *optional*):
                Additional keyword arguments to pass along to the generate method of the model (see the generate
                method corresponding to your framework [here](./text_generation)).

        Return:
            A list or a list of lists of `dict`: Returns one of the following dictionaries (cannot return a
            combination of both `generated_text` and `generated_token_ids`):

            - **generated_text** (`str`, present when `return_text=True`) -- The generated text.
            - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when `return_tensors=True`) -- The
              token ids of the generated text.
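
        Example (a minimal sketch of assistant prefill; any chat-capable checkpoint can stand in for the one below):

        ```python
        >>> from transformers import pipeline

        >>> generator = pipeline(model="HuggingFaceH4/zephyr-7b-beta")
        >>> # Ending the chat on an `assistant` message makes the model continue that message
        >>> # instead of starting a new one (`continue_final_message` defaults to `True` here).
        >>> chat = [
        ...     {"role": "user", "content": "What is the capital of France? Answer in one word."},
        ...     {"role": "assistant", "content": "The capital of France is"},
        ... ]
        >>> outputs = generator(chat, max_new_tokens=4)
        ```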
        """
        if isinstance(
            text_inputs,
            (list, tuple, types.GeneratorType, KeyDataset)
            if is_torch_available()
            else (list, tuple, types.GeneratorType),
        ):
            if isinstance(text_inputs, types.GeneratorType):
                # Generators can't be indexed, so peek at the first item to detect chat mode
                # without consuming the stream.
                text_inputs, _ = itertools.tee(text_inputs)
                text_inputs, first_item = (x for x in text_inputs), next(_)
            else:
                first_item = text_inputs[0]
            if isinstance(first_item, (list, tuple, dict)):
                # We have one or more prompts in list-of-dicts format, so this is chat mode.
                if isinstance(first_item, dict):
                    return super().__call__(Chat(text_inputs), **kwargs)
                else:
                    chats = (Chat(chat) for chat in text_inputs)
                    if isinstance(text_inputs, types.GeneratorType):
                        return super().__call__(chats, **kwargs)
                    else:
                        return super().__call__(list(chats), **kwargs)
        return super().__call__(text_inputs, **kwargs)

    def preprocess(
        self,
        prompt_text,
        prefix="",
        handle_long_generation=None,
        add_special_tokens=None,
        truncation=None,
        padding=None,
        max_length=None,
        continue_final_message=None,
        tokenizer_encode_kwargs=None,
        **generate_kwargs,
    ):
        # Only set non-None tokenizer kwargs, so as to rely on the tokenizer's defaults
        tokenizer_kwargs = {
            "add_special_tokens": add_special_tokens,
            "truncation": truncation,
            "padding": padding,
            "max_length": max_length,
        }
        tokenizer_kwargs = {key: value for key, value in tokenizer_kwargs.items() if value is not None}
        tokenizer_kwargs.update(tokenizer_encode_kwargs or {})

        if isinstance(prompt_text, Chat):
            tokenizer_kwargs.pop("add_special_tokens", None)  # `apply_chat_template` handles special tokens itself
            if continue_final_message is None:
                # If the user passes a chat that ends in an assistant message, we treat it as a prefill by
                # default, because very few models support multiple separate, consecutive assistant messages.
                continue_final_message = prompt_text.messages[-1]["role"] == "assistant"
            inputs = self.tokenizer.apply_chat_template(
                prompt_text.messages,
                add_generation_prompt=not continue_final_message,
                continue_final_message=continue_final_message,
                return_dict=True,
                return_tensors=self.framework,
                **tokenizer_kwargs,
            )
        else:
            inputs = self.tokenizer(prefix + prompt_text, return_tensors=self.framework, **tokenizer_kwargs)

        inputs["prompt_text"] = prompt_text

        if handle_long_generation == "hole":
            cur_len = inputs["input_ids"].shape[-1]
            if "max_new_tokens" in generate_kwargs:
                new_tokens = generate_kwargs["max_new_tokens"]
            else:
                new_tokens = generate_kwargs.get("max_length", self.generation_config.max_length) - cur_len
                if new_tokens < 0:
                    raise ValueError("We cannot infer how many new tokens are expected")
            if cur_len + new_tokens > self.tokenizer.model_max_length:
                keep_length = self.tokenizer.model_max_length - new_tokens
                if keep_length <= 0:
                    raise ValueError(
                        "We cannot use `hole` to handle this generation: the number of desired tokens exceeds the"
                        " model's max length"
                    )
                # Truncate from the left so that exactly `new_tokens` of room remain.
                inputs["input_ids"] = inputs["input_ids"][:, -keep_length:]
                if "attention_mask" in inputs:
                    inputs["attention_mask"] = inputs["attention_mask"][:, -keep_length:]

        return inputs

    def _forward(self, model_inputs, **generate_kwargs):
        input_ids = model_inputs["input_ids"]
        attention_mask = model_inputs.get("attention_mask", None)
        # Allow empty prompts
        if input_ids.shape[1] == 0:
            input_ids = None
            attention_mask = None
            in_b = 1
        else:
            in_b = input_ids.shape[0]
        prompt_text = model_inputs.pop("prompt_text")

        # If there is a prefix, we may need to adjust the generation length. Do so without permanently
        # modifying generate_kwargs, as some of the parameterization may come from the initialization of
        # the pipeline.
        prefix_length = generate_kwargs.pop("prefix_length", 0)
        if prefix_length > 0:
            has_max_new_tokens = "max_new_tokens" in generate_kwargs or (
                "generation_config" in generate_kwargs
                and generate_kwargs["generation_config"].max_new_tokens is not None
            )
            if not has_max_new_tokens:
                generate_kwargs["max_length"] = generate_kwargs.get("max_length") or self.generation_config.max_length
                generate_kwargs["max_length"] += prefix_length
            has_min_new_tokens = "min_new_tokens" in generate_kwargs or (
                "generation_config" in generate_kwargs
                and generate_kwargs["generation_config"].min_new_tokens is not None
            )
            if not has_min_new_tokens and "min_length" in generate_kwargs:
                generate_kwargs["min_length"] += prefix_length

        # A user-defined `generation_config` passed at call time takes precedence.
        if "generation_config" not in generate_kwargs:
            generate_kwargs["generation_config"] = self.generation_config

        output = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)

        if isinstance(output, ModelOutput):
            generated_sequence = output.sequences
            other_outputs = {k: v for k, v in output.items() if k not in {"sequences", "past_key_values"}}
            out_b = generated_sequence.shape[0]

            # Reshape per-sequence extras from (batch * num_return_sequences, ...) to
            # (batch, num_return_sequences, ...).
            if self.framework == "pt":
                for key, value in other_outputs.items():
                    if isinstance(value, torch.Tensor) and value.shape[0] == out_b:
                        other_outputs[key] = value.reshape(in_b, out_b // in_b, *value.shape[1:])
                    if isinstance(value, tuple) and len(value[0]) == out_b:
                        value = torch.stack(value).swapaxes(0, 1)
                        other_outputs[key] = value
            elif self.framework == "tf":
                for key, value in other_outputs.items():
                    if isinstance(value, tf.Tensor) and value.shape[0] == out_b:
                        other_outputs[key] = tf.reshape(value, (in_b, out_b // in_b, *value.shape[1:]))
                    if isinstance(value, tuple) and len(value[0]) == out_b:
                        value = tf.stack(value)
                        value = tf.transpose(value, perm=[1, 0, *range(2, len(value.shape))])
                        other_outputs[key] = value
        else:
            generated_sequence = output
            other_outputs = {}

        out_b = generated_sequence.shape[0]
        if self.framework == "pt":
            generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *generated_sequence.shape[1:])
        elif self.framework == "tf":
            generated_sequence = tf.reshape(generated_sequence, (in_b, out_b // in_b, *generated_sequence.shape[1:]))

        model_outputs = {
            "generated_sequence": generated_sequence,
            "input_ids": input_ids,
            "prompt_text": prompt_text,
        }
        if other_outputs:
            model_outputs["additional_outputs"] = other_outputs
        return model_outputs

    def postprocess(
        self,
        model_outputs,
        return_type=ReturnType.FULL_TEXT,
        clean_up_tokenization_spaces=True,
        continue_final_message=None,
        skip_special_tokens=True,
    ):
        generated_sequence = model_outputs["generated_sequence"][0]
        input_ids = model_outputs["input_ids"]
        prompt_text = model_outputs["prompt_text"]
        generated_sequence = generated_sequence.numpy().tolist()
        records = []

        # Per-sequence extras returned by `generate` are split across the records below.
        additional_outputs = model_outputs.get("additional_outputs", {})
        split_keys = {}
        if additional_outputs:
            if self.framework == "pt":
                for k, v in additional_outputs.items():
                    if isinstance(v, torch.Tensor) and v.shape[0] == len(generated_sequence):
                        split_keys[k] = v.numpy().tolist()
            elif self.framework == "tf":
                for k, v in additional_outputs.items():
                    if isinstance(v, tf.Tensor) and v.shape[0] == len(generated_sequence):
                        split_keys[k] = v.numpy().tolist()

        for idx, sequence in enumerate(generated_sequence):
            if return_type == ReturnType.TENSORS:
                record = {"generated_token_ids": sequence}
            elif return_type in {ReturnType.NEW_TEXT, ReturnType.FULL_TEXT}:
                # Decode text
                text = self.tokenizer.decode(
                    sequence,
                    skip_special_tokens=skip_special_tokens,
                    clean_up_tokenization_spaces=clean_up_tokenization_spaces,
                )
                # Remove the padding prompt from the decoded sequence (used by XLNet and Transfo-XL).
                if input_ids is None:
                    prompt_length = 0
                else:
                    prompt_length = len(
                        self.tokenizer.decode(
                            input_ids[0],
                            skip_special_tokens=skip_special_tokens,
                            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
                        )
                    )

                all_text = text[prompt_length:]
                if return_type == ReturnType.FULL_TEXT:
                    if isinstance(prompt_text, str):
                        all_text = prompt_text + all_text
                    elif isinstance(prompt_text, Chat):
                        if continue_final_message is None:
                            # If the user passes a chat ending in an assistant message, we treat it as a prefill
                            # by default, because very few models support multiple separate, consecutive assistant
                            # messages.
                            continue_final_message = prompt_text.messages[-1]["role"] == "assistant"
                        if continue_final_message:
                            # With assistant prefill, concatenate the new text onto the final message.
                            all_text = list(prompt_text.messages)[:-1] + [
                                {
                                    "role": prompt_text.messages[-1]["role"],
                                    "content": prompt_text.messages[-1]["content"] + all_text,
                                }
                            ]
                        else:
                            # Without a prefill, the output becomes a new assistant message.
                            all_text = list(prompt_text.messages) + [{"role": "assistant", "content": all_text}]
                record = {"generated_text": all_text}
            for key, values in split_keys.items():
                record[key] = values[idx]
            records.append(record)

        return records
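

# Illustrative use of `handle_long_generation="hole"` (a sketch, not executed on import;
# the checkpoint name is just an example):
#
#     from transformers import pipeline
#
#     generator = pipeline("text-generation", model="openai-community/gpt2")
#     very_long_prompt = "word " * 5000  # far beyond GPT-2's 1024-token context
#     # The prompt is truncated from the left so the requested new tokens still fit.
#     outputs = generator(very_long_prompt, handle_long_generation="hole", max_new_tokens=20)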