from __future__ import annotations

from typing import List
from typing_extensions import Literal, overload

import httpx

from .. import _legacy_response
from ..types import completion_create_params
from .._types import Body, Omit, Query, Headers, NotGiven, SequenceNotStr, omit, not_given
from .._utils import is_given, required_args, maybe_transform, strip_not_given, async_maybe_transform
from .._compat import cached_property
from .._resource import SyncAPIResource, AsyncAPIResource
from .._response import to_streamed_response_wrapper, async_to_streamed_response_wrapper
from .._constants import DEFAULT_TIMEOUT
from .._streaming import Stream, AsyncStream
from .._base_client import make_request_options
from ..types.completion import Completion
from ..types.model_param import ModelParam
from ..types.metadata_param import MetadataParam
from ..types.anthropic_beta_param import AnthropicBetaParam

__all__ = ["Completions", "AsyncCompletions"]


class Completions(SyncAPIResource):
    @cached_property
    def with_raw_response(self) -> CompletionsWithRawResponse:
        """
        This property can be used as a prefix for any HTTP method call to return
        the raw response object instead of the parsed content.

        For more information, see https://www.github.com/anthropics/anthropic-sdk-python#accessing-raw-response-data-eg-headers
        """
        return CompletionsWithRawResponse(self)

    @cached_property
    def with_streaming_response(self) -> CompletionsWithStreamingResponse:
        """
        An alternative to `.with_raw_response` that doesn't eagerly read the response body.

        For more information, see https://www.github.com/anthropics/anthropic-sdk-python#with_streaming_response
        """
        return CompletionsWithStreamingResponse(self)

    @overload
    def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        stream: Literal[False] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion:
        """[Legacy] Create a Text Completion.

        The Text Completions API is a legacy API. We recommend using the
        [Messages API](https://docs.claude.com/en/api/messages) going forward.

        Future models and features will not be compatible with Text Completions. See our
        [migration guide](https://docs.claude.com/en/api/migrating-from-text-completions-to-messages)
        for guidance in migrating from Text Completions to Messages.

        Args:
          max_tokens_to_sample: The maximum number of tokens to generate before stopping.

              Note that our models may stop _before_ reaching this maximum. This parameter
              only specifies the absolute maximum number of tokens to generate.

          model: The model that will complete your prompt.

              See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
              details and options.

          prompt: The prompt that you want Claude to complete.

              For proper response generation you will need to format your prompt using
              alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns. For example:

              ```
              "\n\nHuman: {userQuestion}\n\nAssistant:"
              ```

              See [prompt validation](https://docs.claude.com/en/api/prompt-validation) and
              our guide to [prompt design](https://docs.claude.com/en/docs/intro-to-prompting)
              for more details.

          metadata: An object describing metadata about the request.

          stop_sequences: Sequences that will cause the model to stop generating.

              Our models stop on `"\n\nHuman:"`, and may include additional built-in stop
              sequences in the future. By providing the stop_sequences parameter, you may
              include additional strings that will cause the model to stop generating.

          stream: Whether to incrementally stream the response using server-sent events.

              See [streaming](https://docs.claude.com/en/api/streaming) for details.

          temperature: Amount of randomness injected into the response.

              Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
              for analytical / multiple choice, and closer to `1.0` for creative and
              generative tasks.

              Note that even with `temperature` of `0.0`, the results will not be fully
              deterministic.

          top_k: Only sample from the top K options for each subsequent token.

              Used to remove "long tail" low probability responses.
              [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).

              Recommended for advanced use cases only. You usually only need to use
              `temperature`.

          top_p: Use nucleus sampling.

              In nucleus sampling, we compute the cumulative distribution over all the options
              for each subsequent token in decreasing probability order and cut it off once it
              reaches a particular probability specified by `top_p`. You should either alter
              `temperature` or `top_p`, but not both.

              Recommended for advanced use cases only. You usually only need to use
              `temperature`.

          betas: Optional header to specify the beta version(s) you want to use.

          extra_headers: Send extra headers

          extra_query: Add additional query parameters to the request

          extra_body: Add additional JSON properties to the request

          timeout: Override the client-level default timeout for this request, in seconds
        """
        ...

    @overload
    def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        stream: Literal[True],
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Stream[Completion]:
        """[Legacy] Create a streaming Text Completion.

        Parameters are documented on the non-streaming overload above.
        """
        ...

    @overload
    def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        stream: bool,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion | Stream[Completion]:
        """[Legacy] Create a Text Completion.

        Parameters are documented on the non-streaming overload above.
        """
        ...

    @required_args(["max_tokens_to_sample", "model", "prompt"], ["max_tokens_to_sample", "model", "prompt", "stream"])
    def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        stream: Literal[False] | Literal[True] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion | Stream[Completion]:
        if not is_given(timeout) and self._client.timeout == DEFAULT_TIMEOUT:
            timeout = 600
        extra_headers = {
            **strip_not_given({"anthropic-beta": ",".join(str(e) for e in betas) if is_given(betas) else not_given}),
            **(extra_headers or {}),
        }
        return self._post(
            "/v1/complete",
            body=maybe_transform(
                {
                    "max_tokens_to_sample": max_tokens_to_sample,
                    "model": model,
                    "prompt": prompt,
                    "metadata": metadata,
                    "stop_sequences": stop_sequences,
                    "stream": stream,
                    "temperature": temperature,
                    "top_k": top_k,
                    "top_p": top_p,
                },
                completion_create_params.CompletionCreateParamsStreaming
                if stream
                else completion_create_params.CompletionCreateParamsNonStreaming,
            ),
            options=make_request_options(
                extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
            ),
            cast_to=Completion,
            stream=stream or False,
            stream_cls=Stream[Completion],
        )


class AsyncCompletions(AsyncAPIResource):
    @cached_property
    def with_raw_response(self) -> AsyncCompletionsWithRawResponse:
        """
        This property can be used as a prefix for any HTTP method call to return
        the raw response object instead of the parsed content.

        For more information, see https://www.github.com/anthropics/anthropic-sdk-python#accessing-raw-response-data-eg-headers
        """
        return AsyncCompletionsWithRawResponse(self)

    @cached_property
    def with_streaming_response(self) -> AsyncCompletionsWithStreamingResponse:
        """
        An alternative to `.with_raw_response` that doesn't eagerly read the response body.

        For more information, see https://www.github.com/anthropics/anthropic-sdk-python#with_streaming_response
        """
        return AsyncCompletionsWithStreamingResponse(self)

    @overload
    async def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        stream: Literal[False] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion:
        """[Legacy] Create a Text Completion.

        Parameters are documented on `Completions.create` above.
        """
        ...

    @overload
    async def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        stream: Literal[True],
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> AsyncStream[Completion]:
        """[Legacy] Create a streaming Text Completion.

        Parameters are documented on `Completions.create` above.
        """
        ...

    @overload
    async def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        stream: bool,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion | AsyncStream[Completion]:
        """[Legacy] Create a Text Completion.

        Parameters are documented on `Completions.create` above.
        """
        ...

    @required_args(["max_tokens_to_sample", "model", "prompt"], ["max_tokens_to_sample", "model", "prompt", "stream"])
    async def create(
        self,
        *,
        max_tokens_to_sample: int,
        model: ModelParam,
        prompt: str,
        metadata: MetadataParam | Omit = omit,
        stop_sequences: SequenceNotStr[str] | Omit = omit,
        stream: Literal[False] | Literal[True] | Omit = omit,
        temperature: float | Omit = omit,
        top_k: int | Omit = omit,
        top_p: float | Omit = omit,
        betas: List[AnthropicBetaParam] | Omit = omit,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
    ) -> Completion | AsyncStream[Completion]:
        if not is_given(timeout) and self._client.timeout == DEFAULT_TIMEOUT:
            timeout = 600
        extra_headers = {
            **strip_not_given({"anthropic-beta": ",".join(str(e) for e in betas) if is_given(betas) else not_given}),
            **(extra_headers or {}),
        }
        return await self._post(
            "/v1/complete",
            body=await async_maybe_transform(
                {
                    "max_tokens_to_sample": max_tokens_to_sample,
                    "model": model,
                    "prompt": prompt,
                    "metadata": metadata,
                    "stop_sequences": stop_sequences,
                    "stream": stream,
                    "temperature": temperature,
                    "top_k": top_k,
                    "top_p": top_p,
                },
                completion_create_params.CompletionCreateParamsStreaming
                if stream
                else completion_create_params.CompletionCreateParamsNonStreaming,
            ),
            options=make_request_options(
                extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
            ),
            cast_to=Completion,
            stream=stream or False,
            stream_cls=AsyncStream[Completion],
        )


class CompletionsWithRawResponse:
    def __init__(self, completions: Completions) -> None:
        self._completions = completions

        self.create = _legacy_response.to_raw_response_wrapper(
            completions.create,
        )


class AsyncCompletionsWithRawResponse:
    def __init__(self, completions: AsyncCompletions) -> None:
        self._completions = completions

        self.create = _legacy_response.async_to_raw_response_wrapper(
            completions.create,
        )


class CompletionsWithStreamingResponse:
    def __init__(self, completions: Completions) -> None:
        self._completions = completions

        self.create = to_streamed_response_wrapper(
            completions.create,
        )


class AsyncCompletionsWithStreamingResponse:
    def __init__(self, completions: AsyncCompletions) -> None:
        self._completions = completions

        self.create = async_to_streamed_response_wrapper(
            completions.create,
        )
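
# A minimal usage sketch (illustrative, not part of the SDK module itself): it
# assumes an `anthropic.Anthropic` client configured with an API key (e.g. via
# the ANTHROPIC_API_KEY environment variable) and a legacy-compatible model
# such as "claude-2.1". The prompt framing and streaming behaviour follow the
# docstrings above.
#
#     import anthropic
#
#     client = anthropic.Anthropic()
#
#     # Non-streaming call: returns a single `Completion`; the generated text
#     # is on `.completion`. Note the required alternating `\n\nHuman:` /
#     # `\n\nAssistant:` turns in the prompt.
#     completion = client.completions.create(
#         model="claude-2.1",
#         max_tokens_to_sample=256,
#         prompt="\n\nHuman: Summarize nucleus sampling in one sentence.\n\nAssistant:",
#         temperature=0.0,
#     )
#     print(completion.completion)
#
#     # Streaming call: `stream=True` selects the `Stream[Completion]` overload
#     # and yields incremental `Completion` chunks over server-sent events.
#     for chunk in client.completions.create(
#         model="claude-2.1",
#         max_tokens_to_sample=256,
#         prompt="\n\nHuman: Count to three.\n\nAssistant:",
#         stream=True,
#     ):
#         print(chunk.completion, end="", flush=True)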