"""
A decoder that uses stacks to implement phrase-based translation.

In phrase-based translation, the source sentence is segmented into
phrases of one or more words, and translations for those phrases are
used to build the target sentence.

Hypothesis data structures are used to keep track of the source words
translated so far and the partial output. A hypothesis can be expanded
by selecting an untranslated phrase, looking up its translation in a
phrase table, and appending that translation to the partial output.
Translation is complete when a hypothesis covers all source words.

The search space is huge because the source sentence can be segmented
in different ways, the source phrases can be selected in any order,
and there could be multiple translations for the same source phrase in
the phrase table. To make decoding tractable, stacks are used to limit
the number of candidate hypotheses by doing histogram and/or threshold
pruning.

Hypotheses with the same number of words translated are placed in the
same stack. In histogram pruning, each stack has a size limit, and
the hypothesis with the lowest score is removed when the stack is full.
In threshold pruning, hypotheses that score below a certain threshold
of the best hypothesis in that stack are removed.

Hypothesis scoring can include various factors such as phrase
translation probability, language model probability, length of
translation, cost of remaining words to be translated, and so on.

References:
Philipp Koehn. 2010. Statistical Machine Translation.
Cambridge University Press, New York.
"""

import warnings
from collections import defaultdict
from math import log


class StackDecoder:
    """
    Phrase-based stack decoder for machine translation

    >>> from nltk.translate import PhraseTable
    >>> phrase_table = PhraseTable()
    >>> phrase_table.add(('niemand',), ('nobody',), log(0.8))
    >>> phrase_table.add(('niemand',), ('no', 'one'), log(0.2))
    >>> phrase_table.add(('erwartet',), ('expects',), log(0.8))
    >>> phrase_table.add(('erwartet',), ('expecting',), log(0.2))
    >>> phrase_table.add(('niemand', 'erwartet'), ('one', 'does', 'not', 'expect'), log(0.1))
    >>> phrase_table.add(('die', 'spanische', 'inquisition'), ('the', 'spanish', 'inquisition'), log(0.8))
    >>> phrase_table.add(('!',), ('!',), log(0.8))

    >>> # nltk.model should be used here once it is implemented
    >>> from collections import defaultdict
    >>> language_prob = defaultdict(lambda: -999.0)
    >>> language_prob[('nobody',)] = log(0.5)
    >>> language_prob[('expects',)] = log(0.4)
    >>> language_prob[('the', 'spanish', 'inquisition')] = log(0.2)
    >>> language_prob[('!',)] = log(0.1)
    >>> language_model = type('',(object,),{'probability_change': lambda self, context, phrase: language_prob[phrase], 'probability': lambda self, phrase: language_prob[phrase]})()

    >>> stack_decoder = StackDecoder(phrase_table, language_model)

    >>> stack_decoder.translate(['niemand', 'erwartet', 'die', 'spanische', 'inquisition', '!'])
    ['nobody', 'expects', 'the', 'spanish', 'inquisition', '!']
    """

    def __init__(self, phrase_table, language_model):
        """
        :param phrase_table: Table of translations for source language
            phrases and the log probabilities for those translations.
        :type phrase_table: PhraseTable

        :param language_model: Target language model. Must define a
            ``probability_change`` method that calculates the change in
            log probability of a sentence, if a given string is appended
            to it. This interface is experimental and will likely be
            replaced with nltk.model once it is implemented.
        :type language_model: object
        """
        self.phrase_table = phrase_table
        self.language_model = language_model

        self.word_penalty = 0.0
        # float: log penalty applied per word of the translation in
        # expansion_score. If positive, shorter translations are
        # preferred; if negative, longer ones; if zero, no penalty.

        self.beam_threshold = 0.0
        # float: hypotheses that score below this factor of the best
        # hypothesis in a stack are dropped. Value between 0.0 and 1.0.

        self.stack_size = 100
        # int: maximum number of hypotheses kept in a stack (histogram
        # pruning). Higher values improve translation quality at the
        # cost of processing time.

        self.__distortion_factor = 0.5
        self.__compute_log_distortion()

    @property
    def distortion_factor(self):
        """
        float: Amount of reordering of source phrases.
            Lower values favour monotone translation, suitable when
            word order is similar for both source and target languages.
            Value between 0.0 and 1.0. Default 0.5.
        """
        return self.__distortion_factor

    @distortion_factor.setter
    def distortion_factor(self, d):
        self.__distortion_factor = d
        self.__compute_log_distortion()

    def __compute_log_distortion(self):
        # cache log(distortion_factor) so it need not be recomputed
        # every time a hypothesis is scored
        if self.__distortion_factor == 0.0:
            self.__log_distortion_factor = log(1e-9)  # 1e-9 is almost zero
        else:
            self.__log_distortion_factor = log(self.__distortion_factor)

    def translate(self, src_sentence):
        """
        :param src_sentence: Sentence to be translated
        :type src_sentence: list(str)

        :return: Translated sentence
        :rtype: list(str)
        """
        sentence = tuple(src_sentence)  # prevent accidental modification
        sentence_length = len(sentence)
        stacks = [
            _Stack(self.stack_size, self.beam_threshold)
            for _ in range(0, sentence_length + 1)
        ]
        empty_hypothesis = _Hypothesis()
        stacks[0].push(empty_hypothesis)

        all_phrases = self.find_all_src_phrases(sentence)
        future_score_table = self.compute_future_scores(sentence)
        for stack in stacks:
            for hypothesis in stack:
                possible_expansions = self.valid_phrases(all_phrases, hypothesis)
                for src_phrase_span in possible_expansions:
                    src_phrase = sentence[src_phrase_span[0] : src_phrase_span[1]]
                    for translation_option in self.phrase_table.translations_for(
                        src_phrase
                    ):
                        raw_score = self.expansion_score(
                            hypothesis, translation_option, src_phrase_span
                        )
                        new_hypothesis = _Hypothesis(
                            raw_score=raw_score,
                            src_phrase_span=src_phrase_span,
                            trg_phrase=translation_option.trg_phrase,
                            previous=hypothesis,
                        )
                        new_hypothesis.future_score = self.future_score(
                            new_hypothesis, future_score_table, sentence_length
                        )
                        total_words = new_hypothesis.total_translated_words()
                        stacks[total_words].push(new_hypothesis)

        if not stacks[sentence_length]:
            warnings.warn(
                "Unable to translate all words. "
                "The source sentence contains words not in the phrase table"
            )
            return []

        best_hypothesis = stacks[sentence_length].best()
        return best_hypothesis.translation_so_far()

    def find_all_src_phrases(self, src_sentence):
        """
        Finds all subsequences in src_sentence that have a phrase
        translation in the translation table

        :type src_sentence: tuple(str)

        :return: Subsequences that have a phrase translation,
            represented as a table of lists of end positions.
            For example, if result[2] is [5, 6, 9], then there are
            three phrases starting from position 2 in ``src_sentence``,
            ending at positions 5, 6, and 9 exclusive. The list of
            ending positions are in ascending order.
        :rtype: list(list(int))
        """
        sentence_length = len(src_sentence)
        phrase_indices = [[] for _ in src_sentence]
        for start in range(0, sentence_length):
            for end in range(start + 1, sentence_length + 1):
                potential_phrase = src_sentence[start:end]
                if potential_phrase in self.phrase_table:
                    phrase_indices[start].append(end)
        return phrase_indices

    def compute_future_scores(self, src_sentence):
        """
        Determines the approximate scores for translating every
        subsequence in ``src_sentence``

        Future scores can be used a look-ahead to determine the
        difficulty of translating the remaining parts of a src_sentence.

        :type src_sentence: tuple(str)

        :return: Scores of subsequences referenced by their start and
            end positions. For example, result[2][5] is the score of the
            subsequence covering positions 2, 3, and 4.
        :rtype: dict(int: (dict(int): float))
        """
        scores = defaultdict(lambda: defaultdict(lambda: float("-inf")))
        for seq_length in range(1, len(src_sentence) + 1):
            for start in range(0, len(src_sentence) - seq_length + 1):
                end = start + seq_length
                phrase = src_sentence[start:end]
                if phrase in self.phrase_table:
                    # pick the best (first) translation of the phrase
                    score = self.phrase_table.translations_for(phrase)[0].log_prob
                    score += self.language_model.probability(phrase)
                    scores[start][end] = score

                # check if a better score can be obtained by combining
                # two child subsequences
                for mid in range(start + 1, end):
                    combined_score = scores[start][mid] + scores[mid][end]
                    if combined_score > scores[start][end]:
                        scores[start][end] = combined_score
        return scores
    def future_score(self, hypothesis, future_score_table, sentence_length):
        """
        Determines the approximate score for translating the
        untranslated words in ``hypothesis``
        """
        score = 0.0
        for span in hypothesis.untranslated_spans(sentence_length):
            score += future_score_table[span[0]][span[1]]
        return score

    def expansion_score(self, hypothesis, translation_option, src_phrase_span):
        """
        Calculate the score of expanding ``hypothesis`` with
        ``translation_option``

        :param hypothesis: Hypothesis being expanded
        :type hypothesis: _Hypothesis

        :param translation_option: Information about the proposed expansion
        :type translation_option: PhraseTableEntry

        :param src_phrase_span: Word position span of the source phrase
        :type src_phrase_span: tuple(int, int)
        """
        score = hypothesis.raw_score
        score += translation_option.log_prob
        # The language model is only given the newest phrase; its API is
        # experimental and subject to change.
        score += self.language_model.probability_change(
            hypothesis, translation_option.trg_phrase
        )
        score += self.distortion_score(hypothesis, src_phrase_span)
        score -= self.word_penalty * len(translation_option.trg_phrase)
        return score

    def distortion_score(self, hypothesis, next_src_phrase_span):
        if not hypothesis.src_phrase_span:
            return 0.0
        next_src_phrase_start = next_src_phrase_span[0]
        prev_src_phrase_end = hypothesis.src_phrase_span[1]
        distortion_distance = next_src_phrase_start - prev_src_phrase_end
        return abs(distortion_distance) * self.__log_distortion_factor

    @staticmethod
    def valid_phrases(all_phrases_from, hypothesis):
        """
        Extract phrases from ``all_phrases_from`` that contain words
        that have not been translated by ``hypothesis``

        :param all_phrases_from: Phrases represented by their spans, in
            the same format as the return value of
            ``find_all_src_phrases``
        :type all_phrases_from: list(list(int))

        :type hypothesis: _Hypothesis

        :return: A list of phrases, represented by their spans, that
            cover untranslated positions.
        :rtype: list(tuple(int, int))
        """
        untranslated_spans = hypothesis.untranslated_spans(len(all_phrases_from))
        valid_phrases = []
        for available_span in untranslated_spans:
            start = available_span[0]
            available_end = available_span[1]
            while start < available_end:
                for phrase_end in all_phrases_from[start]:
                    if phrase_end > available_end:
                        # Subsequent elements in all_phrases_from[start]
                        # will also be > available_end, since the
                        # elements are in ascending order
                        break
                    valid_phrases.append((start, phrase_end))
                start += 1
        return valid_phrases


class _Hypothesis:
    """
    Partial solution to a translation.

    Records the word positions of the phrase being translated, its
    translation, raw score, and the cost of the untranslated parts of
    the sentence. When the next phrase is selected to build upon the
    partial solution, a new _Hypothesis object is created, with a back
    pointer to the previous hypothesis.

    To find out which words have been translated so far, look at the
    ``src_phrase_span`` in the hypothesis chain. Similarly, the
    translation output can be found by traversing up the chain.
    """

    def __init__(
        self,
        raw_score=0.0,
        src_phrase_span=(),
        trg_phrase=(),
        previous=None,
        future_score=0.0,
    ):
        """
        :param raw_score: Likelihood of hypothesis so far.
            Higher is better. Does not account for untranslated words.
        :type raw_score: float

        :param src_phrase_span: Span of word positions covered by the
            source phrase in this hypothesis expansion. For example,
            (2, 5) means that the phrase is from the second word up to,
            but not including the fifth word in the source sentence.
        :type src_phrase_span: tuple(int)

        :param trg_phrase: Translation of the source phrase in this
            hypothesis expansion
        :type trg_phrase: tuple(str)

        :param previous: Previous hypothesis before expansion to this one
        :type previous: _Hypothesis

        :param future_score: Approximate score for translating the
            remaining words not covered by this hypothesis. Higher means
            that the remaining words are easier to translate.
        :type future_score: float
        """
        self.raw_score = raw_score
        self.src_phrase_span = src_phrase_span
        self.trg_phrase = trg_phrase
        self.previous = previous
        self.future_score = future_score

    def score(self):
        """
        Overall score of hypothesis after accounting for local and
        global features
        """
        return self.raw_score + self.future_score

    def untranslated_spans(self, sentence_length):
        """
        Starting from each untranslated word, find the longest
        continuous span of untranslated positions

        :param sentence_length: Length of source sentence being
            translated by the hypothesis
        :type sentence_length: int

        :rtype: list(tuple(int, int))
        """
        translated_positions = self.translated_positions()
        translated_positions.sort()
        translated_positions.append(sentence_length)  # add sentinel position

        untranslated_spans = []
        start = 0
        # each untranslated span must end in one of the translated positions
        for end in translated_positions:
            if start < end:
                untranslated_spans.append((start, end))
            start = end + 1

        return untranslated_spans

    def translated_positions(self):
        """
        List of positions in the source sentence of words already
        translated. The list is not sorted.

        :rtype: list(int)
        """
        translated_positions = []
        current_hypothesis = self
        while current_hypothesis.previous is not None:
            translated_span = current_hypothesis.src_phrase_span
            translated_positions.extend(range(translated_span[0], translated_span[1]))
            current_hypothesis = current_hypothesis.previous
        return translated_positions

    def total_translated_words(self):
        return len(self.translated_positions())

    def translation_so_far(self):
        translation = []
        self.__build_translation(self, translation)
        return translation

    def __build_translation(self, hypothesis, output):
        # recurse up the back-pointer chain, then append this hypothesis'
        # target phrase, so the output is built in source-expansion order
        if hypothesis.previous is None:
            return
        self.__build_translation(hypothesis.previous, output)
        output.extend(hypothesis.trg_phrase)


class _Stack:
    """
    Collection of _Hypothesis objects
    """

    def __init__(self, max_size=100, beam_threshold=0.0):
        """
        :param beam_threshold: Hypotheses that score less than this
            factor of the best hypothesis are discarded from the stack.
            Value must be between 0.0 and 1.0.
        :type beam_threshold: float
        """
        self.max_size = max_size
        self.items = []

        if beam_threshold == 0.0:
            self.__log_beam_threshold = float("-inf")
        else:
            self.__log_beam_threshold = log(beam_threshold)

    def push(self, hypothesis):
        """
        Add ``hypothesis`` to the stack.
        Removes lowest scoring hypothesis if the stack is full.
        After insertion, hypotheses that score less than
        ``beam_threshold`` times the score of the best hypothesis
        are removed.
        """
        self.items.append(hypothesis)
        self.items.sort(key=lambda h: h.score(), reverse=True)
        while len(self.items) > self.max_size:
            self.items.pop()  # histogram pruning: drop the lowest score
        self.threshold_prune()

    def threshold_prune(self):
        if not self.items:
            return
        # log(score * beam_threshold) = log(score) + log(beam_threshold)
        threshold = self.items[0].score() + self.__log_beam_threshold
        for hypothesis in reversed(self.items):
            if hypothesis.score() < threshold:
                self.items.pop()
            else:
                break

    def best(self):
        """
        :return: Hypothesis with the highest score in the stack
        :rtype: _Hypothesis
        """
        if self.items:
            return self.items[0]
        return None

    def __iter__(self):
        return iter(self.items)

    def __contains__(self, hypothesis):
        return hypothesis in self.items

    def __bool__(self):
        return len(self.items) != 0

    __nonzero__ = __bool__
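

# Example usage: a minimal sketch mirroring the doctest in StackDecoder's
# docstring. It assumes ``nltk.translate.PhraseTable`` is importable and uses
# ``_ToyLanguageModel`` as a stand-in for the (still experimental) language
# model interface, i.e. an object exposing ``probability`` and
# ``probability_change``.
if __name__ == "__main__":
    from nltk.translate import PhraseTable

    phrase_table = PhraseTable()
    phrase_table.add(("niemand",), ("nobody",), log(0.8))
    phrase_table.add(("niemand",), ("no", "one"), log(0.2))
    phrase_table.add(("erwartet",), ("expects",), log(0.8))
    phrase_table.add(("erwartet",), ("expecting",), log(0.2))
    phrase_table.add(("niemand", "erwartet"), ("one", "does", "not", "expect"), log(0.1))
    phrase_table.add(
        ("die", "spanische", "inquisition"), ("the", "spanish", "inquisition"), log(0.8)
    )
    phrase_table.add(("!",), ("!",), log(0.8))

    # Toy phrase scores; unseen phrases get a large negative log probability.
    language_prob = defaultdict(lambda: -999.0)
    language_prob[("nobody",)] = log(0.5)
    language_prob[("expects",)] = log(0.4)
    language_prob[("the", "spanish", "inquisition")] = log(0.2)
    language_prob[("!",)] = log(0.1)

    class _ToyLanguageModel:
        """Stand-in exposing the interface StackDecoder expects."""

        def probability_change(self, context, phrase):
            return language_prob[phrase]

        def probability(self, phrase):
            return language_prob[phrase]

    decoder = StackDecoder(phrase_table, _ToyLanguageModel())
    print(decoder.translate(["niemand", "erwartet", "die", "spanische", "inquisition", "!"]))
    # Expected output: ['nobody', 'expects', 'the', 'spanish', 'inquisition', '!']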