# Natural Language Toolkit: Dependency Grammars
# nltk/parse/projectivedependencyparser.py

from collections import defaultdict
from functools import total_ordering
from itertools import chain

from nltk.grammar import (
    DependencyGrammar,
    DependencyProduction,
    ProbabilisticDependencyGrammar,
)
from nltk.internals import raise_unorderable_types
from nltk.parse.dependencygraph import DependencyGraph


#################################################################
# Dependency Span
#################################################################


@total_ordering
class DependencySpan:
    """
    A contiguous span over some part of the input string representing
    dependency (head -> modifier) relationships amongst words.  An atomic
    span corresponds to only one word so it isn't a 'span' in the conventional
    sense, as its _start_index = _end_index = _head_index for concatenation
    purposes.  All other spans are assumed to have arcs between all nodes
    within the start and end indexes of the span, and one head index
    corresponding to the head word for the entire span.  This is the same as
    the root node if the dependency structure were depicted as a graph.
    """

    def __init__(self, start_index, end_index, head_index, arcs, tags):
        self._start_index = start_index
        self._end_index = end_index
        self._head_index = head_index
        self._arcs = arcs
        self._tags = tags
        self._comparison_key = (start_index, end_index, head_index, tuple(arcs))
        self._hash = hash(self._comparison_key)

    def head_index(self):
        """
        :return: A value indexing the head of the entire ``DependencySpan``.
        :rtype: int
        """
        return self._head_index

    def __repr__(self):
        """
        :return: A concise string representation of the ``DependencySpan``.
        :rtype: str
        """
        return "Span %d-%d; Head Index: %d" % (
            self._start_index,
            self._end_index,
            self._head_index,
        )

    def __str__(self):
        """
        :return: A verbose string representation of the ``DependencySpan``.
        :rtype: str
        """
        s = "Span %d-%d; Head Index: %d" % (
            self._start_index,
            self._end_index,
            self._head_index,
        )
        for i in range(len(self._arcs)):
            s += "\n%d <- %d, %s" % (i, self._arcs[i], self._tags[i])
        return s

    def __eq__(self, other):
        return (
            type(self) == type(other) and self._comparison_key == other._comparison_key
        )

    def __ne__(self, other):
        return not self == other

    def __lt__(self, other):
        if not isinstance(other, DependencySpan):
            raise_unorderable_types("<", self, other)
        return self._comparison_key < other._comparison_key

    def __hash__(self):
        """
        :return: The hash value of this ``DependencySpan``.
        """
        return self._hash


#################################################################
# Chart Cell
#################################################################


class ChartCell:
    """
    A cell from the parse chart formed when performing the CYK algorithm.
    Each cell keeps track of its x and y coordinates (though this will
    probably be discarded), and a list of spans serving as the cell's
    entries.
    """

    def __init__(self, x, y):
        """
        :param x: This cell's x coordinate.
        :type x: int.
        :param y: This cell's y coordinate.
        :type y: int.
        """
        self._x = x
        self._y = y
        self._entries = set()

    def add(self, span):
        """
        Appends the given span to the list of spans
        representing the chart cell's entries.

        :param span: The span to add.
        :type span: DependencySpan
        """
        self._entries.add(span)

    def __str__(self):
        """
        :return: A verbose string representation of this ``ChartCell``.
        :rtype: str.
        """
        return "CC[%d,%d]: %s" % (self._x, self._y, self._entries)

    def __repr__(self):
        """
        :return: A concise string representation of this ``ChartCell``.
        :rtype: str.
        """
        return "%s" % self
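
# Illustrative sketch, not part of the original NLTK module: how the two data
# structures above fit together.  Both parsers seed the diagonal of their
# chart with one atomic DependencySpan per input word (start == head,
# end == start + 1, arc -1 meaning "no head assigned yet") and then grow
# larger spans by combining adjacent cells.
def _span_chart_example():
    cell = ChartCell(1, 0)
    cell.add(DependencySpan(0, 1, 0, [-1], ["null"]))
    print(cell)  # -> CC[1,0]: {Span 0-1; Head Index: 0}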
#################################################################
# Parsing with Dependency Grammars
#################################################################


class ProjectiveDependencyParser:
    """
    A projective, rule-based, dependency parser.  A ProjectiveDependencyParser
    is created with a DependencyGrammar, a set of productions specifying
    word-to-word dependency relations.  The parse() method will then
    return the set of all parses, in tree representation, for a given input
    sequence of tokens.  Each parse must meet the requirements of both
    the grammar and the projectivity constraint, which specifies that the
    branches of the dependency tree are not allowed to cross.  Alternatively,
    this can be understood as stating that each parent node and its children
    in the parse tree form a continuous substring of the input sequence.
    """

    def __init__(self, dependency_grammar):
        """
        Create a new ProjectiveDependencyParser, from a word-to-word
        dependency grammar ``DependencyGrammar``.

        :param dependency_grammar: A word-to-word relation dependency grammar.
        :type dependency_grammar: DependencyGrammar
        """
        self._grammar = dependency_grammar

    def parse(self, tokens):
        """
        Performs a projective dependency parse on the list of tokens using
        a chart-based, span-concatenation algorithm similar to Eisner (1996).

        :param tokens: The list of input tokens.
        :type tokens: list(str)
        :return: An iterator over parse trees.
        :rtype: iter(Tree)
        """
        self._tokens = list(tokens)
        chart = []
        # Seed the chart: one cell per (i, j) pair, with an atomic span for
        # every single-word cell on the diagonal.
        for i in range(0, len(self._tokens) + 1):
            chart.append([])
            for j in range(0, len(self._tokens) + 1):
                chart[i].append(ChartCell(i, j))
                if i == j + 1:
                    chart[i][j].add(DependencySpan(i - 1, i, i - 1, [-1], ["null"]))

        # Combine adjacent spans bottom-up, CYK-style.
        for i in range(1, len(self._tokens) + 1):
            for j in range(i - 2, -1, -1):
                for k in range(i - 1, j, -1):
                    for span1 in chart[k][j]._entries:
                        for span2 in chart[i][k]._entries:
                            for newspan in self.concatenate(span1, span2):
                                chart[i][j].add(newspan)

        # Every span covering the whole input is a complete parse; convert it
        # to a CoNLL-style string and let DependencyGraph build the tree.
        for parse in chart[len(self._tokens)][0]._entries:
            conll_format = ""
            for i in range(len(tokens)):
                # The graph must contain a 'ROOT' element.
                conll_format += "\t%d\t%s\t%s\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n" % (
                    i + 1,
                    tokens[i],
                    tokens[i],
                    "null",
                    "null",
                    "null",
                    parse._arcs[i] + 1,
                    "ROOT",
                    "-",
                    "-",
                )
            dg = DependencyGraph(conll_format)
            yield dg.tree()

    def concatenate(self, span1, span2):
        """
        Concatenates the two spans in whichever way possible.  This
        includes rightward concatenation (from the leftmost word of the
        leftmost span to the rightmost word of the rightmost span) and
        leftward concatenation (vice-versa) between adjacent spans.  Unlike
        Eisner's presentation of span concatenation, these spans do not
        share or pivot on a particular word/word-index.

        :return: A list of new spans formed through concatenation.
        :rtype: list(DependencySpan)
        """
        spans = []
        if span1._start_index == span2._start_index:
            print("Error: Mismatched spans - replace this with thrown error")
        if span1._start_index > span2._start_index:
            temp_span = span1
            span1 = span2
            span2 = temp_span
        # Adjacent rightward covered concatenation: the head of the left span
        # takes the head of the right span as a modifier.
        new_arcs = span1._arcs + span2._arcs
        new_tags = span1._tags + span2._tags
        if self._grammar.contains(
            self._tokens[span1._head_index], self._tokens[span2._head_index]
        ):
            new_arcs[span2._head_index - span1._start_index] = span1._head_index
            spans.append(
                DependencySpan(
                    span1._start_index,
                    span2._end_index,
                    span1._head_index,
                    new_arcs,
                    new_tags,
                )
            )
        # Adjacent leftward covered concatenation: the head of the right span
        # takes the head of the left span as a modifier.
        new_arcs = span1._arcs + span2._arcs
        if self._grammar.contains(
            self._tokens[span2._head_index], self._tokens[span1._head_index]
        ):
            new_arcs[span1._head_index - span1._start_index] = span2._head_index
            spans.append(
                DependencySpan(
                    span1._start_index,
                    span2._end_index,
                    span2._head_index,
                    new_arcs,
                    new_tags,
                )
            )
        return spans
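
# Hedged usage sketch, not part of the original NLTK module.  parse() yields
# one Tree per projective analysis licensed by the grammar, and yields nothing
# for sentences the grammar cannot cover.  The toy grammar matches the one
# used by projective_rule_parse_demo() further down.
def _projective_parse_example():
    grammar = DependencyGrammar.fromstring(
        """
        'scratch' -> 'cats' | 'walls'
        'walls' -> 'the'
        'cats' -> 'the'
        """
    )
    parser = ProjectiveDependencyParser(grammar)
    for tree in parser.parse(["the", "cats", "scratch", "the", "walls"]):
        print(tree)  # e.g. (scratch (cats the) (walls the))
    # A word the grammar has never seen cannot be attached anywhere, so the
    # resulting iterator is simply empty.
    print(list(parser.parse(["cats", "purr"])))  # []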
#################################################################
# Parsing with Probabilistic Dependency Grammars
#################################################################


class ProbabilisticProjectiveDependencyParser:
    """A probabilistic, projective dependency parser.

    This parser returns the most probable projective parse derived from the
    probabilistic dependency grammar induced by the train() method.  The
    probabilistic model is an implementation of Eisner's (1996) Model C,
    which conditions on head-word, head-tag, child-word, and child-tag.  The
    decoding uses a bottom-up chart-based span concatenation algorithm that's
    identical to the one utilized by the rule-based projective parser.

    Usage example

    >>> from nltk.parse.dependencygraph import conll_data2

    >>> graphs = [
    ... DependencyGraph(entry) for entry in conll_data2.split('\\n\\n') if entry
    ... ]

    >>> ppdp = ProbabilisticProjectiveDependencyParser()
    >>> ppdp.train(graphs)

    >>> sent = ['Cathy', 'zag', 'hen', 'wild', 'zwaaien', '.']
    >>> list(ppdp.parse(sent))
    [Tree('zag', ['Cathy', 'hen', Tree('zwaaien', ['wild', '.'])])]
    """

    def __init__(self):
        """
        Create a new probabilistic dependency parser.  No additional
        operations are necessary.
        """

    def parse(self, tokens):
        """
        Parses the list of tokens subject to the projectivity constraint
        and the productions in the parser's grammar.  This uses a method
        similar to the span-concatenation algorithm defined in Eisner (1996).
        It returns the most probable parse derived from the parser's
        probabilistic dependency grammar.
        """
        self._tokens = list(tokens)
        chart = []
        # Seed the chart with one atomic span per (word, tag) pair seen in
        # training; an unknown word makes the parse impossible.
        for i in range(0, len(self._tokens) + 1):
            chart.append([])
            for j in range(0, len(self._tokens) + 1):
                chart[i].append(ChartCell(i, j))
                if i == j + 1:
                    if tokens[i - 1] in self._grammar._tags:
                        for tag in self._grammar._tags[tokens[i - 1]]:
                            chart[i][j].add(
                                DependencySpan(i - 1, i, i - 1, [-1], [tag])
                            )
                    else:
                        print(
                            "No tag found for input token '%s', parse is impossible."
                            % tokens[i - 1]
                        )
                        return []

        # Combine adjacent spans bottom-up, exactly as in the rule-based parser.
        for i in range(1, len(self._tokens) + 1):
            for j in range(i - 2, -1, -1):
                for k in range(i - 1, j, -1):
                    for span1 in chart[k][j]._entries:
                        for span2 in chart[i][k]._entries:
                            for newspan in self.concatenate(span1, span2):
                                chart[i][j].add(newspan)

        # Score every complete parse and return the trees ordered by score.
        trees = []
        for parse in chart[len(self._tokens)][0]._entries:
            conll_format = ""
            malt_format = ""
            for i in range(len(tokens)):
                malt_format += "%s\t%s\t%d\t%s\n" % (
                    tokens[i],
                    "null",
                    parse._arcs[i] + 1,
                    "null",
                )
                # The graph must contain a 'ROOT' element.
                conll_format += "\t%d\t%s\t%s\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n" % (
                    i + 1,
                    tokens[i],
                    tokens[i],
                    parse._tags[i],
                    parse._tags[i],
                    "null",
                    parse._arcs[i] + 1,
                    "ROOT",
                    "-",
                    "-",
                )
            dg = DependencyGraph(conll_format)
            score = self.compute_prob(dg)
            trees.append((score, dg.tree()))
        trees.sort()
        return (tree for (score, tree) in trees)

    def concatenate(self, span1, span2):
        """
        Concatenates the two spans in whichever way possible.  This
        includes rightward concatenation (from the leftmost word of the
        leftmost span to the rightmost word of the rightmost span) and
        leftward concatenation (vice-versa) between adjacent spans.  Unlike
        Eisner's presentation of span concatenation, these spans do not
        share or pivot on a particular word/word-index.

        :return: A list of new spans formed through concatenation.
        :rtype: list(DependencySpan)
        """
        spans = []
        if span1._start_index == span2._start_index:
            print("Error: Mismatched spans - replace this with thrown error")
        if span1._start_index > span2._start_index:
            temp_span = span1
            span1 = span2
            span2 = temp_span
        # Adjacent rightward covered concatenation.
        new_arcs = span1._arcs + span2._arcs
        new_tags = span1._tags + span2._tags
        if self._grammar.contains(
            self._tokens[span1._head_index], self._tokens[span2._head_index]
        ):
            new_arcs[span2._head_index - span1._start_index] = span1._head_index
            spans.append(
                DependencySpan(
                    span1._start_index,
                    span2._end_index,
                    span1._head_index,
                    new_arcs,
                    new_tags,
                )
            )
        # Adjacent leftward covered concatenation.
        new_arcs = span1._arcs + span2._arcs
        new_tags = span1._tags + span2._tags
        if self._grammar.contains(
            self._tokens[span2._head_index], self._tokens[span1._head_index]
        ):
            new_arcs[span1._head_index - span1._start_index] = span2._head_index
            spans.append(
                DependencySpan(
                    span1._start_index,
                    span2._end_index,
                    span2._head_index,
                    new_arcs,
                    new_tags,
                )
            )
        return spans

    def train(self, graphs):
        """
        Trains a ProbabilisticDependencyGrammar based on the list of input
        DependencyGraphs.  This model is an implementation of Eisner's (1996)
        Model C, which derives its statistics from head-word, head-tag,
        child-word, and child-tag relationships.

        :param graphs: A list of dependency graphs to train from.
        :type graphs: list(DependencyGraph)
        """
        productions = []
        events = defaultdict(int)
        tags = {}
        for dg in graphs:
            for node_index in range(1, len(dg.nodes)):
                children = list(
                    chain.from_iterable(dg.nodes[node_index]["deps"].values())
                )

                nr_left_children = dg.left_children(node_index)
                nr_right_children = dg.right_children(node_index)
                nr_children = nr_left_children + nr_right_children
                for child_index in range(
                    0 - (nr_left_children + 1), nr_right_children + 2
                ):
                    head_word = dg.nodes[node_index]["word"]
                    head_tag = dg.nodes[node_index]["tag"]
                    if head_word in tags:
                        tags[head_word].add(head_tag)
                    else:
                        tags[head_word] = {head_tag}
                    child = "STOP"
                    child_tag = "STOP"
                    prev_word = "START"
                    prev_tag = "START"
                    if child_index < 0:
                        # Left modifiers, counted outward from the head.
                        array_index = child_index + nr_left_children
                        if array_index >= 0:
                            child = dg.nodes[children[array_index]]["word"]
                            child_tag = dg.nodes[children[array_index]]["tag"]
                        if child_index != -1:
                            prev_word = dg.nodes[children[array_index + 1]]["word"]
                            prev_tag = dg.nodes[children[array_index + 1]]["tag"]
                        if child != "STOP":
                            productions.append(
                                DependencyProduction(head_word, [child])
                            )
                        head_event = "(head ({} {}) (mods ({}, {}, {}) left))".format(
                            child, child_tag, prev_tag, head_word, head_tag
                        )
                        mod_event = "(mods ({}, {}, {}) left))".format(
                            prev_tag, head_word, head_tag
                        )
                        events[head_event] += 1
                        events[mod_event] += 1
                    elif child_index > 0:
                        # Right modifiers, counted outward from the head.
                        array_index = child_index + nr_left_children - 1
                        if array_index < nr_children:
                            child = dg.nodes[children[array_index]]["word"]
                            child_tag = dg.nodes[children[array_index]]["tag"]
                        if child_index != 1:
                            prev_word = dg.nodes[children[array_index - 1]]["word"]
                            prev_tag = dg.nodes[children[array_index - 1]]["tag"]
                        if child != "STOP":
                            productions.append(
                                DependencyProduction(head_word, [child])
                            )
                        head_event = "(head ({} {}) (mods ({}, {}, {}) right))".format(
                            child, child_tag, prev_tag, head_word, head_tag
                        )
                        mod_event = "(mods ({}, {}, {}) right))".format(
                            prev_tag, head_word, head_tag
                        )
                        events[head_event] += 1
                        events[mod_event] += 1
        self._grammar = ProbabilisticDependencyGrammar(productions, events, tags)

    def compute_prob(self, dg):
        """
        Computes the probability of a dependency graph based on the parser's
        probability model (defined by the parser's statistical dependency
        grammar).

        :param dg: A dependency graph to score.
        :type dg: DependencyGraph
        :return: The probability of the dependency graph.
        :rtype: float
        """
        prob = 1.0
        for node_index in range(1, len(dg.nodes)):
            children = list(
                chain.from_iterable(dg.nodes[node_index]["deps"].values())
            )

            nr_left_children = dg.left_children(node_index)
            nr_right_children = dg.right_children(node_index)
            nr_children = nr_left_children + nr_right_children
            for child_index in range(
                0 - (nr_left_children + 1), nr_right_children + 2
            ):
                head_word = dg.nodes[node_index]["word"]
                head_tag = dg.nodes[node_index]["tag"]
                child = "STOP"
                child_tag = "STOP"
                prev_word = "START"
                prev_tag = "START"
                if child_index < 0:
                    array_index = child_index + nr_left_children
                    if array_index >= 0:
                        child = dg.nodes[children[array_index]]["word"]
                        child_tag = dg.nodes[children[array_index]]["tag"]
                    if child_index != -1:
                        prev_word = dg.nodes[children[array_index + 1]]["word"]
                        prev_tag = dg.nodes[children[array_index + 1]]["tag"]
                    head_event = "(head ({} {}) (mods ({}, {}, {}) left))".format(
                        child, child_tag, prev_tag, head_word, head_tag
                    )
                    mod_event = "(mods ({}, {}, {}) left))".format(
                        prev_tag, head_word, head_tag
                    )
                    h_count = self._grammar._events[head_event]
                    m_count = self._grammar._events[mod_event]

                    # Back off to a very small probability when the
                    # conditioning event was never observed in training.
                    if m_count != 0:
                        prob *= h_count / m_count
                    else:
                        prob = 0.00000001

                elif child_index > 0:
                    array_index = child_index + nr_left_children - 1
                    if array_index < nr_children:
                        child = dg.nodes[children[array_index]]["word"]
                        child_tag = dg.nodes[children[array_index]]["tag"]
                    if child_index != 1:
                        prev_word = dg.nodes[children[array_index - 1]]["word"]
                        prev_tag = dg.nodes[children[array_index - 1]]["tag"]
                    head_event = "(head ({} {}) (mods ({}, {}, {}) right))".format(
                        child, child_tag, prev_tag, head_word, head_tag
                    )
                    mod_event = "(mods ({}, {}, {}) right))".format(
                        prev_tag, head_word, head_tag
                    )
                    h_count = self._grammar._events[head_event]
                    m_count = self._grammar._events[mod_event]

                    if m_count != 0:
                        prob *= h_count / m_count
                    else:
                        prob = 0.00000001

        return prob
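
# Hedged sketch, not part of the original NLTK module: the Model C statistics
# gathered by train() are plain counts of bracketed "event" strings that
# record the modifier word/tag, the tag of the previous modifier, and the
# head word/tag, one family per attachment direction.  compute_prob() rebuilds
# the same strings for a candidate parse and combines head-event / mod-event
# count ratios into a parse score.  The words and tags below are made up for
# illustration only.
def _model_c_event_example():
    head_word, head_tag = "zag", "V"
    child, child_tag, prev_tag = "Cathy", "N", "START"
    head_event = "(head ({} {}) (mods ({}, {}, {}) left))".format(
        child, child_tag, prev_tag, head_word, head_tag
    )
    mod_event = "(mods ({}, {}, {}) left))".format(prev_tag, head_word, head_tag)
    print(head_event)  # (head (Cathy N) (mods (START, zag, V) left))
    print(mod_event)   # (mods (START, zag, V) left))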
#################################################################
# Demos
#################################################################


def demo():
    projective_rule_parse_demo()
    #    arity_parse_demo()
    projective_prob_parse_demo()


def projective_rule_parse_demo():
    """
    A demonstration showing the creation and use of a
    ``DependencyGrammar`` to perform a projective dependency parse.
    """
    grammar = DependencyGrammar.fromstring(
        """
    'scratch' -> 'cats' | 'walls'
    'walls' -> 'the'
    'cats' -> 'the'
    """
    )
    print(grammar)
    pdp = ProjectiveDependencyParser(grammar)
    trees = pdp.parse(["the", "cats", "scratch", "the", "walls"])
    for tree in trees:
        print(tree)


def arity_parse_demo():
    """
    A demonstration showing the creation of a ``DependencyGrammar``
    in which a specific number of modifiers is listed for a given
    head.  This can further constrain the number of possible parses
    created by a ``ProjectiveDependencyParser``.
    """
    print()
    print("A grammar with no arity constraints. Each DependencyProduction")
    print("specifies a relationship between one head word and only one")
    print("modifier word.")
    grammar = DependencyGrammar.fromstring(
        """
    'fell' -> 'price' | 'stock'
    'price' -> 'of' | 'the'
    'of' -> 'stock'
    'stock' -> 'the'
    """
    )
    print(grammar)

    print()
    print("For the sentence 'The price of the stock fell', this grammar")
    print("will produce the following parses:")
    pdp = ProjectiveDependencyParser(grammar)
    trees = pdp.parse(["the", "price", "of", "the", "stock", "fell"])
    for tree in trees:
        print(tree)

    print()
    print("By contrast, the following grammar uses a DependencyProduction")
    print("that lists two modifiers, 'of' and 'the', for the single head")
    print("word 'price'.")
    grammar = DependencyGrammar.fromstring(
        """
    'fell' -> 'price' | 'stock'
    'price' -> 'of' 'the'
    'of' -> 'stock'
    'stock' -> 'the'
    """
    )
    print(grammar)

    print()
    print("This constrains the number of possible parses to just one:")
    pdp = ProjectiveDependencyParser(grammar)
    trees = pdp.parse(["the", "price", "of", "the", "stock", "fell"])
    for tree in trees:
        print(tree)


def projective_prob_parse_demo():
    """
    A demo showing the training and use of a projective
    dependency parser.
    """
    from nltk.parse.dependencygraph import conll_data2

    graphs = [DependencyGraph(entry) for entry in conll_data2.split("\n\n") if entry]
    ppdp = ProbabilisticProjectiveDependencyParser()
    print("Training Probabilistic Projective Dependency Parser...")
    ppdp.train(graphs)

    sent = ["Cathy", "zag", "hen", "wild", "zwaaien", "."]
    print("Parsing '", " ".join(sent), "'...")
    print("Parse:")
    for tree in ppdp.parse(sent):
        print(tree)


if __name__ == "__main__":
    demo()
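
# Hedged sketch, not part of the original module: both parsers hand a
# CoNLL-style string for each complete chart span to DependencyGraph and yield
# the Tree it builds.  Assuming DependencyGraph accepts its three-column
# (word, tag, head-index) input format, the same round trip can be reproduced
# by hand:
#
#     dg = DependencyGraph("the DT 2\ncats NN 3\nscratch VB 0\nwalls NN 3")
#     print(dg.tree())  # roughly: (scratch (cats the) walls)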