r"""
Penn Treebank Tokenizer

The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank.
This implementation is a port of the tokenizer sed script written by Robert McIntyre
and available at http://www.cis.upenn.edu/~treebank/tokenizer.sed.
"""

import re
import warnings
from typing import Iterator, List, Tuple

from nltk.tokenize.api import TokenizerI
from nltk.tokenize.destructive import MacIntyreContractions
from nltk.tokenize.util import align_tokens


class TreebankWordTokenizer(TokenizerI):
    r"""
    The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank.

    This tokenizer performs the following steps:

    - split standard contractions, e.g. ``don't`` -> ``do n't`` and ``they'll`` -> ``they 'll``
    - treat most punctuation characters as separate tokens
    - split off commas and single quotes, when followed by whitespace
    - separate periods that appear at the end of line

        >>> from nltk.tokenize import TreebankWordTokenizer
        >>> s = '''Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.\nThanks.'''
        >>> TreebankWordTokenizer().tokenize(s)
        ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks', '.']
        >>> s = "They'll save and invest more."
        >>> TreebankWordTokenizer().tokenize(s)
        ['They', "'ll", 'save', 'and', 'invest', 'more', '.']
        >>> s = "hi, my name can't hello,"
        >>> TreebankWordTokenizer().tokenize(s)
        ['hi', ',', 'my', 'name', 'ca', "n't", 'hello', ',']
    """

    # Starting quotes.
    STARTING_QUOTES = [
        (re.compile(r"^\""), r"``"),
        (re.compile(r"(``)"), r" \1 "),
        (re.compile(r"([ \(\[{<])(\"|\'{2})"), r"\1 `` "),
    ]

    # Punctuation.
    PUNCTUATION = [
        (re.compile(r"([:,])([^\d])"), r" \1 \2"),
        (re.compile(r"([:,])$"), r" \1 "),
        (re.compile(r"\.\.\."), r" ... "),
        (re.compile(r"[;@#$%&]"), r" \g<0> "),
        (
            re.compile(r'([^\.])(\.)([\]\)}>"\']*)\s*$'),
            r"\1 \2\3 ",
        ),  # Handles the final period.
        (re.compile(r"[?!]"), r" \g<0> "),
        (re.compile(r"([^'])' "), r"\1 ' "),
    ]

    # Pads parentheses and brackets.
    PARENS_BRACKETS = (re.compile(r"[\]\[\(\)\{\}\<\>]"), r" \g<0> ")

    # Optionally: convert parentheses and brackets to PTB symbols.
    CONVERT_PARENTHESES = [
        (re.compile(r"\("), "-LRB-"),
        (re.compile(r"\)"), "-RRB-"),
        (re.compile(r"\["), "-LSB-"),
        (re.compile(r"\]"), "-RSB-"),
        (re.compile(r"\{"), "-LCB-"),
        (re.compile(r"\}"), "-RCB-"),
    ]

    DOUBLE_DASHES = (re.compile(r"--"), r" -- ")

    # Ending quotes.
    ENDING_QUOTES = [
        (re.compile(r"''"), " '' "),
        (re.compile(r'"'), " '' "),
        (re.compile(r"([^' ])('[sS]|'[mM]|'[dD]|') "), r"\1 \2 "),
        (re.compile(r"([^' ])('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), r"\1 \2 "),
    ]

    # List of contractions adapted from Robert MacIntyre's tokenizer.
    _contractions = MacIntyreContractions()
    CONTRACTIONS2 = list(map(re.compile, _contractions.CONTRACTIONS2))
    CONTRACTIONS3 = list(map(re.compile, _contractions.CONTRACTIONS3))

    def tokenize(
        self, text: str, convert_parentheses: bool = False, return_str: bool = False
    ) -> List[str]:
        r"""Return a tokenized copy of `text`.

        >>> from nltk.tokenize import TreebankWordTokenizer
        >>> s = '''Good muffins cost $3.88 (roughly 3,36 euros)\nin New York.  Please buy me\ntwo of them.\nThanks.'''
        >>> TreebankWordTokenizer().tokenize(s) # doctest: +NORMALIZE_WHITESPACE
        ['Good', 'muffins', 'cost', '$', '3.88', '(', 'roughly', '3,36',
        'euros', ')', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two',
        'of', 'them.', 'Thanks', '.']
        >>> TreebankWordTokenizer().tokenize(s, convert_parentheses=True) # doctest: +NORMALIZE_WHITESPACE
        ['Good', 'muffins', 'cost', '$', '3.88', '-LRB-', 'roughly', '3,36',
        'euros', '-RRB-', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two',
        'of', 'them.', 'Thanks', '.']

        :param text: A string with a sentence or sentences.
        :type text: str
        :param convert_parentheses: if True, replace parentheses with PTB symbols,
            e.g. `(` to `-LRB-`. Defaults to False.
        :type convert_parentheses: bool, optional
        :param return_str: If True, return tokens as space-separated string,
            defaults to False.
        :type return_str: bool, optional
        :return: List of tokens from `text`.
        :rtype: List[str]
        """
        if return_str is not False:
            warnings.warn(
                "Parameter 'return_str' has been deprecated and should no "
                "longer be used.",
                category=DeprecationWarning,
                stacklevel=2,
            )

        for regexp, substitution in self.STARTING_QUOTES:
            text = regexp.sub(substitution, text)

        for regexp, substitution in self.PUNCTUATION:
            text = regexp.sub(substitution, text)

        # Handles parentheses.
        regexp, substitution = self.PARENS_BRACKETS
        text = regexp.sub(substitution, text)
        # Optionally convert parentheses to PTB symbols.
        if convert_parentheses:
            for regexp, substitution in self.CONVERT_PARENTHESES:
                text = regexp.sub(substitution, text)

        # Handles double dash.
        regexp, substitution = self.DOUBLE_DASHES
        text = regexp.sub(substitution, text)

        # Add extra space to make things easier.
        text = " " + text + " "

        for regexp, substitution in self.ENDING_QUOTES:
            text = regexp.sub(substitution, text)

        for regexp in self.CONTRACTIONS2:
            text = regexp.sub(r" \1 \2 ", text)
        for regexp in self.CONTRACTIONS3:
            text = regexp.sub(r" \1 \2 ", text)

        return text.split()

    def span_tokenize(self, text: str) -> Iterator[Tuple[int, int]]:
        r"""
        Returns the spans of the tokens in ``text``.
        Uses the post-hoc nltk.tokens.align_tokens to return the offset spans.

            >>> from nltk.tokenize import TreebankWordTokenizer
            >>> s = '''Good muffins cost $3.88\nin New (York).  Please (buy) me\ntwo of them.\n(Thanks).'''
            >>> expected = [(0, 4), (5, 12), (13, 17), (18, 19), (19, 23),
            ... (24, 26), (27, 30), (31, 32), (32, 36), (36, 37), (37, 38),
            ... (40, 46), (47, 48), (48, 51), (51, 52), (53, 55), (56, 59),
            ... (60, 62), (63, 68), (69, 70), (70, 76), (76, 77), (77, 78)]
            >>> list(TreebankWordTokenizer().span_tokenize(s)) == expected
            True
            >>> expected = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
            ... 'New', '(', 'York', ')', '.', 'Please', '(', 'buy', ')',
            ... 'me', 'two', 'of', 'them.', '(', 'Thanks', ')', '.']
            >>> [s[start:end] for start, end in TreebankWordTokenizer().span_tokenize(s)] == expected
            True

        :param text: A string with a sentence or sentences.
        :type text: str
        :yield: Tuple[int, int]
        """
        raw_tokens = self.tokenize(text)

        # Convert converted quotes back to the original double quotes.
        # Do this only if the original text contains double quote(s) or double
        # single-quotes (because '' might be transformed to `` if it is
        # treated as starting quotes).
        if ('"' in text) or ("''" in text):
            # Find double quotes and converted quotes.
            matched = [m.group() for m in re.finditer(r"``|'{2}|\"", text)]

            # Replace converted quotes back to double quotes.
            tokens = [
                matched.pop(0) if tok in ['"', "``", "''"] else tok
                for tok in raw_tokens
            ]
        else:
            tokens = raw_tokens

        yield from align_tokens(tokens, text)


class TreebankWordDetokenizer(TokenizerI):
    r"""
    The Treebank detokenizer uses the reverse regex operations corresponding to
    the Treebank tokenizer's regexes.

    Note:

    - Additional assumptions are made when undoing the padding of ``[;@#$%&]``
      punctuation symbols that are not presupposed by the TreebankWordTokenizer.
    - Additional regexes are needed to reverse the parentheses tokenization,
      such as ``r'([\]\)\}\>])\s([:;,.])'``, which removes the extra right
      padding added to closing parentheses preceding ``[:;,.]``.
    - It is not possible to restore the original whitespace exactly, because
      there is no explicit record of where ``'\n'``, ``'\t'`` or ``'\s'`` were
      removed by the ``text.split()`` operation.

        >>> from nltk.tokenize.treebank import TreebankWordTokenizer, TreebankWordDetokenizer
        >>> s = '''Good muffins cost $3.88\nin New York.  Please buy me\ntwo of them.\nThanks.'''
        >>> d = TreebankWordDetokenizer()
        >>> t = TreebankWordTokenizer()
        >>> toks = t.tokenize(s)
        >>> d.detokenize(toks)
        'Good muffins cost $3.88 in New York. Please buy me two of them. Thanks.'

    The MXPOST parentheses substitution can be undone using the
    ``convert_parentheses`` parameter:

        >>> s = '''Good muffins cost $3.88\nin New (York).  Please (buy) me\ntwo of them.\n(Thanks).'''
        >>> expected_tokens = ['Good', 'muffins', 'cost', '$', '3.88', 'in',
        ... 'New', '-LRB-', 'York', '-RRB-', '.', 'Please', '-LRB-', 'buy',
        ... '-RRB-', 'me', 'two', 'of', 'them.', '-LRB-', 'Thanks', '-RRB-', '.']
        >>> expected_tokens == t.tokenize(s, convert_parentheses=True)
        True
        >>> expected_detoken = 'Good muffins cost $3.88 in New (York). Please (buy) me two of them. (Thanks).'
        >>> expected_detoken == d.detokenize(t.tokenize(s, convert_parentheses=True), convert_parentheses=True)
        True

    During tokenization it is safe to add extra spaces, but during
    detokenization simply undoing the padding is not enough:

    - During tokenization, ``[!?]`` is padded on the left and right; when
      detokenizing, only the left shift of ``[!?]`` is needed, thus
      ``(re.compile(r'\s([?!])'), r'\g<1>')``.

    - During tokenization, ``[:,]`` is padded on the left and right, but when
      detokenizing only the left shift is necessary, and the right pad after a
      comma/colon is kept if the following string is a non-digit, thus
      ``(re.compile(r'\s([:,])\s([^\d])'), r'\1 \2')``.

        >>> from nltk.tokenize.treebank import TreebankWordDetokenizer
        >>> toks = ['hello', ',', 'i', 'ca', "n't", 'feel', 'my', 'feet', '!', 'Help', '!', '!']
        >>> twd = TreebankWordDetokenizer()
        >>> twd.detokenize(toks)
        "hello, i can't feel my feet! Help!!"

        >>> toks = ['hello', ',', 'i', "can't", 'feel', ';', 'my', 'feet', '!',
        ... 'Help', '!', '!', 'He', 'said', ':', 'Help', ',', 'help', '?', '!']
        >>> twd.detokenize(toks)
        "hello, i can't feel; my feet! Help!! He said: Help, help?!"
    """

    # Contraction regexes with the `(?#X)` comment marker replaced by `\s`,
    # so they match the space the tokenizer inserted.
    _contractions = MacIntyreContractions()
    CONTRACTIONS2 = [
        re.compile(pattern.replace("(?#X)", r"\s"))
        for pattern in _contractions.CONTRACTIONS2
    ]
    CONTRACTIONS3 = [
        re.compile(pattern.replace("(?#X)", r"\s"))
        for pattern in _contractions.CONTRACTIONS3
    ]

    # Ending quotes.
    ENDING_QUOTES = [
        (re.compile(r"([^' ])\s('ll|'LL|'re|'RE|'ve|'VE|n't|N'T) "), r"\1\2 "),
        (re.compile(r"([^' ])\s('[sS]|'[mM]|'[dD]|') "), r"\1\2 "),
        (re.compile(r"(\S)\s(\'\')"), r"\1\2"),
        (
            re.compile(r"(\'\')\s([.,:)\]>};%])"),
            r"\1\2",
        ),  # Quotes followed by no-left-padded punctuation.
        (re.compile(r"''"), '"'),
    ]

    # Handles double dashes.
    DOUBLE_DASHES = (re.compile(r"\s--\s"), r"--")

    # Optionally: convert PTB symbols back to parentheses and brackets.
    CONVERT_PARENTHESES = [
        (re.compile("-LRB-"), "("),
        (re.compile("-RRB-"), ")"),
        (re.compile("-LSB-"), "["),
        (re.compile("-RSB-"), "]"),
        (re.compile("-LCB-"), "{"),
        (re.compile("-RCB-"), "}"),
    ]

    # Undo padding on parentheses, brackets, etc.
    PARENS_BRACKETS = [
        (re.compile(r"([\[\(\{\<])\s"), r"\g<1>"),
        (re.compile(r"\s([\]\)\}\>])"), r"\g<1>"),
        (re.compile(r"([\]\)\}\>])\s([:;,.])"), r"\1\2"),
    ]

    # Punctuation.
    PUNCTUATION = [
        (re.compile(r"([^'])\s'\s"), r"\1' "),
        (re.compile(r"\s([?!])"), r"\g<1>"),  # Strip left pad for [?!].
        (re.compile(r'([^\.])\s(\.)([\]\)}>"\']*)\s*$'), r"\1\2\3"),
        # When tokenizing, [;@#$%&] are padded with whitespace regardless of
        # whether there are spaces before or after them. During
        # detokenization, left and right pads are handled separately.
        (re.compile(r"([#$])\s"), r"\g<1>"),  # Left pad.
        (re.compile(r"\s([;%])"), r"\g<1>"),  # Right pad.
        (re.compile(r"\s\.\.\.\s"), r"..."),
        (
            re.compile(r"\s([:,])"),
            r"\1",
        ),  # Just strip left pad for punctuation marks.
    ]

    # Starting quotes.
    STARTING_QUOTES = [
        (re.compile(r"([ (\[{<])\s``"), r"\1``"),
        (re.compile(r"(``)\s"), r"\1"),
        (re.compile(r"``"), r'"'),
    ]

    def tokenize(self, tokens: List[str], convert_parentheses: bool = False) -> str:
        """
        Treebank detokenizer, created by undoing the regexes from
        the TreebankWordTokenizer.tokenize.

        :param tokens: A list of strings, i.e. tokenized text.
        :type tokens: List[str]
        :param convert_parentheses: if True, replace PTB symbols with parentheses,
            e.g. `-LRB-` to `(`. Defaults to False.
        :type convert_parentheses: bool, optional
        :return: str
        """
        text = " ".join(tokens)

        # Add extra space to make things easier.
        text = " " + text + " "

        # Reverse the contraction regexes.
        for regexp in self.CONTRACTIONS3:
            text = regexp.sub(r"\1\2", text)
        for regexp in self.CONTRACTIONS2:
            text = regexp.sub(r"\1\2", text)

        # Reverse the regexes applied for ending quotes.
        for regexp, substitution in self.ENDING_QUOTES:
            text = regexp.sub(substitution, text)

        # Undo the space padding.
        text = text.strip()

        # Reverse the padding on double dashes.
        regexp, substitution = self.DOUBLE_DASHES
        text = regexp.sub(substitution, text)

        if convert_parentheses:
            for regexp, substitution in self.CONVERT_PARENTHESES:
                text = regexp.sub(substitution, text)

        # Reverse the padding regexes applied for parentheses/brackets.
        for regexp, substitution in self.PARENS_BRACKETS:
            text = regexp.sub(substitution, text)

        # Reverse the regexes applied for punctuation.
        for regexp, substitution in self.PUNCTUATION:
            text = regexp.sub(substitution, text)

        # Reverse the regexes applied for starting quotes.
        for regexp, substitution in self.STARTING_QUOTES:
            text = regexp.sub(substitution, text)

        return text.strip()

    def detokenize(self, tokens: List[str], convert_parentheses: bool = False) -> str:
        """Duck-typing the abstract *tokenize()*."""
        return self.tokenize(tokens, convert_parentheses)
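

# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the original module): a minimal
# tokenize/detokenize round trip, runnable as a script. The sample sentence
# is a made-up example; the outputs shown in the comments follow from the
# regexes above (contraction splitting, padding of `$`, parentheses and
# final punctuation, and their reversal).
if __name__ == "__main__":
    _tokenizer = TreebankWordTokenizer()
    _detokenizer = TreebankWordDetokenizer()

    sample = "They'll save $3.88 (roughly), can't they?"

    # Contractions are split and currency symbols, commas, parentheses,
    # and the trailing `?` are padded into separate tokens.
    tokens = _tokenizer.tokenize(sample)
    print(tokens)
    # ['They', "'ll", 'save', '$', '3.88', '(', 'roughly', ')', ',',
    #  'ca', "n't", 'they', '?']

    # Detokenizing undoes the padding and re-attaches the contractions,
    # recovering the original sentence for this input.
    print(_detokenizer.detokenize(tokens))
    # They'll save $3.88 (roughly), can't they?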