"""Various classifier implementations. Also includes basic feature extractor
methods.

Example Usage:
::

    >>> from textblob import TextBlob
    >>> from textblob.classifiers import NaiveBayesClassifier
    >>> train = [
    ...     ('I love this sandwich.', 'pos'),
    ...     ('This is an amazing place!', 'pos'),
    ...     ('I feel very good about these beers.', 'pos'),
    ...     ('I do not like this restaurant', 'neg'),
    ...     ('I am tired of this stuff.', 'neg'),
    ...     ("I can't deal with this", 'neg'),
    ...     ("My boss is horrible.", "neg")
    ... ]
    >>> cl = NaiveBayesClassifier(train)
    >>> cl.classify("I feel amazing!")
    'pos'
    >>> blob = TextBlob("The beer is good. But the hangover is horrible.", classifier=cl)
    >>> for s in blob.sentences:
    ...     print(s)
    ...     print(s.classify())
    ...
    The beer is good.
    pos
    But the hangover is horrible.
    neg

.. versionadded:: 0.6.0
"""
from itertools import chain

import nltk

import textblob.formats as formats
from textblob.decorators import cached_property
from textblob.exceptions import FormatError
from textblob.tokenizers import word_tokenize
from textblob.utils import is_filelike, strip_punc

basestring = (str, bytes)


def _get_words_from_dataset(dataset):
    """Return a set of all words in a dataset.

    :param dataset: A list of tuples of the form ``(words, label)`` where
        ``words`` is either a string or a list of tokens.
    """

    # Words may be either a string or a list of tokens. Return an iterator
    # of tokens accordingly.
    def tokenize(words):
        if isinstance(words, basestring):
            return word_tokenize(words, include_punc=False)
        else:
            return words

    all_words = chain.from_iterable(tokenize(words) for words, _ in dataset)
    return set(all_words)


def _get_document_tokens(document):
    if isinstance(document, basestring):
        tokens = set(
            strip_punc(w, all=False) for w in word_tokenize(document, include_punc=False)
        )
    else:
        tokens = set(strip_punc(w, all=False) for w in document)
    return tokens


def basic_extractor(document, train_set):
    """A basic document feature extractor that returns a dict indicating
    what words in ``train_set`` are contained in ``document``.

    :param document: The text to extract features from. Can be a string or an iterable.
    :param list train_set: Training data set, a list of tuples of the form
        ``(words, label)`` OR an iterable of strings.
    """
    try:
        el_zero = next(iter(train_set))  # Infer input from first element.
    except StopIteration:
        return {}
    if isinstance(el_zero, basestring):
        word_features = [w for w in chain([el_zero], train_set)]
    else:
        try:
            assert isinstance(el_zero[0], basestring)
            word_features = _get_words_from_dataset(chain([el_zero], train_set))
        except Exception as error:
            raise ValueError("train_set is probably malformed.") from error

    tokens = _get_document_tokens(document)
    features = dict((f"contains({word})", (word in tokens)) for word in word_features)
    return features


def contains_extractor(document):
    """A basic document feature extractor that returns a dict of words that
    the document contains.
    """
    tokens = _get_document_tokens(document)
    features = dict((f"contains({word})", True) for word in tokens)
    return features


class BaseClassifier:
    """Abstract classifier class from which all classifiers inherit.

    At a minimum, descendant classes must implement a ``classify`` method and
    have a ``classifier`` property.

    :param train_set: The training set, either a list of tuples of the form
        ``(text, classification)`` or a file-like object. ``text`` may be
        either a string or an iterable.
    :param callable feature_extractor: A feature extractor function that takes
        one or two arguments: ``document`` and ``train_set``.
    :param str format: If ``train_set`` is a filename, the file format, e.g.
        ``"csv"`` or ``"json"``. If ``None``, will attempt to detect the file
        format.
    :param kwargs: Additional keyword arguments are passed to the constructor
        of the :class:`Format <textblob.formats.BaseFormat>` class used to
        read the data. Only applies when a file-like object is passed as
        ``train_set``.

    .. versionadded:: 0.6.0
    """

    def __init__(self, train_set, feature_extractor=basic_extractor, format=None, **kwargs):
        self.format_kwargs = kwargs
        self.feature_extractor = feature_extractor
        if is_filelike(train_set):
            self.train_set = self._read_data(train_set, format)
        else:  # train_set is a list of tuples
            self.train_set = train_set
        self._word_set = _get_words_from_dataset(self.train_set)  # Keep a hidden set of unique words.
        self.train_features = None

    def _read_data(self, dataset, format=None):
        """Reads a data file and returns an iterable that can be used
        as testing or training data.
        """
        # Attempt to detect file format if "format" isn't specified
        if not format:
            format_class = formats.detect(dataset)
            if not format_class:
                raise FormatError(
                    "Could not automatically detect format for the given data source."
                )
        else:
            registry = formats.get_registry()
            if format not in registry.keys():
                raise ValueError(f"'{format}' format not supported.")
            format_class = registry[format]
        return format_class(dataset, **self.format_kwargs).to_iterable()

    @cached_property
    def classifier(self):
        """The classifier object."""
        raise NotImplementedError('Must implement the "classifier" property.')

    def classify(self, text):
        """Classifies a string of text."""
        raise NotImplementedError('Must implement a "classify" method.')

    def train(self, labeled_featureset):
        """Trains the classifier."""
        raise NotImplementedError('Must implement a "train" method.')

    def labels(self):
        """Returns an iterable containing the possible labels."""
        raise NotImplementedError('Must implement a "labels" method.')

    def extract_features(self, text):
        """Extracts features from a body of text.

        :rtype: dictionary of features
        """
        # The feature extractor may take one or two arguments
        try:
            return self.feature_extractor(text, self._word_set)
        except (TypeError, AttributeError):
            return self.feature_extractor(text)


class NLTKClassifier(BaseClassifier):
    """An abstract class that wraps around the nltk.classify module.

    Expects that descendant classes include a class variable ``nltk_class``
    which is the class in the nltk.classify module to be wrapped.

    Example: ::

        class MyClassifier(NLTKClassifier):
            nltk_class = nltk.classify.svm.SvmClassifier
    """

    #: The NLTK class to be wrapped. Must be a class within nltk.classify
    nltk_class = None

    def __init__(self, train_set, feature_extractor=basic_extractor, format=None, **kwargs):
        super().__init__(train_set, feature_extractor, format, **kwargs)
        self.train_features = [(self.extract_features(d), c) for d, c in self.train_set]

    def __repr__(self):
        class_name = self.__class__.__name__
        return f"<{class_name} trained on {len(self.train_set)} instances>"

    @cached_property
    def classifier(self):
        """The classifier."""
        try:
            return self.train()
        except AttributeError as error:  # nltk_class has not been defined
            raise ValueError(
                "NLTKClassifier must have a nltk_class variable that is not None."
            ) from error

    def train(self, *args, **kwargs):
        """Train the classifier with a labeled feature set and return the
        classifier. Takes the same arguments as the wrapped NLTK class.
        This method is implicitly called when calling ``classify`` or
        ``accuracy`` methods and is included only to allow passing in
        arguments to the ``train`` method of the wrapped NLTK class.

        .. versionadded:: 0.6.2

        :rtype: A classifier
        """
        try:
            self.classifier = self.nltk_class.train(self.train_features, *args, **kwargs)
            return self.classifier
        except AttributeError as error:
            raise ValueError(
                "NLTKClassifier must have a nltk_class variable that is not None."
            ) from error

    def labels(self):
        """Return an iterable of possible labels."""
        return self.classifier.labels()

    def classify(self, text):
        """Classifies the text.

        :param str text: A string of text.
        """
        text_features = self.extract_features(text)
        return self.classifier.classify(text_features)

    def accuracy(self, test_set, format=None):
        """Compute the accuracy on a test set.

        :param test_set: A list of tuples of the form ``(text, label)``, or a
            file pointer.
        :param format: If ``test_set`` is a filename, the file format, e.g.
            ``"csv"`` or ``"json"``. If ``None``, will attempt to detect the
            file format.
        """
        if is_filelike(test_set):
            test_data = self._read_data(test_set, format)
        else:  # test_set is a list of tuples
            test_data = test_set
        test_features = [(self.extract_features(d), c) for d, c in test_data]
        return nltk.classify.accuracy(self.classifier, test_features)

    def update(self, new_data, *args, **kwargs):
        """Update the classifier with new training data and re-trains the
        classifier.

        :param new_data: New data as a list of tuples of the form
            ``(text, label)``.
        """
        self.train_set += new_data
        self._word_set.update(_get_words_from_dataset(new_data))
        self.train_features = [(self.extract_features(d), c) for d, c in self.train_set]
        try:
            self.classifier = self.nltk_class.train(self.train_features, *args, **kwargs)
        except AttributeError as error:  # Descendant has not defined nltk_class
            raise ValueError(
                "NLTKClassifier must have a nltk_class variable that is not None."
            ) from error
        return True


class NaiveBayesClassifier(NLTKClassifier):
    """A classifier based on the Naive Bayes algorithm, as implemented in
    NLTK.

    :param train_set: The training set, either a list of tuples of the form
        ``(text, classification)`` or a filename. ``text`` may be either a
        string or an iterable.
    :param feature_extractor: A feature extractor function that takes one or
        two arguments: ``document`` and ``train_set``.
    :param format: If ``train_set`` is a filename, the file format, e.g.
        ``"csv"`` or ``"json"``. If ``None``, will attempt to detect the file
        format.

    .. versionadded:: 0.6.0
    """

    nltk_class = nltk.classify.NaiveBayesClassifier

    def prob_classify(self, text):
        """Return the label probability distribution for classifying a string
        of text.

        Example:
        ::

            >>> classifier = NaiveBayesClassifier(train_data)
            >>> prob_dist = classifier.prob_classify("I feel happy this morning.")
            >>> prob_dist.max()
            'positive'
            >>> prob_dist.prob("positive")
            0.7

        :rtype: nltk.probability.DictionaryProbDist
        """
        text_features = self.extract_features(text)
        return self.classifier.prob_classify(text_features)

    def informative_features(self, *args, **kwargs):
        """Return the most informative features as a list of tuples of the
        form ``(feature_name, feature_value)``.

        :rtype: list
        """
        return self.classifier.most_informative_features(*args, **kwargs)

    def show_informative_features(self, *args, **kwargs):
        """Displays a listing of the most informative features for this
        classifier.

        :rtype: None
        """
        return self.classifier.show_most_informative_features(*args, **kwargs)


class DecisionTreeClassifier(NLTKClassifier):
    """A classifier based on the decision tree algorithm, as implemented in
    NLTK.

    :param train_set: The training set, either a list of tuples of the form
        ``(text, classification)`` or a filename. ``text`` may be either a
        string or an iterable.
    :param feature_extractor: A feature extractor function that takes one or
        two arguments: ``document`` and ``train_set``.
    :param format: If ``train_set`` is a filename, the file format, e.g.
        ``"csv"`` or ``"json"``. If ``None``, will attempt to detect the file
        format.

    .. versionadded:: 0.6.2
    """

    nltk_class = nltk.classify.decisiontree.DecisionTreeClassifier

    def pretty_format(self, *args, **kwargs):
        """Return a string containing a pretty-printed version of this
        decision tree. Each line in the string corresponds to a single
        decision tree node or leaf, and indentation is used to display the
        structure of the tree.

        :rtype: str
        """
        return self.classifier.pretty_format(*args, **kwargs)

    # Backwards-compatible alias
    pprint = pretty_format

    def pseudocode(self, *args, **kwargs):
        """Return a string representation of this decision tree that
        expresses the decisions it makes as a nested set of pseudocode if
        statements.

        :rtype: str
        """
        return self.classifier.pseudocode(*args, **kwargs)


class PositiveNaiveBayesClassifier(NLTKClassifier):
    """A variant of the Naive Bayes Classifier that performs binary
    classification with partially-labeled training sets, i.e. when only one
    class is labeled and the other is not. Assuming a prior distribution on
    the two labels, uses the unlabeled set to estimate the frequencies of the
    features.

    Example usage:
    ::

        >>> from textblob.classifiers import PositiveNaiveBayesClassifier
        >>> sports_sentences = ['The team dominated the game',
        ...                     'They lost the ball',
        ...                     'The game was intense',
        ...                     'The goalkeeper catched the ball',
        ...                     'The other team controlled the ball']
        >>> various_sentences = ['The President did not comment',
        ...                      'I lost the keys',
        ...                      'The team won the game',
        ...                      'Sara has two kids',
        ...                      'The ball went off the court',
        ...                      'They had the ball for the whole game',
        ...                      'The show is over']
        >>> classifier = PositiveNaiveBayesClassifier(positive_set=sports_sentences,
        ...                                           unlabeled_set=various_sentences)
        >>> classifier.classify("My team lost the game")
        True
        >>> classifier.classify("And now for something completely different.")
        False

    :param positive_set: A collection of strings that have the positive label.
    :param unlabeled_set: A collection of unlabeled strings.
    :param feature_extractor: A feature extractor function.
    :param positive_prob_prior: A prior estimate of the probability of the
        label ``True``.

    .. versionadded:: 0.7.0
    """

    nltk_class = nltk.classify.PositiveNaiveBayesClassifier

    def __init__(self, positive_set, unlabeled_set,
                 feature_extractor=contains_extractor,
                 positive_prob_prior=0.5, **kwargs):
        self.feature_extractor = feature_extractor
        self.positive_set = positive_set
        self.unlabeled_set = unlabeled_set
        self.positive_features = [self.extract_features(d) for d in self.positive_set]
        self.unlabeled_features = [self.extract_features(d) for d in self.unlabeled_set]
        self.positive_prob_prior = positive_prob_prior

    def __repr__(self):
        class_name = self.__class__.__name__
        return (
            f"<{class_name} trained on {len(self.positive_set)} labeled "
            f"and {len(self.unlabeled_set)} unlabeled instances>"
        )

    def train(self, *args, **kwargs):
        """Train the classifier with labeled and unlabeled feature sets and
        return the classifier. Takes the same arguments as the wrapped NLTK
        class. This method is implicitly called when calling ``classify`` or
        ``accuracy`` methods and is included only to allow passing in
        arguments to the ``train`` method of the wrapped NLTK class.

        :rtype: A classifier
        """
        self.classifier = self.nltk_class.train(
            self.positive_features, self.unlabeled_features, self.positive_prob_prior
        )
        return self.classifier

    def update(self, new_positive_data=None, new_unlabeled_data=None,
               positive_prob_prior=0.5, *args, **kwargs):
        """Update the classifier with new data and re-trains the classifier.

        :param new_positive_data: List of new, labeled strings.
        :param new_unlabeled_data: List of new, unlabeled strings.
        """
        self.positive_prob_prior = positive_prob_prior
        if new_positive_data:
            self.positive_set += new_positive_data
            self.positive_features += [
                self.extract_features(d) for d in new_positive_data
            ]
        if new_unlabeled_data:
            self.unlabeled_set += new_unlabeled_data
            self.unlabeled_features += [
                self.extract_features(d) for d in new_unlabeled_data
            ]
        self.classifier = self.nltk_class.train(
            self.positive_features, self.unlabeled_features,
            self.positive_prob_prior, *args, **kwargs
        )
        return True


class MaxEntClassifier(NLTKClassifier):
    __doc__ = nltk.classify.maxent.MaxentClassifier.__doc__
    nltk_class = nltk.classify.maxent.MaxentClassifier

    def prob_classify(self, text):
        """Return the label probability distribution for classifying a string
        of text.

        Example:
        ::

            >>> classifier = MaxEntClassifier(train_data)
            >>> prob_dist = classifier.prob_classify("I feel happy this morning.")
            >>> prob_dist.max()
            'positive'
            >>> prob_dist.prob("positive")
            0.7

        :rtype: nltk.probability.DictionaryProbDist
        """
        feats = self.extract_features(text)
        return self.classifier.prob_classify(feats)
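To make the feature-extraction contract concrete, here is a minimal, self-contained sketch of the ``basic_extractor`` idea. It uses a naive whitespace tokenizer in place of ``word_tokenize``, so punctuation handling differs from textblob's, and the name ``sketch_basic_extractor`` is hypothetical (not part of the library):

```python
# Minimal sketch of the basic_extractor idea: for every word in the
# training vocabulary, emit a boolean "contains(word)" feature saying
# whether the document includes that word. str.split() stands in for
# word_tokenize here, so this is an approximation of textblob's output.
train = [
    ("I love this sandwich", "pos"),
    ("I do not like this restaurant", "neg"),
]

# Training vocabulary: every word seen anywhere in the training set.
vocab = {word for text, _label in train for word in text.lower().split()}


def sketch_basic_extractor(document, vocabulary):
    """Return {"contains(word)": bool} for each vocabulary word."""
    tokens = set(document.lower().split())
    return {f"contains({word})": (word in tokens) for word in vocabulary}


features = sketch_basic_extractor("I love this place", vocab)
print(features["contains(love)"])        # True
print(features["contains(restaurant)"])  # False
```

Note that the feature dict always has one entry per vocabulary word; words in the document but not in the training vocabulary (like "place" above) contribute no features, which is exactly why a Naive Bayes model trained on these features ignores unseen words.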
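The delegation pattern used by ``NLTKClassifier`` (descendants set only ``nltk_class``; ``classify`` extracts features and forwards them to the wrapped object) can be sketched without NLTK installed. ``FakeNLTKClassifier`` and ``WrapperSketch`` below are hypothetical stand-ins for illustration only:

```python
class FakeNLTKClassifier:
    """Hypothetical stand-in exposing NLTK's train/classify/labels interface."""

    def __init__(self, labeled_features):
        # Remember the label set; a real classifier would fit parameters here.
        self._labels = sorted({label for _feats, label in labeled_features})

    @classmethod
    def train(cls, labeled_features):
        return cls(labeled_features)

    def classify(self, feats):
        # Trivial rule: always return the alphabetically first label.
        return self._labels[0]

    def labels(self):
        return self._labels


class WrapperSketch:
    # Descendant classes only need to set this attribute.
    nltk_class = FakeNLTKClassifier

    def __init__(self, train_set, feature_extractor):
        self.feature_extractor = feature_extractor
        self.train_features = [(feature_extractor(text), label) for text, label in train_set]
        self.classifier = self.nltk_class.train(self.train_features)

    def classify(self, text):
        # Extract features from raw text, then delegate to the wrapped object.
        return self.classifier.classify(self.feature_extractor(text))


def extractor(text):
    return {w: True for w in text.lower().split()}


wrapper = WrapperSketch([("good game", "pos"), ("bad loss", "neg")], extractor)
print(wrapper.classifier.labels())    # ['neg', 'pos']
print(wrapper.classify("good game"))  # 'neg' (the stand-in always picks the first label)
```

The real wrapper adds error handling (a ``ValueError`` when ``nltk_class`` is unset), caching of the trained classifier, and file-format detection, but the train-once-then-delegate structure is the same.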