from itertools import chain
from numbers import Integral

import numpy as np
import scipy.sparse as sp

from sklearn.utils import metadata_routing

from ..base import BaseEstimator, TransformerMixin, _fit_context
from ..utils._param_validation import Interval, StrOptions
from ._hashing_fast import transform as _hashing_transform


def _iteritems(d):
    """Like d.iteritems, but accepts any collections.Mapping."""
    return d.iteritems() if hasattr(d, "iteritems") else d.items()


class FeatureHasher(TransformerMixin, BaseEstimator):
    """Implements feature hashing, aka the hashing trick.

    This class turns sequences of symbolic feature names (strings) into
    scipy.sparse matrices, using a hash function to compute the matrix column
    corresponding to a name. The hash function employed is the signed 32-bit
    version of Murmurhash3.

    Feature names of type byte string are used as-is. Unicode strings are
    converted to UTF-8 first, but no Unicode normalization is done. Feature
    values must be (finite) numbers.

    This class is a low-memory alternative to DictVectorizer and
    CountVectorizer, intended for large-scale (online) learning and situations
    where memory is tight, e.g. when running prediction code on embedded
    devices.

    For an efficiency comparison of the different feature extractors, see
    :ref:`sphx_glr_auto_examples_text_plot_hashing_vs_dict_vectorizer.py`.

    Read more in the :ref:`User Guide <feature_hashing>`.

    .. versionadded:: 0.13

    Parameters
    ----------
    n_features : int, default=2**20
        The number of features (columns) in the output matrices. Small numbers
        of features are likely to cause hash collisions, but large numbers
        will cause larger coefficient dimensions in linear learners.

    input_type : str, default='dict'
        Choose a string from {'dict', 'pair', 'string'}.
        Either "dict" (the default) to accept dictionaries over
        (feature_name, value); "pair" to accept pairs of (feature_name, value);
        or "string" to accept single strings.
        feature_name should be a string, while value should be a number.
        In the case of "string", a value of 1 is implied.
        The feature_name is hashed to find the appropriate column for the
        feature. The value's sign might be flipped in the output (but see
        non_negative, below).

    dtype : numpy dtype, default=np.float64
        The type of feature values. Passed to scipy.sparse matrix constructors
        as the dtype argument. Do not set this to bool, np.boolean or any
        unsigned integer type.

    alternate_sign : bool, default=True
        When True, an alternating sign is added to the features as to
        approximately conserve the inner product in the hashed space even for
        small n_features. This approach is similar to sparse random projection.

        .. versionchanged:: 0.19
            ``alternate_sign`` replaces the now deprecated ``non_negative``
            parameter.

    See Also
    --------
    DictVectorizer : Vectorizes string-valued features using a hash table.
    sklearn.preprocessing.OneHotEncoder : Handles nominal/categorical features.

    Notes
    -----
    This estimator is :term:`stateless` and does not need to be fitted.
    However, we recommend to call :meth:`fit_transform` instead of
    :meth:`transform`, as parameter validation is only performed in
    :meth:`fit`.

    Examples
    --------
    >>> from sklearn.feature_extraction import FeatureHasher
    >>> h = FeatureHasher(n_features=10)
    >>> D = [{'dog': 1, 'cat':2, 'elephant':4},{'dog': 2, 'run': 5}]
    >>> f = h.transform(D)
    >>> f.toarray()
    array([[ 0.,  0., -4., -1.,  0.,  0.,  0.,  0.,  0.,  2.],
           [ 0.,  0.,  0., -2., -5.,  0.,  0.,  0.,  0.,  0.]])

    With `input_type="string"`, the input must be an iterable over iterables of
    strings:

    >>> h = FeatureHasher(n_features=8, input_type="string")
    >>> raw_X = [["dog", "cat", "snake"], ["snake", "dog"], ["cat", "bird"]]
    >>> f = h.transform(raw_X)
    >>> f.toarray()
    array([[ 0.,  0.,  0., -1.,  0., -1.,  0.,  1.],
           [ 0.,  0.,  0., -1.,  0., -1.,  0.,  0.],
           [ 0., -1.,  0.,  0.,  0.,  0.,  0.,  1.]])
    """

    # raw_X should have been called X
    __metadata_request__transform = {"raw_X": metadata_routing.UNUSED}

    _parameter_constraints: dict = {
        "n_features": [Interval(Integral, 1, np.iinfo(np.int32).max, closed="both")],
        "input_type": [StrOptions({"dict", "pair", "string"})],
        "dtype": "no_validation",  # delegate to numpy
        "alternate_sign": ["boolean"],
    }

    def __init__(
        self,
        n_features=(2**20),
        *,
        input_type="dict",
        dtype=np.float64,
        alternate_sign=True,
    ):
        self.dtype = dtype
        self.input_type = input_type
        self.n_features = n_features
        self.alternate_sign = alternate_sign

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X=None, y=None):
        """Only validates estimator's parameters.

        This method allows to: (i) validate the estimator's parameters and
        (ii) be consistent with the scikit-learn transformer API.

        Parameters
        ----------
        X : Ignored
            Not used, present here for API consistency by convention.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        self : object
            FeatureHasher class instance.
        """
        return self

    def transform(self, raw_X):
        """Transform a sequence of instances to a scipy.sparse matrix.

        Parameters
        ----------
        raw_X : iterable over iterable over raw features, length = n_samples
            Samples. Each sample must be an iterable (e.g., a list or tuple)
            containing/generating feature names (and optionally values, see
            the input_type constructor argument) which will be hashed.
            raw_X need not support the len function, so it can be the result
            of a generator; n_samples is determined on the fly.

        Returns
        -------
        X : sparse matrix of shape (n_samples, n_features)
            Feature matrix, for use with estimators or further transformers.
        """
        raw_X = iter(raw_X)
        if self.input_type == "dict":
            raw_X = (_iteritems(d) for d in raw_X)
        elif self.input_type == "string":
            first_raw_X = next(raw_X)
            if isinstance(first_raw_X, str):
                raise ValueError(
                    "Samples can not be a single string. The input must be an"
                    " iterable over iterables of strings."
                )
            raw_X_ = chain([first_raw_X], raw_X)
            raw_X = (((f, 1) for f in x) for x in raw_X_)

        indices, indptr, values = _hashing_transform(
            raw_X, self.n_features, self.dtype, self.alternate_sign, seed=0
        )
        n_samples = indptr.shape[0] - 1

        if n_samples == 0:
            raise ValueError("Cannot vectorize empty sequence.")

        X = sp.csr_matrix(
            (values, indices, indptr),
            dtype=self.dtype,
            shape=(n_samples, self.n_features),
        )
        X.sum_duplicates()  # also sorts the indices

        return X

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.input_tags.two_d_array = False
        if self.input_type == "dict":
            tags.input_tags.dict = True
        elif self.input_type == "string":
            tags.input_tags.string = True
        tags.requires_fit = False
        return tags
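The class docstring demonstrates the "dict" and "string" input types but not the third one, "pair". A minimal usage sketch for `input_type="pair"` (the feature names and values below are illustrative, not from the original file):

```python
from sklearn.feature_extraction import FeatureHasher

# With input_type="pair", each sample is an iterable of
# (feature_name, value) tuples: the hashed name selects the
# column, and the value is accumulated there (possibly with
# a flipped sign when alternate_sign=True).
h = FeatureHasher(n_features=8, input_type="pair")
raw_X = [[("dog", 1), ("cat", 2)], [("dog", 2), ("run", 5)]]
X = h.transform(raw_X)

print(X.shape)  # (2, 8)
```

Note that no `fit` is needed before `transform`: the estimator is stateless, so a generator can be passed for `raw_X` and samples are counted on the fly.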