`L i ddlZddlmZmZddlZddlmZmZddl m Z m Z ddl m Z ddlmZmZmZmZmZddlmZmZmZdd lmZmZmZmZmZdd lmZm Z m!Z!m"Z"dd l#m$Z$m%Z%dd l&m'Z'dd l(m)Z)m*Z*m+Z+m,Z,ddl-m.Z.m/Z/ddl0m1Z1m2Z2m3Z3m4Z4m5Z5ddl6m7Z7dZ8gdZ9dZ:dLdZ;e"ddge eddhgdgdgdgddddddddZ<GddeeeZ=e"dge eddhgd d!dMddd"d#Z>Gd$d%eeeZ?Gd&d'eeeZ@e"ddge eddhgd d!ddd"d(ZAGd)d*eeeZBe"ddge eddhgd d!dddd+dd!d,d-ZCe"ddge!hd.ge eddhgdgdgd/ddNddd!d0d1ZDGd2d3eeeZEe"ddgeeddd45gdgd6dd7dd8d9ZFGd:d;eeeZGGd<d=eeeZHe"ddgeeddd45gd>ddOd?ZIGd@dAeeeZJe"ddge eddhgd d!ddBdCd!eKdDdddEdFZLGdGdHeeeZMe"dIdgid!dPdddJdKZNy)QN)IntegralReal)sparsestats)boxcox inv_boxcox)metadata_routing) BaseEstimatorClassNamePrefixFeaturesOutMixinOneToOneFeatureMixinTransformerMixin _fit_context) _array_api check_arrayresample)_find_matching_floating_dtype_modify_in_place_if_numpydevice get_namespaceget_namespace_and_device)IntervalOptions StrOptionsvalidate_params)_incremental_mean_and_var row_norms)_yeojohnson_lambda)incr_mean_variance_axisinplace_column_scalemean_variance_axis min_max_axis)inplace_csr_row_normalize_l1inplace_csr_row_normalize_l2) FLOAT_DTYPES_check_sample_weightcheck_is_fittedcheck_random_state validate_data) OneHotEncodergHz>) BinarizerKernelCenterer MaxAbsScaler MinMaxScaler Normalizerr+PowerTransformerQuantileTransformer RobustScalerStandardScaleradd_dummy_featurebinarize maxabs_scale minmax_scale normalizepower_transformquantile_transform robust_scalescalectjtjj}||z|z||z|zdzz}||kS)aODetect if a feature is indistinguishable from a constant feature. The detection is based on its computed variance and on the theoretical error bounds of the '2 pass algorithm' for variance computation. See "Algorithms for computing the sample variance: analysis and recommendations", by Chan, Golub, and LeVeque. r )npfinfofloat64eps)varmean n_samplesrB upper_bounds a/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/sklearn/preprocessing/_data.py_is_constant_featurerHLsI ((2::  " "Cc/C'9t+;c+Aa*GGK + Tctj|r |dk(rd}|St|\}}|+|d|j|jj zk}|r|j |d}d||<|S)aSet scales of near constant features to 1. The goal is to avoid division by very small or zero values. Near constant features are detected automatically by identifying scales close to machine precision unless they are precomputed by the caller and passed with the `constant_mask` kwarg. Typically for standard scaling, the scales are the standard deviation while near constant features are better detected on the computed variances which are closer to machine precision by construction. ? Tcopy)r?isscalarrr@dtyperBasarray)r=rO constant_maskxp_s rG_handle_zeros_in_scalerV\s {{5 C<E e$A  "B%++)>)B)B$BBM JJu4J0E"m rIz array-likez sparse matrixboolean)Xaxis with_meanwith_stdrOprefer_skip_nested_validation)rYrZr[rOc .t|d|ddtd}tj|rc|r t d|dk7rt d|z|r?t |d \}}t |d }t|d tj|z |Stj|}|rtj||}|rtj||}tj||} |rL| z} tj| d } tj| dstj d | | z} |r[t d }| |z} |rGtj| d } tj| dstj d | | z} |S)a Standardize a dataset along any axis. Center to the mean and component wise scale to unit variance. Read more in the :ref:`User Guide `. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to center and scale. axis : {0, 1}, default=0 Axis used to compute the means and standard deviations along. If 0, independently standardize each feature, otherwise (if 1) standardize each sample. with_mean : bool, default=True If True, center the data before scaling. with_std : bool, default=True If True, scale the data to unit variance (or equivalently, unit standard deviation). copy : bool, default=True If False, try to avoid a copy and scale in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) The transformed data. See Also -------- StandardScaler : Performs scaling to unit variance using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Notes ----- This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected to either set explicitly `with_mean=False` (in that case, only variance scaling will be performed on the features of the CSC matrix) or to call `X.toarray()` if he/she expects the materialized dense array to fit in memory. To avoid memory copy the caller should pass a CSC matrix. NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance. For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. .. warning:: Risk of data leak Do not use :func:`~sklearn.preprocessing.scale` unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using :class:`~sklearn.preprocessing.StandardScaler` within a :ref:`Pipeline ` in order to prevent most risks of data leaking: `pipe = make_pipeline(StandardScaler(), LogisticRegression())`. Examples -------- >>> from sklearn.preprocessing import scale >>> X = [[-2, 1, 2], [-1, 0, 1]] >>> scale(X, axis=0) # scaling each column independently array([[-1., 1., 1.], [ 1., -1., -1.]]) >>> scale(X, axis=1) # scaling each row independently array([[-1.37, 0.39, 0.98], [-1.22, 0. , 1.22]]) cscFzthe scale function allow-nan) accept_sparserO ensure_2d estimatorrQensure_all_finitezlCannot center sparse matrices: pass `with_mean=False` instead See docstring for motivation and alternatives.rz4Can only scale sparse matrix on axis=0, got axis=%drYrNr*zNumerical issues were encountered when centering the data and might not be solved. Dataset may contain too large values. You may need to prescale your features.zNumerical issues were encountered when scaling the data and might not be solved. The standard deviation of the data is probably very close to 0. )rr%rissparse ValueErrorr!rVr r?sqrtrRnanmeannanstdrollaxisallclosewarningswarn) rXrYrZr[rOrUrCmean_scale_Xrmean_1mean_2s rGr=r=sB   &% Aq B  19FM  '2FAs(59C A $4 5\ HY JJqM JJq$'E YYq$'F[[D !  %KBZZ+F ;;vq) 1f  +F?F &LBBQ/ {{61-MM,&LB HrIceZdZUdZegdgdgdZeed<dddddZd Z dd Z e d dd Z d Z dZfdZxZS)r/ai Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one. The transformation is given by:: X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) X_scaled = X_std * (max - min) + min where min, max = feature_range. This transformation is often used as an alternative to zero mean, unit variance scaling. `MinMaxScaler` doesn't reduce the effect of outliers, but it linearly scales them down into a fixed range, where the largest occurring data point corresponds to the maximum value and the smallest one corresponds to the minimum value. For an example visualization, refer to :ref:`Compare MinMaxScaler with other scalers `. Read more in the :ref:`User Guide `. Parameters ---------- feature_range : tuple (min, max), default=(0, 1) Desired range of transformed data. copy : bool, default=True Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array). clip : bool, default=False Set to True to clip transformed values of held-out data to provided `feature range`. .. versionadded:: 0.24 Attributes ---------- min_ : ndarray of shape (n_features,) Per feature adjustment for minimum. Equivalent to ``min - X.min(axis=0) * self.scale_`` scale_ : ndarray of shape (n_features,) Per feature relative scaling of the data. Equivalent to ``(max - min) / (X.max(axis=0) - X.min(axis=0))`` .. versionadded:: 0.17 *scale_* attribute. data_min_ : ndarray of shape (n_features,) Per feature minimum seen in the data .. versionadded:: 0.17 *data_min_* data_max_ : ndarray of shape (n_features,) Per feature maximum seen in the data .. versionadded:: 0.17 *data_max_* data_range_ : ndarray of shape (n_features,) Per feature range ``(data_max_ - data_min_)`` seen in the data .. versionadded:: 0.17 *data_range_* n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 n_samples_seen_ : int The number of samples processed by the estimator. It will be reset on new calls to fit, but increments across ``partial_fit`` calls. feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- minmax_scale : Equivalent function without the estimator API. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. Examples -------- >>> from sklearn.preprocessing import MinMaxScaler >>> data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]] >>> scaler = MinMaxScaler() >>> print(scaler.fit(data)) MinMaxScaler() >>> print(scaler.data_max_) [ 1. 18.] >>> print(scaler.transform(data)) [[0. 0. ] [0.25 0.25] [0.5 0.5 ] [1. 1. ]] >>> print(scaler.transform([[2, 2]])) [[1.5 0. ]] rW feature_rangerOclip_parameter_constraintsTF)rOrwc.||_||_||_yNru)selfrvrOrws rG__init__zMinMaxScaler.__init__s*  rIc6t|dr |`|`|`|`|`|`yyzwReset internal data-dependent state of the scaler, if necessary. __init__ parameters are not touched. rpN)hasattrrpmin_n_samples_seen_ data_min_ data_max_ data_range_r{s rG_resetzMinMaxScaler._resets3 4 "  $ #rIcF|j|j||S)aCompute the minimum and maximum to be used for later scaling. Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Fitted scaler. r partial_fitr{rXys rGfitzMinMaxScaler.fit $ 1%%rIr\c|j}|d|dk\rtdt|ztj|r t dt |\}}t|d }t|||tj|d}t|}|j|d|j||j|d|j|f}tj|d| }tj|d| } |r|j d|_nZ|j%|j&|}|j)|j*| } |xj"|j dz c_| |z } |d|dz t-| d z |_|d||j.zz |_||_| |_| |_|S) apOnline computation of min and max on X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Fitted scaler. rr*zFMinimum of desired feature range must be smaller than maximum. Got %s.zPMinMaxScaler does not support sparse input. Consider using MaxAbsScaler instead.rr`)resetrQrdrQrrYrTTrN)rvrgstrrrf TypeErrorrrr)rsupported_float_dtypesrrRrQ_nanmin_nanmaxshaperminimumrmaximumrrVrprr) r{rXrrvrTrU first_passdevice_data_mindata_max data_ranges rGrzMinMaxScaler.partial_fits,**  }Q/ /Xm$%  ??1 7  a A '899   33B7)  ) JJ}Q'qwwwJ G JJ}Q'qwwwJ G %%aaB7%%aaB7 #$771:D zz$..(;Hzz$..(;H  AGGAJ . ( $Q'-*::>T T?   "!$x$++'== !!% rIc t|t|\}}t|||jt j |ddd}||j z}||jz }|jrxt|}t||j||j|jd|j||j|jd|j||}|S) a=Scale features of X according to feature_range. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data that will be transformed. Returns ------- Xt : ndarray of shape (n_samples, n_features) Transformed data. Tr`F)rOrQforce_writeablerdrrrr*)out)r'rr)rOrrrprrwrrrRrvrQ)r{rXrTrUrs rG transformzMinMaxScaler.transforms a A   33B7 )  T[[ TYY 99QiG) 4--a0 P 4--a0 P ArIct|t|\}}t||jt j |dd}||j z}||jz}|S)a\Undo the scaling of X according to feature_range. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data that will be transformed. It cannot be sparse. Returns ------- X_original : ndarray of shape (n_samples, n_features) Transformed data. Tr`)rOrQrrd)r'rrrOrrrrpr{rXrTrUs rGinverse_transformzMinMaxScaler.inverse_transform9sd a A  33B7 )   TYY T[[rIcTt|}d|j_d|_|SNT)super__sklearn_tags__ input_tags allow_nanarray_api_supportr{tags __class__s rGrzMinMaxScaler.__sklearn_tags__Vs)w')$(!!% rI)rr*rz)__name__ __module__ __qualname____doc__tuplerxdict__annotations__r|rrrrrrr __classcell__rs@rGr/r/'ssod   $D T !&*5E6EN'R:rIr/)rXrYF)rYrOcJt|ddtd}|j}|dk(r|j|jdd}t ||}|dk(r|j |}n%|j |jj}|dk(r|j}|S)a Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, i.e. between zero and one. The transformation is given by (when ``axis=0``):: X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) X_scaled = X_std * (max - min) + min where min, max = feature_range. The transformation is calculated as (when ``axis=0``):: X_scaled = scale * X + min - X.min(axis=0) * scale where scale = (max - min) / (X.max(axis=0) - X.min(axis=0)) This transformation is often used as an alternative to zero mean, unit variance scaling. Read more in the :ref:`User Guide `. .. versionadded:: 0.17 *minmax_scale* function interface to :class:`~sklearn.preprocessing.MinMaxScaler`. Parameters ---------- X : array-like of shape (n_samples, n_features) The data. feature_range : tuple (min, max), default=(0, 1) Desired range of transformed data. axis : {0, 1}, default=0 Axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample. copy : bool, default=True If False, try to avoid a copy and scale in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. Returns ------- X_tr : ndarray of shape (n_samples, n_features) The transformed data. .. warning:: Risk of data leak Do not use :func:`~sklearn.preprocessing.minmax_scale` unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using :class:`~sklearn.preprocessing.MinMaxScaler` within a :ref:`Pipeline ` in order to prevent most risks of data leaking: `pipe = make_pipeline(MinMaxScaler(), LogisticRegression())`. See Also -------- MinMaxScaler : Performs scaling to a given range using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Notes ----- For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. Examples -------- >>> from sklearn.preprocessing import minmax_scale >>> X = [[-2, 1, 2], [-1, 0, 1]] >>> minmax_scale(X, axis=0) # scale each column independently array([[0., 1., 1.], [1., 0., 0.]]) >>> minmax_scale(X, axis=1) # scale each row independently array([[0. , 0.75, 1. ], [0. , 0.5 , 1. ]]) Fr`)rOrbrQrdr*r)rvrO) rr%ndimreshaperr/ fit_transformTravel)rXrvrYrO original_ndimss rGr8r8]s|  %  AFFM IIaggaj! $=t`. This scaler can also be applied to sparse CSR or CSC matrices by passing `with_mean=False` to avoid breaking the sparsity structure of the data. Read more in the :ref:`User Guide `. Parameters ---------- copy : bool, default=True If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned. with_mean : bool, default=True If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory. with_std : bool, default=True If True, scale the data to unit variance (or equivalently, unit standard deviation). Attributes ---------- scale_ : ndarray of shape (n_features,) or None Per feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using `np.sqrt(var_)`. If a variance is zero, we can't achieve unit variance, and the data is left as-is, giving a scaling factor of 1. `scale_` is equal to `None` when `with_std=False`. .. versionadded:: 0.17 *scale_* mean_ : ndarray of shape (n_features,) or None The mean value for each feature in the training set. Equal to ``None`` when ``with_mean=False`` and ``with_std=False``. var_ : ndarray of shape (n_features,) or None The variance for each feature in the training set. Used to compute `scale_`. Equal to ``None`` when ``with_mean=False`` and ``with_std=False``. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 n_samples_seen_ : int or ndarray of shape (n_features,) The number of samples processed by the estimator for each feature. If there are no missing samples, the ``n_samples_seen`` will be an integer, otherwise it will be an array of dtype int. If `sample_weights` are used it will be a float (if no missing data) or an array of dtype float that sums the weights seen so far. Will be reset on new calls to fit, but increments across ``partial_fit`` calls. See Also -------- scale : Equivalent function without the estimator API. :class:`~sklearn.decomposition.PCA` : Further removes the linear correlation across features with 'whiten=True'. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance. Examples -------- >>> from sklearn.preprocessing import StandardScaler >>> data = [[0, 0], [0, 0], [1, 1], [1, 1]] >>> scaler = StandardScaler() >>> print(scaler.fit(data)) StandardScaler() >>> print(scaler.mean_) [0.5 0.5] >>> print(scaler.transform(data)) [[-1. -1.] [-1. -1.] [ 1. 1.] [ 1. 1.]] >>> print(scaler.transform([[2, 2]])) [[3. 3.]] rW)rOrZr[rxTc.||_||_||_yrz)rZr[rO)r{rOrZr[s rGr|zStandardScaler.__init__`s"   rIc.t|dr |`|`|`|`yyr~)rrprrovar_rs rGrzStandardScaler._resetes( 4 " $  #rIcH|j|j|||S)aCompute the mean and std to be used for later scaling. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. sample_weight : array-like of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.24 parameter *sample_weight* support to StandardScaler. Returns ------- self : object Fitted scaler. r)r{rXr sample_weights rGrzStandardScaler.fitrs"0 1m44rIr\ct|d }t||dtd|}|jd}|t |||j }|t jn |j }t|dst j|||_ nvt j|jdk(rTt j|j|jd|_ |jj|d |_ tj|r|jr t!d |j"d k(rtj$ntj&}|j(rt|d s#t+|d |d\|_|_|_ nBt1|d |j,|j.|j|\|_|_|_ |j,jt j2d |_|j.jt j2d |_n~d|_d|_t ||}||t j4|j6|j8|j:f|jz} |xjt j<|| z j|z c_ nt|d s"d|_|j(rd|_nd|_|jsc|j(sWd|_d|_|xj|jd t j4|j=d z z c_ nAt?||j,|j.|j|\|_|_|_ t j@|jd k(r|jd |_ |j(r]tC|j.|j,|j} tEt jF|j.d| |_$|Sd|_$|S)aOnline computation of mean and std on X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. The algorithm for incremental mean and std is given in Equation 1.5a,b in Chan, Tony F., Gene H. Golub, and Randall J. LeVeque. "Algorithms for computing the sample variance: Analysis and recommendations." The American Statistician 37.3 (1983): 242-247: Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. sample_weight : array-like of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.24 parameter *sample_weight* support to StandardScaler. Returns ------- self : object Fitted scaler. rcsrr_r`)rarQrdrr*NrQFrNmCannot center sparse matrices: pass `with_mean=False` instead. See docstring for motivation and alternatives.rrprT)rYweightsreturn_sum_weights)rY last_meanlast_varlast_nr)rrKre)r)rOrS)%rr)r%rr&rQr?int64zerosrsizerepeatastyperrfrZrgformat csr_matrix csc_matrixr[r!rorrrAisnandataindicesindptrsumrptprHrVrhrp) r{rXrr first_call n_featuresrQsparse_constructorrsum_weights_nanrSs rGrzStandardScaler.partial_fitsB!'899   ()  WWQZ  $0QM*1qwwt./#%88Je#DD WWT)) *a /#%99T-A-A1771:#ND #'#7#7#>#>u5#>#QD ??1 ~~ N &'XX%6!!FXXaff%qyy!((;177-#$$?)J(R(R)$ 4*  == #DI $DI>>$--!   $$ RXXa[__!_5L(LL$?XJJII(("/ ?; DIt'; 66$&& '1 ,#'#7#7#:D ==1 4::t';';M1 "mDK  DK rIc |t|||n |j}t||dd|tdd}t j |r>|j r td|jt|d|jz |S|j r||jz}|jr||jz}|S)aPerform standardization by centering and scaling. Parameters ---------- X : {array-like, sparse matrix of shape (n_samples, n_features) The data used to scale along the features axis. copy : bool, default=None Copy the input X or not. Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array. FrTr`rrarOrQrrdrr*) r'rOr)r%rrfrZrgrpr ror[r{rXrOs rGrzStandardScaler.transform!s 'tTYY    )  ??1 ~~ N{{&$QDKK8  ~~TZZ}}T[[ rIcrt|||n |j}t|d|tdd}t j |r;|j r td|jt||j|S|jr||jz}|j r||jz }|S)aScale back the data to the original representation. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to scale along the features axis. copy : bool, default=None Copy the input `X` or not. Returns ------- X_original : {ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array. rTr`rarOrQrrdznCannot uncenter sparse matrices: pass `with_mean=False` instead See docstring for motivation and alternatives.) r'rOrr%rrfrZrgrpr r[rors rGrz StandardScaler.inverse_transformMs 'tTYY   )   ??1 ~~ M{{&$Q 4  }}T[[ ~~TZZrIct|}d|j_|j |j_ddg|j _|S)NTrAfloat32)rrrrrZrtransformer_tagspreserves_dtypers rGrzStandardScaler.__sklearn_tags__xsGw')$(!%)^^!31:I0F- rI)NNrzrrrrrxrrr|rrrrrrrrrs@rGr4r4stDN [K$D $td 565Q6Qf*X)VrIr4c~eZdZUdZddgiZeed<dddZdZdd Z e d dd Z d Z d Z fdZxZS)r.arScale each feature by its maximum absolute value. This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity. This scaler can also be applied to sparse CSR or CSC matrices. `MaxAbsScaler` doesn't reduce the effect of outliers; it only linearly scales them down. For an example visualization, refer to :ref:`Compare MaxAbsScaler with other scalers `. .. versionadded:: 0.17 Parameters ---------- copy : bool, default=True Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array). Attributes ---------- scale_ : ndarray of shape (n_features,) Per feature relative scaling of the data. .. versionadded:: 0.17 *scale_* attribute. max_abs_ : ndarray of shape (n_features,) Per feature maximum absolute value. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 n_samples_seen_ : int The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across ``partial_fit`` calls. See Also -------- maxabs_scale : Equivalent function without the estimator API. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. Examples -------- >>> from sklearn.preprocessing import MaxAbsScaler >>> X = [[ 1., -1., 2.], ... [ 2., 0., 0.], ... [ 0., 1., -1.]] >>> transformer = MaxAbsScaler().fit(X) >>> transformer MaxAbsScaler() >>> transformer.transform(X) array([[ 0.5, -1. , 1. ], [ 1. , 0. , 0. ], [ 0. , 1. , -0.5]]) rOrWrxTrNc||_yrzrN)r{rOs rGr|zMaxAbsScaler.__init__s  rIc*t|dr|`|`|`yyr~)rrprmax_abs_rs rGrzMaxAbsScaler._resets# 4 " $  #rIcF|j|j||S)aCompute the maximum absolute value to be used for later scaling. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Fitted scaler. rrs rGrzMaxAbsScaler.fitrrIr\c tt|\}}t|d }t|||dtj|d}t j |rNt|dd\}}tjtj|tj|}n'tj|j|d|}|r|jd|_ n>|j|j|}|xj|jdz c_ ||_t|d |_|S) aOnline computation of max absolute value of X for later scaling. All of X is processed as a single batch. This is intended for cases when :meth:`fit` is not feasible due to very large number of `n_samples` or because X is read from a continuous stream. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. y : None Ignored. Returns ------- self : object Fitted scaler. rrr`)rrarQrdrT)rY ignore_nanrrN)rrr)rrrrfr"r?rabsrrrrrVrp) r{rXrrTrUrminsmaxsmax_abss rGrzMaxAbsScaler.partial_fits,a A '899   (33B7)   ??1 %aaDAJD$jjrvvd|` in order to prevent most risks of data leaking: `pipe = make_pipeline(MaxAbsScaler(), LogisticRegression())`. See Also -------- MaxAbsScaler : Performs scaling to the [-1, 1] range using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Notes ----- NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. Examples -------- >>> from sklearn.preprocessing import maxabs_scale >>> X = [[-2, 1, 2], [-1, 0, 1]] >>> maxabs_scale(X, axis=0) # scale each column independently array([[-1. , 1. , 1. ], [-0.5, 0. , 0.5]]) >>> maxabs_scale(X, axis=1) # scale each row independently array([[-1. , 0.5, 1. ], [-1. , 0. , 1. ]]) rFr`rarOrbrQrdr*rrN) rr%rrrr.rrr)rXrYrOrrs rGr7r7isX  $ %  AFFM IIaggaj! $$A qy OOA  OOACC " " GGI HrIceZdZUdZdgdgegdgdgdZeed<dddddddZe d dd Z d Z d Z fd Z xZS)r3avScale features using statistics that are robust to outliers. This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the :meth:`transform` method. Standardization of a dataset is a common preprocessing for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, using the median and the interquartile range often give better results. For an example visualization and comparison to other scalers, refer to :ref:`Compare RobustScaler with other scalers `. .. versionadded:: 0.17 Read more in the :ref:`User Guide `. Parameters ---------- with_centering : bool, default=True If `True`, center the data before scaling. This will cause :meth:`transform` to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory. with_scaling : bool, default=True If `True`, scale the data to interquartile range. quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0, default=(25.0, 75.0) Quantile range used to calculate `scale_`. By default this is equal to the IQR, i.e., `q_min` is the first quantile and `q_max` is the third quantile. .. versionadded:: 0.18 copy : bool, default=True If `False`, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned. unit_variance : bool, default=False If `True`, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of `q_max` and `q_min` for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. .. versionadded:: 0.24 Attributes ---------- center_ : array of floats The median value for each feature in the training set. scale_ : array of floats The (scaled) interquartile range for each feature in the training set. .. versionadded:: 0.17 *scale_* attribute. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- robust_scale : Equivalent function without the estimator API. sklearn.decomposition.PCA : Further removes the linear correlation across features with 'whiten=True'. Notes ----- https://en.wikipedia.org/wiki/Median https://en.wikipedia.org/wiki/Interquartile_range Examples -------- >>> from sklearn.preprocessing import RobustScaler >>> X = [[ 1., -2., 2.], ... [ -2., 1., 3.], ... [ 4., 1., -2.]] >>> transformer = RobustScaler().fit(X) >>> transformer RobustScaler() >>> transformer.transform(X) array([[ 0. , -2. , 0. ], [-1. , 0. , 0.4], [ 1. , 0. , -1.6]]) rW)with_centering with_scalingquantile_rangerO unit_variancerxTg9@gR@FcJ||_||_||_||_||_yrzrrrrrO)r{rrrrOrs rGr|zRobustScaler.__init__Cs+-(,* rIr\cvt||dtd}|j\}}d|cxkr |cxkrdks#ntdt |jz|j r=t j|r tdtj|d|_ nd |_ |jr}g}t|jd D]}t j|ri|j|j||j|d z}tj |jd|j" }||d t%|n |d d |f}|j'tj(||jtj*|}|d |dz |_t/|j,d |_|j0rYt2j4j7|dz t2j4j7|dz z } |j,| z |_|Sd |_|S)aCompute the median and quantiles to be used for scaling. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the median and quantiles used for later scaling along the features axis. y : Ignored Not used, present here for API consistency by convention. Returns ------- self : object Fitted scaler. r_r`)rarQrdrdzInvalid quantile range: %szqCannot center sparse matrices: use `with_centering=False` instead. See docstring for motivation and alternatives.reNr*rrQFrNgY@)r)r%rrgrrrrfr? nanmediancenter_rrangerrrrrQlenappend nanpercentile transposerprVrrnormppf) r{rXrq_minq_max quantiles feature_idxcolumn_nnz_data column_dataadjusts rGrzRobustScaler.fitRs(   )  ** uE)U)c)9C@S@S`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_sample, n_features) The data to center and scale. axis : int, default=0 Axis used to compute the medians and IQR along. If 0, independently scale each feature, otherwise (if 1) scale each sample. with_centering : bool, default=True If `True`, center the data before scaling. with_scaling : bool, default=True If `True`, scale the data to unit variance (or equivalently, unit standard deviation). quantile_range : tuple (q_min, q_max), 0.0 < q_min < q_max < 100.0, default=(25.0, 75.0) Quantile range used to calculate `scale_`. By default this is equal to the IQR, i.e., `q_min` is the first quantile and `q_max` is the third quantile. .. versionadded:: 0.18 copy : bool, default=True If False, try to avoid a copy and scale in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. unit_variance : bool, default=False If `True`, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of `q_max` and `q_min` for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. .. versionadded:: 0.24 Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) The transformed data. See Also -------- RobustScaler : Performs centering and scaling using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Notes ----- This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected to either set explicitly `with_centering=False` (in that case, only variance scaling will be performed on the features of the CSR matrix) or to call `X.toarray()` if he/she expects the materialized dense array to fit in memory. To avoid memory copy the caller should pass a CSR matrix. For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. .. warning:: Risk of data leak Do not use :func:`~sklearn.preprocessing.robust_scale` unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using :class:`~sklearn.preprocessing.RobustScaler` within a :ref:`Pipeline ` in order to prevent most risks of data leaking: `pipe = make_pipeline(RobustScaler(), LogisticRegression())`. Examples -------- >>> from sklearn.preprocessing import robust_scale >>> X = [[-2, 1, 2], [-1, 0, 1]] >>> robust_scale(X, axis=0) # scale each column independently array([[-1., 1., 1.], [ 1., -1., -1.]]) >>> robust_scale(X, axis=1) # scale each row independently array([[-1.5, 0. , 0.5], [-1. , 0. , 1. ]]) rFr`rr*rr) rr%rrrr3rrr) rXrYrrrrOrrrs rGr<r<s\  $ %  AFFM IIaggaj! $%!%#   A qy OOA  OOACC " " GGI HrI>l1l2max)rXrrYrO return_norm)rYrOr!c T|dk(rd}nd}t|\}}t|||dtj|d}|dk(r |j}t j |r|r|dvr td|d k(r t|n |d k(r t|n|d k(rt|d \}} tjt|| } | jtj|j } | dk7} |j"| xx| | zcc<ny|d k(r#|j%|j|d } n8|d k(r t'|} n'|d k(r"|j)|j|d } t+ d} || dddfz}|dk(r |j}|r| fS|S)aXScale input vectors individually to unit norm (vector length). Read more in the :ref:`User Guide `. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to normalize, element by element. scipy.sparse matrices should be in CSR format to avoid an un-necessary copy. norm : {'l1', 'l2', 'max'}, default='l2' The norm to use to normalize each non zero sample (or each non-zero feature if axis is 0). axis : {0, 1}, default=1 Define axis used to normalize the data along. If 1, independently normalize each sample, otherwise (if 0) normalize each feature. copy : bool, default=True If False, try to avoid a copy and normalize in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. return_norm : bool, default=False Whether to return the computed norms. Returns ------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) Normalized input X. norms : ndarray of shape (n_samples, ) if axis=1 else (n_features, ) An array of norms along given axis for X. When X is sparse, a NotImplementedError will be raised for norm 'l1' or 'l2'. See Also -------- Normalizer : Performs normalization using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Notes ----- For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. Examples -------- >>> from sklearn.preprocessing import normalize >>> X = [[-2, 1, 2], [-1, 0, 1]] >>> normalize(X, norm="l1") # L1 normalization each row independently array([[-0.4, 0.2, 0.4], [-0.5, 0. , 0.5]]) >>> normalize(X, norm="l2") # L2 normalization each row independently array([[-0.67, 0.33, 0.67], [-0.71, 0. , 0.71]]) rr_rzthe normalize functionT)rarOrcrQr)rrzSreturn_norm=True is not implemented for sparse matrices with norm 'l1' or norm 'l2'rrr r*reFrNN)rrrrrrrfNotImplementedErrorr#r$r"r?rrrdiffrrrrr rV) rXrrYrOr! sparse_formatrTrUrmaxesnormsnorms_elementwisemasks rGr9r9nsL qy  ! EB # *//3  A qy CC q 4</%  4< ( + T\ ( + U]&q!,KD%JJs4y%0E % RWWQXX-> ? $)D FF4L-d3 3L 4<FF266!91F-E T\aLE U]FF266!91F-E&u59 U1d7^ qy CC%xrIceZdZUdZehdgdgdZeed<d dddZe d dd Z dd Z fd Z xZ S)r0a Normalize samples individually to unit norm. Each sample (i.e. each row of the data matrix) with at least one non zero component is rescaled independently of other samples so that its norm (l1, l2 or inf) equals one. This transformer is able to work both with dense numpy arrays and scipy.sparse matrix (use CSR format if you want to avoid the burden of a copy / conversion). Scaling inputs to unit norms is a common operation for text classification or clustering for instance. For instance the dot product of two l2-normalized TF-IDF vectors is the cosine similarity of the vectors and is the base similarity metric for the Vector Space Model commonly used by the Information Retrieval community. For an example visualization, refer to :ref:`Compare Normalizer with other scalers `. Read more in the :ref:`User Guide `. Parameters ---------- norm : {'l1', 'l2', 'max'}, default='l2' The norm to use to normalize each non zero sample. If norm='max' is used, values will be rescaled by the maximum of the absolute values. copy : bool, default=True Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix). Attributes ---------- n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- normalize : Equivalent function without the estimator API. Notes ----- This estimator is :term:`stateless` and does not need to be fitted. However, we recommend to call :meth:`fit_transform` instead of :meth:`transform`, as parameter validation is only performed in :meth:`fit`. Examples -------- >>> from sklearn.preprocessing import Normalizer >>> X = [[4, 1, 2, 2], ... [1, 3, 9, 3], ... [5, 7, 5, 1]] >>> transformer = Normalizer().fit(X) # fit does nothing. >>> transformer Normalizer() >>> transformer.transform(X) array([[0.8, 0.2, 0.4, 0.4], [0.1, 0.3, 0.9, 0.3], [0.5, 0.7, 0.5, 0.1]]) >rrr rWrrOrxTrNc ||_||_yrzr+)r{rrOs rGr|zNormalizer.__init__8s  rIr\c"t||d|S)aOnly validates estimator's parameters. This method allows to: (i) validate the estimator's parameters and (ii) be consistent with the scikit-learn transformer API. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to estimate the normalization parameters. y : Ignored Not used, present here for API consistency by convention. Returns ------- self : object Fitted transformer. rrar)rs rGrzNormalizer.fit<( dAU3 rIcv||n |j}t||dd|d}t||jddS)aScale each non zero row of X to unit norm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to normalize, row by row. scipy.sparse matrices should be in CSR format to avoid an un-necessary copy. copy : bool, default=None Copy the input X or not. Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array. rTFrarrOrr*)rrYrO)rOr)r9rrs rGrzNormalizer.transformSsC"'tTYY  !5$TQV ??rIcbt|}d|j_d|_d|_|S)NTF)rrrr requires_fitrrs rGrzNormalizer.__sklearn_tags__js1w')!%!!% rIrrz)rrrrrrxrrr|rrrrrrs@rGr0r0saFR/01 $D $56,@.rIr0neitherclosed)rX thresholdrOrKr9rOct|ddgd|}tj|rd|dkr td|j|kD}t j |}d|j|<d|j|<|j|St|\}}}t|||}|j||d |kD}|j |}d||<d||<|S) aBBoolean thresholding of array-like or scipy.sparse matrix. Read more in the :ref:`User Guide `. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to binarize, element by element. scipy.sparse matrices should be in CSR or CSC format to avoid an un-necessary copy. threshold : float, default=0.0 Feature values below or equal to this are replaced by 0, above it by 1. Threshold may not be less than 0 for operations on sparse matrices. copy : bool, default=True If False, try to avoid a copy and binarize in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an object dtype, a copy will be returned even with copy=False. Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) The transformed data. See Also -------- Binarizer : Performs binarization using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). Examples -------- >>> from sklearn.preprocessing import binarize >>> X = [[0.4, 0.6, 0.5], [0.6, 0.1, 0.2]] >>> binarize(X, threshold=0.5) array([[0., 1., 0.], [1., 0., 0.]]) rr_T)rarrOrz2Cannot binarize a sparse matrix with threshold < 0r*)rTFrN) rrrfrgrr? logical_noteliminate_zerosrrr) rXr9rOcondnot_condrTrUr float_dtypes rGr6r6rs` AeU^TPTUA q q=QR Rvv !>>$'t x  H 13 Av3AyRH yyKey4y@>>$'$( HrIcreZdZUdZegdgdZeed<ddddZe dd d Z d d Z fd Z xZ S) r,a Binarize data (set feature values to 0 or 1) according to a threshold. Values greater than the threshold map to 1, while values less than or equal to the threshold map to 0. With the default threshold of 0, only positive values map to 1. Binarization is a common operation on text count data where the analyst can decide to only consider the presence or absence of a feature rather than a quantified number of occurrences for instance. It can also be used as a pre-processing step for estimators that consider boolean random variables (e.g. modelled using the Bernoulli distribution in a Bayesian setting). Read more in the :ref:`User Guide `. Parameters ---------- threshold : float, default=0.0 Feature values below or equal to this are replaced by 0, above it by 1. Threshold may not be less than 0 for operations on sparse matrices. copy : bool, default=True Set to False to perform inplace binarization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix). Attributes ---------- n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- binarize : Equivalent function without the estimator API. KBinsDiscretizer : Bin continuous data into intervals. OneHotEncoder : Encode categorical features as a one-hot numeric array. Notes ----- If the input is a sparse matrix, only the non-zero values are subject to update by the :class:`Binarizer` class. This estimator is :term:`stateless` and does not need to be fitted. However, we recommend to call :meth:`fit_transform` instead of :meth:`transform`, as parameter validation is only performed in :meth:`fit`. Examples -------- >>> from sklearn.preprocessing import Binarizer >>> X = [[ 1., -1., 2.], ... [ 2., 0., 0.], ... [ 0., 1., -1.]] >>> transformer = Binarizer().fit(X) # fit does nothing. >>> transformer Binarizer() >>> transformer.transform(X) array([[1., 0., 1.], [1., 0., 0.], [0., 1., 0.]]) rWr:rxrKTc ||_||_yrzr:)r{r9rOs rGr|zBinarizer.__init__ s" rIr\c"t||d|S)aOnly validates estimator's parameters. This method allows to: (i) validate the estimator's parameters and (ii) be consistent with the scikit-learn transformer API. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data. y : None Ignored. Returns ------- self : object Fitted transformer. rr.r/rs rGrz Binarizer.fit r0rIcx||n |j}t||ddgd|d}t||jdS)aBinarize each element of X. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to binarize, element by element. scipy.sparse matrices should be in CSR format to avoid an un-necessary copy. copy : bool Copy the input X or not. Returns ------- X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array. rr_TFr2r:)rOr)r6r9rs rGrzBinarizer.transform sK$'tTYY    %.   T^^%@@rIcbt|}d|_d|_d|j_|S)NFT)rrr4rrrrs rGrzBinarizer.__sklearn_tags__; s1w')!!%!% rIrz)rrrrrrxrrr|rrrrrrs@rGr,r,sZDNV $D %(d56,A>rIr,czeZdZdZdej iZdej iZddZddZ e dZ fdZ xZ S) r-uf Center an arbitrary kernel matrix :math:`K`. Let define a kernel :math:`K` such that: .. math:: K(X, Y) = \phi(X) . \phi(Y)^{T} :math:`\phi(X)` is a function mapping of rows of :math:`X` to a Hilbert space and :math:`K` is of shape `(n_samples, n_samples)`. This class allows to compute :math:`\tilde{K}(X, Y)` such that: .. math:: \tilde{K(X, Y)} = \tilde{\phi}(X) . \tilde{\phi}(Y)^{T} :math:`\tilde{\phi}(X)` is the centered mapped data in the Hilbert space. `KernelCenterer` centers the features without explicitly computing the mapping :math:`\phi(\cdot)`. Working with centered kernels is sometime expected when dealing with algebra computation such as eigendecomposition for :class:`~sklearn.decomposition.KernelPCA` for instance. Read more in the :ref:`User Guide `. Attributes ---------- K_fit_rows_ : ndarray of shape (n_samples,) Average of each column of kernel matrix. K_fit_all_ : float Average of kernel matrix. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- sklearn.kernel_approximation.Nystroem : Approximate a kernel map using a subset of the training data. References ---------- .. [1] `Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. "Nonlinear component analysis as a kernel eigenvalue problem." Neural computation 10.5 (1998): 1299-1319. `_ Examples -------- >>> from sklearn.preprocessing import KernelCenterer >>> from sklearn.metrics.pairwise import pairwise_kernels >>> X = [[ 1., -2., 2.], ... [ -2., 1., 3.], ... [ 4., 1., -2.]] >>> K = pairwise_kernels(X, metric='linear') >>> K array([[ 9., 2., -2.], [ 2., 14., -13.], [ -2., -13., 21.]]) >>> transformer = KernelCenterer().fit(K) >>> transformer KernelCenterer() >>> transformer.transform(K) array([[ 5., 0., -5.], [ 0., 14., -14.], [ -5., -14., 19.]]) Kct|\}}t||tj|}|jd|jdk7r5t dj |jd|jd|jd}|j|d|z |_|j|j|z |_ |S)aFit KernelCenterer. Parameters ---------- K : ndarray of shape (n_samples, n_samples) Kernel matrix. y : None Ignored. Returns ------- self : object Returns the instance itself. rrr*z?Kernel matrix must be a square matrix. Input is a {}x{} matrix.re) rr)rrrrgrr K_fit_rows_ K_fit_all_)r{rGrrTrUrEs rGrzKernelCenterer.fit s a A $)J)J2)N O 771: #,,2F1771:qwwqz,J  GGAJ 66!!6,y8&&!1!12Y> rIc .t|t|\}}t|||dtj|d}|j |d|j jdz dddf}||j z}||z}||jz }|S)avCenter kernel matrix. Parameters ---------- K : ndarray of shape (n_samples1, n_samples2) Kernel matrix. copy : bool, default=True Set to False to perform inplace computation. Returns ------- K_new : ndarray of shape (n_samples1, n_samples2) Returns the instance itself. TF)rOrrQrr*rerN) r'rr)rrrrIrrJ)r{rGrOrTrU K_pred_colss rGrzKernelCenterer.transform s a A    33B7  vvaav(4+;+;+A+A!+DDagN  T   [ T__rIc|jS)z&Number of transformed output features.)n_features_in_rs rG_n_features_outzKernelCenterer._n_features_out s"""rIcTt|}d|j_d|_|Sr)rrrpairwiserrs rGrzKernelCenterer.__sklearn_tags__ s)w')#' !% rIrz)T)rrrrr UNUSED,_KernelCenterer__metadata_request__transform&_KernelCenterer__metadata_request__fitrrpropertyrOrrrs@rGr-r-C sYK\&)*:*A*A$B!"$4$;$;<>#J##rIr-)rXvaluec`t|gdt}|j\}}||dzf}tj|r|j dk(r|j dz}tjtj||f}tjtj||jf}tjtj|||jf}tj|||ff|S|j dk(r|j|z}tjtj dg|f}tjtj||j"f} tjtj|||jf}tj$|| |f|S|j&} | t)|j+|Stj,tj|df||fS)aAugment dataset with an additional dummy feature. This is useful for fitting an intercept term with implementations which cannot otherwise fit it directly. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Data. value : float Value to use for the dummy feature. Returns ------- X : {ndarray, sparse matrix} of shape (n_samples, n_features + 1) Same data with dummy feature added as first column. Examples -------- >>> from sklearn.preprocessing import add_dummy_feature >>> add_dummy_feature([[0, 1], [1, 0]]) array([[1., 0., 1.], [1., 1., 0.]]) )r_rcoo)rarQr*rXr_r)rr%rrrfrcolr? concatenaterarangerowfullr coo_matrixrarrayrrrr5tocoohstack) rXrVrErrrYr\rrrklasss rGr5r5 sB A%:,OAGGIz  Q 'E q 88u %%!)C.."((9"5s!;?C>>2779e#>2779e#`. Read more in the :ref:`User Guide `. .. versionadded:: 0.19 Parameters ---------- n_quantiles : int, default=1000 or n_samples Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n_quantiles is larger than the number of samples, n_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator. output_distribution : {'uniform', 'normal'}, default='uniform' Marginal distribution for the transformed data. The choices are 'uniform' (default) or 'normal'. ignore_implicit_zeros : bool, default=False Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros. subsample : int or None, default=10_000 Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices. Disable subsampling by setting `subsample=None`. .. versionadded:: 1.5 The option `None` to disable subsampling was added. random_state : int, RandomState instance or None, default=None Determines random number generation for subsampling and smoothing noise. Please see ``subsample`` for more details. Pass an int for reproducible results across multiple function calls. See :term:`Glossary `. copy : bool, default=True Set to False to perform inplace transformation and avoid a copy (if the input is already a numpy array). Attributes ---------- n_quantiles_ : int The actual number of quantiles used to discretize the cumulative distribution function. quantiles_ : ndarray of shape (n_quantiles, n_features) The values corresponding the quantiles of reference. references_ : ndarray of shape (n_quantiles, ) Quantiles of references. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- quantile_transform : Equivalent function without the estimator API. PowerTransformer : Perform mapping to a normal distribution using a power transform. StandardScaler : Perform standardization that is faster, but less robust to outliers. RobustScaler : Perform robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. Examples -------- >>> import numpy as np >>> from sklearn.preprocessing import QuantileTransformer >>> rng = np.random.RandomState(0) >>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0) >>> qt = QuantileTransformer(n_quantiles=10, random_state=0) >>> qt.fit_transform(X) array([...]) r*Nleftr7uniformnormalrW random_state n_quantilesoutput_distributionignore_implicit_zeros subsamplergrOrxFi'TcX||_||_||_||_||_||_yrzrh)r{rirjrkrlrgrOs rGr|zQuantileTransformer.__init__ s2'#6 %:""( rIc|jrtjd|j\}}|jdz}|j (|j |krt |d|j |}tj||d|_ tjj|j|_ y)zCompute percentiles for dense matrices. Parameters ---------- X : ndarray of shape (n_samples, n_features) The data used to scale along the features axis. z['ignore_implicit_zeros' takes effect only with sparse matrix. This parameter has no effect.rNF)replacerErgrre) rkrmrnr references_rlrr?r quantiles_r accumulate)r{rXrgrEr referencess rG _dense_fitzQuantileTransformer._dense_fit s  % % MM@  !" :%%+ >> %$..9*D5DNNA**1jqA **//@rIcd|j\}}|jdz}g|_t|D]}|j|j ||j |dz}|j t||j kDr|j t|z|z}|jr"tj||j} n+tj|j |j} |j||d| d|nf|jr+tjt||j} n!tj||j} || dt|| js*|jjdgt|z||jjtj| |tj |j|_tj"j%|j|_y)aeCompute percentiles for sparse matrices. Parameters ---------- X : sparse matrix of shape (n_samples, n_features) The data used to scale along the features axis. The sparse matrix needs to be nonnegative. If a sparse matrix is provided, it will be converted into a sparse ``csc_matrix``. rr*NrF)rrpr)rrqrrr rrrlr rkr?rrQchoicerr rrrrs) r{rXrgrErrtrrcolumn_subsamplers rG _sparse_fitzQuantileTransformer._sparse_fit s!" :%%+  , RKffQXXk%:QXXkTUo=VWO~~)c/.BT^^.S#'>>C4H#HI#U --"$((1A"QK"$((qww"OK1=1D1D#*:E2E2 --.--"$((_1EQWW"UK"$((!''"JK6E 2c/23##&&sS_'<=&&r'7'7 Z'PQ/ R0,,t7 **//@rIr\c|jH|j|jkDr/tdj|j|j|j |dd}|j d}|j|kDr&t jd|jd|dtd t|j||_ t|j}tjdd |jd |_t!j"|r|j%|||S|j'|||S) a2Compute the quantiles used for transforming. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse ``csc_matrix``. Additionally, the sparse matrix needs to be nonnegative if `ignore_implicit_zeros` is False. y : None Ignored. Returns ------- self : object Fitted transformer. zkThe number of quantiles cannot be greater than the number of samples used. Got {} quantiles and {} samples.TFin_fitrOrz n_quantiles (z/) is greater than the total number of samples (z#). n_quantiles is set to n_samples.r*)endpoint)rlrirgr _check_inputsrrmrnr min n_quantiles_r(rgr?linspacerqrrfryru)r{rXrrErngs rGrzQuantileTransformer.fit s( >> %$*:*:T^^*K##)6$*:*:DNN#K    qE  :GGAJ   i ' MM!% 0 0)=   3t'7'7#CD !2!23;;q!T->->N ??1    Q $  OOAs # rIc |j}|s|d}|d}d}d}nQd}d}|d}|d}tjd5|dk(rtjj |}dddtjd5|dk(r|t z |k} |t z|kD} |dk(r ||k(} ||k(} dddtj|} || } |sYd tj| ||jtj| |ddd |jddd z z|| <n$tj| |j||| <|| <|| <|stjd5|dk(rtjj|}tjjt tjdz } tjjdt tjdz z }tj|| |}ddd|S|S#1swYxYw#1swYxYw#1swY|SxYw) z/Private function to transform a single feature.rr*ignoreinvalidrfNreg?) rjr?errstaterrcdfBOUNDS_THRESHOLDrinterprqrspacingrw)r{X_colrinverserj lower_bound_x upper_bound_x lower_bound_y upper_bound_ylower_bounds_idxupper_bounds_idx isfinite_mask X_col_finiteclip_minclip_maxs rG_transform_colz"QuantileTransformer._transform_col0 sH#66%aLM%bMMMMMM%aLM%bMMX. 2&(2!JJNN51E 2 [[ * :"h.#(+;#;m#K #(+;#;m#K "i/#(M#9 #(M#9  :%( ]+ $' , 43C3CD))\MIddO+;d>N>NtQSt>T=TUV$E- $&99\4;K;KY#WE- "/"/X. ?&(2!JJNN51E %zz~~.>A.NOH$zz~~a3CbjjQRm3S.TUHGGE8X>E ? u _ 2 2  : :: ? s%%H3-I6B1I 3H=I  Ic 4t|||d|t|sdndd}tjd5|sN|jsBt j |r-tj|jdkr td ddd|S#1swY|SxYw) z&Check inputs before fit and transform.r_TNr`rrrrz>QuantileTransformer only accepts non-negative sparse matrices.) r)r%r?rrkrrfanyrrg)r{rXr|accept_sparse_negativerOs rGr~z!QuantileTransformer._check_inputsq s   )/DD) [[ * *22__Q'BFF166A:,> T   s AB  Bctj|rt|jdD]i}t |j ||j |dz}|j |j||jdd|f||j|<k|St|jdD]4}|j |dd|f|jdd|f||dd|f<6|S)aForward and inverse transform. Parameters ---------- X : ndarray of shape (n_samples, n_features) The data used to scale along the features axis. inverse : bool, default=False If False, apply forward transform. If True, apply inverse transform. Returns ------- X : ndarray of shape (n_samples, n_features) Projected data. r*N) rrfr rslicerrrrr)r{rXrr column_slices rG _transformzQuantileTransformer._transform s" ??1 $QWWQZ0  $QXXk%:AHH[ST_1*0.rIr2rmregj@)rYrirjrkrlrgrOct||||||}|dk(r|j|}|S|j|jj}|S)aTransform features using quantiles information. This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Features values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. Read more in the :ref:`User Guide `. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data to transform. axis : int, default=0 Axis used to compute the means and standard deviations along. If 0, transform each feature, otherwise (if 1) transform each sample. n_quantiles : int, default=1000 or n_samples Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n_quantiles is larger than the number of samples, n_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator. output_distribution : {'uniform', 'normal'}, default='uniform' Marginal distribution for the transformed data. The choices are 'uniform' (default) or 'normal'. ignore_implicit_zeros : bool, default=False Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros. subsample : int or None, default=1e5 Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices. Disable subsampling by setting `subsample=None`. .. versionadded:: 1.5 The option `None` to disable subsampling was added. random_state : int, RandomState instance or None, default=None Determines random number generation for subsampling and smoothing noise. Please see ``subsample`` for more details. Pass an int for reproducible results across multiple function calls. See :term:`Glossary `. copy : bool, default=True If False, try to avoid a copy and transform in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. .. versionchanged:: 0.23 The default value of `copy` changed from False to True in 0.23. Returns ------- Xt : {ndarray, sparse matrix} of shape (n_samples, n_features) The transformed data. See Also -------- QuantileTransformer : Performs quantile-based scaling using the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). power_transform : Maps data to a normal distribution using a power transformation. scale : Performs standardization that is faster, but less robust to outliers. robust_scale : Performs robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale. Notes ----- NaNs are treated as missing values: disregarded in fit, and maintained in transform. .. warning:: Risk of data leak Do not use :func:`~sklearn.preprocessing.quantile_transform` unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using :class:`~sklearn.preprocessing.QuantileTransformer` within a :ref:`Pipeline ` in order to prevent most risks of data leaking:`pipe = make_pipeline(QuantileTransformer(), LogisticRegression())`. For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. Examples -------- >>> import numpy as np >>> from sklearn.preprocessing import quantile_transform >>> rng = np.random.RandomState(0) >>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0) >>> quantile_transform(X, n_quantiles=10, random_state=0, copy=True) array([...]) )rirjrlrkrgrOr)r2rr) rXrYrirjrkrlrgrOns rGr;r; s^J /3!   A qy OOA  H OOACC " " HrIceZdZUdZeddhgdgdgdZeed<ddddd Ze d dd Z e d dd Z dd Z dZ dZdZdZdZdZddZfdZxZS)r1aC Apply a power transform featurewise to make data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, PowerTransformer supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive or negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. For an example visualization, refer to :ref:`Compare PowerTransformer with other scalers `. To see the effect of Box-Cox and Yeo-Johnson transformations on different distributions, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_map_data_to_normal.py`. Read more in the :ref:`User Guide `. .. versionadded:: 0.20 Parameters ---------- method : {'yeo-johnson', 'box-cox'}, default='yeo-johnson' The power transform method. Available methods are: - 'yeo-johnson' [1]_, works with positive and negative values - 'box-cox' [2]_, only works with strictly positive values standardize : bool, default=True Set to True to apply zero-mean, unit-variance normalization to the transformed output. copy : bool, default=True Set to False to perform inplace computation during transformation. Attributes ---------- lambdas_ : ndarray of float of shape (n_features,) The parameters of the power transformation for the selected features. n_features_in_ : int Number of features seen during :term:`fit`. .. versionadded:: 0.24 feature_names_in_ : ndarray of shape (`n_features_in_`,) Names of features seen during :term:`fit`. Defined only when `X` has feature names that are all strings. .. versionadded:: 1.0 See Also -------- power_transform : Equivalent function without the estimator API. QuantileTransformer : Maps data to a standard normal distribution with the parameter `output_distribution='normal'`. Notes ----- NaNs are treated as missing values: disregarded in ``fit``, and maintained in ``transform``. References ---------- .. [1] :doi:`I.K. Yeo and R.A. Johnson, "A new family of power transformations to improve normality or symmetry." Biometrika, 87(4), pp.954-959, (2000). <10.1093/biomet/87.4.954>` .. [2] :doi:`G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal of the Royal Statistical Society B, 26, 211-252 (1964). <10.1111/j.2517-6161.1964.tb00553.x>` Examples -------- >>> import numpy as np >>> from sklearn.preprocessing import PowerTransformer >>> pt = PowerTransformer() >>> data = [[1, 2], [3, 2], [4, 5]] >>> print(pt.fit(data)) PowerTransformer() >>> print(pt.lambdas_) [ 1.386 -3.100] >>> print(pt.transform(data)) [[-1.316 -0.707] [ 0.209 -0.707] [ 1.106 1.414]] yeo-johnsonbox-coxrWmethod standardizerOrxTrrOc.||_||_||_yrzr)r{rrrOs rGr|zPowerTransformer.__init__ s & rIr\c.|j||d|S)aEstimate the optimal parameter lambda for each feature. The optimal lambda parameter for minimizing skewness is estimated on each feature independently using maximum likelihood. Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to estimate the optimal transformation parameters. y : None Ignored. Returns ------- self : object Fitted transformer. F)rforce_transform_fitrs rGrzPowerTransformer.fit s( !q% 0 rIc*|j||dS)aFit `PowerTransformer` to `X`, then transform `X`. Parameters ---------- X : array-like of shape (n_samples, n_features) The data used to estimate the optimal transformation parameters and to be transformed using a power transformation. y : Ignored Not used, present for API consistency by convention. Returns ------- X_new : ndarray of shape (n_samples, n_features) Transformed data. T)rrrs rGrzPowerTransformer.fit_transform s$yyAty44rIcj|j|dd}|js|s|j}|jd}tj|dtj }tj |dtj }|j|jd|j}t|jd|j}tjd5tj|jd|j |_t!|j"D]\} } t%|| || |} |jd k(r| rd |j| <:|| |j| <|j&s|s^||dd| f|j| |dd| f< ddd|j&r[t)d j+d|_|r|j,j/|}|S|j,j1||S#1swYrxYw)NT)r|check_positiver)rYrQrrrrr*rrrLFrNdefault)r) _check_inputrOrr?rDrArC_box_cox_optimize_yeo_johnson_optimizerr_yeo_johnson_transformremptyrQlambdas_ enumeraterrHrr4 set_output_scalerrr) r{rXrrrErDrCoptim_functiontransform_functionirYis_constant_features rGrzPowerTransformer._fit s   aT  ByyAGGAJ wwqq 3ffQQbjj1--55  ++ 66  ++ [[ * LHHQWWQZqww?DM#ACC. L3';3q647I&V#;;-/4G'*DMM!$#1##6 a ##01a4$--:JKAadG L L   )u5@@9@UDLLL..q1   #- L Ls*B&H)&H))H2ct||j|ddd}t|jd|j}t |j D];\}}tjd5||dd|f||dd|f<ddd=|jr|jj|}|S#1swYpxYw)anApply the power transform to each feature using the fitted lambdas. Parameters ---------- X : array-like of shape (n_samples, n_features) The data to be transformed using a power transformation. Returns ------- X_trans : ndarray of shape (n_samples, n_features) The transformed data. FT)r|r check_shaperrrN) r'rrrrrrr?rrrr)r{rXrrlmbdas rGrzPowerTransformer.transform7 s    adPT  U66  ++"$--0 =HAuX. =,Qq!tWe<!Q$ = = =    &&q)A  = =s 2B==C ct||j|dd}|jr|jj |}t |j d|j}t|jD];\}}tjd5||dd|f||dd|f<ddd=|S#1swYIxYw)aApply the inverse power transformation using the fitted lambdas. The inverse of the Box-Cox transformation is given by:: if lambda_ == 0: X_original = exp(X_trans) else: X_original = (X * lambda_ + 1) ** (1 / lambda_) The inverse of the Yeo-Johnson transformation is given by:: if X >= 0 and lambda_ == 0: X_original = exp(X) - 1 elif X >= 0 and lambda_ != 0: X_original = (X * lambda_ + 1) ** (1 / lambda_) - 1 elif X < 0 and lambda_ != 2: X_original = 1 - (-(2 - lambda_) * X + 1) ** (1 / (2 - lambda_)) elif X < 0 and lambda_ == 2: X_original = 1 - exp(-X) Parameters ---------- X : array-like of shape (n_samples, n_features) The transformed data. Returns ------- X_original : ndarray of shape (n_samples, n_features) The original data. FT)r|rrrrN) r'rrrrr_yeo_johnson_inverse_transformrrrr?r)r{rXinv_funrrs rGrz"PowerTransformer.inverse_transformT s>    a4  @    ..q1A">>  ++"$--0 2HAuX. 2!!AqD'51!Q$ 2 2 2 2 2s B<<C ctj|}|dk\}t|tjdkrtj||dz ||<n(tj |||zdzd|z dz ||<t|dz tjdkDr3dtj d|z ||zdzdd|z z z ||<|Sdtj|| z ||<|S)zrReturn inverse-transformed input x following Yeo-Johnson inverse transform with parameter lambda. rrLr*r )r? zeros_likerrexppower)r{xrx_invposs rGrz/PowerTransformer._yeo_johnson_inverse_transform s a 1f u: 3 '#!+E#J!C&5.1"4a%i@1DE#J uqy>BJJsO +bhhU|ag'='A1E ?SSE3$K bffagX..E3$K rIctj|}|dk\}t|tjdkrtj||||<n%tj ||dz|dz |z ||<t|dz tjdkDr1tj || dzd|z dz d|z z ||<|Stj||  ||<|S)zbReturn transformed input x following Yeo-Johnson transform with parameter lambda. rrLr*r )r?rrrlog1pr)r{rrrrs rGrz'PowerTransformer._yeo_johnson_transform s mmA1f u: 3 'xx#'CH3!U3a75@CH uqy>BJJsO +((AsdG8a<U;a?@AINCI 1cT7(++CI rIctj|}tj|r tdt j ||d\}}|S)zFind and return optimal lambda parameter of the Box-Cox transform by MLE, for observed data x. We here use scipy builtins which uses the brent optimizer. zColumn must not be all nan.N)r)r?rallrgrr)r{rr)rUrs rGrz"PowerTransformer._box_cox_optimize sI xx{ 66$<:; ;<<4%55 rIctjtjjfd}tjt |S)zFind and return optimal lambda parameter of the Yeo-Johnson transform by MLE, for observed data x. Like for Box-Cox, MLE is done via the brent optimizer. c|j|}jd}|j}|krtjStj |}| dz |z}||dz tj tjtjzjzz }| S)z^Return the negative log likelihood of the observed data x as a function of lambda.rr r*) rrrCr?inflogsignrrr) rx_transrE x_trans_varlog_varlogliker{rx_tinys rG_neg_log_likelihoodzCPowerTransformer._yeo_johnson_optimize.._neg_log_likelihood s11!U;G I!++-KV#vv ff[)G j1nw.G  bggaj288BFF1I3F&F%K%K%MM MG8OrI)r?r@rAtinyrr)r{rrrs`` @rGrz&PowerTransformer._yeo_johnson_optimize sE "**%** & rxx{lO!"5q99rIc t||dtd|jd|}tj5tj dd|r2|j dk(r#tj|dkr tdd d d |ra|jd t|jk(s-^rIr1rXrc@t|||}|j|S)aParametric, monotonic transformation to make data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, power_transform supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive or negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. Read more in the :ref:`User Guide `. Parameters ---------- X : array-like of shape (n_samples, n_features) The data to be transformed using a power transformation. method : {'yeo-johnson', 'box-cox'}, default='yeo-johnson' The power transform method. Available methods are: - 'yeo-johnson' [1]_, works with positive and negative values - 'box-cox' [2]_, only works with strictly positive values .. versionchanged:: 0.23 The default value of the `method` parameter changed from 'box-cox' to 'yeo-johnson' in 0.23. standardize : bool, default=True Set to True to apply zero-mean, unit-variance normalization to the transformed output. copy : bool, default=True If False, try to avoid a copy and transform in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False. Returns ------- X_trans : ndarray of shape (n_samples, n_features) The transformed data. See Also -------- PowerTransformer : Equivalent transformation with the Transformer API (e.g. as part of a preprocessing :class:`~sklearn.pipeline.Pipeline`). quantile_transform : Maps data to a standard normal distribution with the parameter `output_distribution='normal'`. Notes ----- NaNs are treated as missing values: disregarded in ``fit``, and maintained in ``transform``. For a comparison of the different scalers, transformers, and normalizers, see: :ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`. References ---------- .. [1] I.K. Yeo and R.A. Johnson, "A new family of power transformations to improve normality or symmetry." Biometrika, 87(4), pp.954-959, (2000). .. [2] G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal of the Royal Statistical Society B, 26, 211-252 (1964). Examples -------- >>> import numpy as np >>> from sklearn.preprocessing import power_transform >>> data = [[1, 2], [3, 2], [4, 5]] >>> print(power_transform(data, method='box-cox')) [[-1.332 -0.707] [ 0.256 -0.707] [ 1.076 1.414]] .. warning:: Risk of data leak. Do not use :func:`~sklearn.preprocessing.power_transform` unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using :class:`~sklearn.preprocessing.PowerTransformer` within a :ref:`Pipeline ` in order to prevent most risks of data leaking, e.g.: `pipe = make_pipeline(PowerTransformer(), LogisticRegression())`. r)r1r)rXrrrOpts rGr:r:s$N [t LB  A rI)TNrr5)rLr)Ormnumbersrrnumpyr?scipyrr scipy.specialrr sklearn.utilsr baser r r rrutilsrrrutils._array_apirrrrrutils._param_validationrrrr utils.extmathrr utils.fixesrutils.sparsefuncsrr r!r"utils.sparsefuncs_fastr#r$utils.validationr%r&r'r(r) _encodersr+r__all__rHrVr=r/r8r4r.r7r3r<r9r0r6r,r-r5r2intr;r1r:rIrGrs ",*65UT@, % .  FO ,Aq6*+[K #' $D[  [ |s')9=sl ^Aq6*+#( l !$l l ^j)+;]jZ f')9=fRO ,Aq6*+#( D[ [ |P')9=Pf  )GHq!f4M3NO"'  F F RO ,/01Aq6*+ !{ #' oADeo odE%'7EPO ,tT4 BC  #' !t8 8 vK$&6 K\c46F cLO ,4tI>?#' 6>6>rs.0@-sl   )GHq!f4M3NO"' !#h M M `\+-=}\~ <."'dDtd drI