"""Solvers for Ridge and LogisticRegression using SAG algorithm."""

import warnings

import numpy as np

from ..exceptions import ConvergenceWarning
from ..utils import check_array
from ..utils.extmath import row_norms
from ..utils.validation import _check_sample_weight
from ._base import make_dataset
from ._sag_fast import sag32, sag64


def get_auto_step_size(
    max_squared_sum, alpha_scaled, loss, fit_intercept, n_samples=None, is_saga=False
):
    """Compute automatic step size for SAG solver.

    The step size is set to 1 / (alpha_scaled + L + fit_intercept) where L is
    the max sum of squares over all samples.

    Parameters
    ----------
    max_squared_sum : float
        Maximum squared sum of X over samples.

    alpha_scaled : float
        Constant that multiplies the regularization term, scaled by
        1. / n_samples, the number of samples.

    loss : {'log', 'squared', 'multinomial'}
        The loss function used in SAG solver.

    fit_intercept : bool
        Specifies if a constant (a.k.a. bias or intercept) will be
        added to the decision function.

    n_samples : int, default=None
        Number of rows in X. Useful if is_saga=True.

    is_saga : bool, default=False
        Whether to return step size for the SAGA algorithm or the SAG
        algorithm.

    Returns
    -------
    step_size : float
        Step size used in SAG solver.
    """
    if loss in ("log", "multinomial"):
        L = 0.25 * (max_squared_sum + int(fit_intercept)) + alpha_scaled
    elif loss == "squared":
        # inverse Lipschitz constant for squared loss
        L = max_squared_sum + int(fit_intercept) + alpha_scaled
    else:
        raise ValueError(
            "Unknown loss function for SAG solver, got %s instead of 'log' or"
            " 'squared'" % loss
        )
    if is_saga:
        # SAGA theoretical step size is 1/3L or 1 / (2 * (L + mu n)),
        # see Defazio et al. 2014
        mun = min(2 * n_samples * alpha_scaled, L)
        step = 1.0 / (2 * L + mun)
    else:
        # SAG theoretical step size is 1/16L but it is recommended to use
        # 1 / L, see http://www.birs.ca//workshops//2014/14w5003/files/schmidt.pdf,
        # slide 65
        step = 1.0 / L
    return step
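# The step-size formulas above are easy to sanity-check by hand. Illustrative
# worked example (the numbers below are assumptions, not from the original
# module): with max_squared_sum=4.0, alpha_scaled=1e-2, loss='squared',
# fit_intercept=True and n_samples=100, the Lipschitz-type constant is
#
#     L = 4.0 + 1 + 1e-2 = 5.01
#
# so SAG uses step = 1 / L ~= 0.1996, while SAGA uses
#
#     mun  = min(2 * 100 * 1e-2, 5.01) = 2.0
#     step = 1 / (2 * 5.01 + 2.0) ~= 0.0832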
def sag_solver(
    X,
    y,
    sample_weight=None,
    loss="log",
    alpha=1.0,
    beta=0.0,
    max_iter=1000,
    tol=0.001,
    verbose=0,
    random_state=None,
    check_input=True,
    max_squared_sum=None,
    warm_start_mem=None,
    is_saga=False,
):
    """SAG solver for Ridge and LogisticRegression.

    SAG stands for Stochastic Average Gradient: the gradient of the loss is
    estimated each sample at a time and the model is updated along the way with
    a constant learning rate.

    IMPORTANT NOTE: 'sag' solver converges faster on columns that are on the
    same scale. You can normalize the data by using
    sklearn.preprocessing.StandardScaler on your data before passing it to the
    fit method.

    This implementation works with data represented as dense numpy arrays or
    sparse scipy arrays of floating point values for the features. It will
    fit the data according to squared loss or log loss.

    The regularizer is a penalty added to the loss function that shrinks model
    parameters towards the zero vector using the squared euclidean norm L2.

    .. versionadded:: 0.17

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Training data.

    y : ndarray of shape (n_samples,)
        Target values. With loss='multinomial', y must be label encoded
        (see preprocessing.LabelEncoder). For loss='log' it must be in [0, 1].

    sample_weight : array-like of shape (n_samples,), default=None
        Weights applied to individual samples (1. for unweighted).

    loss : {'log', 'squared', 'multinomial'}, default='log'
        Loss function that will be optimized:
        -'log' is the binary logistic loss, as used in LogisticRegression.
        -'squared' is the squared loss, as used in Ridge.
        -'multinomial' is the multinomial logistic loss, as used in
         LogisticRegression.

        .. versionadded:: 0.18
           *loss='multinomial'*

    alpha : float, default=1.
        L2 regularization term in the objective function
        ``(0.5 * alpha * || W ||_F^2)``.

    beta : float, default=0.
        L1 regularization term in the objective function
        ``(beta * || W ||_1)``. Only applied if ``is_saga`` is set to True.

    max_iter : int, default=1000
        The max number of passes over the training data if the stopping
        criterion is not reached.

    tol : float, default=0.001
        The stopping criterion for the weights. The iterations will stop when
        max(change in weights) / max(weights) < tol.

    verbose : int, default=0
        The verbosity level.

    random_state : int, RandomState instance or None, default=None
        Used when shuffling the data. Pass an int for reproducible output
        across multiple function calls.
        See :term:`Glossary <random_state>`.

    check_input : bool, default=True
        If False, the input arrays X and y will not be checked.

    max_squared_sum : float, default=None
        Maximum squared sum of X over samples. If None, it will be computed,
        going through all the samples. The value should be precomputed
        to speed up cross validation.

    warm_start_mem : dict, default=None
        The initialization parameters used for warm starting. Warm starting is
        currently used in LogisticRegression but not in Ridge.
        It contains:
            - 'coef': the weight vector, with the intercept in last line
              if the intercept is fitted.
            - 'gradient_memory': the scalar gradient for all seen samples.
            - 'sum_gradient': the sum of gradient over all seen samples,
              for each feature.
            - 'intercept_sum_gradient': the sum of gradient over all seen
              samples, for the intercept.
            - 'seen': array of boolean describing the seen samples.
            - 'num_seen': the number of seen samples.

    is_saga : bool, default=False
        Whether to use the SAGA algorithm or the SAG algorithm. SAGA behaves
        better in the first epochs, and allows for L1 regularisation.

    Returns
    -------
    coef_ : ndarray of shape (n_features,)
        Weight vector.

    n_iter_ : int
        The number of full passes over all samples.

    warm_start_mem : dict
        Contains a 'coef' key with the fitted result, and possibly the
        fitted intercept at the end of the array. Contains also other keys
        used for warm starting.

    Examples
    --------
    >>> import numpy as np
    >>> from sklearn import linear_model
    >>> n_samples, n_features = 10, 5
    >>> rng = np.random.RandomState(0)
    >>> X = rng.randn(n_samples, n_features)
    >>> y = rng.randn(n_samples)
    >>> clf = linear_model.Ridge(solver='sag')
    >>> clf.fit(X, y)
    Ridge(solver='sag')

    >>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
    >>> y = np.array([1, 1, 2, 2])
    >>> clf = linear_model.LogisticRegression(solver='sag')
    >>> clf.fit(X, y)
    LogisticRegression(solver='sag')

    References
    ----------
    Schmidt, M., Roux, N. L., & Bach, F. (2013).
    Minimizing finite sums with the stochastic average gradient
    https://hal.inria.fr/hal-00860051/document

    :arxiv:`Defazio, A., Bach F. & Lacoste-Julien S. (2014).
    "SAGA: A Fast Incremental Gradient Method With Support
    for Non-Strongly Convex Composite Objectives" <1407.0202>`

    See Also
    --------
    Ridge, SGDRegressor, ElasticNet, Lasso, SVR,
    LogisticRegression, SGDClassifier, LinearSVC, Perceptron
    """
    if warm_start_mem is None:
        warm_start_mem = {}
    # Ridge default max_iter is None
    if max_iter is None:
        max_iter = 1000

    if check_input:
        _dtype = [np.float64, np.float32]
        X = check_array(X, dtype=_dtype, accept_sparse="csr", order="C")
        y = check_array(y, dtype=_dtype, ensure_2d=False, order="C")

    n_samples, n_features = X.shape[0], X.shape[1]
    # As in SGD, the alpha is scaled by n_samples.
    alpha_scaled = float(alpha) / n_samples
    beta_scaled = float(beta) / n_samples

    # if loss == 'multinomial', y should be label encoded.
    n_classes = int(y.max()) + 1 if loss == "multinomial" else 1

    # initialization
    sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype)

    if "coef" in warm_start_mem.keys():
        coef_init = warm_start_mem["coef"]
    else:
        # assume fit_intercept is False
        coef_init = np.zeros((n_features, n_classes), dtype=X.dtype, order="C")

    # coef_init contains possibly the intercept_init at the end.
    # Note that Ridge centers the data before fitting, so fit_intercept=False.
    fit_intercept = coef_init.shape[0] == (n_features + 1)
    if fit_intercept:
        intercept_init = coef_init[-1, :]
        coef_init = coef_init[:-1, :]
    else:
        intercept_init = np.zeros(n_classes, dtype=X.dtype)

    if "intercept_sum_gradient" in warm_start_mem.keys():
        intercept_sum_gradient = warm_start_mem["intercept_sum_gradient"]
    else:
        intercept_sum_gradient = np.zeros(n_classes, dtype=X.dtype)

    if "gradient_memory" in warm_start_mem.keys():
        gradient_memory_init = warm_start_mem["gradient_memory"]
    else:
        gradient_memory_init = np.zeros(
            (n_samples, n_classes), dtype=X.dtype, order="C"
        )
    if "sum_gradient" in warm_start_mem.keys():
        sum_gradient_init = warm_start_mem["sum_gradient"]
    else:
        sum_gradient_init = np.zeros((n_features, n_classes), dtype=X.dtype, order="C")

    if "seen" in warm_start_mem.keys():
        seen_init = warm_start_mem["seen"]
    else:
        seen_init = np.zeros(n_samples, dtype=np.int32, order="C")

    if "num_seen" in warm_start_mem.keys():
        num_seen_init = warm_start_mem["num_seen"]
    else:
        num_seen_init = 0

    dataset, intercept_decay = make_dataset(X, y, sample_weight, random_state)

    if max_squared_sum is None:
        max_squared_sum = row_norms(X, squared=True).max()
    step_size = get_auto_step_size(
        max_squared_sum,
        alpha_scaled,
        loss,
        fit_intercept,
        n_samples=n_samples,
        is_saga=is_saga,
    )
    if step_size * alpha_scaled == 1:
        raise ZeroDivisionError(
            "Current sag implementation does not handle "
            "the case step_size * alpha_scaled == 1"
        )

    sag = sag64 if X.dtype == np.float64 else sag32
    num_seen, n_iter_ = sag(
        dataset,
        coef_init,
        intercept_init,
        n_samples,
        n_features,
        n_classes,
        tol,
        max_iter,
        loss,
        step_size,
        alpha_scaled,
        beta_scaled,
        sum_gradient_init,
        gradient_memory_init,
        seen_init,
        num_seen_init,
        fit_intercept,
        intercept_sum_gradient,
        intercept_decay,
        is_saga,
        verbose,
    )

    if n_iter_ == max_iter:
        warnings.warn(
            "The max_iter was reached which means the coef_ did not converge",
            ConvergenceWarning,
        )

    if fit_intercept:
        coef_init = np.vstack((coef_init, intercept_init))

    warm_start_mem = {
        "coef": coef_init,
        "sum_gradient": sum_gradient_init,
        "intercept_sum_gradient": intercept_sum_gradient,
        "gradient_memory": gradient_memory_init,
        "seen": seen_init,
        "num_seen": num_seen,
    }

    if loss == "multinomial":
        coef_ = coef_init.T
    else:
        coef_ = coef_init[:, 0]

    return coef_, n_iter_, warm_start_mem
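# Usage sketch (an illustrative assumption, not part of the original module:
# within scikit-learn, sag_solver is only called internally by Ridge and
# LogisticRegression, and this module is private):
#
#     import numpy as np
#     from sklearn.linear_model._sag import sag_solver
#
#     rng = np.random.RandomState(0)
#     X = rng.randn(50, 3)
#     y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(50)
#     coef_, n_iter_, warm_mem = sag_solver(X, y, loss="squared", alpha=1.0)
#     # Pass warm_mem back via warm_start_mem= to resume from a previous fit:
#     coef_, n_iter_, warm_mem = sag_solver(
#         X, y, loss="squared", alpha=1.0, warm_start_mem=warm_mem
#     )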