"""Gaussian processes regression."""

import warnings
from numbers import Integral, Real
from operator import itemgetter

import numpy as np
import scipy.optimize
from scipy.linalg import cho_solve, cholesky, solve_triangular

from ..base import (
    BaseEstimator,
    MultiOutputMixin,
    RegressorMixin,
    _fit_context,
    clone,
)
from ..preprocessing._data import _handle_zeros_in_scale
from ..utils import check_random_state
from ..utils._param_validation import Interval, StrOptions
from ..utils.optimize import _check_optimize_result
from ..utils.validation import validate_data
from .kernels import RBF, Kernel
from .kernels import ConstantKernel as C

GPR_CHOLESKY_LOWER = True


class GaussianProcessRegressor(MultiOutputMixin, RegressorMixin, BaseEstimator):
    """Gaussian process regression (GPR).

    The implementation is based on Algorithm 2.1 of [RW2006]_.

    In addition to standard scikit-learn estimator API,
    :class:`GaussianProcessRegressor`:

    * allows prediction without prior fitting (based on the GP prior)
    * provides an additional method `sample_y(X)`, which evaluates samples
      drawn from the GPR (prior or posterior) at given inputs
    * exposes a method `log_marginal_likelihood(theta)`, which can be used
      externally for other ways of selecting hyperparameters, e.g., via
      Markov chain Monte Carlo.

    To learn the difference between a point-estimate approach vs. a more
    Bayesian modelling approach, refer to the example entitled
    :ref:`sphx_glr_auto_examples_gaussian_process_plot_compare_gpr_krr.py`.

    Read more in the :ref:`User Guide <gaussian_process>`.

    .. versionadded:: 0.18

    Parameters
    ----------
    kernel : kernel instance, default=None
        The kernel specifying the covariance function of the GP. If None is
        passed, the kernel ``ConstantKernel(1.0, constant_value_bounds="fixed")
        * RBF(1.0, length_scale_bounds="fixed")`` is used as default. Note that
        the kernel hyperparameters are optimized during fitting unless the
        bounds are marked as "fixed".

    alpha : float or ndarray of shape (n_samples,), default=1e-10
        Value added to the diagonal of the kernel matrix during fitting.
        This can prevent a potential numerical issue during fitting, by
        ensuring that the calculated values form a positive definite matrix.
        It can also be interpreted as the variance of additional Gaussian
        measurement noise on the training observations. Note that this is
        different from using a `WhiteKernel`. If an array is passed, it must
        have the same number of entries as the data used for fitting and is
        used as datapoint-dependent noise level. Allowing to specify the
        noise level directly as a parameter is mainly for convenience and
        for consistency with :class:`~sklearn.linear_model.Ridge`.

        For an example illustrating how the alpha parameter controls the
        noise variance in Gaussian Process Regression, see
        :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy_targets.py`.

    optimizer : "fmin_l_bfgs_b", callable or None, default="fmin_l_bfgs_b"
        Can either be one of the internally supported optimizers for optimizing
        the kernel's parameters, specified by a string, or an externally
        defined optimizer passed as a callable. If a callable is passed, it
        must have the signature::

            def optimizer(obj_func, initial_theta, bounds):
                # * 'obj_func': the objective function to be minimized, which
                #   takes the hyperparameters theta as a parameter and an
                #   optional flag eval_gradient, which determines if the
                #   gradient is returned additionally to the function value
                # * 'initial_theta': the initial value for theta, which can be
                #   used by local optimizers
                # * 'bounds': the bounds on the values of theta
                ....
                # Returned are the best found hyperparameters theta and
                # the corresponding value of the target function.
                return theta_opt, func_min

        Per default, the L-BFGS-B algorithm from `scipy.optimize.minimize`
        is used.
        If None is passed, the kernel's parameters are kept fixed.
        Available internal optimizers are: `{'fmin_l_bfgs_b'}`.

    n_restarts_optimizer : int, default=0
        The number of restarts of the optimizer for finding the kernel's
        parameters which maximize the log-marginal likelihood. The first run
        of the optimizer is performed from the kernel's initial parameters,
        the remaining ones (if any) from thetas sampled log-uniform randomly
        from the space of allowed theta-values. If greater than 0, all bounds
        must be finite. Note that `n_restarts_optimizer == 0` implies that one
        run is performed.

    normalize_y : bool, default=False
        Whether or not to normalize the target values `y` by removing the mean
        and scaling to unit-variance. This is recommended for cases where
        zero-mean, unit-variance priors are used. Note that, in this
        implementation, the normalisation is reversed before the GP predictions
        are reported.

        .. versionchanged:: 0.23

    copy_X_train : bool, default=True
        If True, a persistent copy of the training data is stored in the
        object. Otherwise, just a reference to the training data is stored,
        which might cause predictions to change if the data is modified
        externally.

    n_targets : int, default=None
        The number of dimensions of the target values. Used to decide the
        number of outputs when sampling from the prior distributions (i.e.
        calling :meth:`sample_y` before :meth:`fit`). This parameter is
        ignored once :meth:`fit` has been called.

        .. versionadded:: 1.3

    random_state : int, RandomState instance or None, default=None
        Determines random number generation used to initialize the centers.
        Pass an int for reproducible results across multiple function calls.
        See :term:`Glossary <random_state>`.

    Attributes
    ----------
    X_train_ : array-like of shape (n_samples, n_features) or list of object
        Feature vectors or other representations of training data (also
        required for prediction).

    y_train_ : array-like of shape (n_samples,) or (n_samples, n_targets)
        Target values in training data (also required for prediction).

    kernel_ : kernel instance
        The kernel used for prediction. The structure of the kernel is the
        same as the one passed as parameter but with optimized hyperparameters.

    L_ : array-like of shape (n_samples, n_samples)
        Lower-triangular Cholesky decomposition of the kernel in ``X_train_``.

    alpha_ : array-like of shape (n_samples,)
        Dual coefficients of training data points in kernel space.

    log_marginal_likelihood_value_ : float
        The log-marginal-likelihood of ``self.kernel_.theta``.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

        .. versionadded:: 0.24

    feature_names_in_ : ndarray of shape (`n_features_in_`,)
        Names of features seen during :term:`fit`. Defined only when `X`
        has feature names that are all strings.

        .. versionadded:: 1.0

    See Also
    --------
    GaussianProcessClassifier : Gaussian process classification (GPC)
        based on Laplace approximation.

    References
    ----------
    .. [RW2006] `Carl E. Rasmussen and Christopher K.I. Williams,
       "Gaussian Processes for Machine Learning",
       MIT Press 2006 <https://www.gaussianprocess.org/gpml/chapters/RW.pdf>`_

    Examples
    --------
    >>> from sklearn.datasets import make_friedman2
    >>> from sklearn.gaussian_process import GaussianProcessRegressor
    >>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
    >>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
    >>> kernel = DotProduct() + WhiteKernel()
    >>> gpr = GaussianProcessRegressor(kernel=kernel,
    ...         random_state=0).fit(X, y)
    >>> gpr.score(X, y)
    0.3680...
    >>> gpr.predict(X[:2,:], return_std=True)
    (array([653.0, 592.1]), array([316.6, 316.6]))
    """

    _parameter_constraints: dict = {
        "kernel": [None, Kernel],
        "alpha": [Interval(Real, 0, None, closed="left"), np.ndarray],
        "optimizer": [StrOptions({"fmin_l_bfgs_b"}), callable, None],
        "n_restarts_optimizer": [Interval(Integral, 0, None, closed="left")],
        "normalize_y": ["boolean"],
        "copy_X_train": ["boolean"],
        "n_targets": [Interval(Integral, 1, None, closed="left"), None],
        "random_state": ["random_state"],
    }

    def __init__(
        self,
        kernel=None,
        *,
        alpha=1e-10,
        optimizer="fmin_l_bfgs_b",
        n_restarts_optimizer=0,
        normalize_y=False,
        copy_X_train=True,
        n_targets=None,
        random_state=None,
    ):
        self.kernel = kernel
        self.alpha = alpha
        self.optimizer = optimizer
        self.n_restarts_optimizer = n_restarts_optimizer
        self.normalize_y = normalize_y
        self.copy_X_train = copy_X_train
        self.n_targets = n_targets
        self.random_state = random_state

    @_fit_context(prefer_skip_nested_validation=True)
    def fit(self, X, y):
        """Fit Gaussian process regression model.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features) or list of object
            Feature vectors or other representations of training data.

        y : array-like of shape (n_samples,) or (n_samples, n_targets)
            Target values.

        Returns
        -------
        self : object
            GaussianProcessRegressor class instance.
        """
        if self.kernel is None:  # Use an RBF kernel as default
            self.kernel_ = C(1.0, constant_value_bounds="fixed") * RBF(
                1.0, length_scale_bounds="fixed"
            )
        else:
            self.kernel_ = clone(self.kernel)

        self._rng = check_random_state(self.random_state)

        if self.kernel_.requires_vector_input:
            dtype, ensure_2d = "numeric", True
        else:
            dtype, ensure_2d = None, False
        X, y = validate_data(
            self,
            X,
            y,
            multi_output=True,
            y_numeric=True,
            ensure_2d=ensure_2d,
            dtype=dtype,
        )

        n_targets_seen = y.shape[1] if y.ndim > 1 else 1
        if self.n_targets is not None and n_targets_seen != self.n_targets:
            raise ValueError(
                "The number of targets seen in `y` is different from the parameter "
                f"`n_targets`. Got {n_targets_seen} != {self.n_targets}."
            )

        # Normalize target value
        if self.normalize_y:
            self._y_train_mean = np.mean(y, axis=0)
            self._y_train_std = _handle_zeros_in_scale(np.std(y, axis=0), copy=False)

            # Remove mean and make unit variance
            y = (y - self._y_train_mean) / self._y_train_std
        else:
            shape_y_stats = (y.shape[1],) if y.ndim == 2 else 1
            self._y_train_mean = np.zeros(shape=shape_y_stats)
            self._y_train_std = np.ones(shape=shape_y_stats)

        if np.iterable(self.alpha) and self.alpha.shape[0] != y.shape[0]:
            if self.alpha.shape[0] == 1:
                self.alpha = self.alpha[0]
            else:
                raise ValueError(
                    "alpha must be a scalar or an array with same number of "
                    f"entries as y. ({self.alpha.shape[0]} != {y.shape[0]})"
                )

        self.X_train_ = np.copy(X) if self.copy_X_train else X
        self.y_train_ = np.copy(y) if self.copy_X_train else y

        if self.optimizer is not None and self.kernel_.n_dims > 0:
            # Choose hyperparameters based on maximizing the log-marginal
            # likelihood (potentially starting from several initial values)
            def obj_func(theta, eval_gradient=True):
                if eval_gradient:
                    lml, grad = self.log_marginal_likelihood(
                        theta, eval_gradient=True, clone_kernel=False
                    )
                    return -lml, -grad
                else:
                    return -self.log_marginal_likelihood(theta, clone_kernel=False)

            # First optimize starting from theta specified in kernel
            optima = [
                self._constrained_optimization(
                    obj_func, self.kernel_.theta, self.kernel_.bounds
                )
            ]

            # Additional runs are performed from log-uniform chosen initial
            # theta
            if self.n_restarts_optimizer > 0:
                if not np.isfinite(self.kernel_.bounds).all():
                    raise ValueError(
                        "Multiple optimizer restarts (n_restarts_optimizer>0) "
                        "requires that all bounds are finite."
                    )
                bounds = self.kernel_.bounds
                for iteration in range(self.n_restarts_optimizer):
                    theta_initial = self._rng.uniform(bounds[:, 0], bounds[:, 1])
                    optima.append(
                        self._constrained_optimization(obj_func, theta_initial, bounds)
                    )
            # Select result from run with minimal (negative) log-marginal
            # likelihood
            lml_values = list(map(itemgetter(1), optima))
            self.kernel_.theta = optima[np.argmin(lml_values)][0]
            self.kernel_._check_bounds_params()

            self.log_marginal_likelihood_value_ = -np.min(lml_values)
        else:
            self.log_marginal_likelihood_value_ = self.log_marginal_likelihood(
                self.kernel_.theta, clone_kernel=False
            )

        # Precompute quantities required for predictions which are independent
        # of actual query points
        # Alg. 2.1, page 19, line 2 -> L = cholesky(K + sigma^2 I)
        K = self.kernel_(self.X_train_)
        K[np.diag_indices_from(K)] += self.alpha
        try:
            self.L_ = cholesky(K, lower=GPR_CHOLESKY_LOWER, check_finite=False)
        except np.linalg.LinAlgError as exc:
            exc.args = (
                f"The kernel, {self.kernel_}, is not returning a positive "
                "definite matrix. Try gradually increasing the 'alpha' "
                "parameter of your GaussianProcessRegressor estimator.",
            ) + exc.args
            raise
        # Alg 2.1, page 19, line 3 -> alpha = L^T \ (L \ y)
        self.alpha_ = cho_solve(
            (self.L_, GPR_CHOLESKY_LOWER),
            self.y_train_,
            check_finite=False,
        )
        return self

    def predict(self, X, return_std=False, return_cov=False):
        """Predict using the Gaussian process regression model.

        We can also predict based on an unfitted model by using the GP prior.
        In addition to the mean of the predictive distribution, optionally also
        returns its standard deviation (`return_std=True`) or covariance
        (`return_cov=True`). Note that at most one of the two can be requested.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features) or list of object
            Query points where the GP is evaluated.

        return_std : bool, default=False
            If True, the standard-deviation of the predictive distribution at
            the query points is returned along with the mean.

        return_cov : bool, default=False
            If True, the covariance of the joint predictive distribution at
            the query points is returned along with the mean.

        Returns
        -------
        y_mean : ndarray of shape (n_samples,) or (n_samples, n_targets)
            Mean of predictive distribution at query points.

        y_std : ndarray of shape (n_samples,) or (n_samples, n_targets), optional
            Standard deviation of predictive distribution at query points.
            Only returned when `return_std` is True.

        y_cov : ndarray of shape (n_samples, n_samples) or \
                (n_samples, n_samples, n_targets), optional
            Covariance of joint predictive distribution at query points.
            Only returned when `return_cov` is True.
        """
        if return_std and return_cov:
            raise RuntimeError(
                "At most one of return_std or return_cov can be requested."
            )

        if self.kernel is None or self.kernel.requires_vector_input:
            dtype, ensure_2d = "numeric", True
        else:
            dtype, ensure_2d = None, False

        X = validate_data(self, X, ensure_2d=ensure_2d, dtype=dtype, reset=False)

        if not hasattr(self, "X_train_"):  # Unfitted; predict based on GP prior
            if self.kernel is None:
                kernel = C(1.0, constant_value_bounds="fixed") * RBF(
                    1.0, length_scale_bounds="fixed"
                )
            else:
                kernel = self.kernel

            n_targets = self.n_targets if self.n_targets is not None else 1
            y_mean = np.zeros(shape=(X.shape[0], n_targets)).squeeze()

            if return_cov:
                y_cov = kernel(X)
                if n_targets > 1:
                    y_cov = np.repeat(
                        np.expand_dims(y_cov, -1), repeats=n_targets, axis=-1
                    )
                return y_mean, y_cov
            elif return_std:
                y_var = kernel.diag(X)
                if n_targets > 1:
                    y_var = np.repeat(
                        np.expand_dims(y_var, -1), repeats=n_targets, axis=-1
                    )
                return y_mean, np.sqrt(y_var)
            else:
                return y_mean
        else:  # Predict based on GP posterior
            # Alg 2.1, page 19, line 4 -> f*_bar = K(X_test, X_train) . alpha
            K_trans = self.kernel_(X, self.X_train_)
            y_mean = K_trans @ self.alpha_

            # undo normalisation
            y_mean = self._y_train_std * y_mean + self._y_train_mean

            # if y_mean has shape (n_samples, 1), reshape to (n_samples,)
            if y_mean.ndim > 1 and y_mean.shape[1] == 1:
                y_mean = np.squeeze(y_mean, axis=1)

            # Alg 2.1, page 19, line 5 -> v = L \ K(X_test, X_train)^T
            V = solve_triangular(
                self.L_, K_trans.T, lower=GPR_CHOLESKY_LOWER, check_finite=False
            )

            if return_cov:
                # Alg 2.1, page 19, line 6 -> K(X_test, X_test) - v^T . v
                y_cov = self.kernel_(X) - V.T @ V

                # undo normalisation
                y_cov = np.outer(y_cov, self._y_train_std**2).reshape(
                    *y_cov.shape, -1
                )
                # if y_cov has shape (n_samples, n_samples, 1), reshape to
                # (n_samples, n_samples)
                if y_cov.shape[2] == 1:
                    y_cov = np.squeeze(y_cov, axis=2)

                return y_mean, y_cov
            elif return_std:
                # Compute variance of predictive distribution. The einsum
                # extracts only the diagonal of V^T @ V without forming the
                # full matrix.
                y_var = self.kernel_.diag(X).copy()
                y_var -= np.einsum("ij,ji->i", V.T, V)

                # Check if any of the variances is negative because of
                # numerical issues. If yes: set the variance to 0.
                y_var_negative = y_var < 0
                if np.any(y_var_negative):
                    warnings.warn(
                        "Predicted variances smaller than 0. "
                        "Setting those variances to 0."
                    )
                    y_var[y_var_negative] = 0.0

                # undo normalisation
                y_var = np.outer(y_var, self._y_train_std**2).reshape(
                    *y_var.shape, -1
                )

                # if y_var has shape (n_samples, 1), reshape to (n_samples,)
                if y_var.shape[1] == 1:
                    y_var = np.squeeze(y_var, axis=1)

                return y_mean, np.sqrt(y_var)
            else:
                return y_mean

    def sample_y(self, X, n_samples=1, random_state=0):
        """Draw samples from Gaussian process and evaluate at X.

        Parameters
        ----------
        X : array-like of shape (n_samples_X, n_features) or list of object
            Query points where the GP is evaluated.

        n_samples : int, default=1
            Number of samples drawn from the Gaussian process per query point.

        random_state : int, RandomState instance or None, default=0
            Determines random number generation to randomly draw samples.
            Pass an int for reproducible results across multiple function
            calls.
            See :term:`Glossary <random_state>`.

        Returns
        -------
        y_samples : ndarray of shape (n_samples_X, n_samples), or \
            (n_samples_X, n_targets, n_samples)
            Values of n_samples samples drawn from Gaussian process and
            evaluated at query points.
        """
        rng = check_random_state(random_state)

        y_mean, y_cov = self.predict(X, return_cov=True)
        if y_mean.ndim == 1:
            y_samples = rng.multivariate_normal(y_mean, y_cov, n_samples).T
        else:
            y_samples = [
                rng.multivariate_normal(
                    y_mean[:, target], y_cov[..., target], n_samples
                ).T[:, np.newaxis]
                for target in range(y_mean.shape[1])
            ]
            y_samples = np.hstack(y_samples)
        return y_samples

    def log_marginal_likelihood(
        self, theta=None, eval_gradient=False, clone_kernel=True
    ):
        """Return log-marginal likelihood of theta for training data.

        Parameters
        ----------
        theta : array-like of shape (n_kernel_params,) default=None
            Kernel hyperparameters for which the log-marginal likelihood is
            evaluated. If None, the precomputed log_marginal_likelihood
            of ``self.kernel_.theta`` is returned.

        eval_gradient : bool, default=False
            If True, the gradient of the log-marginal likelihood with respect
            to the kernel hyperparameters at position theta is returned
            additionally. If True, theta must not be None.

        clone_kernel : bool, default=True
            If True, the kernel attribute is copied. If False, the kernel
            attribute is modified, but may result in a performance improvement.

        Returns
        -------
        log_likelihood : float
            Log-marginal likelihood of theta for training data.

        log_likelihood_gradient : ndarray of shape (n_kernel_params,), optional
            Gradient of the log-marginal likelihood with respect to the kernel
            hyperparameters at position theta.
            Only returned when eval_gradient is True.
        """
        if theta is None:
            if eval_gradient:
                raise ValueError("Gradient can only be evaluated for theta!=None")
            return self.log_marginal_likelihood_value_

        if clone_kernel:
            kernel = self.kernel_.clone_with_theta(theta)
        else:
            kernel = self.kernel_
            kernel.theta = theta

        if eval_gradient:
            K, K_gradient = kernel(self.X_train_, eval_gradient=True)
        else:
            K = kernel(self.X_train_)

        # Alg. 2.1, page 19, line 2 -> L = cholesky(K + sigma^2 I)
        K[np.diag_indices_from(K)] += self.alpha
        try:
            L = cholesky(K, lower=GPR_CHOLESKY_LOWER, check_finite=False)
        except np.linalg.LinAlgError:
            return (-np.inf, np.zeros_like(theta)) if eval_gradient else -np.inf

        # Support multi-dimensional output of self.y_train_
        y_train = self.y_train_
        if y_train.ndim == 1:
            y_train = y_train[:, np.newaxis]

        # Alg 2.1, page 19, line 3 -> alpha = L^T \ (L \ y)
        alpha = cho_solve((L, GPR_CHOLESKY_LOWER), y_train, check_finite=False)

        # Alg 2.1, page 19, line 7
        # -0.5 . y^T . alpha - sum(log(diag(L))) - n_samples / 2 log(2*pi)
        # The einsum computes y^T . alpha independently for each output.
        log_likelihood_dims = -0.5 * np.einsum("ik,ik->k", y_train, alpha)
        log_likelihood_dims -= np.log(np.diag(L)).sum()
        log_likelihood_dims -= K.shape[0] / 2 * np.log(2 * np.pi)
        # the log marginal likelihood is summed up across the outputs
        log_likelihood = log_likelihood_dims.sum(axis=-1)

        if eval_gradient:
            # Eq. 5.9, p. 114, and footnote 5 in p. 114
            # 0.5 * trace((alpha . alpha^T - K^-1) . K_gradient)
            # With multiple outputs, alpha has shape (n_samples, n_outputs) and
            # inner_term stacks one outer product per output.
            inner_term = np.einsum("ik,jk->ijk", alpha, alpha)
            # compute K^-1 of shape (n_samples, n_samples)
            K_inv = cho_solve(
                (L, GPR_CHOLESKY_LOWER), np.eye(K.shape[0]), check_finite=False
            )
            # create a new axis to use broadcasting between inner_term and K_inv
            inner_term -= K_inv[..., np.newaxis]
            # Only the trace of inner_term @ K_gradient is needed, so compute
            # it directly with an einsum instead of the full matrix product.
            log_likelihood_gradient_dims = 0.5 * np.einsum(
                "ijl,jik->kl", inner_term, K_gradient
            )
            # the gradient is summed up across the outputs
            log_likelihood_gradient = log_likelihood_gradient_dims.sum(axis=-1)
            return log_likelihood, log_likelihood_gradient

        return log_likelihood

    def _constrained_optimization(self, obj_func, initial_theta, bounds):
        if self.optimizer == "fmin_l_bfgs_b":
            opt_res = scipy.optimize.minimize(
                obj_func,
                initial_theta,
                method="L-BFGS-B",
                jac=True,
                bounds=bounds,
            )
            _check_optimize_result("lbfgs", opt_res)
            theta_opt, func_min = opt_res.x, opt_res.fun
        elif callable(self.optimizer):
            theta_opt, func_min = self.optimizer(
                obj_func, initial_theta, bounds=bounds
            )
        else:
            raise ValueError(f"Unknown optimizer {self.optimizer}.")

        return theta_opt, func_min

    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.requires_fit = False
        return tags
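

# Minimal usage sketch (illustrative only, not part of the estimator above):
# it fits the regressor on a toy dataset and re-derives the posterior mean by
# hand from the fitted attributes, following Algorithm 2.1 of [RW2006]_:
# mean = K(X*, X_train) @ alpha_, rescaled by the stored training-target
# statistics. `make_friedman2`, `DotProduct` and `WhiteKernel` are the
# scikit-learn helpers already used in the class docstring example. Run with
# `python -m sklearn.gaussian_process._gpr` so the relative imports resolve.
if __name__ == "__main__":
    from sklearn.datasets import make_friedman2
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

    X, y = make_friedman2(n_samples=200, noise=0.5, random_state=0)
    gpr = GaussianProcessRegressor(
        kernel=DotProduct() + WhiteKernel(), random_state=0
    ).fit(X, y)

    # Posterior mean reconstructed by hand from the fitted attributes.
    K_trans = gpr.kernel_(X[:5], gpr.X_train_)
    manual_mean = gpr._y_train_std * (K_trans @ gpr.alpha_) + gpr._y_train_mean

    # Should match the estimator's own prediction up to floating-point noise.
    assert np.allclose(manual_mean, gpr.predict(X[:5]))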