import importlib.metadata
import inspect
import re
from typing import Any, Optional, Union

from packaging import version

from ..utils import (
    check_peft_version,
    find_adapter_config_file,
    is_accelerate_available,
    is_peft_available,
    is_torch_available,
    logging,
)


if is_torch_available():
    import torch

if is_accelerate_available():
    from accelerate import dispatch_model
    from accelerate.utils import get_balanced_memory, infer_auto_device_map

# Minimum PEFT version supported by this integration
MIN_PEFT_VERSION = "0.5.0"

logger = logging.get_logger(__name__)

# Vision-language model families whose adapter checkpoints may need key remapping
VLMS = [
    "aria",
    "ayavision",
    "emu3",
    "fuyu",
    "gotocr2",
    "gemma3",
    "internvl",
    "llava",
    "mistral3",
    "mllama",
    "paligemma",
    "qwen2vl",
    "qwen2_5_vl",
    "videollava",
    "vipllava",
]


class PeftAdapterMixin:
    """
    A class containing all functions for loading and using adapter weights that are supported in the PEFT library.
    For more details about adapters and injecting them in a transformer-based model, check out the PEFT library
    documentation: https://huggingface.co/docs/peft/index

    Currently supported PEFT methods are all non-prompt learning methods (LoRA, IA³, etc.). Other PEFT models such
    as prompt tuning and prompt learning are out of scope, as these adapters are not "injectable" into a torch
    module. For those methods, please refer to the usage guide of the PEFT library.

    With this mixin, if the correct PEFT version is installed, it is possible to:

    - Load an adapter stored on a local path or in a remote Hub repository and inject it in the model
    - Attach new adapters in the model and train them with `Trainer` or on your own
    - Attach multiple adapters and iteratively activate / deactivate them
    - Activate / deactivate all adapters from the model
    - Get the `state_dict` of the active adapter
    """

    _hf_peft_config_loaded = False

    def load_adapter(
        self,
        peft_model_id: Optional[str] = None,
        adapter_name: Optional[str] = None,
        revision: Optional[str] = None,
        token: Optional[str] = None,
        device_map: Optional[str] = "auto",
        max_memory: Optional[str] = None,
        offload_folder: Optional[str] = None,
        offload_index: Optional[int] = None,
        peft_config: Optional[dict[str, Any]] = None,
        adapter_state_dict: Optional[dict[str, "torch.Tensor"]] = None,
        low_cpu_mem_usage: bool = False,
        is_trainable: bool = False,
        adapter_kwargs: Optional[dict[str, Any]] = None,
    ) -> None:
        """
        Load adapter weights from a file or a remote Hub folder. Requires PEFT to be installed as a backend.

        Args:
            peft_model_id (`str`, *optional*):
                The identifier of the model to look for on the Hub, or a local path to the saved adapter config
                file and adapter weights.
            adapter_name (`str`, *optional*):
                The adapter name to use. If not set, will use the name "default".
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use: a branch name, a tag name, or a commit id. To test a pull
                request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.
            token (`str`, *optional*):
                The authentication token used to load remote (e.g. private) repositories from the Hugging Face
                Hub. You might need to call `hf auth login` and paste your token to cache it.
            device_map (`str` or `dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
                A map that specifies where each submodule should go. It doesn't need to be refined to each
                parameter/buffer name; once a given module name is inside, every submodule of it will be sent to
                the same device. Passing only a device (e.g. `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank
                like `1`) maps the entire model to that device. To have Accelerate compute the most optimized
                `device_map` automatically, set `device_map="auto"`.
            max_memory (`dict`, *optional*):
                A dictionary mapping device identifiers to maximum memory. Defaults to the maximum memory
                available for each GPU and the available CPU RAM if unset.
            offload_folder (`str` or `os.PathLike`, *optional*):
                If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
            offload_index (`int`, *optional*):
                The `offload_index` argument to be passed to `accelerate.dispatch_model`.
            peft_config (`dict[str, Any]`, *optional*):
                The configuration of the adapter to add; supported adapters are all non-prompt learning configs
                (LoRA, IA³, etc.). Used in case users directly pass PEFT state dicts.
            adapter_state_dict (`dict[str, torch.Tensor]`, *optional*):
                The state dict of the adapter to load. Used in case users directly pass PEFT state dicts.
            low_cpu_mem_usage (`bool`, *optional*, defaults to `False`):
                Reduce memory usage while loading the PEFT adapter. This should also speed up loading.
                Requires PEFT version 0.13.0 or higher.
            is_trainable (`bool`, *optional*, defaults to `False`):
                Whether the adapter should be trainable. If `False`, the adapter is frozen and can only be used
                for inference.
            adapter_kwargs (`dict[str, Any]`, *optional*):
                Additional keyword arguments passed along to the `from_pretrained` method of the adapter config
                and the `find_adapter_config_file` method.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)

        peft_load_kwargs = {}
        key_mapping = adapter_kwargs.pop("key_mapping", None) if adapter_kwargs is not None else None
        if key_mapping is None and any(allowed_name in self.__class__.__name__.lower() for allowed_name in VLMS):
            key_mapping = self._checkpoint_conversion_mapping

        if low_cpu_mem_usage:
            min_version_lcmu = "0.13.0"
            if version.parse(importlib.metadata.version("peft")) >= version.parse(min_version_lcmu):
                peft_load_kwargs["low_cpu_mem_usage"] = low_cpu_mem_usage
            else:
                raise ValueError(
                    "The version of PEFT you are using does not support `low_cpu_mem_usage` yet, "
                    f"please install PEFT >= {min_version_lcmu}."
                )

        adapter_name = adapter_name if adapter_name is not None else "default"
        if adapter_kwargs is None:
            adapter_kwargs = {}

        from peft import PeftConfig, inject_adapter_in_model, load_peft_weights
        from peft.utils import set_peft_model_state_dict

        if self._hf_peft_config_loaded and adapter_name in self.peft_config:
            raise ValueError(f"Adapter with name {adapter_name} already exists. Please use a different name.")

        if peft_model_id is None and (adapter_state_dict is None or peft_config is None):
            raise ValueError(
                "You should either pass a `peft_model_id` or a `peft_config` and `adapter_state_dict` to load an adapter."
            )

        if "device" not in adapter_kwargs:
            device = self.device if not hasattr(self, "hf_device_map") else list(self.hf_device_map.values())[0]
        else:
            device = adapter_kwargs.pop("device")

        # To avoid PEFT errors later on with safetensors
        if isinstance(device, torch.device):
            device = str(device)

        # `revision` is kept in the signature for backward compatibility
        if revision is not None and "revision" not in adapter_kwargs:
            adapter_kwargs["revision"] = revision
        elif revision is not None and "revision" in adapter_kwargs and revision != adapter_kwargs["revision"]:
            logger.error(
                "You passed a `revision` argument both in `adapter_kwargs` and as a standalone argument. "
                "The one in `adapter_kwargs` will be used."
            )

        if peft_config is None:
            adapter_config_file = find_adapter_config_file(
                peft_model_id,
                token=token,
                **adapter_kwargs,
            )
            if adapter_config_file is None:
                raise ValueError(
                    f"adapter model file not found in {peft_model_id}. Make sure you are passing the correct "
                    "path to the adapter model."
                )
            peft_config = PeftConfig.from_pretrained(
                peft_model_id,
                token=token,
                **adapter_kwargs,
            )
        peft_config.inference_mode = not is_trainable

        # Create and add fresh new adapters into the model
        inject_adapter_in_model(peft_config, self, adapter_name, **peft_load_kwargs)

        if not self._hf_peft_config_loaded:
            self._hf_peft_config_loaded = True

        if peft_model_id is not None:
            adapter_state_dict = load_peft_weights(peft_model_id, token=token, device=device, **adapter_kwargs)

        # Pre-process the state dict to remove unneeded prefixes - for backward compatibility
        processed_adapter_state_dict = {}
        prefix = "base_model.model."
        for key, value in adapter_state_dict.items():
            if key.startswith(prefix):
                new_key = key[len(prefix) :]
            else:
                new_key = key
            if key_mapping:
                for pattern, replacement in key_mapping.items():
                    new_key, n_replace = re.subn(pattern, replacement, new_key)
                    # Early exit of the loop
                    if n_replace > 0:
                        break
            processed_adapter_state_dict[new_key] = value

        # Load state dict
        incompatible_keys = set_peft_model_state_dict(
            self, processed_adapter_state_dict, adapter_name, **peft_load_kwargs
        )

        if incompatible_keys is not None:
            err_msg = ""
            origin_name = peft_model_id if peft_model_id is not None else "state_dict"
            # Check for unexpected keys
            if hasattr(incompatible_keys, "unexpected_keys") and len(incompatible_keys.unexpected_keys) > 0:
                err_msg = (
                    f"Loading adapter weights from {origin_name} led to unexpected keys not found in the model: "
                    f"{', '.join(incompatible_keys.unexpected_keys)}. "
                )
            # Check for missing keys
            missing_keys = getattr(incompatible_keys, "missing_keys", None)
            if missing_keys:
                # Filter only missing keys specific to the current adapter, as missing base model keys are expected
                lora_missing_keys = [k for k in missing_keys if "lora_" in k and adapter_name in k]
                if lora_missing_keys:
                    err_msg += (
                        f"Loading adapter weights from {origin_name} led to missing keys in the model: "
                        f"{', '.join(lora_missing_keys)}"
                    )
            if err_msg:
                logger.warning(err_msg)

        if peft_config.inference_mode:
            self.eval()

        # Re-dispatch model and hooks in case the model is offloaded to CPU / Disk
        if (
            (getattr(self, "hf_device_map", None) is not None)
            and (len(set(self.hf_device_map.values()).intersection({"cpu", "disk"})) > 0)
            and len(self.peft_config) == 1
        ):
            self._dispatch_accelerate_model(
                device_map=device_map,
                max_memory=max_memory,
                offload_folder=offload_folder,
                offload_index=offload_index,
            )

    def add_adapter(self, adapter_config, adapter_name: Optional[str] = None) -> None:
        """
        Adds a fresh new adapter to the current model for training purposes. If no adapter name is passed, a
        default name ("default") is assigned to the adapter to follow the convention of the PEFT library.

        Note that the newly added adapter is not automatically activated; to activate it, use
        `model.set_adapter`.

        Args:
            adapter_config (`~peft.PeftConfig`):
                The configuration of the adapter to add; supported adapters are non-prompt learning methods
                (LoRA, IA³, etc.).
            adapter_name (`str`, *optional*, defaults to `"default"`):
                The name of the adapter to add.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        from peft import PeftConfig, inject_adapter_in_model

        adapter_name = adapter_name or "default"

        if not self._hf_peft_config_loaded:
            self._hf_peft_config_loaded = True
        elif adapter_name in self.peft_config:
            raise ValueError(f"Adapter with name {adapter_name} already exists. Please use a different name.")

        if not isinstance(adapter_config, PeftConfig):
            raise TypeError(f"adapter_config should be an instance of PeftConfig. Got {type(adapter_config)} instead.")

        # Retrieve the name or path of the model, to stay consistent with what PEFT does
        adapter_config.base_model_name_or_path = self.__dict__.get("name_or_path", None)
        inject_adapter_in_model(adapter_config, self, adapter_name)

        self.set_adapter(adapter_name)

    def set_adapter(self, adapter_name: Union[list[str], str]) -> None:
        """
        Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.

        Args:
            adapter_name (`Union[list[str], str]`):
                The name of the adapter to set. Can also be a list of strings to set multiple adapters.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        if isinstance(adapter_name, list):
            missing = set(adapter_name) - set(self.peft_config)
            if len(missing) > 0:
                raise ValueError(
                    f"Following adapter(s) could not be found: {', '.join(missing)}. Make sure you are passing "
                    f"the correct adapter name(s). Current loaded adapters are: {list(self.peft_config.keys())}"
                )
        elif adapter_name not in self.peft_config:
            raise ValueError(
                f"Adapter with name {adapter_name} not found. Please pass the correct adapter name among "
                f"{list(self.peft_config.keys())}"
            )

        from peft.tuners.tuners_utils import BaseTunerLayer
        from peft.utils import ModulesToSaveWrapper

        _adapters_has_been_set = False

        for _, module in self.named_modules():
            if isinstance(module, (BaseTunerLayer, ModulesToSaveWrapper)):
                if hasattr(module, "set_adapter"):
                    module.set_adapter(adapter_name)
                else:
                    module.active_adapter = adapter_name
                _adapters_has_been_set = True

        if not _adapters_has_been_set:
            raise ValueError(
                "Did not succeed in setting the adapter. Please make sure you are using a model that supports adapters."
            )

    def disable_adapters(self) -> None:
        """
        Disable all adapters attached to the model, so that inference uses the base model only.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        from peft.tuners.tuners_utils import BaseTunerLayer
        from peft.utils import ModulesToSaveWrapper

        for _, module in self.named_modules():
            if isinstance(module, (BaseTunerLayer, ModulesToSaveWrapper)):
                # The recent version of PEFT needs to call `enable_adapters` instead
                if hasattr(module, "enable_adapters"):
                    module.enable_adapters(enabled=False)
                else:
                    module.disable_adapters = True

    def enable_adapters(self) -> None:
        """
        Enable adapters that are attached to the model.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        from peft.tuners.tuners_utils import BaseTunerLayer

        for _, module in self.named_modules():
            if isinstance(module, BaseTunerLayer):
                # The recent version of PEFT needs to call `enable_adapters` instead
                if hasattr(module, "enable_adapters"):
                    module.enable_adapters(enabled=True)
                else:
                    module.disable_adapters = False

    def active_adapters(self) -> list[str]:
        """
        Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple
        adapters for inference), returns the list of all active adapters so that users can deal with them
        accordingly.

        For previous PEFT versions (that do not support multi-adapter inference), `module.active_adapter`
        returns a single string.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)

        if not is_peft_available():
            raise ImportError("PEFT is not available. Please install PEFT to use this function: `pip install peft`.")

        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        from peft.tuners.tuners_utils import BaseTunerLayer

        for _, module in self.named_modules():
            if isinstance(module, BaseTunerLayer):
                active_adapters = module.active_adapter
                break

        # For previous PEFT versions
        if isinstance(active_adapters, str):
            active_adapters = [active_adapters]

        return active_adapters

    def get_adapter_state_dict(self, adapter_name: Optional[str] = None, state_dict: Optional[dict] = None) -> dict:
        """
        Gets the adapter state dict, which contains only the weight tensors of the specified adapter_name
        adapter. If no adapter_name is passed, the active adapter is used.

        Args:
            adapter_name (`str`, *optional*):
                The name of the adapter to get the state dict from. If no name is passed, the active adapter is
                used.
            state_dict (nested dictionary of `torch.Tensor`, *optional*):
                The state dictionary of the model. Defaults to `self.state_dict()`, but can be used if special
                precautions need to be taken when recovering the state dictionary of a model (like when using
                model parallelism).
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        from peft import get_peft_model_state_dict

        if adapter_name is None:
            adapter_name = self.active_adapters()[0]

        adapter_state_dict = get_peft_model_state_dict(self, state_dict=state_dict, adapter_name=adapter_name)
        return adapter_state_dict

    def _dispatch_accelerate_model(
        self,
        device_map: str,
        max_memory: Optional[int] = None,
        offload_folder: Optional[str] = None,
        offload_index: Optional[int] = None,
    ) -> None:
        """
        Optionally re-dispatch the model and attach new hooks to the model in case the model has been loaded
        with accelerate (i.e. with `device_map=xxx`).

        Args:
            device_map (`str` or `dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
                A map that specifies where each submodule should go. Set `device_map="auto"` to have Accelerate
                compute the most optimized `device_map` automatically.
            max_memory (`dict`, *optional*):
                A dictionary mapping device identifiers to maximum memory.
            offload_folder (`str` or `os.PathLike`, *optional*):
                If the `device_map` contains any value `"disk"`, the folder where we will offload weights.
            offload_index (`int`, *optional*):
                The `offload_index` argument to be passed to `accelerate.dispatch_model`.
        """
        dispatch_model_kwargs = {}
        # Safety checker for previous `accelerate` versions: `offload_index` was introduced later
        if "offload_index" in inspect.signature(dispatch_model).parameters:
            dispatch_model_kwargs["offload_index"] = offload_index

        no_split_module_classes = self._no_split_modules

        if device_map != "sequential":
            max_memory = get_balanced_memory(
                self,
                max_memory=max_memory,
                no_split_module_classes=no_split_module_classes,
                low_zero=(device_map == "balanced_low_0"),
            )
        if isinstance(device_map, str):
            device_map = infer_auto_device_map(
                self, max_memory=max_memory, no_split_module_classes=no_split_module_classes
            )
        dispatch_model(
            self,
            device_map=device_map,
            offload_dir=offload_folder,
            **dispatch_model_kwargs,
        )

    def delete_adapter(self, adapter_names: Union[list[str], str]) -> None:
        """
        Delete a PEFT adapter from the underlying model.

        Args:
            adapter_names (`Union[list[str], str]`):
                The name(s) of the adapter(s) to delete.
        """
        check_peft_version(min_version=MIN_PEFT_VERSION)
        min_version_delete_adapter = "0.18.0"

        if not self._hf_peft_config_loaded:
            raise ValueError("No adapter loaded. Please load an adapter first.")

        def old_delete_adapter(model, adapter_name, prefix=None):
            from peft.tuners.tuners_utils import BaseTunerLayer
            from peft.utils import ModulesToSaveWrapper

            has_modules_to_save = False
            for module in model.modules():
                if isinstance(module, ModulesToSaveWrapper):
                    has_modules_to_save = True
                    continue
                if isinstance(module, BaseTunerLayer):
                    if hasattr(module, "delete_adapter"):
                        module.delete_adapter(adapter_name)
                    else:
                        raise ValueError(
                            "The version of PEFT you are using is not compatible, please use a version that is "
                            "greater than 0.6.1"
                        )
            if has_modules_to_save:
                logger.warning(
                    "The deleted adapter contains modules_to_save, which could not be deleted. For this to work, "
                    f"PEFT version >= {min_version_delete_adapter} is required."
                )

        if version.parse(importlib.metadata.version("peft")) >= version.parse(min_version_delete_adapter):
            from peft.functional import delete_adapter
        else:
            delete_adapter = old_delete_adapter

        if isinstance(adapter_names, str):
            adapter_names = [adapter_names]

        # Check that all adapter names are present in the config
        missing_adapters = [name for name in adapter_names if name not in self.peft_config]
        if missing_adapters:
            raise ValueError(
                f"The following adapter(s) are not present and cannot be deleted: {', '.join(missing_adapters)}"
            )

        prefixes = [f"{self.peft_config[adapter_name].peft_type.value.lower()}_" for adapter_name in adapter_names]
        for adapter_name, prefix in zip(adapter_names, prefixes):
            delete_adapter(self, adapter_name=adapter_name, prefix=prefix)
            # For the transformers integration, pop the adapter from the config as well
            if getattr(self, "_hf_peft_config_loaded", False) and hasattr(self, "peft_config"):
                self.peft_config.pop(adapter_name, None)

        # In case all adapters are deleted, delete the config and reset the flag
        if len(self.peft_config) == 0:
            del self.peft_config
            self._hf_peft_config_loaded = False
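Both `load_adapter` and `delete_adapter` gate optional behavior on the installed PEFT version by comparing `importlib.metadata.version("peft")` against a minimum with `packaging.version`. A minimal, self-contained sketch of that gating pattern (the helper name `supports_low_cpu_mem_usage` is illustrative, not part of the module):

```python
from packaging import version

# Minimum PEFT version that accepts the `low_cpu_mem_usage` kwarg,
# mirroring the "0.13.0" floor used inside `load_adapter`.
_MIN_LCMU_VERSION = "0.13.0"


def supports_low_cpu_mem_usage(peft_version: str) -> bool:
    """Return True when this PEFT version can receive `low_cpu_mem_usage`."""
    # `version.parse` handles pre-releases and multi-digit components
    # correctly, unlike naive string comparison ("0.9" vs "0.13").
    return version.parse(peft_version) >= version.parse(_MIN_LCMU_VERSION)


print(supports_low_cpu_mem_usage("0.12.0"))  # False: older PEFT would reject the kwarg
print(supports_low_cpu_mem_usage("0.13.0"))  # True: minimum supported version
```

The same pattern, with a `"0.18.0"` floor, selects between `peft.functional.delete_adapter` and the legacy per-module fallback in `delete_adapter`.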