import base64
import io
import os
import warnings
from collections import defaultdict
from contextlib import contextmanager
from dataclasses import dataclass, field
from itertools import groupby
from pathlib import Path, PurePosixPath
from typing import TYPE_CHECKING, Any, BinaryIO, Dict, Iterable, Iterator, List, Literal, Optional, Tuple, Union

from tqdm.contrib.concurrent import thread_map

from . import constants
from .errors import EntryNotFoundError, HfHubHTTPError, XetAuthorizationError, XetRefreshTokenError
from .file_download import hf_hub_url
from .lfs import UploadInfo, lfs_upload, post_lfs_batch_info
from .utils import (
    FORBIDDEN_FOLDERS,
    XetTokenType,
    are_progress_bars_disabled,
    chunk_iterable,
    fetch_xet_connection_info_from_repo_info,
    get_session,
    hf_raise_for_status,
    logging,
    sha,
    tqdm_stream_file,
    validate_hf_hub_args,
)
from .utils import tqdm as hf_tqdm
from .utils._runtime import is_xet_available

if TYPE_CHECKING:
    from .hf_api import RepoFile


logger = logging.get_logger(__name__)

UploadMode = Literal["lfs", "regular"]

# Max is 1,000 per request on the Hub for HfApi.get_paths_info()
FETCH_LFS_BATCH_SIZE = 500
UPLOAD_BATCH_MAX_NUM_FILES = 256


@dataclass
class CommitOperationDelete:
    """
    Data structure holding necessary info to delete a file or a folder from a repository on the Hub.

    Args:
        path_in_repo (`str`):
            Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"` for a file
            or `"checkpoints/1fec34a/"` for a folder.
        is_folder (`bool` or `Literal["auto"]`, *optional*):
            Whether the Delete Operation applies to a folder or not. If "auto", the path type (file or
            folder) is guessed automatically by looking if path ends with a "/" (folder) or not (file).
            To explicitly set the path type, you can set `is_folder=True` or `is_folder=False`.
    """

    path_in_repo: str
    is_folder: Union[bool, Literal["auto"]] = "auto"

    def __post_init__(self):
        self.path_in_repo = _validate_path_in_repo(self.path_in_repo)

        if self.is_folder == "auto":
            self.is_folder = self.path_in_repo.endswith("/")
        if not isinstance(self.is_folder, bool):
            raise ValueError(
                f"Wrong value for `is_folder`. Must be one of [`True`, `False`, `'auto'`]. Got '{self.is_folder}'."
            )


@dataclass
class CommitOperationCopy:
    """
    Data structure holding necessary info to copy a file in a repository on the Hub.

    Limitations:
      - Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it.
      - Cross-repository copies are not supported.

    Note: you can combine a [`CommitOperationCopy`] and a [`CommitOperationDelete`] to rename an LFS file on the Hub.

    Args:
        src_path_in_repo (`str`):
            Relative filepath in the repo of the file to be copied, e.g. `"checkpoints/1fec34a/weights.bin"`.
        path_in_repo (`str`):
            Relative filepath in the repo where to copy the file, e.g. `"checkpoints/1fec34a/weights_copy.bin"`.
        src_revision (`str`, *optional*):
            The git revision of the file to be copied. Can be any valid git revision.
            Default to the target commit revision.
    """

    src_path_in_repo: str
    path_in_repo: str
    src_revision: Optional[str] = None
    # Set to the OID of the source file if it has already been uploaded.
    # Useful to determine if a commit will be empty or not.
    _src_oid: Optional[str] = None
    # Set to the OID of the destination file if it already exists.
    # Useful to determine if a commit will be empty or not.
    _dest_oid: Optional[str] = None

    def __post_init__(self):
        self.src_path_in_repo = _validate_path_in_repo(self.src_path_in_repo)
        self.path_in_repo = _validate_path_in_repo(self.path_in_repo)


@dataclass
class CommitOperationAdd:
    """
    Data structure holding necessary info to upload a file to a repository on the Hub.

    Args:
        path_in_repo (`str`):
            Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
        path_or_fileobj (`str`, `Path`, `bytes`, or `BinaryIO`):
            Either:
            - a path to a local file (as `str` or `pathlib.Path`) to upload
            - a buffer of bytes (`bytes`) holding the content of the file to upload
            - a "file object" (subclass of `io.BufferedIOBase`), typically obtained
                with `open(path, "rb")`. It must support `seek()` and `tell()` methods.

    Raises:
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If `path_or_fileobj` is not one of `str`, `Path`, `bytes` or `io.BufferedIOBase`.
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If `path_or_fileobj` is a `str` or `Path` but not a path to an existing file.
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If `path_or_fileobj` is a `io.BufferedIOBase` but it doesn't support both
            `seek()` and `tell()`.
    """
    path_in_repo: str
    path_or_fileobj: Union[str, Path, bytes, BinaryIO]
    upload_info: UploadInfo = field(init=False, repr=False)

    # Internal attributes

    # Set to "lfs" or "regular" once known
    _upload_mode: Optional[UploadMode] = field(init=False, repr=False, default=None)

    # Set to True if .gitignore rules prevent the file from being uploaded (server-side check)
    _should_ignore: Optional[bool] = field(init=False, repr=False, default=None)

    # Set to the remote OID of the file if it has already been uploaded.
    # Useful to determine if a commit will be empty or not.
    _remote_oid: Optional[str] = field(init=False, repr=False, default=None)

    # Set to True once the file has been uploaded as LFS
    _is_uploaded: bool = field(init=False, repr=False, default=False)

    # Set to True once the file has been committed
    _is_committed: bool = field(init=False, repr=False, default=False)

    def __post_init__(self) -> None:
        """Validates `path_or_fileobj` and compute `upload_info`."""
        self.path_in_repo = _validate_path_in_repo(self.path_in_repo)

        # Validate `path_or_fileobj` value
        if isinstance(self.path_or_fileobj, Path):
            self.path_or_fileobj = str(self.path_or_fileobj)
        if isinstance(self.path_or_fileobj, str):
            path_or_fileobj = os.path.normpath(os.path.expanduser(self.path_or_fileobj))
            if not os.path.isfile(path_or_fileobj):
                raise ValueError(f"Provided path: '{path_or_fileobj}' is not a file on the local file system")
        elif not isinstance(self.path_or_fileobj, (io.BufferedIOBase, bytes)):
            raise ValueError(
                "path_or_fileobj must be either an instance of str, bytes or"
                " io.BufferedIOBase. If you passed a file-like object, make sure it is"
                " in binary mode."
            )
        if isinstance(self.path_or_fileobj, io.BufferedIOBase):
            try:
                self.path_or_fileobj.tell()
                self.path_or_fileobj.seek(0, os.SEEK_CUR)
            except (OSError, AttributeError) as exc:
                raise ValueError(
                    "path_or_fileobj is a file-like object but does not implement seek() and tell()"
                ) from exc

        # Compute "upload_info" attribute
        if isinstance(self.path_or_fileobj, str):
            self.upload_info = UploadInfo.from_path(self.path_or_fileobj)
        elif isinstance(self.path_or_fileobj, bytes):
            self.upload_info = UploadInfo.from_bytes(self.path_or_fileobj)
        else:
            self.upload_info = UploadInfo.from_fileobj(self.path_or_fileobj)

    @contextmanager
    def as_file(self, with_tqdm: bool = False) -> Iterator[BinaryIO]:
        """
        A context manager that yields a file-like object allowing to read the underlying
        data behind `path_or_fileobj`.

        Args:
            with_tqdm (`bool`, *optional*, defaults to `False`):
                If True, iterating over the file object will display a progress bar. Only
                works if the file-like object is a path to a file. Pure bytes and buffers
                are not supported.

        Example:

        ```python
        >>> operation = CommitOperationAdd(
        ...        path_in_repo="remote/dir/weights.h5",
        ...        path_or_fileobj="./local/weights.h5",
        ... )
        CommitOperationAdd(path_in_repo='remote/dir/weights.h5', path_or_fileobj='./local/weights.h5')

        >>> with operation.as_file() as file:
        ...     content = file.read()

        >>> with operation.as_file(with_tqdm=True) as file:
        ...     while True:
        ...         data = file.read(1024)
        ...         if not data:
        ...              break
        config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
        >>> with operation.as_file(with_tqdm=True) as file:
        ...     requests.put(..., data=file)
        config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
        ```
        """
        if isinstance(self.path_or_fileobj, str) or isinstance(self.path_or_fileobj, Path):
            if with_tqdm:
                with tqdm_stream_file(self.path_or_fileobj) as file:
                    yield file
            else:
                with open(self.path_or_fileobj, "rb") as file:
                    yield file
        elif isinstance(self.path_or_fileobj, bytes):
            yield io.BytesIO(self.path_or_fileobj)
        elif isinstance(self.path_or_fileobj, io.BufferedIOBase):
            prev_pos = self.path_or_fileobj.tell()
            yield self.path_or_fileobj
            self.path_or_fileobj.seek(prev_pos, io.SEEK_SET)

    def b64content(self) -> bytes:
        """
        The base64-encoded content of `path_or_fileobj`

        Returns: `bytes`
        """
        with self.as_file() as file:
            return base64.b64encode(file.read())

    @property
    def _local_oid(self) -> Optional[str]:
        """Return the OID of the local file.

        This OID is then compared to `self._remote_oid` to check if the file has changed compared to the
        remote one. If the file did not change, we won't upload it again to prevent empty commits.

        For LFS files, the OID corresponds to the SHA256 of the file content (used as an LFS ref).
        For regular files, the OID corresponds to the SHA1 of the file content.
        Note: this is slightly different to git OID computation since the oid of an LFS file is usually
        the git-SHA1 of the pointer file content (not the actual file content). However, using the SHA256
        is enough to detect changes and more convenient client-side.
        """
        if self._upload_mode is None:
            return None
        elif self._upload_mode == "lfs":
            return self.upload_info.sha256.hex()
        else:
            # Regular file => compute sha1
            with self.as_file() as file:
                return sha.git_hash(file.read())


def _validate_path_in_repo(path_in_repo: str) -> str:
    # Validate `path_in_repo` value to prevent a server-side issue
    if path_in_repo.startswith("/"):
        path_in_repo = path_in_repo[1:]
    if path_in_repo == "." or path_in_repo == ".." or path_in_repo.startswith("../"):
        raise ValueError(f"Invalid `path_in_repo` in CommitOperation: '{path_in_repo}'")
    if path_in_repo.startswith("./"):
        path_in_repo = path_in_repo[2:]
    for forbidden in FORBIDDEN_FOLDERS:
        if any(part == forbidden for part in path_in_repo.split("/")):
            raise ValueError(
                f"Invalid `path_in_repo` in CommitOperation: cannot update files under a '{forbidden}/'"
                f" folder (path: '{path_in_repo}')."
            )
    return path_in_repo
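The `_local_oid` docstring above distinguishes two hash flavors: a git blob SHA-1 for regular files and a plain SHA-256 for LFS files. The sketch below illustrates the difference in a self-contained way; the helper names (`git_blob_sha1`, `lfs_sha256`) are illustrative, not imports from the library.

```python
import hashlib


def git_blob_sha1(data: bytes) -> str:
    # git hashes a blob as sha1(b"blob <size>\0" + content), which is why the
    # git OID of an LFS *pointer file* differs from the hash of the real content.
    h = hashlib.sha1()
    h.update(b"blob %d\0" % len(data))
    h.update(data)
    return h.hexdigest()


def lfs_sha256(data: bytes) -> str:
    # LFS refs use the plain SHA-256 of the actual file content.
    return hashlib.sha256(data).hexdigest()
```

For example, `git_blob_sha1(b"")` yields git's well-known empty-blob OID `e69de29bb2d1d6434b8b29ae775ad8c2e48c5391`, while `lfs_sha256` of the same content is a completely different digest.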
startswithr=r anysplit)r4rs @r@r9r9"ss##AB' sld2l6M6Me6TG ~UVWXXt$#AB' & E\-?-?-DE EZ[dZef!N#'  rB operationsctt}|D]}|j}t|tr^||dkDrt j d|d||xxdz cc<t|jD]}|t|xxdz cc<t|ts|tt|dkDs|jrt j d|dt j d|dy)a Warn user when a list of operations is expected to overwrite itself in a single commit. Rules: - If a filepath is updated by multiple `CommitOperationAdd` operations, a warning message is triggered. - If a filepath is updated at least once by a `CommitOperationAdd` and then deleted by a `CommitOperationDelete`, a warning is triggered. - If a `CommitOperationDelete` deletes a filepath that is then updated by a `CommitOperationAdd`, no warning is triggered. This is usually useless (no need to delete before upload) but can happen if a user deletes an entire folder and then add new files to it. rzBAbout to update multiple times the same file in the same commit: 'z9'. This can cause undesired inconsistencies in your repo.rz_About to delete a folder containing files that have just been updated within the same commit: 'zLAbout to delete a file that have just been updated within the same commit: 'N) rintr4r;rSwarningswarnr parentsrGr3r6)rnb_additions_per_path operationr4parents r@_warn_on_overwriting_operationsr6s-8,<  -- i!3 4$\2Q6 %'"" ", /1 4 /' 5== 8&c&k2a72 8 i!6 7$S|)D%EFJ&&MM==INKII MM**6899/rB)endpoint num_threadsrevision create_pr additions repo_typerepo_idheadersrrrrc fg}g} i} t|tD]H} | D cgc]} | } } ddg}td| D}tr)|s|j dnt j dt| D cgc]} | jc} |||||d|\}}}|rad j|Dcgc]8}d |jd d |jd ijd:c}}td||dk(rd|vr|j|  | j|| D])} | | | jjj<+Kt| dkDrt!| | |||t|dkDrt#|||||||yycc} wcc} wcc}w)zK Negotiates per-file transfer (LFS vs Xet) and uploads in batches. ) chunk_sizebasic multipartc3dK|](}t|jtj*ywrQ)r;rTrerf)rops r@rz _upload_files..{s$"jY[:b.@.@"BSBS#T"js.0xetzcUploading files as a binary IO buffer is not supported by Xet Storage. 
        actions_chunk, errors_chunk, chosen_transfer = post_lfs_batch_info(
            upload_infos=[op.upload_info for op in chunk_list],
            repo_id=repo_id,
            repo_type=repo_type,
            revision=revision,
            endpoint=endpoint,
            headers=headers,
            transfers=transfers,
        )
        if errors_chunk:
            message = "\n".join(
                f"Encountered error for file with OID {err.get('oid')}: `{err.get('error', {}).get('message')}`"
                for err in errors_chunk
            )
            raise ValueError(f"LFS batch API returned errors:\n{message}")

        if chosen_transfer == "xet" and "xet" in transfers:
            xet_additions.extend(chunk_list)
        else:
            lfs_actions.extend(actions_chunk)
            for op in chunk_list:
                lfs_oid2addop[op.upload_info.sha256.hex()] = op

    if len(lfs_actions) > 0:
        _upload_lfs_files(
            actions=lfs_actions,
            oid2addop=lfs_oid2addop,
            headers=headers,
            endpoint=endpoint,
            num_threads=num_threads,
        )
    if len(xet_additions) > 0:
        _upload_xet_files(
            additions=xet_additions,
            repo_type=repo_type,
            repo_id=repo_id,
            headers=headers,
            endpoint=endpoint,
            revision=revision,
            create_pr=create_pr,
        )


def _upload_lfs_files(
    *,
    actions: List[Dict],
    oid2addop: Dict[str, CommitOperationAdd],
    headers: Dict[str, str],
    endpoint: Optional[str] = None,
    num_threads: int = 5,
):
    """
    Uploads the content of `additions` to the Hub using the large file storage protocol.

    Relevant external documentation:
        - LFS Batch API: https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md

    Args:
        actions (`List[Dict]`):
            LFS batch actions returned by the server.
        oid2addop (`Dict[str, CommitOperationAdd]`):
            A dictionary mapping the OID of the file to the corresponding `CommitOperationAdd` object.
        headers (`Dict[str, str]`):
            Headers to use for the request, including authorization headers and user agent.
        endpoint (`str`, *optional*):
            The endpoint to use for the request. Defaults to `constants.ENDPOINT`.
        num_threads (`int`, *optional*):
            The number of concurrent threads to use when uploading. Defaults to 5.

    Raises:
        [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
            If an upload failed for any reason
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If the server returns malformed responses
        [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
            If the LFS batch endpoint returned an HTTP error.
    """
    # Filter out files that are already present upstream
    filtered_actions = []
    for action in actions:
        if action.get("actions") is None:
            logger.debug(
                f"Content of file {oid2addop[action['oid']].path_in_repo} is already"
                " present upstream - skipping upload."
            )
        else:
            filtered_actions.append(action)

    def _wrapped_lfs_upload(batch_action) -> None:
        try:
            operation = oid2addop[batch_action["oid"]]
            lfs_upload(operation=operation, lfs_batch_action=batch_action, headers=headers, endpoint=endpoint)
        except Exception as exc:
            raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc

    if constants.HF_HUB_ENABLE_HF_TRANSFER:
        logger.debug(f"Uploading {len(filtered_actions)} LFS files to the Hub using `hf_transfer`.")
        for action in hf_tqdm(filtered_actions, name="huggingface_hub.lfs_upload"):
            _wrapped_lfs_upload(action)
    elif len(filtered_actions) == 1:
        logger.debug("Uploading 1 LFS file to the Hub")
        _wrapped_lfs_upload(filtered_actions[0])
    else:
        logger.debug(
            f"Uploading {len(filtered_actions)} LFS files to the Hub using up to"
            f" {num_threads} threads concurrently"
        )
        thread_map(
            _wrapped_lfs_upload,
            filtered_actions,
            desc=f"Upload {len(filtered_actions)} LFS files",
            max_workers=num_threads,
            tqdm_class=hf_tqdm,
        )


def _upload_xet_files(
    *,
    additions: List[CommitOperationAdd],
    repo_type: str,
    repo_id: str,
    headers: Dict[str, str],
    endpoint: Optional[str] = None,
    revision: Optional[str] = None,
    create_pr: Optional[bool] = None,
):
    """
    Uploads the content of `additions` to the Hub using the xet storage protocol.
    This chunks the files and deduplicates the chunks before uploading them to xetcas storage.

    Args:
        additions (`List` of `CommitOperationAdd`):
            The files to be uploaded.
        repo_type (`str`):
            Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
        repo_id (`str`):
            A namespace (user or an organization) and a repo name separated by a `/`.
        headers (`Dict[str, str]`):
            Headers to use for the request, including authorization headers and user agent.
        endpoint (`str`, *optional*):
            The endpoint to use for the xetcas service. Defaults to `constants.ENDPOINT`.
        revision (`str`, *optional*):
            The git revision to upload to.
        create_pr (`bool`, *optional*):
            Whether or not to create a Pull Request with that commit.

    Raises:
        [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
            If an upload failed for any reason.
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If the server returns malformed responses or if the user is unauthorized to upload to xet storage.
        [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
            If the LFS batch endpoint returned an HTTP error.

    **How it works:**
        The file download system uses Xet storage, which is a content-addressable storage system that breaks
        files into chunks for efficient storage and transfer.

        `hf_xet.upload_files` manages uploading files by:

        - Taking a list of file paths to upload
        - Breaking files into smaller chunks for efficient storage
        - Avoiding duplicate storage by recognizing identical chunks across files
        - Connecting to a storage server (CAS server) that manages these chunks

        The upload process works like this:

        1. Create a local folder at ~/.cache/huggingface/xet/chunk-cache to store file chunks for reuse.
        2. Process files in parallel (up to 8 files at once):
            2.1. Read the file content.
            2.2. Split the file content into smaller chunks based on content patterns: each chunk gets a unique ID
                 based on what's in it.
            2.3. For each chunk:
                - Check if it already exists in storage.
                - Skip uploading chunks that already exist.
            2.4. Group chunks into larger blocks for efficient transfer.
            2.5. Upload these blocks to the storage server.
            2.6. Create and upload information about how the file is structured.
        3. Return reference files that contain information about the uploaded files, which can be used later to
           download them.
    """
    if len(additions) == 0:
        return

    from hf_xet import upload_bytes, upload_files

    from .utils._xet_progress_reporting import XetProgressReporter

    try:
        xet_connection_info = fetch_xet_connection_info_from_repo_info(
            token_type=XetTokenType.WRITE,
            repo_id=repo_id,
            repo_type=repo_type,
            revision=revision,
            headers=headers,
            endpoint=endpoint,
            params={"create_pr": "1"} if create_pr else None,
        )
    except HfHubHTTPError as e:
        if e.response.status_code == 401:
            raise XetAuthorizationError(
                f"You are unauthorized to upload to xet storage for {repo_type}/{repo_id}. "
                f"Please check that you have configured your access token with write access to the repo."
            ) from e
        raise

    xet_endpoint = xet_connection_info.endpoint
    access_token_info = (xet_connection_info.access_token, xet_connection_info.expiration_unix_epoch)

    def token_refresher() -> Tuple[str, int]:
        new_xet_connection = fetch_xet_connection_info_from_repo_info(
            token_type=XetTokenType.WRITE,
            repo_id=repo_id,
            repo_type=repo_type,
            revision=revision,
            headers=headers,
            endpoint=endpoint,
            params={"create_pr": "1"} if create_pr else None,
        )
        if new_xet_connection is None:
            raise XetRefreshTokenError("Failed to refresh xet token")
        return new_xet_connection.access_token, new_xet_connection.expiration_unix_epoch

    if not are_progress_bars_disabled():
        progress = XetProgressReporter()
        progress_callback = progress.update_progress
    else:
        progress, progress_callback = None, None

    try:
        all_bytes_ops = [op for op in additions if isinstance(op.path_or_fileobj, bytes)]
        all_paths_ops = [op for op in additions if isinstance(op.path_or_fileobj, (str, Path))]

        if len(all_bytes_ops) > 0:
            all_bytes = [bytes(op.path_or_fileobj) for op in all_bytes_ops]
            upload_bytes(all_bytes, xet_endpoint, access_token_info, token_refresher, progress_callback)
        if len(all_paths_ops) > 0:
            all_paths = [str(op.path_or_fileobj) for op in all_paths_ops]
            upload_files(all_paths, xet_endpoint, access_token_info, token_refresher, progress_callback)
    finally:
        if progress is not None:
            progress.close(False)


def _validate_preupload_info(preupload_info: dict):
    files = preupload_info.get("files")
    if not isinstance(files, list):
        raise ValueError("preupload_info is improperly formatted")
    for file_info in files:
        if not (
            isinstance(file_info, dict)
            and isinstance(file_info.get("path"), str)
            and isinstance(file_info.get("uploadMode"), str)
            and (file_info["uploadMode"] in ("lfs", "regular"))
        ):
            raise ValueError(f"preupload_info is improperly formatted: {file_info}")
    return preupload_info


@validate_hf_hub_args
def _fetch_upload_modes(
    additions: Iterable[CommitOperationAdd],
    repo_type: str,
    repo_id: str,
    headers: Dict[str, str],
    revision: str,
    endpoint: Optional[str] = None,
    create_pr: bool = False,
    gitignore_content: Optional[str] = None,
) -> None:
    """
    Requests the Hub "preupload" endpoint to determine whether each input file should be uploaded as a
    regular git blob, as a git LFS blob, or as a XET file. Input `additions` are mutated in-place with the
    upload mode.

    Args:
        additions (`Iterable` of :class:`CommitOperationAdd`):
            Iterable of :class:`CommitOperationAdd` describing the files to upload to the Hub.
        repo_type (`str`):
            Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
        repo_id (`str`):
            A namespace (user or an organization) and a repo name separated by a `/`.
        headers (`Dict[str, str]`):
            Headers to use for the request, including authorization headers and user agent.
        revision (`str`):
            The git revision to upload the files to. Can be any valid git revision.
        gitignore_content (`str`, *optional*):
            The content of the `.gitignore` file to know which files should be ignored. The order of priority
            is to first check if `gitignore_content` is passed, then check if the `.gitignore` file is present
            in the list of files to commit and finally default to the `.gitignore` file already hosted on the
            Hub (if any).

    Raises:
        [`~utils.HfHubHTTPError`]
            If the Hub API returned an error.
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If the Hub API response is improperly formatted.
    """
    endpoint = endpoint if endpoint is not None else constants.ENDPOINT

    # Fetch upload mode (LFS or regular) chunk by chunk.
    upload_modes: Dict[str, UploadMode] = {}
    should_ignore_info: Dict[str, bool] = {}
    oid_info: Dict[str, Optional[str]] = {}

    for chunk in chunk_iterable(additions, 256):
        payload: Dict = {
            "files": [
                {
                    "path": op.path_in_repo,
                    "sample": base64.b64encode(op.upload_info.sample).decode("ascii"),
                    "size": op.upload_info.size,
                }
                for op in chunk
            ]
        }
        if gitignore_content is not None:
            payload["gitIgnore"] = gitignore_content

        resp = get_session().post(
            f"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}",
            json=payload,
            headers=headers,
            params={"create_pr": "1"} if create_pr else None,
        )
        hf_raise_for_status(resp)
        preupload_info = _validate_preupload_info(resp.json())
        upload_modes.update(**{file["path"]: file["uploadMode"] for file in preupload_info["files"]})
        should_ignore_info.update(**{file["path"]: file["shouldIgnore"] for file in preupload_info["files"]})
        oid_info.update(**{file["path"]: file.get("oid") for file in preupload_info["files"]})

    # Set upload mode for each addition operation
    for addition in additions:
        addition._upload_mode = upload_modes[addition.path_in_repo]
        addition._should_ignore = should_ignore_info[addition.path_in_repo]
        addition._remote_oid = oid_info[addition.path_in_repo]

    # Empty files cannot be uploaded as LFS => upload them as regular files instead.
    for addition in additions:
        if addition.upload_info.size == 0:
            addition._upload_mode = "regular"


@validate_hf_hub_args
def _fetch_files_to_copy(
    copies: Iterable[CommitOperationCopy],
    repo_type: str,
    repo_id: str,
    headers: Dict[str, str],
    revision: str,
    endpoint: Optional[str] = None,
) -> Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]]:
    """
    Fetch information about the files to copy.

    For LFS files, we only need their metadata (file size and sha256) while for regular files
    we need to download the raw content from the Hub.

    Args:
        copies (`Iterable` of :class:`CommitOperationCopy`):
            Iterable of :class:`CommitOperationCopy` describing the files to copy on the Hub.
        repo_type (`str`):
            Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
        repo_id (`str`):
            A namespace (user or an organization) and a repo name separated by a `/`.
        headers (`Dict[str, str]`):
            Headers to use for the request, including authorization headers and user agent.
        revision (`str`):
            The git revision to upload the files to. Can be any valid git revision.

    Returns: `Dict[Tuple[str, Optional[str]], Union[RepoFile, bytes]]`
        Key is the file path and revision of the file to copy.
        Value is the raw content as bytes (for regular files) or the file information as a RepoFile
        (for LFS files).

    Raises:
        [`~utils.HfHubHTTPError`]
            If the Hub API returned an error.
        [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
            If the Hub API response is improperly formatted.
    """
    from .hf_api import HfApi, RepoFolder

    hf_api = HfApi(endpoint=endpoint, headers=headers)
    files_to_copy: Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]] = {}
    # Store the (path, revision) -> oid mapping to populate `_src_oid` and `_dest_oid`
    oid_info: Dict[Tuple[str, Optional[str]], Optional[str]] = {}

    # 1. Fetch OIDs of the destination paths, in batches.
    dest_paths = [op.path_in_repo for op in copies]
    for offset in range(0, len(dest_paths), FETCH_LFS_BATCH_SIZE):
        dest_repo_files = hf_api.get_paths_info(
            repo_id=repo_id,
            paths=dest_paths[offset : offset + FETCH_LFS_BATCH_SIZE],
            revision=revision,
            repo_type=repo_type,
        )
        for file in dest_repo_files:
            if not isinstance(file, RepoFolder):
                oid_info[(file.path, revision)] = file.blob_id

    # 2. Group by source revision and fetch source file info, in batches.
    for src_revision, operations in groupby(copies, key=lambda op: op.src_revision):
        operations = list(operations)  # type: ignore
        src_paths = [op.src_path_in_repo for op in operations]
        for offset in range(0, len(src_paths), FETCH_LFS_BATCH_SIZE):
            src_repo_files = hf_api.get_paths_info(
                repo_id=repo_id,
                paths=src_paths[offset : offset + FETCH_LFS_BATCH_SIZE],
                revision=src_revision or revision,
                repo_type=repo_type,
            )

            for src_repo_file in src_repo_files:
                if isinstance(src_repo_file, RepoFolder):
                    raise NotImplementedError("Copying a folder is not implemented.")
                oid_info[(src_repo_file.path, src_revision)] = src_repo_file.blob_id
                # If it's an LFS file, store the RepoFile object (metadata is enough).
                # Otherwise, download the raw content from the Hub.
                if src_repo_file.lfs:
                    files_to_copy[(src_repo_file.path, src_revision)] = src_repo_file
                else:
                    url = hf_hub_url(
                        endpoint=endpoint,
                        repo_type=repo_type,
                        repo_id=repo_id,
                        revision=src_revision or revision,
                        filename=src_repo_file.path,
                    )
                    response = get_session().get(url, headers=headers)
                    hf_raise_for_status(response)
                    files_to_copy[(src_repo_file.path, src_revision)] = response.content

        # 3. Ensure all operations found a source file.
        for operation in operations:
            if (operation.src_path_in_repo, src_revision) not in files_to_copy:
                raise EntryNotFoundError(
                    f"Cannot copy {operation.src_path_in_repo} at revision"
                    f" {src_revision or revision}: file is missing on repo."
                )
            operation._src_oid = oid_info.get((operation.src_path_in_repo, operation.src_revision))
            operation._dest_oid = oid_info.get((operation.path_in_repo, revision))
    return files_to_copy


def _prepare_commit_payload(
    operations: Iterable[CommitOperation],
    files_to_copy: Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]],
    commit_message: str,
    commit_description: Optional[str] = None,
    parent_commit: Optional[str] = None,
) -> Iterable[Dict[str, Any]]:
    """
    Builds the payload to POST to the `/commit` API of the Hub.

    Payload is returned as an iterator so that it can be streamed as a ndjson in the
    POST request.
    For more information, see:
        - https://github.com/huggingface/huggingface_hub/issues/1085#issuecomment-1265208073
        - http://ndjson.org/
    """
    commit_description = commit_description if commit_description is not None else ""

    # 1. Send a header item with the commit metadata
    header_value = {"summary": commit_message, "description": commit_description}
    if parent_commit is not None:
        header_value["parentCommit"] = parent_commit
    yield {"key": "header", "value": header_value}

    nb_ignored_files = 0

    # 2. Send operations, one per line
    for operation in operations:
        # Skip ignored files
        if isinstance(operation, CommitOperationAdd) and operation._should_ignore:
            logger.debug(f"Skipping file '{operation.path_in_repo}' in commit (ignored by gitignore file).")
            nb_ignored_files += 1
            continue

        # 2.a. Case adding a regular file
        if isinstance(operation, CommitOperationAdd) and operation._upload_mode == "regular":
            yield {
                "key": "file",
                "value": {
                    "content": operation.b64content().decode(),
                    "path": operation.path_in_repo,
                    "encoding": "base64",
                },
            }
        # 2.b. Case adding an LFS file
        elif isinstance(operation, CommitOperationAdd) and operation._upload_mode == "lfs":
            yield {
                "key": "lfsFile",
                "value": {
                    "path": operation.path_in_repo,
                    "algo": "sha256",
                    "oid": operation.upload_info.sha256.hex(),
                    "size": operation.upload_info.size,
                },
            }
        # 2.c. Case deleting a file or folder
        elif isinstance(operation, CommitOperationDelete):
            yield {
                "key": "deletedFolder" if operation.is_folder else "deletedFile",
                "value": {"path": operation.path_in_repo},
            }
        # 2.d. Case copying a file or folder
        elif isinstance(operation, CommitOperationCopy):
            file_to_copy = files_to_copy[(operation.src_path_in_repo, operation.src_revision)]
            if isinstance(file_to_copy, bytes):
                yield {
                    "key": "file",
                    "value": {
                        "content": base64.b64encode(file_to_copy).decode(),
                        "path": operation.path_in_repo,
                        "encoding": "base64",
                    },
                }
            elif file_to_copy.lfs:
                yield {
                    "key": "lfsFile",
                    "value": {
                        "path": operation.path_in_repo,
                        "algo": "sha256",
                        "oid": file_to_copy.lfs.sha256,
                        "size": file_to_copy.lfs.size,
                    },
                }
            else:
                raise ValueError(
                    "Malformed files_to_copy (should be raw file content as bytes or RepoFile objects"
                    " with LFS info)."
                )
        # 2.e. Never expected to happen
        else:
            raise ValueError(
                f"Unknown operation to commit. Operation: {operation}. Upload mode:"
                f" {getattr(operation, '_upload_mode', None)}"
            )

    if nb_ignored_files > 0:
        logger.info(f"Skipped {nb_ignored_files} file(s) in commit (ignored by gitignore file).")
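To make the streamed payload format above concrete, here is a small self-contained sketch of the ndjson shape that `_prepare_commit_payload` produces: one JSON document per line, starting with a `"header"` item followed by one item per operation. The helper name `build_payload_lines` is hypothetical (illustration only, no network, no `huggingface_hub` import), and it only covers the regular-file case.

```python
import base64
import json


def build_payload_lines(commit_message: str, files: dict) -> str:
    """Serialize a commit header plus one "file" item per regular file as ndjson."""
    items = [{"key": "header", "value": {"summary": commit_message, "description": ""}}]
    for path, content in files.items():
        items.append(
            {
                "key": "file",
                "value": {
                    # Regular files are sent inline, base64-encoded
                    "content": base64.b64encode(content).decode(),
                    "path": path,
                    "encoding": "base64",
                },
            }
        )
    # ndjson: one JSON document per line
    return "\n".join(json.dumps(item) for item in items)


ndjson = build_payload_lines("Add config", {"config.json": b'{"a": 1}'})
```

Streaming the payload this way means the client never has to materialize the whole commit in memory: each line is generated (and base64-encoded) lazily as the request body is sent.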