"""Module containing a database to deal with packs"""
from gitdb.db.base import (
    FileDBBase,
    ObjectDBR,
    CachingDB
)

from gitdb.util import LazyMixin

from gitdb.exc import (
    BadObject,
    UnsupportedOperation,
    AmbiguousObjectName
)

from gitdb.pack import PackEntity

from functools import reduce

import os
import glob

__all__ = ('PackedDB', )


class PackedDB(FileDBBase, ObjectDBR, CachingDB, LazyMixin):

    """A database operating on a set of object packs"""

    # sort the priority list of entities every N queries
    _sort_interval = 500

    def __init__(self, root_path):
        super().__init__(root_path)
        # list of lists with three items:
        # * hits - number of times the pack was hit with a request
        # * entity - Pack entity instance
        # * sha_to_index - PackIndexFile.sha_to_index method for direct cache query
        # self._entities = list()       # lazily loaded list
        self._hit_count = 0             # amount of hits
        self._st_mtime = 0              # last modification data of our root path

    def _set_cache_(self, attr):
        if attr == '_entities':
            self._entities = list()
            self.update_cache(force=True)
        # END handle entities initialization

    def _sort_entities(self):
        self._entities.sort(key=lambda l: l[0], reverse=True)

    def _pack_info(self, sha):
        """:return: tuple(entity, index) for an item at the given sha
        :param sha: 20 or 40 byte sha
        :raise BadObject:
        **Note:** This method is not thread-safe, but may be hit in multi-threaded
            operation. The worst thing that can happen though is a counter that
            was not incremented, or the list being in wrong order. So we save
            the time for locking here, lets see how that goes"""
        # presort ?
        if self._hit_count % self._sort_interval == 0:
            self._sort_entities()
        # END update sorting

        for item in self._entities:
            index = item[2](sha)
            if index is not None:
                item[0] += 1            # one hit for you
                self._hit_count += 1    # general hit count
                return (item[1], index)
            # END index found in pack
        # END for each item

        raise BadObject(sha)

    #{ Object DB Read

    def has_object(self, sha):
        try:
            self._pack_info(sha)
            return True
        except BadObject:
            return False
        # END exception handling

    def info(self, sha):
        entity, index = self._pack_info(sha)
        return entity.info_at_index(index)

    def stream(self, sha):
        entity, index = self._pack_info(sha)
        return entity.stream_at_index(index)

    def sha_iter(self):
        for entity in self.entities():
            index = entity.index()
            sha_by_index = index.sha
            for index in range(index.size()):
                yield sha_by_index(index)
            # END for each index
        # END for each entity

    def size(self):
        sizes = [item[1].index().size() for item in self._entities]
        return reduce(lambda x, y: x + y, sizes, 0)

    #} END object db read

    #{ object db write

    def store(self, istream):
        """Storing individual objects is not feasible as a pack is designed to
        hold multiple objects. Writing or rewriting packs for single objects is
        inefficient"""
        raise UnsupportedOperation()

    #} END object db write

    #{ Interface

    def update_cache(self, force=False):
        """
        Update our cache with the actually existing packs on disk. Add new ones,
        and remove deleted ones. We keep the unchanged ones

        :param force: If True, the cache will be updated even though the directory
            does not appear to have changed according to its modification timestamp.
        :return: True if the packs have been updated so there is new information,
            False if there was no change to the pack database"""
        stat = os.stat(self.root_path())
        if not force and stat.st_mtime <= self._st_mtime:
            return False
        # END abort early on no change
        self._st_mtime = stat.st_mtime

        # packs are supposed to be prefixed with pack- by git-convention
        # get all pack files, figure out what changed
        pack_files = set(glob.glob(os.path.join(self.root_path(), "pack-*.pack")))
        our_pack_files = {item[1].pack().path() for item in self._entities}

        # new packs
        for pack_file in (pack_files - our_pack_files):
            entity = PackEntity(pack_file)
            self._entities.append([entity.pack().size(), entity, entity.index().sha_to_index])
        # END for each new packfile

        # removed packs
        for pack_file in (our_pack_files - pack_files):
            del_index = -1
            for i, item in enumerate(self._entities):
                if item[1].pack().path() == pack_file:
                    del_index = i
                    break
                # END found index
            # END for each entity
            assert del_index != -1
            del(self._entities[del_index])
        # END for each removed pack

        # reinitialize prioritizing
        self._sort_entities()
        return True

    def entities(self):
        """:return: list of pack entities operated upon by this database"""
        return [item[1] for item in self._entities]

    def partial_to_complete_sha(self, partial_binsha, canonical_length):
        """:return: 20 byte sha as inferred by the given partial binary sha
        :param partial_binsha: binary sha with less than 20 bytes
        :param canonical_length: length of the corresponding canonical representation.
            It is required as binary sha's cannot display whether the original hex sha
            had an odd or even number of characters
        :raise AmbiguousObjectName:
        :raise BadObject: """
        candidate = None
        for item in self._entities:
            item_index = item[1].index().partial_sha_to_index(partial_binsha, canonical_length)
            if item_index is not None:
                sha = item[1].index().sha(item_index)
                if candidate and candidate != sha:
                    raise AmbiguousObjectName(partial_binsha)
                candidate = sha
            # END handle full sha could be found
        # END for each entity

        if candidate:
            return candidate

        # still not found ?
        raise BadObject(partial_binsha)

    #} END interface
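The lookup strategy above keeps `[hits, entity, sha_to_index]` triples and resorts them by hit count every `_sort_interval` queries, so frequently-hit packs are probed first. A minimal, stdlib-only sketch of that priority scheme (class and variable names are hypothetical, not part of gitdb):

```python
SORT_INTERVAL = 500  # resort the priority list every N queries, as PackedDB does


class PriorityLookup:
    """Probe a list of backends, keeping the most frequently hit ones first."""

    def __init__(self, backends):
        # backends: mapping of name -> set of keys that backend can resolve.
        # Each entry mirrors PackedDB's triple: [hits, name, lookup-callable].
        self._entries = [[0, name, keys.__contains__] for name, keys in backends.items()]
        self._hit_count = 0

    def _sort_entries(self):
        # highest hit count first, like PackedDB._sort_entities
        self._entries.sort(key=lambda e: e[0], reverse=True)

    def find(self, key):
        if self._hit_count % SORT_INTERVAL == 0:
            self._sort_entries()
        for entry in self._entries:
            if entry[2](key):
                entry[0] += 1        # one hit for this backend
                self._hit_count += 1  # general hit count
                return entry[1]
        raise KeyError(key)
```

As in `_pack_info`, the counters are updated without locking; the worst a race can cause is a slightly stale ordering, never a wrong lookup result.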