from io import BytesIO

import mmap
import os
import sys
import zlib

from gitdb.fun import (
    msb_size,
    stream_copy,
    apply_delta_data,
    connect_deltas,
    delta_types
)

from gitdb.util import (
    allocate_memory,
    LazyMixin,
    make_sha,
    write,
    close
)

from gitdb.const import NULL_BYTE, BYTE_SPACE
from gitdb.utils.encoding import force_bytes

has_perf_mod = False
try:
    from gitdb_speedups._perf import apply_delta as c_apply_delta
    has_perf_mod = True
except ImportError:
    pass

__all__ = ('DecompressMemMapReader', 'FDCompressedSha1Writer', 'DeltaApplyReader',
           'Sha1Writer', 'FlexibleSha1Writer', 'ZippedStoreShaWriter', 'FDStream',
           'NullStream')


#{ RO Streams

class DecompressMemMapReader(LazyMixin):

    """Reads data in chunks from a memory map and decompresses it. The client sees
    only the uncompressed data, respective file-like read calls are handling on-demand
    buffered decompression accordingly

    A constraint on the total size of bytes is activated, simulating
    a logical file within a possibly larger physical memory area

    To read efficiently, you clearly don't want to read individual bytes, instead,
    read a few kilobytes at least.

    **Note:** The chunk-size should be carefully selected as it will involve quite a bit
        of string copying due to the way the zlib is implemented. It's very wasteful,
        hence we try to find a good tradeoff between allocation time and number of
        times we actually allocate. An own zlib implementation would be good here
        to better support streamed reading - it would only need to keep the mmap
        and decompress it into chunks, that's all ... """
    __slots__ = ('_m', '_zip', '_buf', '_buflen', '_br', '_cws', '_cwe', '_s',
                 '_close', '_cbr', '_phi')

    max_read_size = 512 * 1024        # currently unused

    def __init__(self, m, close_on_deletion, size=None):
        """Initialize with mmap for stream reading
        :param m: must be content data - use new if you have object data and no size"""
        self._m = m
        self._zip = zlib.decompressobj()
        self._buf = None                        # buffer of decompressed bytes
        self._buflen = 0                        # length of bytes in buffer
        if size is not None:
            self._s = size                      # size of uncompressed data to read in total
        self._br = 0                            # num uncompressed bytes read
        self._cws = 0                           # start byte of compression window
        self._cwe = 0                           # end byte of compression window
        self._cbr = 0                           # number of compressed bytes read
        self._phi = False                       # is True if we parsed the header info
        self._close = close_on_deletion         # close the memmap on deletion?

    def _set_cache_(self, attr):
        assert attr == '_s'
        # only happens for the size attribute, which is a marker telling us we
        # still have to parse the header from the stream
        self._parse_header_info()

    def __del__(self):
        self.close()

    def _parse_header_info(self):
        """If this stream contains object data, parse the header info and skip the
        stream to a point where each read will yield object content

        :return: parsed type_string, size"""
        # read the header - it needs to be large enough to hold the header in
        # all cases
        maxb = 8192
        self._s = maxb
        hdr = self.read(maxb)
        hdrend = hdr.find(NULL_BYTE)
        typ, size = hdr[:hdrend].split(BYTE_SPACE)
        size = int(size)
        self._s = size

        # adjust internal state to match the actual header length that we ignore.
        # The buffer will be depleted first on future reads
        self._br = 0
        hdrend += 1
        self._buf = BytesIO(hdr[hdrend:])
        self._buflen = len(hdr) - hdrend

        self._phi = True

        return typ, size

    #{ Interface

    @classmethod
    def new(cls, m, close_on_deletion=False):
        """Create a new DecompressMemMapReader instance for acting as a read-only stream
        This method parses the object header from m and returns the parsed
        type and size, as well as the created stream instance.

        :param m: memory map on which to operate. It must be object data ( header + contents )
        :param close_on_deletion: if True, the memory map will be closed once we are
            being deleted"""
        inst = DecompressMemMapReader(m, close_on_deletion, 0)
        typ, size = inst._parse_header_info()
        return typ, size, inst

    def data(self):
        """:return: random access compatible data we are working on"""
        return self._m

    def close(self):
        """Close our underlying stream of compressed bytes if this was allowed during initialization
        :return: True if we closed the underlying stream
        :note: can be called safely"""
        if self._close:
            if hasattr(self._m, 'close'):
                self._m.close()
            self._close = False
        # END handle resource freeing

    def compressed_bytes_read(self):
        """
        :return: number of compressed bytes read. This includes the bytes it
            took to decompress the header ( if there was one )"""
        # Only scrub the stream forward if we are officially done with the
        # requested bytes
        if self._br == self._s and not self._zip.unused_data:
            # manipulate the bytes-read to allow our own read method to continue,
            # but keep the window at its current position
            self._br = 0
            if hasattr(self._zip, 'status'):
                while self._zip.status == zlib.Z_OK:
                    self.read(mmap.PAGESIZE)
                # END scrub-loop custom zlib
            else:
                # pass in additional pages, until we have unused data
                while not self._zip.unused_data and self._cbr != len(self._m):
                    self.read(mmap.PAGESIZE)
                # END scrub-loop default zlib
            # END handle stream scrubbing

            # reset bytes read, just to be sure
            self._br = self._s
        # END handle stream scrubbing

        # unconditionally return our bytes read
        return self._cbr

    #} END interface

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Allows to reset the stream to restart reading
        :raise ValueError: If offset and whence are not 0"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset

        self._zip = zlib.decompressobj()
        self._br = self._cws = self._cwe = self._cbr = 0
        if self._phi:
            self._phi = False
            del(self._s)        # trigger header parsing on first access
        # END skip header

    def read(self, size=-1):
        if size < 1:
            size = self._s - self._br
        else:
            size = min(size, self._s - self._br)
        # END clamp size

        if size == 0:
            return b''
        # END handle depletion

        # deplete the buffer, then just continue using the decompress object
        # which has an own buffer. We just need this to transparently parse the
        # header from the zlib stream
        dat = b''
        if self._buf:
            if self._buflen >= size:
                # have enough data
                dat = self._buf.read(size)
                self._buflen -= size
                self._br += size
                return dat
            else:
                dat = self._buf.read()        # ouch, duplicates data
                size -= self._buflen
                self._br += self._buflen

                self._buflen = 0
                self._buf = None
            # END handle buffer depletion
        # END handle buffer

        # decompress some data. zlib needs to operate on chunks of our memory
        # map, as it would otherwise fill in the 'unconsumed_tail' attribute,
        # possibly reading our whole map to the end even though just a portion
        # was requested.
        tail = self._zip.unconsumed_tail
        if tail:
            # move the window, make it as large as size demands. For code
            # clarity, we just take the chunk from our map again instead of
            # reusing the unconsumed tail - the latter would save some memory
            # copying, but we could end up not getting enough data uncompressed.
            self._cws = self._cwe - len(tail)
            self._cwe = self._cws + size
        else:
            cws = self._cws
            self._cws = self._cwe
            self._cwe = cws + size
        # END handle tail

        # if the window is too small, make it larger so zip can decompress something
        if self._cwe - self._cws < 8:
            self._cwe = self._cws + 8
        # END adjust winsize

        # takes a slice, but doesn't copy the data, it says ...
        indata = self._m[self._cws:self._cwe]

        # get the actual window end to be sure we don't use it for computations
        self._cwe = self._cws + len(indata)
        dcompdat = self._zip.decompress(indata, size)
        # update the amount of compressed bytes read.
        # We feed possibly overlapping chunks, which is why the unconsumed tail
        # has to be taken into consideration, as well as the unused data if we
        # hit the end of the stream.
        # NOTE: Behaviour of the zlib runtime differs between versions, which
        # requires special handling to make the tail being used correctly
        if getattr(zlib, 'ZLIB_RUNTIME_VERSION', zlib.ZLIB_VERSION) in ('1.2.7', '1.2.5') \
                and not sys.platform == 'darwin':
            unused_datalen = len(self._zip.unconsumed_tail)
        else:
            unused_datalen = len(self._zip.unconsumed_tail) + len(self._zip.unused_data)
        # END handle very special case

        self._cbr += len(indata) - unused_datalen
        self._br += len(dcompdat)

        if dat:
            dcompdat = dat + dcompdat
        # END prepend our cached data

        # it can happen, depending on the compression, that we get fewer bytes
        # than ordered as zlib needs the final portion of the data as well.
        # Recursively resolve that.
        # NOTE: dcompdat can be empty even though we still appear to have bytes
        # to read, if we are called by compressed_bytes_read - it manipulates
        # us to empty the stream
        if dcompdat and (len(dcompdat) - len(dat)) < size and self._br < self._s:
            dcompdat += self.read(size - len(dcompdat))
        # END handle special case
        return dcompdat


class DeltaApplyReader(LazyMixin):

    """A reader which dynamically applies pack deltas to a base object, keeping the
    memory demands to a minimum.

    The size of the final object is only obtainable once all deltas have been
    applied, unless it is retrieved from a pack index.

    The uncompressed Delta has the following layout (MSB being a most significant
    bit encoded dynamic size):

    * MSB Source Size - the size of the base against which the delta was created
    * MSB Target Size - the size of the resulting data after the delta was applied
    * A list of one byte commands (cmd) which are followed by a specific protocol:

     * cmd & 0x80 - copy delta_data[offset:offset+size]

      * Followed by an encoded offset into the delta data
      * Followed by an encoded size of the chunk to copy

     * cmd & 0x7f - insert

      * insert cmd bytes from the delta buffer into the output stream

     * cmd == 0 - invalid operation ( or error in delta stream )
    """
    __slots__ = (
        "_bstream",             # base stream to which to apply the deltas
        "_dstreams",            # tuple of delta stream readers
        "_mm_target",           # memory map of the delta-applied data
        "_size",                # actual number of bytes in _mm_target
        "_br"                   # number of bytes read
    )

    #{ Configuration
    k_max_memory_move = 250 * 1000 * 1000
    #} END configuration

    def __init__(self, stream_list):
        """Initialize this instance with a list of streams, the first stream being
        the delta to apply on top of all following deltas, the last stream being the
        base object onto which to apply the deltas"""
        assert len(stream_list) > 1, "Need at least one delta and one base stream"

        self._bstream = stream_list[-1]
        self._dstreams = tuple(stream_list[:-1])
        self._br = 0

    def _set_cache_too_slow_without_c(self, attr):
        # The direct algorithm is fastest and most direct if there is only one
        # delta - in python, the extra overhead of connecting deltas is not
        # worth it in that case.
        if len(self._dstreams) == 1:
            return self._set_cache_brute_(attr)

        # Aggregate all deltas into one delta in reverse order.
        dcl = connect_deltas(self._dstreams)

        # An undersized delta chunk list means there is nothing to apply
        if dcl.rbound() == 0:
            self._size = 0
            self._mm_target = allocate_memory(0)
            return
        # END handle empty list

        self._size = dcl.rbound()
        self._mm_target = allocate_memory(self._size)

        bbuf = allocate_memory(self._bstream.size)
        stream_copy(self._bstream.read, bbuf.write, self._bstream.size, 256 * mmap.PAGESIZE)

        # APPLY CHUNKS
        write = self._mm_target.write
        dcl.apply(bbuf, write)

        self._mm_target.seek(0)

    def _set_cache_brute_(self, attr):
        """If we are here, we apply the actual deltas"""
        # read the header information of each delta stream: MSB source size,
        # MSB target size, then the opcode stream
        buffer_info_list = list()
        max_target_size = 0
        for dstream in self._dstreams:
            buf = dstream.read(512)            # read the header information + the first opcodes
            offset, src_size = msb_size(buf)
            offset, target_size = msb_size(buf, offset)
            buffer_info_list.append((buf[offset:], offset, src_size, target_size))
            max_target_size = max(max_target_size, target_size)
        # END for each delta stream

        base_size = self._bstream.size
        target_size = max_target_size

        # if we have more than 1 delta to apply, we will swap buffers, hence we
        # must assure that all buffers we use are large enough to hold all the
        # intermediate results
        if len(self._dstreams) > 1:
            base_size = target_size = max(base_size, max_target_size)
        # END adjust buffer sizes

        # Allocate a private memory map big enough to hold the base buffer -
        # we need random access to it
        bbuf = allocate_memory(base_size)
        stream_copy(self._bstream.read, bbuf.write, base_size, 256 * mmap.PAGESIZE)

        # allocate memory large enough for the largest (intermediate) target.
        # If the final target buffer is smaller than our allocated space, we
        # just use parts of it upon return.
        tbuf = allocate_memory(target_size)

        # Apply the deltas from the oldest to the newest, swapping source and
        # target buffers after each round
        final_target_size = None
        for (dbuf, offset, src_size, target_size), dstream in \
                zip(reversed(buffer_info_list), reversed(self._dstreams)):
            # allocate a buffer to hold all delta data - fill in the data for
            # fast random access, which the opcode interpretation needs
            ddata = allocate_memory(dstream.size - offset)
            ddata.write(dbuf)
            # read the rest from the stream. The size we give is larger than necessary
            stream_copy(dstream.read, ddata.write, dstream.size, 256 * mmap.PAGESIZE)

            #######################################################################
            if 'c_apply_delta' in globals():
                c_apply_delta(bbuf, ddata, tbuf)
            else:
                apply_delta_data(bbuf, src_size, ddata, len(ddata), tbuf.write)
            #######################################################################

            # finally, swap out source and target buffers. The target is now
            # the base for the next delta to apply
            bbuf, tbuf = tbuf, bbuf
            bbuf.seek(0)
            tbuf.seek(0)
            final_target_size = target_size
        # END for each delta to apply

        # its already seeked to 0, constrain it to the actual size
        # NOTE: in the end of the loop, the buffers are swapped, hence our
        # target buffer is bbuf, not tbuf!
        self._mm_target = bbuf
        self._size = final_target_size

    #{ Configuration
    if not has_perf_mod:
        _set_cache_ = _set_cache_brute_
    else:
        _set_cache_ = _set_cache_too_slow_without_c
    #} END configuration

    def read(self, count=0):
        bl = self._size - self._br        # bytes left
        if count < 1 or count > bl:
            count = bl
        # NOTE: we could check for certain size limits, and possibly
        # return buffers instead of strings to prevent byte copying
        data = self._mm_target.read(count)
        self._br += len(data)
        return data

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Allows to reset the stream to restart reading
        :raise ValueError: If offset and whence are not 0"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset
        self._br = 0
        self._mm_target.seek(0)

    #{ Interface

    @classmethod
    def new(cls, stream_list):
        """
        Convert the given list of streams into a stream which resolves deltas
        when reading from it.

        :param stream_list: two or more stream objects, first stream is a Delta
            to the object that you want to resolve, followed by N additional delta
            streams. The list's last stream must be a non-delta stream.

        :return: Non-Delta OPackStream object whose stream can be used to obtain
            the decompressed resolved data
        :raise ValueError: if the stream list cannot be handled"""
        if len(stream_list) < 2:
            raise ValueError("Need at least two streams")
        # END single object special handling

        if stream_list[-1].type_id in delta_types:
            raise ValueError(
                "Cannot resolve deltas if there is no base object stream, last one was type: %s"
                % stream_list[-1].type)
        # END check stream
        return cls(stream_list)

    #} END interface

    #{ OInfo like Interface

    @property
    def type(self):
        return self._bstream.type

    @property
    def type_id(self):
        return self._bstream.type_id

    @property
    def size(self):
        """:return: number of uncompressed bytes in the stream"""
        return self._size

    #} END oinfo like interface

#} END RO streams


#{ W Streams

class Sha1Writer:

    """Simple stream writer which produces a sha whenever you like as it digests
    everything it is supposed to write"""
    __slots__ = "sha1"

    def __init__(self):
        self.sha1 = make_sha()

    #{ Stream Interface

    def write(self, data):
        """:raise IOError: If not all bytes could be written
        :param data: byte object
        :return: length of incoming data"""
        self.sha1.update(data)
        return len(data)

    #} END stream interface

    #{ Interface

    def sha(self, as_hex=False):
        """:return: sha so far
        :param as_hex: if True, sha will be hex-encoded, binary otherwise"""
        if as_hex:
            return self.sha1.hexdigest()
        return self.sha1.digest()

    #} END interface


class FlexibleSha1Writer(Sha1Writer):

    """Writer producing a sha1 while passing on the written bytes to the given
    write function"""
    __slots__ = 'writer'

    def __init__(self, writer):
        Sha1Writer.__init__(self)
        self.writer = writer

    def write(self, data):
        Sha1Writer.write(self, data)
        self.writer(data)


class ZippedStoreShaWriter(Sha1Writer):

    """Remembers everything someone writes to it and generates a sha"""
    __slots__ = ('buf', 'zip')

    def __init__(self):
        Sha1Writer.__init__(self)
        self.buf = BytesIO()
        self.zip = zlib.compressobj(zlib.Z_BEST_SPEED)

    def __getattr__(self, attr):
        return getattr(self.buf, attr)

    def write(self, data):
        alen = Sha1Writer.write(self, data)
        self.buf.write(self.zip.compress(data))
        return alen

    def close(self):
        self.buf.write(self.zip.flush())

    def seek(self, offset, whence=getattr(os, 'SEEK_SET', 0)):
        """Seeking currently only supports to rewind written data
        Multiple writes are not supported"""
        if offset != 0 or whence != getattr(os, 'SEEK_SET', 0):
            raise ValueError("Can only seek to position 0")
        # END handle offset
        self.buf.seek(0)

    def getvalue(self):
        """:return: string value from the current stream position to the end"""
        return self.buf.getvalue()


class FDCompressedSha1Writer(Sha1Writer):

    """Digests data written to it, making the sha available, then compress the
    data and write it to the file descriptor

    **Note:** operates on raw file descriptors
    **Note:** for this to work, you have to use the close-method of this instance"""
    __slots__ = ('fd', 'sha1', 'zip')

    # input error
    exc = IOError("Failed to write all bytes to filedescriptor")

    def __init__(self, fd):
        super().__init__()
        self.fd = fd
        self.zip = zlib.compressobj(zlib.Z_BEST_SPEED)

    #{ Stream Interface

    def write(self, data):
        """:raise IOError: If not all bytes could be written
        :return: length of incoming data"""
        self.sha1.update(data)
        cdata = self.zip.compress(data)
        bytes_written = write(self.fd, cdata)

        if bytes_written != len(cdata):
            raise self.exc

        return len(data)

    def close(self):
        remainder = self.zip.flush()
        if write(self.fd, remainder) != len(remainder):
            raise self.exc
        return close(self.fd)

    #} END stream interface


class FDStream:

    """A simple wrapper providing the most basic functions on a file descriptor
    with the fileobject interface. Cannot use os.fdopen as the resulting stream
    takes ownership"""
    __slots__ = ("_fd", '_pos')

    def __init__(self, fd):
        self._fd = fd
        self._pos = 0

    def write(self, data):
        self._pos += len(data)
        os.write(self._fd, data)

    def read(self, count=0):
        if count == 0:
            # read everything that is left on the descriptor
            count = os.fstat(self._fd).st_size
        # END handle read everything

        bytes = os.read(self._fd, count)
        self._pos += len(bytes)
        return bytes

    def fileno(self):
        return self._fd

    def tell(self):
        return self._pos

    def close(self):
        close(self._fd)


class NullStream:

    """A stream that does nothing but providing a stream interface.
    Use it like /dev/null"""
    __slots__ = tuple()

    def read(self, size=0):
        return ''

    def close(self):
        pass

    def write(self, data):
        return len(data)

#} END W streams