L i+ddlmZddlmZmZddlmZddlmZm Z ddl Z ddl m Z mZmZmZmZddlmZmZmZmZmZdd lmZmZmZmZmZer ddlZdd l mZerddl Z erddl!Z"erddl#m Z$ dNd e jJd e e e&fd ee e e&fde jJfdZ'de jPdfd e jJde)dee de jTdee e&e fde jJf dZ+dZ, dOd e e jJddddfdee-dee&dee e&e fddf dZ.dNde/e0e0ffdZ1 dPde jJde e0e/e0e0fe2e0e/e0d ffd!e-d"ee0dee e&e fde/f d#Z3 dQd e jJde/e0e0fd$ed%d&ee0dee d'e-dee e&e fde jJfd(Z4 dRd e jJd)e e)ee)fd*e e)ee)fdee dee e&e fde jJf d+Z5 dRd e jJde/e0e0fdee e&e fdee e&e fde jJf d,Z6dSd.Z7d-e jJde jJfd/Z8dTd0Z9d-edefd1Z:dUd3Z;d2e jJde jJfd4Zd7Z?d8Z@Gd9d:eZAeAjd;ddfd e jJde e)ee)fdee e&e fdee e&e fde jJfd?ZCd edefd@ZD dRd e jJdee dee e&e fde jJfdAZEdBZFdWdCe-fdDZGdEZH dWdFe e2ddfdGe-dCe-de/eIe/e0e0fe2dfeIe e0e/e0e0ffe/e/e0e0fe0ffffdHZJ dWdIeIe/e0e0fdfdJeIe e0e/e0e0ffe/e/e0e0fe0ffdCe-de e2ddffdKZKGdLdMZLy)X) defaultdict) CollectionIterable)ceil)OptionalUnionN)ChannelDimension ImageInputget_channel_dimension_axisget_image_sizeinfer_channel_dimension_format) ExplicitEnum TensorType is_jax_tensor is_tf_tensoris_torch_tensor)is_flax_availableis_tf_availableis_torch_availableis_vision_availablerequires_backends)PILImageResamplingimage channel_diminput_channel_dimreturnct|tjstdt || t |}t |}||k(r|S|t jk(ratt|jdz |jdz |jdz |jdz gz}|j|}|S|t jk(ratt|jdz |jdz |jdz |jdz gz}|j|}|Std|)a Converts `image` to the channel dimension format specified by `channel_dim`. The input can have arbitrary number of leading dimensions. Only last three dimension will be permuted to format the `image`. Args: image (`numpy.ndarray`): The image to have its channel dimension set. channel_dim (`ChannelDimension`): The channel dimension format to use. input_channel_dim (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The image with the channel dimension set to `channel_dim`. ,Input image must be of type np.ndarray, got r z&Unsupported channel dimension format: ) isinstancenpndarray TypeErrortyperr FIRSTlistrangendim transposeLAST ValueError)rrrtarget_channel_dimaxess c/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/transformers/image_transforms.pyto_channel_dimension_formatr16s** eRZZ (FtE{mTUU :5A)+6.. -333E%**q.)*ejj1nejj1nejj[\n-]]% L /44 4E%**q.)*ejj1nejj1nejj[\n-]]% LA+OPPscale data_formatdtypeinput_data_formatct|tjstdt ||j tj |z}| t|||}|j |}|S)a Rescales `image` by `scale`. Args: image (`np.ndarray`): The image to rescale. scale (`float`): The scale to use for rescaling the image. data_format (`ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. dtype (`np.dtype`, *optional*, defaults to `np.float32`): The dtype of the output image. Defaults to `np.float32`. Used for backwards compatibility with feature extractors. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The rescaled image. r)r"r#r$r%r&astypefloat64r1)rr3r4r5r6rescaled_images r0rescaler;ask4 eRZZ (FtE{mTUU\\"**-5N4^[Rcd#**51N r2c|jtjk(rd}|Stj||j t rbtj d|krtj |dkrd}|Std|jd|jdtj d|krtj |dkrd}|Std |jd|jd) z Detects whether or not the image needs to be rescaled before being converted to a PIL image. The assumption is that if the image is of type `np.float` and all values are between 0 and 1, it needs to be rescaled. FrzZThe image to be converted to a PIL image contains values outside the range [0, 255], got [z, z%] which cannot be converted to uint8.r TzXThe image to be converted to a PIL image contains values outside the range [0, 1], got [) r5r#uint8allcloser8intallr-minmax)r do_rescales r0_rescale_for_pil_conversionrEs {{bhh  UELL- . 66!u* "&&#"6J  }Buyy{m3XZ  U uz 2   IIK=599;-/T V  r2zPIL.Image.Image torch.Tensor tf.Tensorz jnp.ndarrayrD image_modecttdgt|tjjr|St |s t |r|j}nRt|rtj|}n1t|tjstdt|t|tj |}|j"ddk(rtj$|dn|}| t'|n|}|r t)|d}|j+tj,}tjj/||S)a Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed. Args: image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor` or `tf.Tensor`): The image to convert to the `PIL.Image` format. do_rescale (`bool`, *optional*): Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will default to `True` if the image type is a floating type and casting to `int` would result in a loss of precision, and `False` otherwise. image_mode (`str`, *optional*): The mode to use for the PIL image. If unset, will use the default mode for the input image type. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `PIL.Image.Image`: The converted image. visionz Input image type not supported: r axisr=mode)r to_pil_imager"PILImagerrnumpyrr#arrayr$r-r&r1r r,shapesqueezerEr;r8r> fromarray)rrDrHr6s r0rPrPs2lXJ/%) ue!4  u  rzz *;DK=IJJ (/?/D/DFW XE+0++b/Q*>BJJu2 &EE8B7I,U3zJs# LL "E 99  u:  66r2c|\}}d}|Stt||f}tt||f}||z |z|kDr||z|z }tt |}||kr||k(s ||kr ||k(r||} }|| fS||kr0|} ||t||z|z }|| fSt||z|z }|| fS|}||t||z|z } || fSt||z|z } || fS)aC Computes the output image size given the input image size and the desired output size. Args: image_size (`tuple[int, int]`): The input image size. size (`int`): The desired output size. max_size (`int`, *optional*): The maximum allowed output size. N)floatrBrCr@round) image_sizesizemax_sizeheightwidthraw_sizemin_original_sizemax_original_sizeohows r0get_size_with_aspect_ratioresFMFEH!#vuo"67!#vuo"67 0 04 7( B"336GGHuX'D%FdNETMB 8O    H$8X&./B 8OTF]U*+B 8O   H$8X%./B 8OTE\F*+B 8Or2 input_imager\.default_to_squarer]ct|ttfr8t|dk(r t|St|dk(r|d}n t d|r||fSt ||\}}||kr||fn||f\}}|} | t | |z|z } } |.|| krt d|d|| |kDrt || z| z |} } ||kr| | fS| | fS)a Find the target (height, width) dimension of the output image after resizing given the input image and the desired size. Args: input_image (`np.ndarray`): The image to resize. size (`int` or `tuple[int, int]` or list[int] or `tuple[int]`): The size to use for resizing the image. If `size` is a sequence like (h, w), output size will be matched to this. If `size` is an int and `default_to_square` is `True`, then image will be resized to (size, size). If `size` is an int and `default_to_square` is `False`, then smaller edge of the image will be matched to this number. i.e, if height > width, then image will be rescaled to (size * height / width, size). default_to_square (`bool`, *optional*, defaults to `True`): How to convert `size` when it is a single int. If set to `True`, the `size` will be converted to a square (`size`,`size`). If set to `False`, will replicate [`torchvision.transforms.Resize`](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize) with support for resizing only the smallest edge and providing an optional `max_size`. max_size (`int`, *optional*): The maximum allowed for the longer edge of the resized image: if the longer edge of the image is greater than `max_size` after being resized according to `size`, then the image is resized again so that the longer edge is equal to `max_size`. As a result, `size` might be overruled, i.e the smaller edge may be shorter than `size`. Only used if `default_to_square` is `False`. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `tuple`: The target (height, width) dimension of the output image after resizing. r!r rz7size must have 1 or 2 elements if it is a list or tuplez max_size = zN must be strictly greater than the requested size for the smaller edge size = )r"tupler(lenr-r r@) rfr\rgr]r6r^r_shortlongrequested_new_short new_shortnew_longs r0get_resize_output_image_sizerpsJ$ & t9>;  Y!^7DVW Wd|";0ABMFE%*f_5&/65/KE4-s3F3MPU3U/VxI * *hZ(4486;  h "%h&:X&E"FxI$)VOHi N)X9NNr2resampler reducing_gap return_numpyc2ttdg||ntj}t |dk(s t d| t |}||n|}d}t|tjjst|}t|||}|\}} |j| |f||} |ritj| } | jdk(rtj| dn| } t!| |t"j$ } |r t'| d n| } | S) a Resizes `image` to `(height, width)` specified by `size` using the PIL library. Args: image (`np.ndarray`): The image to resize. size (`tuple[int, int]`): The size to use for resizing the image. resample (`int`, *optional*, defaults to `PILImageResampling.BILINEAR`): The filter to user for resampling. reducing_gap (`int`, *optional*): Apply optimization by resizing the image in two steps. The bigger `reducing_gap`, the closer the result to the fair resampling. See corresponding Pillow documentation for more details. data_format (`ChannelDimension`, *optional*): The channel dimension format of the output image. If unset, will use the inferred format from the input. return_numpy (`bool`, *optional*, defaults to `True`): Whether or not to return the resized image as a numpy array. If False a `PIL.Image.Image` object is returned. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If unset, will use the inferred format from the input. Returns: `np.ndarray`: The resized image. rJr!zsize must have 2 elementsF)rDr6)rqrrrKrLrgp?)rresizerBILINEARrjr-rr"rQrRrErPr#rTr* expand_dimsr1r r,r;) rr\rqrrr4rsr6rDr^r_ resized_images r0rvrvCsBfxj)#/x5G5P5PH t9>455 :5A'2':# KJ eSYY__ -07 UzM^_MFELL%8R^L_M/ CPBTBTXYBY}2>_l 3 ;:J:O:O > NBJJ /.~>> n %+N;; .tN/C.DE FFr2cvt|tjrt|jdk(rh|j tj k(r|jtj}|dddddfd|dddddfzzd|dddddfzzSt|dd|dzzd|dzzS)z* Converts RGB color to unique ID. r Nrr ir!) r"r#r$rjrUr5r>r8int32r@)colors r0 rgb_to_idrs%$U[[)9Q)> ;;"(( "LL*EQ1W~eAq!Gn 44y5Aq>7QQQ uQx#a.(9uQx+?? @@r2ct|tjrx|j}t t |j dgz}tj|tj}tdD]}|dz|d|f<|dz}|Sg}tdD]}|j|dz|dz}|S)z* Converts unique ID to RGB color. r r}r.) r"r#r$copyrir(rUzerosr>r)append)id_map id_map_copy rgb_shapergb_mapir_s r0 id_to_rgbrs&"**%kkm $v||,s23 ((9BHH5q A)C/GCFO C K  E 1X Vc\"3 Lr2c eZdZdZdZdZdZdZy) PaddingModezP Enum class for the different padding modes to use when padding images. constantreflect replicate symmetricN)__name__ __module__ __qualname____doc__CONSTANTREFLECT REPLICATE SYMMETRICr2r0rrsHGIIr2rgpaddingrOconstant_valuesc tfd}||}|tjk(r"||}tj|d|n|tj k(rtj|dnf|tj k(rtj|dn:|tjk(rtj|dntd||t|SS) a Pads the `image` with the specified (height, width) `padding` and `mode`. Args: image (`np.ndarray`): The image to pad. padding (`int` or `tuple[int, int]` or `Iterable[tuple[int, int]]`): Padding to apply to the edges of the height, width axes. Can be one of three formats: - `((before_height, after_height), (before_width, after_width))` unique pad widths for each axis. - `((before, after),)` yields same before and after pad for height and width. - `(pad,)` or int is a shortcut for before = after = pad width for all axes. mode (`PaddingMode`): The padding mode to use. Can be one of: - `"constant"`: pads with a constant value. - `"reflect"`: pads with the reflection of the vector mirrored on the first and last values of the vector along each axis. - `"replicate"`: pads with the replication of the last value on the edge of the array along each axis. - `"symmetric"`: pads with the reflection of the vector mirrored along the edge of the array. constant_values (`float` or `Iterable[float]`, *optional*): The value to use for the padding if `mode` is `"constant"`. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the output image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use same as the input image. input_data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format for the input image. Can be one of: - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image. Returns: `np.ndarray`: The padded image. c t|ttfr ||f||ff}nt|tr#t |dk(r|d|df|d|dff}nvt|tr&t |dk(rt|dtr||f}n@t|tr"t |dk(rt|dtrnt d|t jk(rdg|ng|d}jdk(rdg|}|S|}|S)za Convert values to be in the format expected by np.pad based on the data format. r rr!zUnsupported format: )rr) r"r@rYrirjr-r r'r*)valuesrr6s r0_expand_for_data_formatz$pad.._expand_for_data_formats fsEl +v&(89F  &3v;!+;ay&),vay&).DEF  &3v;!+; 6RS9VY@Zf%F  &3v;!+; 6RS9V[@\ 3F8<= ='8;K;Q;Q&Q&"6"WhY_WhagWh',jjAo&"6" z0_reconstruct_nested_structure.._s23A2sNr )rCrr(rr)) indicesprocessed_images max_outer_idxresultnested_indicesrr inner_max_idx inner_listrUrs r0_reconstruct_nested_structurer \s2'22MV}q( )F!&N$1q  #$=1$ %#  q 12M=1#45J=1,- Aq6W$!(!QJE3$4U$;C$@JqM A#F1I# Mr2imagesdisable_groupingc |(|r|ddjn|dj}|dk(}|r|rtt|Dcic]8}tt||D]}||f|||jd:c}}tt|Dcic]'}tt||D] }||f||fdf )c}}fStt|Dcic]}|||jdc}tt|Dcic]}||df c}fSt ||\}}|j D cic]\}} |t j| d}}} ||fScc}}wcc}}wcc}wcc}wcc} }w)a Groups images by shape. Returns a dictionary with the shape as key and a list of images with that shape as value, and a dictionary with the index of the image in the original list as key and the shape and index in the grouped list as value. The function supports both flat lists of tensors and nested structures. The input must be either all flat or all nested, not a mix of both. Args: images (Union[list["torch.Tensor"], "torch.Tensor"]): A list of images or a single tensor disable_grouping (bool): Whether to disable grouping. If None, will be set to True if the images are on CPU, and False otherwise. This choice is based on empirical observations, as detailed here: https://github.com/huggingface/transformers/pull/38157 is_nested (bool, *optional*, defaults to False): Whether the images are nested. Returns: tuple[dict[tuple[int, int], list["torch.Tensor"]], dict[Union[int, tuple[int, int]], tuple[tuple[int, int], int]]]: - A dictionary with shape as key and list of images with that shape as value - A dictionary mapping original indices to (shape, index) tuples rcpur)devicer)rj unsqueezeritemsrr) r r rrrrrrrU images_lists r0group_images_by_shapervs<(11$$vay7G7G!U? ?DS[?Qq![`adeklmenao[pqVWQFF1IaL22155qFq-23v;-?t()sSYZ[S\~I^tDEA!Q #tt 8=S[7IJ!Avay**1--J`efijpfq`rLs[\QQRTUPVYLss s,B&)+T(N(XfWkWkWmnAS eU[[!<<nNn / //rtKLs os=E3,E9$E? F "F rrc|s4tt|Dcgc]}|||d||dc}St||Scc}w)a Reconstructs images in the original order, preserving the original structure (nested or not). The input structure is either all flat or all nested. Args: processed_images (dict[tuple[int, int], "torch.Tensor"]): Dictionary mapping shapes to batched processed images. grouped_images_index (dict[Union[int, tuple[int, int]], tuple[tuple[int, int], int]]): Dictionary mapping original indices to (shape, index) tuples. is_nested (bool, *optional*, defaults to False): Whether the images are nested. Cannot be inferred from the input, as some processing functions outputs nested images. even with non nested images,e.g functions splitting images into patches. We thus can't deduce is_nested from the input. Returns: Union[list["torch.Tensor"], "torch.Tensor"]: Images in the original structure. rr )r)rjr )rrrrs r0reorder_imagesrsf. 3345  1!4Q7 89Ma9PQR9S T  ))=?O PP  sAc0eZdZdZdej fdZy) NumpyToTensorz4 Convert a numpy array to a PyTorch tensor. rcjtj|jdddjS)Nr!rr )r from_numpyr+ contiguous)selfrs r0__call__zNumpyToTensor.__call__s+1a 89DDFFr2N)rrrrr#r$rrr2r0rrsGbjjGr2rr)NNN)TNN)NNNTN)NN)rrFrrF)rrGrrG)rrFrrF)rrGrrG)F)M collectionsrcollections.abcrrmathrtypingrrrSr# image_utilsr r r r rutilsrrrrrutils.import_utilsrrrrrrQrr tensorflowr jax.numpyjnpr$strr1rrYr5r;rEboolrPrir@rer(rprvrrrrrrrrrrrrrrrrrrrr dictrrrrr2r0r+s$0"ZY/ AE( ::(',-( &6&; <=(ZZ (\/3jj@D # ::# #*+# 88 #  c+;&; <= # ZZ #L:"& $@D 37 . ]Z [3737 37 c+;&; <= 37  37l$5c?$V#"@D @O@O U38_d3isCx@ A@O@Osm @O  c+;&; <= @O  @OL04"&.2@DD ::D S/D+,D3- D *+ D  D c+;&; <=DZZDV/3@D : ::: z%(( ): uj'' (:*+ :  c+;&; <= : ZZ :@;?@D Q ::Q S/Q%%5 567Q c+;&; <= Q ZZ Qh2::"**FJF:F*" BJJ 2::  GZGJG.A&,$,,58:>@D S ::S 3c3h%S/)BB CS S5(5/12 S %%5 567 S  c+;&; <= SZZSn*,/3@D# ::#*+# c+;&; <=#ZZ #L 0T 0:00 $~&6 7000000 sCx$~. ./eCsCx