from collections import defaultdict
from typing import TYPE_CHECKING, Any, Optional, Union, overload

from ..image_utils import load_image
from ..utils import add_end_docstrings, is_torch_available, logging, requires_backends
from .base import ChunkPipeline, build_pipeline_init_args


if is_torch_available():
    import torch

    from ..models.auto.modeling_auto import MODEL_FOR_MASK_GENERATION_MAPPING_NAMES

if TYPE_CHECKING:
    from PIL import Image

logger = logging.get_logger(__name__)


@add_end_docstrings(
    build_pipeline_init_args(has_image_processor=True),
    r"""
        points_per_batch (*optional*, int, defaults to 64):
            Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU
            memory.
        output_bboxes_mask (`bool`, *optional*, defaults to `False`):
            Whether or not to output the bounding box predictions.
        output_rle_mask (`bool`, *optional*, defaults to `False`):
            Whether or not to output the masks in `RLE` format""",
)
class MaskGenerationPipeline(ChunkPipeline):
    """
    Automatic mask generation for images using `SamForMaskGeneration`. This pipeline predicts binary masks for an
    image, given an image. It is a `ChunkPipeline` because you can separate the points in a mini-batch in order to
    avoid OOM issues. Use the `points_per_batch` argument to control the number of points that will be processed at
    the same time. Default is `64`.

    The pipeline works in 3 steps:
        1. `preprocess`: A grid of 1024 points evenly separated is generated along with bounding boxes and point
           labels. For more details on how the points and bounding boxes are created, check the `_generate_crop_boxes`
           function. The image is also preprocessed using the `image_processor`. This function `yields` a minibatch of
           `points_per_batch` points.

        2. `forward`: feeds the outputs of `preprocess` to the model. The image embedding is computed only once.
           Calls `self.model.get_image_embeddings` and makes sure that the gradients are not computed, and that the
           tensors and the model are on the same device.

        3. `postprocess`: The most important part of the automatic mask generation happens here. Three steps are
           performed:
                - image_processor.postprocess_masks (run on each minibatch loop): takes in the raw output masks,
                  resizes them according to the image size, and transforms them to binary masks.
                - image_processor.filter_masks (on each minibatch loop): uses both `pred_iou_thresh` and
                  `stability_scores`, and applies a variety of filters based on non-maximum suppression to remove
                  bad masks.
                - image_processor.postprocess_masks_for_amg applies the NMS on the masks to only keep relevant ones.

    Example:

    ```python
    >>> from transformers import pipeline

    >>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
    >>> outputs = generator(
    ...     "http://images.cocodataset.org/val2017/000000039769.jpg",
    ... )

    >>> outputs = generator(
    ...     "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", points_per_batch=128
    ... )
    ```

    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)

    This segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
    `"mask-generation"`.

    See the list of available models on
    [huggingface.co/models](https://huggingface.co/models?filter=mask-generation).
    """
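
    # Component-loading flags read by the base `Pipeline` machinery: mask generation only needs an image
    # processor, so no processor, feature extractor or tokenizer is instantiated for this task.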
    _load_processor = False
    _load_image_processor = True
    _load_feature_extractor = False
    _load_tokenizer = False

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        requires_backends(self, "vision")
        requires_backends(self, "torch")

        if self.framework != "pt":
            raise ValueError(f"The {self.__class__} is only available in PyTorch.")

        self.check_model_type(MODEL_FOR_MASK_GENERATION_MAPPING_NAMES)

    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        postprocess_kwargs = {}
        forward_params = {}
        # preprocess args
        if "points_per_batch" in kwargs:
            preprocess_kwargs["points_per_batch"] = kwargs["points_per_batch"]
        if "points_per_crop" in kwargs:
            preprocess_kwargs["points_per_crop"] = kwargs["points_per_crop"]
        if "crops_n_layers" in kwargs:
            preprocess_kwargs["crops_n_layers"] = kwargs["crops_n_layers"]
        if "crop_overlap_ratio" in kwargs:
            preprocess_kwargs["crop_overlap_ratio"] = kwargs["crop_overlap_ratio"]
        if "crop_n_points_downscale_factor" in kwargs:
            preprocess_kwargs["crop_n_points_downscale_factor"] = kwargs["crop_n_points_downscale_factor"]
        if "timeout" in kwargs:
            preprocess_kwargs["timeout"] = kwargs["timeout"]
        # forward args
        if "pred_iou_thresh" in kwargs:
            forward_params["pred_iou_thresh"] = kwargs["pred_iou_thresh"]
        if "stability_score_offset" in kwargs:
            forward_params["stability_score_offset"] = kwargs["stability_score_offset"]
        if "mask_threshold" in kwargs:
            forward_params["mask_threshold"] = kwargs["mask_threshold"]
        if "stability_score_thresh" in kwargs:
            forward_params["stability_score_thresh"] = kwargs["stability_score_thresh"]
        if "max_hole_area" in kwargs:
            forward_params["max_hole_area"] = kwargs["max_hole_area"]
        if "max_sprinkle_area" in kwargs:
            forward_params["max_sprinkle_area"] = kwargs["max_sprinkle_area"]
        # postprocess args
        if "crops_nms_thresh" in kwargs:
            postprocess_kwargs["crops_nms_thresh"] = kwargs["crops_nms_thresh"]
        if "output_rle_mask" in kwargs:
            postprocess_kwargs["output_rle_mask"] = kwargs["output_rle_mask"]
        if "output_bboxes_mask" in kwargs:
            postprocess_kwargs["output_bboxes_mask"] = kwargs["output_bboxes_mask"]
        return preprocess_kwargs, forward_params, postprocess_kwargs

    @overload
    def __call__(self, image: Union[str, "Image.Image"], *args: Any, **kwargs: Any) -> dict[str, Any]: ...

    @overload
    def __call__(
        self, image: Union[list[str], list["Image.Image"]], *args: Any, **kwargs: Any
    ) -> list[dict[str, Any]]: ...

    def __call__(
        self, image: Union[str, list[str], "Image.Image", list["Image.Image"]], *args: Any, **kwargs: Any
    ) -> Union[dict[str, Any], list[dict[str, Any]]]:
        """
        Generates binary segmentation masks.

        Args:
            image (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
                Image or list of images.
            mask_threshold (`float`, *optional*, defaults to 0.0):
                Threshold to use when turning the predicted masks into binary values.
            pred_iou_thresh (`float`, *optional*, defaults to 0.88):
                A filtering threshold in `[0,1]` applied on the model's predicted mask quality.
            stability_score_thresh (`float`, *optional*, defaults to 0.95):
                A filtering threshold in `[0,1]`, using the stability of the mask under changes to the cutoff used to
                binarize the model's mask predictions.
            stability_score_offset (`int`, *optional*, defaults to 1):
                The amount to shift the cutoff when calculating the stability score.
            crops_nms_thresh (`float`, *optional*, defaults to 0.7):
                The box IoU cutoff used by non-maximal suppression to filter duplicate masks.
            crops_n_layers (`int`, *optional*, defaults to 0):
                If `crops_n_layers>0`, mask prediction will be run again on crops of the image. Sets the number of
                layers to run, where each layer has 2**i_layer number of image crops.
            crop_overlap_ratio (`float`, *optional*, defaults to `512 / 1500`):
                Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction
                of the image length. Later layers with more crops scale down this overlap.
            crop_n_points_downscale_factor (`int`, *optional*, defaults to `1`):
                The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
            timeout (`float`, *optional*, defaults to None):
                The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
                the call may block forever.

        Return:
            `Dict`: A dictionary with the following keys:
                - **mask** (`PIL.Image`) -- A binary mask of the detected object as a PIL Image of shape
                  `(width, height)` of the original image. Returns a mask filled with zeros if no object is found.
                - **score** (*optional* `float`) -- Optionally, when the model is capable of estimating a confidence
                  of the "object" described by the label and the mask.
        """
        num_workers = kwargs.pop("num_workers", None)
        batch_size = kwargs.pop("batch_size", None)
        return super().__call__(image, *args, num_workers=num_workers, batch_size=batch_size, **kwargs)

    def preprocess(
        self,
        image,
        points_per_batch=64,
        crops_n_layers: int = 0,
        crop_overlap_ratio: float = 512 / 1500,
        points_per_crop: Optional[int] = 32,
        crop_n_points_downscale_factor: Optional[int] = 1,
        timeout: Optional[float] = None,
    ):
        image = load_image(image, timeout=timeout)
        # SAM-style image processors expose `longest_edge`; fall back to `height` for processors with explicit sizes.
        target_size = self.image_processor.size.get("longest_edge", self.image_processor.size.get("height"))
        crop_boxes, grid_points, cropped_images, input_labels = self.image_processor.generate_crop_boxes(
            image, target_size, crops_n_layers, crop_overlap_ratio, points_per_crop, crop_n_points_downscale_factor
        )
        model_inputs = self.image_processor(images=cropped_images, return_tensors="pt")
        if self.framework == "pt" and self.torch_dtype is not None:
            model_inputs = model_inputs.to(self.torch_dtype)

        with self.device_placement():
            if self.framework == "pt":
                inference_context = self.get_inference_context()
                with inference_context():
                    model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
                    # The image embedding is computed once and reused for every minibatch of points.
                    image_embeddings = self.model.get_image_embeddings(model_inputs.pop("pixel_values"))
                    # Some models (e.g. SAM-HQ) also return intermediate embeddings next to the image embeddings.
                    if isinstance(image_embeddings, tuple):
                        image_embeddings, intermediate_embeddings = image_embeddings
                        model_inputs["intermediate_embeddings"] = intermediate_embeddings
                    model_inputs["image_embeddings"] = image_embeddings

        n_points = grid_points.shape[1]
        points_per_batch = points_per_batch if points_per_batch is not None else n_points

        if points_per_batch <= 0:
            raise ValueError(
                "Cannot have points_per_batch<=0. Must be >=1 to return batched outputs. "
                "To return all points at once, set points_per_batch to None"
            )

        for i in range(0, n_points, points_per_batch):
            batched_points = grid_points[:, i : i + points_per_batch, :, :]
            labels = input_labels[:, i : i + points_per_batch]
            is_last = i == n_points - points_per_batch
            yield {
                "input_points": batched_points,
                "input_labels": labels,
                "input_boxes": crop_boxes,
                "is_last": is_last,
                **model_inputs,
            }

    def _forward(
        self,
        model_inputs,
        pred_iou_thresh=0.88,
        stability_score_thresh=0.95,
        mask_threshold=0,
        stability_score_offset=1,
        max_hole_area=None,
        max_sprinkle_area=None,
    ):
        input_boxes = model_inputs.pop("input_boxes")
        is_last = model_inputs.pop("is_last")
        original_sizes = model_inputs.pop("original_sizes").tolist()
        reshaped_input_sizes = model_inputs.pop("reshaped_input_sizes").tolist()

        model_outputs = self.model(**model_inputs)

        # post processing happens here in order to avoid CPU GPU copies of ALL the masks
        low_resolution_masks = model_outputs["pred_masks"]
        # Hole/sprinkle filtering is only supported by some image processors, so only forward the thresholds
        # when they were explicitly provided.
        postprocess_kwargs = {}
        if max_hole_area is not None:
            postprocess_kwargs["max_hole_area"] = max_hole_area
        if max_sprinkle_area is not None:
            postprocess_kwargs["max_sprinkle_area"] = max_sprinkle_area
        masks = self.image_processor.post_process_masks(
            low_resolution_masks,
            original_sizes,
            reshaped_input_sizes,
            mask_threshold,
            binarize=False,
            **postprocess_kwargs,
        )
        iou_scores = model_outputs["iou_scores"]
        masks, iou_scores, boxes = self.image_processor.filter_masks(
            masks[0],
            iou_scores[0],
            original_sizes[0],
            input_boxes[0],
            pred_iou_thresh,
            stability_score_thresh,
            mask_threshold,
            stability_score_offset,
        )
        return {
            "masks": masks,
            "is_last": is_last,
            "boxes": boxes,
            "iou_scores": iou_scores,
        }

    def postprocess(
        self,
        model_outputs,
        output_rle_mask=False,
        output_bboxes_mask=False,
        crops_nms_thresh=0.7,
    ):
        all_scores = []
        all_masks = []
        all_boxes = []
        for model_output in model_outputs:
            all_scores.append(model_output.pop("iou_scores"))
            all_masks.extend(model_output.pop("masks"))
            all_boxes.append(model_output.pop("boxes"))

        all_scores = torch.cat(all_scores)
        all_boxes = torch.cat(all_boxes)
        output_masks, iou_scores, rle_mask, bounding_boxes = self.image_processor.post_process_for_mask_generation(
            all_masks, all_scores, all_boxes, crops_nms_thresh
        )

        extra = defaultdict(list)
        for model_output in model_outputs:
            for k, v in model_output.items():
                extra[k].append(v)

        optional = {}
        if output_rle_mask:
            optional["rle_mask"] = rle_mask
        if output_bboxes_mask:
            optional["bounding_boxes"] = bounding_boxes

        return {"masks": output_masks, "scores": iou_scores, **optional, **extra}
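

# ---------------------------------------------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module). It mirrors the example from the class docstring
# above: the checkpoint and image URL are the ones used there, and the "masks"/"scores" keys are the ones returned
# by `postprocess`. Running it requires the `torch` and `Pillow` backends plus network access.
# ---------------------------------------------------------------------------------------------------------------
if __name__ == "__main__":
    from transformers import pipeline

    generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
    outputs = generator(
        "http://images.cocodataset.org/val2017/000000039769.jpg",
        points_per_batch=64,
    )
    # `outputs["masks"]` holds the binary masks and `outputs["scores"]` the matching IoU scores.
    print(f"Generated {len(outputs['masks'])} masks")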