"""
This is an experimental new API for PyTorch Distributed. This is actively in
development and subject to change or deletion entirely.

This is intended as a proving ground for more flexible and object oriented
distributed APIs.
"""

from collections.abc import Generator
from contextlib import contextmanager
from datetime import timedelta
from typing import Protocol, Union

import torch
from torch._C._distributed_c10d import (
    _current_process_group,
    _set_process_group,
    ProcessGroup,
    ReduceOp,
    Store,
)
from torch.distributed.rendezvous import rendezvous


_BACKENDS: dict[str, "ProcessGroupFactory"] = {}

__all__ = [
    "ProcessGroup",
    "ReduceOp",
    "ProcessGroupFactory",
    "register_backend",
    "new_group",
    "current_process_group",
    "process_group",
]


class ProcessGroupFactory(Protocol):
    """Protocol for process group factories."""

    def __call__(
        self,
        store: Store,
        rank: int,
        world_size: int,
        timeout: timedelta,
        device: torch.device,
        **kwargs: object,
    ) -> ProcessGroup: ...


def register_backend(name: str, func: ProcessGroupFactory) -> None:
    """
    Register a new process group backend.

    Args:
        name: The name of the backend.
        func: The function to create the process group.
    """
    if name in _BACKENDS:
        raise ValueError(f"Backend {name} already registered")

    _BACKENDS[name] = func


def _gloo_factory(
    store: Store,
    rank: int,
    world_size: int,
    timeout: timedelta,
    device: torch.device,
    **kwargs: object,
) -> ProcessGroup:
    from torch.distributed import ProcessGroupGloo

    assert len(kwargs) == 0, "Gloo backend received unexpected kwargs"

    backend_class = ProcessGroupGloo(store, rank, world_size, timeout)
    backend_class._set_sequence_number_for_group()

    pg = ProcessGroup(store, rank, world_size)

    pg._set_default_backend(ProcessGroup.BackendType.GLOO)
    # Gloo serves CPU tensors, and CUDA tensors when CUDA is available.
    pg._register_backend(
        torch.device("cpu"), ProcessGroup.BackendType.GLOO, backend_class
    )
    if torch.cuda.is_available():
        pg._register_backend(
            torch.device("cuda"), ProcessGroup.BackendType.GLOO, backend_class
        )

    return pg


def _nccl_factory(
    store: Store,
    rank: int,
    world_size: int,
    timeout: timedelta,
    device: torch.device,
    **kwargs: object,
) -> ProcessGroup:
    from torch.distributed import ProcessGroupNCCL

    opts = ProcessGroupNCCL.Options()
    opts._timeout = timeout

    # Remaining kwargs are forwarded as fields on ProcessGroupNCCL.Options.
    for k, v in kwargs.items():
        if not hasattr(opts, k):
            raise KeyError(f"Unknown option {k}")
        setattr(opts, k, v)

    backend_class = ProcessGroupNCCL(store, rank, world_size, opts)
    backend_class._set_sequence_number_for_group()
    backend_class.eager_connect_single_device(device)

    pg = ProcessGroup(store, rank, world_size)
    pg._set_default_backend(ProcessGroup.BackendType.NCCL)
    pg._register_backend(device, ProcessGroup.BackendType.NCCL, backend_class)

    return pg


register_backend("gloo", _gloo_factory)
register_backend("nccl", _nccl_factory)


def new_group(
    backend: str,
    timeout: timedelta,
    device: Union[str, torch.device],
    **kwargs: object,
) -> ProcessGroup:
    """
    Create a new process group with the given backend and options. This group is
    independent and will not be globally registered and thus not usable via the
    standard torch.distributed.* APIs.

    Args:
        backend: The backend to use for the process group.
        timeout: The timeout for collective operations.
        device: The device to use for the process group.
        **kwargs: All remaining arguments are passed to the backend constructor.
                  See the backend specific documentation for details.

    Returns:
        A new process group.
    """
    if backend not in _BACKENDS:
        raise ValueError(f"Backend {backend} not registered")

    device = torch.device(device)

    store, rank, world_size = next(iter(rendezvous("env://")))
    store.set_timeout(timeout)

    return _BACKENDS[backend](store, rank, world_size, timeout, device, **kwargs)


def current_process_group() -> ProcessGroup:
    """
    Get the current process group. Thread local method.

    Returns:
        The current process group.
    """
    return _current_process_group()


@contextmanager
def process_group(pg: ProcessGroup) -> Generator[None, None, None]:
    """
    Context manager for process groups. Thread local method.

    Args:
        pg: The process group to use.
    """
    prev_pg = current_process_group()
    _set_process_group(pg)

    try:
        yield
    finally:
        _set_process_group(prev_pg)
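The module combines two small, reusable patterns: a string-keyed factory registry (`register_backend` / `new_group`) and a save/restore context manager (`process_group`). A minimal standalone sketch of the same design, with purely illustrative names (`register`, `create`, `use` — none of these are torch APIs), shows the shape without needing torch:

```python
from contextlib import contextmanager
from typing import Callable, Optional

# Illustrative stand-ins for the module's registry and context-manager
# patterns. Factories here just build strings instead of process groups.
_FACTORIES: dict[str, Callable[[int], str]] = {}
_current: Optional[str] = None


def register(name: str, factory: Callable[[int], str]) -> None:
    # Mirrors register_backend: refuse duplicate registrations.
    if name in _FACTORIES:
        raise ValueError(f"Backend {name} already registered")
    _FACTORIES[name] = factory


def create(name: str, world_size: int) -> str:
    # Mirrors new_group: look up the factory and delegate construction.
    if name not in _FACTORIES:
        raise ValueError(f"Backend {name} not registered")
    return _FACTORIES[name](world_size)


@contextmanager
def use(group: str):
    # Mirrors process_group: save the previous value, install the new one,
    # and restore the previous value even if the body raises.
    global _current
    prev = _current
    _current = group
    try:
        yield
    finally:
        _current = prev


register("demo", lambda ws: f"demo-group(size={ws})")
g = create("demo", 4)
with use(g):
    assert _current == "demo-group(size=4)"
assert _current is None
print(g)  # demo-group(size=4)
```

The try/finally restore is what makes nested `with process_group(...)` blocks compose correctly: each exit reinstates whatever group was active when the block was entered.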
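`new_group` resolves its store, rank, and world size via the standard `env://` rendezvous handler, which reads them from the `MASTER_ADDR`, `MASTER_PORT`, `RANK`, and `WORLD_SIZE` environment variables. A simplified sketch of that contract (`_EnvStore` and `env_rendezvous_sketch` are illustrative stand-ins, not torch.distributed APIs):

```python
import os


class _EnvStore:
    """Illustrative stand-in for a TCPStore reachable at MASTER_ADDR:MASTER_PORT."""

    def __init__(self, addr: str, port: int) -> None:
        self.addr = addr
        self.port = port


def env_rendezvous_sketch():
    # The real env:// handler likewise fails if any of these is missing.
    addr = os.environ["MASTER_ADDR"]
    port = int(os.environ["MASTER_PORT"])
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    return _EnvStore(addr, port), rank, world_size


# Single-process example: the variables a launcher (e.g. torchrun) would set.
os.environ.update(
    {"MASTER_ADDR": "127.0.0.1", "MASTER_PORT": "29500", "RANK": "0", "WORLD_SIZE": "1"}
)
store, rank, world_size = env_rendezvous_sketch()
print(rank, world_size)  # 0 1
```

This is why calling `new_group` outside a launcher that sets these variables raises: the rendezvous has no other source for rank or world size.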