L i[z dZddlZddlZddlZddlZddlZddlZddl m Z ddl Z gdZ d:de ddfdZde fdZd:de ddfd Zde fd Zd:de ddfd Zde fd Zd eddfdZdefdZdeddfdZdefdZd;dede ddfdZdefdZdeeeeeffdZdeeeffdZde ddfdZ dd#ed$ed%ed&ed'ed(ed)e d*e d+e jRd,ed-e e jRd.e d/e dee jTe jTffd0Z+ d;d#ed$ed%ed1ed&ed'ed(ed)e d*e d2e jRd,ed/e dee jTe jTffd3Z,d4ed5eddfd6Z-d=d7Z.d ed8eddfd9Z/y)?aq# This module exposes a TunableOp interface. Some operations, such as GEMMs, could be implemented using more than one library or more than one technique. For example, a GEMM could be implemented for CUDA or ROCm using either the blas or blasLt libraries. Further, ROCm's rocblas and hipblaslt libraries allow the user to query for all possible algorithms and then choose one. How does one know which implementation is the fastest and should be chosen? That's what TunableOp provides. Enabling TunableOp and Tuning Separately ======================================== The TunableOp feature is enabled separately from enabling the tuning phase itself. Enabling TunableOp means that PyTorch will replace any standard operators with their Tunable implementations. Any call to a TunableOp first checks whether it has already been tuned for the given operator inputs. If so, it will immediately call the tuned operation; no further tuning will take place even when the tuning setting is enabled. Instead if no tuning result is found, and tuning is enabled, the TunableOp will benchmark every registered implementation of that operator for the given set of inputs and select the fastest. File Input and Output ===================== The first time any TunableOp is invoked, the internal database of tuned operations will be prepared by attempting to read the results from the given file. The default filename is 'tunableop_results.csv'. To support tuning when multiple GPUs are used across multiple processes, the GPU device ordinal is automatically inserted into the filename to avoid multiple processes overwriting the same file. If tuning is enabled and new tunings are discovered during the course of your workload, it will also write out to this same filename with all tunings, both the ones it read in at startup as well as the new ones found at runtime. This can be used, for example, to build up a tunings file across many workloads by reusing the same file. The output file is automatically created when the application terminates. This behavior can be controlled by the C++ and Python APIs but not the environment variables. Assuming you specified a filename, you'll end up with a CSV file with contents like so:: Validator,PT_VERSION,2.2.0 Validator,ROCM_VERSION,6.0.0.0-12969-1544e39 Validator,HIPBLASLT_VERSION,0.6.0-a9c5cc7 Validator,ROCBLAS_VERSION,4.0.0-72e57364-dirty GemmTunableOp_float_NT,nt_25088_4096_64,Gemm_Hipblaslt_1219,1.262 GemmTunableOp_float_NT,nt_4096_4096_64,Gemm_Rocblas_1216,0.033 Note the "Validator" lines. If you change a library version, or ROCm version, or PyTorch version, TunableOp will detect this and reject the tunings file because the prior tunings are likely affected by other software changes. The remaining lines are the tuned solutions for each TunableOp encountered during your execution. Each line consists of 4 comma-separated fields: operator name, operator parameters, solution name, and average execution time. The execution time is an optional field. The CSV file can be edited, but with caution. For example, the solution name (field 3) can be changed to "Default" and it will fall back to the original PyTorch untuned implementation. Or, in the case of ROCm's hipBLAS or hipBLASLt libraries, if you know the specific solution index you can override the solution that TunableOp selected by replacing the value. The operator name and parameters (fields 1 and 2) are internally named and should not be modified. In the case of GemmTunableOp, field 1 indicates the datatype and whether the inputs are transposed (T) or not (N) and field 2 indicates the M, N, K input shapes. There is an option to enable verbose output but it is only recommended for debugging purposes. This will produce a lot of diagnostic messages but may be useful to see if TunableOp is being used at all. Otherwise, TunableOp is completely silent, besides file output, unless there is a warning or error during its use. The verbose option is only available by setting the environment variable PYTORCH_TUNABLEOP_VEROBSE=1. A Note on Tuning Behavior, Warmup, and Cache Effects ==================================================== Tuning an operator consists of iterating through the list or registered implementations and profiling each one. The profile is established by running a single implementation in a loop multiple times and taking the average execution time. There is also an optional warmup phase prior to tuning that can help with reaching stable power states by the hardware. During tuning of a workload the various hardware caches will more likely produce hits than when not tuning. There are options for flushing the instruction cache and rotate the input tensors which might help produce a more faithful profile of the tuned operator as if the operator were run within a larger workload instead of in a tight, repetitive loop. By default, each possible solution for a given operator will be run for either 100 iterations or as many iterations that can be run within 30ms, whichever is smaller, and its average execution will be calculated. The fastest solution among all that were successfully profiled will be chosen. A profile might fail if the given solution doesn't achieve the same accuracy as the default implementation or if the solution returns an error code. Current Tunable Operators ========================= TunableGemm for ROCm -------------------- Currently only a TunableGemm for ROCm is implemented. Note that CUDA builds of PyTorch will function correctly when using TunableOp but the only solution available to CUDA builds is the 'Default' implementation i.e. the original cuBLAS default, now called through TunableOp. Any call to at::cuda::blas::gemm() or ::bgemm() will be routed through TunableOp when enabled. Calling gemm() for a given set of input arguments (transa, transb, m, n, k) will attempt to use the fastest available implementation across both rocblas and hipblaslt. Offline Tuning ============== Motivation ---------- There are several use cases for offline tuning. One use case involves a workload with a high-memory utilization, where regular tuning might lead to running out of memory. Another use case is for compute-intensive workloads. In such cases, it is more resource-efficient to collect the GEMMs for the workload once and then tune repeatedly with different tuning parameters or libraries. Workflow -------- There are basically two steps: 1) Set the environment variables to collect the untuned GEMM and this will generate ``tunableop_untuned0.csv``: .. code-block:: bash export PYTORCH_TUNABLEOP_ENABLED=1 export PYTORCH_TUNABLEOP_TUNING=0 export PYTORCH_TUNABLEOP_RECORD_UNTUNED=1 ... 2) Run a Python script that reads the ``tunableop_untuned0.csv`` and generates the ``tunableop_results0.csv``, like this: .. code-block:: python import torch.cuda.tunable as tunable import os os.putenv("PYTORCH_TUNABLEOP_ENABLED", "1") os.putenv("PYTORCH_TUNABLEOP_TUNING", "1") os.putenv("PYTORCH_TUNABLEOP_RECORD_UNTUNED", "0") tunable.tune_gemm_in_file("tunableop_untuned0.csv") It is also possible to take multiple untuned files and distribute the GEMMs for tuning to multiple GPUs within a single node. In the first step, the GEMMs are first gathered and duplicate GEMMs are eliminated. Next, the GEMMs are distributed to different GPUs for tuning. After all GEMMs are tuned, the results from all the GPUs are then gathered into a single file whose base filename has ``_full0`` appended to it (for example ``tunableop_results_full0.csv``). Finally, this new file, containing the gathered results, will be duplicated N times, once for each GPU as convenience to the user will run the workload with the tuned configuration on N GPUs. .. code-block:: python if __name__ == "__main__": num_gpus = 8 # number of GPUs that will be used during the tuning process tunable.mgpu_tune_gemm_in_file("tunableop_untuned?.csv", num_gpus) Note that the usage of the ``mgpu_tune_gemm_in_file`` API is different from its single GPU counterpart (``tune_gemm_in_file``). The body of the Python script that calls the API must be wrapped in ``main()`` as shown due to the use of concurrent futures module. The argument to ``mgpu_tune_gemm_in_file`` must contain a wild card expression (``?`` or ``*``) to generate the list of untuned files containing the GEMMs to be processed. The ``num_gpus`` must between 1 and the total number of GPUs available. Tuning Context ============== The behavior of TunableOp is currently manipulated through environment variables, the C++ interface of at::cuda::tunable::getTuningContext(), or the torch.cuda.tunable python interfaces. The environment variables take precedence over any setting you manipulate using the C++ or Python APIs. Environment Variable Interface ------------------------------ Environment variables are cached the first time they are read. You cannot use the environment variable interface programmatically since the settings become fixed. Use the C++ or Python APIs instead. N)Optional)enable is_enabled tuning_enabletuning_is_enabledrecord_untuned_enablerecord_untuned_is_enabledset_max_tuning_durationget_max_tuning_durationset_max_tuning_iterationsget_max_tuning_iterations set_filename get_filename get_resultsget_validatorswrite_file_on_exit write_file read_filetune_gemm_in_filemgpu_tune_gemm_in_fileset_rotating_buffer_sizeget_rotating_buffer_sizevalreturncBtjj|y)z@This is the big on/off switch for all TunableOp implementations.N)torch_C_cuda_tunableop_enablers X/mnt/ssd/data/python-lab/Trading/venv/lib/python3.12/site-packages/torch/cuda/tunable.pyrrs HH##C(c>tjjS)z1Returns whether the TunableOp feature is enabled.)rr_cuda_tunableop_is_enabledr!r rrs 88 . . 00r!cBtjj|y)zEnable tuning of TunableOp implementations. When enabled, if a tuned entry isn't found, run the tuning step and record the entry. N)rr_cuda_tunableop_tuning_enablers r rrs  HH**3/r!c>tjjS)z7Returns whether TunableOp implementations can be tuned.)rr!_cuda_tunableop_tuning_is_enabledr$r!r rrs 88 5 5 77r!cBtjj|y)zEnable recording untuned of TunableOp perations for offline tuning. When enabled, if a tuned entry isn't found, write it to the untuned file. N)rr_cuda_record_untuned_enablers r rrs  HH((-r!c>tjjS)zEReturns whether TunableOp operations are recorded for offline tuning.)rr_cuda_record_untuned_is_enabledr$r!r r r s 88 3 3 55r!durationcBtjj|y)zSet max time in milliseconds to spend tuning a given solution. If both max tuning duration and iterations are set, the smaller of the two will be honored. At minimum 1 tuning iteration will always be run. N)rr'_cuda_tunableop_set_max_tuning_duration)r-s r r r s  HH44X>r!c>tjjS)z.Get max time to spend tuning a given solution.)rr'_cuda_tunableop_get_max_tuning_durationr$r!r r r s 88 ; ; ==r! iterationscBtjj|y)zSet max number of iterations to spend tuning a given solution. If both max tuning duration and iterations are set, the smaller of the two will be honored. At minimum 1 tuning iteration will always be run. N)rr)_cuda_tunableop_set_max_tuning_iterations)r2s r r r s  HH66zBr!c>tjjS)z4Get max iterations to spend tuning a given solution.)rr)_cuda_tunableop_get_max_tuning_iterationsr$r!r r r s 88 = = ??r!filenameinsert_device_ordinalcDtjj||y)a0Set the filename to use for input/output of tuning results. If :attr:`insert_device_ordinal` is ``True`` then the current device ordinal will be added to the given filename automatically. This can be used in a 1-process-per-gpu scenario to ensure all processes write to a separate file. N)rr_cuda_tunableop_set_filename)r7r8s r rrs HH))(4IJr!c>tjjS)zGet the results filename.)rr_cuda_tunableop_get_filenamer$r!r rr&s 88 0 0 22r!c>tjjS)zReturn all TunableOp results.)rr_cuda_tunableop_get_resultsr$r!r rr+s 88 / / 11r!c>tjjS)z Return the TunableOp validators.)rr_cuda_tunableop_get_validatorsr$r!r rr0s 88 2 2 44r!cBtjj|y)aDuring Tuning Context destruction, write file to disk. This is useful as a final flush of your results to disk if your application terminates as result of normal operation or an error. Manual flushing of your results can be achieved by manually calling ``write_file()``.N)rr"_cuda_tunableop_write_file_on_exitrs r rr5s  HH//4r!cX| t}tjj|S)zfWrite results to a CSV file. If :attr:`filename` is not given, ``get_filename()`` is called. )rrr_cuda_tunableop_write_filer7s r rr>s& > 88 . .x 88r!cX| t}tjj|S)zqRead results from a TunableOp CSV file. If :attr:`filename` is not given, ``get_filename()`` is called. )rrr_cuda_tunableop_read_filerEs r rrHs& > 88 - -h 77r! buffer_sizec@tjj|S)zSet rotating buffer size to this value in MB, if the buffer size is greater than zero. If less than zero, query L2 cache size. If equal to zero, means deactivate rotating buffer. )rr(_cuda_tunableop_set_rotating_buffer_size)rHs r rrRs 88 < <[ IIr!c>tjjS)z*Get the rotating buffer size in kilobytes.)rr(_cuda_tunableop_get_rotating_buffer_sizer$r!r rrZs 88 < < >>r!ctsJtsJtjj }t |5}|D] }|j dst||" dddy#1swYyxYw)ztune GEMM in file.Gemm ScaledGemmN)rrrcudacurrent_deviceopen startswith_process_single_offline_gemm)r7deviceidfilelines r rr_sp <<   zz((*H h=4 =D56,T8< ====sA2A22A;filename_patternct}tj|D]A}t|5}|D]%}|jds|j |' dddC|S#1swYOxYw)zOProcess multiple untuned results file and return a set with duplicates removed.rNN)setglobrSrTadd)rYunique_gemm_entries file_pathrWrXs r &_gather_unique_untuned_gemm_from_filesr`msx%YY/02 )_ 2 2??#9:'++D1 2 2 22   2 2sA&A&&A/ ct}g}t}|5|dk7r0|jd}|dk7r|dkDr|d|dz dz||dz}nKd}nHtjd}||dk(rd }n)d |vr|j d d}n|j dd }d|vsJd }t j |}t|}|D]W} t| 5} | D]9} | jd r|r|j| )|j| ; dddd}Y|j dd} t| d5} |D]} | j| |D]} | j|  dddtd|D]3}| j dt|}tj | |5y#1swYxYw#1swYXxYw)zMGather results from multiple tunableop results file and create a single file.N.r?PYTORCH_TUNABLEOP_FILENAMEztunableop_results?.csvz%dz?.F ValidatorT_full0w0)r[rfindosgetenvreplacer\lenrSrTappendr]writerangestrshutilcopy) gemm_linesvalidator_linesresults_filenamedot_posrYresults_filename_env FirstFilematching_filesnum_matching_filesr_rWrX output_fileout_fileiduplicate_files r _gather_tunableop_resultsrzs JO$~ $)9R)?"'', b=Wq[!7Q;/#58H8RR  " !yy)EF  '+?2+E7  ) )3;;D#F 3;;CF  " "" "IYY/0N^,#  )_ ) )??;/%'..t4NN4(  ) )  #**39K k3 !8# !D NN4  ! !D NN4  !!1( )1$,,S#a&9 K01) ) )!!sF5(%F581G5F> G mnkldaldbldctransAtransBdtypeArVdtypeBrandn subMatrixc d} d}| |} | rt||x}}| r3tj|||| }tj|||| }n6tj||f| | | }tj||f|| | }|r|d|d|fj n |d|d|f}|r|d|d|fj n |d|d|f}||fS| r|r'tj |||| j ntj |||| }|r'tj ||| | j ntj ||| | }||fS|r)tj||f| || j ntj||f| || }|r)tj||f|| | j ntj||f|| | }||fS)zHelper function for _process_single_offline_gemm. Creates matrices that are then consumed by one of the Torch GEMM APIs. g?g?Ndtypedevice)maxrrfulltrand)rrrrrrrrrrVrrrfillAfillBrowsArowsBmatAmatBsubAsubBs r _create_matricesrs& E E ~ C # ;;ucID;;ucID::uclEQD::uclEQD#)tBQBF|~~tBQBF|#)tBQBF|~~tBQBF|Tz  1avh?AACZZ1F8D  1avh?AACZZ1F8D  Tz Aq65xHJJLZZAVHM  Aq65xHJJLZZAVHM  Tzr!brc | rt||x} } tj|| || | }tj|| || | }|r|d|d|d|fjddn |d|d|d|f}|r|d|d|d|fjddn |d|d|d|f}||fS|rtj|||| | ntj|||| | }|rtj|||| | ntj|||| | }|r|jddn|}|r|jddn|}||fS)zHelper function for _process_single_offline_gemm. Creates batch matrices that are then consumed by one of the Torch GEMM APIs. Similar to _create_matrices but for 3D batch matrices. rNre)rrr transposer)rrrrrrrrrrrVrrrrrrrs r _create_batch_matricesrss$ C #{{1eShG{{1eShG39tBQBBQBJ))!Q/tBQBBQBJ?O39tBQBBQBJ))!Q/tBQBBQBJ?OTz JJq!QeH =Aq!5B  JJq!QeH =Aq!5B (.t~~a#4'-t~~a#4Tzr!untuned_gemm_linegpu_idc< #$%dt|z}tjtjtjtjtj tj tjtjtjtjtjd }|jjddd}|djd}d}d}d}d} |dk(r|djd\} } } | ddk(} | d dk(}|j| }| d k(r*d tj j"j$_nd tj j"j$_n|djd}|d vsJ|djd}|d} |d dz|dz}|ddz|dz}|dk(r|ddz|dz}n|d}||ddk(} ||d dk(}|j|}|j|}|j|} |d jd}|d dDcgc] }t)|c}\%$#| dk(r+|ddk(sJ|ddDcgc] }t)|c}\}}}n*|ddk(sJ|ddDcgc] }t)|c}\}}}t+#$%fd|||fDrd }nd }| dk(r$d k(s %d k(s#d k(r,|s| sn'|r%d k(rnt-j.dd|d zy|t1|tj2st5d|t7$%#||||| ||| \}}tj8||y| dk(r$d k(s %d k(s#d k(rt-j.dd|d zy|ddDcgc] }t)|c}\}|t1|tj2st5d|t;$%#|||||| ||| \}}tj<||y| dk(r| d usJ|d usJ|t1|tj2st5d|t7$%#||||| |||d | \}}|dd k(sJ|d!d"k(rd }nd }|ri|rtj>d $f|#ntj>$d f|#}| rtj>d %f|#ntj>%d f|#}n.tj@d$|#}tj@d%|#}|dd&k(sJ|d'd(k(rtjB||||| )yd*}|j|d'} | rtjD%f|| |+ntjD$f|| |+}!tjB||||| |!,y| d-k(r|| k7sJ|t1|tj2st5d|tjF%||+}!t7$%#||||| ||| \}"}|jI}tjJjLjO|"||!yt-j.d.| ycc}wcc}wcc}wcc}w)/zProcess a single untuned GEMM.zcuda:) floattf32doubleBFloat16Halfzc10::complexzc10::complex Float8_e4m3fn Float8_e5m2Float8_e4m3fnuzFloat8_e5m2fnuz,Nr_rTrerTF)rrGemmStridedBatchedTunableOpld c3,K|] }|fv ywNr$).0itemrrrs r z/_process_single_offline_gemm..ts 94Aq!9  9s GemmTunableOpzJOffline tuning is not supported for this GEMM. Use online tuning instead. zSkipped tuning for: z%dtype must be a torch.dtype, but got )rzHOffline tuning is not support for this GEMM. Use online tuning instead. ScaledGemmTunableOp)rrrrw 1)rg?g?bias None)scale_ascale_b out_dtypeg?r)rrrrGemmAndBiasTunableOpzerror: unknown op )(rtrfloat32float64bfloat16half complex128 complex64 float8_e4m3fn float8_e5m2float8_e4m3fnuzfloat8_e5m2fnuzstripsplitcountgetbackendsrQmatmul allow_tf32intallwarningswarn isinstancer TypeErrorrmmrbmmonestensor _scaled_mmrrrnn functionallinear)&rrrV dtype_dict untuned_gemmunderscore_countrrrdtypeCop_sig data_typelayoutrrruntuned_gemm_temp data_typeA data_typeB data_typeCgrrrrrrrrowwisescaleAscaleBfillbias bias_dtyperXrrrs& @@@r rUrU.sV$H --NN % 0 0$,,(( 00 00 J%**,2237:L#A,,S1 E F F F1&21o&;&;C&@#Fc!c!y)  48ENN   & & 149ENN   & & 1Q%%c*(O11#6#1%&q)C/2CA2FF &q)C/2CA2FF A:*1-36G6JJJ*1-J"5)!,3"5)!,3 + + +$Q--c2!21Q!78AQ8IQ1 .. #t++++? = 5%++ >CE7KL L% q!S#sFFE8y d t 0 0 6Q!VqAv MMZ(a(9:;  0156!s1v6 = 5%++ >CE7KL L+         d  $ ( (~~ >FEKK!@CF8LM M%        d !#t+++ Q 3 &GG  Aq6(3ZZAx8  Aq6(3ZZAx8  \\#h7F\\#h7F $... R F *   dFFf H#(9"(=>J A4HMZZhjR    dFFfSW  ) ) = 5%++ >CE7KL Lzz!5:" q!S#sFFE8y 4vvx ""1dD1 *6(34c9DCP7sZ  Z5ZZctdur tjdtdtdusJt dusJt dusJy)zHelper function for multi-GPU tuning case. Need to check that TunableOp feature is enabled and that tuning is enabled. Fz-TunableOp was disabled. Trying to enable now.TN)rrrrrr r$r!r _check_tuning_assertionsrsT |u EFt <4    $ && & $ &% // /r!num_gpusct|}tjj}d|cxkr|ksJJt j d}g}g}d}t jj||t5}|D]2} |jt| |} |j| |dz|z}4t jj|D]} | jt|D](} |jt } |j| *t jj|D]} | j dddtjj#t%y#1swY2xYw)zDProcess one or more files and distribute work over one or more GPUs.respawnr) max_workers mp_context initializerN)r`rrQ device_countmp get_context concurrentfuturesProcessPoolExecutorrsubmitrUrq as_completedresultrsr synchronizer) rYrr^ total_gpusrr  flush_resultshexecutorrXfuturer flush_results r rr st@AQR((*J  &J && && &(JGM A    / /, 0 "  ( #D__%A4KF NN6 "Q("A # !((55g> F MMO x /A#??:6L   . /'..;;MJ "L    ! "-"2 JJ7""s ;CFF )T)Fr)rN)NTF)0__doc__concurrent.futuresr r\multiprocessingr rmrurtypingrr__all__boolrrrrrr rr r r r rtrrtuplerrrrrrrrrr[r`rrTensorrrrUrrr$r!r r s\tl    2))) 1D1 0t0t0848 .t.t.646 ?c?d?>> C#C$C@3@ K3KtKK3c3 2U3S%/02 5c3h5 5D5T59#9$98 88J#J$J?#? = = = S SX <1T%)B B B B B B B B B KKBB U[[ !B BB 5<< %&Bb- - - - - - - - - - ;;--- 5<< %&-`N5CN5N5N5b 0* S* C* D* r!