import inspect
from contextlib import contextmanager
from typing import Any, Optional, TYPE_CHECKING, Union

import torch
import torch.fx.traceback as fx_traceback
from torch._logging import trace_structured
from torch.hub import tqdm

from . import config
from ._compatibility import compatibility
from ._lazy_graph_module import _make_graph_module
from ._symbolic_trace import Tracer
from .graph import Graph
from .graph_module import GraphModule
from .node import Argument, map_aggregate, map_arg, Node, Target
from .proxy import Proxy


if TYPE_CHECKING:
    from collections.abc import Iterator

__all__ = ["Interpreter", "Transformer"]


@compatibility(is_backward_compatible=True)
class Interpreter:
    """
    An Interpreter executes an FX graph Node-by-Node. This pattern
    can be useful for many things, including writing code
    transformations as well as analysis passes.

    Methods in the Interpreter class can be overridden to customize
    the behavior of execution. The map of overridable methods
    in terms of call hierarchy::

        run()
            +-- run_node
                +-- placeholder()
                +-- get_attr()
                +-- call_function()
                +-- call_method()
                +-- call_module()
                +-- output()

    Example:

        Suppose we want to swap all instances of ``torch.neg`` with
        ``torch.sigmoid`` and vice versa (including their ``Tensor``
        method equivalents). We could subclass Interpreter like so::

            class NegSigmSwapInterpreter(Interpreter):
                def call_function(
                    self, target: Target, args: Tuple, kwargs: Dict
                ) -> Any:
                    if target == torch.sigmoid:
                        return torch.neg(*args, **kwargs)
                    return super().call_function(target, args, kwargs)

                def call_method(self, target: Target, args: Tuple, kwargs: Dict) -> Any:
                    if target == "neg":
                        call_self, *args_tail = args
                        return call_self.sigmoid(*args_tail, **kwargs)
                    return super().call_method(target, args, kwargs)

            def fn(x):
                return torch.sigmoid(x).neg()

            gm = torch.fx.symbolic_trace(fn)
            input = torch.randn(3, 4)
            result = NegSigmSwapInterpreter(gm).run(input)
            torch.testing.assert_close(result, torch.neg(input).sigmoid())

    Args:
        module (torch.nn.Module): The module to be executed
        garbage_collect_values (bool): Whether to delete values after their last
            use within the Module's execution. This ensures optimal memory usage
            during execution. This can be disabled to, for example, examine all
            of the intermediate values in the execution by looking at the
            ``Interpreter.env`` attribute.
        graph (Optional[Graph]): If passed, the interpreter will execute this
            graph instead of ``module.graph``, using the provided ``module``
            argument to satisfy any requests for state.
    """

    @compatibility(is_backward_compatible=True)
    def __init__(
        self,
        module: torch.nn.Module,
        garbage_collect_values: bool = True,
        graph: Optional[Graph] = None,
    ) -> None:
        self.module = module
        self.submodules = dict(self.module.named_modules())
        if graph is not None:
            self.graph = graph
        else:
            self.graph = self.module.graph  # type: ignore[union-attr]
        self.env: dict[Node, Any] = {}
        self.name = "Interpreter"
        self.garbage_collect_values = garbage_collect_values
        self.extra_traceback = True

        if self.garbage_collect_values:
            # Run through the nodes in reverse order and record the first
            # observed use of each node. Since we iterate in reverse, this is
            # the *last* use of the node in the execution order of the
            # program, which we will use to free unused values.
            node_to_last_use: dict[Node, Node] = {}
            self.user_to_last_uses: dict[Node, list[Node]] = {}

            def register_last_uses(n: Node, user: Node) -> None:
                if n not in node_to_last_use:
                    node_to_last_use[n] = user
                    self.user_to_last_uses.setdefault(user, []).append(n)

            for node in reversed(self.graph.nodes):
                for n in node._input_nodes:
                    register_last_uses(n, node)

    @compatibility(is_backward_compatible=True)
    def run(
        self,
        *args,
        initial_env: Optional[dict[Node, Any]] = None,
        enable_io_processing: bool = True,
    ) -> Any:
        """
        Run `module` via interpretation and return the result.

        Args:
            *args: The arguments to the Module to run, in positional order
            initial_env (Optional[Dict[Node, Any]]): An optional starting
                environment for execution. This is a dict mapping `Node` to any
                value. This can be used, for example, to pre-populate results
                for certain `Nodes` so as to do only partial evaluation within
                the interpreter.
            enable_io_processing (bool): If true, we process the inputs and
                outputs with the graph's process_inputs and process_outputs
                functions first before using them.

        Returns:
            Any: The value returned from executing the Module
        """
        self.env = initial_env if initial_env is not None else {}

        # Positional function args are consumed left-to-right by
        # `placeholder` nodes. Use an iterator to keep track of
        # position and extract those values.
        if enable_io_processing:
            args = self.graph.process_inputs(*args)
        self.args_iter: Iterator[Any] = iter(args)
        pbar = tqdm(
            total=len(self.graph.nodes),
            desc=f"{self.name}: "
            f"{str(list(self.graph.nodes)) if config.verbose_progress else ''}",
            initial=0,
            position=0,
            leave=True,
            disable=config.disable_progress,
            delay=0,
        )

        for node in self.graph.nodes:
            pbar.update(1)
            if node in self.env:
                # Short-circuit if we already have this value. This could
                # be used, for example, for partial evaluation where the
                # caller has pre-populated values for certain nodes.
                continue

            try:
                self.env[node] = self.run_node(node)
            except Exception as e:
                if self.extra_traceback:
                    msg = f"While executing {node.format_node()}"
                    msg = f"{e.args[0]}\n\n{msg}" if e.args else str(msg)
                    msg += f"\nOriginal traceback:\n{node.stack_trace}"
                    if isinstance(self.module, GraphModule):
                        trace_structured(
                            "artifact",
                            metadata_fn=lambda: {
                                "name": "fx_interpreter_error",
                                "encoding": "string",
                            },
                            payload_fn=lambda: f"{msg}\nGraphModule: "
                            f"{self.module.print_readable(print_output=False, include_stride=True)}",
                        )
                        msg += (
                            "\nUse tlparse to see full graph. "
                            "(https://github.com/pytorch/tlparse?tab=readme-ov-file#tlparse-parse-structured-pt2-logs)"
                        )
                    e.args = (msg,) + e.args[1:]
                    if isinstance(e, KeyError):
                        raise RuntimeError(*e.args) from e
                raise

            if self.garbage_collect_values:
                for to_delete in self.user_to_last_uses.get(node, []):
                    del self.env[to_delete]

            if node.op == "output":
                output_val = self.env[node]
                return (
                    self.graph.process_outputs(output_val)
                    if enable_io_processing
                    else output_val
                )

    @compatibility(is_backward_compatible=False)
    def boxed_run(self, args_list):
        """
        Run `module` via interpretation and return the result. This uses the
        "boxed" calling convention, where you pass a list of arguments, which
        will be cleared by the interpreter. This ensures that input tensors
        are promptly deallocated.
        """
        args_iter = iter(args_list)
        env = {}
        for n in self.graph.nodes:
            if n.op == "placeholder":
                env[n] = next(args_iter)
        args_list.clear()
        return self.run(initial_env=env)

    @contextmanager
    def _set_current_node(self, node):
        with fx_traceback.set_current_meta(
            node, f"Interpreter_{self.__class__.__name__}"
        ):
            yield

    @compatibility(is_backward_compatible=True)
    def run_node(self, n: Node) -> Any:
        """
        Run a specific node ``n`` and return the result. Calls into
        placeholder, get_attr, call_function, call_method, call_module, or
        output depending on ``node.op``.

        Args:
            n (Node): The Node to execute

        Returns:
            Any: The result of executing ``n``
        """
        with self._set_current_node(n):
            args, kwargs = self.fetch_args_kwargs_from_env(n)
            assert isinstance(args, tuple)
            assert isinstance(kwargs, dict)
            return getattr(self, n.op)(n.target, args, kwargs)

    # Main Node running APIs
    @compatibility(is_backward_compatible=True)
    def placeholder(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute a ``placeholder`` node. Note that this is stateful:
        ``Interpreter`` maintains an internal iterator over
        arguments passed to ``run`` and this method returns
        next() on that iterator.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Returns:
            Any: The argument value that was retrieved.
        """
        assert isinstance(target, str)
        if target.startswith("*"):
            # For a starred parameter e.g. `*args`, retrieve all
            # remaining values from the args list.
            return list(self.args_iter)
        else:
            try:
                return next(self.args_iter)
            except StopIteration as si:
                if len(args) > 0:
                    return args[0]
                else:
                    raise RuntimeError(
                        f"Expected positional argument for parameter {target}, "
                        f"but one was not passed in!"
                    ) from si

    @compatibility(is_backward_compatible=True)
    def get_attr(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute a ``get_attr`` node. Will retrieve an attribute
        value from the ``Module`` hierarchy of ``self.module``.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Return:
            Any: The value of the attribute that was retrieved
        """
        assert isinstance(target, str)
        return self.fetch_attr(target)

    @compatibility(is_backward_compatible=True)
    def call_function(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute a ``call_function`` node and return the result.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Return
            Any: The value returned by the function invocation
        """
        assert not isinstance(target, str)

        # Execute the function and return the result
        return target(*args, **kwargs)

    @compatibility(is_backward_compatible=True)
    def call_method(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute a ``call_method`` node and return the result.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Return
            Any: The value returned by the method invocation
        """
        # args[0] is the `self` object for this method call
        self_obj, *args_tail = args

        # Execute the method and return the result
        assert isinstance(target, str)
        return getattr(self_obj, target)(*args_tail, **kwargs)

    @compatibility(is_backward_compatible=True)
    def call_module(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute a ``call_module`` node and return the result.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Return
            Any: The value returned by the module invocation
        """
        # Retrieve executed args and kwargs values from the environment,
        # then execute the module and return the result.
        assert isinstance(target, str)
        submod = self.fetch_attr(target)
        return submod(*args, **kwargs)

    @compatibility(is_backward_compatible=True)
    def output(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        """
        Execute an ``output`` node. This really just retrieves
        the value referenced by the ``output`` node and returns it.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation

        Return:
            Any: The return value referenced by the output node
        """
        return args[0]

    # Helper methods
    @compatibility(is_backward_compatible=True)
    def fetch_attr(self, target: str) -> Any:
        """
        Fetch an attribute from the ``Module`` hierarchy of ``self.module``.

        Args:
            target (str): The fully-qualified name of the attribute to fetch

        Return:
            Any: The value of the attribute.
        """
        target_atoms = target.split(".")
        attr_itr = self.module
        for i, atom in enumerate(target_atoms):
            if not hasattr(attr_itr, atom):
                raise RuntimeError(
                    f"Node referenced nonexistent target "
                    f"{'.'.join(target_atoms[: i + 1])}"
                )
            attr_itr = getattr(attr_itr, atom)
        return attr_itr

    @compatibility(is_backward_compatible=True)
    def fetch_args_kwargs_from_env(self, n: Node) -> tuple[tuple, dict]:
        """
        Fetch the concrete values of ``args`` and ``kwargs`` of node ``n``
        from the current execution environment.

        Args:
            n (Node): The node for which ``args`` and ``kwargs`` should be fetched.

        Return:
            Tuple[Tuple, Dict]: ``args`` and ``kwargs`` with concrete values for ``n``.
        """
        args = self.map_nodes_to_values(n.args, n)
        assert isinstance(args, tuple)
        kwargs = self.map_nodes_to_values(n.kwargs, n)
        assert isinstance(kwargs, dict)
        return args, kwargs

    @compatibility(is_backward_compatible=True)
    def map_nodes_to_values(self, args: Argument, n: Node) -> Argument:
        """
        Recursively descend through ``args`` and look up the concrete value
        of each ``Node`` in the current execution environment.

        Args:
            args (Argument): Data structure within which to look up concrete values.
            n (Node): Node to which ``args`` belongs. This is only used for
                error reporting.
        """

        def load_arg(n_arg: Node) -> Any:
            if n_arg not in self.env:
                raise RuntimeError(
                    f"Node {n} referenced nonexistent value {n_arg}! Run Graph.lint() "
                    f"to diagnose such issues"
                )
            return self.env[n_arg]

        return map_arg(args, load_arg)


@compatibility(is_backward_compatible=True)
class Transformer(Interpreter):
    """
    ``Transformer`` is a special type of interpreter that produces a
    new ``Module``. It exposes a ``transform()`` method that returns
    the transformed ``Module``. ``Transformer`` does not require
    arguments to run, as ``Interpreter`` does. ``Transformer`` works
    entirely symbolically.

    Example:

        Suppose we want to swap all instances of ``torch.neg`` with
        ``torch.sigmoid`` and vice versa (including their ``Tensor``
        method equivalents). We could subclass ``Transformer`` like so::

            class NegSigmSwapXformer(Transformer):
                def call_function(
                    self,
                    target: "Target",
                    args: Tuple[Argument, ...],
                    kwargs: Dict[str, Any],
                ) -> Any:
                    if target == torch.sigmoid:
                        return torch.neg(*args, **kwargs)
                    return super().call_function(target, args, kwargs)

                def call_method(
                    self,
                    target: "Target",
                    args: Tuple[Argument, ...],
                    kwargs: Dict[str, Any],
                ) -> Any:
                    if target == "neg":
                        call_self, *args_tail = args
                        return call_self.sigmoid(*args_tail, **kwargs)
                    return super().call_method(target, args, kwargs)

            def fn(x):
                return torch.sigmoid(x).neg()

            gm = torch.fx.symbolic_trace(fn)

            transformed: torch.nn.Module = NegSigmSwapXformer(gm).transform()
            input = torch.randn(3, 4)
            torch.testing.assert_close(transformed(input), torch.neg(input).sigmoid())

    Args:
        module (GraphModule): The ``Module`` to be transformed.
    """

    @compatibility(is_backward_compatible=True)
    def __init__(self, module):
        super().__init__(module)
        self.new_graph = Graph()
        self.new_graph.set_codegen(module.graph._codegen)

        class TransformerTracer(Tracer):
            def __init__(self, graph: Graph):
                super().__init__()
                self.graph = graph
                self.tensor_attrs: dict[torch.Tensor, str] = {}  # type: ignore[assignment]

            def is_leaf_module(self, _, __) -> bool:
                return True

        self.tracer = TransformerTracer(self.new_graph)
        self.tracer.root = module

    @compatibility(is_backward_compatible=True)
    def placeholder(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Proxy:
        """
        Execute a ``placeholder`` node. In ``Transformer``, this is
        overridden to insert a new ``placeholder`` into the output
        graph.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation
        """
        assert isinstance(target, str)
        default_value = next(iter(args)) if args else inspect.Signature.empty
        return Proxy(
            self.new_graph.placeholder(target, default_value=default_value), self.tracer
        )

    @compatibility(is_backward_compatible=True)
    def get_attr(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Proxy:
        """
        Execute a ``get_attr`` node. In ``Transformer``, this is
        overridden to insert a new ``get_attr`` node into the output
        graph.

        Args:
            target (Target): The call target for this node. See
                `Node <https://pytorch.org/docs/main/fx.html#torch.fx.Node>`__ for
                details on semantics
            args (Tuple): Tuple of positional args for this invocation
            kwargs (Dict): Dict of keyword arguments for this invocation
        """
        assert isinstance(target, str)
        return self.tracer.create_proxy("get_attr", target, args, kwargs)

    @compatibility(is_backward_compatible=True)
    def call_module(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        # Override so that the leaf module policy from `self.tracer` applies.
        assert isinstance(target, str)
        submod = self.fetch_attr(target)
        return self.tracer.call_module(submod, submod.forward, args, kwargs)

    @compatibility(is_backward_compatible=True)
    def call_function(
        self, target: "Target", args: tuple[Argument, ...], kwargs: dict[str, Any]
    ) -> Any:
        return self.tracer.create_proxy("call_function", target, args, kwargs)

    @compatibility(is_backward_compatible=True)
    def transform(self) -> GraphModule:
        """
        Transform ``self.module`` and return the transformed ``GraphModule``.
        """
        with fx_traceback.preserve_node_meta():
            result = super().run(enable_io_processing=False)
        if result is not None:

            def strip_proxy(a: Union[Argument, Proxy]) -> Any:
                return a.node if isinstance(a, Proxy) else a

            new_output_node = self.new_graph.output(map_aggregate(result, strip_proxy))
            # Also preserve the metadata from the old output node, if it exists.
            old_output_node = list(self.graph.nodes)[-1]
            assert old_output_node.op == "output"
            for k, v in old_output_node.meta.items():
                new_output_node.meta[k] = v

        return _make_graph_module(self.module, self.new_graph)
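# ---------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the library surface): a
# minimal demonstration of overriding ``Interpreter.call_function`` to swap
# ``torch.neg`` and ``torch.sigmoid``, mirroring the class docstring example.
# It assumes ``torch`` is installed and uses the public ``torch.fx`` entry
# points; the helper name ``_neg_sigm_swap_demo`` is hypothetical.
def _neg_sigm_swap_demo():
    import torch
    import torch.fx

    def fn(x):
        # Traced as: placeholder -> call_function(sigmoid) -> call_function(neg)
        return torch.neg(torch.sigmoid(x))

    class NegSigmSwapInterpreter(torch.fx.Interpreter):
        # Swap sigmoid <-> neg when executing call_function nodes.
        def call_function(self, target, args, kwargs):
            if target == torch.sigmoid:
                return torch.neg(*args, **kwargs)
            if target == torch.neg:
                return torch.sigmoid(*args, **kwargs)
            return super().call_function(target, args, kwargs)

    gm = torch.fx.symbolic_trace(fn)
    x = torch.randn(3, 4)
    # The swapped execution computes sigmoid(neg(x)) instead of neg(sigmoid(x)).
    out = NegSigmSwapInterpreter(gm).run(x)
    return x, out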