# The JIT

The [adaptive interpreter](interpreter.md) consists of a main loop that
executes the bytecode instructions generated by the
[bytecode compiler](compiler.md) and their
[specializations](interpreter.md#Specialization). Runtime optimization in
this interpreter can only be done for one instruction at a time. The JIT
is based on a mechanism to replace an entire sequence of bytecode instructions,
and this enables optimizations that span multiple instructions.

Historically, the adaptive interpreter was referred to as `tier 1` and
the JIT as `tier 2`. You will see remnants of this in the code.

## The Optimizer and Executors

The program begins running on the adaptive interpreter, until a `JUMP_BACKWARD`
instruction determines that it is "hot" because the counter in its
[inline cache](interpreter.md#inline-cache-entries) indicates that it
executed more than some threshold number of times (see
[`backoff_counter_triggers`](../Include/internal/pycore_backoff.h)).
It then calls the function `_PyOptimizer_Optimize()` in
[`Python/optimizer.c`](../Python/optimizer.c), passing it the current
[frame](frames.md) and instruction pointer. `_PyOptimizer_Optimize()`
constructs an object of type
[`_PyExecutorObject`](../Include/internal/pycore_optimizer.h) which implements
an optimized version of the instruction trace beginning at this jump.
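
The counting scheme is an exponential backoff: each execution decrements the
counter, reaching zero means the instruction is hot, and a failed attempt to
optimize re-arms the counter with a larger threshold. The sketch below
illustrates the idea only; the real, bit-packed implementation is
`_Py_BackoffCounter` in [`pycore_backoff.h`](../Include/internal/pycore_backoff.h),
and the names used here are made up.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t value;    /* decremented on each execution of the instruction */
    uint16_t backoff;  /* log2 of the threshold to re-arm with             */
} BackoffSketch;

/* Returns true when the instruction has become "hot". */
static bool
counter_triggers(BackoffSketch *c)
{
    if (c->value > 0) {
        c->value--;
        return false;
    }
    return true;
}

/* Called when optimization fails (or an executor is invalidated):
 * wait exponentially longer before trying again. */
static void
restart_counter(BackoffSketch *c)
{
    if (c->backoff < 12) {
        c->backoff++;
    }
    c->value = (uint16_t)(1 << c->backoff);
}
```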

The optimizer determines where the trace ends, and the executor is set up
to either return to the adaptive interpreter and resume execution, or
transfer control to another executor (see `_PyExitData` in
[`Include/internal/pycore_optimizer.h`](../Include/internal/pycore_optimizer.h)).
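
Each possible exit from a trace is described by an exit record holding the
bytecode target to resume at and, once the exit itself becomes hot, the
executor to transfer to. A simplified sketch of such a record follows; the
field names are illustrative, and the real definition is `_PyExitData`.

```c
#include <stdint.h>

struct ExecutorSketch;   /* stand-in for _PyExecutorObject */

typedef struct {
    uint32_t target;                  /* bytecode offset at which the adaptive
                                         interpreter resumes on this exit     */
    struct ExecutorSketch *executor;  /* filled in once this exit becomes hot,
                                         so control can jump straight to the
                                         next trace instead of leaving        */
} ExitDataSketch;
```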

The executor is stored on the [`code object`](code_objects.md) of the frame,
in the `co_executors` field which is an array of executors. The start
instruction of the trace (the `JUMP_BACKWARD`) is replaced by an
`ENTER_EXECUTOR` instruction whose `oparg` is equal to the index of the
executor in `co_executors`.
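
This makes run-time dispatch to an executor a single array lookup: the
`oparg` stored in the `ENTER_EXECUTOR` instruction indexes `co_executors`
directly. A minimal sketch of that lookup, using simplified stand-in types
rather than the real CPython definitions:

```c
#include <stdint.h>

struct ExecutorSketch;                  /* stand-in for _PyExecutorObject */

typedef struct {
    struct ExecutorSketch **executors;  /* stand-in for co_executors */
} CodeSketch;

typedef struct {
    uint8_t opcode;                     /* ENTER_EXECUTOR          */
    uint8_t oparg;                      /* index into co_executors */
} InstrSketch;

/* Dispatching ENTER_EXECUTOR needs only the oparg to find the trace. */
static struct ExecutorSketch *
enter_executor(const CodeSketch *code, InstrSketch inst)
{
    return code->executors[inst.oparg];
}
```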

## The micro-op optimizer

The optimizer that `_PyOptimizer_Optimize()` runs is configurable via the
`_Py_SetTier2Optimizer()` function (this is used in tests via
`_testinternalcapi.set_optimizer()`).

The micro-op optimizer (abbreviated `uop` to approximate `μop`) is defined in
[`Python/optimizer.c`](../Python/optimizer.c) as the type `_PyUOpOptimizer_Type`.
It translates an instruction trace into a sequence of micro-ops by replacing
each bytecode by an equivalent sequence of micro-ops (see
`_PyOpcode_macro_expansion` in
[`pycore_opcode_metadata.h`](../Include/internal/pycore_opcode_metadata.h),
which is generated from [`Python/bytecodes.c`](../Python/bytecodes.c)).
The micro-op sequence is then optimized by
`_Py_uop_analyze_and_optimize` in
[`Python/optimizer_analysis.c`](../Python/optimizer_analysis.c)
and a `_PyUOpExecutor_Type` is created to contain it.

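For example, a specialized instruction typically expands into one or more
guard micro-ops followed by an action micro-op, which is what later allows
the optimizer to drop guards that are redundant within a trace. The expansion
below is indicative only; the authoritative table is
`_PyOpcode_macro_expansion`, generated from
[`Python/bytecodes.c`](../Python/bytecodes.c).

```c
/* Sketch of how one specialized bytecode decomposes into micro-ops, modeled
 * on the DSL used in Python/bytecodes.c (not copied verbatim from it):
 *
 *     macro(BINARY_OP_ADD_INT) =
 *         _GUARD_BOTH_INT +      // deoptimize unless both operands are ints
 *         unused/1 +             // the instruction's inline cache entry
 *         _BINARY_OP_ADD_INT;    // the addition itself, with no type checks
 *
 * In a trace, guard uops of later instructions can often be proven redundant
 * by the optimizer and removed, which is the kind of cross-instruction
 * optimization the adaptive interpreter cannot do.
 */
```
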
## Debugging a uop executor in the JIT interpreter

After a `JUMP_BACKWARD` instruction invokes the uop optimizer to create a uop
executor, it transfers control to this executor via the `GOTO_TIER_TWO` macro.

When the JIT is configured to run on its interpreter (i.e., python is
configured with
[`--enable-experimental-jit=interpreter`](https://docs.python.org/dev/using/configure.html#cmdoption-enable-experimental-jit)),
the executor jumps to `tier2_dispatch:` in
[`Python/ceval.c`](../Python/ceval.c), where there is a loop that
executes the micro-ops. The body of this loop is a switch statement over
the uop IDs, with a case implementing each micro-op. The switch is in
[`Python/executor_cases.c.h`](../Python/executor_cases.c.h),
which is generated by the build script
[`Tools/cases_generator/tier2_generator.py`](../Tools/cases_generator/tier2_generator.py)
from the bytecode definitions in
[`Python/bytecodes.c`](../Python/bytecodes.c).
This loop exits when an `_EXIT_TRACE` or `_DEOPT` uop is reached,
and execution returns to the adaptive interpreter.
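
In outline, this loop is an ordinary interpreter loop over micro-ops rather
than bytecodes. The sketch below is schematic only; the real loop and its
case bodies are in [`Python/ceval.c`](../Python/ceval.c) and
`executor_cases.c.h`.

```c
#include <stdint.h>

/* Schematic trace representation; real uops carry an opcode, an oparg
 * and a 64-bit operand baked in when the trace was built. */
typedef struct {
    uint16_t opcode;
    uint16_t oparg;
    uint64_t operand;
} UOpSketch;

enum { UOP_LOAD_FAST, UOP_GUARD_INT, UOP_DEOPT, UOP_EXIT_TRACE };

/* Runs uops until the trace is left; returns the bytecode offset at which
 * the adaptive interpreter should resume. */
static int
run_trace(const UOpSketch *trace)
{
    for (const UOpSketch *uop = trace; ; uop++) {
        switch (uop->opcode) {
        case UOP_LOAD_FAST:
            /* ... push a local variable onto the evaluation stack ... */
            break;
        case UOP_GUARD_INT:
            /* ... check an assumption; on failure, leave the trace ... */
            break;
        case UOP_DEOPT:
        case UOP_EXIT_TRACE:
            return (int)uop->oparg;
        default:
            return -1;   /* unknown uop: bail out in this sketch */
        }
    }
}
```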

## Invalidating Executors

In addition to being stored on the code object, each executor is also
inserted into a list of all executors, which is stored in the interpreter
state's `executor_list_head` field. This list is used when it is necessary
to invalidate executors because values they used in their construction may
have changed.
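
A rough sketch of what invalidation amounts to: walk the per-interpreter list
and clear a validity flag, so that affected executors fall back to the
adaptive interpreter the next time they are entered. The structure and field
names below are illustrative, not the real ones in `pycore_optimizer.h`.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct ExecutorSketch {
    struct ExecutorSketch *next;  /* link in the per-interpreter list  */
    bool valid;                   /* cleared when an assumption breaks */
} ExecutorSketch;

static void
invalidate_all(ExecutorSketch *list_head)
{
    for (ExecutorSketch *e = list_head; e != NULL; e = e->next) {
        e->valid = false;   /* execution falls back to the adaptive
                               interpreter next time this executor runs */
    }
}
```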

## The JIT

When the full JIT is enabled (python was configured with
[`--enable-experimental-jit`](https://docs.python.org/dev/using/configure.html#cmdoption-enable-experimental-jit)),
the uop executor's `jit_code` field is populated with a pointer to a compiled
C function that implements the executor logic. This function's signature is
defined by the `jit_func` typedef in
[`pycore_jit.h`](../Include/internal/pycore_jit.h).
When the executor is invoked by `ENTER_EXECUTOR`, instead of jumping to
the uop interpreter at `tier2_dispatch`, the executor runs the function
that `jit_code` points to. This function returns the instruction pointer
of the next Tier 1 instruction that needs to execute.

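Schematically, the compiled entry point behaves like a function pointer stored
on the executor: it runs the trace and hands back the bytecode instruction at
which the adaptive interpreter should continue. The types below are
placeholders, not the real signature from `pycore_jit.h`.

```c
#include <stdint.h>

typedef uint16_t CodeUnitSketch;   /* stand-in for a bytecode instruction */
struct FrameSketch;                /* stand-in for the frame              */
struct ThreadStateSketch;          /* stand-in for the thread state       */

/* Shape of the compiled entry point stored in the executor's jit_code
 * field: it runs the trace and returns the next bytecode instruction
 * to execute in the adaptive interpreter. */
typedef CodeUnitSketch *(*JitFuncSketch)(struct FrameSketch *frame,
                                         struct ThreadStateSketch *tstate);

static CodeUnitSketch *
enter_jitted_executor(JitFuncSketch jit_code,
                      struct FrameSketch *frame,
                      struct ThreadStateSketch *tstate)
{
    /* ENTER_EXECUTOR ends up here instead of in the uop interpreter loop. */
    return jit_code(frame, tstate);
}
```
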
The generation of the jitted functions uses the copy-and-patch technique
which is described in
[Haoran Xu's article](https://sillycross.github.io/2023/05/12/2023-05-12/).
At its core are statically generated `stencils` for the implementation
of the micro-ops, which are completed with runtime information while
the jitted code for an executor is constructed by
[`_PyJIT_Compile`](../Python/jit.c). The stencils are generated at build time
by the scripts in [`Tools/jit`](../Tools/jit), which compile the micro-op
implementations, extract their machine code and relocations, and use them
to generate the file `jit_stencils.h`
that the JIT can use to emit code for each of the bytecodes.
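
The essence of copy-and-patch is that each micro-op has a pre-compiled
machine-code template with "holes" at known offsets, so emitting code for a
trace is just copying templates into executable memory and writing runtime
values into the holes. A toy sketch of that emit step (this is not the real
stencil format used by [`Python/jit.c`](../Python/jit.c)):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    const unsigned char *body;  /* template machine code for one micro-op */
    size_t size;                /* length of the template in bytes        */
    size_t hole_offset;         /* where a 64-bit immediate gets patched  */
} StencilSketch;

/* Copy the template, then patch the hole with a value known only when the
 * trace is compiled (e.g. a constant or the address of runtime data). */
static size_t
emit_uop(unsigned char *out, const StencilSketch *s, uint64_t runtime_value)
{
    memcpy(out, s->body, s->size);                                   /* copy  */
    memcpy(out + s->hole_offset, &runtime_value, sizeof runtime_value); /* patch */
    return s->size;   /* caller advances by this much for the next uop */
}
```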

For Python maintainers this means that changes to the bytecodes and
their implementations do not require changes related to the stencils,
because everything is automatically generated from
[`Python/bytecodes.c`](../Python/bytecodes.c) at build time.

See Also: