Almost all interpreter state is nominally stored in the frame structure. A pointer to the current frame is held in `frame`. Among other things, it contains the fast locals, the evaluation stack, and the instruction pointer, each described in more detail below.
There are some other fields in the frame structure of less importance; notably, frames are linked together in a singly linked list via the `previous` pointer, pointing from callee to caller. The frame also holds a pointer to the current function, globals, builtins, and the locals converted to a dict (used to support the `locals()` built-in).
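The layout described above can be pictured with a simplified, self-contained struct. This is only an illustrative sketch: it is not the real frame struct in CPython, and the field names and types are chosen to match the prose rather than the actual code.

```c
typedef struct PyObject PyObject;   /* opaque stand-in for object pointers */
typedef unsigned short codeunit;    /* stand-in for a bytecode code unit */

typedef struct frame_sketch {
    struct frame_sketch *previous;  /* callee -> caller link (singly linked list) */
    PyObject *funcobj;              /* the current function */
    PyObject *globals;              /* the globals dict */
    PyObject *builtins;             /* the builtins dict */
    PyObject *locals;               /* the locals converted to a dict, for locals() */
    codeunit *instr_ptr;            /* canonical instruction pointer */
    int stacktop;                   /* extent of the used part of localsplus */
    PyObject *localsplus[1];        /* fast locals followed by the evaluation stack */
} frame_sketch;
```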
The frame contains a single array of object pointers, `localsplus`, which contains both the fast locals and the stack. The top of the stack, counted from the start of `localsplus` (i.e. including the locals), is indicated by `stacktop`. For example, in a function with three locals, if the stack contains one value, `frame->stacktop == 4`.
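To make the `stacktop` arithmetic concrete, here is a minimal standalone sketch of the example above (three locals, one value pushed), using plain `void *` in place of object pointers:

```c
#include <assert.h>
#include <stdio.h>

#define NLOCALS 3
#define STACK_ROOM 8

int main(void)
{
    void *localsplus[NLOCALS + STACK_ROOM] = {0};  /* fast locals, then the stack */
    int stacktop = NLOCALS;                        /* empty stack: top == number of locals */

    int value = 42;
    localsplus[stacktop++] = &value;               /* push one value onto the stack */

    assert(stacktop == 4);                         /* 3 locals + 1 stack entry */
    printf("stacktop = %d\n", stacktop);
    return 0;
}
```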
The interpreters share an implementation which uses the same memory, but caches the depth (as a pointer) in a C local, `stack_pointer`. We aren't sure yet exactly how the JIT will implement the stack; likely some of the values near the top of the stack will be held in registers.
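A rough sketch of that caching pattern, with illustrative names and simplified spill points (not the real interpreter code):

```c
#include <stdio.h>

#define NLOCALS 2
#define STACK_ROOM 8

typedef struct {
    int stacktop;                       /* canonical, in-memory representation */
    void *localsplus[NLOCALS + STACK_ROOM];
} frame_sketch;

int main(void)
{
    frame_sketch f = { .stacktop = NLOCALS };

    /* On entry, derive the cached pointer from the frame. */
    void **stack_pointer = f.localsplus + f.stacktop;

    int a = 1, b = 2;
    *stack_pointer++ = &a;              /* fast pushes through the cached pointer */
    *stack_pointer++ = &b;

    /* Before anything that may inspect the frame, spill back to stacktop. */
    f.stacktop = (int)(stack_pointer - f.localsplus);
    printf("stacktop = %d\n", f.stacktop);  /* prints 4 */
    return 0;
}
```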
The canonical, in-memory representation of the instruction pointer is `frame->instr_ptr`. It always points to an instruction in the bytecode array of the frame's code object. Dispatching on `frame->instr_ptr` would be very inefficient, so in Tier 1 we cache the upcoming value of `frame->instr_ptr` in the C local `next_instr`.
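The following toy dispatch loop sketches that convention: `next_instr` lives in a C local, and the in-memory `frame->instr_ptr` is only brought up to date at explicit points. The opcodes and frame layout are invented for illustration.

```c
#include <stdio.h>

typedef unsigned short codeunit;   /* stand-in for a bytecode code unit */

enum { OP_NOP, OP_SYNC, OP_STOP }; /* toy opcodes, not real CPython ones */

typedef struct {
    codeunit *instr_ptr;           /* canonical instruction pointer */
} frame_sketch;

int main(void)
{
    codeunit code[] = { OP_NOP, OP_SYNC, OP_NOP, OP_STOP };
    frame_sketch frame = { .instr_ptr = code };

    codeunit *next_instr = frame.instr_ptr;   /* cached upcoming value */
    for (;;) {
        codeunit op = *next_instr++;
        switch (op) {
        case OP_NOP:
            break;
        case OP_SYNC:
            /* e.g. before running code that may inspect the frame, make the
             * in-memory representation canonical again */
            frame.instr_ptr = next_instr;
            break;
        case OP_STOP:
            frame.instr_ptr = next_instr;
            printf("stopped at offset %td\n", frame.instr_ptr - code);
            return 0;
        }
    }
}
```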
In Tier 2:

- `stack_pointer` is the same as in Tier 1 (but may be different in the JIT).
- There is no cached copy of `frame->instr_ptr`, as all stores to `frame->instr_ptr` are explicit.
- The Tier 2 optimizer is responsible for tracking `frame->instr_ptr`, emitting `_SET_IP` whenever `frame->instr_ptr` would have been updated (see the sketch below).
- The Tier 2 instruction pointer is strictly internal to the Tier 2 interpreter, so it isn't visible to any other part of the code.
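A toy sketch of how a `_SET_IP`-style micro-op keeps `frame->instr_ptr` up to date inside a trace; the uop format and opcodes here are illustrative, not the real Tier 2 structures.

```c
#include <stdio.h>

typedef unsigned short codeunit;

enum { UOP_SET_IP, UOP_WORK, UOP_EXIT };   /* toy micro-opcodes */

typedef struct {
    int opcode;
    codeunit *target;        /* for UOP_SET_IP: the Tier 1 instruction to record */
} uop;

typedef struct {
    codeunit *instr_ptr;     /* canonical instruction pointer */
} frame_sketch;

int main(void)
{
    codeunit bytecode[8] = {0};
    frame_sketch frame = { .instr_ptr = bytecode };

    uop trace[] = {
        { UOP_WORK,   NULL },
        { UOP_SET_IP, bytecode + 3 },  /* the following work needs a valid IP */
        { UOP_WORK,   NULL },
        { UOP_EXIT,   NULL },
    };

    for (uop *pc = trace; ; pc++) {    /* the Tier 2 "instruction pointer" is internal */
        switch (pc->opcode) {
        case UOP_WORK:
            break;
        case UOP_SET_IP:
            frame.instr_ptr = pc->target;   /* explicit store, as described above */
            break;
        case UOP_EXIT:
            printf("exit with instr_ptr at offset %td\n", frame.instr_ptr - bytecode);
            return 0;
        }
    }
}
```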
Unwinding uses exception tables to find the next point at which normal execution can resume, or fails if there are no exception handlers. During unwinding, both the stack and the instruction pointer should be in their canonical, in-memory representation.
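Conceptually, the handler lookup is a search of a table of offset ranges, as in this simplified sketch (the real exception table uses a compact, variable-length encoding and records more information per entry):

```c
#include <stdio.h>

typedef struct {
    int start;      /* first covered instruction offset */
    int end;        /* one past the last covered offset */
    int handler;    /* offset at which execution can resume */
} exc_entry;

/* Returns the handler offset, or -1 if the exception must propagate. */
static int find_handler(const exc_entry *table, int n, int offset)
{
    for (int i = 0; i < n; i++) {
        if (table[i].start <= offset && offset < table[i].end) {
            return table[i].handler;
        }
    }
    return -1;
}

int main(void)
{
    exc_entry table[] = { { 4, 20, 32 }, { 24, 28, 40 } };
    printf("handler for offset 10: %d\n", find_handler(table, 2, 10));  /* 32 */
    printf("handler for offset 22: %d\n", find_handler(table, 2, 22));  /* -1 */
    return 0;
}
```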
The implementation of jumps within a single Tier 2 superblock/trace is just that: an implementation. The implementation in the JIT and in the Tier 2 interpreter will necessarily be different. What they have in common is the representation in the Tier 2 optimizer.
We need several types of jumps, including jumps within a superblock and exits from a superblock back to other code (possibly patchable, so that an exit can later be redirected to another superblock).
Currently, we don't have patchable exits. Patching exits should be fairly straightforward in the interpreter. It will be more complex in the JIT.
(We might also consider deoptimizations as a separate jump type.)
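One way to picture a patchable exit, purely as an illustration of the idea (not how CPython implements or plans to implement it): an exit record whose target starts out as "resume in Tier 1" and can later be rewritten to point at another superblock.

```c
#include <stdio.h>

typedef struct {
    const char *target;    /* stand-in for a resume address / trace pointer */
    int hotness;           /* how often this exit has been taken */
} exit_stub;

static void take_exit(exit_stub *stub)
{
    stub->hotness++;
    printf("exit -> %s\n", stub->target);
    if (stub->hotness >= 2) {
        /* Patch the exit: future executions jump straight to another trace. */
        stub->target = "superblock B";
    }
}

int main(void)
{
    exit_stub e = { .target = "resume in Tier 1", .hotness = 0 };
    take_exit(&e);   /* exit -> resume in Tier 1 */
    take_exit(&e);   /* exit -> resume in Tier 1 (and then patched) */
    take_exit(&e);   /* exit -> superblock B */
    return 0;
}
```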
Another important piece of VM state is the thread state, held in `tstate`. The current frame pointer, `frame`, is always equal to `tstate->current_frame`. The thread state also holds the exception state (`tstate->exc_info`) and recursion tracking data (`tstate->py_recursion_remaining`, `tstate->c_stack*`).
The thread state is also used to access the interpreter state (`tstate->interp`), which is important since the “eval breaker” flags are stored there (`tstate->interp->ceval.eval_breaker`, an “atomic” variable), as well as the “PEP 523 function” (`tstate->interp->eval_frame`). The interpreter state also holds the optimizer state (`optimizer` and some counters). Note that the eval breaker may be moved to the thread state soon as part of the multicore (PEP 703) work.
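A simplified sketch of the fields mentioned above; the real thread state and interpreter state have many more members and different types, and the `must_break` helper is illustrative only.

```c
#include <stdatomic.h>
#include <stdio.h>

typedef struct PyObject PyObject;
typedef struct frame_sketch frame_sketch;

typedef struct {
    struct { atomic_int eval_breaker; } ceval;  /* "please interrupt the loop" flags */
    void *eval_frame;                           /* PEP 523 frame-evaluation hook */
    void *optimizer;                            /* Tier 2 optimizer state (plus counters) */
} interp_sketch;

typedef struct {
    frame_sketch *current_frame;    /* always equal to the C local `frame` */
    PyObject *exc_info;             /* exception state (simplified) */
    int py_recursion_remaining;     /* Python-level recursion budget */
    interp_sketch *interp;          /* the owning interpreter state */
} tstate_sketch;

/* The eval loop periodically checks the eval breaker, roughly like this. */
static int must_break(tstate_sketch *tstate)
{
    return atomic_load(&tstate->interp->ceval.eval_breaker) != 0;
}

int main(void)
{
    interp_sketch interp = {0};
    tstate_sketch tstate = { .interp = &interp };

    printf("break? %d\n", must_break(&tstate));    /* 0: nothing pending */
    atomic_store(&interp.ceval.eval_breaker, 1);   /* e.g. a signal arrived */
    printf("break? %d\n", must_break(&tstate));    /* 1: leave the fast path */
    return 0;
}
```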
The Tier 2 IR (Internal Representation) format is also the basis for the Tier 2 interpreter (though the two formats may eventually differ). This format is also used as input to the machine code generator (the JIT compiler).

Tier 2 IR entries are all the same size; there is no equivalent to `EXTENDED_ARG` or trailing inline cache entries. Each instruction is a struct with the following fields (all integers of varying sizes):
- `opcode`: the Tier 2 micro-op to execute. By convention, Tier 2 opcode names start with `_`.
- `oparg`: the argument. Usually the same as the Tier 1 oparg, after expansion of `EXTENDED_ARG` prefixes. Up to 32 bits.
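A sketch of what such a fixed-size entry could look like. The `opcode` and `oparg` fields follow the description above; the `target` and `operand` fields and all of the widths are assumptions for illustration, not the actual CPython layout.

```c
#include <stdint.h>

typedef struct {
    uint16_t opcode;    /* Tier 2 micro-op; wider than a Tier 1 opcode byte */
    uint16_t target;    /* illustrative extra field, e.g. a Tier 1 offset */
    uint32_t oparg;     /* argument, with EXTENDED_ARG already folded in */
    uint64_t operand;   /* illustrative extra field for larger operands */
} uop_instruction_sketch;
```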