# Instruction Set Architecture
Triton VM is a stack machine with RAM. It is a Harvard architecture with read-only memory for the program. The arithmetization of the VM is defined over the B-field $\mathbb{F}_p$ where $p$ is the Oxfoi prime, i.e., $p = 2^{64} - 2^{32} + 1$.[^oxfoi] This means the registers and memory elements take values from $\mathbb{F}_p$, and the transition function gives rise to low-degree transition verification polynomials from the ring of multivariate polynomials over $\mathbb{F}_p$.

At certain points in the course of arithmetization we need an extension field over $\mathbb{F}_p$ to ensure sufficient soundness. To this end we use $\mathbb{F}_{p^3} = \mathbb{F}_p[X] / \langle X^3 - X + 1 \rangle$, i.e., the quotient ring of remainders of polynomials after division by the Shah polynomial, $X^3 - X + 1$. We refer to this field as the X-field for short, and to its elements as X-field elements.
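To make the two fields concrete, here is a small Python sketch (for illustration only, not the production implementation; the names `P` and `xfield_mul` and the tuple encoding of X-field elements are this snippet's own) of B-field reduction and X-field multiplication modulo the Shah polynomial:

```python
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def xfield_mul(a, b):
    """Multiply two X-field elements given as coefficient triples
    (a0, a1, a2) representing a0 + a1*X + a2*X^2, reducing the
    schoolbook product by the Shah polynomial X^3 - X + 1."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    # Schoolbook product: degree-4 polynomial d0 + d1*X + ... + d4*X^4.
    d0 = a0 * b0
    d1 = a0 * b1 + a1 * b0
    d2 = a0 * b2 + a1 * b1 + a2 * b0
    d3 = a1 * b2 + a2 * b1
    d4 = a2 * b2
    # Reduce using X^3 = X - 1 and hence X^4 = X^2 - X.
    return ((d0 - d3) % P, (d1 + d3 - d4) % P, (d2 + d4) % P)
```

For instance, multiplying $X$ by $X^2$ yields $X^3 \equiv X - 1$, i.e., the triple $(p - 1, 1, 0)$.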
Instructions have variable width:
they either consist of one word, i.e., one B-field element, or of two words, i.e., two B-field elements.
An example for a single-word instruction is `add`, adding the two elements on top of the stack and leaving the result as the new top of the stack.
An example for a double-word instruction is `push` + `arg`, pushing `arg` to the stack.
Triton VM has two interfaces for data input, one for public and one for secret data, and one interface for data output, whose data is always public. The public interfaces differ from the private one, especially regarding their arithmetization.
[^oxfoi]: The name “Oxfoi” comes from the prime's hexadecimal representation `0xffffffff00000001`.
## Data Structures

### Memory

The term memory refers to a data structure that gives read access (and possibly write access, too) to elements indicated by an address. Regardless of the data structure, the address lives in the B-field. There are four separate notions of memory:

- Program Memory, from which the VM reads instructions.
- Op Stack Memory, which stores the operational stack.
- RAM, to which the VM can read and write field elements.
- Jump Stack Memory, which stores the entire jump stack.
### Program Memory
Program memory holds the instructions of the program currently executed by Triton VM. It is immutable.
### Operational Stack

The stack is a last-in, first-out data structure that allows the program to store intermediate variables, pass arguments, and keep pointers to objects held in RAM. In this document, the operational stack is either referred to as just “stack” or, if more clarity is desired, “op stack.”
From the Virtual Machine's point of view, the stack is a single, continuous object. The first 16 elements of the stack can be accessed very conveniently. Elements deeper in the stack require removing some of the higher elements, possibly after storing them in RAM.
For reasons of arithmetization, the stack is actually split into two distinct parts:

- the operational stack registers `st0` through `st15`, and
- the Op Stack Underflow Memory.
The motivation and the interplay between the two parts are described and exemplified in the arithmetization of the Op Stack table.
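The split between stack registers and underflow memory can be sketched as follows. This is an illustrative model only — the class and method names are hypothetical, and the exact ordering of the program digest within `st11` through `st15` is an assumption of this sketch:

```python
class OpStack:
    """Toy model: 16 directly accessible registers st0..st15,
    backed by an underflow memory for everything deeper."""
    NUM_REGISTERS = 16

    def __init__(self, program_digest):
        # st11..st15, the very bottom, hold the program digest; the rest are 0.
        assert len(program_digest) == 5
        self.elements = list(program_digest) + [0] * 11  # bottom ... top

    def push(self, value):
        self.elements.append(value)

    def pop(self):
        # The stack must never shrink below 16 elements.
        if len(self.elements) <= self.NUM_REGISTERS:
            raise RuntimeError("op stack underflow: VM crashes")
        return self.elements.pop()

    def registers(self):
        # st0 is the top of the stack.
        return list(reversed(self.elements[-self.NUM_REGISTERS:]))

    def underflow(self):
        # Everything below the 16 registers lives in underflow memory.
        return self.elements[:-self.NUM_REGISTERS]
```

Pushing a 17th element moves the bottom register's value into underflow memory; popping it moves that value back into the registers.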
### Random Access Memory

Triton VM has dedicated Random Access Memory.
It can hold up to $p$ many base field elements, where $p$ is the Oxfoi prime.[^ram-size]
Programs can read from and write to RAM using instructions `read_mem` and `write_mem`.
The initial RAM is determined by the entity running Triton VM. Populating RAM this way can be beneficial for a program's execution and proving time, especially if substantial amounts of data from the input streams need to be persisted in RAM. This initialization is one form of secret input, and one of two mechanisms that make Triton VM a non-deterministic virtual machine. The other mechanism is dedicated instructions.
### Jump Stack

Another last-in, first-out data structure, similar to the op stack.
The jump stack keeps track of return and destination addresses.
It changes only when control follows a `call` or `return` instruction, and might change through the `recurse_or_return` instruction.
Furthermore, executing instructions `return`, `recurse`, and `recurse_or_return` requires a non-empty jump stack.
[^ram-size]: Of course, the machine running Triton VM might have stricter limitations: storing or accessing $(2^{64} - 2^{32} + 1) \cdot 63.99$ bits $\approx 148$ exabytes of data is a non-trivial engineering feat.
## Registers
This section covers all columns in the Processor Table. Only a subset of these registers relate to the instruction set; the remaining registers exist only to enable an efficient arithmetization and are marked with an asterisk (*).
| Register             | Name                                      | Purpose |
|:---------------------|:------------------------------------------|:--------|
| *`clk`               | cycle counter                             | counts the number of cycles the program has been running for |
| *`IsPadding`         | padding indicator                         | indicates whether current state is only recorded to improve on STARK's computational runtime |
| `ip`                 | instruction pointer                       | contains the memory address (in Program Memory) of the instruction |
| `ci`                 | current instruction register              | contains the current instruction |
| `nia`                | next instruction register                 | contains either the instruction at the next address in Program Memory, or the argument for the current instruction |
| *`ib0` through `ib6` | instruction bit                           | decomposition of the instruction's opcode used to keep the AIR degree low |
| `jsp`                | jump stack pointer                        | contains the memory address (in jump stack memory) of the top of the jump stack |
| `jso`                | jump stack origin                         | contains the value of the instruction pointer of the last `call` |
| `jsd`                | jump stack destination                    | contains the argument of the last `call` |
| `st0` through `st15` | operational stack registers               | contain explicit operational stack values |
| *`op_stack_pointer`  | operational stack pointer                 | the current size of the operational stack |
| *`hv0` through `hv5` | helper variable registers                 | helper variables for some arithmetic operations |
| *`cjd_mul`           | clock jump difference lookup multiplicity | multiplicity with which the current `clk` is looked up by the Op Stack Table, RAM Table, and Jump Stack Table |
### Instruction

Register `ip`, the instruction pointer, contains the address of the current instruction in Program Memory.
The instruction is contained in the register current instruction, or `ci`.
Register next instruction (or argument), or `nia`, either contains the next instruction or the argument for the current instruction in `ci`.
For reasons of arithmetization, `ci` is decomposed, giving rise to the instruction bit registers, labeled `ib0` through `ib6`.
### Stack

The stack is represented by 16 registers called stack registers (`st0` – `st15`) plus the op stack underflow memory.
The top 16 elements of the op stack are directly accessible; the remainder of the op stack, i.e., the part held in op stack underflow memory, is not.
In order to access elements of the op stack held in op stack underflow memory, the stack has to shrink by discarding elements from the top – potentially after writing them to RAM – thus moving lower elements into the stack registers.

The stack grows upwards, in line with the metaphor that justifies the name "stack".

For reasons of arithmetization, the stack always contains a minimum of 16 elements. Trying to run an instruction which would result in a stack of smaller total length than 16 crashes the VM.

Stack elements `st0` through `st10` are initially 0.
Stack elements `st11` through `st15`, i.e., the very bottom of the stack, are initialized with the hash digest of the program that is being executed.
See the mechanics of program attestation for further explanations on stack initialization.

The register `op_stack_pointer` is not directly accessible by the program running in Triton VM.
It exists only to allow efficient arithmetization.
### Helper Variables

Some instructions require helper variables in order to generate an efficient arithmetization.
To this end, there are 6 helper variable registers, labeled `hv0` through `hv5`.
These registers are part of the arithmetization of the architecture, but not needed to define the instruction set.

Because they are only needed for some instructions, the helper variables are not generally defined.
For instruction group `decompose_arg` and instructions `skiz`, `split`, `eq`, `merkle_step`, `merkle_step_mem`, `xx_dot_step`, and `xb_dot_step`, the behavior is defined in the respective sections.
## Instructions

Most instructions are contained within a single, parameterless machine word.
Some instructions take a machine word as argument and are thus considered double-word instructions.
They are recognized by the form “`instr` + `arg`”.
### Regarding Opcodes

An instruction's operation code, or opcode, is the machine word uniquely identifying the instruction. For reasons of efficient arithmetization, certain properties of the instruction are encoded in the opcode. Concretely, interpreting the field element in standard representation:

- for all double-word instructions, the least significant bit is 1.
- for all instructions shrinking the operational stack, the second-to-least significant bit is 1.
- for all u32 instructions, the third-to-least significant bit is 1.

The first property is used by instruction `skiz`. The second property helps with proving consistency of the Op Stack. The third property allows efficient arithmetization of the running product for the Permutation Argument between Processor Table and U32 Table.
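The three encoding rules can be checked directly against the opcodes listed in the tables below. A minimal sketch (the predicate names are this snippet's own):

```python
# Decode the three property bits from an instruction's opcode.
def takes_argument(opcode):
    return opcode & 0b001 != 0  # double-word instruction

def shrinks_stack(opcode):
    return opcode & 0b010 != 0  # instruction shrinks the op stack

def is_u32(opcode):
    return opcode & 0b100 != 0  # instruction is delegated to the u32 coprocessor
```

For example, `pop` (opcode 3 = `0b011`) takes an argument and shrinks the stack, while `lt` (opcode 6 = `0b110`) shrinks the stack and is a u32 instruction.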
### Op Stack Manipulation

| Instruction    | Opcode | old op stack        | new op stack          | Description |
|:---------------|:-------|:--------------------|:----------------------|:------------|
| `pop` + `n`    | 3      | e.g., `_ c b a`     | e.g., `_`             | Pops the `n` top elements from the stack. 1 ⩽ `n` ⩽ 5 |
| `push` + `a`   | 1      | `_`                 | `_ a`                 | Pushes `a` onto the stack. |
| `divine` + `n` | 9      | e.g., `_`           | e.g., `_ b a`         | Pushes `n` non-deterministic elements `a` to the stack. Interface for secret input. 1 ⩽ `n` ⩽ 5 |
| `pick` + `i`   | 17     | e.g., `_ d x c b a` | e.g., `_ d c b a x`   | Moves the element indicated by `i` to the top of the stack. 0 ⩽ `i` < 16 |
| `place` + `i`  | 25     | e.g., `_ d c b a x` | e.g., `_ d x c b a`   | Moves the top of the stack to the indicated position `i`. 0 ⩽ `i` < 16 |
| `dup` + `i`    | 33     | e.g., `_ e d c b a` | e.g., `_ e d c b a d` | Duplicates the element `i` positions away from the top. 0 ⩽ `i` < 16 |
| `swap` + `i`   | 41     | e.g., `_ e d c b a` | e.g., `_ e a c b d`   | Swaps the `i`-th stack element with the top of the stack. 0 ⩽ `i` < 16 |
Instruction `divine n` (together with `merkle_step`) makes Triton a virtual machine that can execute non-deterministic programs.
As programs go, this concept is somewhat unusual and benefits from additional explanation.
The name of the instruction is the verb (not the adjective) meaning “to discover by intuition or insight.”

From the perspective of the program, the instruction `divine n` makes some `n` elements magically appear on the stack.
It is not at all specified what those elements are, but generally speaking, they have to be exactly correct, else execution fails.
Hence, from the perspective of the program, it just non-deterministically guesses the correct values in a moment of divine clarity.

Looking at the entire system, consisting of the VM, the program, and all inputs – both public and secret – execution is deterministic: the divined values were supplied as and are read from secret input.
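The pure stack effects of `pick`, `place`, `dup`, and `swap` from the table above can be sketched as list operations (the stack is modeled as a Python list with the top at the end; function names mirror the instructions but the model is this snippet's own):

```python
def pick(stack, i):
    """Move the element i positions from the top to the top."""
    stack.append(stack.pop(-1 - i))

def place(stack, i):
    """Move the top of the stack to position i from the top."""
    top = stack.pop()
    stack.insert(len(stack) - i, top)

def dup(stack, i):
    """Duplicate the element i positions from the top."""
    stack.append(stack[-1 - i])

def swap(stack, i):
    """Swap the i-th stack element with the top of the stack."""
    stack[-1], stack[-1 - i] = stack[-1 - i], stack[-1]
```

Note that `pick` and `place` are inverses of each other for the same `i`, as the example stacks in the table suggest.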
### Control Flow

| Instruction | Opcode | old op stack | new op stack | old `ip` | new `ip` | old jump stack | new jump stack | Description |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| `halt` | 0 | `_` | `_` | `ip` | `ip+1` | `_` | `_` | Solves the halting problem (if the instruction is reached). Indicates graceful shutdown of the VM. |
| `nop` | 8 | `_` | `_` | `ip` | `ip+1` | `_` | `_` | Do nothing. |
| `skiz` | 2 | `_ a` | `_` | `ip` | `ip+s` | `_` | `_` | Skip next instruction if `a` is zero. `s` ∈ {1, 2, 3} depends on `a` and whether the next instruction takes an argument. |
| `call` + `d` | 49 | `_` | `_` | `ip` | `d` | `_` | `_ (ip+2, d)` | Push `(ip+2, d)` to the jump stack, and jump to absolute address `d`. |
| `return` | 16 | `_` | `_` | `ip` | `o` | `_ (o, d)` | `_` | Pop one pair off the jump stack and jump to that pair's return address (which is the first element). |
| `recurse` | 24 | `_` | `_` | `ip` | `d` | `_ (o, d)` | `_ (o, d)` | Peek at the top pair of the jump stack and jump to that pair's destination address (which is the second element). |
| `recurse_or_return` | 32 | `_ b a .....` | `_ b a .....` | `ip` | `d` or `o` | `_ (o, d)` | `_ (o, d)` or `_` | Like `recurse` if `st5 = a != b = st6`, like `return` if `a == b`. See also extended description below. |
| `assert` | 10 | `_ a` | `_` | `ip` | `ip+1` | `_` | `_` | Pops `a` if `a == 1`, else crashes the virtual machine. |
The instructions `return`, `recurse`, and `recurse_or_return` require a non-empty jump stack.
Should the jump stack be empty, executing any of these instructions causes Triton VM to crash.

Instruction `recurse_or_return` behaves – surprise! – either like instruction `recurse` or like instruction `return`.
The (deterministic) decision which behavior to exhibit is made at runtime and depends on stack elements `st5` and `st6`.
If `st5 != st6`, then `recurse_or_return` acts like instruction `recurse`, else like `return`.
The instruction is designed to facilitate loops using pointer equality as termination condition and to play nicely with instructions `merkle_step` and `merkle_step_mem`.
### Memory Access

| Instruction | Opcode | old op stack | new op stack | old RAM | new RAM | Description |
|:---|:---|:---|:---|:---|:---|:---|
| `read_mem` + `n` | 57 | e.g., `_ p+2` | e.g., `_ v2 v1 v0 p-1` | [p: v0, p+1: v1, …] | [p: v0, p+1: v1, …] | Reads consecutive values `vi` from RAM at address `p` and puts them onto the op stack. Decrements RAM pointer (`st0`) by `n`. 1 ⩽ `n` ⩽ 5 |
| `write_mem` + `n` | 11 | e.g., `_ v2 v1 v0 p` | e.g., `_ p+3` | [] | [p: v0, p+1: v1, …] | Writes op stack's `n` topmost values `vi` to RAM at the addresses `p+i`, popping the `vi`. Increments RAM pointer (`st0`) by `n`. 1 ⩽ `n` ⩽ 5 |
For the benefit of clarity, the effect of every possible argument is given below.
| instruction | old op stack | new op stack | old RAM | new RAM |
|:---|:---|:---|:---|:---|
| `read_mem 1` | `_ p` | `_ a p-1` | [p: a] | [p: a] |
| `read_mem 2` | `_ p+1` | `_ b a p-1` | [p: a, p+1: b] | [p: a, p+1: b] |
| `read_mem 3` | `_ p+2` | `_ c b a p-1` | [p: a, p+1: b, p+2: c] | [p: a, p+1: b, p+2: c] |
| `read_mem 4` | `_ p+3` | `_ d c b a p-1` | [p: a, p+1: b, p+2: c, p+3: d] | [p: a, p+1: b, p+2: c, p+3: d] |
| `read_mem 5` | `_ p+4` | `_ e d c b a p-1` | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] |
| `write_mem 1` | `_ a p` | `_ p+1` | [] | [p: a] |
| `write_mem 2` | `_ b a p` | `_ p+2` | [] | [p: a, p+1: b] |
| `write_mem 3` | `_ c b a p` | `_ p+3` | [] | [p: a, p+1: b, p+2: c] |
| `write_mem 4` | `_ d c b a p` | `_ p+4` | [] | [p: a, p+1: b, p+2: c, p+3: d] |
| `write_mem 5` | `_ e d c b a p` | `_ p+5` | [] | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] |
### Hashing

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `hash` | 18 | `_ jihgfedcba` | `_ yxwvu` | Hashes the stack's 10 topmost elements and puts their digest onto the stack, shrinking the stack by 5. |
| `assert_vector` | 26 | `_ edcba edcba` | `_ edcba` | Assert equality of `st(i)` to `st(i+5)` for `0 ⩽ i < 5`. Crashes the VM if any pair is unequal. Pops the 5 topmost elements. |
| `sponge_init` | 40 | `_` | `_` | Initializes (resets) the Sponge's state. Must be the first Sponge instruction executed. |
| `sponge_absorb` | 34 | `_ jihgfedcba` | `_` | Absorbs the stack's ten topmost elements into the Sponge state. |
| `sponge_absorb_mem` | 48 | `_ dcba p` | `_ hgfe (p+10)` | Absorbs the ten RAM elements at addresses `p`, `p+1`, … into the Sponge state. Overwrites stack elements `st1` through `st4` with the first four absorbed elements. |
| `sponge_squeeze` | 56 | `_` | `_ zyxwvutsrq` | Squeezes the Sponge and pushes the 10 squeezed elements onto the stack. |
The instruction `hash` works as follows.
The stack's 10 topmost elements (`jihgfedcba`) are popped from the stack, reversed, and concatenated with six zeros, resulting in `abcdefghij000000`.
The Tip5 permutation is applied to `abcdefghij000000`, resulting in `αβγδεζηθικuvwxyz`.
The first five elements of this result, i.e., `αβγδε`, are reversed and pushed to the stack.
For example, if the old stack was `_ jihgfedcba`, then the new stack is `_ εδγβα`.
The instructions `sponge_init`, `sponge_absorb`, `sponge_absorb_mem`, and `sponge_squeeze` are the interface for using the Tip5 permutation in a Sponge construction.
The capacity is never accessible to the program that's being executed by Triton VM.
At any given time, at most one Sponge state exists.
Only instruction `sponge_init` resets the state of the Sponge, and only the Sponge instructions influence the Sponge's state.
Notably, executing instruction `hash` does not modify the Sponge's state.
When using the Sponge instructions, it is the programmer's responsibility to take care of proper input padding:
Triton VM cannot know the number of elements that will be absorbed.
### Base Field Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `add` | 42 | `_ b a` | `_ c` | Computes the sum (`c`) of the top two elements of the stack (`b` and `a`) over the field. |
| `addi` + `a` | 65 | `_ b` | `_ c` | Computes the sum (`c`) of the top element of the stack (`b`) and the immediate argument (`a`). |
| `mul` | 50 | `_ b a` | `_ c` | Computes the product (`c`) of the top two elements of the stack (`b` and `a`) over the field. |
| `invert` | 64 | `_ a` | `_ b` | Computes the multiplicative inverse (over the field) of the top of the stack. Crashes the VM if the top of the stack is 0. |
| `eq` | 58 | `_ b a` | `_ (a == b)` | Tests the top two stack elements for equality. |
### Bitwise Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `split` | 4 | `_ a` | `_ hi lo` | Decomposes the top of the stack into the lower 32 bits and the upper 32 bits. |
| `lt` | 6 | `_ b a` | `_ a<b` | “Less than” of the stack's two topmost elements. Crashes the VM if `a` or `b` is not u32. |
| `and` | 14 | `_ b a` | `_ a&b` | Bitwise and of the stack's two topmost elements. Crashes the VM if `a` or `b` is not u32. |
| `xor` | 22 | `_ b a` | `_ a^b` | Bitwise exclusive or of the stack's two topmost elements. Crashes the VM if `a` or `b` is not u32. |
| `log_2_floor` | 12 | `_ a` | `_ ⌊log₂(a)⌋` | The number of bits in `a` minus 1, i.e., $\lfloor \log_2(a) \rfloor$. Crashes the VM if `a` is 0 or not u32. |
| `pow` | 30 | `_ e b` | `_ b**e` | The top of the stack to the power of the stack's runner-up. Crashes the VM if exponent `e` is not u32. |
| `div_mod` | 20 | `_ d n` | `_ q r` | Division with remainder of numerator `n` by denominator `d`. Guarantees the properties `n == q·d + r` and `r < d`. Crashes the VM if `n` or `d` is not u32 or if `d` is 0. |
| `pop_count` | 28 | `_ a` | `_ w` | Computes the Hamming weight or “population count” of `a`. Crashes the VM if `a` is not u32. |
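The u32 instructions above have straightforward integer semantics, sketched here in Python (the VM would crash on non-u32 operands; this sketch merely asserts the ranges):

```python
U32 = 2**32

def split(a):
    """Decompose a value into its upper and lower 32 bits."""
    return a >> 32, a & (U32 - 1)

def div_mod(n, d):
    """Division with remainder: n == q*d + r and r < d."""
    assert 0 <= n < U32 and 0 < d < U32
    return n // d, n % d

def log_2_floor(a):
    """Number of bits in a, minus 1."""
    assert 0 < a < U32
    return a.bit_length() - 1

def pop_count(a):
    """Hamming weight of a."""
    assert 0 <= a < U32
    return bin(a).count("1")
```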
### Extension Field Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `xx_add` | 66 | `_ z y x b c a` | `_ w v u` | Adds the two extension field elements encoded by field elements `z y x` and `b c a`. |
| `xx_mul` | 74 | `_ z y x b c a` | `_ w v u` | Multiplies the two extension field elements encoded by field elements `z y x` and `b c a`. |
| `x_invert` | 72 | `_ z y x` | `_ w v u` | Inverts the extension field element encoded by field elements `z y x` in place. Crashes the VM if the extension field element is 0. |
| `xb_mul` | 82 | `_ z y x a` | `_ w v u` | Scalar multiplication of the extension field element encoded by field elements `z y x` with field element `a`. Overwrites `z y x` with the result. |
### Input/Output

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `read_io` + `n` | 73 | e.g., `_` | e.g., `_ c b a` | Reads `n` B-field elements from standard input and pushes them to the stack. 1 ⩽ `n` ⩽ 5 |
| `write_io` + `n` | 19 | e.g., `_ c b a` | e.g., `_` | Pops `n` elements from the stack and writes them to standard output. 1 ⩽ `n` ⩽ 5 |
### Many-in-One

| Instruction | Opcode | old op stack | new op stack | Description |
|:---|:---|:---|:---|:---|
| `merkle_step` | 36 | `_ i edcba` | `_ (i div 2) zyxwv` | Helps traversing a Merkle tree during authentication path verification. Crashes the VM if `i` is not u32. See extended description below. |
| `merkle_step_mem` | 44 | `_ p f i edcba` | `_ p+5 f (i div 2) zyxwv` | Helps traversing a Merkle tree during authentication path verification, with the authentication path being supplied in RAM. Crashes the VM if `i` is not u32. See extended description below. |
| `xx_dot_step` | 80 | `_ z y x *b *a` | `_ z+p2 y+p1 x+p0 *b+3 *a+3` | Reads two extension field elements from RAM located at the addresses corresponding to the two top stack elements, multiplies the extension field elements, and adds the product `(p0, p1, p2)` to an accumulator located on stack immediately below the two pointers. Also increases the pointers by the number of words read. |
| `xb_dot_step` | 88 | `_ z y x *b *a` | `_ z+p2 y+p1 x+p0 *b+3 *a+1` | Reads one base field element from RAM located at the address corresponding to the top of the stack, and one extension field element from RAM located at the address of the second stack element, multiplies the field elements, and adds the product `(p0, p1, p2)` to an accumulator located on stack immediately below the two pointers. Also increases the pointers by the number of words read. |
The instruction `merkle_step` works as follows.
The 6th element of the stack, `i`, is taken as the node index for a Merkle tree that is claimed to include data whose digest is the content of stack registers `st4` through `st0`, i.e., `edcba`.
The sibling digest of `edcba` is `εδγβα` and is read from the input interface of secret data.
The least-significant bit of `i` indicates whether `edcba` is the digest of a left leaf or a right leaf of the Merkle tree's current level.
Depending on this least significant bit of `i`, `merkle_step` either

- (`i` = 0 mod 2) interprets `edcba` as the left digest, `εδγβα` as the right digest, or
- (`i` = 1 mod 2) interprets `εδγβα` as the left digest, `edcba` as the right digest.

In either case,

- the left and right digests are hashed, and the resulting digest `zyxwv` replaces the top of the stack, and
- the 6th register `i` is shifted by 1 bit to the right, i.e., the least-significant bit is dropped.
Instruction `merkle_step_mem` works very similarly to instruction `merkle_step`.
The main difference, as the name suggests, is the source of the sibling digest:
instead of reading it from the input interface of secret data, it is supplied via RAM.
Stack element `st7` is taken as the RAM pointer, holding the memory address at which the next sibling digest is located in RAM.
Executing instruction `merkle_step_mem` increments the memory pointer by the length of one digest, anticipating an authentication path that is laid out contiguously.
Stack element `st6` does not change when executing instruction `merkle_step_mem`, in order to facilitate instruction `recurse_or_return`.

In conjunction with instructions `recurse_or_return` and `assert_vector`, the instructions `merkle_step` and `merkle_step_mem` allow efficient verification of a Merkle authentication path.
Furthermore, instruction `merkle_step_mem` allows verifiable reuse of an authentication path.
This is necessary, for example, when verifiably updating a Merkle tree:
first, the authentication path is used to confirm inclusion of some old leaf, and then to compute the tree's new root from the new leaf.
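The loop that `merkle_step` enables can be sketched as ordinary Merkle authentication path verification. Two assumptions are made purely for illustration: SHA-256 stands in for Tip5 (Triton VM's real digests are five B-field elements), and the node index here is simply the leaf's index within its level:

```python
import hashlib

def hash_pair(left, right):
    return hashlib.sha256(left + right).digest()

def verify_path(leaf_digest, node_index, auth_path, root):
    """Walk from a leaf to the root; auth_path supplies sibling digests,
    one per tree level, exactly as merkle_step consumes them."""
    current = leaf_digest
    i = node_index
    for sibling in auth_path:
        if i % 2 == 0:  # current node is a left child
            current = hash_pair(current, sibling)
        else:           # current node is a right child
            current = hash_pair(sibling, current)
        i //= 2         # drop the least-significant bit, as merkle_step does
    return current == root
```

The termination condition of the loop — having consumed the whole authentication path — is what `recurse_or_return` checks via pointer equality, and the final comparison against the root is what `assert_vector` performs on the stack.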
## Arithmetization

An arithmetization defines two things:

- algebraic execution tables (AETs), and
- arithmetic intermediate representation (AIR) constraints.
For Triton VM, the execution trace is spread over multiple tables.
These tables are linked by various cryptographic arguments.
This division allows for a separation of concerns.
For example, the main processor's trace is recorded in the Processor Table.
The main processor can delegate the execution of somewhatdifficulttoarithmetize instructions like hash
or xor
to a coprocessor.
The arithmetization of the coprocessor is largely independent from the main processor and recorded in its separate trace.
For example, instructions relating to hashing are executed by the hash coprocessor.
Its trace is recorded in the Hash Table.
Similarly, bitwise instructions are executed by the u32 coprocessor, and the corresponding trace is recorded in the U32 Table.
Another example for the separation of concerns is memory consistency, the bulk of which is delegated to the Operational Stack Table, RAM Table, and Jump Stack Table.
### Algebraic Execution Tables

There are 9 Algebraic Execution Tables in Triton VM. Their relation is described by the figure below. A blue arrow indicates a Permutation Argument, a red arrow indicates an Evaluation Argument, a purple arrow indicates a Lookup Argument, and the green arrow is the Contiguity Argument.

The grayed-out elements “program digest”, “input”, and “output” are not AETs but publicly available information. Together, they constitute the claim for which Triton VM produces a proof. See “Arguments Using Public Information” and “Program Attestation” for the Evaluation Arguments with which they are linked into the rest of the proof system.
#### Main Tables

The values of all registers, and consequently the elements on the stack, in memory, and so on, are elements of the B-field, i.e., $\mathbb{F}_p$ where $p$ is the Oxfoi prime, $2^{64} - 2^{32} + 1$. All values of columns corresponding to one such register are elements of the B-field as well. Together, these columns are referred to as a table's main columns, and make up the main table.
#### Auxiliary Tables

The entries of a table's columns corresponding to Permutation, Evaluation, and Lookup Arguments are elements from the X-field $\mathbb{F}_{p^3}$. These columns are referred to as a table's auxiliary columns, both because the entries are elements of the X-field and because they can only be computed using the main tables, through an auxiliary process. Together, these columns make up the auxiliary table.
#### Padding

For reasons of computational efficiency, it is beneficial that an Algebraic Execution Table's height equals a power of 2. To this end, tables are padded. The height $h$ of the longest AET determines the padded height for all tables, which is $2^{\lceil \log_2 h \rceil}$.
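The padded height as defined above — the smallest power of two at least as large as $h$ — is a one-liner:

```python
def padded_height(h):
    """Smallest power of 2 that is >= h."""
    assert h >= 1
    return 1 << (h - 1).bit_length()
```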
### Arithmetic Intermediate Representation

For each table, up to four lists containing constraints of different type are given:

- Initial Constraints, defining values in a table's first row,
- Consistency Constraints, establishing consistency within any given row,
- Transition Constraints, establishing the consistency of two consecutive rows in relation to each other, and
- Terminal Constraints, defining values in a table's last row.

Together, all these constraints constitute the AIR constraints.
### Arguments Using Public Information
Triton VM uses a number of public arguments. That is, one side of the link can be computed by the verifier using publicly available information; the other side of the link is verified through one or more AIR constraints.
The two most prominent instances of this are public input and output: both are given to the verifier explicitly. The verifier can compute the terminal of an Evaluation Argument using public input (respectively, output). In the Processor Table, AIR constraints guarantee that the used input (respectively, output) symbols are accumulated similarly. Comparing the two terminal values establishes that use of public input (respectively, production of public output) was integral.
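The terminal the verifier computes from public input can be sketched as a running evaluation, one absorbed symbol per step. For brevity this sketch works in the B-field, whereas the actual challenge and terminal live in the X-field:

```python
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def evaluation_terminal(symbols, alpha):
    """Running evaluation of an Evaluation Argument: starts at 1 and
    absorbs one symbol per step via running <- running * alpha + symbol."""
    running = 1
    for s in symbols:
        running = (running * alpha + s) % P
    return running
```

After absorbing symbols $s_0, \dots, s_{n-1}$, the terminal equals $\alpha^n + s_0 \alpha^{n-1} + \dots + s_{n-1}$; the AIR constraints in the Processor Table enforce the same accumulation on the prover's side.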
The third public argument establishes correctness of the Lookup Table, and is explained there. The fourth public argument relates to program attestation, and is also explained on its corresponding page.
## Arithmetization Overview

### Tables

| table name | #main cols | #aux cols | total width |
|:---|---:|---:|---:|
| ProgramTable | 7 | 3 | 16 |
| ProcessorTable | 39 | 11 | 72 |
| OpStackTable | 4 | 2 | 10 |
| RamTable | 7 | 6 | 25 |
| JumpStackTable | 5 | 2 | 11 |
| HashTable | 67 | 20 | 127 |
| CascadeTable | 6 | 2 | 12 |
| LookupTable | 4 | 2 | 10 |
| U32Table | 10 | 1 | 13 |
| DegreeLowering (-/8/4) | 0/118/230 | 0/14/38 | 0/160/344 |
| Randomizers | 0 | 1 | 3 |
| **TOTAL** | **149/267/379** | **50/64/88** | **299/459/643** |
### Constraints
The following table captures the state of affairs in terms of constraints before automatic degree lowering. In particular, automatic degree lowering introduces new columns, modifies the constraint set (in a way that is equivalent to what was there before), and lowers the constraints' maximal degree.
Before automatic degree lowering:
| table name | #initial | #consistency | #transition | #terminal | max degree |
|:---|---:|---:|---:|---:|---:|
| ProgramTable | 6 | 4 | 10 | 2 | 4 |
| ProcessorTable | 29 | 10 | 41 | 1 | 19 |
| OpStackTable | 3 | 0 | 5 | 0 | 4 |
| RamTable | 7 | 0 | 12 | 1 | 5 |
| JumpStackTable | 6 | 0 | 6 | 0 | 5 |
| HashTable | 22 | 45 | 47 | 2 | 9 |
| CascadeTable | 2 | 1 | 3 | 0 | 4 |
| LookupTable | 3 | 1 | 4 | 1 | 3 |
| U32Table | 1 | 15 | 22 | 2 | 12 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 | 1 |
| **TOTAL** | **79** | **76** | **150** | **23** | **19** |
| (# nodes) | (534) | (624) | (6679) | (213) | |
After lowering degree to 8:
| table name | #initial | #consistency | #transition | #terminal |
|:---|---:|---:|---:|---:|
| ProgramTable | 6 | 4 | 10 | 2 |
| ProcessorTable | 29 | 10 | 165 | 1 |
| OpStackTable | 3 | 0 | 5 | 0 |
| RamTable | 7 | 0 | 12 | 1 |
| JumpStackTable | 6 | 0 | 6 | 0 |
| HashTable | 22 | 46 | 49 | 2 |
| CascadeTable | 2 | 1 | 3 | 0 |
| LookupTable | 3 | 1 | 4 | 1 |
| U32Table | 1 | 18 | 24 | 2 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 |
| **TOTAL** | **79** | **80** | **278** | **23** |
| (# nodes) | (534) | (635) | (6956) | (213) |
After lowering degree to 4:
| table name | #initial | #consistency | #transition | #terminal |
|:---|---:|---:|---:|---:|
| ProgramTable | 6 | 4 | 10 | 2 |
| ProcessorTable | 31 | 10 | 238 | 1 |
| OpStackTable | 3 | 0 | 5 | 0 |
| RamTable | 7 | 0 | 13 | 1 |
| JumpStackTable | 6 | 0 | 7 | 0 |
| HashTable | 22 | 52 | 84 | 2 |
| CascadeTable | 2 | 1 | 3 | 0 |
| LookupTable | 3 | 1 | 4 | 1 |
| U32Table | 1 | 26 | 34 | 2 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 |
| **TOTAL** | **81** | **94** | **398** | **23** |
| (# nodes) | (538) | (676) | (7246) | (213) |
### Triton Assembly Constraint Evaluation
Triton VM's recursive verifier needs to evaluate Triton VM's AIR constraints. In order to gauge the runtime cost for this step, the following table provides estimates for that step's contribution to various tables.
| Type | Processor | Op Stack | RAM |
|:---|---:|---:|---:|
| static | 37759 | 70227 | 24996 |
| dynamic | 49456 | 78029 | 28895 |
### Opcode Pressure
When changing existing or introducing new instructions, one consideration is: how many other instructions compete for opcodes in the same instruction category? The table below helps answer this question at a glance.
| IsU32 | ShrinksStack | HasArg | Num Opcodes |
|:---:|:---:|:---:|---:|
| n | n | n | 12 |
| n | n | y | 10 |
| n | y | n | 11 |
| n | y | y | 3 |
| y | n | n | 6 |
| y | n | y | 0 |
| y | y | n | 4 |
| y | y | y | 0 |

The maximum number of opcodes per row is 16.
## Program Table

The Program Table contains the entire program as a read-only list of instruction opcodes and their arguments.
The processor looks up instructions and arguments using its instruction pointer `ip`.

For program attestation, the program is padded and sent to the Hash Table in chunks of size 10, which is the $rate$ of the Tip5 hash function. Program padding is one 1 followed by the minimal number of 0's necessary to make the padded input length a multiple of the $rate$.
### Main Columns
The Program Table consists of 7 main columns. Those columns marked with an asterisk (*) are only used for program attestation.
| Column | Description |
|:---|:---|
| `Address` | an instruction's address |
| `Instruction` | the (opcode of the) instruction |
| `LookupMultiplicity` | how often an instruction has been executed |
| *`IndexInChunk` | `Address` modulo the Tip5 $rate$, which is 10 |
| *`MaxMinusIndexInChunkInv` | the inverse-or-zero of `rate - 1 - IndexInChunk` |
| *`IsHashInputPadding` | padding indicator for absorbing the program into the Sponge |
| `IsTablePadding` | padding indicator for rows only required due to the dominating length of some other table |
### Auxiliary Columns

A Lookup Argument with the Processor Table establishes that the processor has loaded the correct instruction (and its argument) from program memory.
To establish the program memory's side of the Lookup Argument, the Program Table has auxiliary column `InstructionLookupServerLogDerivative`.

For sending the padded program to the Hash Table, a combination of two Evaluation Arguments is used.
The first, `PrepareChunkRunningEvaluation`, absorbs one chunk of $rate$ (i.e., 10) instructions at a time, after which it is reset and starts absorbing again.
The second, `SendChunkRunningEvaluation`, absorbs one such prepared chunk every 10 instructions.
### Padding

A padding row is a copy of the Program Table's last row with the following modifications:

- column `Address` is increased by 1,
- column `Instruction` is set to 0,
- column `LookupMultiplicity` is set to 0,
- column `IndexInChunk` is set to `Address` mod $rate$,
- column `MaxMinusIndexInChunkInv` is set to the inverse-or-zero of `rate - 1 - IndexInChunk`,
- column `IsHashInputPadding` is set to 1, and
- column `IsTablePadding` is set to 1.

The above procedure is iterated until the necessary number of rows have been added.
### A Note on the Instruction Lookup Argument

For almost all table-linking arguments, the initial row contains the argument's initial value after having applied the first update. For example, the initial row for a Lookup Argument usually contains $\frac{x}{\alpha - y}$ for some $x$ and $y$. As an exception, the Program Table's instruction Lookup Argument always records 0 in the initial row.
Recall that the Lookup Argument is not just about the instruction, but also about the instruction's argument if it has one, or the next instruction if it has none. In the Program Table, this argument (or the next instruction) is recorded in the next row from the instruction in question. Therefore, verifying correct application of the logarithmic derivative's update rule requires access to both the current and the next row.
Out of all constraint types, only Transition Constraints have access to more than one row at a time. This implies that the correct application of the first update of the instruction Lookup Argument cannot be verified by an initial constraint. Therefore, the recorded initial value must be independent of the second row.
Consequently, the final value for the Lookup Argument is recorded in the first row just after the program description ends. This row is guaranteed to exist because of the mechanics for program attestation: the program has to be padded with at least one 1 before it is hashed.
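The mechanics of such a logarithmic-derivative Lookup Argument can be sketched in a few lines. This is a toy model with made-up challenge values: the server (here, the Program Table) adds $\frac{m}{\alpha - v}$ for every looked-up value $v$ with multiplicity $m$, the client (the Processor Table) adds $\frac{1}{\alpha - v}$ per access, and equal sums establish consistent lookups.

```python
# Toy sketch of a Lookup Argument via logarithmic derivatives.
# `alpha` stands in for indeterminate 🪥; values are illustrative.
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def inv(x):
    return pow(x, P - 2, P)  # Fermat inverse; valid since P is prime

def server_log_derivative(values_with_multiplicity, alpha):
    # one summand per table entry, weighted by its lookup multiplicity
    return sum(m * inv((alpha - v) % P) for v, m in values_with_multiplicity) % P

def client_log_derivative(accesses, alpha):
    # one summand per individual access
    return sum(inv((alpha - v) % P) for v in accesses) % P
```

If the table holds value 7 with multiplicity 2 and value 9 with multiplicity 1, the client accesses `[7, 9, 7]` produce the same sum.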
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $\mathbb{F}_{p^3}$.
Initial Constraints
- The `Address` is 0.
- The `IndexInChunk` is 0.
- The indicator `IsHashInputPadding` is 0.
- The `InstructionLookupServerLogDerivative` is 0.
- `PrepareChunkRunningEvaluation` has absorbed `Instruction` with respect to challenge 🪑.
- `SendChunkRunningEvaluation` is 1.
Initial Constraints as Polynomials
Address
IndexInChunk
IsHashInputPadding
InstructionLookupServerLogDerivative
PrepareChunkRunningEvaluation - 🪑 - Instruction
SendChunkRunningEvaluation - 1
Consistency Constraints
- The `MaxMinusIndexInChunkInv` is zero or the inverse of $rate - 1 - IndexInChunk$.
- The `IndexInChunk` is $rate - 1$ or the `MaxMinusIndexInChunkInv` is the inverse of $rate - 1 - IndexInChunk$.
- Indicator `IsHashInputPadding` is either 0 or 1.
- Indicator `IsTablePadding` is either 0 or 1.
Consistency Constraints as Polynomials
(1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · MaxMinusIndexInChunkInv
(1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · (rate - 1 - IndexInChunk)
IsHashInputPadding · (IsHashInputPadding - 1)
IsTablePadding · (IsTablePadding - 1)
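The "inverse-or-zero" trick behind `MaxMinusIndexInChunkInv` and its two consistency constraints can be checked numerically. A minimal sketch, with $rate = 10$ and arithmetic over the B-field:

```python
# Toy check of the MaxMinusIndexInChunkInv consistency constraints.
P = 2**64 - 2**32 + 1  # the Oxfoi prime
RATE = 10

def inverse_or_zero(x):
    x %= P
    return 0 if x == 0 else pow(x, P - 2, P)

def consistency_constraints(index_in_chunk, max_minus_index_inv):
    """Evaluate the two polynomials; both must be 0 for an honest witness."""
    diff = (RATE - 1 - index_in_chunk) % P
    c0 = (1 - max_minus_index_inv * diff) * max_minus_index_inv % P
    c1 = (1 - max_minus_index_inv * diff) * diff % P
    return c0, c1
```

Together, the two constraints force `MaxMinusIndexInChunkInv` to be exactly the inverse-or-zero of $rate - 1 - IndexInChunk$: a dishonest 0 witness for a nonzero difference violates the second polynomial.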
Transition Constraints
- The `Address` increases by 1.
- If the `IndexInChunk` is not $rate - 1$, it increases by 1. Else, the `IndexInChunk` in the next row is 0.
- The indicator `IsHashInputPadding` is 0 or remains unchanged.
- The padding indicator `IsTablePadding` is 0 or remains unchanged.
- If `IsHashInputPadding` is 0 in the current row and 1 in the next row, then `Instruction` in the next row is 1.
- If `IsHashInputPadding` is 1 in the current row, then `Instruction` in the next row is 0.
- If `IsHashInputPadding` is 1 in the current row and `IndexInChunk` is $rate - 1$ in the current row, then `IsTablePadding` is 1 in the next row.
- If the current row is not a padding row, the logarithmic derivative accumulates the current row's address, the current row's instruction, and the next row's instruction with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥, respectively. Otherwise, it remains unchanged.
- If the `IndexInChunk` in the current row is not $rate - 1$, then `PrepareChunkRunningEvaluation` absorbs the `Instruction` in the next row with respect to challenge 🪑. Otherwise, `PrepareChunkRunningEvaluation` resets and absorbs the `Instruction` in the next row with respect to challenge 🪑.
- If the next row is not a padding row and the `IndexInChunk` in the next row is $rate - 1$, then `SendChunkRunningEvaluation` absorbs `PrepareChunkRunningEvaluation` in the next row with respect to indeterminate 🪣. Otherwise, it remains unchanged.
Transition Constraints as Polynomials
Address' - Address - 1
MaxMinusIndexInChunkInv · (IndexInChunk' - IndexInChunk - 1)
 + (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · IndexInChunk'
IsHashInputPadding · (IsHashInputPadding' - IsHashInputPadding)
IsTablePadding · (IsTablePadding' - IsTablePadding)
(IsHashInputPadding - 1) · IsHashInputPadding' · (Instruction' - 1)
IsHashInputPadding · Instruction'
IsHashInputPadding · (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · IsTablePadding'
(1 - IsHashInputPadding) · ((InstructionLookupServerLogDerivative' - InstructionLookupServerLogDerivative) · (🪥 - 🥝·Address - 🥥·Instruction - 🫐·Instruction') - LookupMultiplicity)
 + IsHashInputPadding · (InstructionLookupServerLogDerivative' - InstructionLookupServerLogDerivative)
(rate - 1 - IndexInChunk) · (PrepareChunkRunningEvaluation' - 🪑·PrepareChunkRunningEvaluation - Instruction')
 + (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · (PrepareChunkRunningEvaluation' - 🪑 - Instruction')
(IsTablePadding' - 1) · (1 - MaxMinusIndexInChunkInv' · (rate - 1 - IndexInChunk')) · (SendChunkRunningEvaluation' - 🪣·SendChunkRunningEvaluation - PrepareChunkRunningEvaluation')
 + (SendChunkRunningEvaluation' - SendChunkRunningEvaluation) · IsTablePadding'
 + (SendChunkRunningEvaluation' - SendChunkRunningEvaluation) · (rate - 1 - IndexInChunk')
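The case split in the `IndexInChunk` transition constraint — increment mid-chunk, wrap to 0 at a chunk boundary — can be verified numerically. A toy sketch over the B-field with $rate = 10$:

```python
# Toy evaluation of the IndexInChunk transition polynomial on row pairs.
# Honest pairs make the polynomial vanish; dishonest ones do not.
P = 2**64 - 2**32 + 1  # the Oxfoi prime
RATE = 10

def index_in_chunk_transition(index, index_next):
    diff = (RATE - 1 - index) % P
    inverse = pow(diff, P - 2, P) if diff else 0  # MaxMinusIndexInChunkInv
    # selector `inverse` activates the mid-chunk branch (index increments)
    mid_chunk = inverse * (index_next - index - 1) % P
    # selector (1 - inverse·diff) activates the boundary branch (index wraps)
    boundary = (1 - inverse * diff) * index_next % P
    return (mid_chunk + boundary) % P
```

Exactly one of the two selectors is nonzero in every row, which is what the consistency constraints on `MaxMinusIndexInChunkInv` guarantee.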
Terminal Constraints
- The indicator `IsHashInputPadding` is 1.
- The `IndexInChunk` is $rate - 1$ or the indicator `IsTablePadding` is 1.
Terminal Constraints as Polynomials
IsHashInputPadding - 1
(rate - 1 - IndexInChunk) · (IsTablePadding - 1)
See also Section 2.5, “Fixed-Length versus Variable-Length”, in the Tip5 paper.
Processor Table
The Processor Table records all of Triton VM's states during execution of a particular program.
The states are recorded in chronological order.
The first row is the initial state, the last (non-padding) row is the terminal state, i.e., the state after having executed instruction `halt`.
It is impossible to generate a valid proof if the instruction executed last is anything but `halt`.
It is worth highlighting the initialization of the operational stack.
Stack elements `st0` through `st10` are initially 0.
However, stack elements `st11` through `st15`, i.e., the very bottom of the stack, are initialized with the hash digest of the program that is being executed.
This is primarily useful for recursive verifiers:
they can compare their own program digest to the program digest of the proof they are verifying.
This way, a recursive verifier can easily determine if they are actually recursing, or whether the proof they are checking was generated using an entirely different program.
A more detailed explanation of the mechanics can be found on the page about program attestation.
Main Columns
The processor consists of all registers defined in the Instruction Set Architecture. Each register is assigned a column in the processor table.
Auxiliary Columns
The Processor Table has the following auxiliary columns, corresponding to Evaluation Arguments, Permutation Arguments, and Lookup Arguments:
- `RunningEvaluationStandardInput` for the Evaluation Argument with the input symbols.
- `RunningEvaluationStandardOutput` for the Evaluation Argument with the output symbols.
- `InstructionLookupClientLogDerivative` for the Lookup Argument with the Program Table.
- `RunningProductOpStackTable` for the Permutation Argument with the Op Stack Table.
- `RunningProductRamTable` for the Permutation Argument with the RAM Table. Note that the virtual column `instruction_type` holds value 1 for reads and 0 for writes.
- `RunningProductJumpStackTable` for the Permutation Argument with the Jump Stack Table.
- `RunningEvaluationHashInput` for the Evaluation Argument with the Hash Table for copying the input to the hash function from the processor to the hash coprocessor.
- `RunningEvaluationHashDigest` for the Evaluation Argument with the Hash Table for copying the hash digest from the hash coprocessor to the processor.
- `RunningEvaluationSponge` for the Evaluation Argument with the Hash Table for copying the 10 next to-be-absorbed elements from the processor to the hash coprocessor or the 10 next squeezed elements from the hash coprocessor to the processor, depending on the instruction.
- `U32LookupClientLogDerivative` for the Lookup Argument with the U32 Table.
- `ClockJumpDifferenceLookupServerLogDerivative` for the Lookup Argument of clock jump differences with the Op Stack Table, the RAM Table, and the Jump Stack Table.
Permutation Argument with the Op Stack Table
The subset Permutation Argument with the Op Stack Table, `RunningProductOpStackTable`, establishes consistency of the op stack underflow memory.
The number of factors incorporated into the running product at any given cycle depends on the executed instruction in this cycle:
for every element pushed to or popped from the stack, there is one factor.
Namely, if the op stack grows, every element spilling from `st15` into op stack underflow memory will be incorporated as one factor;
and if the op stack shrinks, every element from op stack underflow memory being transferred into `st15` will be one factor.
Notably, if an instruction shrinks the op stack by more than one element in a single clock cycle, each spilled element is incorporated as one factor. The same holds true for instructions growing the op stack by more than one element in a single clock cycle.
One key insight for this Permutation Argument is that the processor will always have access to the elements that are to be read from or written to underflow memory:
if the instruction grows the op stack, then the elements in question currently reside in the directly accessible, top part of the stack;
if the instruction shrinks the op stack, then the elements in question will be in the top part of the stack in the next cycle.
In either case, the Transition Constraint for the Permutation Argument can incorporate the explicitly listed elements as well as the corresponding trivial-to-compute `op_stack_pointer`.
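The core mechanism of such a Permutation Argument can be sketched as follows. This is a toy model with made-up challenge values: each row is compressed into a single field element with random weights, and the running products over $(\alpha - \text{compressed row})$ agree if and only if (with overwhelming probability) the two sides hold the same multiset of rows.

```python
# Toy sketch of a Permutation Argument via running products.
# `alpha` and `weights` are stand-ins for verifier challenges
# (🪤 and 🍋, 🍊, 🍉, 🫒 in the document's notation).
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def compress(row, weights):
    """Compress one row into a single field element."""
    return sum(w * r for w, r in zip(weights, row)) % P

def running_product(rows, alpha, weights):
    product = 1
    for row in rows:
        product = product * ((alpha - compress(row, weights)) % P) % P
    return product
```

Reordering the rows leaves the product unchanged, which is exactly why sorted-by-address tables like the Op Stack Table can be linked to the chronological Processor Table.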
Padding
A padding row is a copy of the Processor Table's last row with the following modifications:
- column `clk` is increased by 1,
- column `IsPadding` is set to 1,
- column `cjd_mul` is set to 0.
A notable exception:
if the row with `clk` equal to 1 is a padding row, then the value of `cjd_mul` is not constrained in that row.
The reason for this exception is the lack of “awareness” of padding rows in the Jump Stack Table:
it keeps looking up clock jump differences in its padding section.
All these clock jumps are guaranteed to have magnitude 1.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $\mathbb{F}_{p^3}$.
Note that the transition constraints' use of `some_column` vs `some_column_next` might be a little unintuitive.
For example, take the following part of some execution trace.
| Clock Cycle | Current Instruction | st0  | … | st15 | Running Evaluation “To Hash Table”           | Running Evaluation “From Hash Table”             |
|:-----------:|:-------------------:|:----:|:-:|:----:|:--------------------------------------------:|:------------------------------------------------:|
| $i-1$       | foo                 | 17   | … | 22   | $a$                                          | $b$                                              |
| $i$         | hash                | 17   | … | 22   | $🚪\cdot a + \sum_{j=0}^{9} 🧄_j \cdot st_j$ | $b$                                              |
| $i+1$       | bar                 | 1337 | … | 22   | $🚪\cdot a + \sum_{j=0}^{9} 🧄_j \cdot st_j$ | $🪟\cdot b + \sum_{j=0}^{4} 🧄_j \cdot st_{j+5}$ |
In order to verify the correctness of `RunningEvaluationHashInput`, the corresponding transition constraint needs to conditionally “activate” on row-tuple ($i-1$, $i$), where it is conditional on `ci_next` (not `ci`), and verifies absorption of the next row, i.e., row $i$.
However, in order to verify the correctness of `RunningEvaluationHashDigest`, the corresponding transition constraint needs to conditionally “activate” on row-tuple ($i$, $i+1$), where it is conditional on `ci` (not `ci_next`), and verifies absorption of the next row, i.e., row $i+1$.
Initial Constraints
- The cycle counter `clk` is 0.
- The instruction pointer `ip` is 0.
- The jump address stack pointer `jsp` is 0.
- The jump address origin `jso` is 0.
- The jump address destination `jsd` is 0.
- The operational stack element `st0` is 0.
- The operational stack element `st1` is 0.
- The operational stack element `st2` is 0.
- The operational stack element `st3` is 0.
- The operational stack element `st4` is 0.
- The operational stack element `st5` is 0.
- The operational stack element `st6` is 0.
- The operational stack element `st7` is 0.
- The operational stack element `st8` is 0.
- The operational stack element `st9` is 0.
- The operational stack element `st10` is 0.
- The Evaluation Argument of operational stack elements `st11` through `st15` with respect to indeterminate 🥬 equals the public part of the program digest challenge, 🫑. See program attestation for more details.
- The `op_stack_pointer` is 16.
- `RunningEvaluationStandardInput` is 1.
- `RunningEvaluationStandardOutput` is 1.
- `InstructionLookupClientLogDerivative` has absorbed the first row with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥.
- `RunningProductOpStackTable` is 1.
- `RunningProductRamTable` has absorbed the first row with respect to challenges 🍍, 🍈, 🍎, and 🌽 and indeterminate 🛋.
- `RunningProductJumpStackTable` has absorbed the first row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
- `RunningEvaluationHashInput` has absorbed the first row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪 if the current instruction is `hash`. Otherwise, it is 1.
- `RunningEvaluationHashDigest` is 1.
- `RunningEvaluationSponge` is 1.
- `U32LookupClientLogDerivative` is 0.
- `ClockJumpDifferenceLookupServerLogDerivative` has accumulated the first contribution.
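The constraint tying `st11` through `st15` to the program digest condenses five field elements into one via a running evaluation. A minimal sketch, assuming Horner-style absorption over the B-field; the digest values and the challenge (standing in for 🥬) are placeholders, not real values:

```python
# Toy sketch: condensing the bottom of the op stack into one element,
# as the program-attestation initial constraint does.
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def running_evaluation(symbols, indeterminate, initial=1):
    """Horner-style Evaluation Argument accumulator."""
    accumulator = initial
    for symbol in symbols:
        accumulator = (indeterminate * accumulator + symbol) % P
    return accumulator

digest = [11, 12, 13, 14, 15]  # placeholder for the real program digest
challenge = 0x5eed             # placeholder for verifier challenge 🥬
condensed = running_evaluation(digest, challenge)
```

A recursive verifier compares such a condensed value against the analogous condensation of its own program digest; equality of the condensed values implies equality of the digests with overwhelming probability.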
Consistency Constraints
- The composition of instruction bits `ib0` through `ib6` corresponds to the current instruction `ci`.
- The instruction bit `ib0` is a bit.
- The instruction bit `ib1` is a bit.
- The instruction bit `ib2` is a bit.
- The instruction bit `ib3` is a bit.
- The instruction bit `ib4` is a bit.
- The instruction bit `ib5` is a bit.
- The instruction bit `ib6` is a bit.
- The padding indicator `IsPadding` is either 0 or 1.
- If the current row is a padding row and `clk` is not 1, then the clock jump difference lookup multiplicity is 0.
Transition Constraints
Due to their complexity, instructionspecific constraints are defined in their own section. The following additional constraints also apply to every pair of rows.
- The cycle counter `clk` increases by 1.
- The padding indicator `IsPadding` is 0 or remains unchanged.
- If the next row is not a padding row, the logarithmic derivative for the Program Table absorbs the next row with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥. Otherwise, it remains unchanged.
- The running sum for the logarithmic derivative of the clock jump difference Lookup Argument accumulates the next row's `clk` with the appropriate multiplicity `cjd_mul` with respect to indeterminate 🪞.
- The running product for the Jump Stack Table absorbs the next row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
- If the current instruction in the next row is `hash`, the running evaluation “Hash Input” absorbs the next row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪.
- If the current instruction in the next row is `merkle_step` or `merkle_step_mem` and helper variable `hv5`…
  - …is 0, the running evaluation “Hash Input” absorbs the next row's `st0` through `st4` and `hv0` through `hv4`…
  - …is 1, the running evaluation “Hash Input” absorbs the next row's `hv0` through `hv4` and `st0` through `st4`…

  …with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪.
- Otherwise, the running evaluation “Hash Input” remains unchanged.
- If the current instruction is `hash`, `merkle_step`, or `merkle_step_mem`, the running evaluation “Hash Digest” absorbs the next row with respect to challenges 🧄₀ through 🧄₄ and indeterminate 🪟. Otherwise, it remains unchanged.
- If the current instruction is `sponge_init`, then the running evaluation “Sponge” absorbs the current instruction and the Sponge's default initial state with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽.
  Else if the current instruction is `sponge_absorb`, then the running evaluation “Sponge” absorbs the current instruction and the current row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽.
  Else if the current instruction is `sponge_squeeze`, then the running evaluation “Sponge” absorbs the current instruction and the next row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽.
  Else if the current instruction is `sponge_absorb_mem`, then the running evaluation “Sponge” absorbs the opcode of instruction `sponge_absorb` and stack elements `st1` through `st4` and all 6 helper variables with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽.
  Otherwise, the running evaluation remains unchanged.
- If the current instruction is `split`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates `st0` and `st1` in the next row and `ci` in the current row with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
- If the current instruction is `lt`, `and`, `xor`, or `pow`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates `st0`, `st1`, and `ci` in the current row and `st0` in the next row with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.
- If the current instruction is `log_2_floor`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates `st0` and `ci` in the current row and `st0` in the next row with respect to challenges 🥜, 🥑, and 🥕, and indeterminate 🧷.
- If the current instruction is `div_mod`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates both
  - `st0` in the next row and `st1` in the current row as well as the constants `opcode(lt)` and `1` with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷, and
  - `st0` in the current row and `st1` in the next row as well as `opcode(split)` with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
- If the current instruction is `merkle_step` or `merkle_step_mem`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates `st5` from the current and next rows as well as `opcode(split)` with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
- If the current instruction is `pop_count`, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates `st0` and `ci` in the current row and `st0` in the next row with respect to challenges 🥜, 🥑, and 🥕, and indeterminate 🧷.
- Else, i.e., if the current instruction is not a u32 instruction, the logarithmic derivative for the Lookup Argument with the U32 Table remains unchanged.
Terminal Constraints
- In the last row, register “current instruction” `ci` is 0, corresponding to instruction `halt`.
Instruction Groups
Some transition constraints are shared across some, or even many instructions.
For example, most instructions must not change the jump stack.
Likewise, most instructions must not change RAM.
To simplify presentation of the instructionspecific transition constraints, these common constraints are grouped together and aliased.
Continuing the example above, instruction group `keep_jump_stack` contains all transition constraints ensuring that the jump stack remains unchanged.
The next section treats each instruction group in detail. The following table lists and briefly explains all instruction groups.
| group name                      | description                                                                                                                                            |
|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------|
| `decompose_arg`                 | instruction's argument held in `nia` is binary decomposed into helper registers `hv0` through `hv3`                                                      |
| `prohibit_illegal_num_words`    | constrain the instruction's argument `n` to 1 ⩽ `n` ⩽ 5                                                                                                  |
| `no_io`                         | the running evaluations for public input & output remain unchanged                                                                                       |
| `no_ram`                        | RAM is not being accessed, i.e., the running product of the Permutation Argument with the RAM Table remains unchanged                                     |
| `keep_jump_stack`               | jump stack does not change, i.e., registers `jsp`, `jso`, and `jsd` do not change                                                                        |
| `step_1`                        | jump stack does not change and instruction pointer `ip` increases by 1                                                                                   |
| `step_2`                        | jump stack does not change and instruction pointer `ip` increases by 2                                                                                   |
| `grow_op_stack`                 | op stack elements are shifted down by one position, top element of the resulting stack is unconstrained                                                  |
| `grow_op_stack_by_any_of`       | op stack elements are shifted down by `n` positions, top `n` elements of the resulting stack are unconstrained, where `n` is the instruction's argument  |
| `keep_op_stack_height`          | the op stack height remains unchanged, and so in particular the running product of the Permutation Argument with the Op Stack Table remains unchanged    |
| `op_stack_remains_except_top_n` | all but the top `n` elements of the op stack remain unchanged                                                                                            |
| `keep_op_stack`                 | op stack remains entirely unchanged                                                                                                                      |
| `binary_operation`              | op stack elements starting from `st2` are shifted up by one position, highest two elements of the resulting stack are unconstrained                      |
| `shrink_op_stack`               | op stack elements starting from `st1` are shifted up by one position                                                                                     |
| `shrink_op_stack_by_any_of`     | op stack elements starting from `stn` are shifted up by one position, where `n` is the instruction's argument                                            |
The following instruction groups conceptually fit the category of 'instruction groups' but are not used or enforced through AIR constraints.
| group name | description                                                                                                                                   |
|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------------|
| `no_hash`  | The hash coprocessor is not accessed. The constraints are implied by the Evaluation Argument with the Hash Table, which takes the current instruction into account. |
A summary of all instructions and which groups they are part of is given in the following table.
instruction  decompose_arg  prohibit_illegal_num_words  no_io  no_ram  keep_jump_stack  step_1  step_2  grow_op_stack  grow_op_stack_by_any_of  keep_op_stack_height  op_stack_remains_except_top_n  keep_op_stack  binary_operation  shrink_op_stack  shrink_op_stack_by_any_of 

pop + n  x  x  x  x  x  x  
push + a  x  x  x  x  
divine + n  x  x  x  x  x  x  
pick + i  x  x  x  x  x  
place + i  x  x  x  x  x  
dup + i  x  x  x  x  x  
swap + i  x  x  x  x  x  
nop  x  x  x  x  0  x  
skiz  x  x  x  x  
call + d  x  x  x  0  x  
return  x  x  x  0  x  
recurse  x  x  x  x  0  x  
recurse_or_return  x  x  x  0  x  
assert  x  x  x  x  
halt  x  x  x  x  0  x  
read_mem + n  x  x  x  x  
write_mem + n  x  x  x  x  
hash  x  x  x  
assert_vector  x  x  x  
sponge_init  x  x  x  
sponge_absorb  x  x  x  
sponge_absorb_mem  x  x  5  
sponge_squeeze  x  x  x  
add  x  x  x  
addi + a  x  x  x  x  1  
mul  x  x  x  x  
invert  x  x  x  x  1  
eq  x  x  x  x  
split  x  x  x  
lt  x  x  x  x  
and  x  x  x  x  
xor  x  x  x  x  
log_2_floor  x  x  x  x  1  
pow  x  x  x  x  
div_mod  x  x  x  x  2  
pop_count  x  x  x  x  1  
xx_add  x  x  x  
xx_mul  x  x  x  
x_invert  x  x  x  x  3  
xb_mul  x  x  x  
read_io + n  x  x  x  x  x  
write_io + n  x  x  x  x  x  
merkle_step  x  x  x  6  
merkle_step_mem  x  x  8  
xx_dot_step  x  x  5  
xb_dot_step  x  x  5  x 
Indicator Polynomials ind_i(hv3, hv2, hv1, hv0)
In this and the following sections, a register marked with a `'` refers to the next state of that register.
For example, `st0' = st0 + 2` means that stack register `st0` is incremented by 2.
An alternative view of the same concept is that registers marked with `'` are those of the next row in the table.
For instructions like `dup i`, `swap i`, `pop n`, et cetera, it is beneficial to have polynomials that evaluate to 1 if the instruction's argument is a specific value, and to 0 otherwise.
This allows indicating which registers are constrained, and in which way, depending on the argument.
This is the purpose of the indicator polynomials `ind_i`.
Evaluated on the binary decomposition of `i`, they show the behavior described above.
For example, take `i = 13`.
The corresponding binary decomposition is `(hv3, hv2, hv1, hv0) = (1, 1, 0, 1)`.
Indicator polynomial `ind_13(hv3, hv2, hv1, hv0)` is defined as `hv3·hv2·(1 - hv1)·hv0`.
It evaluates to 1 on `(1, 1, 0, 1)`, i.e., `ind_13(1, 1, 0, 1) = 1`.
Any other indicator polynomial, like `ind_7`, evaluates to 0 on `(1, 1, 0, 1)`.
Likewise, the indicator polynomial for 13 evaluates to 0 for any other argument.
The list of all 16 indicator polynomials is:
ind_0(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·(1 - hv1)·(1 - hv0)
ind_1(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·(1 - hv1)·hv0
ind_2(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·hv1·(1 - hv0)
ind_3(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·hv1·hv0
ind_4(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·(1 - hv1)·(1 - hv0)
ind_5(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·(1 - hv1)·hv0
ind_6(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·hv1·(1 - hv0)
ind_7(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·hv1·hv0
ind_8(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·(1 - hv1)·(1 - hv0)
ind_9(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·(1 - hv1)·hv0
ind_10(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·hv1·(1 - hv0)
ind_11(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·hv1·hv0
ind_12(hv3, hv2, hv1, hv0) = hv3·hv2·(1 - hv1)·(1 - hv0)
ind_13(hv3, hv2, hv1, hv0) = hv3·hv2·(1 - hv1)·hv0
ind_14(hv3, hv2, hv1, hv0) = hv3·hv2·hv1·(1 - hv0)
ind_15(hv3, hv2, hv1, hv0) = hv3·hv2·hv1·hv0
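All 16 polynomials follow one pattern: factor `hv` where the corresponding bit of `i` is 1, and `(1 - hv)` where it is 0. A small sketch that generates them and reproduces the `ind_13` example:

```python
# Toy generator for the 16 indicator polynomials, evaluated at a point.
def ind(i, hv3, hv2, hv1, hv0):
    """Evaluate ind_i at (hv3, hv2, hv1, hv0); 1 iff the point is i's bits."""
    bits = [(i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1]
    result = 1
    for bit, hv in zip(bits, (hv3, hv2, hv1, hv0)):
        result *= hv if bit else (1 - hv)  # pick hv or (1 - hv) per bit
    return result
```

On any fixed bit pattern, exactly one of the 16 indicator polynomials evaluates to 1 and the other 15 evaluate to 0.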
Group decompose_arg
Description
- The helper variables `hv0` through `hv3` are the binary decomposition of the instruction's argument, which is held in register `nia`.
- The helper variable `hv0` is either 0 or 1.
- The helper variable `hv1` is either 0 or 1.
- The helper variable `hv2` is either 0 or 1.
- The helper variable `hv3` is either 0 or 1.
Polynomials
nia - (8·hv3 + 4·hv2 + 2·hv1 + hv0)
hv0·(hv0 - 1)
hv1·(hv1 - 1)
hv2·(hv2 - 1)
hv3·(hv3 - 1)
Group prohibit_illegal_num_words
Is only ever used in combination with instruction group `decompose_arg`.
Therefore, the instruction argument's correct binary decomposition is known to be in helper variables `hv0` through `hv3`.
Description
- The argument is not 0.
- The argument is not 6.
- The argument is not 7.
- The argument is not 8.
- The argument is not 9.
- The argument is not 10.
- The argument is not 11.
- The argument is not 12.
- The argument is not 13.
- The argument is not 14.
- The argument is not 15.
Polynomials
ind_0(hv3, hv2, hv1, hv0)
ind_6(hv3, hv2, hv1, hv0)
ind_7(hv3, hv2, hv1, hv0)
ind_8(hv3, hv2, hv1, hv0)
ind_9(hv3, hv2, hv1, hv0)
ind_10(hv3, hv2, hv1, hv0)
ind_11(hv3, hv2, hv1, hv0)
ind_12(hv3, hv2, hv1, hv0)
ind_13(hv3, hv2, hv1, hv0)
ind_14(hv3, hv2, hv1, hv0)
ind_15(hv3, hv2, hv1, hv0)
Group no_io
Description
 The running evaluation for standard input remains unchanged.
 The running evaluation for standard output remains unchanged.
Polynomials
RunningEvaluationStandardInput' - RunningEvaluationStandardInput
RunningEvaluationStandardOutput' - RunningEvaluationStandardOutput
Group no_ram
Description
 The running product for the Permutation Argument with the RAM table does not change.
Polynomials
RunningProductRamTable' - RunningProductRamTable
Group keep_jump_stack
Description
- The jump stack pointer `jsp` does not change.
- The jump stack origin `jso` does not change.
- The jump stack destination `jsd` does not change.
Polynomials
jsp' - jsp
jso' - jso
jsd' - jsd
Group step_1
Contains all constraints from instruction group `keep_jump_stack`, and additionally:
Description
 The instruction pointer increments by 1.
Polynomials
ip' - (ip + 1)
Group step_2
Contains all constraints from instruction group `keep_jump_stack`, and additionally:
Description
 The instruction pointer increments by 2.
Polynomials
ip' - (ip + 2)
Group grow_op_stack
Description
- The stack element in `st0` is moved into `st1`.
- The stack element in `st1` is moved into `st2`.
- The stack element in `st2` is moved into `st3`.
- The stack element in `st3` is moved into `st4`.
- The stack element in `st4` is moved into `st5`.
- The stack element in `st5` is moved into `st6`.
- The stack element in `st6` is moved into `st7`.
- The stack element in `st7` is moved into `st8`.
- The stack element in `st8` is moved into `st9`.
- The stack element in `st9` is moved into `st10`.
- The stack element in `st10` is moved into `st11`.
- The stack element in `st11` is moved into `st12`.
- The stack element in `st12` is moved into `st13`.
- The stack element in `st13` is moved into `st14`.
- The stack element in `st14` is moved into `st15`.
- The op stack pointer is incremented by 1.
- The running product for the Op Stack Table absorbs the current row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤.
Polynomials
st1' - st0
st2' - st1
st3' - st2
st4' - st3
st5' - st4
st6' - st5
st7' - st6
st8' - st7
st9' - st8
st10' - st9
st11' - st10
st12' - st11
st13' - st12
st14' - st13
st15' - st14
op_stack_pointer' - (op_stack_pointer + 1)
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer - 🫒·st15)
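The shift constraints of this group can be exercised on a concrete pair of rows. A toy sketch, modeling a hypothetical `push 42` (the value 42 and the stack contents are made up); note that the new top `st0'` is deliberately left unconstrained by this group:

```python
# Toy check of the grow_op_stack shift constraints st1' - st0, ..., st15' - st14.
def grow_op_stack_constraints(row, row_next):
    """One residual per shift polynomial; all must be 0 for an honest pair."""
    return [row_next[i + 1] - row[i] for i in range(15)]

stack = list(range(100, 116))   # st0 .. st15 before the instruction (made up)
stack_next = [42] + stack[:15]  # everything shifted down by one, 42 on top
residuals = grow_op_stack_constraints(stack, stack_next)
```

The element formerly in `st15` no longer appears in the next row; it is exactly the element the running product with the Op Stack Table accounts for.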
Group grow_op_stack_by_any_of
Is only ever used in combination with instruction group `decompose_arg`.
Therefore, the correct binary decomposition of the instruction's argument `n` is known to be in helper variables `hv0` through `hv3`.
Description
- If `n` is 1, then `st0` is moved into `st1`, else if `n` is 2, then `st0` is moved into `st2`, else if `n` is 3, then `st0` is moved into `st3`, else if `n` is 4, then `st0` is moved into `st4`, else if `n` is 5, then `st0` is moved into `st5`.
- If `n` is 1, then `st1` is moved into `st2`, else if `n` is 2, then `st1` is moved into `st3`, else if `n` is 3, then `st1` is moved into `st4`, else if `n` is 4, then `st1` is moved into `st5`, else if `n` is 5, then `st1` is moved into `st6`.
- If `n` is 1, then `st2` is moved into `st3`, else if `n` is 2, then `st2` is moved into `st4`, else if `n` is 3, then `st2` is moved into `st5`, else if `n` is 4, then `st2` is moved into `st6`, else if `n` is 5, then `st2` is moved into `st7`.
- If `n` is 1, then `st3` is moved into `st4`, else if `n` is 2, then `st3` is moved into `st5`, else if `n` is 3, then `st3` is moved into `st6`, else if `n` is 4, then `st3` is moved into `st7`, else if `n` is 5, then `st3` is moved into `st8`.
- If `n` is 1, then `st4` is moved into `st5`, else if `n` is 2, then `st4` is moved into `st6`, else if `n` is 3, then `st4` is moved into `st7`, else if `n` is 4, then `st4` is moved into `st8`, else if `n` is 5, then `st4` is moved into `st9`.
- If `n` is 1, then `st5` is moved into `st6`, else if `n` is 2, then `st5` is moved into `st7`, else if `n` is 3, then `st5` is moved into `st8`, else if `n` is 4, then `st5` is moved into `st9`, else if `n` is 5, then `st5` is moved into `st10`.
- If `n` is 1, then `st6` is moved into `st7`, else if `n` is 2, then `st6` is moved into `st8`, else if `n` is 3, then `st6` is moved into `st9`, else if `n` is 4, then `st6` is moved into `st10`, else if `n` is 5, then `st6` is moved into `st11`.
- If `n` is 1, then `st7` is moved into `st8`, else if `n` is 2, then `st7` is moved into `st9`, else if `n` is 3, then `st7` is moved into `st10`, else if `n` is 4, then `st7` is moved into `st11`, else if `n` is 5, then `st7` is moved into `st12`.
- If `n` is 1, then `st8` is moved into `st9`, else if `n` is 2, then `st8` is moved into `st10`, else if `n` is 3, then `st8` is moved into `st11`, else if `n` is 4, then `st8` is moved into `st12`, else if `n` is 5, then `st8` is moved into `st13`.
- If `n` is 1, then `st9` is moved into `st10`, else if `n` is 2, then `st9` is moved into `st11`, else if `n` is 3, then `st9` is moved into `st12`, else if `n` is 4, then `st9` is moved into `st13`, else if `n` is 5, then `st9` is moved into `st14`.
- If `n` is 1, then `st10` is moved into `st11`, else if `n` is 2, then `st10` is moved into `st12`, else if `n` is 3, then `st10` is moved into `st13`, else if `n` is 4, then `st10` is moved into `st14`, else if `n` is 5, then `st10` is moved into `st15`.
- If `n` is 1, then `st11` is moved into `st12`, else if `n` is 2, then `st11` is moved into `st13`, else if `n` is 3, then `st11` is moved into `st14`, else if `n` is 4, then `st11` is moved into `st15`, else if `n` is 5, then the op stack pointer grows by 5.
- If `n` is 1, then `st12` is moved into `st13`, else if `n` is 2, then `st12` is moved into `st14`, else if `n` is 3, then `st12` is moved into `st15`, else if `n` is 4, then the op stack pointer grows by 4, else if `n` is 5, then the running product with the Op Stack Table accumulates `st11` through `st15`.
- If `n` is 1, then `st13` is moved into `st14`, else if `n` is 2, then `st13` is moved into `st15`, else if `n` is 3, then the op stack pointer grows by 3, else if `n` is 4, then the running product with the Op Stack Table accumulates `st12` through `st15`.
- If `n` is 1, then `st14` is moved into `st15`, else if `n` is 2, then the op stack pointer grows by 2, else if `n` is 3, then the running product with the Op Stack Table accumulates `st13` through `st15`.
- If `n` is 1, then the op stack pointer grows by 1, else if `n` is 2, then the running product with the Op Stack Table accumulates `st14` and `st15`.
- If `n` is 1, then the running product with the Op Stack Table accumulates `st15`.
Group keep_op_stack_height
The op stack pointer and the running product for the Permutation Argument with the Op Stack Table remain the same. In other words, there is no access (neither read nor write) from/to the Op Stack Table.
Description
 The op stack pointer does not change.
 The running product for the Op Stack Table remains unchanged.
Polynomials
op_stack_pointer' - op_stack_pointer
RunningProductOpStackTable' - RunningProductOpStackTable
Group op_stack_remains_except_top_n_elements_unconstrained
Contains all constraints from group `keep_op_stack_height`, and additionally ensures that all but the top `n` op stack elements remain the same. (The top `n` elements are unconstrained.)
Description
 For
i
in{n, ..., NUM_OP_STACK_REGISTERS1}
The stack element insti
does not change.
Polynomials
stn'  stn
 ...
stN'  stN
Group keep_op_stack
Shorthand for op_stack_remains_except_top_n_elements_unconstrained
with n=0
.
Group binary_operation
Description
- The stack element in st2 is moved into st1.
- The stack element in st3 is moved into st2.
- The stack element in st4 is moved into st3.
- The stack element in st5 is moved into st4.
- The stack element in st6 is moved into st5.
- The stack element in st7 is moved into st6.
- The stack element in st8 is moved into st7.
- The stack element in st9 is moved into st8.
- The stack element in st10 is moved into st9.
- The stack element in st11 is moved into st10.
- The stack element in st12 is moved into st11.
- The stack element in st13 is moved into st12.
- The stack element in st14 is moved into st13.
- The stack element in st15 is moved into st14.
- The op stack pointer is decremented by 1.
- The running product for the Op Stack Table absorbs clock cycle and instruction bit 1 from the current row as well as op stack pointer and stack element 15 from the next row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤.
Polynomials
st1' - st2
st2' - st3
st3' - st4
st4' - st5
st5' - st6
st6' - st7
st7' - st8
st8' - st9
st9' - st10
st10' - st11
st11' - st12
st12' - st13
st13' - st14
st14' - st15
op_stack_pointer' - (op_stack_pointer - 1)
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
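The running-product update above is plain field arithmetic once concrete challenge values are fixed. The following Python sketch shows how one compressed Op Stack Table row is absorbed; the variable names standing in for the challenges 🪤, 🍋, 🍊, 🍉, and 🫒 are illustrative stand-ins (actual challenges live in the X-field, not the B-field used here for brevity):

```python
P = 0xffffffff00000001  # the Oxfoi prime, 2^64 - 2^32 + 1

def absorb_row(running_product, clk, ib1, op_stack_pointer, element,
               trap, lemon, orange, melon, olive):
    """Multiply one compressed Op Stack Table row into the running product.

    trap stands in for the indeterminate 🪤; lemon, orange, melon, olive
    stand in for the challenges 🍋, 🍊, 🍉, 🫒.
    """
    row = (trap - lemon * clk - orange * ib1
           - melon * op_stack_pointer - olive * element) % P
    return running_product * row % P
```
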
Group shrink_op_stack
Contains all constraints from instruction group binary_operation, and additionally:
Description
- The stack element in st1 is moved into st0.
Polynomials
st0' - st1
Group shrink_op_stack_by_any_of
Is only ever used in combination with instruction group decompose_arg.
Therefore, the correct binary decomposition of the instruction's argument n is known to be in helper variables hv0 through hv3.
Description
- If n is 1, then st1 is moved into st0,
  else if n is 2, then st2 is moved into st0,
  else if n is 3, then st3 is moved into st0,
  else if n is 4, then st4 is moved into st0,
  else if n is 5, then st5 is moved into st0.
- If n is 1, then st2 is moved into st1,
  else if n is 2, then st3 is moved into st1,
  else if n is 3, then st4 is moved into st1,
  else if n is 4, then st5 is moved into st1,
  else if n is 5, then st6 is moved into st1.
- If n is 1, then st3 is moved into st2,
  else if n is 2, then st4 is moved into st2,
  else if n is 3, then st5 is moved into st2,
  else if n is 4, then st6 is moved into st2,
  else if n is 5, then st7 is moved into st2.
- If n is 1, then st4 is moved into st3,
  else if n is 2, then st5 is moved into st3,
  else if n is 3, then st6 is moved into st3,
  else if n is 4, then st7 is moved into st3,
  else if n is 5, then st8 is moved into st3.
- If n is 1, then st5 is moved into st4,
  else if n is 2, then st6 is moved into st4,
  else if n is 3, then st7 is moved into st4,
  else if n is 4, then st8 is moved into st4,
  else if n is 5, then st9 is moved into st4.
- If n is 1, then st6 is moved into st5,
  else if n is 2, then st7 is moved into st5,
  else if n is 3, then st8 is moved into st5,
  else if n is 4, then st9 is moved into st5,
  else if n is 5, then st10 is moved into st5.
- If n is 1, then st7 is moved into st6,
  else if n is 2, then st8 is moved into st6,
  else if n is 3, then st9 is moved into st6,
  else if n is 4, then st10 is moved into st6,
  else if n is 5, then st11 is moved into st6.
- If n is 1, then st8 is moved into st7,
  else if n is 2, then st9 is moved into st7,
  else if n is 3, then st10 is moved into st7,
  else if n is 4, then st11 is moved into st7,
  else if n is 5, then st12 is moved into st7.
- If n is 1, then st9 is moved into st8,
  else if n is 2, then st10 is moved into st8,
  else if n is 3, then st11 is moved into st8,
  else if n is 4, then st12 is moved into st8,
  else if n is 5, then st13 is moved into st8.
- If n is 1, then st10 is moved into st9,
  else if n is 2, then st11 is moved into st9,
  else if n is 3, then st12 is moved into st9,
  else if n is 4, then st13 is moved into st9,
  else if n is 5, then st14 is moved into st9.
- If n is 1, then st11 is moved into st10,
  else if n is 2, then st12 is moved into st10,
  else if n is 3, then st13 is moved into st10,
  else if n is 4, then st14 is moved into st10,
  else if n is 5, then st15 is moved into st10.
- If n is 1, then st12 is moved into st11,
  else if n is 2, then st13 is moved into st11,
  else if n is 3, then st14 is moved into st11,
  else if n is 4, then st15 is moved into st11,
  else if n is 5, then the op stack pointer shrinks by 5.
- If n is 1, then st13 is moved into st12,
  else if n is 2, then st14 is moved into st12,
  else if n is 3, then st15 is moved into st12,
  else if n is 4, then the op stack pointer shrinks by 4,
  else if n is 5, then the running product with the Op Stack Table accumulates next row's st11 through st15.
- If n is 1, then st14 is moved into st13,
  else if n is 2, then st15 is moved into st13,
  else if n is 3, then the op stack pointer shrinks by 3,
  else if n is 4, then the running product with the Op Stack Table accumulates next row's st12 through st15.
- If n is 1, then st15 is moved into st14,
  else if n is 2, then the op stack pointer shrinks by 2,
  else if n is 3, then the running product with the Op Stack Table accumulates next row's st13 through st15.
- If n is 1, then the op stack pointer shrinks by 1,
  else if n is 2, then the running product with the Op Stack Table accumulates next row's st14 and st15.
- If n is 1, then the running product with the Op Stack Table accumulates next row's st15.
Instruction-Specific Transition Constraints
Instruction pop + n
This instruction is fully constrained by its instruction groups.
Instruction push + a
In addition to its instruction groups, this instruction has the following constraints.
Description
- The instruction's argument a is moved onto the stack.
Polynomials
st0' - nia
Instruction divine + n
This instruction is fully constrained by its instruction groups.
Instruction pick + i
This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.
Description
For 0 ⩽ i < 16:
- Stack element i is moved to the top.
- For 0 ⩽ j < i: stack element j is shifted down by 1.
- For j > i: stack element j remains unchanged.
Instruction place + i
This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.
Description
For 0 ⩽ i < 16:
- Stack element 0 is moved to position i.
- For 0 ⩽ j < i: stack element j is shifted up by 1.
- For j > i: stack element j remains unchanged.
Instruction dup + i
This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.
Description
- If i is 0, then st0 is put on top of the stack, and
  if i is 1, then st1 is put on top of the stack, and
  if i is 2, then st2 is put on top of the stack, and
  if i is 3, then st3 is put on top of the stack, and
  if i is 4, then st4 is put on top of the stack, and
  if i is 5, then st5 is put on top of the stack, and
  if i is 6, then st6 is put on top of the stack, and
  if i is 7, then st7 is put on top of the stack, and
  if i is 8, then st8 is put on top of the stack, and
  if i is 9, then st9 is put on top of the stack, and
  if i is 10, then st10 is put on top of the stack, and
  if i is 11, then st11 is put on top of the stack, and
  if i is 12, then st12 is put on top of the stack, and
  if i is 13, then st13 is put on top of the stack, and
  if i is 14, then st14 is put on top of the stack, and
  if i is 15, then st15 is put on top of the stack.
Polynomials
ind_0(hv3, hv2, hv1, hv0)·(st0' - st0)
+ ind_1(hv3, hv2, hv1, hv0)·(st0' - st1)
+ ind_2(hv3, hv2, hv1, hv0)·(st0' - st2)
+ ind_3(hv3, hv2, hv1, hv0)·(st0' - st3)
+ ind_4(hv3, hv2, hv1, hv0)·(st0' - st4)
+ ind_5(hv3, hv2, hv1, hv0)·(st0' - st5)
+ ind_6(hv3, hv2, hv1, hv0)·(st0' - st6)
+ ind_7(hv3, hv2, hv1, hv0)·(st0' - st7)
+ ind_8(hv3, hv2, hv1, hv0)·(st0' - st8)
+ ind_9(hv3, hv2, hv1, hv0)·(st0' - st9)
+ ind_10(hv3, hv2, hv1, hv0)·(st0' - st10)
+ ind_11(hv3, hv2, hv1, hv0)·(st0' - st11)
+ ind_12(hv3, hv2, hv1, hv0)·(st0' - st12)
+ ind_13(hv3, hv2, hv1, hv0)·(st0' - st13)
+ ind_14(hv3, hv2, hv1, hv0)·(st0' - st14)
+ ind_15(hv3, hv2, hv1, hv0)·(st0' - st15)
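An indicator polynomial ind_i selects exactly one of the 16 summands: for each helper variable it multiplies either the variable itself or its complement, depending on the corresponding bit of i. When the helper variables hold the binary decomposition of i, the indicator evaluates to 1 for that i and to 0 for every other index. A minimal Python sketch of this evaluation:

```python
P = 0xffffffff00000001  # the Oxfoi prime

def ind(i, hv3, hv2, hv1, hv0):
    """Evaluate indicator polynomial ind_i at (hv3, hv2, hv1, hv0).

    Returns 1 iff the helper variables are the bits of i (hv0 least
    significant), and 0 for any other 4-bit assignment.
    """
    acc = 1
    for bit_pos, hv in zip((3, 2, 1, 0), (hv3, hv2, hv1, hv0)):
        factor = hv if (i >> bit_pos) & 1 else (1 - hv)
        acc = acc * factor % P
    return acc
```
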
Instruction swap + i
This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.
Description
For 0 ⩽ i < 16:
- Stack element i is moved to the top.
- The top stack element is moved to position i.
- For j ≠ i: stack element j remains unchanged.
Instruction nop
This instruction is fully constrained by its instruction groups.
Instruction skiz
For the correct behavior of instruction skiz
, the instruction pointer ip
needs to increment by either 1, or 2, or 3.
The concrete value depends on the top of the stack st0
and the next instruction, held in nia
.
Helper variable hv0
helps with identifying whether st0
is 0.
To this end, it holds the inverseorzero of st0
, i.e., is 0 if and only if st0
is 0, and is the inverse of st0
otherwise.
Efficient arithmetization of instruction skiz
makes use of one of the properties of opcodes.
Concretely, the least significant bit of an opcode is 1 if and only if the instruction takes an argument.
The arithmetization of skiz
can incorporate this simple flag by decomposing nia
into helper variable registers hv
,
similarly to how ci
is (always) deconstructed into instruction bit registers ib
.
Correct decomposition is guaranteed by employing a range check.
In addition to its instruction groups, this instruction has the following constraints.
Description
- Helper variable hv0 is the inverse of st0 or 0.
- Helper variable hv0 is the inverse of st0 or st0 is 0.
- The next instruction nia is decomposed into helper variables hv1 through hv5.
- The indicator helper variable hv1 is either 0 or 1. Here, hv1 == 1 means that nia takes an argument.
- The helper variable hv2 is either 0 or 1 or 2 or 3.
- The helper variable hv3 is either 0 or 1 or 2 or 3.
- The helper variable hv4 is either 0 or 1 or 2 or 3.
- The helper variable hv5 is either 0 or 1 or 2 or 3.
- If st0 is non-zero, register ip is incremented by 1.
  If st0 is 0 and nia takes no argument, register ip is incremented by 2.
  If st0 is 0 and nia takes an argument, register ip is incremented by 3.
Written as Disjunctive Normal Form, the last constraint can be expressed as:
- (Register st0 is 0 or ip is incremented by 1), and
  (st0 has a multiplicative inverse or hv1 is 1 or ip is incremented by 2), and
  (st0 has a multiplicative inverse or hv1 is 0 or ip is incremented by 3).
Since the three cases are mutually exclusive, the three respective polynomials can be summed up into one.
Polynomials
(st0·hv0 - 1)·hv0
(st0·hv0 - 1)·st0
nia - hv1 - 2·hv2 - 8·hv3 - 32·hv4 - 128·hv5
hv1·(hv1 - 1)
hv2·(hv2 - 1)·(hv2 - 2)·(hv2 - 3)
hv3·(hv3 - 1)·(hv3 - 2)·(hv3 - 3)
hv4·(hv4 - 1)·(hv4 - 2)·(hv4 - 3)
hv5·(hv5 - 1)·(hv5 - 2)·(hv5 - 3)
(ip' - (ip + 1))·st0
+ (ip' - (ip + 2))·(st0·hv0 - 1)·(hv1 - 1)
+ (ip' - (ip + 3))·(st0·hv0 - 1)·hv1
Helper variable definitions for skiz
hv0 = inverse(st0) (if st0 ≠ 0)
hv1 = nia mod 2
hv2 = (nia >> 1) mod 4
hv3 = (nia >> 3) mod 4
hv4 = (nia >> 5) mod 4
hv5 = nia >> 7
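The helper-variable decomposition and the resulting instruction pointer update can be replayed concretely. The sketch below mirrors the definitions above; it is illustrative only and not tied to any concrete opcode assignment:

```python
def skiz_helper_vars(nia):
    """Decompose the next-instruction register nia as instruction skiz does."""
    hv1 = nia % 2          # least significant bit: 1 iff nia takes an argument
    hv2 = (nia >> 1) % 4
    hv3 = (nia >> 3) % 4
    hv4 = (nia >> 5) % 4
    hv5 = nia >> 7
    return hv1, hv2, hv3, hv4, hv5

def skiz_ip_increment(st0, nia):
    """How far the instruction pointer advances when executing skiz."""
    if st0 != 0:
        return 1                       # do not skip
    takes_argument = nia % 2 == 1
    return 3 if takes_argument else 2  # skip over nia (and its argument)
```

Recomposing `hv1 + 2·hv2 + 8·hv3 + 32·hv4 + 128·hv5` recovers `nia`, which is exactly the third polynomial constraint above.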
Instruction call + d
In addition to its instruction groups, this instruction has the following constraints.
Description
- The jump stack pointer jsp is incremented by 1.
- The jump's origin jso is set to the current instruction pointer ip plus 2.
- The jump's destination jsd is set to the instruction's argument d.
- The instruction pointer ip is set to the instruction's argument d.
Polynomials
jsp' - (jsp + 1)
jso' - (ip + 2)
jsd' - nia
ip' - nia
Instruction return
In addition to its instruction groups, this instruction has the following constraints.
Description
- The jump stack pointer jsp is decremented by 1.
- The instruction pointer ip is set to the last call's origin jso.
Polynomials
jsp' - (jsp - 1)
ip' - jso
Instruction recurse
In addition to its instruction groups, this instruction has the following constraints.
Description
- The jump stack pointer jsp does not change.
- The last jump's origin jso does not change.
- The last jump's destination jsd does not change.
- The instruction pointer ip is set to the last jump's destination jsd.
Polynomials
jsp' - jsp
jso' - jso
jsd' - jsd
ip' - jsd
Instruction recurse_or_return
In addition to its instruction groups, this instruction has the following constraints.
Description
- If st5 equals st6, then ip in the next row equals jso in the current row.
- If st5 equals st6, then jsp decrements by one.
- If st5 equals st6, then hv0 in the current row is 0.
- If st5 is unequal to st6, then hv0 in the current row is the inverse of (st6 - st5).
- If st5 is unequal to st6, then ip in the next row is equal to jsd in the current row.
- If st5 is unequal to st6, then jsp remains unchanged.
- If st5 is unequal to st6, then jso remains unchanged.
- If st5 is unequal to st6, then jsd remains unchanged.
Helper variable definitions for recurse_or_return
To help arithmetizing the equality check between st5 and st6, helper variable hv0 is the inverse-or-zero of (st6 - st5).
Instruction assert
In addition to its instruction groups, this instruction has the following constraints.
Description
- The current top of the stack st0 is 1.
Polynomials
st0 - 1
Instruction halt
In addition to its instruction groups, this instruction has the following constraints.
Description
- The instruction executed in the following step is instruction halt.
Polynomials
ci' - ci
Instruction read_mem + n
In addition to its instruction groups, this instruction has the following constraints.
Description
- The RAM pointer st0 is decremented by n.
- If n is 1, then st1 is moved into st2,
  else if n is 2, then st1 is moved into st3,
  else if n is 3, then st1 is moved into st4,
  else if n is 4, then st1 is moved into st5,
  else if n is 5, then st1 is moved into st6.
- If n is 1, then st2 is moved into st3,
  else if n is 2, then st2 is moved into st4,
  else if n is 3, then st2 is moved into st5,
  else if n is 4, then st2 is moved into st6,
  else if n is 5, then st2 is moved into st7.
- If n is 1, then st3 is moved into st4,
  else if n is 2, then st3 is moved into st5,
  else if n is 3, then st3 is moved into st6,
  else if n is 4, then st3 is moved into st7,
  else if n is 5, then st3 is moved into st8.
- If n is 1, then st4 is moved into st5,
  else if n is 2, then st4 is moved into st6,
  else if n is 3, then st4 is moved into st7,
  else if n is 4, then st4 is moved into st8,
  else if n is 5, then st4 is moved into st9.
- If n is 1, then st5 is moved into st6,
  else if n is 2, then st5 is moved into st7,
  else if n is 3, then st5 is moved into st8,
  else if n is 4, then st5 is moved into st9,
  else if n is 5, then st5 is moved into st10.
- If n is 1, then st6 is moved into st7,
  else if n is 2, then st6 is moved into st8,
  else if n is 3, then st6 is moved into st9,
  else if n is 4, then st6 is moved into st10,
  else if n is 5, then st6 is moved into st11.
- If n is 1, then st7 is moved into st8,
  else if n is 2, then st7 is moved into st9,
  else if n is 3, then st7 is moved into st10,
  else if n is 4, then st7 is moved into st11,
  else if n is 5, then st7 is moved into st12.
- If n is 1, then st8 is moved into st9,
  else if n is 2, then st8 is moved into st10,
  else if n is 3, then st8 is moved into st11,
  else if n is 4, then st8 is moved into st12,
  else if n is 5, then st8 is moved into st13.
- If n is 1, then st9 is moved into st10,
  else if n is 2, then st9 is moved into st11,
  else if n is 3, then st9 is moved into st12,
  else if n is 4, then st9 is moved into st13,
  else if n is 5, then st9 is moved into st14.
- If n is 1, then st10 is moved into st11,
  else if n is 2, then st10 is moved into st12,
  else if n is 3, then st10 is moved into st13,
  else if n is 4, then st10 is moved into st14,
  else if n is 5, then st10 is moved into st15.
- If n is 1, then st11 is moved into st12,
  else if n is 2, then st11 is moved into st13,
  else if n is 3, then st11 is moved into st14,
  else if n is 4, then st11 is moved into st15,
  else if n is 5, then the op stack pointer grows by 5.
- If n is 1, then st12 is moved into st13,
  else if n is 2, then st12 is moved into st14,
  else if n is 3, then st12 is moved into st15,
  else if n is 4, then the op stack pointer grows by 4,
  else if n is 5, then the running product with the Op Stack Table accumulates st11 through st15.
- If n is 1, then st13 is moved into st14,
  else if n is 2, then st13 is moved into st15,
  else if n is 3, then the op stack pointer grows by 3,
  else if n is 4, then the running product with the Op Stack Table accumulates st12 through st15,
  else if n is 5, then the running product with the RAM Table accumulates next row's st1 through st5.
- If n is 1, then st14 is moved into st15,
  else if n is 2, then the op stack pointer grows by 2,
  else if n is 3, then the running product with the Op Stack Table accumulates st13 through st15,
  else if n is 4, then the running product with the RAM Table accumulates next row's st1 through st4.
- If n is 1, then the op stack pointer grows by 1,
  else if n is 2, then the running product with the Op Stack Table accumulates st14 and st15,
  else if n is 3, then the running product with the RAM Table accumulates next row's st1 through st3.
- If n is 1, then the running product with the Op Stack Table accumulates st15,
  else if n is 2, then the running product with the RAM Table accumulates next row's st1 and st2.
- If n is 1, then the running product with the RAM Table accumulates next row's st1.
Instruction write_mem + n
In addition to its instruction groups, this instruction has the following constraints.
Description
- The RAM pointer st0 is incremented by n.
- If n is 1, then st2 is moved into st1,
  else if n is 2, then st3 is moved into st1,
  else if n is 3, then st4 is moved into st1,
  else if n is 4, then st5 is moved into st1,
  else if n is 5, then st6 is moved into st1.
- If n is 1, then st3 is moved into st2,
  else if n is 2, then st4 is moved into st2,
  else if n is 3, then st5 is moved into st2,
  else if n is 4, then st6 is moved into st2,
  else if n is 5, then st7 is moved into st2.
- If n is 1, then st4 is moved into st3,
  else if n is 2, then st5 is moved into st3,
  else if n is 3, then st6 is moved into st3,
  else if n is 4, then st7 is moved into st3,
  else if n is 5, then st8 is moved into st3.
- If n is 1, then st5 is moved into st4,
  else if n is 2, then st6 is moved into st4,
  else if n is 3, then st7 is moved into st4,
  else if n is 4, then st8 is moved into st4,
  else if n is 5, then st9 is moved into st4.
- If n is 1, then st6 is moved into st5,
  else if n is 2, then st7 is moved into st5,
  else if n is 3, then st8 is moved into st5,
  else if n is 4, then st9 is moved into st5,
  else if n is 5, then st10 is moved into st5.
- If n is 1, then st7 is moved into st6,
  else if n is 2, then st8 is moved into st6,
  else if n is 3, then st9 is moved into st6,
  else if n is 4, then st10 is moved into st6,
  else if n is 5, then st11 is moved into st6.
- If n is 1, then st8 is moved into st7,
  else if n is 2, then st9 is moved into st7,
  else if n is 3, then st10 is moved into st7,
  else if n is 4, then st11 is moved into st7,
  else if n is 5, then st12 is moved into st7.
- If n is 1, then st9 is moved into st8,
  else if n is 2, then st10 is moved into st8,
  else if n is 3, then st11 is moved into st8,
  else if n is 4, then st12 is moved into st8,
  else if n is 5, then st13 is moved into st8.
- If n is 1, then st10 is moved into st9,
  else if n is 2, then st11 is moved into st9,
  else if n is 3, then st12 is moved into st9,
  else if n is 4, then st13 is moved into st9,
  else if n is 5, then st14 is moved into st9.
- If n is 1, then st11 is moved into st10,
  else if n is 2, then st12 is moved into st10,
  else if n is 3, then st13 is moved into st10,
  else if n is 4, then st14 is moved into st10,
  else if n is 5, then st15 is moved into st10.
- If n is 1, then st12 is moved into st11,
  else if n is 2, then st13 is moved into st11,
  else if n is 3, then st14 is moved into st11,
  else if n is 4, then st15 is moved into st11,
  else if n is 5, then the op stack pointer shrinks by 5.
- If n is 1, then st13 is moved into st12,
  else if n is 2, then st14 is moved into st12,
  else if n is 3, then st15 is moved into st12,
  else if n is 4, then the op stack pointer shrinks by 4,
  else if n is 5, then the running product with the Op Stack Table accumulates next row's st11 through st15.
- If n is 1, then st14 is moved into st13,
  else if n is 2, then st15 is moved into st13,
  else if n is 3, then the op stack pointer shrinks by 3,
  else if n is 4, then the running product with the Op Stack Table accumulates next row's st12 through st15,
  else if n is 5, then the running product with the RAM Table accumulates st1 through st5.
- If n is 1, then st15 is moved into st14,
  else if n is 2, then the op stack pointer shrinks by 2,
  else if n is 3, then the running product with the Op Stack Table accumulates next row's st13 through st15,
  else if n is 4, then the running product with the RAM Table accumulates st1 through st4.
- If n is 1, then the op stack pointer shrinks by 1,
  else if n is 2, then the running product with the Op Stack Table accumulates next row's st14 and st15,
  else if n is 3, then the running product with the RAM Table accumulates st1 through st3.
- If n is 1, then the running product with the Op Stack Table accumulates next row's st15,
  else if n is 2, then the running product with the RAM Table accumulates st1 and st2.
- If n is 1, then the running product with the RAM Table accumulates st1.
Instruction hash
In addition to its instruction groups, this instruction has the following constraints.
Description
- st10 is moved into st5.
- st11 is moved into st6.
- st12 is moved into st7.
- st13 is moved into st8.
- st14 is moved into st9.
- st15 is moved into st10.
- The op stack pointer shrinks by 5.
- The running product with the Op Stack Table accumulates next row's st11 through st15.
Polynomials
st5' - st10
st6' - st11
st7' - st12
st8' - st13
st9' - st14
st10' - st15
op_stack_pointer' - op_stack_pointer + 5
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')
Instruction assert_vector
In addition to its instruction groups, this instruction has the following constraints.
Description
- st0 is equal to st5.
- st1 is equal to st6.
- st2 is equal to st7.
- st3 is equal to st8.
- st4 is equal to st9.
- st10 is moved into st5.
- st11 is moved into st6.
- st12 is moved into st7.
- st13 is moved into st8.
- st14 is moved into st9.
- st15 is moved into st10.
- The op stack pointer shrinks by 5.
- The running product with the Op Stack Table accumulates next row's st11 through st15.
Polynomials
st5 - st0
st6 - st1
st7 - st2
st8 - st3
st9 - st4
st5' - st10
st6' - st11
st7' - st12
st8' - st13
st9' - st14
st10' - st15
op_stack_pointer' - op_stack_pointer + 5
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')
Instruction sponge_init
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the Hash Table.
Instruction sponge_absorb
In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table.
Description
- st10 is moved into st0.
- st11 is moved into st1.
- st12 is moved into st2.
- st13 is moved into st3.
- st14 is moved into st4.
- st15 is moved into st5.
- The op stack pointer shrinks by 10.
- The running product with the Op Stack Table accumulates next row's st6 through st15.
Polynomials
st0' - st10
st1' - st11
st2' - st12
st3' - st13
st4' - st14
st5' - st15
op_stack_pointer' - op_stack_pointer + 10
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 5) - 🫒·st10')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 6) - 🫒·st9')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 7) - 🫒·st8')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 8) - 🫒·st7')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 9) - 🫒·st6')
Instruction sponge_absorb_mem
In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table and the RAM Table.
Description
- st0 is incremented by 10.
Instruction sponge_squeeze
In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table.
Description
- st0 is moved into st10.
- st1 is moved into st11.
- st2 is moved into st12.
- st3 is moved into st13.
- st4 is moved into st14.
- st5 is moved into st15.
- The op stack pointer grows by 10.
- The running product with the Op Stack Table accumulates st6 through st15.
Polynomials
st10' - st0
st11' - st1
st12' - st2
st13' - st3
st14' - st4
st15' - st5
op_stack_pointer' - op_stack_pointer - 10
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍉·op_stack_pointer - 🫒·st15)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 1) - 🫒·st14)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 2) - 🫒·st13)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 3) - 🫒·st12)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 4) - 🫒·st11)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 5) - 🫒·st10)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 6) - 🫒·st9)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 7) - 🫒·st8)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 8) - 🫒·st7)
·(🪤 - 🍋·clk - 🍉·(op_stack_pointer + 9) - 🫒·st6)
Instruction add
In addition to its instruction groups, this instruction has the following constraints.
Description
- The sum of the top two stack elements is moved into the top of the stack.
Polynomials
st0' - (st0 + st1)
Instruction mul
In addition to its instruction groups, this instruction has the following constraints.
Description
- The product of the top two stack elements is moved into the top of the stack.
Polynomials
st0' - st0·st1
Instruction invert
In addition to its instruction groups, this instruction has the following constraints.
Description
- The top of the stack's inverse is moved into the top of the stack.
Polynomials
st0'·st0 - 1
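Since the B-field has prime order, the inverse required by invert can be computed with Fermat's little theorem, i.e., st0^(p-2). A minimal Python sketch:

```python
P = 0xffffffff00000001  # the Oxfoi prime

def invert(st0):
    """B-field inverse via Fermat's little theorem; st0 must be non-zero."""
    assert st0 % P != 0, "0 has no multiplicative inverse"
    return pow(st0, P - 2, P)
```

The transition constraint st0'·st0 - 1 then vanishes by construction.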
Instruction eq
In addition to its instruction groups, this instruction has the following constraints.
Description
- Helper variable hv0 is the inverse of the difference of the stack's two topmost elements or 0.
- Helper variable hv0 is the inverse of the difference of the stack's two topmost elements or the difference is 0.
- The new top of the stack is 1 if the difference between the stack's two topmost elements is not invertible, 0 otherwise.
Polynomials
hv0·(hv0·(st1 - st0) - 1)
(st1 - st0)·(hv0·(st1 - st0) - 1)
st0' - (1 - hv0·(st1 - st0))
Helper variable definitions for eq
hv0 = inverse(rhs - lhs) if rhs ≠ lhs.
hv0 = 0 if rhs = lhs.
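The inverse-or-zero trick behind eq can be replayed directly. The following Python sketch computes hv0 and the new top of the stack, and checks that all three transition constraints vanish:

```python
P = 0xffffffff00000001  # the Oxfoi prime

def eq(st0, st1):
    """Instruction eq: returns 1 if the two topmost elements are equal, else 0."""
    diff = (st1 - st0) % P
    hv0 = 0 if diff == 0 else pow(diff, P - 2, P)  # inverse-or-zero
    new_st0 = (1 - hv0 * diff) % P
    # all three transition constraints from the text must vanish:
    assert hv0 * (hv0 * diff - 1) % P == 0
    assert diff * (hv0 * diff - 1) % P == 0
    return new_st0
```
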
Instruction split
In addition to its instruction groups, this instruction has the following constraints. Part of the correct transition, namely the range check on the instruction's result, is guaranteed by the U32 Table.
Description
- The top of the stack is decomposed as 32-bit chunks into the stack's topmost two elements.
- Helper variable hv0 holds the inverse of (2^32 - 1) subtracted from the high 32 bits, or the low 32 bits are 0.
Polynomials
st0 - (2^32·st1' + st0')
st0'·(hv0·(st1' - (2^32 - 1)) - 1)
Helper variable definitions for split
Given the high 32 bits of st0 as hi = st0 >> 32 and the low 32 bits of st0 as lo = st0 & 0xffff_ffff:
hv0 = inverse(hi - (2^32 - 1)) if lo ≠ 0.
hv0 = 0 if lo = 0.
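A Python sketch of split, including the helper variable and both polynomial constraints (the inverse is again computed via Fermat's little theorem):

```python
P = 0xffffffff00000001  # the Oxfoi prime

def split(st0):
    """Instruction split: decompose st0 into high and low 32-bit chunks."""
    hi, lo = st0 >> 32, st0 & 0xffff_ffff
    diff = (hi - (2**32 - 1)) % P
    hv0 = 0 if lo == 0 else pow(diff, P - 2, P)  # inverse-or-zero
    # constraint: st0 - (2^32·st1' + st0')
    assert (st0 - (2**32 * hi + lo)) % P == 0
    # constraint: st0'·(hv0·(st1' - (2^32 - 1)) - 1)
    assert lo * (hv0 * diff - 1) % P == 0
    return hi, lo
```

Note that hi = 2^32 - 1 together with lo ≠ 0 cannot occur for a canonical B-field element, since all such values are at least p; hence the inverse in the non-zero branch always exists.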
Instruction lt
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction and
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction xor
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction log2floor
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction pow
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction div_mod
Recall that instruction div_mod takes stack _ d n and computes _ q r, where n is the numerator, d is the denominator, r is the remainder, and q is the quotient.
The following two properties are guaranteed by the U32 Table:
- The remainder r is smaller than the denominator d, and
- all four of n, d, q, and r are u32s.
In addition to its instruction groups, this instruction has the following constraints.
Description
- Numerator is quotient times denominator plus remainder: n == q·d + r.
Polynomials
st0 - st1·st1' - st0'
st2' - st2
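The defining property n == q·d + r with r < d is exactly what Python's divmod computes on u32 operands; a minimal sketch:

```python
def div_mod(numerator, denominator):
    """Instruction div_mod on u32 operands: returns (quotient, remainder)."""
    assert 0 < denominator < 2**32 and 0 <= numerator < 2**32
    q, r = divmod(numerator, denominator)
    # the defining constraint, plus the property the U32 Table range-checks
    assert numerator == q * denominator + r and r < denominator
    return q, r
```
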
Instruction pop_count
This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.
Instruction xx_add
In addition to its instruction groups, this instruction has the following constraints.
Description
- The result of adding st0 to st3 is moved into st0.
- The result of adding st1 to st4 is moved into st1.
- The result of adding st2 to st5 is moved into st2.
- st6 is moved into st3.
- st7 is moved into st4.
- st8 is moved into st5.
- st9 is moved into st6.
- st10 is moved into st7.
- st11 is moved into st8.
- st12 is moved into st9.
- st13 is moved into st10.
- st14 is moved into st11.
- st15 is moved into st12.
- The op stack pointer shrinks by 3.
- The running product with the Op Stack Table accumulates next row's st13 through st15.
Polynomials
st0' - (st0 + st3)
st1' - (st1 + st4)
st2' - (st2 + st5)
st3' - st6
st4' - st7
st5' - st8
st6' - st9
st7' - st10
st8' - st11
st9' - st12
st10' - st13
st11' - st14
st12' - st15
op_stack_pointer' - op_stack_pointer + 3
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
Instruction xx_mul
In addition to its instruction groups, this instruction has the following constraints.
Description
- The coefficient of x^0 of multiplying the two X-field elements on the stack is moved into st0.
- The coefficient of x^1 of multiplying the two X-field elements on the stack is moved into st1.
- The coefficient of x^2 of multiplying the two X-field elements on the stack is moved into st2.
- st6 is moved into st3.
- st7 is moved into st4.
- st8 is moved into st5.
- st9 is moved into st6.
- st10 is moved into st7.
- st11 is moved into st8.
- st12 is moved into st9.
- st13 is moved into st10.
- st14 is moved into st11.
- st15 is moved into st12.
- The op stack pointer shrinks by 3.
- The running product with the Op Stack Table accumulates next row's st13 through st15.
Polynomials
st0' - (st0·st3 - st2·st4 - st1·st5)
st1' - (st1·st3 + st0·st4 - st2·st5 + st2·st4 + st1·st5)
st2' - (st2·st3 + st1·st4 + st0·st5 + st2·st5)
st3' - st6
st4' - st7
st5' - st8
st6' - st9
st7' - st10
st8' - st11
st9' - st12
st10' - st13
st11' - st14
st12' - st15
op_stack_pointer' - op_stack_pointer + 3
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
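The three coefficient formulas above are schoolbook polynomial multiplication followed by reduction modulo the Shah polynomial x^3 - x + 1, i.e., substituting x^3 = x - 1 (and hence x^4 = x^2 - x). A Python sketch that reproduces them:

```python
P = 0xffffffff00000001  # the Oxfoi prime

def xx_mul(a, b):
    """Multiply two X-field elements, given as coefficient triples
    (x^0, x^1, x^2), modulo the Shah polynomial x^3 - x + 1."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    # schoolbook product, degree up to 4
    c0 = a0 * b0
    c1 = a0 * b1 + a1 * b0
    c2 = a0 * b2 + a1 * b1 + a2 * b0
    c3 = a1 * b2 + a2 * b1
    c4 = a2 * b2
    # reduce: c3·x^3 = c3·(x - 1), c4·x^4 = c4·(x^2 - x)
    return ((c0 - c3) % P, (c1 + c3 - c4) % P, (c2 + c4) % P)
```

Expanding these three return values with a = (st0, st1, st2) and b = (st3, st4, st5) recovers exactly the polynomials for st0', st1', and st2' above.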
Instruction x_invert
In addition to its instruction groups, this instruction has the following constraints.
Description
- The coefficient of x^0 of multiplying the X-field element on top of the current stack and the one on top of the next stack is 1.
- The coefficient of x^1 of multiplying the X-field element on top of the current stack and the one on top of the next stack is 0.
- The coefficient of x^2 of multiplying the X-field element on top of the current stack and the one on top of the next stack is 0.
Polynomials
st0·st0' - st2·st1' - st1·st2' - 1
st1·st0' + st0·st1' - st2·st2' + st2·st1' + st1·st2'
st2·st0' + st1·st1' + st0·st2' + st2·st2'
Instruction xb_mul
In addition to its instruction groups, this instruction has the following constraints.
Description
- The result of multiplying the top of the stack with the X-field element's coefficient for x^0 is moved into `st0`.
- The result of multiplying the top of the stack with the X-field element's coefficient for x^1 is moved into `st1`.
- The result of multiplying the top of the stack with the X-field element's coefficient for x^2 is moved into `st2`.
- `st4` is moved into `st3`.
- `st5` is moved into `st4`.
- `st6` is moved into `st5`.
- `st7` is moved into `st6`.
- `st8` is moved into `st7`.
- `st9` is moved into `st8`.
- `st10` is moved into `st9`.
- `st11` is moved into `st10`.
- `st12` is moved into `st11`.
- `st13` is moved into `st12`.
- `st14` is moved into `st13`.
- `st15` is moved into `st14`.
- The op stack pointer shrinks by 1.
- The running product with the Op Stack Table accumulates next row's `st15`.
Polynomials
st0' - st0·st1
st1' - st0·st2
st2' - st0·st3
st3' - st4
st4' - st5
st5' - st6
st6' - st7
st7' - st8
st8' - st9
st9' - st10
st10' - st11
st11' - st12
st12' - st13
st13' - st14
st14' - st15
op_stack_pointer' - op_stack_pointer + 1
RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊 - 🍉·op_stack_pointer' - 🫒·st15')
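The scalar-times-X-field product that the first three `xb_mul` constraints encode is simply coefficient-wise; a minimal sketch (function name ours):

```python
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def xb_mul(b, x):
    """Scale an X-field element coefficient-wise by a B-field element,
    mirroring st0' = st0·st1, st1' = st0·st2, st2' = st0·st3."""
    return tuple(b * c % P for c in x)

assert xb_mul(2, (1, 2, 3)) == (2, 4, 6)
assert xb_mul(P - 1, (1, 0, 0)) == (P - 1, 0, 0)  # P - 1 acts as -1
```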
Instruction `read_io + n`
In addition to its instruction groups, this instruction has the following constraints.
Description
- If `n` is 1, the running evaluation for standard input accumulates next row's `st0`;
- else if `n` is 2, the running evaluation for standard input accumulates next row's `st0` and `st1`;
- else if `n` is 3, the running evaluation for standard input accumulates next row's `st0` through `st2`;
- else if `n` is 4, the running evaluation for standard input accumulates next row's `st0` through `st3`;
- else if `n` is 5, the running evaluation for standard input accumulates next row's `st0` through `st4`.
Polynomials
ind_1(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·RunningEvaluationStandardInput - st0')
+ ind_2(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·RunningEvaluationStandardInput + st0') - st1')
+ ind_3(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') - st2')
+ ind_4(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') + st2') - st3')
+ ind_5(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') + st2') + st3') - st4')
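The nested update these polynomials encode is just the n-fold application of the single-symbol rule E' = 🛏·E + s. A hedged Python sketch (names ours; for brevity the arithmetic is done in the B-field, whereas the actual running evaluation and the challenge 🛏 are X-field elements):

```python
P = 2**64 - 2**32 + 1

def absorb(running_eval, symbols, beta):
    """Evaluation Argument update: one step E' = beta*E + s per absorbed symbol."""
    for s in symbols:
        running_eval = (beta * running_eval + s) % P
    return running_eval

# the n = 2 constraint is exactly the two-fold application of the n = 1 update
E, beta, st0, st1 = 7, 11, 5, 3
assert absorb(E, [st0, st1], beta) == (beta * (beta * E + st0) + st1) % P
```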
Instruction `write_io + n`
In addition to its instruction groups, this instruction has the following constraints.
Description
- If `n` is 1, the running evaluation for standard output accumulates `st0`;
- else if `n` is 2, the running evaluation for standard output accumulates `st0` and `st1`;
- else if `n` is 3, the running evaluation for standard output accumulates `st0` through `st2`;
- else if `n` is 4, the running evaluation for standard output accumulates `st0` through `st3`;
- else if `n` is 5, the running evaluation for standard output accumulates `st0` through `st4`.
Polynomials
ind_1(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·RunningEvaluationStandardOutput - st0)
+ ind_2(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·RunningEvaluationStandardOutput + st0) - st1)
+ ind_3(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) - st2)
+ ind_4(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) + st2) - st3)
+ ind_5(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) + st2) + st3) - st4)
Instruction merkle_step
Recall that in a Merkle tree, the indices of left (respectively right) leaves have 0 (respectively 1) as their least significant bit. This motivates the use of a helper variable to hold that least significant bit.
In addition to its instruction groups, this instruction has the following constraints. The two Evaluation Arguments also used for instruction hash guarantee correct transition of the top of the stack.
Description
- Helper variable `hv5` is either 0 or 1.
- `st5` is shifted by 1 bit to the right. In other words, twice `st5` in the next row plus `hv5` equals `st5` in the current row.
Helper variable definitions for merkle_step
- `hv0` through `hv4` hold the sibling digest of the node indicated by `st5`, as read from the interface for non-deterministic input.
- `hv5` holds the result of `st5 % 2`, the Merkle tree node index's least significant bit, indicating whether it is a left or right node.
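The bit-shift relation can be illustrated with a few lines of Python (names ours):

```python
def merkle_step_index(st5):
    """Split the Merkle tree node index: hv5 is the least significant bit
    (0 = left node, 1 = right node), the next row holds the parent index.
    The constraint is 2*st5' + hv5 == st5 with hv5 in {0, 1}."""
    hv5 = st5 % 2
    st5_next = st5 // 2  # parent node index
    assert hv5 in (0, 1) and 2 * st5_next + hv5 == st5
    return st5_next, hv5

assert merkle_step_index(13) == (6, 1)  # node 13 is a right child of node 6
assert merkle_step_index(6) == (3, 0)   # node 6 is a left child of node 3
```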
Instruction merkle_step_mem
In addition to its instruction groups, this instruction has the following constraints.
The two Evaluation Arguments also used for instruction hash guarantee correct transition of the top of the stack.
See also `merkle_step`.
Description
- Stack element `st6` does not change.
- Stack element `st7` is incremented by 5.
- Helper variable `hv5` is either 0 or 1.
- `st5` is shifted by 1 bit to the right. In other words, twice `st5` in the next row plus `hv5` equals `st5` in the current row.
- The running product with the RAM Table accumulates `hv0` through `hv4` using the (appropriately adjusted) RAM pointer held in `st7`.
Helper variable definitions for merkle_step_mem
- `hv0` through `hv4` hold the sibling digest of the node indicated by `st5`, as read from RAM at the address held in `st7`.
- `hv5` holds the result of `st5 % 2`, the Merkle tree node index's least significant bit, indicating whether it is a left or right node.
Instruction xx_dot_step
In addition to its instruction groups, this instruction has the following constraints.
Description
- Store `(RAM[st0], RAM[st0+1], RAM[st0+2])` in `(hv0, hv1, hv2)`.
- Store `(RAM[st1], RAM[st1+1], RAM[st1+2])` in `(hv3, hv4, hv5)`.
- Add `(hv0 + hv1·x + hv2·x²) · (hv3 + hv4·x + hv5·x²)` into `(st2, st3, st4)`.
- Increase the pointers: `st0` and `st1` by 3 each.
Instruction xb_dot_step
In addition to its instruction groups, this instruction has the following constraints.
Description
- Store `RAM[st0]` in `hv0`.
- Store `(RAM[st1], RAM[st1+1], RAM[st1+2])` in `(hv1, hv2, hv3)`.
- Add `hv0 · (hv1 + hv2·x + hv3·x²)` into `(st2, st3, st4)`.
- Increase the pointers: `st0` and `st1` by 1 and 3, respectively.
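A hedged simulation of one such step (names ours) may clarify the pointer and accumulator updates; scaling an X-field element by a B-field scalar is coefficient-wise, so no reduction modulo the Shah polynomial is needed here:

```python
P = 2**64 - 2**32 + 1

def xb_dot_step(ram, st0, st1, acc):
    """One xb_dot_step: multiply the B-field scalar at RAM[st0] with the X-field
    element at RAM[st1..st1+2], add it into the accumulator, advance the pointers."""
    b = ram[st0]
    x = (ram[st1], ram[st1 + 1], ram[st1 + 2])
    acc = tuple((a + b * c) % P for a, c in zip(acc, x))
    return st0 + 1, st1 + 3, acc

# hypothetical RAM contents: scalars at 0..1, X-field elements at 100..105
ram = {0: 2, 1: 3, 100: 5, 101: 6, 102: 7, 103: 8, 104: 9, 105: 10}
st0, st1, acc = 0, 100, (0, 0, 0)
st0, st1, acc = xb_dot_step(ram, st0, st1, acc)
st0, st1, acc = xb_dot_step(ram, st0, st1, acc)
assert (st0, st1) == (2, 106)
assert acc == (2*5 + 3*8, 2*6 + 3*9, 2*7 + 3*10)  # (34, 39, 44)
```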
Operational Stack Table
The operational stack^{1} is where the program stores simple elementary operations, function arguments, and pointers to important objects.
There are 16 registers (`st0` through `st15`) that the program can access directly.
These registers correspond to the top of the stack.
They are recorded in the Processor Table.
The rest of the op stack is stored in a dedicated memory object called “operational stack underflow memory”. It is initially empty. The evolution of the underflow memory is recorded in the Op Stack Table. The sole task of the Op Stack Table is to keep underflow memory immutable. To achieve this, any read or write accesses to the underflow memory are recorded in the Op Stack Table. Read and write accesses to op stack underflow memory are a side effect of shrinking or growing the op stack.
Main Columns
The Op Stack Table consists of 4 columns:
- the cycle counter `clk`,
- the shrink stack indicator `shrink_stack`, corresponding to the processor's instruction bit 1 `ib1`,
- the op stack pointer `stack_pointer`, and
- the first underflow element `first_underflow_element`.
Clock  Shrink Stack Indicator  Stack Pointer  First Underflow Element 

       
Column `clk` records the processor's execution cycle during which the read or write access happens.
The `shrink_stack` indicator signals whether the underflow memory access is a read or a write:
a read corresponds to a shrinking stack and is indicated by a 1; a write corresponds to a growing stack and is indicated by a 0.
The same column doubles up as a padding indicator, in which case `shrink_stack` is set to 2.
The `stack_pointer` is the address at which the to-be-read-or-written element resides.
Finally, the `first_underflow_element` is the stack element being transferred from stack register `st15` into underflow memory on a write, or the other way around on a read.
A subset Permutation Argument with the Processor Table establishes that the rows recorded in the Op Stack Table are consistent with the processor's part of the op stack.
In order to guarantee memory consistency, the rows of the operational stack table are sorted by `stack_pointer` first, cycle count `clk` second.
The mechanics are best illustrated by an example.
Observe how the malicious manipulation of the Op Stack Underflow Memory, the change of “42” into “99” in cycle 8, is detected:
The transition constraints of the Op Stack Table stipulate that if the `stack_pointer` does not change, then the `first_underflow_element` can only change if the next instruction grows the stack.
Consequently, row `[4, 0, 8, 42]` being followed by row `[10, 1, 8, 99]` violates the constraints.
The shrink stack indicator being correct is guaranteed by the Permutation Argument between Op Stack Table and the Processor Table.
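The detection logic can be sketched in a few lines of Python (names ours; this simplification ignores padding rows):

```python
def transition_ok(row, next_row):
    """Sketch of the Op Stack Table transition constraint. Rows are
    (clk, shrink_stack, stack_pointer, first_underflow_element)."""
    _, _, sp, fue = row
    _, shrink_next, sp_next, fue_next = next_row
    if sp_next == sp + 1:   # stack_pointer grows: a new region begins
        return True
    if sp_next == sp:       # same address: element may only change on a write
        return fue_next == fue or shrink_next == 0
    return False

assert not transition_ok((4, 0, 8, 42), (10, 1, 8, 99))  # tampering is caught
assert transition_ok((4, 0, 8, 42), (14, 0, 8, 77))      # legitimate overwrite
```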
For illustrative purposes only, we use four stack registers `st0` through `st3` in the example.
Triton VM has 16 stack registers, `st0` through `st15`.
Furthermore, implied next instructions, usually recorded in register “next instruction or argument” `nia`, are omitted for reasons of readability.
Processor's execution trace:
clk  ci  nia  st0  st1  st2  st3  Op Stack Underflow Memory  op_stack_pointer 

0  push  42  0  0  0  0  []  4 
1  push  43  42  0  0  0  [ 0]  5 
2  push  44  43  42  0  0  [ 0, 0]  6 
3  push  45  44  43  42  0  [ 0, 0, 0]  7 
4  push  46  45  44  43  42  [ 0, 0, 0, 0]  8 
5  push  47  46  45  44  43  [42, 0, 0, 0, 0]  9 
6  push  48  47  46  45  44  [43, 42, 0, 0, 0, 0]  10 
7  nop  48  47  46  45  [44, 43, 42, 0, 0, 0]  11  
8  pop  48  47  46  45  [44, 43, 99, 0, 0, 0]  11  
9  pop  47  46  45  44  [43, 99, 0, 0, 0, 0]  10  
10  pop  46  45  44  43  [99, 0, 0, 0, 0]  9  
11  pop  45  44  43  99  [ 0, 0, 0, 0]  8  
12  push  77  44  43  99  0  [ 0, 0, 0]  7 
13  swap  3  77  44  43  99  [ 0, 0, 0, 0]  8 
14  push  78  99  44  43  77  [ 0, 0, 0, 0]  8 
15  swap  3  78  99  44  43  [77, 0, 0, 0, 0]  9 
16  push  79  43  99  44  78  [77, 0, 0, 0, 0]  9 
17  pop  79  43  99  44  [78, 77, 0, 0, 0, 0]  10  
18  pop  43  99  44  78  [77, 0, 0, 0, 0]  9  
19  pop  99  44  78  77  [ 0, 0, 0, 0]  8  
20  pop  44  78  77  0  [ 0, 0, 0]  7  
21  pop  78  77  0  0  [ 0, 0]  6  
22  pop  77  0  0  0  [ 0]  5  
23  halt  0  0  0  0  []  4 
Operational Stack Table:
clk  shrink_stack  stack_pointer  first_underflow_element 

0  0  4  0 
22  1  4  0 
1  0  5  0 
21  1  5  0 
2  0  6  0 
20  1  6  0 
3  0  7  0 
11  1  7  0 
12  0  7  0 
19  1  7  0 
4  0  8  42 
10  1  8  99 
14  0  8  77 
18  1  8  77 
5  0  9  43 
9  1  9  43 
16  0  9  78 
17  1  9  78 
6  0  10  44 
8  1  10  44 
Auxiliary Columns
The Op Stack Table has 2 auxiliary columns, `rppa` and `ClockJumpDifferenceLookupClientLogDerivative`.
- A Permutation Argument establishes that the rows of the Op Stack Table correspond to the rows of the Processor Table. The running product for this argument is contained in the `rppa` column.
- In order to achieve memory consistency, a Lookup Argument shows that all clock jump differences are contained in the `clk` column of the Processor Table. The logarithmic derivative for this argument is contained in the `ClockJumpDifferenceLookupClientLogDerivative` column.
Padding
The last row in the Op Stack Table is taken as the padding template row.
Should the Op Stack Table be empty, the row (`clk`, `shrink_stack`, `stack_pointer`, `first_underflow_element`) = (0, 0, 16, 0) is used instead.
In the template row, the `shrink_stack` indicator is set to 2, signifying padding.
The template row is inserted below the last row until the desired padded height is reached.
Memory-Consistency
Memory-consistency follows from two more primitive properties:
- Contiguity of regions of constant memory pointer. Since in the Op Stack Table the `stack_pointer` can change by at most one per cycle, it is possible to enforce a full sorting using AIR constraints.
- Correct inner-sorting within contiguous regions. Specifically, the rows within each contiguous region of constant memory pointer should be sorted by clock cycle. This property is established by the clock jump difference Lookup Argument. In a nutshell, every difference of consecutive clock cycles that occurs within one contiguous block of constant memory pointer is itself shown to be a valid clock cycle through a separate cross-table argument.
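The second property can be sketched concretely: per contiguous region, collect the differences of consecutive clock cycles; each collected difference is then looked up in the Processor Table's `clk` column. A minimal Python sketch (names ours):

```python
def clock_jump_differences(rows):
    """Collect clk differences within contiguous regions of constant stack_pointer.
    Rows are (clk, stack_pointer) pairs, already sorted by pointer, then clk.
    Every collected difference must itself be a valid clock cycle."""
    return [
        nxt[0] - cur[0]
        for cur, nxt in zip(rows, rows[1:])
        if cur[1] == nxt[1]  # only within one contiguous region
    ]

# a few rows from the example Op Stack Table above
rows = [(0, 4), (22, 4), (1, 5), (21, 5), (4, 8), (10, 8), (14, 8), (18, 8)]
assert clock_jump_differences(rows) == [22, 20, 6, 4, 4]
```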
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $F_{p^3}$.
Initial Constraints
- The `stack_pointer` is the number of available stack registers, i.e., 16.
- If the row is not a padding row, the running product for the permutation argument with the Processor Table `rppa` starts off having accumulated the first row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤. Otherwise, it is the Permutation Argument's default initial, i.e., 1.
- The logarithmic derivative for the clock jump difference lookup `ClockJumpDifferenceLookupClientLogDerivative` is 0.
Initial Constraints as Polynomials
stack_pointer - 16
(shrink_stack - 2)·(rppa - (🪤 - 🍋·clk - 🍊·shrink_stack - 🍉·stack_pointer - 🫒·first_underflow_element))
+ (shrink_stack - 0)·(shrink_stack - 1)·(rppa - 1)
ClockJumpDifferenceLookupClientLogDerivative
Consistency Constraints
None.
Transition Constraints
- One of the following:
  - the `stack_pointer` increases by 1, or
  - the `stack_pointer` does not change AND the `first_underflow_element` does not change, or
  - the `stack_pointer` does not change AND the shrink stack indicator `shrink_stack` in the next row is 0.
- If the next row is not a padding row, the running product for the permutation argument with the Processor Table `rppa` absorbs the next row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤. Otherwise, the running product remains unchanged.
- If the current row is a padding row, then the next row is a padding row.
- If the next row is not a padding row and the op stack pointer `stack_pointer` does not change, then the logarithmic derivative for the clock jump difference lookup `ClockJumpDifferenceLookupClientLogDerivative` accumulates a factor `(clk' - clk)` relative to indeterminate 🪞. Otherwise, it remains unchanged.
Written as Disjunctive Normal Form, the same constraints can be expressed as:
- The `stack_pointer` increases by 1 or the `stack_pointer` does not change.
- The `stack_pointer` increases by 1 or the `first_underflow_element` does not change or the shrink stack indicator `shrink_stack` in the next row is 0.
- Both of the following:
  - the next row is a padding row or `rppa` has accumulated the next row, and
  - the next row is not a padding row or `rppa` remains unchanged.
- The current row is not a padding row or the next row is a padding row.
- All of the following:
  - the `stack_pointer` changes or the next row is a padding row or the logarithmic derivative accumulates a summand,
  - the `stack_pointer` remains unchanged or the logarithmic derivative remains unchanged, and
  - the next row is not a padding row or the logarithmic derivative remains unchanged.
Transition Constraints as Polynomials
(stack_pointer' - stack_pointer - 1)·(stack_pointer' - stack_pointer)
(stack_pointer' - stack_pointer - 1)·(first_underflow_element' - first_underflow_element)·(shrink_stack' - 0)
(shrink_stack' - 2)·(rppa' - rppa·(🪤 - 🍋·clk' - 🍊·shrink_stack' - 🍉·stack_pointer' - 🫒·first_underflow_element'))
+ (shrink_stack' - 0)·(shrink_stack' - 1)·(rppa' - rppa)
(shrink_stack - 0)·(shrink_stack - 1)·(shrink_stack' - 2)
(stack_pointer' - stack_pointer - 1)·(shrink_stack' - 2)·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)·(🪞 - clk' + clk) - 1)
+ (stack_pointer' - stack_pointer)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
+ (shrink_stack' - 0)·(shrink_stack' - 1)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
Terminal Constraints
None.
frequently abbreviated as “Op Stack”
Random Access Memory Table
The purpose of the RAM Table is to ensure that the RAM is accessed in a consistent manner.
That is, all mutation of RAM happens explicitly through instruction `write_mem`,
and any invocations of instruction `read_mem` return the values last written.
The fundamental principle is identical to that of the Op Stack Table.
The main difference is the absence of a dedicated stack pointer.
Instead, op stack element `st0` is used as RAM pointer `ram_pointer`.
If some RAM address is read from before it is written to, the corresponding value is not determined.
This is one of the interfaces for non-deterministic input in Triton VM.
Consecutive reads from any such address always return the same value (until overwritten via `write_mem`).
Main Columns
The RAM Table has 7 main columns:
- the cycle counter `clk`,
- the executed `instruction_type` – 0 for “write”, 1 for “read”, 2 for padding rows,
- RAM pointer `ram_pointer`,
- RAM value `ram_value`,
- helper variable “inverse of `ram_pointer` difference” `iord`,
- Bézout coefficient polynomial coefficient 0 `bcpc0`, and
- Bézout coefficient polynomial coefficient 1 `bcpc1`.
Column `iord` helps with detecting a change of `ram_pointer` across two RAM Table rows.
The function of `iord` is best explained in the context of sorting the RAM Table's rows, which is what the next section is about.
The Bézout coefficient polynomial coefficients `bcpc0` and `bcpc1` represent the coefficients of polynomials that are needed for the contiguity argument.
This argument establishes that all regions of constant `ram_pointer` are contiguous.
Auxiliary Columns
The RAM Table has 6 auxiliary columns:
- `RunningProductOfRAMP`, accumulating next row's `ram_pointer` as a root whenever `ram_pointer` changes between two rows,
- `FormalDerivative`, the (evaluated) formal derivative of `RunningProductOfRAMP`,
- `BezoutCoefficient0`, the (evaluated) polynomial with main column `bcpc0` as coefficients,
- `BezoutCoefficient1`, the (evaluated) polynomial with main column `bcpc1` as coefficients,
- `RunningProductPermArg`, the Permutation Argument with the Processor Table, and
- `ClockJumpDifferenceLookupClientLogDerivative`, part of memory consistency.
Columns `RunningProductOfRAMP`, `FormalDerivative`, `BezoutCoefficient0`, and `BezoutCoefficient1` are part of the Contiguity Argument.
Sorting Rows
In the RAM Table, the rows are arranged such that they
- form contiguous regions of `ram_pointer`, and
- are sorted by cycle counter `clk` within each such region.
One way to achieve this is to sort by `ram_pointer` first, `clk` second.
Coming back to `iord`:
if the difference between `ram_pointer` in row $i$ and row $i+1$ is 0, then `iord` in row $i$ is 0.
Otherwise, `iord` in row $i$ is the multiplicative inverse of the difference between `ram_pointer` in row $i+1$ and `ram_pointer` in row $i$.
In the last row, there being no next row, `iord` is 0.
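A sketch of how a prover could fill the `iord` column (names ours; inverses are computed via Fermat's little theorem):

```python
P = 2**64 - 2**32 + 1  # the Oxfoi prime

def iord_column(ram_pointers):
    """Compute the "inverse of ram_pointer difference" column: 0 when the pointer
    does not change (and in the last row), the inverse of the difference otherwise."""
    col = []
    for cur, nxt in zip(ram_pointers, ram_pointers[1:]):
        diff = (nxt - cur) % P
        col.append(pow(diff, P - 2, P) if diff else 0)
    col.append(0)  # the last row has no successor
    return col

ptrs = [42, 42, 42, 43, 43, 100]
col = iord_column(ptrs)
assert col[0] == col[1] == 0 and col[5] == 0   # pointer unchanged / last row
assert col[2] * (43 - 42) % P == 1             # inverse of the difference
assert col[4] * (100 - 43) % P == 1
```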
An example of the mechanics can be found below.
To increase visual clarity, stack registers holding value “0” are represented by an empty cell.
For illustrative purposes only, we use six stack registers `st0` through `st5` in the example.
Triton VM has 16 stack registers, `st0` through `st15`.
Processor Table:
clk  ci  nia  st0  st1  st2  st3  st4  st5 

0  push  20  
1  push  100  20  
2  write_mem  1  100  20  
3  pop  1  101  
4  push  5  
5  push  6  5  
6  push  7  6  5  
7  push  8  7  6  5  
8  push  9  8  7  6  5  
9  push  42  9  8  7  6  5  
10  write_mem  5  42  9  8  7  6  5 
11  pop  1  47  
12  push  42  
13  read_mem  1  42  
14  pop  2  41  9  
15  push  45  
16  read_mem  3  45  
17  pop  4  42  8  7  6  
18  push  17  
19  push  18  17  
20  push  19  18  17  
21  push  43  19  18  17  
22  write_mem  3  43  19  18  17  
23  pop  1  46  
24  push  46  
25  read_mem  5  46  
26  pop  1  41  9  19  18  17  5 
27  pop  5  9  19  18  17  5  
28  push  42  
29  read_mem  1  42  
30  pop  2  41  9  
31  push  100  
32  read_mem  1  100  
33  pop  2  99  20  
34  halt 
RAM Table:
clk  type  pointer  value  iord 

10  write  42  9  0 
13  read  42  9  0 
25  read  42  9  0 
29  read  42  9  1 
10  write  43  8  0 
16  read  43  8  0 
22  write  43  19  0 
25  read  43  19  1 
10  write  44  7  0 
16  read  44  7  0 
22  write  44  18  0 
25  read  44  18  1 
10  write  45  6  0 
16  read  45  6  0 
22  write  45  17  0 
25  read  45  17  1 
10  write  46  5  0 
25  read  46  5  $54^{-1}$ 
2  write  100  20  0 
32  read  100  20  0 
Padding
The row used for padding the RAM Table is its last row, with the `instruction_type` set to 2.
If the RAM Table is empty, the all-zero row with the following modifications is used instead:
- `instruction_type` is set to 2, and
- `bcpc1` is set to 1.
This ensures that the Contiguity Argument works correctly in the absence of any actual contiguous RAM pointer region.
The padding row is inserted below the RAM Table until the desired height is reached.
Row Permutation Argument
The permutation argument with the Processor Table establishes that the RAM Table's rows correspond to the Processor Table's sent and received RAM values, at the correct cycle counter and RAM address.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $F_{p^3}$.
Initial Constraints
- The first coefficient of the Bézout coefficient polynomial 0 `bcpc0` is 0.
- The Bézout coefficient 0 `bc0` is 0.
- The Bézout coefficient 1 `bc1` is equal to the first coefficient of the Bézout coefficient polynomial `bcpc1`.
- The running product polynomial `RunningProductOfRAMP` starts with `🧼 - ram_pointer`.
- The formal derivative starts with 1.
- If the first row is not a padding row, the running product for the permutation argument with the Processor Table `RunningProductPermArg` has absorbed the first row with respect to challenges 🍍, 🍈, 🍎, and 🌽 and indeterminate 🛋. Else, the running product for the permutation argument with the Processor Table `RunningProductPermArg` is 1.
- The logarithmic derivative for the clock jump difference lookup `ClockJumpDifferenceLookupClientLogDerivative` is 0.
Initial Constraints as Polynomials
bcpc0
bc0
bc1 - bcpc1
RunningProductOfRAMP - 🧼 + ram_pointer
FormalDerivative - 1
(RunningProductPermArg - (🛋 - 🍍·clk - 🍈·ram_pointer - 🍎·ram_value - 🌽·instruction_type))·(instruction_type - 2)
(RunningProductPermArg - 1)·(instruction_type - 1)·(instruction_type - 0)
ClockJumpDifferenceLookupClientLogDerivative
Consistency Constraints
None.
Transition Constraints
- If the current row is a padding row, then the next row is a padding row.
- The “inverse of `ram_pointer` difference” `iord` is 0 or `iord` is the inverse of the difference between current and next row's `ram_pointer`.
- The `ram_pointer` changes or `iord` is the inverse of the difference between current and next row's `ram_pointer`.
- The `ram_pointer` changes or `instruction_type` is “write” or the `ram_value` remains unchanged.
- The `bcpc0` changes if and only if the `ram_pointer` changes.
- The `bcpc1` changes if and only if the `ram_pointer` changes.
- If the `ram_pointer` changes, the `RunningProductOfRAMP` accumulates next `ram_pointer`. Otherwise, it remains unchanged.
- If the `ram_pointer` changes, the `FormalDerivative` updates under the product rule of differentiation. Otherwise, it remains unchanged.
- If the `ram_pointer` changes, Bézout coefficient 0 `bc0` updates according to the running evaluation rules with respect to `bcpc0`. Otherwise, it remains unchanged.
- If the `ram_pointer` changes, Bézout coefficient 1 `bc1` updates according to the running evaluation rules with respect to `bcpc1`. Otherwise, it remains unchanged.
- If the next row is not a padding row, the `RunningProductPermArg` accumulates the next row. Otherwise, it remains unchanged.
- If the `ram_pointer` does not change and the next row is not a padding row, the `ClockJumpDifferenceLookupClientLogDerivative` accumulates the difference of `clk`. Otherwise, it remains unchanged.
Transition Constraints as Polynomials
(instruction_type - 0)·(instruction_type - 1)·(instruction_type' - 2)
(iord·(ram_pointer' - ram_pointer) - 1)·iord
(iord·(ram_pointer' - ram_pointer) - 1)·(ram_pointer' - ram_pointer)
(iord·(ram_pointer' - ram_pointer) - 1)·(instruction_type - 0)·(ram_value' - ram_value)
(iord·(ram_pointer' - ram_pointer) - 1)·(bcpc0' - bcpc0)
(iord·(ram_pointer' - ram_pointer) - 1)·(bcpc1' - bcpc1)
(iord·(ram_pointer' - ram_pointer) - 1)·(RunningProductOfRAMP' - RunningProductOfRAMP)
+ (ram_pointer' - ram_pointer)·(RunningProductOfRAMP' - RunningProductOfRAMP·(🧼 - ram_pointer'))
(iord·(ram_pointer' - ram_pointer) - 1)·(FormalDerivative' - FormalDerivative)
+ (ram_pointer' - ram_pointer)·(FormalDerivative' - FormalDerivative·(🧼 - ram_pointer') - RunningProductOfRAMP)
(iord·(ram_pointer' - ram_pointer) - 1)·(bc0' - bc0)
+ (ram_pointer' - ram_pointer)·(bc0' - bc0·🧼 - bcpc0')
(iord·(ram_pointer' - ram_pointer) - 1)·(bc1' - bc1)
+ (ram_pointer' - ram_pointer)·(bc1' - bc1·🧼 - bcpc1')
(RunningProductPermArg' - RunningProductPermArg·(🛋 - 🍍·clk' - 🍈·ram_pointer' - 🍎·ram_value' - 🌽·instruction_type'))·(instruction_type' - 2)
+ (RunningProductPermArg' - RunningProductPermArg)·(instruction_type' - 1)·(instruction_type' - 0)
(iord·(ram_pointer' - ram_pointer) - 1)·(instruction_type' - 2)·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)·(🪞 - clk' + clk) - 1)
+ (ram_pointer' - ram_pointer)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
+ (instruction_type' - 1)·(instruction_type' - 0)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
Terminal Constraints
- The Bézout relation holds between `RunningProductOfRAMP`, `FormalDerivative`, `bc0`, and `bc1`.
Terminal Constraints as Polynomials
RunningProductOfRAMP·bc0 + FormalDerivative·bc1 - 1
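The terminal check can be replayed concretely: build the polynomial f with the distinct RAM pointers as roots, take its formal derivative fd, and compute Bézout coefficients with the extended Euclidean algorithm — they exist precisely because f is squarefree, i.e., because every `ram_pointer` region is contiguous. The sketch below (all helper names ours) works with explicit little-endian coefficient lists, whereas the table only carries these polynomials evaluated at the challenge 🧼.

```python
P = 2**64 - 2**32 + 1  # coefficient field F_p

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % P
                 for i in range(n)])

def psub(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)) % P
                 for i in range(n)])

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return trim(h)

def pdivmod(f, g):
    f = f[:]
    q = [0] * max(len(f) - len(g) + 1, 0)
    g_lead_inv = pow(g[-1], P - 2, P)
    while len(f) >= len(g):
        d, c = len(f) - len(g), f[-1] * g_lead_inv % P
        q[d] = c
        f = psub(f, [0] * d + [x * c % P for x in g])
    return trim(q), f

def xgcd(f, g):
    """Extended Euclid for polynomials: returns (gcd, u, v) with u·f + v·g = gcd."""
    r0, r1, u0, u1, v0, v1 = f, g, [1], [], [], [1]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
        v0, v1 = v1, psub(v0, pmul(q, v1))
    c = pow(r0[-1], P - 2, P)  # make the gcd monic
    return ([x * c % P for x in r0], [x * c % P for x in u0], [x * c % P for x in v0])

# f has the (distinct!) RAM pointers of the example as roots
pointers = [42, 43, 44, 45, 46, 100]
f = [1]
for ramp in pointers:
    f = pmul(f, [(-ramp) % P, 1])                    # multiply in the root
fd = trim([i * c % P for i, c in enumerate(f)][1:])  # formal derivative

gcd, bc0, bc1 = xgcd(f, fd)
assert gcd == [1]                                # squarefree: regions are contiguous
assert padd(pmul(bc0, f), pmul(bc1, fd)) == [1]  # the terminal constraint
```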
Jump Stack Table
The Jump Stack Memory contains the underflow from the Jump Stack.
Main Columns
The Jump Stack Table consists of 5 columns:
- the cycle counter `clk`,
- the current instruction `ci`,
- the jump stack pointer `jsp`,
- the last jump's origin `jso`, and
- the last jump's destination `jsd`.
Clock  Current Instruction  Jump Stack Pointer  Jump Stack Origin  Jump Stack Destination 

         
The rows are sorted by jump stack pointer `jsp`, then by cycle counter `clk`.
The column `jsd` contains the destination of the stack-extending jump (`call`) as well as of the no-stack-change jump (`recurse`);
the column `jso` contains the source of the stack-extending jump (`call`) or, equivalently, the destination of the stack-shrinking jump (`return`).
The AIR for this table guarantees that the return address in a single cell of return-address memory can change only if there was a `call` instruction.
An example program, execution trace, and jump stack table are shown below.
Program:
address  instruction 

0x00  foo 
0x01  bar 
0x02  call 
0x03  0xA0 
0x04  buzz 
0x05  bar 
0x06  call 
0x07  0xB0 
0x08  foo 
0x09  bar 
⋮  ⋮ 
0xA0  buzz 
0xA1  foo 
0xA2  bar 
0xA3  return 
0xA4  foo 
⋮  ⋮ 
0xB0  foo 
0xB1  call 
0xB2  0xC0 
0xB3  return 
0xB4  bazz 
⋮  ⋮ 
0xC0  buzz 
0xC1  foo 
0xC2  bar 
0xC3  return 
0xC4  buzz 
Execution trace:
clk  ip  ci  nia  jsp  jso  jsd  jump stack 

0  0x00  foo  bar  0  0x00  0x00  [ ] 
1  0x01  bar  call  0  0x00  0x00  [ ] 
2  0x02  call  0xA0  0  0x00  0x00  [ ] 
3  0xA0  buzz  foo  1  0x04  0xA0  [(0x04 , 0xA0 )] 
4  0xA1  foo  bar  1  0x04  0xA0  [(0x04 , 0xA0 )] 
5  0xA2  bar  return  1  0x04  0xA0  [(0x04 , 0xA0 )] 
6  0xA3  return  foo  1  0x04  0xA0  [(0x04 , 0xA0 )] 
7  0x04  buzz  bar  0  0x00  0x00  [ ] 
8  0x05  bar  call  0  0x00  0x00  [ ] 
9  0x06  call  0xB0  0  0x00  0x00  [ ] 
10  0xB0  foo  call  1  0x08  0xB0  [(0x08 , 0xB0 )] 
11  0xB1  call  0xC0  1  0x08  0xB0  [(0x08 , 0xB0 )] 
12  0xC0  buzz  foo  2  0xB3  0xC0  [(0x08 , 0xB0 ), (0xB3 , 0xC0 )] 
13  0xC1  foo  bar  2  0xB3  0xC0  [(0x08 , 0xB0 ), (0xB3 , 0xC0 )] 
14  0xC2  bar  return  2  0xB3  0xC0  [(0x08 , 0xB0 ), (0xB3 , 0xC0 )] 
15  0xC3  return  buzz  2  0xB3  0xC0  [(0x08 , 0xB0 ), (0xB3 , 0xC0 )] 
16  0xB3  return  bazz  1  0x08  0xB0  [(0x08 , 0xB0 )] 
17  0x08  foo  bar  0  0x00  0x00  [ ] 
Jump Stack Table:
clk  ci  jsp  jso  jsd 

0  foo  0  0x00  0x00 
1  bar  0  0x00  0x00 
2  call  0  0x00  0x00 
7  buzz  0  0x00  0x00 
8  bar  0  0x00  0x00 
9  call  0  0x00  0x00 
17  foo  0  0x00  0x00 
3  buzz  1  0x04  0xA0 
4  foo  1  0x04  0xA0 
5  bar  1  0x04  0xA0 
6  return  1  0x04  0xA0 
10  foo  1  0x08  0xB0 
11  call  1  0x08  0xB0 
16  return  1  0x08  0xB0 
12  buzz  2  0xB3  0xC0 
13  foo  2  0xB3  0xC0 
14  bar  2  0xB3  0xC0 
15  return  2  0xB3  0xC0 
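Constructing the Jump Stack Table from the processor's trace is then just a lexicographic sort, as a small sketch (names ours) shows:

```python
def jump_stack_table(trace):
    """Build the Jump Stack Table from processor rows (clk, ci, jsp, jso, jsd):
    sort by jump stack pointer first, cycle counter second."""
    return sorted(trace, key=lambda row: (row[2], row[0]))

# a few rows from the execution trace above
trace = [
    (0, "foo", 0, 0x00, 0x00),
    (3, "buzz", 1, 0x04, 0xA0),
    (7, "buzz", 0, 0x00, 0x00),
    (12, "buzz", 2, 0xB3, 0xC0),
]
table = jump_stack_table(trace)
assert [row[0] for row in table] == [0, 7, 3, 12]  # clk order within jsp regions
```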
Auxiliary Columns
The Jump Stack Table has 2 auxiliary columns, `rppa` and `ClockJumpDifferenceLookupClientLogDerivative`.
- A Permutation Argument establishes that the rows of the Jump Stack Table match with the rows in the Processor Table. The running product for this argument is contained in the `rppa` column.
- In order to achieve memory consistency, a Lookup Argument shows that all clock jump differences are contained in the `clk` column of the Processor Table. The logarithmic derivative for this argument is contained in the `ClockJumpDifferenceLookupClientLogDerivative` column.
Padding
A padding row is a direct copy of the Jump Stack Table's row with the highest value for column `clk`, called the template row, with the exception of the cycle count column `clk`.
In a padding row, the value of column `clk` is 1 greater than the value of column `clk` in the template row.
The padding row is inserted right below the template row.
These steps are repeated until the desired padded height is reached.
In total, the above steps ensure that the Permutation Argument between the Jump Stack Table and the Processor Table holds up.
Memory-Consistency
Memory-consistency follows from two more primitive properties:
- Contiguity of regions of constant memory pointer. Since the memory pointer for the Jump Stack Table, `jsp`, can change by at most one per cycle, it is possible to enforce a full sorting using AIR constraints.
- Correct inner-sorting within contiguous regions. Specifically, the rows within each contiguous region of constant memory pointer should be sorted by clock cycle. This property is established by the clock jump difference Lookup Argument. In a nutshell, every difference of consecutive clock cycles that occurs within one contiguous block of constant memory pointer is itself shown to be a valid clock cycle through a separate cross-table argument.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $F_{p^3}$.
Initial Constraints
- Cycle count `clk` is 0.
- Jump Stack Pointer `jsp` is 0.
- Jump Stack Origin `jso` is 0.
- Jump Stack Destination `jsd` is 0.
- The running product for the permutation argument with the Processor Table `rppa` has absorbed the first row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
- The logarithmic derivative for the clock jump difference lookup `ClockJumpDifferenceLookupClientLogDerivative` is 0.
Initial Constraints as Polynomials
clk
jsp
jso
jsd
rppa - (🧴 - 🍇·clk - 🍅·ci - 🍌·jsp - 🍏·jso - 🍐·jsd)
ClockJumpDifferenceLookupClientLogDerivative
Consistency Constraints
None.
Transition Constraints
- One of the following:
  - the jump stack pointer `jsp` increases by 1, or
  - (`jsp` does not change and `jso` does not change and `jsd` does not change and the cycle counter `clk` increases by 1), or
  - (`jsp` does not change and `jso` does not change and `jsd` does not change and the current instruction `ci` is `call`), or
  - (`jsp` does not change and the current instruction `ci` is `return`), or
  - (`jsp` does not change and the current instruction `ci` is `recurse_or_return`).
- The running product for the permutation argument `rppa` absorbs the next row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
- If the jump stack pointer `jsp` does not change, then the logarithmic derivative for the clock jump difference lookup `ClockJumpDifferenceLookupClientLogDerivative` accumulates a factor `(clk' - clk)` relative to indeterminate 🪞. Otherwise, it remains the same.
Written as Disjunctive Normal Form, the same constraints can be expressed as:
- The jump stack pointer `jsp` increases by 1 or the jump stack pointer `jsp` does not change.
- The jump stack pointer `jsp` increases by 1 or the jump stack origin `jso` does not change or current instruction `ci` is `return` or `recurse_or_return`.
- The jump stack pointer `jsp` increases by 1 or the jump stack destination `jsd` does not change or current instruction `ci` is `return` or `recurse_or_return`.
- The jump stack pointer `jsp` increases by 1 or the cycle count `clk` increases by 1 or current instruction `ci` is `call` or current instruction `ci` is `return` or `recurse_or_return`.
- `rppa' - rppa·(🧴 - 🍇·clk' - 🍅·ci' - 🍌·jsp' - 🍏·jso' - 🍐·jsd')`
- Both of the following:
  - the `jsp` changes or the logarithmic derivative accumulates a summand, and
  - the `jsp` does not change or the logarithmic derivative does not change.
 the
Transition Constraints as Polynomials

1. `(jsp' - (jsp + 1))·(jsp' - jsp)`
2. `(jsp' - (jsp + 1))·(jso' - jso)·(ci - op_code(return))·(ci - op_code(recurse_or_return))`
3. `(jsp' - (jsp + 1))·(jsd' - jsd)·(ci - op_code(return))·(ci - op_code(recurse_or_return))`
4. `(jsp' - (jsp + 1))·(clk' - (clk + 1))·(ci - op_code(call))·(ci - op_code(return))·(ci - op_code(recurse_or_return))`
5. `rppa' - rppa·(🧴 - 🍇·clk' - 🍅·ci' - 🍌·jsp' - 🍏·jso' - 🍐·jsd')`
6. `(jsp' - (jsp + 1))·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)·(🪞 - clk' + clk) - 1)`
   `+ (jsp' - jsp)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)`
Terminal Constraints
None.
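To make the permutation argument behind `rppa` concrete, the following is a minimal Python sketch of how rows are compressed and multiplied into a running product, and why the final product is independent of row order. All numeric challenge values are made-up stand-ins; in Triton VM the challenges 🍇, 🍅, 🍌, 🍏, 🍐 and the indeterminate 🧴 are X-field elements sampled by the verifier.

```python
# Toy sketch of the permutation argument's running product `rppa`.
# Challenge values and rows are illustrative stand-ins, not real trace data.
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def compress_row(row, challenges, indeterminate):
    """Compress one (clk, ci, jsp, jso, jsd) row into a single field element."""
    weighted = sum(c * r for c, r in zip(challenges, row)) % P
    return (indeterminate - weighted) % P

def running_product(rows, challenges, indeterminate):
    """Absorb every row, including the first (cf. the initial constraint)."""
    rppa = 1
    for row in rows:
        rppa = rppa * compress_row(row, challenges, indeterminate) % P
    return rppa

rows = [(0, 7, 0, 0, 0), (1, 7, 0, 0, 0), (2, 9, 1, 2, 5)]
challenges = [3, 5, 7, 11, 13]  # stand-ins for 🍇, 🍅, 🍌, 🍏, 🍐
indeterminate = 1_000_003       # stand-in for 🧴

# Two tables holding the same rows in a different order give the same product,
# which is exactly what the permutation argument with the Processor Table uses.
assert running_product(rows, challenges, indeterminate) == \
    running_product(list(reversed(rows)), challenges, indeterminate)
```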
Hash Table
The instruction `hash` hashes the Op Stack's 10 topmost elements in one cycle.
Similarly, the Sponge instructions `sponge_init`, `sponge_absorb`, and `sponge_squeeze` also all complete in one cycle.
The main processor achieves this by using a hash coprocessor.
The Hash Table is part of the arithmetization of that coprocessor, the other two parts being the Cascade Table and the Lookup Table.
In addition to accelerating these hashing instructions, the Hash Table helps with program attestation by hashing the program.
Note: the Hash Table is not “aware” of instruction `sponge_absorb_mem`.
Instead, the processor requests a “regular” `sponge_absorb` from the Hash Table, fetching the to-be-absorbed elements from RAM instead of the stack.
The arithmetization for instruction `hash`, the Sponge instructions `sponge_init`, `sponge_absorb`, and `sponge_squeeze`, and for program hashing are quite similar.
The main differences are in the updates to the `state` registers between executions of the pseudorandom permutation used in Triton VM, the permutation of Tip5.
A summary of the four instructions' mechanics:

- Instruction `hash`
    1. sets all the hash coprocessor's rate registers (`state_0` through `state_9`) to equal the processor's stack registers `st0` through `st9`,
    2. sets all the hash coprocessor's capacity registers (`state_10` through `state_15`) to 1,
    3. executes the 5 rounds of the Tip5 permutation,
    4. overwrites the processor's stack registers `st0` through `st4` with 0, and
    5. overwrites the processor's stack registers `st5` through `st9` with the hash coprocessor's registers `state_0` through `state_4`.
- Instruction `sponge_init`
    1. sets all the hash coprocessor's registers (`state_0` through `state_15`) to 0.
- Instruction `sponge_absorb`
    1. overwrites the hash coprocessor's rate registers (`state_0` through `state_9`) with the processor's stack registers `st0` through `st9`, and
    2. executes the 5 rounds of the Tip5 permutation.
- Instruction `sponge_squeeze`
    1. overwrites the processor's stack registers `st0` through `st9` with the hash coprocessor's rate registers (`state_0` through `state_9`), and
    2. executes the 5 rounds of the Tip5 permutation.
Program hashing happens in the initialization phase of Triton VM.
The to-be-executed program has no control over it.
Program hashing is mechanically identical to performing instruction `sponge_absorb` as often as is necessary to hash the entire program.
A notable difference is the source of the to-be-absorbed elements:
they come from program memory, not the processor (which is not running yet).
Once all instructions have been absorbed, the resulting digest is checked against the publicly claimed digest.
Due to the various similar but distinct tasks of the Hash Table, it has an explicit `Mode` register.
The four separate modes are `program_hashing`, `sponge`, `hash`, and `pad`, and they evolve in that order.
Changing the mode is only possible when the permutation has been applied in full, i.e., when the round number is 5.
Once mode `pad` is reached, it is not possible to change the mode anymore.
It is not possible to skip mode `program_hashing`:
the program is always hashed.
Skipping any or all of the modes `sponge`, `hash`, or `pad` is possible in principle:

1. if no Sponge instructions are executed, mode `sponge` will be skipped,
2. if no `hash` instruction is executed, mode `hash` will be skipped, and
3. if the Hash Table does not require any padding, mode `pad` will be skipped.
The distinct modes translate into distinct sections in the Hash Table, which are recorded in order:
First, the entire Sponge's transition of hashing the program is recorded.
Then, the Hash Table records all Sponge instructions in the order the processor executed them.
Then, the Hash Table records all `hash` instructions in the order the processor executed them.
Lastly, as many padding rows as necessary are inserted.
In total, this separation allows the processor to execute `hash` instructions without affecting the Sponge's state, and keeps program hashing independent of both.
Note that `state_0` through `state_3`, corresponding to those states that are being split-and-looked-up in the Tip5 permutation, are not stored as a single field element.
Instead, four limbs “highest”, “mid high”, “mid low”, and “lowest” are recorded in the Hash Table.
This (basically) corresponds to storing the result of $σ(R⋅\texttt{state\_element})$.
In the Hash Table, the resulting limbs are 16 bit wide, and hence, there are only 4 limbs;
the split into 8-bit limbs happens in the Cascade Table.
For convenience, this document occasionally refers to those states as if they were a single register.
This is an alias for
$(2^{48}⋅\texttt{state\_i\_highest\_lkin} + 2^{32}⋅\texttt{state\_i\_mid\_high\_lkin} + 2^{16}⋅\texttt{state\_i\_mid\_low\_lkin} + \texttt{state\_i\_lowest\_lkin})⋅R^{-1}$.
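The limb alias can be made concrete with a short sketch. This ignores the lookup $σ$ and the factor $R$ (i.e., treats them as the identity) and only demonstrates the 16-bit limb split and the $2^{48}/2^{32}/2^{16}$-weighted recombination:

```python
# Sketch: splitting a 64-bit value into four 16-bit limbs and recombining.
# The factor R and the lookup σ of Tip5 are deliberately omitted here.
def split_limbs(x):
    """Return the (highest, mid_high, mid_low, lowest) 16-bit limbs of x."""
    assert 0 <= x < 1 << 64
    return ((x >> 48) & 0xFFFF, (x >> 32) & 0xFFFF, (x >> 16) & 0xFFFF, x & 0xFFFF)

def recombine(highest, mid_high, mid_low, lowest):
    """The alias used in the text: 2^48, 2^32, 2^16 weighted limb sum."""
    return (1 << 48) * highest + (1 << 32) * mid_high + (1 << 16) * mid_low + lowest

x = 0x1234_5678_9ABC_DEF0
assert recombine(*split_limbs(x)) == x  # the split is lossless
```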
Main Columns

The Hash Table has 67 main columns:

1. The `Mode` indicator, as described above. It takes value
    - $1$ for mode `program_hashing`,
    - $2$ for mode `sponge`,
    - $3$ for mode `hash`, and
    - $0$ for mode `pad`.
2. Current instruction `CI`, holding the instruction the processor is currently executing. This column is only relevant for mode `sponge`.
3. Round number indicator `round_no`, which can be one of $\{0, …, 5\}$. The Tip5 permutation has 5 rounds, indexed $\{0, …, 4\}$. The round number 5 indicates that the Tip5 permutation has been applied in full.
4. 16 columns `state_i_highest_lkin`, `state_i_mid_high_lkin`, `state_i_mid_low_lkin`, `state_i_lowest_lkin` for the to-be-looked-up value of `state_0` through `state_3`, each of which holds one 16-bit wide limb.
5. 16 columns `state_i_highest_lkout`, `state_i_mid_high_lkout`, `state_i_mid_low_lkout`, `state_i_lowest_lkout` for the looked-up value of `state_0` through `state_3`, each of which holds one 16-bit wide limb.
6. 12 columns `state_4` through `state_15`.
7. 4 columns `state_i_inv` establishing the correct decomposition of `state_0_*_lkin` through `state_3_*_lkin` into 16-bit wide limbs.
8. 16 columns `constant_i`, which hold the round constant for the round indicated by `RoundNumber`, or 0 if no round with this round number exists.
Auxiliary Columns

The Hash Table has 20 auxiliary columns:

1. `RunningEvaluationReceiveChunk` for the Evaluation Argument for copying chunks of size $\texttt{rate}$ from the Program Table. Relevant for program attestation.
2. `RunningEvaluationHashInput` for the Evaluation Argument for copying the input to the hash function from the processor to the hash coprocessor,
3. `RunningEvaluationHashDigest` for the Evaluation Argument for copying the hash digest from the hash coprocessor to the processor,
4. `RunningEvaluationSponge` for the Evaluation Argument for copying the 10 next to-be-absorbed elements from the processor to the hash coprocessor or the 10 next squeezed elements from the hash coprocessor to the processor, depending on the instruction,
5. 16 columns `state_i_limb_LookupClientLogDerivative` (for `i` $∈ \{0, …, 3\}$ and `limb` $∈ \{$`highest`, `mid_high`, `mid_low`, `lowest`$\}$) establishing the correct lookup of the respective limbs in the Cascade Table.
Padding

Each padding row is the all-zero row with the exception of

- `CI`, which is the opcode of instruction `hash`,
- `state_i_inv` for `i` $∈ \{0, …, 3\}$, which is $(2^{32} − 1)^{−1}$, and
- `constant_i` for `i` $∈ \{0, …, 15\}$, which is the `i`th constant for round 0.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are Xfield elements, i.e., elements of $F_{p_{3}}$.
Initial Constraints

1. The `Mode` is `program_hashing`.
2. The round number is 0.
3. `RunningEvaluationReceiveChunk` has absorbed the first chunk of instructions with respect to indeterminate 🪣.
4. `RunningEvaluationHashInput` is 1.
5. `RunningEvaluationHashDigest` is 1.
6. `RunningEvaluationSponge` is 1.
7. For `i` $∈ \{0, …, 3\}$ and `limb` $∈ \{$`highest`, `mid_high`, `mid_low`, `lowest`$\}$:
   `state_i_limb_LookupClientLogDerivative` has accumulated `state_i_limb_lkin` and `state_i_limb_lkout` with respect to challenges 🍒, 🍓 and indeterminate 🧺.
Initial Constraints as Polynomials

1. `Mode - 1`
2. `round_no`
3. `RunningEvaluationReceiveChunk - 🪣 - (🪑^10 + state_0·🪑^9 + state_1·🪑^8 + state_2·🪑^7 + state_3·🪑^6 + state_4·🪑^5 + state_5·🪑^4 + state_6·🪑^3 + state_7·🪑^2 + state_8·🪑 + state_9)`
4. `RunningEvaluationHashInput - 1`
5. `RunningEvaluationHashDigest - 1`
6. `RunningEvaluationSponge - 1`
7. For `i` $∈ \{0, …, 3\}$ and `limb` $∈ \{$`highest`, `mid_high`, `mid_low`, `lowest`$\}$:
   `state_i_limb_LookupClientLogDerivative·(🧺 - 🍒·state_i_limb_lkin - 🍓·state_i_limb_lkout) - 1`
Consistency Constraints

1. The `Mode` is a valid mode, i.e., $∈ \{0, …, 3\}$.
2. If the `Mode` is `program_hashing`, `hash`, or `pad`, then the current instruction is the opcode of `hash`.
3. If the `Mode` is `sponge`, then the current instruction is a Sponge instruction.
4. If the `Mode` is `pad`, then the `round_no` is 0.
5. If the current instruction `CI` is `sponge_init`, then the `round_no` is 0.
6. For `i` $∈ \{10, …, 15\}$: If the current instruction `CI` is `sponge_init`, then register `state_i` is 0. (Note: the remaining registers, corresponding to the rate, are guaranteed to be 0 through the Evaluation Argument “Sponge” with the processor.)
7. For `i` $∈ \{10, …, 15\}$: If the round number is 0 and the current `Mode` is `hash`, then register `state_i` is 1.
8. For `i` $∈ \{0, …, 3\}$: ensure that the decomposition of `state_i` is unique. That is, if both high limbs of `state_i` are the maximum value, then both low limbs are 0.^{1} To make the corresponding polynomials low degree, register `state_i_inv` holds the inverse-or-zero of the recomposed highest two limbs of `state_i` subtracted from their maximum value. Let `state_i_hi_limbs_minus_2_pow_32` be an alias for that difference: `state_i_hi_limbs_minus_2_pow_32` $:= 2^{32} − 1 − 2^{16}⋅$`state_i_highest_lk_in`$ − $`state_i_mid_high_lk_in`.
    1. If the two high limbs of `state_i` are both the maximum possible value, then the two low limbs of `state_i` are both 0.
    2. The `state_i_inv` is the inverse of `state_i_hi_limbs_minus_2_pow_32` or `state_i_inv` is 0.
    3. The `state_i_inv` is the inverse of `state_i_hi_limbs_minus_2_pow_32` or `state_i_hi_limbs_minus_2_pow_32` is 0.
9. The round constants adhere to the specification of Tip5.
Consistency Constraints as Polynomials

1. `(Mode - 0)·(Mode - 1)·(Mode - 2)·(Mode - 3)`
2. `(Mode - 2)·(CI - opcode(hash))`
3. `(Mode - 0)·(Mode - 1)·(Mode - 3)`
   `·(CI - opcode(sponge_init))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))`
4. `(Mode - 1)·(Mode - 2)·(Mode - 3)·(round_no - 0)`
5. `(CI - opcode(hash))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))·(round_no - 0)`
6. For `i` $∈ \{10, …, 15\}$:
   `(CI - opcode(hash))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))·(state_i - 0)`
7. For `i` $∈ \{10, …, 15\}$:
   `(round_no - 1)·(round_no - 2)·(round_no - 3)·(round_no - 4)·(round_no - 5)`
   `·(Mode - 0)·(Mode - 1)·(Mode - 2)`
   `·(state_i - 1)`
8. For `i` $∈ \{0, …, 3\}$: define `state_i_hi_limbs_minus_2_pow_32 := 2^32 - 1 - 2^16·state_i_highest_lk_in - state_i_mid_high_lk_in`.
    1. `(1 - state_i_inv·state_i_hi_limbs_minus_2_pow_32)·(2^16·state_i_mid_low_lk_in + state_i_lowest_lk_in)`
    2. `(1 - state_i_inv·state_i_hi_limbs_minus_2_pow_32)·state_i_inv`
    3. `(1 - state_i_inv·state_i_hi_limbs_minus_2_pow_32)·state_i_hi_limbs_minus_2_pow_32`
Transition Constraints

1. If the `round_no` is 5, then the `round_no` in the next row is 0.
2. If the `Mode` is not `pad` and the current instruction `CI` is not `sponge_init` and the `round_no` is not 5, then the `round_no` increments by 1.
3. If the `Mode` in the next row is `program_hashing` and the `round_no` in the next row is 0, then receive a chunk of instructions with respect to challenges 🪣 and 🪑.
4. If the `Mode` changes from `program_hashing`, then the Evaluation Argument of `state_0` through `state_4` with respect to indeterminate 🥬 equals the public program digest challenge, 🫑.
5. If the `Mode` is `program_hashing` and the `Mode` in the next row is `sponge`, then the current instruction in the next row is `sponge_init`.
6. If the `round_no` is not 5 and the current instruction `CI` is not `sponge_init`, then the current instruction does not change.
7. If the `round_no` is not 5 and the current instruction `CI` is not `sponge_init`, then the `Mode` does not change.
8. If the `Mode` is `sponge`, then the `Mode` in the next row is `sponge` or `hash` or `pad`.
9. If the `Mode` is `hash`, then the `Mode` in the next row is `hash` or `pad`.
10. If the `Mode` is `pad`, then the `Mode` in the next row is `pad`.
11. If the `round_no` in the next row is 0 and the `Mode` in the next row is either `program_hashing` or `sponge` and the instruction in the next row is either `sponge_absorb` or `sponge_init`, then the capacity's state registers don't change.
12. If the `round_no` in the next row is 0 and the current instruction in the next row is `sponge_squeeze`, then none of the state registers change.
13. If the `round_no` in the next row is 0 and the `Mode` in the next row is `hash`, then `RunningEvaluationHashInput` accumulates the next row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪. Otherwise, it remains unchanged.
14. If the `round_no` in the next row is 5 and the `Mode` in the next row is `hash`, then `RunningEvaluationHashDigest` accumulates the next row with respect to challenges 🧄₀ through 🧄₄ and indeterminate 🪟. Otherwise, it remains unchanged.
15. If the `round_no` in the next row is 0 and the `Mode` in the next row is `sponge`, then `RunningEvaluationSponge` accumulates the next row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Otherwise, it remains unchanged.
16. For `i` $∈ \{0, …, 3\}$ and `limb` $∈ \{$`highest`, `mid_high`, `mid_low`, `lowest`$\}$:
    If the next round number is not 5 and the `Mode` in the next row is not `pad` and the current instruction `CI` in the next row is not `sponge_init`, then `state_i_limb_LookupClientLogDerivative` has accumulated `state_i_limb_lkin'` and `state_i_limb_lkout'` with respect to challenges 🍒, 🍓 and indeterminate 🧺. Otherwise, `state_i_limb_LookupClientLogDerivative` remains unchanged.
17. For `r` $∈ \{0, …, 4\}$:
    If the `round_no` is `r`, the `state` registers adhere to the rules of applying round `r` of the Tip5 permutation.
Transition Constraints as Polynomials

1. `(round_no - 0)·(round_no - 1)·(round_no - 2)·(round_no - 3)·(round_no - 4)·(round_no' - 0)`
2. `(Mode - 0)·(round_no - 5)·(CI - opcode(sponge_init))·(round_no' - round_no - 1)`
3. `RunningEvaluationReceiveChunk' - 🪣·RunningEvaluationReceiveChunk - (🪑^10 + state_0·🪑^9 + state_1·🪑^8 + state_2·🪑^7 + state_3·🪑^6 + state_4·🪑^5 + state_5·🪑^4 + state_6·🪑^3 + state_7·🪑^2 + state_8·🪑 + state_9)`
4. `(Mode - 0)·(Mode - 2)·(Mode - 3)·(Mode' - 1)·(🥬^5 + state_0·🥬^4 + state_1·🥬^3 + state_2·🥬^2 + state_3·🥬 + state_4 - 🫑)`
5. `(Mode - 0)·(Mode - 2)·(Mode - 3)·(Mode' - 2)·(CI' - opcode(sponge_init))`
6. `(round_no - 5)·(CI - opcode(sponge_init))·(CI' - CI)`
7. `(round_no - 5)·(CI - opcode(sponge_init))·(Mode' - Mode)`
8. `(Mode - 0)·(Mode - 1)·(Mode - 3)·(Mode' - 0)·(Mode' - 2)·(Mode' - 3)`
9. `(Mode - 0)·(Mode - 1)·(Mode - 2)·(Mode' - 0)·(Mode' - 3)`
10. `(Mode - 1)·(Mode - 2)·(Mode - 3)·(Mode' - 0)`
11. `(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)`
    `·(Mode' - 3)·(Mode' - 0)`
    `·(CI' - opcode(sponge_init))`
    `·(🧄₁₀·(state_10' - state_10) + 🧄₁₁·(state_11' - state_11) + 🧄₁₂·(state_12' - state_12) + 🧄₁₃·(state_13' - state_13) + 🧄₁₄·(state_14' - state_14) + 🧄₁₅·(state_15' - state_15))`
12. `(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)`
    `·(CI' - opcode(hash))·(CI' - opcode(sponge_init))·(CI' - opcode(sponge_absorb))`
    `·(🧄₀·(state_0' - state_0) + 🧄₁·(state_1' - state_1) + 🧄₂·(state_2' - state_2) + 🧄₃·(state_3' - state_3) + 🧄₄·(state_4' - state_4)`
    `+ 🧄₅·(state_5' - state_5) + 🧄₆·(state_6' - state_6) + 🧄₇·(state_7' - state_7) + 🧄₈·(state_8' - state_8) + 🧄₉·(state_9' - state_9)`
    `+ 🧄₁₀·(state_10' - state_10) + 🧄₁₁·(state_11' - state_11) + 🧄₁₂·(state_12' - state_12) + 🧄₁₃·(state_13' - state_13) + 🧄₁₄·(state_14' - state_14) + 🧄₁₅·(state_15' - state_15))`
13. `(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)`
    `·(Mode' - 0)·(Mode' - 1)·(Mode' - 2)`
    `·(RunningEvaluationHashInput' - 🚪·RunningEvaluationHashInput - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4' - 🧄₅·state_5' - 🧄₆·state_6' - 🧄₇·state_7' - 🧄₈·state_8' - 🧄₉·state_9')`
    `+ (round_no' - 0)·(RunningEvaluationHashInput' - RunningEvaluationHashInput)`
    `+ (Mode' - 3)·(RunningEvaluationHashInput' - RunningEvaluationHashInput)`
14. `(round_no' - 0)·(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)`
    `·(Mode' - 0)·(Mode' - 1)·(Mode' - 2)`
    `·(RunningEvaluationHashDigest' - 🪟·RunningEvaluationHashDigest - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4')`
    `+ (round_no' - 5)·(RunningEvaluationHashDigest' - RunningEvaluationHashDigest)`
    `+ (Mode' - 3)·(RunningEvaluationHashDigest' - RunningEvaluationHashDigest)`
15. `(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)`
    `·(CI' - opcode(hash))`
    `·(RunningEvaluationSponge' - 🧽·RunningEvaluationSponge - 🧅·CI' - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4' - 🧄₅·state_5' - 🧄₆·state_6' - 🧄₇·state_7' - 🧄₈·state_8' - 🧄₉·state_9')`
    `+ (RunningEvaluationSponge' - RunningEvaluationSponge)·(round_no' - 0)`
    `+ (RunningEvaluationSponge' - RunningEvaluationSponge)·(CI' - opcode(sponge_init))·(CI' - opcode(sponge_absorb))·(CI' - opcode(sponge_squeeze))`
16. For `i` $∈ \{0, …, 3\}$ and `limb` $∈ \{$`highest`, `mid_high`, `mid_low`, `lowest`$\}$:
    `(round_no' - 5)·(Mode' - 0)·(CI' - opcode(sponge_init))·((state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)·(🧺 - 🍒·state_i_limb_lkin' - 🍓·state_i_limb_lkout') - 1)`
    `+ (round_no' - 0)·(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)`
    `·(state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)`
    `+ (CI' - opcode(hash))·(CI' - opcode(sponge_absorb))·(CI' - opcode(sponge_squeeze))`
    `·(state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)`

The remaining constraints are left as an exercise to the reader.
For hints, see the Tip5 paper.
Terminal Constraints

1. If the `Mode` is `program_hashing`, then the Evaluation Argument of `state_0` through `state_4` with respect to indeterminate 🥬 equals the public program digest challenge, 🫑.
2. If the `Mode` is not `pad` and the current instruction `CI` is not `sponge_init`, then the `round_no` is 5.

Terminal Constraints as Polynomials

1. `🥬^5 + state_0·🥬^4 + state_1·🥬^3 + state_2·🥬^2 + state_3·🥬 + state_4 - 🫑`
2. `(Mode - 0)·(CI - opcode(sponge_init))·(round_no - 5)`
This is a special property of the Oxfoi prime.
Cascade Table
The Cascade Table helps arithmetizing the lookups necessary for the splitandlookup Sbox used in the Tip5 permutation. The main idea is to allow the Hash Table to look up limbs that are 16 bit wide even though the Sbox is defined over limbs that are 8 bit wide. The Cascade Table facilitates the translation of limb widths. For the actual lookup of the 8bit limbs, the Lookup Table is used. For a more detailed explanation and indepth analysis, see the Tip5 paper.
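The limb-width translation can be sketched as follows. The 8-bit S-box below is a made-up stand-in, not Tip5's actual lookup table; the point is only how a single 16-bit lookup decomposes into two 8-bit lookups, which is the Cascade Table's job:

```python
# Sketch: one 16-bit lookup realized via two 8-bit lookups, as the Cascade
# Table arranges it. SBOX is an illustrative stand-in for Tip5's 8-bit S-box.
SBOX = [(i * 167 + 13) % 256 for i in range(256)]  # toy bijection on 8 bits

def lookup_16(look_in):
    """Look up a 16-bit value by cascading two 8-bit S-box lookups."""
    hi, lo = look_in >> 8, look_in & 0xFF   # LookInHi, LookInLo
    out_hi, out_lo = SBOX[hi], SBOX[lo]     # LookOutHi, LookOutLo
    return (out_hi << 8) | out_lo           # recombined 16-bit output

assert lookup_16(0x0102) == (SBOX[1] << 8) | SBOX[2]
```

The actual 8-bit lookups are delegated to the Lookup Table; the Cascade Table only records the hi/lo split of inputs and outputs.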
Main Columns
The Cascade Table has 6 main columns:
| name                 | description                                                   |
|:---------------------|:--------------------------------------------------------------|
| `IsPadding`          | Indicator for padding rows.                                   |
| `LookInHi`           | The more significant bits of the lookup input.                |
| `LookInLo`           | The less significant bits of the lookup input.                |
| `LookOutHi`          | The more significant bits of the lookup output.               |
| `LookOutLo`          | The less significant bits of the lookup output.               |
| `LookupMultiplicity` | The number of times the value is looked up by the Hash Table. |
Auxiliary Columns

The Cascade Table has 2 auxiliary columns:

1. `HashTableServerLogDerivative`, the (running sum of the) logarithmic derivative for the Lookup Argument with the Hash Table. In every row, the sum accumulates `LookupMultiplicity / (🧺 - Combo)` where 🧺 is a verifier-supplied challenge and `Combo` is the weighted sum of `LookInHi · 2^8 + LookInLo` and `LookOutHi · 2^8 + LookOutLo` with weights 🍒 and 🍓 supplied by the verifier.
2. `LookupTableClientLogDerivative`, the (running sum of the) logarithmic derivative for the Lookup Argument with the Lookup Table. In every row, the sum accumulates the two summands `1 / combo_hi` where `combo_hi` is the verifier-weighted combination of `LookInHi` and `LookOutHi`, and `1 / combo_lo` where `combo_lo` is the verifier-weighted combination of `LookInLo` and `LookOutLo`.
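The logarithmic-derivative mechanism behind both columns can be illustrated with a toy Lookup Argument. All field and challenge values below are stand-ins; the point is that the server's multiplicity-weighted sum equals the client's per-lookup sum exactly when every looked-up value appears in the table:

```python
# Toy Lookup Argument via logarithmic derivatives over the B-field.
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def inv(x):
    """Field inverse by Fermat's little theorem; x must be nonzero mod P."""
    return pow(x, P - 2, P)

def client_sum(lookups, challenge):
    """One summand 1/(challenge - v) per individual lookup."""
    return sum(inv((challenge - v) % P) for v in lookups) % P

def server_sum(table_values, multiplicities, challenge):
    """One multiplicity-weighted summand per distinct table entry."""
    return sum(m * inv((challenge - v) % P)
               for v, m in zip(table_values, multiplicities)) % P

lookups = [5, 7, 5, 5, 9]   # client-side lookups, with repetitions
table = [5, 7, 9]           # distinct table entries
mults = [3, 1, 1]           # how often each entry was looked up
challenge = 123_457         # stand-in for the verifier's 🧺
assert client_sum(lookups, challenge) == server_sum(table, mults, challenge)
```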
Padding
Each padding row is the all-zero row with the exception of `IsPadding`, which is 1.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are Xfield elements, i.e., elements of $F_{p_{3}}$.
Initial Constraints

1. If the first row is not a padding row, then `HashTableServerLogDerivative` has accumulated the first row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, `HashTableServerLogDerivative` is 0.
2. If the first row is not a padding row, then `LookupTableClientLogDerivative` has accumulated the first row with respect to challenges 🥦 and 🥒 and indeterminate 🪒. Else, `LookupTableClientLogDerivative` is 0.
Initial Constraints as Polynomials

1. `(1 - IsPadding)·(HashTableServerLogDerivative·(🧺 - 🍒·(2^8·LookInHi + LookInLo) - 🍓·(2^8·LookOutHi + LookOutLo)) - LookupMultiplicity)`
   `+ IsPadding·HashTableServerLogDerivative`
2. `(1 - IsPadding)·(LookupTableClientLogDerivative·(🪒 - 🥦·LookInLo - 🥒·LookOutLo)·(🪒 - 🥦·LookInHi - 🥒·LookOutHi) - 2·🪒 + 🥦·(LookInLo + LookInHi) + 🥒·(LookOutLo + LookOutHi))`
   `+ IsPadding·LookupTableClientLogDerivative`
Consistency Constraints

1. `IsPadding` is 0 or 1.

Consistency Constraints as Polynomials

1. `IsPadding·(1 - IsPadding)`
Transition Constraints

1. If the current row is a padding row, then the next row is a padding row.
2. If the next row is not a padding row, then `HashTableServerLogDerivative` accumulates the next row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, `HashTableServerLogDerivative` remains unchanged.
3. If the next row is not a padding row, then `LookupTableClientLogDerivative` accumulates the next row with respect to challenges 🥦 and 🥒 and indeterminate 🪒. Else, `LookupTableClientLogDerivative` remains unchanged.
Transition Constraints as Polynomials

1. `IsPadding·(1 - IsPadding')`
2. `(1 - IsPadding')·((HashTableServerLogDerivative' - HashTableServerLogDerivative)·(🧺 - 🍒·(2^8·LookInHi' + LookInLo') - 🍓·(2^8·LookOutHi' + LookOutLo')) - LookupMultiplicity')`
   `+ IsPadding'·(HashTableServerLogDerivative' - HashTableServerLogDerivative)`
3. `(1 - IsPadding')·((LookupTableClientLogDerivative' - LookupTableClientLogDerivative)·(🪒 - 🥦·LookInLo' - 🥒·LookOutLo')·(🪒 - 🥦·LookInHi' - 🥒·LookOutHi') - 2·🪒 + 🥦·(LookInLo' + LookInHi') + 🥒·(LookOutLo' + LookOutHi'))`
   `+ IsPadding'·(LookupTableClientLogDerivative' - LookupTableClientLogDerivative)`
Terminal Constraints
None.
Lookup Table
The Lookup Table helps arithmetizing the lookups necessary for the splitandlookup Sbox used in the Tip5 permutation. It works in tandem with the Cascade Table. In the language of the Tip5 paper, it is a “narrow lookup table.” This means it is always fully populated, independent of the actual number of lookups.
Correct creation of the Lookup Table is guaranteed through a publicfacing Evaluation Argument: after sampling some challenge $X$, the verifier computes the terminal of the Evaluation Argument over the list of all the expected lookup values with respect to challenge $X$. The equality of this verifiersupplied terminal against the similarly computed, intable part of the Evaluation Argument is checked by the Lookup Table's terminal constraint.
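The verifier-side computation of that terminal can be sketched as follows, assuming a Horner-style running evaluation that starts at 1; the modulus and challenge below are stand-ins (Triton VM samples X-field elements):

```python
# Sketch of the public Evaluation Argument's terminal: both prover and
# verifier run the same Horner-style accumulation over all lookup outputs.
P = (1 << 64) - (1 << 32) + 1  # stand-in modulus (the Oxfoi prime)

def evaluation_terminal(values, challenge):
    """Running evaluation: acc' = challenge·acc + value, starting from 1."""
    acc = 1
    for v in values:
        acc = (challenge * acc + v) % P
    return acc

outputs = [2, 7, 1, 8]   # stand-in for the full column of lookup outputs
challenge = 424_243      # stand-in for the verifier-sampled challenge X
# The terminal is the challenge evaluated in the monic polynomial whose
# lower coefficients are the outputs, which the verifier recomputes directly.
assert evaluation_terminal(outputs, challenge) == \
    (challenge**4 + 2*challenge**3 + 7*challenge**2 + challenge + 8) % P
```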
Main Columns
The Lookup Table has 4 main columns:
| name                 | description                                 |
|:---------------------|:---------------------------------------------|
| `IsPadding`          | Indicator for padding rows.                 |
| `LookIn`             | The lookup input.                           |
| `LookOut`            | The lookup output.                          |
| `LookupMultiplicity` | The number of times the value is looked up. |
Auxiliary Columns

The Lookup Table has 2 auxiliary columns:

1. `CascadeTableServerLogDerivative`, the (running sum of the) logarithmic derivative for the Lookup Argument with the Cascade Table. In every row, it accumulates the summand `LookupMultiplicity / Combo` where `Combo` is the verifier-weighted combination of `LookIn` and `LookOut`.
2. `PublicEvaluationArgument`, the running sum for the public evaluation argument of the Lookup Table. In every row, it accumulates `LookOut`.
Padding
Each padding row is the all-zero row with the exception of `IsPadding`, which is 1.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are Xfield elements, i.e., elements of $F_{p_{3}}$.
Initial Constraints

1. `LookIn` is 0.
2. `CascadeTableServerLogDerivative` has accumulated the first row with respect to challenges 🍒 and 🍓 and indeterminate 🧺.
3. `PublicEvaluationArgument` has accumulated the first `LookOut` with respect to indeterminate 🧹.

Initial Constraints as Polynomials

1. `LookIn`
2. `CascadeTableServerLogDerivative·(🧺 - 🍒·LookIn - 🍓·LookOut) - LookupMultiplicity`
3. `PublicEvaluationArgument - 🧹 - LookOut`
Consistency Constraints

1. `IsPadding` is 0 or 1.

Consistency Constraints as Polynomials

1. `IsPadding·(1 - IsPadding)`
Transition Constraints

1. If the current row is a padding row, then the next row is a padding row.
2. If the next row is not a padding row, `LookIn` increments by 1. Else, `LookIn` is 0.
3. If the next row is not a padding row, `CascadeTableServerLogDerivative` accumulates the next row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, `CascadeTableServerLogDerivative` remains unchanged.
4. If the next row is not a padding row, `PublicEvaluationArgument` accumulates the next `LookOut` with respect to indeterminate 🧹. Else, `PublicEvaluationArgument` remains unchanged.
Transition Constraints as Polynomials

1. `IsPadding·(1 - IsPadding')`
2. `(1 - IsPadding')·(LookIn' - LookIn - 1)`
   `+ IsPadding'·LookIn'`
3. `(1 - IsPadding')·((CascadeTableServerLogDerivative' - CascadeTableServerLogDerivative)·(🧺 - 🍒·LookIn' - 🍓·LookOut') - LookupMultiplicity')`
   `+ IsPadding'·(CascadeTableServerLogDerivative' - CascadeTableServerLogDerivative)`
4. `(1 - IsPadding')·(PublicEvaluationArgument' - 🧹·PublicEvaluationArgument - LookOut')`
   `+ IsPadding'·(PublicEvaluationArgument' - PublicEvaluationArgument)`
Terminal Constraints

1. `PublicEvaluationArgument` matches the verifier-supplied challenge 🪠.

Terminal Constraints as Polynomials

1. `PublicEvaluationArgument - 🪠`
U32 Table
The U32 Table arithmetizes the coprocessor for “difficult” 32-bit unsigned integer operations.
The two inputs to the U32 Table are left-hand side (LHS) and right-hand side (RHS), usually corresponding to the processor's `st0` and `st1`, respectively.
(For more details, see the arithmetization of the specific u32 instructions further below.)
To allow efficient arithmetization, a u32 instruction's result is constructed using multiple rows.
The collection of rows arithmetizing the execution of one instruction is called a section.
The U32 Table's sections are independent of each other.
The processor's current instruction `CI` is recorded within the section and dictates its arithmetization.
Crucially, the rows of the U32 table are independent of the processor's clock. Hence, the result of the instruction can be transferred into the processor within one clock cycle.
Main Columns

| name                 | description |
|:---------------------|:------------|
| `CopyFlag`           | The row to be copied between processor and u32 coprocessor. Marks the beginning of a section. |
| `CI`                 | Current instruction, the instruction the processor is currently executing. |
| `Bits`               | The number of bits that LHS and RHS have already been shifted by. |
| `BitsMinus33Inv`     | The inverse-or-zero of the difference between 33 and `Bits`. |
| `LHS`                | Left-hand side of the operation. Usually corresponds to the processor's `st0`. |
| `LhsInv`             | The inverse-or-zero of LHS. Needed to check whether LHS is unequal to 0. |
| `RHS`                | Right-hand side of the operation. Usually corresponds to the processor's `st1`. |
| `RhsInv`             | The inverse-or-zero of RHS. Needed to check whether RHS is unequal to 0. |
| `Result`             | The result (or intermediate result) of the instruction requested by the processor. |
| `LookupMultiplicity` | The number of times the processor has executed the current instruction with the same arguments. |
An example U32 Table follows. Some columns are omitted for presentation reasons. All cells in the U32 Table contain elements of the B-field. For clearer illustration of the mechanics, the columns marked “$_2$” are presented in base 2, whereas columns marked “$_{10}$” are in the usual base 10.

| CopyFlag$_2$ | CI            | Bits$_{10}$ | LHS$_2$ | RHS$_2$ | Result$_2$ | Result$_{10}$ |
|:------------:|:--------------|------------:|--------:|--------:|-----------:|--------------:|
| 1 | `and`         | 0 |  11000 | 11010 |  11000 | 24 |
| 0 | `and`         | 1 |   1100 |  1101 |   1100 | 12 |
| 0 | `and`         | 2 |    110 |   110 |    110 |  6 |
| 0 | `and`         | 3 |     11 |    11 |     11 |  3 |
| 0 | `and`         | 4 |      1 |     1 |      1 |  1 |
| 0 | `and`         | 5 |      0 |     0 |      0 |  0 |
| 1 | `pow`         | 0 |     10 |   101 | 100000 | 32 |
| 0 | `pow`         | 1 |     10 |    10 |    100 |  4 |
| 0 | `pow`         | 2 |     10 |     1 |     10 |  2 |
| 0 | `pow`         | 3 |     10 |     0 |      1 |  1 |
| 1 | `log_2_floor` | 0 | 100110 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 1 |  10011 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 2 |   1001 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 3 |    100 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 4 |     10 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 5 |      1 |     0 |    101 |  5 |
| 0 | `log_2_floor` | 6 |      0 |     0 |      1 |  1 |
| 1 | `lt`          | 0 |  11111 | 11011 |      0 |  0 |
| 0 | `lt`          | 1 |   1111 |  1101 |      0 |  0 |
| 0 | `lt`          | 2 |    111 |   110 |      0 |  0 |
| 0 | `lt`          | 3 |     11 |    11 |     10 |  2 |
| 0 | `lt`          | 4 |      1 |     1 |     10 |  2 |
| 0 | `lt`          | 5 |      0 |     0 |     10 |  2 |
The AIR verifies the correct update of each consecutive pair of rows.
For most instructions, the current least significant bit of both `LHS` and `RHS` is eliminated between two consecutive rows.
This eliminated bit is used to successively build the required result in the `Result` column.
For instruction `pow`, only the least significant bit of `RHS` is eliminated, while `LHS` remains unchanged throughout the section.
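The per-row bit elimination for instruction `and` can be sketched in a few lines of Python. This mirrors the example rows above, but is an illustrative model, not Triton VM's actual trace generation:

```python
# Sketch: building one U32 Table section for instruction `and`.
# Each step strips the least significant bit of LHS and RHS; the stripped
# bits determine the intermediate Result of the remaining operands.
def and_section(lhs, rhs):
    """Return rows (Bits, LHS, RHS, Result); the first row has CopyFlag 1."""
    rows, bits = [], 0
    while lhs != 0 or rhs != 0:
        rows.append((bits, lhs, rhs, lhs & rhs))
        lhs, rhs, bits = lhs >> 1, rhs >> 1, bits + 1
    rows.append((bits, 0, 0, 0))  # final row: both operands fully consumed
    return rows

rows = and_section(0b11000, 0b11010)
assert rows[0] == (0, 0b11000, 0b11010, 0b11000)  # matches the example: 24
assert len(rows) == 6                             # section of 6 rows, as above
```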
There are 6 instructions the U32 Table is “aware” of: `split`, `lt`, `and`, `log_2_floor`, `pow`, and `pop_count`.
The instruction `split` uses the U32 Table for range checking only.
Concretely, the U32 Table ensures that the instruction `split`'s resulting “high bits” and “low bits” each fit in a u32.
Since the processor does not expect any result from instruction `split`, the `Result` must be 0, else the Lookup Argument fails.
Similarly, for instructions `log_2_floor` and `pop_count`, the processor always sets the `RHS` to 0.
For instruction `xor`, the processor requests the computation of the two arguments' `and` and converts the result using the following equality:

$a ~\texttt{xor}~ b = a + b − 2⋅(a ~\texttt{and}~ b)$

Credit for this trick goes, to the best of our knowledge, to Daniel Lubarov.
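The identity is easy to check exhaustively on small values:

```python
# Verify a xor b == a + b - 2·(a and b) for all pairs of small values.
def xor_via_and(a, b):
    return a + b - 2 * (a & b)

for a in range(64):
    for b in range(64):
        assert xor_via_and(a, b) == a ^ b
```

The trick works because addition computes the bitwise xor plus carries, and each carry bit is exactly a set bit of `a and b` shifted up by one, i.e., contributes `2·(a and b)`.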
For the remaining u32 instruction `div_mod`, the processor triggers the creation of two sections in the U32 Table:

- One section to ensure that the remainder `r` is smaller than the divisor `d`. The processor requests the result of `lt` by setting the U32 Table's `CI` register to the opcode of `lt`. This also guarantees that `r` and `d` each fit in a u32.
- One section to ensure that the quotient `q` and the numerator `n` each fit in a u32. The processor needs no result, only the range checking capabilities, like for instruction `split`. Consequently, the processor sets the U32 Table's `CI` register to the opcode of `split`.
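The checks that the two sections establish, together with the processor's own requirement $n = q⋅d + r$, can be sketched as follows; `check_div_mod` is a hypothetical helper for illustration, not part of Triton VM:

```python
# Sketch of the properties backing `div_mod` for n = q·d + r.
U32_MAX = (1 << 32) - 1

def check_div_mod(n, d):
    q, r = divmod(n, d)
    assert r < d                                     # the `lt` section's job
    assert 0 <= r <= U32_MAX and 0 <= d <= U32_MAX   # ...which also range checks r, d
    assert 0 <= q <= U32_MAX and 0 <= n <= U32_MAX   # the `split`-style section
    assert n == q * d + r                            # enforced by the processor
    return q, r

assert check_div_mod(17, 5) == (3, 2)
```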
If the current instruction is `lt`, the `Result` can take on the values 0, 1, or 2, where

- 0 means `LHS` >= `RHS` is definitely known in the current row,
- 1 means `LHS` < `RHS` is definitely known in the current row, and
- 2 means the result is unknown in the current row.
  This is only an intermediate result.
  It can never occur in the first row of a section, i.e., when `CopyFlag` is 1.
A new row with `CopyFlag = 1` can only be inserted if

- `LHS` is 0 or the current instruction is `pow`, and
- `RHS` is 0.
It is impossible to create a valid proof of correct execution of Triton VM if Bits
is 33 in any row.
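The interplay of `CopyFlag`, `Bits`, and `Result` for instruction `lt` can be modeled in a few lines of Python. The function `lt_section` is a hypothetical sketch of how one section's main columns could be populated, not Triton VM's actual prover code:

```python
def lt_section(lhs: int, rhs: int):
    """Build main-column values of a hypothetical U32 Table section for `lt`.

    A simplified model of the spec's rules, not Triton VM's implementation.
    """
    rows, l, r, bits = [], lhs, rhs, 0
    while True:
        rows.append({"CopyFlag": int(bits == 0), "Bits": bits, "LHS": l, "RHS": r})
        if l == 0 and r == 0:
            break
        l, r, bits = l // 2, r // 2, bits + 1
    # Bottom row: 0 for the trivial single-row section, else "unknown" (2).
    rows[-1]["Result"] = 0 if rows[-1]["CopyFlag"] == 1 else 2
    for i in range(len(rows) - 2, -1, -1):
        below = rows[i + 1]["Result"]
        lhs_lsb = rows[i]["LHS"] - 2 * rows[i + 1]["LHS"]
        rhs_lsb = rows[i]["RHS"] - 2 * rows[i + 1]["RHS"]
        if below in (0, 1):
            result = below                    # already decided by lower bits
        elif (lhs_lsb, rhs_lsb) == (0, 1):
            result = 1                        # LHS < RHS decided by this bit
        elif (lhs_lsb, rhs_lsb) == (1, 0):
            result = 0                        # LHS >= RHS decided by this bit
        else:                                 # bits agree: still unknown ...
            result = 2 if rows[i]["CopyFlag"] == 0 else 0  # ... unless top row
        rows[i]["Result"] = result
    return rows
```

The first row of the returned section is the one with `CopyFlag = 1`; its `Result` is the value the processor looks up.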
Auxiliary Columns
The U32 Table has 1 auxiliary column, `U32LookupServerLogDerivative`.
It corresponds to the Lookup Argument with the Processor Table, establishing that whenever the processor executes a u32 instruction, the following holds:

- the processor's requested left-hand side is copied into `LHS`,
- the processor's requested right-hand side is copied into `RHS`,
- the processor's requested u32 instruction is copied into `CI`, and
- the result `Result` is copied to the processor.
Padding
Each padding row is the all-zero row with the exception of `CI`, which is the opcode of `split`, and `BitsMinus33Inv`, which is $(-33)^{-1}$.
Additionally, if the U32 Table is non-empty before applying padding, the padding row's columns `CI`, `LHS`, `LhsInv`, and `Result` are taken from the U32 Table's last row.
Arithmetic Intermediate Representation
Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of $\mathbb{F}_{p^3}$.
Initial Constraints
- If the `CopyFlag` is 0, then `U32LookupServerLogDerivative` is 0. Otherwise, the `U32LookupServerLogDerivative` has accumulated the first row with multiplicity `LookupMultiplicity` and with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.
Initial Constraints as Polynomials
(CopyFlag - 1)·U32LookupServerLogDerivative
+ CopyFlag·(U32LookupServerLogDerivative·(🧷 - 🥜·LHS - 🌰·RHS - 🥑·CI - 🥕·Result) - LookupMultiplicity)
Consistency Constraints
- The `CopyFlag` is 0 or 1.
- If `CopyFlag` is 1, then `Bits` is 0.
- `BitsMinus33Inv` is the inverse of (`Bits` - 33).
- `LhsInv` is 0 or the inverse of `LHS`.
- `LHS` is 0 or `LhsInv` is the inverse of `LHS`.
- `RhsInv` is 0 or the inverse of `RHS`.
- `RHS` is 0 or `RhsInv` is the inverse of `RHS`.
- If `CopyFlag` is 0 and the current instruction is `lt` and `LHS` is 0 and `RHS` is 0, then `Result` is 2.
- If `CopyFlag` is 1 and the current instruction is `lt` and `LHS` is 0 and `RHS` is 0, then `Result` is 0.
- If the current instruction is `and` and `LHS` is 0 and `RHS` is 0, then `Result` is 0.
- If the current instruction is `pow` and `RHS` is 0, then `Result` is 1.
- If `CopyFlag` is 0 and the current instruction is `log_2_floor` and `LHS` is 0, then `Result` is -1.
- If `CopyFlag` is 1 and the current instruction is `log_2_floor` and `LHS` is 0, the VM crashes.
- If `CopyFlag` is 0 and the current instruction is `pop_count` and `LHS` is 0, then `Result` is 0.
- If `CopyFlag` is 0, then `LookupMultiplicity` is 0.
Written in Disjunctive Normal Form, the same constraints can be expressed as:
- The `CopyFlag` is 0 or 1.
- `CopyFlag` is 0 or `Bits` is 0.
- `BitsMinus33Inv` is the inverse of (`Bits` - 33).
- `LhsInv` is 0 or the inverse of `LHS`.
- `LHS` is 0 or `LhsInv` is the inverse of `LHS`.
- `RhsInv` is 0 or the inverse of `RHS`.
- `RHS` is 0 or `RhsInv` is the inverse of `RHS`.
- `CopyFlag` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or `LHS` is not 0 or `RHS` is not 0 or `Result` is 2.
- `CopyFlag` is 0 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or `LHS` is not 0 or `RHS` is not 0 or `Result` is 0.
- `CI` is the opcode of `split`, `lt`, `pow`, `log_2_floor`, or `pop_count` or `LHS` is not 0 or `RHS` is not 0 or `Result` is 0.
- `CI` is the opcode of `split`, `lt`, `and`, `log_2_floor`, or `pop_count` or `RHS` is not 0 or `Result` is 1.
- `CopyFlag` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `pop_count` or `LHS` is not 0 or `Result` is -1.
- `CopyFlag` is 0 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `pop_count` or `LHS` is not 0.
- `CopyFlag` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `log_2_floor` or `LHS` is not 0 or `Result` is 0.
- `CopyFlag` is 1 or `LookupMultiplicity` is 0.
Consistency Constraints as Polynomials
CopyFlag·(CopyFlag - 1)
CopyFlag·Bits
1 - BitsMinus33Inv·(Bits - 33)
LhsInv·(1 - LHS·LhsInv)
LHS·(1 - LHS·LhsInv)
RhsInv·(1 - RHS·RhsInv)
RHS·(1 - RHS·RhsInv)
(CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·(Result - 2)
(CopyFlag - 0)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·(Result - 0)
(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·Result
(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - RHS·RhsInv)·(Result - 1)
(CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(Result + 1)
CopyFlag·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)
(CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(1 - LHS·LhsInv)·(Result - 0)
(CopyFlag - 1)·LookupMultiplicity
Transition Constraints
Even though they are never explicitly represented, it is useful to alias the least significant bits, or “lsb”s, of `LHS` and `RHS`.
Given two consecutive rows, the (current) least significant bit of `LHS` can be computed by subtracting twice the next row's `LHS` from the current row's `LHS`.
These aliases, i.e., `LhsLsb` := `LHS - 2·LHS'` and `RhsLsb` := `RHS - 2·RHS'`, are used throughout the following.
- If the `CopyFlag` in the next row is 1 and the current instruction is not `pow`, then `LHS` in the current row is 0.
- If the `CopyFlag` in the next row is 1, then `RHS` in the current row is 0.
- If the `CopyFlag` in the next row is 0, then `CI` in the next row is `CI` in the current row.
- If the `CopyFlag` in the next row is 0 and `LHS` in the current row is unequal to 0 and the current instruction is not `pow`, then `Bits` in the next row is `Bits` in the current row plus 1.
- If the `CopyFlag` in the next row is 0 and `RHS` in the current row is unequal to 0, then `Bits` in the next row is `Bits` in the current row plus 1.
- If the `CopyFlag` in the next row is 0 and the current instruction is not `pow`, then `LhsLsb` is either 0 or 1.
- If the `CopyFlag` in the next row is 0, then `RhsLsb` is either 0 or 1.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 0, then `Result` in the current row is 0.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 1, then `Result` in the current row is 1.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 2 and `LhsLsb` is 0 and `RhsLsb` is 1, then `Result` in the current row is 1.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 2 and `LhsLsb` is 1 and `RhsLsb` is 0, then `Result` in the current row is 0.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 2 and `LhsLsb` is `RhsLsb` and the `CopyFlag` in the current row is 0, then `Result` in the current row is 2.
- If the `CopyFlag` in the next row is 0 and the current instruction is `lt` and `Result` in the next row is 2 and `LhsLsb` is `RhsLsb` and the `CopyFlag` in the current row is 1, then `Result` in the current row is 0.
- If the `CopyFlag` in the next row is 0 and the current instruction is `and`, then `Result` in the current row is twice `Result` in the next row plus the product of `LhsLsb` and `RhsLsb`.
- If the `CopyFlag` in the next row is 0 and the current instruction is `log_2_floor` and `LHS` in the next row is 0 and `LHS` in the current row is not 0, then `Result` in the current row is `Bits`.
- If the `CopyFlag` in the next row is 0 and the current instruction is `log_2_floor` and `LHS` in the next row is not 0, then `Result` in the current row is `Result` in the next row.
- If the `CopyFlag` in the next row is 0 and the current instruction is `pow`, then `LHS` remains unchanged.
- If the `CopyFlag` in the next row is 0 and the current instruction is `pow` and `RhsLsb` in the current row is 0, then `Result` in the current row is `Result` in the next row squared.
- If the `CopyFlag` in the next row is 0 and the current instruction is `pow` and `RhsLsb` in the current row is 1, then `Result` in the current row is `Result` in the next row squared times `LHS` in the current row.
- If the `CopyFlag` in the next row is 0 and the current instruction is `pop_count`, then `Result` in the current row is `Result` in the next row plus `LhsLsb`.
- If the `CopyFlag` in the next row is 0, then `U32LookupServerLogDerivative` in the next row is `U32LookupServerLogDerivative` in the current row.
- If the `CopyFlag` in the next row is 1, then `U32LookupServerLogDerivative` in the next row has accumulated the next row with multiplicity `LookupMultiplicity` and with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.
Written in Disjunctive Normal Form, the same constraints can be expressed as:
- `CopyFlag'` is 0 or `LHS` is 0 or `CI` is the opcode of `pow`.
- `CopyFlag'` is 0 or `RHS` is 0.
- `CopyFlag'` is 1 or `CI'` is `CI`.
- `CopyFlag'` is 1 or `LHS` is 0 or `CI` is the opcode of `pow` or `Bits'` is `Bits` + 1.
- `CopyFlag'` is 1 or `RHS` is 0 or `Bits'` is `Bits` + 1.
- `CopyFlag'` is 1 or `CI` is the opcode of `pow` or `LhsLsb` is 0 or `LhsLsb` is 1.
- `CopyFlag'` is 1 or `RhsLsb` is 0 or `RhsLsb` is 1.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 1 or 2) or `Result` is 0.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 0 or 2) or `Result` is 1.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 0 or 1) or `LhsLsb` is 1 or `RhsLsb` is 0 or `Result` is 1.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 0 or 1) or `LhsLsb` is 0 or `RhsLsb` is 1 or `Result` is 0.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 0 or 1) or `LhsLsb` is unequal to `RhsLsb` or `CopyFlag` is 1 or `Result` is 2.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `and`, `pow`, `log_2_floor`, or `pop_count` or (`Result'` is 0 or 1) or `LhsLsb` is unequal to `RhsLsb` or `CopyFlag` is 0 or `Result` is 0.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `pow`, `log_2_floor`, or `pop_count` or `Result` is twice `Result'` plus the product of `LhsLsb` and `RhsLsb`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `pop_count` or `LHS'` is not 0 or `LHS` is 0 or `Result` is `Bits`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `pop_count` or `LHS'` is 0 or `Result` is `Result'`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `log_2_floor`, or `pop_count` or `LHS'` is `LHS`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `log_2_floor`, or `pop_count` or `RhsLsb` is 1 or `Result` is `Result'` times `Result'`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `log_2_floor`, or `pop_count` or `RhsLsb` is 0 or `Result` is `Result'` times `Result'` times `LHS`.
- `CopyFlag'` is 1 or `CI` is the opcode of `split`, `lt`, `and`, `pow`, or `log_2_floor` or `Result` is `Result'` plus `LhsLsb`.
- `CopyFlag'` is 1 or `U32LookupServerLogDerivative'` is `U32LookupServerLogDerivative`.
- `CopyFlag'` is 0 or the difference of `U32LookupServerLogDerivative'` and `U32LookupServerLogDerivative` times `(🧷 - 🥜·LHS' - 🌰·RHS' - 🥑·CI' - 🥕·Result')` is `LookupMultiplicity'`.
Transition Constraints as Polynomials
(CopyFlag' - 0)·LHS·(CI - opcode(pow))
(CopyFlag' - 0)·RHS
(CopyFlag' - 1)·(CI' - CI)
(CopyFlag' - 1)·LHS·(CI - opcode(pow))·(Bits' - Bits - 1)
(CopyFlag' - 1)·RHS·(Bits' - Bits - 1)
(CopyFlag' - 1)·(CI - opcode(pow))·LhsLsb·(LhsLsb - 1)
(CopyFlag' - 1)·RhsLsb·(RhsLsb - 1)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 1)·(Result' - 2)·(Result - 0)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 2)·(Result - 1)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(LhsLsb - 1)·(RhsLsb - 0)·(Result - 1)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(LhsLsb - 0)·(RhsLsb - 1)·(Result - 0)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(1 - LhsLsb - RhsLsb + 2·LhsLsb·RhsLsb)·(CopyFlag - 1)·(Result - 2)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(1 - LhsLsb - RhsLsb + 2·LhsLsb·RhsLsb)·(CopyFlag - 0)·(Result - 0)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result - 2·Result' - LhsLsb·RhsLsb)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS'·LhsInv')·LHS·(Result - Bits)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·LHS'·(Result' - Result)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(LHS' - LHS)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(RhsLsb - 1)·(Result - Result'·Result')
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(RhsLsb - 0)·(Result - Result'·Result'·LHS)
(CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(Result - Result' - LhsLsb)
(CopyFlag' - 1)·(U32LookupServerLogDerivative' - U32LookupServerLogDerivative)
(CopyFlag' - 0)·((U32LookupServerLogDerivative' - U32LookupServerLogDerivative)·(🧷 - 🥜·LHS' - 🌰·RHS' - 🥑·CI' - 🥕·Result') - LookupMultiplicity')
Terminal Constraints
- `LHS` is 0 or the current instruction is `pow`.
- `RHS` is 0.
Terminal Constraints as Polynomials
LHS·(CI - opcode(pow))
RHS
Table Linking
Triton VM's tables are linked together using a variety of different cryptographic arguments. The section on arithmetization incorporates all the necessary ingredients by defining the corresponding parts of the Arithmetic Execution Tables and the Arithmetic Intermediate Representation. However, that presentation doesn't make it easy to follow and understand the specific arguments in use. Therefore, the distinct “cross-table” or “table-linking” arguments are presented here in isolation.
Compressing Multiple Elements
Mathematically, all arguments used in Triton VM are about single elements of the finite field $\mathbb{F}_p$. In practice, it is very useful to argue about multiple elements. A common trick is to “compress” multiple elements into a single one using random weights. These weights are “supplied by the STARK verifier,” i.e., sampled using the Fiat-Shamir heuristic after the prover has committed to the elements in question. For example, if $a$, $b$, and $c$ are to be incorporated into the cryptographic argument, then the prover

1. commits to $a$, $b$, and $c$,
2. samples weights $α$, $β$, and $γ$ using the Fiat-Shamir heuristic,
3. “compresses” the elements in question to $δ = α⋅a + β⋅b + γ⋅c$, and
4. uses $δ$ in the cryptographic argument.
In the following, all cryptographic arguments are presented using single field elements. Depending on the use case, this single element represents multiple, “compressed” elements.
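A toy sketch of the compression step, using the B-field for simplicity even though the weights are actually X-field elements:

```python
P = 2**64 - 2**32 + 1  # the Oxfoi prime, modulus of the B-field

def compress(elements, weights):
    """Fold several committed field elements into one using verifier weights."""
    assert len(elements) == len(weights)
    return sum(w * e for w, e in zip(weights, elements)) % P
```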
Permutation Argument
The Permutation Argument establishes that two lists $A=(a_{0},…,a_{n})$ and $B=(b_{0},…,b_{n})$ are permutations of each other. To achieve this, the lists' elements are interpreted as the roots of polynomials $f_{A}(X)$ and $f_{B}(X)$, respectively:
$$f_A(X) = \prod_{i=0}^n (X - a_i) \qquad\qquad f_B(X) = \prod_{i=0}^n (X - b_i)$$
The two lists $A$ and $B$ are a permutation of each other if and only if the two polynomials $f_{A}(X)$ and $f_{B}(X)$ are identical. By the Schwartz–Zippel lemma, probing the polynomials in a single point $α$ establishes the polynomial identity with high probability.^{1}
In Triton VM, the Permutation Argument is generally applied to show that the rows of one table appear in some other table without enforcing the rows' order in relation to each other. To establish this, the prover

1. commits to the main column in question,^{2}
2. samples a random challenge $α$ through the Fiat-Shamir heuristic, and
3. computes the running product of $f_A(α)$ and $f_B(α)$ in the respective tables' auxiliary column.
For example, in Table A:

| main column A | auxiliary column A: running product |
|--------------:|:------------------------------------|
| 0 | $(α−0)$ |
| 1 | $(α−0)(α−1)$ |
| 2 | $(α−0)(α−1)(α−2)$ |
| 3 | $(α−0)(α−1)(α−2)(α−3)$ |
And in Table B:

| main column B | auxiliary column B: running product |
|--------------:|:------------------------------------|
| 2 | $(α−2)$ |
| 1 | $(α−2)(α−1)$ |
| 3 | $(α−2)(α−1)(α−3)$ |
| 0 | $(α−2)(α−1)(α−3)(α−0)$ |
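The running products above can be recomputed with a short sketch. For simplicity it works over the B-field, whereas the actual auxiliary columns live in the X-field:

```python
P = 2**64 - 2**32 + 1  # B-field modulus; a stand-in for the X-field

def running_product(column, alpha):
    """Auxiliary column: accumulate a factor (alpha - a_i) per row."""
    rp, out = 1, []
    for a in column:
        rp = rp * (alpha - a) % P
        out.append(rp)
    return out

# Permuted columns yield identical terminal values.
alpha = 1234567
assert running_product([0, 1, 2, 3], alpha)[-1] == running_product([2, 1, 3, 0], alpha)[-1]
```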
It is possible to establish a subset relation by skipping over certain elements when computing the running product. The running product must incorporate the same elements in both tables. Otherwise, the Permutation Argument will fail.
An example of a subset Permutation Argument can be found between the U32 Table and the Processor Table.
This depends on the length $n$ of the lists $A$ and $B$ as well as the field size. For Triton VM, $n < 2^{32}$. The polynomials $f_A(X)$ and $f_B(X)$ are evaluated over the auxiliary field with $p^3 \approx 2^{192}$ elements. The false positive rate is therefore $n / |\mathbb{F}_{p^3}| \leqslant 2^{-160}$.
Evaluation Argument
The Evaluation Argument establishes that two lists $A=(a_{0},…,a_{n})$ and $B=(b_{0},…,b_{n})$ are identical. To achieve this, the lists' elements are interpreted as the coefficients of polynomials $f_{A}(X)$ and $f_{B}(X)$, respectively:
$$f_A(X) = X^{n+1} + \sum_{i=0}^n a_{n-i} X^i \qquad\qquad f_B(X) = X^{n+1} + \sum_{i=0}^n b_{n-i} X^i$$
The two lists $A$ and $B$ are identical if and only if the two polynomials $f_{A}(X)$ and $f_{B}(X)$ are identical. By the Schwartz–Zippel lemma, probing the polynomials in a single point $α$ establishes the polynomial identity with high probability.^{1}
In Triton VM, the Evaluation Argument is generally used to show that (parts of) some rows appear in two tables in the same order. To establish this, the prover

1. commits to the main column in question,^{2}
2. samples a random challenge $α$ through the Fiat-Shamir heuristic, and
3. computes the running evaluation of $f_A(α)$ and $f_B(α)$ in the respective tables' auxiliary column.
For example, in both Table A and B:

| main column | auxiliary column: running evaluation |
|------------:|:-------------------------------------|
| 0 | $α^1 + 0α^0$ |
| 1 | $α^2 + 0α^1 + 1α^0$ |
| 2 | $α^3 + 0α^2 + 1α^1 + 2α^0$ |
| 3 | $α^4 + 0α^3 + 1α^2 + 2α^1 + 3α^0$ |
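The running evaluation admits an equally short sketch; note the Horner-style update `re' = re·α + b`, again over the B-field for simplicity:

```python
P = 2**64 - 2**32 + 1  # B-field modulus; a stand-in for the X-field

def running_evaluation(column, alpha):
    """Auxiliary column: Horner-style update re' = re·alpha + b_i, from re = 1."""
    re, out = 1, []
    for b in column:
        re = (re * alpha + b) % P
        out.append(re)
    return out

alpha = 1234567
same = running_evaluation([0, 1, 2, 3], alpha)
assert same == running_evaluation([0, 1, 2, 3], alpha)          # identical lists agree
assert same[-1] != running_evaluation([2, 1, 3, 0], alpha)[-1]  # order matters
```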
It is possible to establish a subset relation by skipping over certain elements when computing the running evaluation. The running evaluation must incorporate the same elements in both tables. Otherwise, the Evaluation Argument will fail.
Examples for subset Evaluation Arguments can be found between the Hash Table and the Processor Table.
This depends on the length $n$ of the lists $A$ and $B$ as well as the field size. For Triton VM, $n < 2^{32}$. The polynomials $f_A(X)$ and $f_B(X)$ are evaluated over the auxiliary field with $p^3 \approx 2^{192}$ elements. The false positive rate is therefore $n / |\mathbb{F}_{p^3}| \leqslant 2^{-160}$.
Lookup Argument
The Lookup Argument establishes that all elements of list $A=(a_{0},…,a_{ℓ})$ also occur in list $B=(b_{0},…,b_{n})$. In this context, $A$ contains the values that are being looked up, while $B$ is the lookup table.^{1} Both lists $A$ and $B$ may contain duplicates. However, it is inefficient if $B$ does, and it is therefore assumed not to.
The example at the end of this section summarizes the necessary computations for the Lookup Argument. The rest of the section derives those computations. The first step is to interpret both lists' elements as the roots of polynomials $f_{A}(X)$ and $f_{B}(X)$, respectively:
$$f_A(X) = \prod_{i=0}^\ell (X - a_i) \qquad\qquad f_B(X) = \prod_{i=0}^n (X - b_i)$$
By counting the number of occurrences $m_{i}$ of unique elements $a_{i}∈A$ and eliminating duplicates, $f_{A}(X)$ can equivalently be expressed as:
$$f_A(X) = \prod_{i=0}^n (X - a_i)^{m_i}$$
The next step uses

- the formal derivative $f_A'(X)$ of $f_A(X)$, and
- the multiplicity-weighted formal derivative $f_B'(X)$ of $f_B(X)$.

The former is the familiar formal derivative:

$$f_A'(X) = \sum_{i=0}^n m_i (X - a_i)^{m_i - 1} \prod_{j \neq i} (X - a_j)^{m_j}$$

The multiplicity-weighted formal derivative uses the lookup multiplicities $m_i$ as additional factors:

$$f_B'(X) = \sum_{i=0}^n m_i \prod_{j \neq i} (X - b_j)$$
Let $g(X)$ be the greatest common divisor of $f_A(X)$ and $f_A'(X)$. The polynomial $f_A(X) / g(X)$ has the same roots as $f_A(X)$, but all roots have multiplicity 1. This polynomial is identical to $f_B(X)$ if and only if all elements in list $A$ also occur in list $B$.
A similar property holds for the formal derivatives: the polynomial $f_A'(X) / g(X)$ is identical to $f_B'(X)$ if and only if all elements in list $A$ also occur in list $B$. By solving for $g(X)$ and equating, the following holds:

$$f_A(X) \cdot f_B'(X) = f_A'(X) \cdot f_B(X)$$
Optimization through Logarithmic Derivatives
The logarithmic derivative of $f(X)$ is defined as $f'(X) / f(X)$. Furthermore, the equality of monic polynomials $f(X)$ and $g(X)$ is equivalent to the equality of their logarithmic derivatives.^{2} This allows rewriting the above equation as:

$$\frac{f_A'(X)}{f_A(X)} = \frac{f_B'(X)}{f_B(X)}$$

Using logarithmic derivatives for the equality check decreases the computational effort for both prover and verifier. Concretely, instead of four running products – one each for $f_A$, $f_A'$, $f_B$, and $f_B'$ – only two logarithmic derivatives are needed. Namely, considering once again list $A$ containing duplicates, the above equation can be written as:

$$\sum_{i=0}^\ell \frac{1}{X - a_i} = \sum_{i=0}^n \frac{m_i}{X - b_i}$$
To compute the sums, the lists $A$ and $B$ are main columns in the two respective tables. Additionally, the lookup multiplicity is recorded explicitly in a main column of the lookup table.
Example
In Table A:
| main column A | auxiliary column A: logarithmic derivative |
|--------------:|:-------------------------------------------|
| 0 | $\frac{1}{α−0}$ |
| 2 | $\frac{1}{α−0} + \frac{1}{α−2}$ |
| 2 | $\frac{1}{α−0} + \frac{2}{α−2}$ |
| 1 | $\frac{1}{α−0} + \frac{2}{α−2} + \frac{1}{α−1}$ |
| 2 | $\frac{1}{α−0} + \frac{3}{α−2} + \frac{1}{α−1}$ |
And in Table B:
| main column B | multiplicity | auxiliary column B: logarithmic derivative |
|--------------:|-------------:|:-------------------------------------------|
| 0 | 1 | $\frac{1}{α−0}$ |
| 1 | 1 | $\frac{1}{α−0} + \frac{1}{α−1}$ |
| 2 | 3 | $\frac{1}{α−0} + \frac{1}{α−1} + \frac{3}{α−2}$ |
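Both logarithmic-derivative columns above terminate in the same value, which can be checked exactly over the rationals (the real argument works in the X-field):

```python
from fractions import Fraction

def log_derivative(pairs, alpha):
    """Sum multiplicity/(alpha - value) exactly over the rationals."""
    return sum(Fraction(m, alpha - v) for v, m in pairs)

alpha = 97
lookups = [(0, 1), (2, 1), (2, 1), (1, 1), (2, 1)]  # Table A: one term per lookup
table = [(0, 1), (1, 1), (2, 3)]                    # Table B: values with multiplicities
assert log_derivative(lookups, alpha) == log_derivative(table, alpha)
```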
It is possible to establish a subset relation by skipping over certain elements when computing the logarithmic derivative. The logarithmic derivative must incorporate the same elements with the same multiplicity in both tables. Otherwise, the Lookup Argument will fail.
An example for a Lookup Argument can be found between the Program Table and the Processor Table.
The lookup table may represent a mapping from one or more “input” elements to one or more “output” elements – see “Compressing Multiple Elements.”
See Lemma 3 in this paper for a proof.
Memory Consistency
Triton has three memory-like units: the RAM, the Jump Stack, and the Op Stack. Each such unit has a corresponding table. Memory consistency is the property that whenever the processor reads a data element from these units, the value that it receives corresponds to the value that was last written to that memory cell. Memory consistency follows from two intermediate properties.
1. Contiguity of regions of constant memory pointer. After selecting from each table all the rows with any given memory pointer, the resulting sublist is contiguous, which is to say, there are no gaps and two such sublists can never interleave.
2. Inner-sorting within contiguous regions. Within each such contiguous region, the rows are sorted in ascending order by clock cycle.
The contiguity of regions of constant memory pointer is established differently for the RAM Table than for the Op Stack or Jump Stack Tables. The Op Stack and Jump Stack Tables enjoy a particular property whereby the memory pointer can only ever increase or decrease by one (or stay the same). As a result, simple AIR constraints can enforce the correct sorting by memory pointer. In contrast, the memory pointer for the RAM Table can jump arbitrarily. As explained in the next section, a Contiguity Argument establishes contiguity.
The correct inner sorting is established the same way for all three memorylike tables. Their respective clock jump differences – differences of clock cycles within regions of constant memory pointer – are shown to be contained in the set of all clock cycles through Lookup Arguments. Under reasonable assumptions about the running time, this fact implies that all clock jumps are directed forwards, as opposed to backwards, which in turn implies that the rows are sorted for clock cycle.
The next sections elaborate on these constructions. A dedicated section shows that these two properties do indeed suffice to prove memory consistency.
Contiguity of MemoryPointer Regions
Contiguity for Op Stack Table
In each cycle, the memory pointer for the Op Stack Table, `op_stack_pointer`, can only ever increase by one, remain the same, or decrease by one.
As a result, it is easy to enforce that the entire table is sorted for memory pointer using one initial constraint and one transition constraint.

- Initial constraint: `op_stack_pointer` starts with 16,^{1} so in terms of polynomials the constraint is `op_stack_pointer - 16`.
- Transition constraint: the new `op_stack_pointer` is either the same as the previous or one larger. The polynomial representation for this constraint is `(op_stack_pointer' - op_stack_pointer - 1) · (op_stack_pointer' - op_stack_pointer)`.
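The transition constraint can be evaluated directly; it vanishes exactly when the pointer stays put or grows by one:

```python
def op_stack_transition(pointer: int, pointer_next: int) -> int:
    """Evaluate (p' - p - 1)·(p' - p): zero iff the pointer stays or grows by one."""
    return (pointer_next - pointer - 1) * (pointer_next - pointer)

assert op_stack_transition(16, 16) == 0  # pointer unchanged: satisfied
assert op_stack_transition(16, 17) == 0  # pointer grows by one: satisfied
assert op_stack_transition(16, 18) != 0  # any other jump violates the constraint
```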
Contiguity for Jump Stack Table
Analogously to the Op Stack Table, the Jump Stack's memory pointer `jsp` can only ever decrease by one, remain the same, or increase by one within each cycle. As a result, similar constraints establish that the entire table is sorted for memory pointer.

- Initial constraint: `jsp` starts with zero, so in terms of polynomials the constraint is `jsp`.
- Transition constraint: the new `jsp` is either the same as the previous or one larger. The polynomial representation for this constraint is `(jsp' - jsp - 1) · (jsp' - jsp)`.
Contiguity for RAM Table
The Contiguity Argument for the RAM Table establishes that all RAM pointer regions start with distinct values. It is easy to ignore consecutive duplicates in the list of all RAM pointers using one additional main column. This allows identification of the RAM pointer values at the regions' boundaries, $A$. The Contiguity Argument then shows that the list $A$ contains no duplicates. For this, it uses Bézout's identity for univariate polynomials. Concretely, the prover shows that for the polynomial $f_A(X)$ with all elements of $A$ as roots, i.e., $$f_A(X) = \prod_{i=0}^n (X - a_i)\,,$$ and its formal derivative $f_A'(X)$, there exist $u(X)$ and $v(X)$ with appropriate degrees such that $$u(X) \cdot f_A(X) + v(X) \cdot f_A'(X) = 1\,.$$ In other words, the Contiguity Argument establishes that the greatest common divisor of $f_A(X)$ and $f_A'(X)$ is 1. This implies that all roots of $f_A(X)$ have multiplicity 1, which holds if and only if there are no duplicate elements in list $A$.
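The underlying fact, namely that $\gcd(f_A, f_A') = 1$ precisely when all roots of $f_A$ are distinct, can be demonstrated with a small exact-arithmetic sketch over the rationals (the actual argument works over the B-field):

```python
from fractions import Fraction

def from_roots(roots):
    """Coefficients (lowest degree first) of the product of (X - r)."""
    f = [Fraction(1)]
    for r in roots:
        shifted = [Fraction(0)] + f            # f · X
        for i, c in enumerate(f):
            shifted[i] -= r * c                # minus r · f
        f = shifted
    return f

def derivative(f):
    return [i * c for i, c in enumerate(f)][1:]

def poly_mod(f, g):
    """Remainder of f divided by g; both are coefficient lists."""
    f = f[:]
    while True:
        while f and f[-1] == 0:                # strip leading zeros
            f.pop()
        if len(f) < len(g):
            return f or [Fraction(0)]
        ratio, shift = f[-1] / g[-1], len(f) - len(g)
        for i, c in enumerate(g):
            f[shift + i] -= ratio * c          # cancel the leading term

def poly_gcd(f, g):
    while any(c != 0 for c in g):
        f, g = g, poly_mod(f, g)
    while f[-1] == 0:
        f.pop()
    return [c / f[-1] for c in f]              # normalize to monic

distinct = from_roots([2, 5, 7])
assert poly_gcd(distinct, derivative(distinct)) == [Fraction(1)]
repeated = from_roots([2, 5, 5])
assert poly_gcd(repeated, derivative(repeated)) != [Fraction(1)]
```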
The following columns and constraints are needed for the Contiguity Argument:

- Main column `iord` and two deterministic transition constraints enable conditioning on a changed memory pointer.
- Main columns `bcpc0` and `bcpc1` and two deterministic transition constraints contain and constrain the symbolic Bézout coefficient polynomials' coefficients.
- Auxiliary column `rpp` is a running product similar to that of a conditioned Permutation Argument. A randomized transition constraint verifies the correct accumulation of factors for updating this column.
- Auxiliary column `fd` is the formal derivative of `rpp`. A randomized transition constraint verifies the correct application of the product rule of differentiation to update this column.
- Auxiliary columns `bc0` and `bc1` build up the Bézout coefficient polynomials based on the corresponding main columns, `bcpc0` and `bcpc1`. Two randomized transition constraints enforce the correct build-up of the Bézout coefficient polynomials.
- A terminal constraint takes the weighted sum of the running product and the formal derivative, where the weights are the Bézout coefficient polynomials, and equates it to one. This equation asserts the Bézout relation.
The following table illustrates the idea. Columns not needed for establishing memory consistency are not displayed.
| ramp | iord | bcpc0 | bcpc1 | rpp | fd | bc0 | bc1 |
|:----:|:----:|:-----:|:-----:|:---:|:--:|:---:|:---:|
| $a$ | 0 | $0$ | $ℓ$ | $(X−a)$ | $1$ | $0$ | $ℓ$ |
| $a$ | $(b−a)^{-1}$ | $0$ | $ℓ$ | $(X−a)$ | $1$ | $0$ | $ℓ$ |
| $b$ | 0 | $j$ | $m$ | $(X−a)(X−b)$ | $p(X)$ | $j$ | $ℓX+m$ |
| $b$ | 0 | $j$ | $m$ | $(X−a)(X−b)$ | $p(X)$ | $j$ | $ℓX+m$ |
| $b$ | $(c−b)^{-1}$ | $j$ | $m$ | $(X−a)(X−b)$ | $p(X)$ | $j$ | $ℓX+m$ |
| $c$ | 0 | $k$ | $n$ | $(X−a)(X−b)(X−c)$ | $q(X)$ | $jX+k$ | $ℓX^2+mX+n$ |
| $c$ | - | $k$ | $n$ | $(X−a)(X−b)(X−c)$ | $q(X)$ | $jX+k$ | $ℓX^2+mX+n$ |
The values contained in the auxiliary columns are undetermined until the verifier's challenge $α$ is known; before that happens, it is worthwhile to present the polynomial expressions in $X$, anticipating the substitution $X \mapsto α$. The constraints are articulated relative to `α`.
The inverse of `ramp` difference, `iord`, takes the inverse of the difference between the current and next `ramp` values if that difference is non-zero, and zero else. This constraint corresponds to two transition constraint polynomials:

(ramp' - ramp) ⋅ ((ramp' - ramp) ⋅ iord - 1)
iord ⋅ ((ramp' - ramp) ⋅ iord - 1)
The running product `rpp` starts with $X − ramp$ initially, which is enforced by an initial constraint.
It accumulates a factor $X − ramp'$ in every pair of rows where `ramp ≠ ramp'`.
This evolution corresponds to one transition constraint: (ramp' - ramp) ⋅ (rpp' - rpp ⋅ (α - ramp')) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (rpp' - rpp)
Denote by $f_{rp}(X)$ the polynomial that accumulates all factors $X − ramp'$ in every pair of rows where $ramp ≠ ramp'$.
The column `fd` contains the “formal derivative” of the running product with respect to $X$.
The formal derivative is initially $1$, which is enforced by an initial constraint.
The transition constraint applies the product rule of differentiation conditioned upon the difference in `ramp` being nonzero; in other words, if $ramp = ramp'$ then the same value persists, but if $ramp ≠ ramp'$ then $fd$ is mapped as

$$fd \mapsto fd' = (X - ramp') \cdot fd + rpp\,.$$

This update rule is called the product rule of differentiation because, assuming $ramp' ≠ ramp$, then

$$\frac{\mathrm{d}\,rpp'}{\mathrm{d}X} = \frac{\mathrm{d}\,\big((X - ramp') \cdot rpp\big)}{\mathrm{d}X} = (X - ramp') \cdot \frac{\mathrm{d}\,rpp}{\mathrm{d}X} + \frac{\mathrm{d}\,(X - ramp')}{\mathrm{d}X} \cdot rpp = (X - ramp') \cdot fd + rpp\,.$$

The transition constraint for `fd` is (ramp' - ramp) ⋅ (fd' - rpp - (α - ramp') ⋅ fd) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (fd' - fd).
The polynomials $f_{bc0}(X)$ and $f_{bc1}(X)$ are the Bézout coefficient polynomials satisfying the relation $$f_{bc0}(X) \cdot f_{rp}(X) + f_{bc1}(X) \cdot f_{fd}(X) = \gcd(f_{rp}(X), f_{fd}(X)) \overset{!}{=} 1\,.$$ The prover finds $f_{bc0}(X)$ and $f_{bc1}(X)$ as the minimal-degree Bézout coefficients as returned by the extended Euclidean algorithm. Concretely, the degree of $f_{bc0}(X)$ is smaller than the degree of $f_{fd}(X)$, and the degree of $f_{bc1}(X)$ is smaller than the degree of $f_{rp}(X)$.
The (scalar) coefficients of the Bézout coefficient polynomials are recorded in main columns `bcpc0` and `bcpc1`, respectively.
The transition constraints for these columns enforce that the value in one such column can only change if the memory pointer `ramp` changes.
However, unlike the conditional update rule enforced by the transition constraints of `rpp` and `fd`, the new value is unconstrained.
Concretely, the two transition constraints are:

(1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc0' - bcpc0)
(1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc1' - bcpc1)

Additionally, `bcpc0` must initially be zero, which is enforced by an initial constraint.
This upper-bounds the degrees of the Bézout coefficient polynomials, which are built from main columns `bcpc0` and `bcpc1`.
Two transition constraints enforce the correct build-up of the Bézout coefficient polynomials:

(1 - (ramp' - ramp) ⋅ iord) ⋅ (bc0' - bc0) + (ramp' - ramp) ⋅ (bc0' - α ⋅ bc0 - bcpc0')
(1 - (ramp' - ramp) ⋅ iord) ⋅ (bc1' - bc1) + (ramp' - ramp) ⋅ (bc1' - α ⋅ bc1 - bcpc1')

Additionally, `bc0` must initially be zero, and `bc1` must initially equal `bcpc1`.
This is enforced by initial constraints `bc0` and `bc1 - bcpc1`.
Lastly, the verifier verifies the randomized AIR terminal constraint `bc0 ⋅ rpp + bc1 ⋅ fd - 1`.
Completeness.
For honest provers, the gcd is guaranteed to be one. As a result, the protocol has perfect completeness. $□$
Soundness.
If the table has at least one non-contiguous region, then $f_{rp}(X)$ and $f_{fd}(X)$ share at least one factor. As a result, no Bézout coefficients $f_{bc0}(X)$ and $f_{bc1}(X)$ can exist such that $f_{bc0}(X)\cdot f_{rp}(X) + f_{bc1}(X)\cdot f_{fd}(X) = 1$. The verifier therefore probes unequal polynomials of degree at most $2T - 2$, where $T$ is the length of the execution trace, which is upper bounded by $2^{32}$. According to the Schwartz–Zippel lemma, the false positive probability is at most $(2T - 2)/|\mathbb{F}|$. $\square$
ZeroKnowledge.
Since the contiguity argument is constructed using only the AET and AIR, zero-knowledge follows from Triton VM's zk-STARK engine. $\square$
Summary of constraints
We present a summary of all constraints.
Initial
`bcpc0`
`bc0`
`bc1 - bcpc1`
`fd - 1`
`rpp - (α - ramp)`
Consistency
None.
Transition
`(ramp' - ramp)⋅((ramp' - ramp)⋅iord - 1)`
`iord⋅((ramp' - ramp)⋅iord - 1)`
`(1 - (ramp' - ramp)⋅iord)⋅(bcpc0' - bcpc0)`
`(1 - (ramp' - ramp)⋅iord)⋅(bcpc1' - bcpc1)`
`(ramp' - ramp)⋅(rpp' - rpp⋅(α - ramp')) + (1 - (ramp' - ramp)⋅iord)⋅(rpp' - rpp)`
`(ramp' - ramp)⋅(fd' - rp - (α - ramp')⋅fd) + (1 - (ramp' - ramp)⋅iord)⋅(fd' - fd)`
`(1 - (ramp' - ramp)⋅iord)⋅(bc0' - bc0) + (ramp' - ramp)⋅(bc0' - α⋅bc0 - bcpc0')`
`(1 - (ramp' - ramp)⋅iord)⋅(bc1' - bc1) + (ramp' - ramp)⋅(bc1' - α⋅bc1 - bcpc1')`
Terminal
`bc0⋅rpp + bc1⋅fd - 1`
Clock Jump Differences and Inner Sorting
The previous sections show how it is proven that in the Jump Stack, Op Stack, and RAM Tables, the regions of constant memory pointer are contiguous. The next step is to prove that within each contiguous region of constant memory pointer, the rows are sorted by clock cycle. That is the topic of this section.
The problem arises from clock jumps: the phenomenon of the clock cycle increasing even though the memory pointer does not change. If arbitrary jumps were allowed, nothing would prevent a cheating prover from using a table whose rows are not in chronological order, giving rise to exploitable attacks. So it must be shown that every clock jump is directed forward and not backward.
Our strategy is to show that the difference, i.e., the next clock cycle minus the current clock cycle, is itself a clock cycle. Recall that the Processor Table's clock cycles run from 0 to $T-1$. Therefore, a forward-directed clock jump difference is in $F = \{1, \dots, T-1\} \subseteq \mathbb{F}_p$, whereas a backward-directed clock jump's difference is in $B = \{-f \mid f \in F\} = \{1-T, \dots, -1\} \subseteq \mathbb{F}_p$. No other clock jump difference can occur. If $T \ll p/2$, there is no overlap between the sets $F$ and $B$. As a result, in this regime, showing that a clock jump difference is in $F$ guarantees that the jump is forward-directed.
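A quick sanity check of this disjointness argument, assuming the Oxfoi prime and the trace-length bound from the text:

```python
# Sanity check: with the Oxfoi prime p and trace length T well below p/2,
# the sets F = {1, ..., T-1} and B = {p-T+1, ..., p-1} (backward jump
# differences, lifted to the integers) cannot overlap.
p = 2**64 - 2**32 + 1
T = 2**32                 # upper bound on the trace length
F_max = T - 1             # largest forward-directed difference
B_min = p - (T - 1)       # smallest backward-directed difference, lifted
assert F_max < B_min      # the two ranges are disjoint
assert T < p // 2         # the regime the argument relies on
```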
The set of values in the Processor Table's clock cycle column is $F \cup \{0\}$.
A standard Lookup Argument can show that the clock jump differences are elements of that column.
Since all three memory-like tables look up the same property, the Processor Table acts as a single server, as opposed to three distinct servers. Concretely, the lookup multiplicities of the three clients, i.e., the memory-like tables, are recorded in a single column `cjd_mul`. It contains the element-wise sum of the three distinct lookup multiplicities.
Proof of Memory Consistency
Whenever the Processor Table reads a value "from" a memory-like table, this value appears nondeterministically and is unconstrained by the main table AIR constraints. However, there is a permutation argument that links the Processor Table to the memory-like table in question. The construction satisfies memory consistency if it guarantees that whenever a memory cell is read, its value is consistent with the last time that cell was written.
The above is too informal to provide a meaningful proof for. Let's put formal meanings on the proposition and premises, before reducing the former to the latter.
Let $P$ denote the Processor Table and $M$ denote the memory-like table. Both have height $T$. Both have columns `clk`, `mp`, and `val`. For $P$, the column `clk` coincides with the index of the row. $P$ has another column `ci`, which contains the current instruction, which is `write`, `read`, or `any`. Obviously, `mp` and `val` are abstract names that depend on the particular memory-like table, just like `write`, `read`, and `any` are abstract instructions that depend on the type of memory being accessed. In the following math notation, we use `col` to denote the column name and $col$ to denote the value that the column takes in a given row.
Definition 1 (contiguity): The memory-like table is contiguous iff all sublists of rows with the same memory pointer `mp` are contiguous. Specifically, for any given memory pointer $mp$, there are no rows with a different memory pointer $mp'$ in between rows with memory pointer $mp$.
$$\forall i < j < k \in \{0, \dots, T-1\} : \quad mp \overset{\triangle}{=} M[i][mp] = M[k][mp] \Rightarrow M[j][mp] = mp$$
Definition 2 (regional sorting): The memorylike table is regionally sorted iff for every contiguous region of constant memory pointer, the clock cycle increases monotonically.
$$\forall i < j \in \{0, \dots, T-1\} : \quad M[i][mp] = M[j][mp] \Rightarrow M[i][clk] <_{\mathbb{Z}} M[j][clk]$$
The symbol $<_{\mathbb{Z}}$ denotes the integer "less than" operator, after lifting the operands from the finite field to the integers.
Definition 3 (memory consistency): A Processor Table $P$ has memory consistency if, whenever a memory cell at location $mp$ is read, its value corresponds to the previous time the memory cell at location $mp$ was written. Specifically, there are no writes in between the write and the read that give the cell a different value.
$$\forall k \in \{0, \dots, T-1\} : \quad P[k][ci] = read \Rightarrow \big( (1) \Rightarrow (2) \big)$$
$$(1) \quad \exists i \in \{0, \dots, k\} : P[i][ci] = write \wedge P[i+1][val] = P[k][val] \wedge P[i][mp] = P[k][mp]$$
$$(2) \quad \nexists j \in \{i+1, \dots, k-1\} : P[j][ci] = write \wedge P[j][mp] = P[k][mp]$$
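Definition 3 can be brute-forced on toy traces. The following hypothetical Python checker (not part of Triton VM) evaluates the quantifiers directly, with the row index doubling as `clk`:

```python
# Toy checker for Definition 3. A trace is a list of rows {ci, mp, val};
# the row index doubles as clk. The last write to a memory pointer is the
# write for which no later write to the same pointer exists, so finding it
# directly captures conditions (1) and (2). Hypothetical helper only.
def memory_consistent(P):
    for k, row in enumerate(P):
        if row["ci"] != "read":
            continue
        # Find the most recent write to the same memory pointer.
        last_write = None
        for i in range(k):
            if P[i]["ci"] == "write" and P[i]["mp"] == row["mp"]:
                last_write = i
        if last_write is not None:
            # The write takes effect in the next row, hence the index i+1.
            if P[last_write + 1]["val"] != row["val"]:
                return False
    return True

trace = [
    {"ci": "write", "mp": 7, "val": 0},
    {"ci": "any",   "mp": 7, "val": 42},  # value after the write
    {"ci": "read",  "mp": 7, "val": 42},  # consistent read
]
assert memory_consistent(trace)
trace[2]["val"] = 13                      # tamper with the read value
assert not memory_consistent(trace)
```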
Theorem 1 (memory consistency): Let $P$ be a Processor Table. If there exists a memory-like table $M$ such that

- selecting for the columns `clk`, `mp`, `val`, the two tables' lists of rows are permutations of each other; and
- $M$ is contiguous and regionally sorted; and
- $M$ has no changes in `val` that coincide with clock jumps;
then $P$ has memory consistency.
Proof. For every memory pointer value $mp$, select the sublist of rows $P_{mp} \overset{\triangle}{=} \{P[k] \mid P[k][mp] = mp\}$ in order. The way this sublist is constructed guarantees that it coincides with the contiguous region of $M$ where the memory pointer is also $mp$.
Iteratively apply the following procedure to $P_{mp}$: remove the bottommost row if it does not correspond to a row $k$ that constitutes a counterexample to memory consistency. Specifically, let $i$ be the clock cycle of the previous row in $P_{mp}$.
 If $i$ satisfies $(1)$ then by construction it also satisfies $(2)$. As a result, row $k$ is not part of a counterexample to memory consistency. We can therefore remove the bottommost row and proceed to the next iteration of the outermost loop.
 If $P[i][ci] \neq write$ then we can safely ignore this row: if there is no clock jump, then the absence of a $write$ instruction guarantees that $val$ cannot change; and if there is a clock jump, then by assumption on $M$, $val$ cannot change. So set $i$ to the clock cycle of the row above it in $P_{mp}$ and proceed to the next iteration of the inner loop. If there are no rows left for $i$ to index, then there is no possible counterexample for $k$, and so remove the bottommost row of $P_{mp}$ and proceed to the next iteration of the outermost loop.
 The case $P[i+1][val] \neq P[k][val]$ cannot occur because, by construction of $i$, $val$ cannot change.
 The case $P[i][mp] \neq P[k][mp]$ cannot occur because the list was constructed by selecting only elements with the same memory pointer.
 This list exhausts the possibilities of condition (1).
When $P_{mp}$ consists of only two rows, it can contain no counterexamples. By applying the above procedure, we can reduce every correctly constructed sublist $P_{mp}$ to a list consisting of two rows. Therefore, for every $mp$, the sublist $P_{mp}$ is free of counterexamples to memory consistency. Equivalently, $P$ is memory consistent. $□$
Program Attestation
“Program attestation” is the notion of cryptographically linking a program to the proof produced by that program's execution. Informally, program attestation allows Triton VM to prove claims of the form:
There exists a program with hash digest X that, given input Y, produces output Z.
Without program attestation, Triton VM would only be able to prove claims of the form:
There exists a program that, given input Y, produces output Z.
The proof system is zero-knowledge with respect to the actual program. That is, the verifier learns nothing about the program whose correct execution is proven, except the program's digest^{1}. This property is generally desirable, for example to keep a specific machine learning model secret while being able to prove its correct execution. However, it can be equally desirable to reveal the program, for example to allow code auditing. In such cases, the program's source code can be revealed and digest equality can be checked.
As is the case throughout Triton VM, the used hash function is Tip5.
Mechanics
Program attestation is part of initializing Triton VM. That is, control is passed to the to-be-executed program only after the following steps have been performed:
 The instructions, located in program memory, are padded. The padding follows Tip5's rules for input of variable length^{2}.
 The padded instructions are copied to the Hash Table.
 The Hash Table computes the program's digest by iterating Tip5 multiple times in Sponge mode.
 The program digest is copied to the bottom of the operational stack, i.e., stack elements `st11` through `st15`.
The program digest is part of the generated proof^{3}, and hence public. During proof verification, AIR constraints in the Hash Table establish that the claimed and the computed program digests are identical, and that the digest was computed integrally. This establishes verifiable integrity of the publicly claimed program digest. The figure below visualizes the interplay of tables and public data for proof verification.
Notably, a program has access to the hash digest of itself. This is useful for recursive verification: since the program digest is both part of the proof that is being verified and accessible during execution, the two can be checked for equality at runtime. This way, the recursive verifier can know whether it is actually recursing, or whether it is verifying a proof for some other program. After all, the hash digest of the verifier program cannot be hardcoded into the verifier program, because that would be a circular dependency.
Since hash functions are deterministic, a programmer desiring resistance against enumeration attacks might want to include blinding elements. This is easily possible in, for example, dead code, by including instructions like `push random_element`. The distribution from which `random_element` is sampled, together with the number of such instructions, determines the security level against enumeration attacks.
Padding is one 1 followed by the minimal number of 0’s necessary to make the padded input length a multiple of the $rate$, which is 10. See also section 2.5 “FixedLength versus VariableLength” in the Tip5 paper.
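The padding rule can be sketched in a few lines of Python; the rate of 10 is taken from the text, and `pad` is a hypothetical helper:

```python
# Sketch of the variable-length padding rule: append a single 1, then 0s
# until the padded length is a multiple of the rate (10 for Tip5).
RATE = 10

def pad(input_elements):
    padded = list(input_elements) + [1]
    padded += [0] * (-len(padded) % RATE)
    return padded

assert len(pad([])) == 10
assert pad([5, 6, 7])[3] == 1             # the single 1 right after the input
assert all(x == 0 for x in pad([5, 6, 7])[4:])
assert len(pad(list(range(10)))) == 20    # a full input still gets a new block
```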
More specifically, of the claim, which itself is part of the proof.
Index Sampling
Task: given pseudorandomness, sample pseudorandomly $k$ distinct but otherwise uniform indices from the interval $[0:U)$. This index sampling task is needed in several locations, not just in the STARK engine. Outside of the VM, this task can be achieved by rejection sampling until the list of collected samples has the requisite size (namely, $k$). Inside the VM, we want to avoid variable-length execution paths. This note explains how to do it using nondeterminism.
Solution 1: OutofOrder Divination
We want to sample exactly $k$ unique indices, but the sampler in the VM is allowed to divine counter values.
Outside VM
By rejection sampling until the list of samples has the requisite length I mean something like this:
```
function sample_indices(count, upper_bound, randomness):
    list <- []
    counter <- 0
    while length(list) < count:
        integer <- Hash(randomness || counter)
        candidate_index <- integer % upper_bound
        counter <- counter + 1
        if candidate_index in list:
            continue // reject
        else:
            list <- list || candidate_index
    return list
```
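For concreteness, here is an executable Python rendering of this sampler, with SHA-256 standing in for the Tip5 hash; this substitution and the byte encodings are illustrative only:

```python
# Executable rendering of the rejection sampler, with SHA-256 standing in
# for the hash (an illustrative substitution, not Tip5).
import hashlib

def sample_indices(count, upper_bound, randomness):
    indices = []
    counter = 0
    while len(indices) < count:
        digest = hashlib.sha256(randomness + counter.to_bytes(8, "little")).digest()
        candidate_index = int.from_bytes(digest, "little") % upper_bound
        counter += 1
        if candidate_index in indices:
            continue  # reject
        indices.append(candidate_index)
    return indices

samples = sample_indices(20, 2**10, b"seed")
assert len(samples) == 20
assert len(set(samples)) == 20               # all distinct
assert all(0 <= s < 2**10 for s in samples)  # all within bounds
```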
The loop iterates at least `count` times. It is possible to parallelize this first step by separating the `count` first iterations from the variable-number follow-ups.
This python-like pseudocode hides important complexity behind the list membership test. To expand this logic, we need sorting and consecutive deduplication.
```
function sample_indices(count, upper_bound, randomness):
    list <- []
    counter <- 0
    while length(list) < count:
        integer <- Hash(randomness || counter)
        candidate_index <- integer % upper_bound
        counter <- counter + 1
        list <- sort(list || candidate_index)
        list.deduplicate()
    return list
```
Inside VM
In a nutshell, the VM nondeterministically guesses ("divines") the values of the counters out of order, checks that they are within allowed bounds, and then checks that the resulting list of indices is sorted and has no duplicates.
```
function sample_indices(count, upper_bound, randomness):
    list <- []
    for i in [0:count):
        counter <- divine()
        assert(counter >= 0)
        assert(counter < count + margin)
        integer <- Hash(randomness || counter)
        index <- integer % upper_bound
        list <- list || index
        if i > 0:
            assert(list[i] < list[i-1])
    return list
```
The Virtual Machine supplies the right counters through `divine()`, having observed them from the out-of-VM execution. The parameter `margin` should be large enough so that, with overwhelming probability, there is a set of `count`-many counters in `[0:(count+margin))` that map to distinct indices. At the same time, the parameter `margin` should not be too large, as a larger margin gives a malicious adversary more flexibility in selecting favorable indices.
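The interplay between the two runs can be simulated in Python. Again SHA-256 stands in for the hash, and all names are illustrative; the in-VM half receives the counters sorted by resulting index and re-checks bounds, strict ordering, and hence distinctness:

```python
# Simulation of the divination trick. The out-of-VM run records which
# counters were accepted; the in-VM run receives them out of order (sorted
# by resulting index), recomputes the indices, and checks the invariants.
import hashlib

def index_of(counter, upper_bound, randomness):
    digest = hashlib.sha256(randomness + counter.to_bytes(8, "little")).digest()
    return int.from_bytes(digest, "little") % upper_bound

count, margin, upper_bound, randomness = 20, 6, 2**32, b"seed"

# Outside the VM: collect `count` counters whose indices are distinct.
accepted, seen = [], set()
for counter in range(count + margin):
    idx = index_of(counter, upper_bound, randomness)
    if idx not in seen:
        seen.add(idx)
        accepted.append(counter)
    if len(accepted) == count:
        break
assert len(accepted) == count  # the honest run found enough counters

# Inside the VM: divine the counters sorted (descending) by index and
# verify bounds plus strict ordering, which implies distinctness.
divined = sorted(accepted, key=lambda c: index_of(c, upper_bound, randomness),
                 reverse=True)
indices = []
for i, counter in enumerate(divined):
    assert 0 <= counter < count + margin
    indices.append(index_of(counter, upper_bound, randomness))
    if i > 0:
        assert indices[i] < indices[i - 1]  # strictly decreasing
```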
Failure Probability
In order for the honest user to fail to find a good counter set, the list `[0:(count+margin))` must contain at least `margin + 1` collisions under the map `x -> Hash(randomness || x) % upper_bound`. We continue by modeling this map as uniformly random.
The event of `margin + 1` or more collisions implies that `margin + 1` samples were drawn from a subset of at most `count - 1` marked indices. The probability of the implied event is
$$\Pr[\#\text{collisions} \geq \mu + 1] = \left(\frac{k-1}{U}\right)^{\mu+1},$$
where $U$ denotes `upper_bound`, $k$ denotes `count`, and $\mu$ denotes `margin`.
Given the `count` and the `upper_bound`, the appropriate value for `margin` that pushes this failure probability below $2^{-\lambda}$ is
$$\left(\frac{k-1}{U}\right)^{\mu+1} \leq 2^{-\lambda}$$
$$(\mu + 1) \cdot \log_2\left(\frac{k-1}{U}\right) \leq -\lambda$$
$$\mu \geq \frac{-\lambda}{\log_2\left(\frac{k-1}{U}\right)} - 1.$$
For a concrete example, set $k = \lambda = 160$ and $U = 2^{32}$. Then $\mu$ needs to be at least 6.
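The bound can be evaluated directly; the following snippet reproduces the concrete example ($k = \lambda = 160$, $U = 2^{32}$), with `min_margin` a hypothetical helper:

```python
# Evaluate the lower bound on the margin μ for given k, λ, and U.
import math

def min_margin(k, lam, U):
    # μ ≥ -λ / log2((k-1)/U) - 1, from the derivation above.
    return (-lam) / math.log2((k - 1) / U) - 1

k, lam, U = 160, 160, 2**32
assert math.ceil(min_margin(k, lam, U)) == 6  # matches the text
```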
Security Degradation
Suppose the user is malicious and hopes to conceal their fraud by selecting a set of indices that do not expose it. Suppose that the proportion of subsets of $[0:U)$ of size $k$ that are suitable for the adversary is $\rho$. Then clearly, with the standard index sampling method, the attacker's success probability is bounded by $\rho \approx 2^{-\lambda}$. The question is whether the improved index sampler enables a higher success probability (and if so, how much higher).
The attacker has access to at most $\binom{k+\mu}{\mu}$ subsets of $[0:U)$ of size $k$. The probability that a given subset is suitable for the attack is $\rho$, and so:
 The probability that one subset is unsuitable for attack is $1 - \rho$.
 The probability that all $\binom{k+\mu}{\mu}$ subsets are unsuitable for attack is $(1 - \rho)^{\binom{k+\mu}{\mu}}$. In this step we assume that whether a subset is suitable for attack is independent of whether a different subset is suitable for attack, even if the intersection of the two subsets is nonempty.
 The probability that at least one subset is suitable for attack is $1 - (1 - \rho)^{\binom{k+\mu}{\mu}}$.
The binomial formula expands
$$(1 - \rho)^{\binom{k+\mu}{\mu}} = \sum_{i=0}^{\binom{k+\mu}{\mu}} (-1)^i \binom{\binom{k+\mu}{\mu}}{i} \rho^i.$$
We assume that each subsequent term is smaller in absolute value than the previous because $\rho$ dominates. Indeed, plugging in the well-known upper bound on the binomial coefficient $\binom{n}{k} < \left(\frac{n \cdot e}{k}\right)^k$, we get
$$\binom{\binom{k+\mu}{\mu}}{i} < \binom{\left(\frac{(k+\mu) \cdot e}{\mu}\right)^\mu}{i} < \left(\frac{\left(\frac{(k+\mu) \cdot e}{\mu}\right)^\mu \cdot e}{i}\right)^i,$$
and the assumption holds already when $\left(\frac{(k+\mu) \cdot e}{\mu}\right)^\mu < \rho^{-1}$. For a concrete example, set $k = 160$ and $\mu = 6$; then the left-hand side of this expression is roughly $2^{37.4}$, whereas these parameters target a security level of $\lambda \approx \log_2 \rho^{-1}$.
Using this assumption (on the shrinking absolute value of each successive term) we can derive an expression to quantify the security degradation:
$$\Pr[\text{attack}] \leq 1 - (1 - \rho)^{\binom{k+\mu}{\mu}} = 1 - 1 + \rho \cdot \binom{k+\mu}{\mu} - \rho^2 \cdot \binom{k+\mu}{\mu} \cdot \left(\binom{k+\mu}{\mu} - 1\right) \cdot \frac{1}{2} + \cdots \leq \binom{k+\mu}{\mu} \cdot \rho.$$
For a concrete example, set $k = 160$ and $\mu = 6$. Then $\binom{k+\mu}{\mu} \approx 2^{34.6}$ and so we lose about $35$ bits of security.
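The claimed security loss can be recomputed with Python's standard library:

```python
# The number of security bits lost is log2 of the number of counter
# subsets the attacker can choose from: binom(k + μ, μ) for k = 160, μ = 6.
import math

k, mu = 160, 6
subsets = math.comb(k + mu, mu)
bits_lost = math.log2(subsets)
assert round(bits_lost, 1) == 34.6
```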
Solution 2: Stupid Safety Margin
How about we sample $k+μ$ indices from the start, and use them all no matter what? We only need $k$ for security. The margin parameter $μ$ is chosen such that finding more than $μ$ collisions, which is required to undermine the security guarantee, is cryptographically improbable.
The benefit of this solution is that both the index samplers, i.e., inside and outside the VM, have a simple description. Furthermore, there is no longer a nonzero failure probability for honest users. The drawback is that more work is done than necessary, and the proofs are larger too.
So what is the right value of $\mu$? It turns out that this probability matches the correctness error derived for the previous solution. For redundancy, this derivation is repeated here.
Sampling $\mu + 1$ or more collisions requires hitting a marked subset of at most $k - 1$ indices $\mu + 1$ times. The probability of this event is therefore $\left(\frac{k-1}{U}\right)^{\mu+1}$. This probability is smaller than $2^{-\lambda}$ when
$$\left(\frac{k-1}{U}\right)^{\mu+1} \leq 2^{-\lambda}$$
$$(\mu + 1) \cdot \log_2\left(\frac{k-1}{U}\right) \leq -\lambda$$
$$\mu \geq \frac{-\lambda}{\log_2\left(\frac{k-1}{U}\right)} - 1.$$
For a concrete example, set $k = \lambda = 160$ and $U = 2^{32}$. Then $\mu$ needs to be at least 6.
ZeroIntegral and SumCheck
Let $T$ be a table of 1 column and $|H|$ rows, and let $\hat{f}(X)$ be any polynomial that agrees with $T$ when evaluated on $H = (\omicron^i)_i$, where $\omicron$ is a generator of the subgroup $H$ of order $2^k$. Group theory lets us prove, and efficiently verify, whether $\sum T = 0$.
Decompose $\hat{f}(X)$ into $\hat{f}(X) = \hat{g}(X) + Z_H(X) \cdot \hat{h}(X)$, where $Z_H(X)$ is the zerofier over $H$, and where $\hat{g}(X)$ has degree at most $|H| - 1$. The table sums to zero if and only if $\hat{g}(X)$ integrates to zero over $H$ because
$$\sum_{h \in H} \hat{f}(h) = \sum_{h \in H} \left( \hat{g}(h) + Z_H(h) \cdot \hat{h}(h) \right) = \sum_{h \in H} \hat{g}(h)$$
and this latter proposition is true if and only if the constant term of $\hat{g}(X)$ is zero.
Theorem. $\sum_{h \in H} \hat{f}(h) = 0 \Leftrightarrow X \mid \hat{g}(X)$ for a subgroup $H \subseteq \mathbb{F}^*$ of order $2^k$.
Proof. Let $K$ be a subgroup of $H$. If $K \neq \{1\}$ then $-1 \in K$, and also $\sum_{k \in K} k = 0$ because the elements of $K$ come in pairs: $k \in K \Leftrightarrow -k \in K$. Therefore $\sum_{h \in H} h = 0$.
The map $H \to K, \; h \mapsto h^\alpha$ is a morphism of groups with $|H| \geq |K| \geq 1$. Therefore we have:
$$\sum_{h \in H} h^\alpha = \begin{cases} |H| & \Leftarrow \alpha = 0 \bmod |H| \\ 0 & \Leftarrow \alpha \neq 0 \bmod |H|. \end{cases}$$
The polynomial $\hat{g}(X)$ has only one term whose exponent is $0 \bmod |H|$, which is the constant term. $\square$
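The power-sum fact at the heart of this proof can be checked numerically over a toy field, here $p = 257$ with a subgroup of order 16 (3 is a primitive root modulo 257, an assumption easy to verify):

```python
# Numeric check of the power-sum fact: over a subgroup H of order 16 in
# F_257^*, the sum of h^alpha over H is |H| when 16 | alpha, and 0 otherwise.
p = 257
g = pow(3, (p - 1) // 16, p)          # 3^16 generates a subgroup of order 16
H = [pow(g, i, p) for i in range(16)]
assert len(set(H)) == 16              # H really has order 16

for alpha in range(1, 16):            # 16 does not divide alpha: sum is 0
    assert sum(pow(h, alpha, p) for h in H) % p == 0
assert sum(pow(h, 16, p) for h in H) % p == 16  # 16 | alpha: sum is |H|
```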
This observation gives rise to the following Polynomial IOP for verifying that a polynomial oracle $[\hat{f}(X)]$ integrates to 0 on a subgroup $H$ of order some power of 2.
Protocol ZeroIntegral
 Prover computes $\hat{g}(X) \leftarrow \hat{f}(X) \bmod Z_H(X)$ and $\hat{h}(X) \leftarrow \frac{\hat{f}(X) - \hat{g}(X)}{Z_H(X)}$.
 Prover computes $\hat{g}^\star(X) = \frac{\hat{g}(X)}{X}$.
 Prover sends $\hat{g}^\star(X)$, of degree at most $|H| - 2$, and $\hat{h}(X)$, of degree at most $\deg(\hat{f}(X)) - |H|$, to Verifier.
 Verifier queries $[\hat{f}(X)]$, $[\hat{g}^\star(X)]$, and $[\hat{h}(X)]$ in $z \xleftarrow{\$} \mathbb{F}$ and receives $y_f, y_{g^\star}, y_h$.
 Verifier tests $y_f \overset{?}{=} z \cdot y_{g^\star} + y_h \cdot Z_H(z)$.
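An end-to-end numeric sketch of the protocol over the same toy field (an illustration under simplifying assumptions, exploiting $Z_H(X) = X^{|H|} - 1$ for multiplicative subgroups):

```python
# Sketch of Protocol Zero-Integral over F_257 with |H| = 16: build an f̂
# that sums to zero on H, split off ĝ and ĥ, and run the verifier's test.
import random

p, n = 257, 16
g = pow(3, (p - 1) // n, p)           # generator of subgroup H of order 16
H = [pow(g, i, p) for i in range(n)]

def poly_eval(coeffs, x):             # coefficients lowest degree first
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# f̂ of degree 20 with zero sum on H: start random, subtract the mean.
random.seed(1)
f = [random.randrange(p) for _ in range(21)]
mean = sum(poly_eval(f, h) for h in H) * pow(n, p - 2, p) % p
f[0] = (f[0] - mean) % p
assert sum(poly_eval(f, h) for h in H) % p == 0

# Z_H(X) = X^n - 1, so reduction mod Z_H folds coefficient i onto i mod n;
# the quotient collects the coefficients of degree >= n (deg f < 2n here).
gg = [0] * n
for i, c in enumerate(f):
    gg[i % n] = (gg[i % n] + c) % p
h_coeffs = f[n:]
assert gg[0] == 0                     # zero constant term <=> zero sum
g_star = gg[1:]                       # ĝ*(X) = ĝ(X) / X

z = 123                               # stand-in for the verifier's random z
y_f = poly_eval(f, z)
y_gs = poly_eval(g_star, z)
y_h = poly_eval(h_coeffs, z)
Z_H_z = (pow(z, n, p) - 1) % p
assert y_f == (z * y_gs + y_h * Z_H_z) % p  # the verifier's test passes
```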
This protocol can be adapted to show instead that a given polynomial oracle integrates to $a \neq 0$ on the subgroup $H$, giving rise to the well-known Sum-Check protocol. The adaptation follows from the Verifier's capacity to simulate $[\hat{f}^\star(X)]$ from $[\hat{f}(X)]$, where $\hat{f}^\star(X) = \hat{f}(X) - \frac{a}{|H|}$. This simulated polynomial is useful because
$$\sum_{h \in H} \hat{f}^\star(h) = \sum_{h \in H} \left( \hat{f}(h) - \frac{a}{|H|} \right) = a - \frac{a}{|H|} \sum_{h \in H} 1 = 0.$$
In other words, $\hat{f}(X)$ integrates to $a$ on $H$ iff $\hat{f}^\star(X)$ integrates to $0$ on $H$, and we already have a protocol to establish the latter claim.