Instruction Set Architecture

Triton VM is a stack machine with RAM. It is a Harvard architecture with read-only memory for the program. The arithmetization of the VM is defined over the B-field F_p where p is the Oxfoi prime, i.e., p = 2⁶⁴ − 2³² + 1.¹ This means the registers and memory elements take values from F_p, and the transition function gives rise to low-degree transition verification polynomials from the ring of multivariate polynomials over F_p.

At certain points in the course of arithmetization we need an extension field over F_p to ensure sufficient soundness. To this end we use F_{p³} ≅ F_p[x]/(x³ − x + 1), i.e., the quotient ring of remainders of polynomials over F_p after division by the Shah polynomial, x³ − x + 1. We refer to this field as the X-field for short, and its elements as X-field elements.
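To make the X-field concrete, here is a minimal Python sketch of X-field multiplication. This is an illustrative model, not Triton VM's implementation: elements are represented as coefficient triples [c0, c1, c2] for c0 + c1·x + c2·x², and products are reduced modulo the Shah polynomial x³ − x + 1 (i.e., using x³ = x − 1).

```python
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime, 0xffffffff00000001

def xfield_mul(a, b):
    """Multiply two X-field elements given as coefficient triples [c0, c1, c2]."""
    # Schoolbook product of two degree-<3 polynomials: degree-<5 result.
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % P
    # Reduce modulo the Shah polynomial x^3 - x + 1, i.e., substitute x^3 = x - 1.
    prod[2] = (prod[2] + prod[4]) % P  # x^4 = x^2 - x
    prod[1] = (prod[1] - prod[4]) % P
    prod[1] = (prod[1] + prod[3]) % P  # x^3 = x - 1
    prod[0] = (prod[0] - prod[3]) % P
    return prod[:3]
```

For example, x · x² = x³ reduces to x − 1, i.e., the triple [−1 mod p, 1, 0].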

Instructions have variable width: they either consist of one word, i.e., one B-field element, or of two words, i.e., two B-field elements. An example of a single-word instruction is add, adding the two elements on top of the stack, leaving the result as the new top of the stack. An example of a double-word instruction is push + arg, pushing arg to the stack.

Triton VM has two interfaces for data input, one for public and one for secret data, and one interface for data output, whose data is always public. The public interfaces are treated differently from the secret one, especially regarding their arithmetization.

¹ The name “Oxfoi” comes from the prime's hexadecimal representation 0xffffffff00000001.

Data Structures

Memory

The term memory refers to a data structure that gives read access (and possibly write access, too) to elements indicated by an address. Regardless of the data structure, the address lives in the B-field. There are four separate notions of memory:

  1. Program Memory, from which the VM reads instructions.
  2. Op Stack Memory, which stores the operational stack.
  3. RAM, to which the VM can read and write field elements.
  4. Jump Stack Memory, which stores the entire jump stack.

Program Memory

Program memory holds the instructions of the program currently executed by Triton VM. It is immutable.

Operational Stack

The stack is a last-in, first-out data structure that allows the program to store intermediate variables, pass arguments, and keep pointers to objects held in RAM. In this document, the operational stack is either referred to as just “stack” or, if more clarity is desired, “op stack.”

From the Virtual Machine's point of view, the stack is a single, continuous object. The first 16 elements of the stack can be accessed very conveniently. Elements deeper in the stack require removing some of the higher elements, possibly after storing them in RAM.

For reasons of arithmetization, the stack is actually split into two distinct parts:

  1. the operational stack registers st0 through st15, and
  2. the Op Stack Underflow Memory.

The motivation and the interplay between the two parts is described and exemplified in arithmetization of the Op Stack table.

Random Access Memory

Triton VM has dedicated Random Access Memory. It can hold up to p many base field elements, where p = 2⁶⁴ − 2³² + 1 is the Oxfoi prime.¹ Programs can read from and write to RAM using instructions read_mem and write_mem.

The initial RAM is determined by the entity running Triton VM. Populating RAM this way can be beneficial for a program's execution and proving time, especially if substantial amounts of data from the input streams need to be persisted in RAM. This initialization is one form of secret input, and one of two mechanisms that make Triton VM a non-deterministic virtual machine. The other mechanism is dedicated instructions.

Jump Stack

Another last-in, first-out data structure, similar to the op stack. The jump stack keeps track of return and destination addresses. It changes only when control follows a call or return instruction, and might change through the recurse_or_return instruction. Furthermore, executing any of the instructions return, recurse, and recurse_or_return requires a non-empty jump stack.


¹ Of course, the machine running Triton VM might have stricter limitations: storing or accessing 2⁷⁰ bits ≈ 148 exabytes of data is a non-trivial engineering feat.

Registers

This section covers all columns in the Processor Table. Only a subset of these registers relate to the instruction set; the remaining registers exist only to enable an efficient arithmetization and are marked with an asterisk (*).

| Register | Name | Purpose |
|---|---|---|
| *clk | cycle counter | counts the number of cycles the program has been running for |
| *IsPadding | padding indicator | indicates whether current state is only recorded to improve on STARK's computational runtime |
| ip | instruction pointer | contains the memory address (in Program Memory) of the instruction |
| ci | current instruction register | contains the current instruction |
| nia | next instruction register | contains either the instruction at the next address in Program Memory, or the argument for the current instruction |
| *ib0 through ib6 | instruction bit | decomposition of the instruction's opcode used to keep the AIR degree low |
| jsp | jump stack pointer | contains the memory address (in jump stack memory) of the top of the jump stack |
| jso | jump stack origin | contains the value of the instruction pointer of the last call |
| jsd | jump stack destination | contains the argument of the last call |
| st0 through st15 | operational stack registers | contain explicit operational stack values |
| *op_stack_pointer | operational stack pointer | the current size of the operational stack |
| *hv0 through hv5 | helper variable registers | helper variables for some arithmetic operations |
| *cjd_mul | clock jump difference lookup multiplicity | multiplicity with which the current clk is looked up by the Op Stack Table, RAM Table, and Jump Stack Table |

Instruction

Register ip, the instruction pointer, contains the address of the current instruction in Program Memory. The instruction is contained in the register current instruction, or ci. Register next instruction (or argument), or nia, either contains the next instruction or the argument for the current instruction in ci. For reasons of arithmetization, ci is decomposed, giving rise to the instruction bit registers, labeled ib0 through ib6.

Stack

The stack is represented by 16 registers called stack registers (st0 through st15) plus the op stack underflow memory. The top 16 elements of the op stack are directly accessible, the remainder of the op stack, i.e., the part held in op stack underflow memory, is not. In order to access elements of the op stack held in op stack underflow memory, the stack has to shrink by discarding elements from the top – potentially after writing them to RAM – thus moving lower elements into the stack registers.

The stack grows upwards, in line with the metaphor that justifies the name "stack".

For reasons of arithmetization, the stack always contains a minimum of 16 elements. Trying to run an instruction which would result in a stack of smaller total length than 16 crashes the VM.

Stack elements st0 through st10 are initially 0. Stack elements st11 through st15, i.e., the very bottom of the stack, are initialized with the hash digest of the program that is being executed. See the mechanics of program attestation for further explanations on stack initialization.

The register op_stack_pointer is not directly accessible by the program running in Triton VM. It exists only to allow efficient arithmetization.
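The split between stack registers and underflow memory can be sketched as a toy Python model. The OpStack class and its methods are hypothetical illustrations, not part of Triton VM's API; the top of the stack is held at index 0, and the order in which the program digest occupies st11 through st15 is an assumption made for illustration only.

```python
class OpStack:
    """Toy model: 16 directly accessible registers backed by underflow memory."""

    def __init__(self, program_digest):
        assert len(program_digest) == 5
        # st0 through st10 are initially 0; st11 through st15 hold the
        # program digest (illustrative ordering). Index 0 is the top.
        self._elements = [0] * 11 + list(program_digest)

    def push(self, element):
        # Deeper elements implicitly move into op stack underflow memory.
        self._elements.insert(0, element)

    def pop(self):
        # The stack must never shrink below 16 elements.
        if len(self._elements) <= 16:
            raise RuntimeError("stack underflow: VM crashes")
        return self._elements.pop(0)

    def registers(self):
        # The directly accessible part: st0 through st15.
        return self._elements[:16]
```

Popping from a 16-element stack crashes the model, mirroring the rule that the stack always contains a minimum of 16 elements.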

Helper Variables

Some instructions require helper variables in order to generate an efficient arithmetization. To this end, there are 6 helper variable registers, labeled hv0 through hv5. These registers are part of the arithmetization of the architecture, but not needed to define the instruction set.

Because they are only needed for some instructions, the helper variables are not generally defined. For instruction group decompose_arg and instructions skiz, split, eq, merkle_step, merkle_step_mem, xx_dot_step, and xb_dot_step, the behavior is defined in the respective sections.

Instructions

Most instructions are contained within a single, parameterless machine word.

Some instructions take a machine word as argument and are thus considered double-word instructions. They are recognized by the form “instr + arg”.

Regarding Opcodes

An instruction's operation code, or opcode, is the machine word uniquely identifying the instruction. For reasons of efficient arithmetization, certain properties of the instruction are encoded in the opcode. Concretely, interpreting the field element in standard representation:

  • for all double-word instructions, the least significant bit is 1.
  • for all instructions shrinking the operational stack, the second-to-least significant bit is 1.
  • for all u32 instructions, the third-to-least significant bit is 1.

The first property is used by instruction skiz. The second property helps with proving consistency of the Op Stack. The third property allows efficient arithmetization of the running product for the Permutation Argument between Processor Table and U32 Table.
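The three opcode properties can be read off the low bits of the opcode. The following sketch extracts them; the function name is illustrative. As examples from the tables below: push (opcode 1) only has an argument, pop (opcode 3) has an argument and shrinks the stack, and and (opcode 14) shrinks the stack and is a u32 instruction.

```python
def opcode_properties(opcode):
    """Decode the three instruction properties encoded in an opcode's low bits."""
    return {
        "has_arg":       (opcode >> 0) & 1 == 1,  # double-word instruction
        "shrinks_stack": (opcode >> 1) & 1 == 1,  # shrinks the op stack
        "is_u32":        (opcode >> 2) & 1 == 1,  # delegated to the u32 co-processor
    }
```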

Op Stack Manipulation

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| pop + n | 3 | e.g., _ c b a | e.g., _ | Pops the n top elements from the stack. 1 ⩽ n ⩽ 5 |
| push + a | 1 | _ | _ a | Pushes a onto the stack. |
| divine + n | 9 | e.g., _ | e.g., _ b a | Pushes n non-deterministic elements a to the stack. Interface for secret input. 1 ⩽ n ⩽ 5 |
| pick + i | 17 | e.g., _ d x c b a | e.g., _ d c b a x | Moves the element indicated by i to the top of the stack. 0 ⩽ i < 16 |
| place + i | 25 | e.g., _ d c b a x | e.g., _ d x c b a | Moves the top of the stack to the indicated position i. 0 ⩽ i < 16 |
| dup + i | 33 | e.g., _ e d c b a | e.g., _ e d c b a d | Duplicates the element i positions away from the top. 0 ⩽ i < 16 |
| swap + i | 41 | e.g., _ e d c b a | e.g., _ e a c b d | Swaps the ith stack element with the top of the stack. 0 ⩽ i < 16 |

Instruction divine n (together with merkle_step) makes Triton a virtual machine that can execute non-deterministic programs. As programs go, this concept is somewhat unusual and benefits from additional explanation. The name of the instruction is the verb (not the adjective) meaning “to discover by intuition or insight.”

From the perspective of the program, the instruction divine n makes some n elements magically appear on the stack. It is not at all specified what those elements are, but generally speaking, they have to be exactly correct, else execution fails. Hence, from the perspective of the program, it just non-deterministically guesses the correct values in a moment of divine clarity.

Looking at the entire system, consisting of the VM, the program, and all inputs – both public and secret – execution is deterministic: the divined values were supplied as and are read from secret input.

Control Flow

| Instruction | Opcode | old op stack | new op stack | old ip | new ip | old jump stack | new jump stack | Description |
|---|---|---|---|---|---|---|---|---|
| halt | 0 | _ | _ | ip | ip+1 | _ | _ | Solves the halting problem (if the instruction is reached). Indicates graceful shutdown of the VM. |
| nop | 8 | _ | _ | ip | ip+1 | _ | _ | Does nothing. |
| skiz | 2 | _ a | _ | ip | ip+s | _ | _ | Skip next instruction if a is zero. s ∈ {1, 2, 3} depends on a and whether the next instruction takes an argument. |
| call + d | 49 | _ | _ | ip | d | _ | _ (ip+2, d) | Push (ip+2, d) to the jump stack, and jump to absolute address d |
| return | 16 | _ | _ | ip | o | _ (o, d) | _ | Pop one pair off the jump stack and jump to that pair's return address (which is the first element). |
| recurse | 24 | _ | _ | ip | d | _ (o, d) | _ (o, d) | Peek at the top pair of the jump stack and jump to that pair's destination address (which is the second element). |
| recurse_or_return | 32 | _ b a ..... | _ b a ..... | ip | d or o | _ (o, d) | _ (o, d) or _ | Like recurse if st5 = a != b = st6, like return if a == b. See also extended description below. |
| assert | 10 | _ a | _ | ip | ip+1 | _ | _ | Pops a if a == 1, else crashes the virtual machine. |

The instructions return, recurse, and recurse_or_return require a non-empty jump stack. Should the jump stack be empty, executing any of these instructions causes Triton VM to crash.

Instruction recurse_or_return behaves – surprise! – either like instruction recurse or like instruction return. The (deterministic) decision which behavior to exhibit is made at runtime and depends on stack elements st5 and st6. If st5 != st6, then recurse_or_return acts like instruction recurse, else like return. The instruction is designed to facilitate loops using pointer equality as termination condition and to play nicely with instructions merkle_step and merkle_step_mem.
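The decision logic of recurse_or_return can be sketched as follows. The function name and jump-stack representation (a list of (origin, destination) pairs) are illustrative assumptions, not Triton VM's implementation.

```python
def recurse_or_return_target(st5, st6, jump_stack):
    """Return the next instruction pointer, mutating jump_stack like the VM would."""
    if not jump_stack:
        raise RuntimeError("empty jump stack: VM crashes")
    origin, destination = jump_stack[-1]
    if st5 != st6:
        return destination  # behave like `recurse`: jump stack unchanged
    jump_stack.pop()
    return origin           # behave like `return`: pop the jump stack
```

This is exactly the loop-termination pattern the instruction is designed for: iterate while two pointers differ, return once they coincide.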

Memory Access

| Instruction | Opcode | old op stack | new op stack | old RAM | new RAM | Description |
|---|---|---|---|---|---|---|
| read_mem + n | 57 | e.g., _ p+2 | e.g., _ v2 v1 v0 p-1 | [p: v0, p+1: v1, …] | [p: v0, p+1: v1, …] | Reads consecutive values vi from RAM at address p and puts them onto the op stack. Decrements RAM pointer (st0) by n. 1 ⩽ n ⩽ 5 |
| write_mem + n | 11 | e.g., _ v2 v1 v0 p | e.g., _ p+3 | [] | [p: v0, p+1: v1, …] | Writes op stack's n top-most values vi to RAM at the addresses p+i, popping the vi. Increments RAM pointer (st0) by n. 1 ⩽ n ⩽ 5 |

For the benefit of clarity, the effect of every possible argument is given below.

| instruction | old op stack | new op stack | old RAM | new RAM |
|---|---|---|---|---|
| read_mem 1 | _ p | _ a p-1 | [p: a] | [p: a] |
| read_mem 2 | _ p+1 | _ b a p-1 | [p: a, p+1: b] | [p: a, p+1: b] |
| read_mem 3 | _ p+2 | _ c b a p-1 | [p: a, p+1: b, p+2: c] | [p: a, p+1: b, p+2: c] |
| read_mem 4 | _ p+3 | _ d c b a p-1 | [p: a, p+1: b, p+2: c, p+3: d] | [p: a, p+1: b, p+2: c, p+3: d] |
| read_mem 5 | _ p+4 | _ e d c b a p-1 | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] |
| write_mem 1 | _ a p | _ p+1 | [] | [p: a] |
| write_mem 2 | _ b a p | _ p+2 | [] | [p: a, p+1: b] |
| write_mem 3 | _ c b a p | _ p+3 | [] | [p: a, p+1: b, p+2: c] |
| write_mem 4 | _ d c b a p | _ p+4 | [] | [p: a, p+1: b, p+2: c, p+3: d] |
| write_mem 5 | _ e d c b a p | _ p+5 | [] | [p: a, p+1: b, p+2: c, p+3: d, p+4: e] |
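The pointer movement of read_mem and write_mem can be sketched in Python. This is a toy model, assuming the stack is a list with the top at the end and RAM is a dict; function names mirror the instructions but are not Triton VM's API.

```python
def write_mem(stack, ram, n):
    """write_mem n: old stack `_ v_{n-1} … v0 p`, new stack `_ p+n`."""
    p = stack.pop()
    for i in range(n):
        ram[p + i] = stack.pop()  # v0 goes to address p, v1 to p+1, …
    stack.append(p + n)           # RAM pointer incremented by n

def read_mem(stack, ram, n):
    """read_mem n: old stack `_ p+n-1`, new stack `_ v_{n-1} … v0 p-1`."""
    addr = stack.pop()
    for _ in range(n):
        stack.append(ram[addr])   # reads downwards from address p+n-1 to p
        addr -= 1
    stack.append(addr)            # RAM pointer decremented past p, to p-1
```

A write followed by a read at the matching pointer round-trips the values in the same stack order.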

Hashing

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| hash | 18 | _ jihgfedcba | _ yxwvu | Hashes the stack's 10 top-most elements and puts their digest onto the stack, shrinking the stack by 5. |
| assert_vector | 26 | _ edcba edcba | _ edcba | Assert equality of st(i) to st(i+5) for 0 ⩽ i < 5. Crashes the VM if any pair is unequal. Pops the 5 top-most elements. |
| sponge_init | 40 | _ | _ | Initializes (resets) the Sponge's state. Must be the first Sponge instruction executed. |
| sponge_absorb | 34 | _ jihgfedcba | _ | Absorbs the stack's ten top-most elements into the Sponge state. |
| sponge_absorb_mem | 48 | _ dcba p | _ hgfe (p+10) | Absorbs the ten RAM elements at addresses p, p+1, … into the Sponge state. Overwrites stack elements st1 through st4 with the first four absorbed elements. |
| sponge_squeeze | 56 | _ | _ zyxwvutsrq | Squeezes the Sponge and pushes the 10 squeezed elements onto the stack. |

The instruction hash works as follows. The stack's 10 top-most elements (jihgfedcba) are popped from the stack, reversed, and concatenated with six zeros, resulting in abcdefghij000000. The Tip5 permutation is applied to abcdefghij000000, resulting in αβγδεζηθικuvwxyz. The first five elements of this result, i.e., αβγδε, are reversed and pushed to the stack. For example, the old stack was _ jihgfedcba and the new stack is _ εδγβα.

The instructions sponge_init, sponge_absorb, sponge_absorb_mem, and sponge_squeeze are the interface for using the Tip5 permutation in a Sponge construction. The capacity is never accessible to the program that's being executed by Triton VM. At any given time, at most one Sponge state exists. Only instruction sponge_init resets the state of the Sponge, and only the three Sponge instructions influence the Sponge's state. Notably, executing instruction hash does not modify the Sponge's state. When using the Sponge instructions, it is the programmer's responsibility to take care of proper input padding: Triton VM cannot know the number of elements that will be absorbed.

Base Field Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| add | 42 | _ b a | _ c | Computes the sum (c) of the top two elements of the stack (b and a) over the field. |
| addi + a | 65 | _ b | _ c | Computes the sum (c) of the top element of the stack (b) and the immediate argument (a). |
| mul | 50 | _ b a | _ c | Computes the product (c) of the top two elements of the stack (b and a) over the field. |
| invert | 64 | _ a | _ b | Computes the multiplicative inverse (over the field) of the top of the stack. Crashes the VM if the top of the stack is 0. |
| eq | 58 | _ b a | _ (a == b) | Tests the top two stack elements for equality. |

Bitwise Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| split | 4 | _ a | _ hi lo | Decomposes the top of the stack into the lower 32 bits and the upper 32 bits. |
| lt | 6 | _ b a | _ a<b | “Less than” of the stack's two top-most elements. Crashes the VM if a or b is not u32. |
| and | 14 | _ b a | _ a&b | Bitwise and of the stack's two top-most elements. Crashes the VM if a or b is not u32. |
| xor | 22 | _ b a | _ a^b | Bitwise exclusive or of the stack's two top-most elements. Crashes the VM if a or b is not u32. |
| log_2_floor | 12 | _ a | _ ⌊log₂(a)⌋ | The number of bits in a minus 1, i.e., ⌊log₂(a)⌋. Crashes the VM if a is 0 or not u32. |
| pow | 30 | _ e b | _ b**e | The top of the stack to the power of the stack's runner up. Crashes the VM if exponent e is not u32. |
| div_mod | 20 | _ d n | _ q r | Division with remainder of numerator n by denominator d. Guarantees the properties n == q·d + r and r < d. Crashes the VM if n or d is not u32 or if d is 0. |
| pop_count | 28 | _ a | _ w | Computes the hamming weight or “population count” of a. Crashes the VM if a is not u32. |
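A few of these semantics, sketched in Python for concreteness (illustrative names, not Triton VM's API). Note that split operates on a full B-field element, while div_mod and pop_count require u32 operands.

```python
U32_MAX = (1 << 32) - 1

def split(a):        # semantics of instruction `split`
    return a >> 32, a & U32_MAX          # (hi, lo)

def div_mod(n, d):   # semantics of instruction `div_mod`
    if d == 0 or n > U32_MAX or d > U32_MAX:
        raise RuntimeError("VM crashes")
    return n // d, n % d                 # guarantees n == q*d + r and r < d

def pop_count(a):    # semantics of instruction `pop_count`
    if a > U32_MAX:
        raise RuntimeError("VM crashes")
    return bin(a).count("1")             # hamming weight
```

For example, splitting the Oxfoi prime itself yields hi = 0xffffffff and lo = 1.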

Extension Field Arithmetic on Stack

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| xx_add | 66 | _ z y x b c a | _ w v u | Adds the two extension field elements encoded by field elements z y x and b c a. |
| xx_mul | 74 | _ z y x b c a | _ w v u | Multiplies the two extension field elements encoded by field elements z y x and b c a. |
| x_invert | 72 | _ z y x | _ w v u | Inverts the extension field element encoded by field elements z y x in-place. Crashes the VM if the extension field element is 0. |
| xb_mul | 82 | _ z y x a | _ w v u | Scalar multiplication of the extension field element encoded by field elements z y x with field element a. Overwrites z y x with the result. |

Input/Output

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| read_io + n | 73 | e.g., _ | e.g., _ c b a | Reads n B-field elements from standard input and pushes them to the stack. 1 ⩽ n ⩽ 5 |
| write_io + n | 19 | e.g., _ c b a | e.g., _ | Pops n elements from the stack and writes them to standard output. 1 ⩽ n ⩽ 5 |

Many-In-One

| Instruction | Opcode | old op stack | new op stack | Description |
|---|---|---|---|---|
| merkle_step | 36 | _ i edcba | _ (i div 2) zyxwv | Helps traversing a Merkle tree during authentication path verification. Crashes the VM if i is not u32. See extended description below. |
| merkle_step_mem | 44 | _ p f i edcba | _ p+5 f (i div 2) zyxwv | Helps traversing a Merkle tree during authentication path verification with the authentication path being supplied in RAM. Crashes the VM if i is not u32. See extended description below. |
| xx_dot_step | 80 | _ z y x *b *a | _ z+p2 y+p1 x+p0 *b+3 *a+3 | Reads two extension field elements from RAM located at the addresses corresponding to the two top stack elements, multiplies the extension field elements, and adds the product (p0, p1, p2) to an accumulator located on stack immediately below the two pointers. Also, increases the pointers by the number of words read. |
| xb_dot_step | 88 | _ z y x *b *a | _ z+p2 y+p1 x+p0 *b+3 *a+1 | Reads one base field element from RAM located at the address corresponding to the top of the stack, one extension field element from RAM located at the address of the second stack element, multiplies the field elements, and adds the product (p0, p1, p2) to an accumulator located on stack immediately below the two pointers. Also, increases the pointers by the number of words read. |

The instruction merkle_step works as follows. The 6th element of the stack i is taken as the node index for a Merkle tree that is claimed to include data whose digest is the content of stack registers st4 through st0, i.e., edcba. The sibling digest of edcba is εδγβα and is read from the input interface of secret data. The least-significant bit of i indicates whether edcba is the digest of a left leaf or a right leaf of the Merkle tree's current level. Depending on this least significant bit of i, merkle_step either

  1. (i = 0 mod 2) interprets edcba as the left digest, εδγβα as the right digest, or
  2. (i = 1 mod 2) interprets εδγβα as the left digest, edcba as the right digest.

In either case,

  1. the left and right digests are hashed, and the resulting digest zyxwv replaces the top of the stack, and
  2. the 6th stack element i is shifted by 1 bit to the right, i.e., the least-significant bit is dropped.
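The two cases above can be sketched compactly. This is an illustrative model only: SHA-256 stands in for the Tip5 permutation, digests are opaque byte strings, and the sibling digest is passed as an argument instead of being read from the secret input interface.

```python
import hashlib

def merkle_step(node_index, node_digest, sibling_digest):
    """Toy model of `merkle_step`: returns (parent index, parent digest)."""
    if node_index >= 1 << 32:
        raise RuntimeError("node index is not u32: VM crashes")
    if node_index % 2 == 0:
        left, right = node_digest, sibling_digest   # current node is a left child
    else:
        left, right = sibling_digest, node_digest   # current node is a right child
    # SHA-256 is a stand-in for Tip5, purely for illustration.
    parent_digest = hashlib.sha256(left + right).digest()
    return node_index // 2, parent_digest
```

Iterating this function from a leaf index up to index 1 recomputes a Merkle root, which is exactly the loop that recurse_or_return is designed to terminate.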

Instruction merkle_step_mem works very similarly to instruction merkle_step. The main difference, as the name suggests, is the source of the sibling digest: Instead of reading it from the input interface of secret data, it is supplied via RAM. Stack element st7 is taken as the RAM pointer, holding the memory address at which the next sibling digest is located in RAM. Executing instruction merkle_step_mem increments the memory pointer by the length of one digest, anticipating an authentication path that is laid out continuously. Stack element st6 does not change when executing instruction merkle_step_mem in order to facilitate instruction recurse_or_return.

In conjunction with instructions recurse_or_return and assert_vector, the instructions merkle_step and merkle_step_mem allow efficient verification of a Merkle authentication path. Furthermore, instruction merkle_step_mem allows verifiable re-use of an authentication path. This is necessary, for example, when verifiably updating a Merkle tree: first, the authentication path is used to confirm inclusion of some old leaf, and then to compute the tree's new root from the new leaf.

Arithmetization

An arithmetization defines two things:

  1. algebraic execution tables (AETs), and
  2. arithmetic intermediate representation (AIR) constraints.

For Triton VM, the execution trace is spread over multiple tables. These tables are linked by various cryptographic arguments. This division allows for a separation of concerns. For example, the main processor's trace is recorded in the Processor Table. The main processor can delegate the execution of somewhat-difficult-to-arithmetize instructions like hash or xor to a co-processor. The arithmetization of the co-processor is largely independent from the main processor and recorded in its separate trace. For example, instructions relating to hashing are executed by the hash co-processor. Its trace is recorded in the Hash Table. Similarly, bitwise instructions are executed by the u32 co-processor, and the corresponding trace recorded in the U32 Table. Another example for the separation of concerns is memory consistency, the bulk of which is delegated to the Operational Stack Table, RAM Table, and Jump Stack Table.

Algebraic Execution Tables

There are 9 Algebraic Execution Tables in Triton VM. Their relation is described by the figure below. A blue arrow indicates a Permutation Argument, a red arrow indicates an Evaluation Argument, a purple arrow indicates a Lookup Argument, and the green arrow is the Contiguity Argument.

The grayed-out elements “program digest”, “input”, and “output” are not AETs but publicly available information. Together, they constitute the claim for which Triton VM produces a proof. See “Arguments Using Public Information” and “Program Attestation” for the Evaluation Arguments with which they are linked into the rest of the proof system.

Main Tables

The values of all registers, and consequently the elements on the stack, in memory, and so on, are elements of the B-field, i.e., F_p where p = 2⁶⁴ − 2³² + 1 is the Oxfoi prime. All values of columns corresponding to one such register are elements of the B-field as well. Together, these columns are referred to as a table's main columns, and make up the main table.

Auxiliary Tables

The entries of a table's columns corresponding to Permutation, Evaluation, and Lookup Arguments are elements from the X-field F_{p³}. These columns are referred to as a table's auxiliary columns, both because the entries are elements of the X-field and because they can only be computed using the main tables, through an auxiliary process. Together, they make up the auxiliary table.

Padding

For reasons of computational efficiency, it is beneficial that an Algebraic Execution Table's height equals a power of 2. To this end, tables are padded. The height h of the longest AET determines the padded height for all tables, which is 2^⌈log₂ h⌉.

Arithmetic Intermediate Representation

For each table, up to four lists containing constraints of different type are given:

  1. Initial Constraints, defining values in a table's first row,
  2. Consistency Constraints, establishing consistency within any given row,
  3. Transition Constraints, establishing the consistency of two consecutive rows in relation to each other, and
  4. Terminal Constraints, defining values in a table's last row.

Together, all these constraints constitute the AIR constraints.

Arguments Using Public Information

Triton VM uses a number of public arguments. That is, one side of the link can be computed by the verifier using publicly available information; the other side of the link is verified through one or more AIR constraints.

The two most prominent instances of this are public input and output: both are given to the verifier explicitly. The verifier can compute the terminal of an Evaluation Argument using public input (respectively, output). In the Processor Table, AIR constraints guarantee that the used input (respectively, output) symbols are accumulated similarly. Comparing the two terminal values establishes that use of public input (respectively, production of public output) was integral.
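The terminal of such an Evaluation Argument can be sketched as a running evaluation in Horner style. This is an illustrative model: the actual argument accumulates over the X-field with a verifier-supplied challenge, whereas the sketch works over the B-field with a plain integer challenge.

```python
P = 0xffffffff00000001  # the Oxfoi prime

def evaluation_terminal(symbols, challenge):
    """Running evaluation: start at 1, absorb one symbol per step."""
    acc = 1
    for symbol in symbols:
        # acc <- acc * challenge + symbol, over the field
        acc = (acc * challenge + symbol) % P
    return acc
```

The verifier computes this terminal from the public input (respectively, output) and compares it against the value the Processor Table's AIR constraints accumulated.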

The third public argument establishes correctness of the Lookup Table, and is explained there. The fourth public argument relates to program attestation, and is also explained on its corresponding page.

Arithmetization Overview

Tables

| table name | #main cols | #aux cols | total width |
|---|---|---|---|
| ProgramTable | 7 | 3 | 16 |
| ProcessorTable | 39 | 11 | 72 |
| OpStackTable | 4 | 2 | 10 |
| RamTable | 7 | 6 | 25 |
| JumpStackTable | 5 | 2 | 11 |
| HashTable | 67 | 20 | 127 |
| CascadeTable | 6 | 2 | 12 |
| LookupTable | 4 | 2 | 10 |
| U32Table | 10 | 1 | 13 |
| DegreeLowering (-/8/4) | 0/118/230 | 0/14/38 | 0/160/344 |
| Randomizers | 0 | 1 | 3 |
| TOTAL | 149/267/379 | 50/64/88 | 299/459/643 |

Constraints

The following table captures the state of affairs in terms of constraints before automatic degree lowering. In particular, automatic degree lowering introduces new columns, modifies the constraint set (in a way that is equivalent to what was there before), and lowers the constraints' maximal degree.

Before automatic degree lowering:

| table name | #initial | #consistency | #transition | #terminal | max degree |
|---|---|---|---|---|---|
| ProgramTable | 6 | 4 | 10 | 2 | 4 |
| ProcessorTable | 29 | 10 | 41 | 1 | 19 |
| OpStackTable | 3 | 0 | 5 | 0 | 4 |
| RamTable | 7 | 0 | 12 | 1 | 5 |
| JumpStackTable | 6 | 0 | 6 | 0 | 5 |
| HashTable | 22 | 45 | 47 | 2 | 9 |
| CascadeTable | 2 | 1 | 3 | 0 | 4 |
| LookupTable | 3 | 1 | 4 | 1 | 3 |
| U32Table | 1 | 15 | 22 | 2 | 12 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 | 1 |
| TOTAL | 79 | 76 | 150 | 23 | 19 |
| (# nodes) | (534) | (624) | (6679) | (213) | |

After lowering degree to 8:

| table name | #initial | #consistency | #transition | #terminal |
|---|---|---|---|---|
| ProgramTable | 6 | 4 | 10 | 2 |
| ProcessorTable | 29 | 10 | 165 | 1 |
| OpStackTable | 3 | 0 | 5 | 0 |
| RamTable | 7 | 0 | 12 | 1 |
| JumpStackTable | 6 | 0 | 6 | 0 |
| HashTable | 22 | 46 | 49 | 2 |
| CascadeTable | 2 | 1 | 3 | 0 |
| LookupTable | 3 | 1 | 4 | 1 |
| U32Table | 1 | 18 | 24 | 2 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 |
| TOTAL | 79 | 80 | 278 | 23 |
| (# nodes) | (534) | (635) | (6956) | (213) |

After lowering degree to 4:

| table name | #initial | #consistency | #transition | #terminal |
|---|---|---|---|---|
| ProgramTable | 6 | 4 | 10 | 2 |
| ProcessorTable | 31 | 10 | 238 | 1 |
| OpStackTable | 3 | 0 | 5 | 0 |
| RamTable | 7 | 0 | 13 | 1 |
| JumpStackTable | 6 | 0 | 7 | 0 |
| HashTable | 22 | 52 | 84 | 2 |
| CascadeTable | 2 | 1 | 3 | 0 |
| LookupTable | 3 | 1 | 4 | 1 |
| U32Table | 1 | 26 | 34 | 2 |
| Grand Cross-Table Argument | 0 | 0 | 0 | 14 |
| TOTAL | 81 | 94 | 398 | 23 |
| (# nodes) | (538) | (676) | (7246) | (213) |

Triton Assembly Constraint Evaluation

Triton VM's recursive verifier needs to evaluate Triton VM's AIR constraints. In order to gauge the runtime cost for this step, the following table provides estimates for that step's contribution to various tables.

TypeProcessorOp StackRAM
static377597022724996
dynamic494567802928895

Opcode Pressure

When changing existing or introducing new instructions, one consideration is: how many other instructions compete for opcodes in the same instruction category? The table below helps answer this question at a glance.

| IsU32 | ShrinksStack | HasArg | Num Opcodes |
|---|---|---|---|
| n | n | n | 12 |
| n | n | y | 10 |
| n | y | n | 11 |
| n | y | y | 3 |
| y | n | n | 6 |
| y | n | y | 0 |
| y | y | n | 4 |
| y | y | y | 0 |

Maximum number of opcodes per row is 16.

Program Table

The Program Table contains the entire program as a read-only list of instruction opcodes and their arguments. The processor looks up instructions and arguments using its instruction pointer ip.

For program attestation, the program is padded and sent to the Hash Table in chunks of size 10, which is the rate of the Tip5 hash function. Program padding is one 1 followed by the minimal number of 0's necessary to make the padded input length a multiple of the rate.
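The padding and chunking rule can be sketched in a few lines. Function names are illustrative; instructions are modeled as plain integers.

```python
RATE = 10  # the rate of the Tip5 hash function

def pad_program(instructions):
    """Append one 1, then the minimal number of 0's to reach a multiple of RATE."""
    padded = list(instructions) + [1]
    while len(padded) % RATE != 0:
        padded.append(0)
    return padded

def chunks(padded):
    """Split the padded program into the rate-sized chunks sent to the Hash Table."""
    return [padded[i:i + RATE] for i in range(0, len(padded), RATE)]
```

A 13-word program is padded to 20 words (one 1, six 0's) and yields two chunks; a 9-word program needs only the single 1 to reach length 10.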

Main Columns

The Program Table consists of 7 main columns. Those columns marked with an asterisk (*) are only used for program attestation.

| Column | Description |
|---|---|
| Address | an instruction's address |
| Instruction | the (opcode of the) instruction |
| LookupMultiplicity | how often an instruction has been executed |
| *IndexInChunk | Address modulo the Tip5 rate, which is 10 |
| *MaxMinusIndexInChunkInv | the inverse-or-zero of rate − 1 − IndexInChunk |
| *IsHashInputPadding | padding indicator for absorbing the program into the Sponge |
| IsTablePadding | padding indicator for rows only required due to the dominating length of some other table |

Auxiliary Columns

A Lookup Argument with the Processor Table establishes that the processor has loaded the correct instruction (and its argument) from program memory. To establish the program memory's side of the Lookup Argument, the Program Table has auxiliary column InstructionLookupServerLogDerivative.

For sending the padded program to the Hash Table, a combination of two Evaluation Arguments is used. The first, PrepareChunkRunningEvaluation, absorbs one chunk of rate (i.e., 10) instructions at a time, after which it is reset and starts absorbing again. The second, SendChunkRunningEvaluation, absorbs one such prepared chunk every 10 instructions.

Padding

A padding row is a copy of the Program Table's last row with the following modifications:

  1. column Address is increased by 1,
  2. column Instruction is set to 0,
  3. column LookupMultiplicity is set to 0,
  4. column IndexInChunk is set to Address mod rate,
  5. column MaxMinusIndexInChunkInv is set to the inverse-or-zero of rate − 1 − IndexInChunk,
  6. column IsHashInputPadding is set to 1, and
  7. column IsTablePadding is set to 1.

The above procedure is iterated until the necessary number of rows has been added.

A Note on the Instruction Lookup Argument

For almost all table-linking arguments, the initial row contains the argument's initial value after having applied the first update. For example, the initial row for a Lookup Argument usually contains a term m/(🪥 − v) for some multiplicity m and looked-up value v. As an exception, the Program Table's instruction Lookup Argument always records 0 in the initial row.

Recall that the Lookup Argument is not just about the instruction, but also about the instruction's argument if it has one, or the next instruction if it has none. In the Program Table, this argument (or the next instruction) is recorded in the next row from the instruction in question. Therefore, verifying correct application of the logarithmic derivative's update rule requires access to both the current and the next row.

Out of all constraint types, only Transition Constraints have access to more than one row at a time. This implies that the correct application of the first update of the instruction Lookup Argument cannot be verified by an initial constraint. Therefore, the recorded initial value must be independent of the second row.

Consequently, the final value for the Lookup Argument is recorded in the first row just after the program description ends. This row is guaranteed to exist because of the mechanics for program attestation: the program has to be padded with at least one 1 before it is hashed.
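The logarithmic derivative underlying such a Lookup Argument can be sketched as follows. This is a simplification: the real argument compresses each row's address, instruction, and next instruction with the challenges 🥝, 🥥, and 🫐 and works over the X-field, whereas the sketch uses one pre-compressed value per row over the B-field.

```python
P = 0xffffffff00000001  # the Oxfoi prime

def lookup_terminal(rows, multiplicities, indeterminate):
    """Sum of m / (X - row) evaluated at the indeterminate, over the field."""
    acc = 0
    for row, m in zip(rows, multiplicities):
        inverse = pow((indeterminate - row) % P, P - 2, P)  # field inverse
        acc = (acc + m * inverse) % P
    return acc
```

The looked-up side accumulates the same sum term by term with its own multiplicities; equality of the two terminals establishes the Lookup Argument.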

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of .

Initial Constraints

  1. The Address is 0.
  2. The IndexInChunk is 0.
  3. The indicator IsHashInputPadding is 0.
  4. The InstructionLookupServerLogDerivative is 0.
  5. PrepareChunkRunningEvaluation has absorbed Instruction with respect to challenge 🪑.
  6. SendChunkRunningEvaluation is 1.

Initial Constraints as Polynomials

  1. Address
  2. IndexInChunk
  3. IsHashInputPadding
  4. InstructionLookupServerLogDerivative
  5. PrepareChunkRunningEvaluation - 🪑 - Instruction
  6. SendChunkRunningEvaluation - 1

Consistency Constraints

  1. The MaxMinusIndexInChunkInv is zero or the inverse of rate - 1 - IndexInChunk.
  2. The IndexInChunk is rate - 1 or the MaxMinusIndexInChunkInv is the inverse of rate - 1 - IndexInChunk.
  3. Indicator IsHashInputPadding is either 0 or 1.
  4. Indicator IsTablePadding is either 0 or 1.

Consistency Constraints as Polynomials

  1. (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · MaxMinusIndexInChunkInv
  2. (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · (rate - 1 - IndexInChunk)
  3. IsHashInputPadding · (IsHashInputPadding - 1)
  4. IsTablePadding · (IsTablePadding - 1)
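The first two constraints implement a standard conditional-inverse gadget: MaxMinusIndexInChunkInv must be the inverse of rate - 1 - IndexInChunk whenever that difference is nonzero, and zero otherwise. A minimal Python sanity check (a sketch; it assumes rate = 10, the Tip5 sponge rate):

```python
# Sketch: the conditional-inverse gadget behind consistency constraints 1 and 2.
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime
RATE = 10  # assumption: Tip5's sponge rate

def constraints(index_in_chunk, max_minus_index_inv):
    diff = (RATE - 1 - index_in_chunk) % P
    c1 = (1 - max_minus_index_inv * diff) * max_minus_index_inv % P
    c2 = (1 - max_minus_index_inv * diff) * diff % P
    return c1, c2

def witness(index_in_chunk):
    # honest assignment: the inverse when IndexInChunk != rate - 1, else zero
    diff = (RATE - 1 - index_in_chunk) % P
    return 0 if diff == 0 else pow(diff, P - 2, P)
```

Every honest assignment makes both polynomials vanish, while a dishonest inverse (e.g. 0 when the difference is nonzero) violates the second constraint.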

Transition Constraints

  1. The Address increases by 1.
  2. If the IndexInChunk is not rate - 1, it increases by 1. Else, the IndexInChunk in the next row is 0.
  3. The indicator IsHashInputPadding is 0 or remains unchanged.
  4. The padding indicator IsTablePadding is 0 or remains unchanged.
  5. If IsHashInputPadding is 0 in the current row and 1 in the next row, then Instruction in the next row is 1.
  6. If IsHashInputPadding is 1 in the current row then Instruction in the next row is 0.
  7. If IsHashInputPadding is 1 in the current row and IndexInChunk is rate - 1 in the current row then IsTablePadding is 1 in the next row.
  8. If the current row is not a padding row, the logarithmic derivative accumulates the current row's address, the current row's instruction, and the next row's instruction with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥 respectively. Otherwise, it remains unchanged.
  9. If the IndexInChunk in the current row is not rate - 1, then PrepareChunkRunningEvaluation absorbs the Instruction in the next row with respect to challenge 🪑. Otherwise, PrepareChunkRunningEvaluation resets and absorbs the Instruction in the next row with respect to challenge 🪑.
  10. If the next row is not a padding row and the IndexInChunk in the next row is rate - 1, then SendChunkRunningEvaluation absorbs PrepareChunkRunningEvaluation in the next row with respect to indeterminate 🪣. Otherwise, it remains unchanged.

Transition Constraints as Polynomials

  1. Address' - Address - 1
  2. MaxMinusIndexInChunkInv · (IndexInChunk' - IndexInChunk - 1)
    + (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · IndexInChunk'
  3. IsHashInputPadding · (IsHashInputPadding' - IsHashInputPadding)
  4. IsTablePadding · (IsTablePadding' - IsTablePadding)
  5. (IsHashInputPadding - 1) · IsHashInputPadding' · (Instruction' - 1)
  6. IsHashInputPadding · Instruction'
  7. IsHashInputPadding · (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · IsTablePadding'
  8. (1 - IsHashInputPadding) · ((InstructionLookupServerLogDerivative' - InstructionLookupServerLogDerivative) · (🪥 - 🥝·Address - 🥥·Instruction - 🫐·Instruction') - LookupMultiplicity)
    + IsHashInputPadding · (InstructionLookupServerLogDerivative' - InstructionLookupServerLogDerivative)
  9. (rate - 1 - IndexInChunk) · (PrepareChunkRunningEvaluation' - 🪑·PrepareChunkRunningEvaluation - Instruction')
    + (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · (PrepareChunkRunningEvaluation' - 🪑 - Instruction')
  10. (IsTablePadding' - 1) · (1 - MaxMinusIndexInChunkInv' · (rate - 1 - IndexInChunk')) · (SendChunkRunningEvaluation' - 🪣·SendChunkRunningEvaluation - PrepareChunkRunningEvaluation')
    + (SendChunkRunningEvaluation' - SendChunkRunningEvaluation) · IsTablePadding'
    + (SendChunkRunningEvaluation' - SendChunkRunningEvaluation) · (rate - 1 - IndexInChunk')
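To illustrate transition constraints 1 and 2, the following sketch generates the Address, IndexInChunk, and MaxMinusIndexInChunkInv columns for a short table and checks that both constraint polynomials vanish on every consecutive row pair (a sketch; rate = 10 is assumed, matching Tip5):

```python
# Sketch: Address counts up while IndexInChunk cycles through 0..rate-1.
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime
RATE = 10  # assumption: Tip5's sponge rate

def columns(num_rows):
    rows = []
    for addr in range(num_rows):
        idx = addr % RATE
        diff = (RATE - 1 - idx) % P
        inv = 0 if diff == 0 else pow(diff, P - 2, P)
        rows.append((addr, idx, inv))
    return rows

def transition_1(cur, nxt):
    # Address' - Address - 1
    return (nxt[0] - cur[0] - 1) % P

def transition_2(cur, nxt):
    # MaxMinusIndexInChunkInv · (IndexInChunk' - IndexInChunk - 1)
    #   + (1 - MaxMinusIndexInChunkInv · (rate - 1 - IndexInChunk)) · IndexInChunk'
    _, idx, inv = cur
    _, idx_next, _ = nxt
    diff = (RATE - 1 - idx) % P
    return (inv * (idx_next - idx - 1) + (1 - inv * diff) * idx_next) % P
```

The two branches of constraint 2 are mutually exclusive: the first is active while IndexInChunk has not yet reached rate - 1, the second forces the wrap-around to 0.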

Terminal Constraints

  1. The indicator IsHashInputPadding is 1.
  2. The IndexInChunk is rate - 1 or the indicator IsTablePadding is 1.

Terminal Constraints as Polynomials

  1. IsHashInputPadding - 1
  2. (rate - 1 - IndexInChunk) · (IsTablePadding - 1)

1

See also section 2.5 “Fixed-Length versus Variable-Length” in the Tip5 paper.

Processor Table

The Processor Table records all of Triton VM's states during execution of a particular program. The states are recorded in chronological order. The first row is the initial state, the last (non-padding) row is the terminal state, i.e., the state after having executed instruction halt. It is impossible to generate a valid proof if the instruction executed last is anything but halt.

It is worth highlighting the initialization of the operational stack. Stack elements st0 through st10 are initially 0. However, stack elements st11 through st15, i.e., the very bottom of the stack, are initialized with the hash digest of the program that is being executed. This is primarily useful for recursive verifiers: they can compare their own program digest to the program digest of the proof they are verifying. This way, a recursive verifier can easily determine if they are actually recursing, or whether the proof they are checking was generated using an entirely different program. A more detailed explanation of the mechanics can be found on the page about program attestation.

Main Columns

The processor consists of all registers defined in the Instruction Set Architecture. Each register is assigned a column in the processor table.

Auxiliary Columns

The Processor Table has the following auxiliary columns, corresponding to Evaluation Arguments, Permutation Arguments, and Lookup Arguments:

  1. RunningEvaluationStandardInput for the Evaluation Argument with the input symbols.
  2. RunningEvaluationStandardOutput for the Evaluation Argument with the output symbols.
  3. InstructionLookupClientLogDerivative for the Lookup Argument with the Program Table.
  4. RunningProductOpStackTable for the Permutation Argument with the Op Stack Table.
  5. RunningProductRamTable for the Permutation Argument with the RAM Table. Note that virtual column instruction_type holds value 1 for reads and 0 for writes.
  6. RunningProductJumpStackTable for the Permutation Argument with the Jump Stack Table.
  7. RunningEvaluationHashInput for the Evaluation Argument with the Hash Table for copying the input to the hash function from the processor to the hash coprocessor.
  8. RunningEvaluationHashDigest for the Evaluation Argument with the Hash Table for copying the hash digest from the hash coprocessor to the processor.
  9. RunningEvaluationSponge for the Evaluation Argument with the Hash Table for copying the 10 next to-be-absorbed elements from the processor to the hash coprocessor or the 10 next squeezed elements from the hash coprocessor to the processor, depending on the instruction.
  10. U32LookupClientLogDerivative for the Lookup Argument with the U32 Table.
  11. ClockJumpDifferenceLookupServerLogDerivative for the Lookup Argument of clock jump differences with the Op Stack Table, the RAM Table, and the Jump Stack Table.

Permutation Argument with the Op Stack Table

The subset Permutation Argument with the Op Stack Table RunningProductOpStackTable establishes consistency of the op stack underflow memory. The number of factors incorporated into the running product at any given cycle depends on the executed instruction in this cycle: for every element pushed to or popped from the stack, there is one factor. Namely, if the op stack grows, every element spilling from st15 into op stack underflow memory will be incorporated as one factor; and if the op stack shrinks, every element from op stack underflow memory being transferred into st15 will be one factor.

Notably, if an instruction shrinks the op stack by more than one element in a single clock cycle, each spilled element is incorporated as one factor. The same holds true for instructions growing the op stack by more than one element in a single clock cycle.

One key insight for this Permutation Argument is that the processor will always have access to the elements that are to be read from or written to underflow memory: if the instruction grows the op stack, then the elements in question currently reside in the directly accessible, top part of the stack; if the instruction shrinks the op stack, then the elements in question will be in the top part of the stack in the next cycle. In either case, the Transition Constraint for the Permutation Argument can incorporate the explicitly listed elements as well as the corresponding trivial-to-compute op_stack_pointer.
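The factor-per-element update can be sketched as follows (a hypothetical helper, not Triton VM's code; the weights and the indeterminate stand in for the challenges 🍋, 🍊, 🍉, 🫒 and 🪤):

```python
# Sketch: Permutation Argument update for op stack underflow memory.
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def absorb_underflow_rows(running_product, rows, indeterminate, weights):
    """rows: (clk, ib1, op_stack_pointer, element) tuples, one per element
    moved between the directly accessible stack and underflow memory
    during this clock cycle. One factor is incorporated per element."""
    w_clk, w_ib1, w_ptr, w_elt = weights
    for clk, ib1, ptr, elt in rows:
        compressed = (w_clk * clk + w_ib1 * ib1 + w_ptr * ptr + w_elt * elt) % P
        running_product = running_product * ((indeterminate - compressed) % P) % P
    return running_product
```

An instruction shrinking the stack by three elements in one cycle would pass three tuples, one per transferred element, exactly as the text above describes.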

Padding

A padding row is a copy of the Processor Table's last row with the following modifications:

  1. column clk is increased by 1,
  2. column IsPadding is set to 1,
  3. column cjd_mul is set to 0.

A notable exception: if the row with clk equal to 1 is a padding row, then the value of cjd_mul is not constrained in that row. The reason for this exception is the lack of “awareness” of padding rows in the Jump Stack Table: it keeps looking up clock jump differences in its padding section. All these clock jumps are guaranteed to have magnitude 1.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of .

Note that the transition constraints' use of some_column vs some_column_next might be a little unintuitive. For example, take the following part of some execution trace.

| Clock Cycle | Current Instruction | st0  | … | st15 | Running Evaluation “To Hash Table” | Running Evaluation “From Hash Table” |
|:-----------:|:-------------------:|:----:|:-:|:----:|:-----------------------------------|:-------------------------------------|
| i           | foo                 | 17   | … | 22   | a                                  | b                                    |
| i + 1       | hash                | 17   | … | 22   | 🚪·a + Σⱼ 🧄ⱼ·stⱼ                  | b                                    |
| i + 2       | bar                 | 1337 | … | 22   | 🚪·a + Σⱼ 🧄ⱼ·stⱼ                  | 🪟·b + Σⱼ 🧄ⱼ·stⱼ                    |

In order to verify the correctness of RunningEvaluationHashInput, the corresponding transition constraint needs to conditionally “activate” on row-tuple (i, i + 1), where it is conditional on ci_next (not ci), and verifies absorption of the next row, i.e., row i + 1. However, in order to verify the correctness of RunningEvaluationHashDigest, the corresponding transition constraint needs to conditionally “activate” on row-tuple (i + 1, i + 2), where it is conditional on ci (not ci_next), and verifies absorption of the next row, i.e., row i + 2.
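For concreteness, a single Evaluation Argument update of the kind used for “Hash Input” and “Hash Digest” can be sketched as (an illustrative model; challenge values are placeholders):

```python
# Sketch: one Evaluation Argument update, re' = 🚪·re + Σⱼ 🧄ⱼ·stⱼ,
# where the indeterminate plays the role of 🚪 (or 🪟) and the
# challenges play the role of 🧄₀ through 🧄₉ (or 🧄₀ through 🧄₄).
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def absorb_row(running_eval, indeterminate, challenges, row_elements):
    compressed = sum(c * e for c, e in zip(challenges, row_elements)) % P
    return (indeterminate * running_eval + compressed) % P
```

Whether the update fires on row pair (i, i + 1) or (i + 1, i + 2) is governed solely by the condition (ci_next vs ci); the update formula itself is identical.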

Initial Constraints

  1. The cycle counter clk is 0.
  2. The instruction pointer ip is 0.
  3. The jump address stack pointer jsp is 0.
  4. The jump address origin jso is 0.
  5. The jump address destination jsd is 0.
  6. The operational stack element st0 is 0.
  7. The operational stack element st1 is 0.
  8. The operational stack element st2 is 0.
  9. The operational stack element st3 is 0.
  10. The operational stack element st4 is 0.
  11. The operational stack element st5 is 0.
  12. The operational stack element st6 is 0.
  13. The operational stack element st7 is 0.
  14. The operational stack element st8 is 0.
  15. The operational stack element st9 is 0.
  16. The operational stack element st10 is 0.
  17. The Evaluation Argument of operational stack elements st11 through st15 with respect to indeterminate 🥬 equals the public part of program digest challenge, 🫑. See program attestation for more details.
  18. The op_stack_pointer is 16.
  19. RunningEvaluationStandardInput is 1.
  20. RunningEvaluationStandardOutput is 1.
  21. InstructionLookupClientLogDerivative has absorbed the first row with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥.
  22. RunningProductOpStackTable is 1.
  23. RunningProductRamTable has absorbed the first row with respect to challenges 🍍, 🍈, 🍎, and 🌽 and indeterminate 🛋.
  24. RunningProductJumpStackTable has absorbed the first row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
  25. RunningEvaluationHashInput has absorbed the first row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪 if the current instruction is hash. Otherwise, it is 1.
  26. RunningEvaluationHashDigest is 1.
  27. RunningEvaluationSponge is 1.
  28. U32LookupClientLogDerivative is 0.
  29. ClockJumpDifferenceLookupServerLogDerivative starts having accumulated the first contribution.

Consistency Constraints

  1. The composition of instruction bits ib0 through ib6 corresponds to the current instruction ci.
  2. The instruction bit ib0 is a bit.
  3. The instruction bit ib1 is a bit.
  4. The instruction bit ib2 is a bit.
  5. The instruction bit ib3 is a bit.
  6. The instruction bit ib4 is a bit.
  7. The instruction bit ib5 is a bit.
  8. The instruction bit ib6 is a bit.
  9. The padding indicator IsPadding is either 0 or 1.
  10. If the current row is a padding row and clk is not 1, then the clock jump difference lookup multiplicity is 0.

Transition Constraints

Due to their complexity, instruction-specific constraints are defined in their own section. The following additional constraints also apply to every pair of rows.

  1. The cycle counter clk increases by 1.
  2. The padding indicator IsPadding is 0 or remains unchanged.
  3. If the next row is not a padding row, the logarithmic derivative for the Program Table absorbs the next row with respect to challenges 🥝, 🥥, and 🫐 and indeterminate 🪥. Otherwise, it remains unchanged.
  4. The running sum for the logarithmic derivative of the clock jump difference lookup argument accumulates the next row's clk with the appropriate multiplicity cjd_mul with respect to indeterminate 🪞.
  5. The running product for the Jump Stack Table absorbs the next row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
  6. The running evaluation “Hash Input” updates as follows:
    1. If the current instruction in the next row is hash, the running evaluation “Hash Input” absorbs the next row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪.
    2. If the current instruction in the next row is merkle_step or merkle_step_mem and helper variable hv5
      1. …is 0, the running evaluation “Hash Input” absorbs next row's st0 through st4 and hv0 through hv4
      2. …is 1, the running evaluation “Hash Input” absorbs next row's hv0 through hv4 and st0 through st4
        …with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪.
    3. Otherwise, it remains unchanged.
  7. If the current instruction is hash, merkle_step, or merkle_step_mem, the running evaluation “Hash Digest” absorbs the next row with respect to challenges 🧄₀ through 🧄₄ and indeterminate 🪟. Otherwise, it remains unchanged.
  8. If the current instruction is sponge_init, then the running evaluation “Sponge” absorbs the current instruction and the Sponge's default initial state with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Else if the current instruction is sponge_absorb, then the running evaluation “Sponge” absorbs the current instruction and the current row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Else if the current instruction is sponge_squeeze, then the running evaluation “Sponge” absorbs the current instruction and the next row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Else if the current instruction is sponge_absorb_mem, then the running evaluation “Sponge” absorbs the opcode of instruction sponge_absorb and stack elements st1 through st4 and all 6 helper variables with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Otherwise, the running evaluation remains unchanged.
  9. The logarithmic derivative for the Lookup Argument with the U32 Table updates depending on the current instruction:
    1. If the current instruction is split, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates st0 and st1 in the next row and ci in the current row with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
    2. If the current instruction is lt, and, xor, or pow, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates st0, st1, and ci in the current row and st0 in the next row with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.
    3. If the current instruction is log_2_floor, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates st0 and ci in the current row and st0 in the next row with respect to challenges 🥜, 🥑, and 🥕, and indeterminate 🧷.
    4. If the current instruction is div_mod, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates both
      1. st0 in the next row and st1 in the current row as well as the constants opcode(lt) and 1 with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.
      2. st0 in the current row and st1 in the next row as well as opcode(split) with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
    5. If the current instruction is merkle_step or merkle_step_mem, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates st5 from the current and next rows as well as opcode(split) with respect to challenges 🥜, 🌰, and 🥑, and indeterminate 🧷.
    6. If the current instruction is pop_count, then the logarithmic derivative for the Lookup Argument with the U32 Table accumulates st0 and ci in the current row and st0 in the next row with respect to challenges 🥜, 🥑, and 🥕, and indeterminate 🧷.
    7. Else, i.e., if the current instruction is not a u32 instruction, the logarithmic derivative for the Lookup Argument with the U32 Table remains unchanged.

Terminal Constraints

  1. In the last row, register “current instruction” ci is 0, corresponding to instruction halt.

Instruction Groups

Some transition constraints are shared across some, or even many instructions. For example, most instructions must not change the jump stack. Likewise, most instructions must not change RAM. To simplify presentation of the instruction-specific transition constraints, these common constraints are grouped together and aliased. Continuing the above example, instruction group keep_jump_stack contains all transition constraints to ensure that the jump stack remains unchanged.

The next section treats each instruction group in detail. The following table lists and briefly explains all instruction groups.

| group name | description |
|:-----------|:------------|
| decompose_arg | instruction's argument held in nia is binary decomposed into helper registers hv0 through hv3 |
| prohibit_illegal_num_words | constrain the instruction's argument n to 1 ⩽ n ⩽ 5 |
| no_io | the running evaluations for public input & output remain unchanged |
| no_ram | RAM is not being accessed, i.e., the running product of the Permutation Argument with the RAM Table remains unchanged |
| keep_jump_stack | jump stack does not change, i.e., registers jsp, jso, and jsd do not change |
| step_1 | jump stack does not change and instruction pointer ip increases by 1 |
| step_2 | jump stack does not change and instruction pointer ip increases by 2 |
| grow_op_stack | op stack elements are shifted down by one position, top element of the resulting stack is unconstrained |
| grow_op_stack_by_any_of | op stack elements are shifted down by n positions, top n elements of the resulting stack are unconstrained, where n is the instruction's argument |
| keep_op_stack_height | the op stack height remains unchanged, and so in particular the running product of the Permutation Argument with the Op Stack Table remains unchanged |
| op_stack_remains_except_top_n | all but the top n elements of the op stack remain unchanged |
| keep_op_stack | op stack remains entirely unchanged |
| binary_operation | op stack elements starting from st2 are shifted up by one position, highest two elements of the resulting stack are unconstrained |
| shrink_op_stack | op stack elements starting from st1 are shifted up by one position |
| shrink_op_stack_by_any_of | op stack elements starting from stn are shifted up by one position, where n is the instruction's argument |

The following instruction groups conceptually fit the category of 'instruction groups' but are not used or enforced through AIR constraints.

| group name | description |
|:-----------|:------------|
| no_hash | The hash coprocessor is not accessed. The constraints are implied by the Evaluation Argument with the Hash Table, which takes the current instruction into account. |

A summary of all instructions and which groups they are part of is given in the following table.

instructiondecompose_argprohibit_illegal_num_wordsno_iono_ramkeep_jump_stackstep_1step_2grow_op_stackgrow_op_stack_by_any_ofkeep_op_stack_heightop_stack_remains_except_top_nkeep_op_stackbinary_operationshrink_op_stackshrink_op_stack_by_any_of
pop + nxxxxxx
push + axxxx
divine + nxxxxxx
pick + ixxxxx
place + ixxxxx
dup + ixxxxx
swap + ixxxxx
nopxxxx0x
skizxxxx
call + dxxx0x
returnxxx0x
recursexxxx0x
recurse_or_returnxxx0x
assertxxxx
haltxxxx0x
read_mem + nxxxx
write_mem + nxxxx
hashxxx
assert_vectorxxx
sponge_initxxx
sponge_absorbxxx
sponge_absorb_memxx5
sponge_squeezexxx
addxxx
addi + axxxx1
mulxxxx
invertxxxx1
eqxxxx
splitxxx
ltxxxx
andxxxx
xorxxxx
log_2_floorxxxx1
powxxxx
div_modxxxx2
pop_countxxxx1
xx_addxxx
xx_mulxxx
x_invertxxxx3
xb_mulxxx
read_io + nxxxxx
write_io + nxxxxx
merkle_stepxxx6
merkle_step_memxx8
xx_dot_stepxx5
xb_dot_stepxx5x

Indicator Polynomials ind_i(hv3, hv2, hv1, hv0)

In this and the following sections, a register marked with a ' refers to the next state of that register. For example, st0' = st0 + 2 means that stack register st0 is incremented by 2. An alternative view for the same concept is that registers marked with ' are those of the next row in the table.

For instructions like dup i, swap i, pop n, et cetera, it is beneficial to have polynomials that evaluate to 1 if the instruction's argument is a specific value, and to 0 otherwise. This allows indicating which registers are constrained, and in which way, depending on the argument. This is the purpose of the indicator polynomials ind_i. Evaluated on the binary decomposition of i, they show the behavior described above.

For example, take i = 13. The corresponding binary decomposition is (hv3, hv2, hv1, hv0) = (1, 1, 0, 1). Indicator polynomial ind_13(hv3, hv2, hv1, hv0) is defined as hv3·hv2·(1 - hv1)·hv0. It evaluates to 1 on (1, 1, 0, 1), i.e., ind_13(1, 1, 0, 1) = 1. Any other indicator polynomial, like ind_7, evaluates to 0 on (1, 1, 0, 1). Likewise, the indicator polynomial for 13 evaluates to 0 for any other argument.

The list of all 16 indicator polynomials is:

  1. ind_0(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·(1 - hv1)·(1 - hv0)
  2. ind_1(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·(1 - hv1)·hv0
  3. ind_2(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·hv1·(1 - hv0)
  4. ind_3(hv3, hv2, hv1, hv0) = (1 - hv3)·(1 - hv2)·hv1·hv0
  5. ind_4(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·(1 - hv1)·(1 - hv0)
  6. ind_5(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·(1 - hv1)·hv0
  7. ind_6(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·hv1·(1 - hv0)
  8. ind_7(hv3, hv2, hv1, hv0) = (1 - hv3)·hv2·hv1·hv0
  9. ind_8(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·(1 - hv1)·(1 - hv0)
  10. ind_9(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·(1 - hv1)·hv0
  11. ind_10(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·hv1·(1 - hv0)
  12. ind_11(hv3, hv2, hv1, hv0) = hv3·(1 - hv2)·hv1·hv0
  13. ind_12(hv3, hv2, hv1, hv0) = hv3·hv2·(1 - hv1)·(1 - hv0)
  14. ind_13(hv3, hv2, hv1, hv0) = hv3·hv2·(1 - hv1)·hv0
  15. ind_14(hv3, hv2, hv1, hv0) = hv3·hv2·hv1·(1 - hv0)
  16. ind_15(hv3, hv2, hv1, hv0) = hv3·hv2·hv1·hv0
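All 16 definitions follow one pattern, which the following sketch captures (illustrative only): multiply hv_k if bit k of i is set, and (1 - hv_k) otherwise.

```python
def ind(i, hv3, hv2, hv1, hv0):
    """Indicator polynomial ind_i evaluated at (hv3, hv2, hv1, hv0)."""
    bits = [(i >> k) & 1 for k in range(4)]  # (bit 0, bit 1, bit 2, bit 3)
    result = 1
    for hv, bit in zip((hv0, hv1, hv2, hv3), bits):
        result *= hv if bit else (1 - hv)
    return result
```

On any binary decomposition, exactly one of the 16 indicator polynomials evaluates to 1 and the rest evaluate to 0, matching the example with i = 13 above.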

Group decompose_arg

Description

  1. The helper variables hv0 through hv3 are the binary decomposition of the instruction's argument, which is held in register nia.
  2. The helper variable hv0 is either 0 or 1.
  3. The helper variable hv1 is either 0 or 1.
  4. The helper variable hv2 is either 0 or 1.
  5. The helper variable hv3 is either 0 or 1.

Polynomials

  1. nia - (8·hv3 + 4·hv2 + 2·hv1 + hv0)
  2. hv0·(hv0 - 1)
  3. hv1·(hv1 - 1)
  4. hv2·(hv2 - 1)
  5. hv3·(hv3 - 1)

Group prohibit_illegal_num_words

This group is only ever used in combination with instruction group decompose_arg. Therefore, the instruction argument's correct binary decomposition is known to be in helper variables hv0 through hv3.

Description

  1. The argument is not 0.
  2. The argument is not 6.
  3. The argument is not 7.
  4. The argument is not 8.
  5. The argument is not 9.
  6. The argument is not 10.
  7. The argument is not 11.
  8. The argument is not 12.
  9. The argument is not 13.
  10. The argument is not 14.
  11. The argument is not 15.

Polynomials

  1. ind_0(hv3, hv2, hv1, hv0)
  2. ind_6(hv3, hv2, hv1, hv0)
  3. ind_7(hv3, hv2, hv1, hv0)
  4. ind_8(hv3, hv2, hv1, hv0)
  5. ind_9(hv3, hv2, hv1, hv0)
  6. ind_10(hv3, hv2, hv1, hv0)
  7. ind_11(hv3, hv2, hv1, hv0)
  8. ind_12(hv3, hv2, hv1, hv0)
  9. ind_13(hv3, hv2, hv1, hv0)
  10. ind_14(hv3, hv2, hv1, hv0)
  11. ind_15(hv3, hv2, hv1, hv0)
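Because the indicator polynomials ind_0 through ind_15 partition the 16 possible arguments, the 11 polynomials above vanish simultaneously exactly when 1 ⩽ n ⩽ 5. A quick check (a sketch, restating ind_i locally so the snippet is self-contained):

```python
def ind(i, hv3, hv2, hv1, hv0):
    # indicator polynomial: multiply hv_k if bit k of i is set, else (1 - hv_k)
    result = 1
    for hv, bit in zip((hv0, hv1, hv2, hv3), [(i >> k) & 1 for k in range(4)]):
        result *= hv if bit else (1 - hv)
    return result

# the arguments ruled out by group prohibit_illegal_num_words
FORBIDDEN = [0, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

def all_constraints_vanish(n):
    hv = [(n >> k) & 1 for k in range(4)]  # hv0 .. hv3
    return all(ind(i, hv[3], hv[2], hv[1], hv[0]) == 0 for i in FORBIDDEN)
```

Listing the forbidden arguments rather than the permitted ones keeps each constraint an individual low-degree polynomial.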

Group no_io

Description

  1. The running evaluation for standard input remains unchanged.
  2. The running evaluation for standard output remains unchanged.

Polynomials

  1. RunningEvaluationStandardInput' - RunningEvaluationStandardInput
  2. RunningEvaluationStandardOutput' - RunningEvaluationStandardOutput

Group no_ram

Description

  1. The running product for the Permutation Argument with the RAM table does not change.

Polynomials

  1. RunningProductRamTable' - RunningProductRamTable

Group keep_jump_stack

Description

  1. The jump stack pointer jsp does not change.
  2. The jump stack origin jso does not change.
  3. The jump stack destination jsd does not change.

Polynomials

  1. jsp' - jsp
  2. jso' - jso
  3. jsd' - jsd

Group step_1

Contains all constraints from instruction group keep_jump_stack, and additionally:

Description

  1. The instruction pointer increments by 1.

Polynomials

  1. ip' - (ip + 1)

Group step_2

Contains all constraints from instruction group keep_jump_stack, and additionally:

Description

  1. The instruction pointer increments by 2.

Polynomials

  1. ip' - (ip + 2)

Group grow_op_stack

Description

  1. The stack element in st0 is moved into st1.
  2. The stack element in st1 is moved into st2.
  3. The stack element in st2 is moved into st3.
  4. The stack element in st3 is moved into st4.
  5. The stack element in st4 is moved into st5.
  6. The stack element in st5 is moved into st6.
  7. The stack element in st6 is moved into st7.
  8. The stack element in st7 is moved into st8.
  9. The stack element in st8 is moved into st9.
  10. The stack element in st9 is moved into st10.
  11. The stack element in st10 is moved into st11.
  12. The stack element in st11 is moved into st12.
  13. The stack element in st12 is moved into st13.
  14. The stack element in st13 is moved into st14.
  15. The stack element in st14 is moved into st15.
  16. The op stack pointer is incremented by 1.
  17. The running product for the Op Stack Table absorbs the current row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤.

Polynomials

  1. st1' - st0
  2. st2' - st1
  3. st3' - st2
  4. st4' - st3
  5. st5' - st4
  6. st6' - st5
  7. st7' - st6
  8. st8' - st7
  9. st9' - st8
  10. st10' - st9
  11. st11' - st10
  12. st12' - st11
  13. st13' - st12
  14. st14' - st13
  15. st15' - st14
  16. op_stack_pointer' - (op_stack_pointer + 1)
  17. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer - 🫒·st15)
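A quick sanity check of constraints 1 through 16 (a sketch; the Op Stack Table running product of constraint 17 is omitted):

```python
def grow_op_stack(row):
    """row: (st0, ..., st15, op_stack_pointer). Returns the next row:
    every stack element shifts down one position, st15 spills into
    underflow memory, and the new top element is unconstrained."""
    *stack, ptr = row
    new_top = 0  # unconstrained by this group; any value satisfies it
    return (new_top, *stack[:-1], ptr + 1)

def check(row, row_next):
    # constraints 1-15: st(i+1)' == st(i); constraint 16: pointer + 1
    st, ptr = row[:-1], row[-1]
    st_n, ptr_n = row_next[:-1], row_next[-1]
    return all(st_n[i + 1] == st[i] for i in range(15)) and ptr_n == ptr + 1
```

Instruction push is the canonical member of this group: the pushed argument becomes the otherwise-unconstrained new top element.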

Group grow_op_stack_by_any_of

This group is only ever used in combination with instruction group decompose_arg. Therefore, the correct binary decomposition of the instruction's argument n is known to be in helper variables hv0 through hv3.

Description

  1. If n is 1, then st0 is moved into st1
    else if n is 2, then st0 is moved into st2
    else if n is 3, then st0 is moved into st3
    else if n is 4, then st0 is moved into st4
    else if n is 5, then st0 is moved into st5.
  2. If n is 1, then st1 is moved into st2
    else if n is 2, then st1 is moved into st3
    else if n is 3, then st1 is moved into st4
    else if n is 4, then st1 is moved into st5
    else if n is 5, then st1 is moved into st6.
  3. If n is 1, then st2 is moved into st3
    else if n is 2, then st2 is moved into st4
    else if n is 3, then st2 is moved into st5
    else if n is 4, then st2 is moved into st6
    else if n is 5, then st2 is moved into st7.
  4. If n is 1, then st3 is moved into st4
    else if n is 2, then st3 is moved into st5
    else if n is 3, then st3 is moved into st6
    else if n is 4, then st3 is moved into st7
    else if n is 5, then st3 is moved into st8.
  5. If n is 1, then st4 is moved into st5
    else if n is 2, then st4 is moved into st6
    else if n is 3, then st4 is moved into st7
    else if n is 4, then st4 is moved into st8
    else if n is 5, then st4 is moved into st9.
  6. If n is 1, then st5 is moved into st6
    else if n is 2, then st5 is moved into st7
    else if n is 3, then st5 is moved into st8
    else if n is 4, then st5 is moved into st9
    else if n is 5, then st5 is moved into st10.
  7. If n is 1, then st6 is moved into st7
    else if n is 2, then st6 is moved into st8
    else if n is 3, then st6 is moved into st9
    else if n is 4, then st6 is moved into st10
    else if n is 5, then st6 is moved into st11.
  8. If n is 1, then st7 is moved into st8
    else if n is 2, then st7 is moved into st9
    else if n is 3, then st7 is moved into st10
    else if n is 4, then st7 is moved into st11
    else if n is 5, then st7 is moved into st12.
  9. If n is 1, then st8 is moved into st9
    else if n is 2, then st8 is moved into st10
    else if n is 3, then st8 is moved into st11
    else if n is 4, then st8 is moved into st12
    else if n is 5, then st8 is moved into st13.
  10. If n is 1, then st9 is moved into st10
    else if n is 2, then st9 is moved into st11
    else if n is 3, then st9 is moved into st12
    else if n is 4, then st9 is moved into st13
    else if n is 5, then st9 is moved into st14.
  11. If n is 1, then st10 is moved into st11
    else if n is 2, then st10 is moved into st12
    else if n is 3, then st10 is moved into st13
    else if n is 4, then st10 is moved into st14
    else if n is 5, then st10 is moved into st15.
  12. If n is 1, then st11 is moved into st12
    else if n is 2, then st11 is moved into st13
    else if n is 3, then st11 is moved into st14
    else if n is 4, then st11 is moved into st15
    else if n is 5, then the op stack pointer grows by 5.
  13. If n is 1, then st12 is moved into st13
    else if n is 2, then st12 is moved into st14
    else if n is 3, then st12 is moved into st15
    else if n is 4, then the op stack pointer grows by 4
    else if n is 5, then the running product with the Op Stack Table accumulates st11 through st15.
  14. If n is 1, then st13 is moved into st14
    else if n is 2, then st13 is moved into st15
    else if n is 3, then the op stack pointer grows by 3
    else if n is 4, then the running product with the Op Stack Table accumulates st12 through st15.
  15. If n is 1, then st14 is moved into st15
    else if n is 2, then the op stack pointer grows by 2
    else if n is 3, then the running product with the Op Stack Table accumulates st13 through st15.
  16. If n is 1, then the op stack pointer grows by 1
    else if n is 2, then the running product with the Op Stack Table accumulates st14 and st15.
  17. If n is 1, then the running product with the Op Stack Table accumulates st15.

Group keep_op_stack_height

The op stack pointer and the running product for the Permutation Argument with the Op Stack Table remain the same. In other words, there is no access (neither read nor write) from/to the Op Stack Table.

Description

  1. The op stack pointer does not change.
  2. The running product for the Op Stack Table remains unchanged.

Polynomials

  1. op_stack_pointer' - op_stack_pointer
  2. RunningProductOpStackTable' - RunningProductOpStackTable

Group op_stack_remains_except_top_n_elements_unconstrained

Contains all constraints from group keep_op_stack_height, and additionally ensures that all but the top n op stack elements remain the same. (The top n elements are unconstrained.)

Description

  1. For i in {n, ..., NUM_OP_STACK_REGISTERS - 1}, the stack element in sti does not change.

Polynomials

  1. stn' - stn
  2. ...
  3. stN' - stN

Group keep_op_stack

Short-hand for op_stack_remains_except_top_n_elements_unconstrained with n=0.

Group binary_operation

Description

  1. The stack element in st2 is moved into st1.
  2. The stack element in st3 is moved into st2.
  3. The stack element in st4 is moved into st3.
  4. The stack element in st5 is moved into st4.
  5. The stack element in st6 is moved into st5.
  6. The stack element in st7 is moved into st6.
  7. The stack element in st8 is moved into st7.
  8. The stack element in st9 is moved into st8.
  9. The stack element in st10 is moved into st9.
  10. The stack element in st11 is moved into st10.
  11. The stack element in st12 is moved into st11.
  12. The stack element in st13 is moved into st12.
  13. The stack element in st14 is moved into st13.
  14. The stack element in st15 is moved into st14.
  15. The op stack pointer is decremented by 1.
  16. The running product for the Op Stack Table absorbs clock cycle and instruction bit 1 from the current row as well as op stack pointer and stack element 15 from the next row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤.

Polynomials

  1. st1' - st2
  2. st2' - st3
  3. st3' - st4
  4. st4' - st5
  5. st5' - st6
  6. st6' - st7
  7. st7' - st8
  8. st8' - st9
  9. st9' - st10
  10. st10' - st11
  11. st11' - st12
  12. st12' - st13
  13. st13' - st14
  14. st14' - st15
  15. op_stack_pointer' - (op_stack_pointer - 1)
  16. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
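The running-product update in constraint 16 can be sketched concretely. The following Python snippet is a minimal sketch of one accumulation step over the B-field; the function name and the concrete challenge values standing in for 🍋, 🍊, 🍉, 🫒 and the indeterminate 🪤 are made up for illustration.

```python
# Sketch of the Op Stack Table running-product update for a stack-shrinking
# step (constraint 16 above). All challenge values are arbitrary stand-ins.
P = 0xFFFF_FFFF_0000_0001  # the Oxfoi prime

def accumulate_op_stack(running_product, clk, ib1, op_stack_pointer_next, st15_next,
                        ch_clk, ch_ib1, ch_ptr, ch_elem, indeterminate):
    # one linear factor (🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
    factor = (indeterminate - ch_clk * clk - ch_ib1 * ib1
              - ch_ptr * op_stack_pointer_next - ch_elem * st15_next) % P
    return (running_product * factor) % P

rp = accumulate_op_stack(1, clk=17, ib1=0, op_stack_pointer_next=16, st15_next=42,
                         ch_clk=3, ch_ib1=5, ch_ptr=7, ch_elem=11, indeterminate=13)
```

In a real proof the challenges are sampled by the verifier; here they only serve to show the shape of the update.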

Group shrink_op_stack

Contains all constraints from instruction group binary_operation, and additionally:

Description

  1. The stack element in st1 is moved into st0.

Polynomials

  1. st0' - st1

Group shrink_op_stack_by_any_of

Is only ever used in combination with instruction group decompose_arg. Therefore, the correct binary decomposition of the instruction's argument n is known to be in helper variables hv0 through hv3.

Description

  1. If n is 1, then st1 is moved into st0
    else if n is 2, then st2 is moved into st0
    else if n is 3, then st3 is moved into st0
    else if n is 4, then st4 is moved into st0
    else if n is 5, then st5 is moved into st0.
  2. If n is 1, then st2 is moved into st1
    else if n is 2, then st3 is moved into st1
    else if n is 3, then st4 is moved into st1
    else if n is 4, then st5 is moved into st1
    else if n is 5, then st6 is moved into st1.
  3. If n is 1, then st3 is moved into st2
    else if n is 2, then st4 is moved into st2
    else if n is 3, then st5 is moved into st2
    else if n is 4, then st6 is moved into st2
    else if n is 5, then st7 is moved into st2.
  4. If n is 1, then st4 is moved into st3
    else if n is 2, then st5 is moved into st3
    else if n is 3, then st6 is moved into st3
    else if n is 4, then st7 is moved into st3
    else if n is 5, then st8 is moved into st3.
  5. If n is 1, then st5 is moved into st4
    else if n is 2, then st6 is moved into st4
    else if n is 3, then st7 is moved into st4
    else if n is 4, then st8 is moved into st4
    else if n is 5, then st9 is moved into st4.
  6. If n is 1, then st6 is moved into st5
    else if n is 2, then st7 is moved into st5
    else if n is 3, then st8 is moved into st5
    else if n is 4, then st9 is moved into st5
    else if n is 5, then st10 is moved into st5.
  7. If n is 1, then st7 is moved into st6
    else if n is 2, then st8 is moved into st6
    else if n is 3, then st9 is moved into st6
    else if n is 4, then st10 is moved into st6
    else if n is 5, then st11 is moved into st6.
  8. If n is 1, then st8 is moved into st7
    else if n is 2, then st9 is moved into st7
    else if n is 3, then st10 is moved into st7
    else if n is 4, then st11 is moved into st7
    else if n is 5, then st12 is moved into st7.
  9. If n is 1, then st9 is moved into st8
    else if n is 2, then st10 is moved into st8
    else if n is 3, then st11 is moved into st8
    else if n is 4, then st12 is moved into st8
    else if n is 5, then st13 is moved into st8.
  10. If n is 1, then st10 is moved into st9
    else if n is 2, then st11 is moved into st9
    else if n is 3, then st12 is moved into st9
    else if n is 4, then st13 is moved into st9
    else if n is 5, then st14 is moved into st9.
  11. If n is 1, then st11 is moved into st10
    else if n is 2, then st12 is moved into st10
    else if n is 3, then st13 is moved into st10
    else if n is 4, then st14 is moved into st10
    else if n is 5, then st15 is moved into st10.
  12. If n is 1, then st12 is moved into st11
    else if n is 2, then st13 is moved into st11
    else if n is 3, then st14 is moved into st11
    else if n is 4, then st15 is moved into st11
    else if n is 5, then the op stack pointer shrinks by 5.
  13. If n is 1, then st13 is moved into st12
    else if n is 2, then st14 is moved into st12
    else if n is 3, then st15 is moved into st12
    else if n is 4, then the op stack pointer shrinks by 4
    else if n is 5, then the running product with the Op Stack Table accumulates next row's st11 through st15.
  14. If n is 1, then st14 is moved into st13
    else if n is 2, then st15 is moved into st13
    else if n is 3, then the op stack pointer shrinks by 3
    else if n is 4, then the running product with the Op Stack Table accumulates next row's st12 through st15.
  15. If n is 1, then st15 is moved into st14
    else if n is 2, then the op stack pointer shrinks by 2
    else if n is 3, then the running product with the Op Stack Table accumulates next row's st13 through st15.
  16. If n is 1, then the op stack pointer shrinks by 1
    else if n is 2, then the running product with the Op Stack Table accumulates next row's st14 and st15.
  17. If n is 1, then the running product with the Op Stack Table accumulates next row's st15.
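Operationally, the seventeen cases above describe a single shift: the top n elements are dropped, all deeper elements move up by n, and the deepest n registers are refilled from op stack memory. A minimal Python sketch, modeling op stack memory as a plain list with the shallowest element first (all names are hypothetical):

```python
# Sketch of shrinking the op stack by n: registers shift up by n, and the
# bottom n registers are refilled from op stack underflow memory. In the
# actual VM, those reads are accounted for via the Op Stack Table.
def shrink_op_stack_by(stack_registers, op_stack_memory, n):
    # stack_registers: [st0, ..., st15] with st0 the top of the stack.
    # op_stack_memory: elements below st15, shallowest first.
    refill = op_stack_memory[:n]           # read back n underflow elements
    remaining_memory = op_stack_memory[n:]
    return stack_registers[n:] + refill, remaining_memory
```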

Instruction-Specific Transition Constraints

Instruction pop + n

This instruction is fully constrained by its instruction groups.

Instruction push + a

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The instruction's argument a is moved onto the stack.

Polynomials

  1. st0' - nia

Instruction divine + n

This instruction is fully constrained by its instruction groups.

Instruction pick + i

This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.

Description

For 0 ⩽ i < 16:

  1. Stack element i is moved to the top.
  2. For 0 ⩽ j < i: stack element j is shifted down by 1.
  3. For j > i: stack element j remains unchanged.

Instruction place + i

This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.

Description

For 0 ⩽ i < 16:

  1. Stack element 0 is moved to position i.
  2. For 0 ⩽ j < i: stack element j is shifted up by 1.
  3. For j > i: stack element j remains unchanged.

Instruction dup + i

This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.

Description

  1. If i is 0, then st0 is put on top of the stack and
    if i is 1, then st1 is put on top of the stack and
    if i is 2, then st2 is put on top of the stack and
    if i is 3, then st3 is put on top of the stack and
    if i is 4, then st4 is put on top of the stack and
    if i is 5, then st5 is put on top of the stack and
    if i is 6, then st6 is put on top of the stack and
    if i is 7, then st7 is put on top of the stack and
    if i is 8, then st8 is put on top of the stack and
    if i is 9, then st9 is put on top of the stack and
    if i is 10, then st10 is put on top of the stack and
    if i is 11, then st11 is put on top of the stack and
    if i is 12, then st12 is put on top of the stack and
    if i is 13, then st13 is put on top of the stack and
    if i is 14, then st14 is put on top of the stack and
    if i is 15, then st15 is put on top of the stack.

Polynomials

  1. ind_0(hv3, hv2, hv1, hv0)·(st0' - st0)
    + ind_1(hv3, hv2, hv1, hv0)·(st0' - st1)
    + ind_2(hv3, hv2, hv1, hv0)·(st0' - st2)
    + ind_3(hv3, hv2, hv1, hv0)·(st0' - st3)
    + ind_4(hv3, hv2, hv1, hv0)·(st0' - st4)
    + ind_5(hv3, hv2, hv1, hv0)·(st0' - st5)
    + ind_6(hv3, hv2, hv1, hv0)·(st0' - st6)
    + ind_7(hv3, hv2, hv1, hv0)·(st0' - st7)
    + ind_8(hv3, hv2, hv1, hv0)·(st0' - st8)
    + ind_9(hv3, hv2, hv1, hv0)·(st0' - st9)
    + ind_10(hv3, hv2, hv1, hv0)·(st0' - st10)
    + ind_11(hv3, hv2, hv1, hv0)·(st0' - st11)
    + ind_12(hv3, hv2, hv1, hv0)·(st0' - st12)
    + ind_13(hv3, hv2, hv1, hv0)·(st0' - st13)
    + ind_14(hv3, hv2, hv1, hv0)·(st0' - st14)
    + ind_15(hv3, hv2, hv1, hv0)·(st0' - st15)
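The indicator polynomials ind_0 through ind_15 select exactly one summand above. A minimal sketch, assuming ind_i(hv3, hv2, hv1, hv0) is the product, over the four bits of i, of hv_j or (1 - hv_j) depending on whether bit j of i is set:

```python
# Sketch of the indicator polynomials: ind_i evaluates to 1 exactly when
# (hv3, hv2, hv1, hv0) is the binary decomposition of i, and to 0 on every
# other bit pattern.
P = 0xFFFF_FFFF_0000_0001  # the Oxfoi prime

def ind(i, hv3, hv2, hv1, hv0):
    product = 1
    for bit_index, hv in enumerate([hv0, hv1, hv2, hv3]):
        # factor is hv_j if bit j of i is set, else (1 - hv_j)
        factor = hv if (i >> bit_index) & 1 else (1 - hv)
        product = product * factor % P
    return product
```

With the helper variables holding the bits of i, exactly one indicator is 1 and the remaining fifteen vanish, so exactly one term of the polynomial above is active.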

Instruction swap + i

This instruction makes use of indicator polynomials. In addition to its instruction groups, this instruction has the following constraints.

Description

For 0 ⩽ i < 16:

  1. Stack element i is moved to the top.
  2. The top stack element is moved to position i.
  3. For 0 ⩽ j < 16 with j ∉ {0, i}: stack element j remains unchanged.

Instruction nop

This instruction is fully constrained by its instruction groups.

Instruction skiz

For the correct behavior of instruction skiz, the instruction pointer ip needs to increment by 1, 2, or 3. The concrete value depends on the top of the stack st0 and the next instruction, held in nia.

Helper variable hv0 helps with identifying whether st0 is 0. To this end, it holds the inverse-or-zero of st0, i.e., is 0 if and only if st0 is 0, and is the inverse of st0 otherwise.

Efficient arithmetization of instruction skiz makes use of one of the properties of opcodes. Concretely, the least significant bit of an opcode is 1 if and only if the instruction takes an argument. The arithmetization of skiz can incorporate this simple flag by decomposing nia into helper variable registers hv, similarly to how ci is (always) deconstructed into instruction bit registers ib. Correct decomposition is guaranteed by employing a range check.

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. Helper variable hv0 is the inverse of st0 or 0.
  2. Helper variable hv0 is the inverse of st0 or st0 is 0.
  3. The next instruction nia is decomposed into helper variables hv1 through hv5.
  4. The indicator helper variable hv1 is either 0 or 1. Here, hv1 == 1 means that nia takes an argument.
  5. The helper variable hv2 is either 0 or 1 or 2 or 3.
  6. The helper variable hv3 is either 0 or 1 or 2 or 3.
  7. The helper variable hv4 is either 0 or 1 or 2 or 3.
  8. The helper variable hv5 is either 0 or 1 or 2 or 3.
  9. If st0 is non-zero, register ip is incremented by 1. If st0 is 0 and nia takes no argument, register ip is incremented by 2. If st0 is 0 and nia takes an argument, register ip is incremented by 3.

Written as Disjunctive Normal Form, the last constraint can be expressed as:

  1. (Register st0 is 0 or ip is incremented by 1), and (st0 has a multiplicative inverse or hv1 is 1 or ip is incremented by 2), and (st0 has a multiplicative inverse or hv1 is 0 or ip is incremented by 3).

Since the three cases are mutually exclusive, the three respective polynomials can be summed up into one.

Polynomials

  1. (st0·hv0 - 1)·hv0
  2. (st0·hv0 - 1)·st0
  3. nia - hv1 - 2·hv2 - 8·hv3 - 32·hv4 - 128·hv5
  4. hv1·(hv1 - 1)
  5. hv2·(hv2 - 1)·(hv2 - 2)·(hv2 - 3)
  6. hv3·(hv3 - 1)·(hv3 - 2)·(hv3 - 3)
  7. hv4·(hv4 - 1)·(hv4 - 2)·(hv4 - 3)
  8. hv5·(hv5 - 1)·(hv5 - 2)·(hv5 - 3)
  9. (ip' - (ip + 1))·st0
    + ((ip' - (ip + 2))·(st0·hv0 - 1)·(hv1 - 1))
    + ((ip' - (ip + 3))·(st0·hv0 - 1)·hv1)

Helper variable definitions for skiz

  1. hv0 = inverse(st0) (if st0 ≠ 0)
  2. hv1 = nia mod 2
  3. hv2 = (nia >> 1) mod 4
  4. hv3 = (nia >> 3) mod 4
  5. hv4 = (nia >> 5) mod 4
  6. hv5 = nia >> 7
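The helper-variable definitions above can be sketched directly in Python; the function names are hypothetical, and the inverse-or-zero is computed via Fermat's little theorem in the B-field.

```python
# Sketch of the skiz helper-variable assignments and the resulting
# instruction pointer increment, following the definitions above.
P = 0xFFFF_FFFF_0000_0001  # the Oxfoi prime

def skiz_helper_variables(st0, nia):
    hv0 = pow(st0, P - 2, P) if st0 != 0 else 0  # inverse-or-zero of st0
    hv1 = nia & 1          # 1 iff the next instruction takes an argument
    hv2 = (nia >> 1) & 3
    hv3 = (nia >> 3) & 3
    hv4 = (nia >> 5) & 3
    hv5 = nia >> 7
    return hv0, hv1, hv2, hv3, hv4, hv5

def skiz_ip_increment(st0, nia):
    # skip the next instruction (and its argument, if any) iff st0 is 0
    if st0 != 0:
        return 1
    return 2 + (nia & 1)
```

Note that the decomposition satisfies polynomial 3 above: nia = hv1 + 2·hv2 + 8·hv3 + 32·hv4 + 128·hv5.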

Instruction call + d

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The jump stack pointer jsp is incremented by 1.
  2. The jump's origin jso is set to the current instruction pointer ip plus 2.
  3. The jump's destination jsd is set to the instruction's argument d.
  4. The instruction pointer ip is set to the instruction's argument d.

Polynomials

  1. jsp' - (jsp + 1)
  2. jso' - (ip + 2)
  3. jsd' - nia
  4. ip' - nia

Instruction return

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The jump stack pointer jsp is decremented by 1.
  2. The instruction pointer ip is set to the last call's origin jso.

Polynomials

  1. jsp' - (jsp - 1)
  2. ip' - jso

Instruction recurse

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The jump stack pointer jsp does not change.
  2. The last jump's origin jso does not change.
  3. The last jump's destination jsd does not change.
  4. The instruction pointer ip is set to the last jump's destination jsd.

Polynomials

  1. jsp' - jsp
  2. jso' - jso
  3. jsd' - jsd
  4. ip' - jsd
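The control-flow semantics of call, return, and recurse can be summarized in one sketch. The following Python snippet models each jump stack entry as a pair (jso, jsd), following the constraints above; the function names are made up for illustration.

```python
# Sketch of the jump stack semantics of call, return, and recurse.
def call(ip, jump_stack, d):
    jump_stack.append((ip + 2, d))  # jso = ip + 2, jsd = d
    return d                        # ip' = d

def ret(ip, jump_stack):
    jso, _ = jump_stack.pop()       # jump stack shrinks
    return jso                      # ip' = jso

def recurse(ip, jump_stack):
    _, jsd = jump_stack[-1]         # jump stack unchanged
    return jsd                      # ip' = jsd
```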

Instruction recurse_or_return

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. If ST5 equals ST6, then ip in the next row equals jso in the current row.
  2. If ST5 equals ST6, then jsp decrements by one.
  3. If ST5 equals ST6, then hv0 in the current row is 0.
  4. If ST5 is unequal to ST6, then hv0 in the current row is the inverse of (ST6 - ST5).
  5. If ST5 is unequal to ST6, then ip in the next row is equal to jsd in the current row.
  6. If ST5 is unequal to ST6, then jsp remains unchanged.
  7. If ST5 is unequal to ST6, then jso remains unchanged.
  8. If ST5 is unequal to ST6, then jsd remains unchanged.

Helper variable definitions for recurse_or_return

To help arithmetizing the equality check between ST5 and ST6, helper variable hv0 is the inverse-or-zero of (ST6 - ST5).

Instruction assert

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The current top of the stack st0 is 1.

Polynomials

  1. st0 - 1

Instruction halt

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The instruction executed in the following step is instruction halt.

Polynomials

  1. ci' - ci

Instruction read_mem + n

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The RAM pointer st0 is decremented by n.
  2. If n is 1, then st1 is moved into st2
    else if n is 2, then st1 is moved into st3
    else if n is 3, then st1 is moved into st4
    else if n is 4, then st1 is moved into st5
    else if n is 5, then st1 is moved into st6.
  3. If n is 1, then st2 is moved into st3
    else if n is 2, then st2 is moved into st4
    else if n is 3, then st2 is moved into st5
    else if n is 4, then st2 is moved into st6
    else if n is 5, then st2 is moved into st7.
  4. If n is 1, then st3 is moved into st4
    else if n is 2, then st3 is moved into st5
    else if n is 3, then st3 is moved into st6
    else if n is 4, then st3 is moved into st7
    else if n is 5, then st3 is moved into st8.
  5. If n is 1, then st4 is moved into st5
    else if n is 2, then st4 is moved into st6
    else if n is 3, then st4 is moved into st7
    else if n is 4, then st4 is moved into st8
    else if n is 5, then st4 is moved into st9.
  6. If n is 1, then st5 is moved into st6
    else if n is 2, then st5 is moved into st7
    else if n is 3, then st5 is moved into st8
    else if n is 4, then st5 is moved into st9
    else if n is 5, then st5 is moved into st10.
  7. If n is 1, then st6 is moved into st7
    else if n is 2, then st6 is moved into st8
    else if n is 3, then st6 is moved into st9
    else if n is 4, then st6 is moved into st10
    else if n is 5, then st6 is moved into st11.
  8. If n is 1, then st7 is moved into st8
    else if n is 2, then st7 is moved into st9
    else if n is 3, then st7 is moved into st10
    else if n is 4, then st7 is moved into st11
    else if n is 5, then st7 is moved into st12.
  9. If n is 1, then st8 is moved into st9
    else if n is 2, then st8 is moved into st10
    else if n is 3, then st8 is moved into st11
    else if n is 4, then st8 is moved into st12
    else if n is 5, then st8 is moved into st13.
  10. If n is 1, then st9 is moved into st10
    else if n is 2, then st9 is moved into st11
    else if n is 3, then st9 is moved into st12
    else if n is 4, then st9 is moved into st13
    else if n is 5, then st9 is moved into st14.
  11. If n is 1, then st10 is moved into st11
    else if n is 2, then st10 is moved into st12
    else if n is 3, then st10 is moved into st13
    else if n is 4, then st10 is moved into st14
    else if n is 5, then st10 is moved into st15.
  12. If n is 1, then st11 is moved into st12
    else if n is 2, then st11 is moved into st13
    else if n is 3, then st11 is moved into st14
    else if n is 4, then st11 is moved into st15
    else if n is 5, then the op stack pointer grows by 5.
  13. If n is 1, then st12 is moved into st13
    else if n is 2, then st12 is moved into st14
    else if n is 3, then st12 is moved into st15
    else if n is 4, then the op stack pointer grows by 4
    else if n is 5, then the running product with the Op Stack Table accumulates st11 through st15.
  14. If n is 1, then st13 is moved into st14
    else if n is 2, then st13 is moved into st15
    else if n is 3, then the op stack pointer grows by 3
    else if n is 4, then the running product with the Op Stack Table accumulates st12 through st15
    else if n is 5, then the running product with the RAM Table accumulates next row's st1 through st5.
  15. If n is 1, then st14 is moved into st15
    else if n is 2, then the op stack pointer grows by 2
    else if n is 3, then the running product with the Op Stack Table accumulates st13 through st15
    else if n is 4, then the running product with the RAM Table accumulates next row's st1 through st4.
  16. If n is 1, then the op stack pointer grows by 1
    else if n is 2, then the running product with the Op Stack Table accumulates st14 and st15
    else if n is 3, then the running product with the RAM Table accumulates next row's st1 through st3.
  17. If n is 1, then the running product with the Op Stack Table accumulates st15
    else if n is 2, then the running product with the RAM Table accumulates next row's st1 and st2.
  18. If n is 1, then the running product with the RAM Table accumulates next row's st1.

Instruction write_mem + n

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The RAM pointer st0 is incremented by n.
  2. If n is 1, then st2 is moved into st1
    else if n is 2, then st3 is moved into st1
    else if n is 3, then st4 is moved into st1
    else if n is 4, then st5 is moved into st1
    else if n is 5, then st6 is moved into st1.
  3. If n is 1, then st3 is moved into st2
    else if n is 2, then st4 is moved into st2
    else if n is 3, then st5 is moved into st2
    else if n is 4, then st6 is moved into st2
    else if n is 5, then st7 is moved into st2.
  4. If n is 1, then st4 is moved into st3
    else if n is 2, then st5 is moved into st3
    else if n is 3, then st6 is moved into st3
    else if n is 4, then st7 is moved into st3
    else if n is 5, then st8 is moved into st3.
  5. If n is 1, then st5 is moved into st4
    else if n is 2, then st6 is moved into st4
    else if n is 3, then st7 is moved into st4
    else if n is 4, then st8 is moved into st4
    else if n is 5, then st9 is moved into st4.
  6. If n is 1, then st6 is moved into st5
    else if n is 2, then st7 is moved into st5
    else if n is 3, then st8 is moved into st5
    else if n is 4, then st9 is moved into st5
    else if n is 5, then st10 is moved into st5.
  7. If n is 1, then st7 is moved into st6
    else if n is 2, then st8 is moved into st6
    else if n is 3, then st9 is moved into st6
    else if n is 4, then st10 is moved into st6
    else if n is 5, then st11 is moved into st6.
  8. If n is 1, then st8 is moved into st7
    else if n is 2, then st9 is moved into st7
    else if n is 3, then st10 is moved into st7
    else if n is 4, then st11 is moved into st7
    else if n is 5, then st12 is moved into st7.
  9. If n is 1, then st9 is moved into st8
    else if n is 2, then st10 is moved into st8
    else if n is 3, then st11 is moved into st8
    else if n is 4, then st12 is moved into st8
    else if n is 5, then st13 is moved into st8.
  10. If n is 1, then st10 is moved into st9
    else if n is 2, then st11 is moved into st9
    else if n is 3, then st12 is moved into st9
    else if n is 4, then st13 is moved into st9
    else if n is 5, then st14 is moved into st9.
  11. If n is 1, then st11 is moved into st10
    else if n is 2, then st12 is moved into st10
    else if n is 3, then st13 is moved into st10
    else if n is 4, then st14 is moved into st10
    else if n is 5, then st15 is moved into st10.
  12. If n is 1, then st12 is moved into st11
    else if n is 2, then st13 is moved into st11
    else if n is 3, then st14 is moved into st11
    else if n is 4, then st15 is moved into st11
    else if n is 5, then the op stack pointer shrinks by 5.
  13. If n is 1, then st13 is moved into st12
    else if n is 2, then st14 is moved into st12
    else if n is 3, then st15 is moved into st12
    else if n is 4, then the op stack pointer shrinks by 4
    else if n is 5, then the running product with the Op Stack Table accumulates next row's st11 through st15.
  14. If n is 1, then st14 is moved into st13
    else if n is 2, then st15 is moved into st13
    else if n is 3, then the op stack pointer shrinks by 3
    else if n is 4, then the running product with the Op Stack Table accumulates next row's st12 through st15
    else if n is 5, then the running product with the RAM Table accumulates st1 through st5.
  15. If n is 1, then st15 is moved into st14
    else if n is 2, then the op stack pointer shrinks by 2
    else if n is 3, then the running product with the Op Stack Table accumulates next row's st13 through st15
    else if n is 4, then the running product with the RAM Table accumulates st1 through st4.
  16. If n is 1, then the op stack pointer shrinks by 1
    else if n is 2, then the running product with the Op Stack Table accumulates next row's st14 and st15
    else if n is 3, then the running product with the RAM Table accumulates st1 through st3.
  17. If n is 1, then the running product with the Op Stack Table accumulates next row's st15
    else if n is 2, then the running product with the RAM Table accumulates st1 and st2.
  18. If n is 1, then the running product with the RAM Table accumulates st1.

Instruction hash

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. st10 is moved into st5.
  2. st11 is moved into st6.
  3. st12 is moved into st7.
  4. st13 is moved into st8.
  5. st14 is moved into st9.
  6. st15 is moved into st10.
  7. The op stack pointer shrinks by 5.
  8. The running product with the Op Stack Table accumulates next row's st11 through st15.

Polynomials

  1. st5' - st10
  2. st6' - st11
  3. st7' - st12
  4. st8' - st13
  5. st9' - st14
  6. st10' - st15
  7. op_stack_pointer' - op_stack_pointer + 5
  8. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')

Instruction assert_vector

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. st0 is equal to st5.
  2. st1 is equal to st6.
  3. st2 is equal to st7.
  4. st3 is equal to st8.
  5. st4 is equal to st9.
  6. st10 is moved into st5.
  7. st11 is moved into st6.
  8. st12 is moved into st7.
  9. st13 is moved into st8.
  10. st14 is moved into st9.
  11. st15 is moved into st10.
  12. The op stack pointer shrinks by 5.
  13. The running product with the Op Stack Table accumulates next row's st11 through st15.

Polynomials

  1. st5 - st0
  2. st6 - st1
  3. st7 - st2
  4. st8 - st3
  5. st9 - st4
  6. st5' - st10
  7. st6' - st11
  8. st7' - st12
  9. st8' - st13
  10. st9' - st14
  11. st10' - st15
  12. op_stack_pointer' - op_stack_pointer + 5
  13. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')

Instruction sponge_init

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the Hash Table.

Instruction sponge_absorb

In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table.

Description

  1. st10 is moved into st0.
  2. st11 is moved into st1.
  3. st12 is moved into st2.
  4. st13 is moved into st3.
  5. st14 is moved into st4.
  6. st15 is moved into st5.
  7. The op stack pointer shrinks by 10.
  8. The running product with the Op Stack Table accumulates next row's st6 through st15.

Polynomials

  1. st0' - st10
  2. st1' - st11
  3. st2' - st12
  4. st3' - st13
  5. st4' - st14
  6. st5' - st15
  7. op_stack_pointer' - op_stack_pointer + 10
  8. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer' - 🫒·st15')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 3) - 🫒·st12')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 4) - 🫒·st11')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 5) - 🫒·st10')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 6) - 🫒·st9')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 7) - 🫒·st8')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 8) - 🫒·st7')
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer' + 9) - 🫒·st6')

Instruction sponge_absorb_mem

In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table and the RAM Table.

Description

  1. The RAM pointer st0 is incremented by 10.

Instruction sponge_squeeze

In addition to its instruction groups, this instruction has the following constraints. Beyond that, correct transition is guaranteed by the Hash Table.

Description

  1. st0 is moved into st10.
  2. st1 is moved into st11.
  3. st2 is moved into st12.
  4. st3 is moved into st13.
  5. st4 is moved into st14.
  6. st5 is moved into st15.
  7. The op stack pointer grows by 10.
  8. The running product with the Op Stack Table accumulates st6 through st15.

Polynomials

  1. st10' - st0
  2. st11' - st1
  3. st12' - st2
  4. st13' - st3
  5. st14' - st4
  6. st15' - st5
  7. op_stack_pointer' - op_stack_pointer - 10
  8. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·op_stack_pointer - 🫒·st15)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 1) - 🫒·st14)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 2) - 🫒·st13)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 3) - 🫒·st12)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 4) - 🫒·st11)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 5) - 🫒·st10)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 6) - 🫒·st9)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 7) - 🫒·st8)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 8) - 🫒·st7)
    ·(🪤 - 🍋·clk - 🍊·ib1 - 🍉·(op_stack_pointer + 9) - 🫒·st6)

Instruction add

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The sum of the top two stack elements is moved into the top of the stack.

Polynomials

  1. st0' - (st0 + st1)

Instruction mul

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The product of the top two stack elements is moved into the top of the stack.

Polynomials

  1. st0' - st0·st1

Instruction invert

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The top of the stack's inverse is moved into the top of the stack.

Polynomials

  1. st0'·st0 - 1

Instruction eq

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. Helper variable hv0 is the inverse of the difference of the stack's two top-most elements or 0.
  2. Helper variable hv0 is the inverse of the difference of the stack's two top-most elements or the difference is 0.
  3. The new top of the stack is 1 if the difference between the stack's two top-most elements is not invertible, 0 otherwise.

Polynomials

  1. hv0·(hv0·(st1 - st0) - 1)
  2. (st1 - st0)·(hv0·(st1 - st0) - 1)
  3. st0' - (1 - hv0·(st1 - st0))

Helper variable definitions for eq

  1. hv0 = inverse(rhs - lhs) if rhs ≠ lhs.
  2. hv0 = 0 if rhs = lhs.
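The inverse-or-zero trick behind eq can be sketched in Python over the B-field; the function name is hypothetical, and the modular inverse is taken via Fermat's little theorem.

```python
# Sketch of instruction eq: the new top of the stack is 1 iff the top two
# stack elements are equal, computed via the inverse-or-zero helper hv0.
P = 0xFFFF_FFFF_0000_0001  # the Oxfoi prime

def eq_result(st0, st1):
    diff = (st1 - st0) % P
    hv0 = pow(diff, P - 2, P) if diff != 0 else 0  # inverse-or-zero of st1 - st0
    return (1 - hv0 * diff) % P                    # polynomial 3 above, solved for st0'
```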

Instruction split

In addition to its instruction groups, this instruction has the following constraints. Part of the correct transition, namely the range check on the instruction's result, is guaranteed by the U32 Table.

Description

  1. The top of the stack is decomposed as 32-bit chunks into the stack's top-most two elements.
  2. Helper variable hv0 holds the inverse of (st1' - (2^32 - 1)) or the low 32 bits st0' are 0.

Polynomials

  1. st0 - (2^32·st1' + st0')
  2. st0'·(hv0·(st1' - (2^32 - 1)) - 1)

Helper variable definitions for split

Given the high 32 bits of st0 as hi = st0 >> 32 and the low 32 bits of st0 as lo = st0 & 0xffff_ffff,

  1. hv0 = inverse(hi - (2^32 - 1)) if lo ≠ 0.
  2. hv0 = 0 if lo = 0.
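The decomposition and both polynomials can be checked with a short Python sketch; the function name is made up, and the inverse is again computed via Fermat's little theorem.

```python
# Sketch of instruction split: decompose st0 into 32-bit chunks and compute
# the helper variable hv0, following the definitions above.
P = 0xFFFF_FFFF_0000_0001  # the Oxfoi prime

def split(st0):
    hi, lo = st0 >> 32, st0 & 0xFFFF_FFFF
    # hv0 is the inverse of (hi - (2^32 - 1)) when lo is nonzero, else 0;
    # for canonical st0 < P, hi = 2^32 - 1 forces lo = 0, so the inverse exists.
    hv0 = pow((hi - 0xFFFF_FFFF) % P, P - 2, P) if lo != 0 else 0
    assert st0 == 2**32 * hi + lo                              # polynomial 1
    assert lo * (hv0 * ((hi - 0xFFFF_FFFF) % P) - 1) % P == 0  # polynomial 2
    return hi, lo
```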

Instruction lt

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction and

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction xor

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction log2floor

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction pow

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction div_mod

Recall that instruction div_mod takes stack _ d n and computes _ q r where n is the numerator, d is the denominator, r is the remainder, and q is the quotient. The following two properties are guaranteed by the U32 Table:

  1. The remainder r is smaller than the denominator d, and
  2. all four of n, d, q, and r are u32s.

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. Numerator is quotient times denominator plus remainder: n == q·d + r.
  2. Stack element st2 does not change.

Polynomials

  1. st0 - st1·st1' - st0'
  2. st2' - st2
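The division relation and the two U32-Table-guaranteed properties can be sketched together; the function name is hypothetical.

```python
# Sketch of div_mod on stack _ d n -> _ q r: the transition constraint checks
# n == q·d + r, while r < d and the u32 range checks are delegated to the
# U32 Table (modeled here as plain assertions).
def div_mod(n, d):
    q, r = divmod(n, d)
    assert n == q * d + r                           # the transition constraint
    assert r < d                                    # guaranteed by the U32 Table
    assert all(0 <= v < 2**32 for v in (n, d, q, r))  # also the U32 Table
    return q, r
```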

Instruction pop_count

This instruction is fully constrained by its instruction groups. Beyond that, correct transition is guaranteed by the U32 Table.

Instruction xx_add

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The result of adding st0 to st3 is moved into st0.
  2. The result of adding st1 to st4 is moved into st1.
  3. The result of adding st2 to st5 is moved into st2.
  4. st6 is moved into st3.
  5. st7 is moved into st4.
  6. st8 is moved into st5.
  7. st9 is moved into st6.
  8. st10 is moved into st7.
  9. st11 is moved into st8.
  10. st12 is moved into st9.
  11. st13 is moved into st10.
  12. st14 is moved into st11.
  13. st15 is moved into st12.
  14. The op stack pointer shrinks by 3.
  15. The running product with the Op Stack Table accumulates next row's st13 through st15.

Polynomials

  1. st0' - (st0 + st3)
  2. st1' - (st1 + st4)
  3. st2' - (st2 + st5)
  4. st3' - st6
  5. st4' - st7
  6. st5' - st8
  7. st6' - st9
  8. st7' - st10
  9. st8' - st11
  10. st9' - st12
  11. st10' - st13
  12. st11' - st14
  13. st12' - st15
  14. op_stack_pointer' - op_stack_pointer + 3
  15. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊 - 🍉·op_stack_pointer' - 🫒·st15')
    ·(🪤 - 🍋·clk - 🍊 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
    ·(🪤 - 🍋·clk - 🍊 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
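Since X-field addition is coefficient-wise, the constraints amount to the following stack transition, modeled here in Python for illustration (not VM code). The model assumes the most recently written underflow element sits at the end of the underflow list:

```python
# Illustrative model of xx_add's effect on the op stack. stack[0] is st0
# (the top); `underflow` models op stack underflow memory with the most
# recently written element last. Not Triton VM's implementation.
P = 0xFFFFFFFF00000001  # the Oxfoi prime

def xx_add(stack: list[int], underflow: list[int]) -> list[int]:
    assert len(stack) == 16
    summed = [(stack[i] + stack[i + 3]) % P for i in range(3)]  # new st0..st2
    popped = [underflow.pop() for _ in range(3)]  # three elements rise back up
    # new st3..st12 are old st6..st15; new st13..st15 come from underflow
    return summed + stack[6:16] + popped[::-1]
```

The stack shrinks by three elements overall, matching constraint 14, and the three re-surfacing elements are exactly those accumulated by the running product in constraint 15.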

Instruction xx_mul

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The coefficient of x^0 of multiplying the two X-Field elements on the stack is moved into st0.
  2. The coefficient of x^1 of multiplying the two X-Field elements on the stack is moved into st1.
  3. The coefficient of x^2 of multiplying the two X-Field elements on the stack is moved into st2.
  4. st6 is moved into st3.
  5. st7 is moved into st4.
  6. st8 is moved into st5.
  7. st9 is moved into st6.
  8. st10 is moved into st7.
  9. st11 is moved into st8.
  10. st12 is moved into st9.
  11. st13 is moved into st10.
  12. st14 is moved into st11.
  13. st15 is moved into st12.
  14. The op stack pointer shrinks by 3.
  15. The running product with the Op Stack Table accumulates next row's st13 through st15.

Polynomials

  1. st0' - (st0·st3 - st2·st4 - st1·st5)
  2. st1' - (st1·st3 + st0·st4 - st2·st5 + st2·st4 + st1·st5)
  3. st2' - (st2·st3 + st1·st4 + st0·st5 + st2·st5)
  4. st3' - st6
  5. st4' - st7
  6. st5' - st8
  7. st6' - st9
  8. st7' - st10
  9. st8' - st11
  10. st9' - st12
  11. st10' - st13
  12. st11' - st14
  13. st12' - st15
  14. op_stack_pointer' - op_stack_pointer + 3
  15. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊 - 🍉·op_stack_pointer' - 🫒·st15')
    ·(🪤 - 🍋·clk - 🍊 - 🍉·(op_stack_pointer' + 1) - 🫒·st14')
    ·(🪤 - 🍋·clk - 🍊 - 🍉·(op_stack_pointer' + 2) - 🫒·st13')
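The three coefficient constraints are schoolbook multiplication in F_p[x] followed by reduction modulo the Shah polynomial x³ - x + 1, so that x³ = x - 1 and x⁴ = x² - x. A Python sketch of this arithmetic (for illustration, not Triton VM's implementation):

```python
# Multiplication in the X-field F_p[x]/(x^3 - x + 1): schoolbook product,
# then reduction using x^3 = x - 1 and x^4 = x^2 - x. Sketch only.
P = 0xFFFFFFFF00000001  # the Oxfoi prime

def xmul(a, b):
    """Multiply (a0, a1, a2) and (b0, b1, b2), listed by coefficient of x^0, x^1, x^2."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    c0 = a0 * b0                        # coefficient of x^0 before reduction
    c1 = a0 * b1 + a1 * b0              # x^1
    c2 = a0 * b2 + a1 * b1 + a2 * b0    # x^2
    c3 = a1 * b2 + a2 * b1              # x^3, reduces to x - 1
    c4 = a2 * b2                        # x^4, reduces to x^2 - x
    return ((c0 - c3) % P, (c1 + c3 - c4) % P, (c2 + c4) % P)
```

With a = (st0, st1, st2) and b = (st3, st4, st5), the three returned coordinates are exactly the right-hand sides of polynomials 1 through 3 above.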

Instruction x_invert

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The coefficient of x^0 of the product of the X-Field element on top of the stack in the current row and the one in the next row is 1.
  2. The coefficient of x^1 of the product of the X-Field element on top of the stack in the current row and the one in the next row is 0.
  3. The coefficient of x^2 of the product of the X-Field element on top of the stack in the current row and the one in the next row is 0.

Polynomials

  1. st0·st0' - st2·st1' - st1·st2' - 1
  2. st1·st0' + st0·st1' - st2·st2' + st2·st1' + st1·st2'
  3. st2·st0' + st1·st1' + st0·st2' + st2·st2'
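The constraints only stipulate that the old and new top of the stack multiply to one; they do not say how the inverse is found. Outside the VM, one way is Fermat's little theorem in the X-field: for nonzero e, its inverse is e^(p³ - 2). A sketch, assuming the Shah polynomial x³ - x + 1 (illustration only, not Triton VM's implementation):

```python
# Inversion in F_p[x]/(x^3 - x + 1) via Fermat's little theorem:
# the multiplicative group has order p^3 - 1, so e^(p^3 - 2) = e^(-1).
P = 0xFFFFFFFF00000001  # the Oxfoi prime

def xmul(a, b):
    """Multiply two X-field elements, coefficients listed for x^0, x^1, x^2."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a0 * b2 + a1 * b1 + a2 * b0
    c3, c4 = a1 * b2 + a2 * b1, a2 * b2   # reduced via x^3 = x - 1, x^4 = x^2 - x
    return ((c0 - c3) % P, (c1 + c3 - c4) % P, (c2 + c4) % P)

def xinv(a):
    assert a != (0, 0, 0), "zero has no inverse"
    result, base, e = (1, 0, 0), a, P**3 - 2
    while e:                       # square-and-multiply exponentiation
        if e & 1:
            result = xmul(result, base)
        base = xmul(base, base)
        e >>= 1
    return result
```

Multiplying any nonzero element by its computed inverse yields (1, 0, 0), i.e., the three constraint polynomials vanish.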

Instruction xb_mul

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. The result of multiplying the top of the stack with the X-Field element's coefficient for x^0 is moved into st0.
  2. The result of multiplying the top of the stack with the X-Field element's coefficient for x^1 is moved into st1.
  3. The result of multiplying the top of the stack with the X-Field element's coefficient for x^2 is moved into st2.
  4. st4 is moved into st3.
  5. st5 is moved into st4.
  6. st6 is moved into st5.
  7. st7 is moved into st6.
  8. st8 is moved into st7.
  9. st9 is moved into st8.
  10. st10 is moved into st9.
  11. st11 is moved into st10.
  12. st12 is moved into st11.
  13. st13 is moved into st12.
  14. st14 is moved into st13.
  15. st15 is moved into st14.
  16. The op stack pointer shrinks by 1.
  17. The running product with the Op Stack Table accumulates next row's st15.

Polynomials

  1. st0' - st0·st1
  2. st1' - st0·st2
  3. st2' - st0·st3
  4. st3' - st4
  5. st4' - st5
  6. st5' - st6
  7. st6' - st7
  8. st7' - st8
  9. st8' - st9
  10. st9' - st10
  11. st10' - st11
  12. st11' - st12
  13. st12' - st13
  14. st13' - st14
  15. st14' - st15
  16. op_stack_pointer' - op_stack_pointer + 1
  17. RunningProductOpStackTable' - RunningProductOpStackTable·(🪤 - 🍋·clk - 🍊 - 🍉·op_stack_pointer' - 🫒·st15')

Instruction read_io + n

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. If n is 1, the running evaluation for standard input accumulates next row's st0
    else if n is 2, the running evaluation for standard input accumulates next row's st0 and st1
    else if n is 3, the running evaluation for standard input accumulates next row's st0 through st2
    else if n is 4, the running evaluation for standard input accumulates next row's st0 through st3
    else if n is 5, the running evaluation for standard input accumulates next row's st0 through st4.

Polynomials

  1. ind_1(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·RunningEvaluationStandardInput - st0')
    + ind_2(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·RunningEvaluationStandardInput + st0') - st1')
    + ind_3(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') - st2')
    + ind_4(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') + st2') - st3')
    + ind_5(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardInput' - 🛏·(🛏·(🛏·(🛏·(🛏·RunningEvaluationStandardInput + st0') + st1') + st2') + st3') - st4')
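The nested polynomials implement a Horner-style fold, assuming the Evaluation Argument's update rule v ← 🛏·v + s. A sketch with the challenge 🛏 modeled as a plain integer b (illustration only):

```python
# Horner-style fold behind the read_io / write_io running evaluations,
# assuming the update rule v <- 🛏·v + s for each absorbed symbol.
# The challenge is modeled as a plain integer; arithmetic is mod the
# Oxfoi prime. Sketch for illustration only.
P = 0xFFFFFFFF00000001

def absorb(v: int, b: int, symbols: list[int]) -> int:
    """Fold the symbols into the running evaluation, one at a time."""
    for s in symbols:
        v = (b * v + s) % P
    return v
```

Absorbing three symbols one at a time equals the single nested expression for n = 3: absorb(v, b, [s0, s1, s2]) == (b·(b·(b·v + s0) + s1) + s2) mod p.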

Instruction write_io + n

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. If n is 1, the running evaluation for standard output accumulates st0
    else if n is 2, the running evaluation for standard output accumulates st0 and st1
    else if n is 3, the running evaluation for standard output accumulates st0 through st2
    else if n is 4, the running evaluation for standard output accumulates st0 through st3
    else if n is 5, the running evaluation for standard output accumulates st0 through st4.

Polynomials

  1. ind_1(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·RunningEvaluationStandardOutput - st0)
    + ind_2(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·RunningEvaluationStandardOutput + st0) - st1)
    + ind_3(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) - st2)
    + ind_4(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) + st2) - st3)
    + ind_5(hv3, hv2, hv1, hv0)·(RunningEvaluationStandardOutput' - 🧯·(🧯·(🧯·(🧯·(🧯·RunningEvaluationStandardOutput + st0) + st1) + st2) + st3) - st4)

Instruction merkle_step

Recall that in a Merkle tree, the indices of left (respectively right) leaves have 0 (respectively 1) as their least significant bit. This motivates the use of a helper variable to hold that least significant bit.

In addition to its instruction groups, this instruction has the following constraints. The two Evaluation Arguments also used for instruction hash guarantee correct transition of the top of the stack.

Description

  1. Helper variable hv5 is either 0 or 1.
  2. st5 is shifted by 1 bit to the right. In other words, twice st5 in the next row plus hv5 equals st5 in the current row.

Helper variable definitions for merkle_step

  1. hv0 through hv4 hold the sibling digest of the node indicated by st5, as read from the interface for non-deterministic input.
  2. hv5 holds the result of st5 % 2, the Merkle tree node index's least significant bit, indicating whether it is a left or right node.
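The arithmetic of the node index update is small enough to state directly. A sketch for illustration:

```python
# Sketch of merkle_step's node index update (illustration only): hv5 is
# the least significant bit of st5, and st5 is shifted right by one bit.
def merkle_step_index(st5: int) -> tuple[int, int]:
    hv5 = st5 % 2        # 0: left node, 1: right node
    st5_next = st5 // 2  # index of the parent node
    assert 2 * st5_next + hv5 == st5  # the stated transition constraint
    assert hv5 * (hv5 - 1) == 0       # hv5 is a bit
    return st5_next, hv5
```

For example, node index 13 (binary 1101) is a right node, and its parent has index 6.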

Instruction merkle_step_mem

In addition to its instruction groups, this instruction has the following constraints. The two Evaluation Arguments also used for instruction hash guarantee correct transition of the top of the stack. See also merkle_step.

Description

  1. Stack element st6 does not change.
  2. Stack element st7 is incremented by 5.
  3. Helper variable hv5 is either 0 or 1.
  4. st5 is shifted by 1 bit to the right. In other words, twice st5 in the next row plus hv5 equals st5 in the current row.
  5. The running product with the RAM Table accumulates hv0 through hv4 using the (appropriately adjusted) RAM pointer held in st7.

Helper variable definitions for merkle_step_mem

  1. hv0 through hv4 hold the sibling digest of the node indicated by st5, as read from RAM at the address held in st7.
  2. hv5 holds the result of st5 % 2, the Merkle tree node index's least significant bit, indicating whether it is a left or right node.

Instruction xx_dot_step

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. Store (RAM[st0], RAM[st0+1], RAM[st0+2]) in (hv0, hv1, hv2).
  2. Store (RAM[st1], RAM[st1+1], RAM[st1+2]) in (hv3, hv4, hv5).
  3. Add (hv0 + hv1·x + hv2·x²) · (hv3 + hv4·x + hv5·x²) into (st2, st3, st4).
  4. Increase the pointers st0 and st1 by 3 each.
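The four steps can be modeled directly, with RAM as a dictionary and X-field multiplication modulo the Shah polynomial x³ - x + 1. A sketch for illustration, not Triton VM's implementation:

```python
# Illustrative model of one xx_dot_step. RAM is a plain dict; the
# accumulator lives in (st2, st3, st4); st0 and st1 are RAM pointers.
P = 0xFFFFFFFF00000001  # the Oxfoi prime

def xadd(a, b):
    return tuple((x + y) % P for x, y in zip(a, b))

def xmul(a, b):
    # schoolbook product reduced via x^3 = x - 1 and x^4 = x^2 - x
    a0, a1, a2 = a
    b0, b1, b2 = b
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a0 * b2 + a1 * b1 + a2 * b0
    c3, c4 = a1 * b2 + a2 * b1, a2 * b2
    return ((c0 - c3) % P, (c1 + c3 - c4) % P, (c2 + c4) % P)

def xx_dot_step(ram, st0, st1, acc):
    lhs = (ram[st0], ram[st0 + 1], ram[st0 + 2])  # hv0, hv1, hv2
    rhs = (ram[st1], ram[st1 + 1], ram[st1 + 2])  # hv3, hv4, hv5
    return st0 + 3, st1 + 3, xadd(acc, xmul(lhs, rhs))
```

Iterating this function computes an inner product of two vectors of X-field elements stored in RAM, one element pair per instruction.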

Instruction xb_dot_step

In addition to its instruction groups, this instruction has the following constraints.

Description

  1. Store RAM[st0] in hv0.
  2. Store (RAM[st1], RAM[st1+1], RAM[st1+2]) in (hv1, hv2, hv3).
  3. Add hv0 · (hv1 + hv2·x + hv3·x²) into (st2, st3, st4).
  4. Increase the pointers st0 and st1 by 1 and 3, respectively.

Operational Stack Table

The operational stack1 is where the program stores simple elementary operations, function arguments, and pointers to important objects. There are 16 registers (st0 through st15) that the program can access directly. These registers correspond to the top of the stack. They are recorded in the Processor Table.

The rest of the op stack is stored in a dedicated memory object called “operational stack underflow memory”. It is initially empty. The evolution of the underflow memory is recorded in the Op Stack Table. The sole task of the Op Stack Table is to keep underflow memory immutable. To achieve this, any read or write accesses to the underflow memory are recorded in the Op Stack Table. Read and write accesses to op stack underflow memory are a side effect of shrinking or growing the op stack.

Main Columns

The Op Stack Table consists of 4 columns:

  1. the cycle counter clk
  2. the shrink stack indicator shrink_stack, corresponding to the processor's instruction bit 1 ib1,
  3. the op stack pointer stack_pointer, and
  4. the first underflow element first_underflow_element.
| Clock | Shrink Stack Indicator | Stack Pointer | First Underflow Element |
|:-----:|:----------------------:|:-------------:|:-----------------------:|

Column clk records the processor's execution cycle during which the read or write access happens. The shrink_stack indicator signals whether the underflow memory access is a read or a write: a read corresponds to a shrinking stack and is indicated by a 1; a write corresponds to a growing stack and is indicated by a 0. The same column doubles as a padding indicator, in which case shrink_stack is set to 2. The stack_pointer is the address at which the to-be-read-or-written element resides. Finally, the first_underflow_element is the stack element being transferred from stack register st15 into underflow memory on a write, or the other way around on a read.

A subset Permutation Argument with the Processor Table establishes that the rows recorded in the Op Stack Table are consistent with the processor's part of the op stack.

In order to guarantee memory consistency, the rows of the operational stack table are sorted by stack_pointer first, cycle count clk second. The mechanics are best illustrated by an example. Observe how the malicious manipulation of the Op Stack Underflow Memory, the change of “42” into “99” in cycle 8, is detected: The transition constraints of the Op Stack Table stipulate that if stack_pointer does not change, then the first_underflow_element can only change if the next instruction grows the stack. Consequently, row [4, 0, 8, 42] being followed by row [10, 1, 8, 99] violates the constraints. The shrink stack indicator being correct is guaranteed by the Permutation Argument between Op Stack Table and the Processor Table.
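The detection logic just described can be sketched as a plain predicate over consecutive rows of the sorted table (illustration only; the actual check is expressed as AIR constraints):

```python
# Sketch of the transition property that catches the manipulation:
# within a region of constant stack_pointer, first_underflow_element may
# only change if the next row corresponds to a growing stack
# (shrink_stack == 0). Rows are (clk, shrink_stack, stack_pointer,
# first_underflow_element). Illustration only, not the AIR itself.
def transition_ok(row, nxt):
    _clk, _shrink, sp, fue = row
    _clk_next, shrink_next, sp_next, fue_next = nxt
    if sp_next == sp + 1:
        return True        # the stack pointer may grow by one
    if sp_next != sp:
        return False       # any other pointer change is forbidden
    return fue_next == fue or shrink_next == 0
```

Row (4, 0, 8, 42) followed by (10, 1, 8, 99) fails the check, exposing the tampered value.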

For illustrative purposes only, we use four stack registers st0 through st3 in the example. Triton VM has 16 stack registers, st0 through st15. Furthermore, implied next instructions usually recorded in register “next instruction or argument” nia are omitted for reasons of readability.

Processor's execution trace:

| clk | ci   | nia | st0 | st1 | st2 | st3 | Op Stack Underflow Memory | op_stack_pointer |
|----:|:-----|----:|----:|----:|----:|----:|:--------------------------|-----------------:|
| 0   | push | 42  | 0   | 0   | 0   | 0   | []                        | 4                |
| 1   | push | 43  | 42  | 0   | 0   | 0   | [0]                       | 5                |
| 2   | push | 44  | 43  | 42  | 0   | 0   | [0, 0]                    | 6                |
| 3   | push | 45  | 44  | 43  | 42  | 0   | [0, 0, 0]                 | 7                |
| 4   | push | 46  | 45  | 44  | 43  | 42  | [0, 0, 0, 0]              | 8                |
| 5   | push | 47  | 46  | 45  | 44  | 43  | [42, 0, 0, 0, 0]          | 9                |
| 6   | push | 48  | 47  | 46  | 45  | 44  | [43, 42, 0, 0, 0, 0]      | 10               |
| 7   | nop  |     | 48  | 47  | 46  | 45  | [44, 43, 42, 0, 0, 0]     | 11               |
| 8   | pop  |     | 48  | 47  | 46  | 45  | [44, 43, 99, 0, 0, 0]     | 11               |
| 9   | pop  |     | 47  | 46  | 45  | 44  | [43, 99, 0, 0, 0, 0]      | 10               |
| 10  | pop  |     | 46  | 45  | 44  | 43  | [99, 0, 0, 0, 0]          | 9                |
| 11  | pop  |     | 45  | 44  | 43  | 99  | [0, 0, 0, 0]              | 8                |
| 12  | push | 77  | 44  | 43  | 99  | 0   | [0, 0, 0]                 | 7                |
| 13  | swap | 3   | 77  | 44  | 43  | 99  | [0, 0, 0, 0]              | 8                |
| 14  | push | 78  | 99  | 44  | 43  | 77  | [0, 0, 0, 0]              | 8                |
| 15  | swap | 3   | 78  | 99  | 44  | 43  | [77, 0, 0, 0, 0]          | 9                |
| 16  | push | 79  | 43  | 99  | 44  | 78  | [77, 0, 0, 0, 0]          | 9                |
| 17  | pop  |     | 79  | 43  | 99  | 44  | [78, 77, 0, 0, 0, 0]      | 10               |
| 18  | pop  |     | 43  | 99  | 44  | 78  | [77, 0, 0, 0, 0]          | 9                |
| 19  | pop  |     | 99  | 44  | 78  | 77  | [0, 0, 0, 0]              | 8                |
| 20  | pop  |     | 44  | 78  | 77  | 0   | [0, 0, 0]                 | 7                |
| 21  | pop  |     | 78  | 77  | 0   | 0   | [0, 0]                    | 6                |
| 22  | pop  |     | 77  | 0   | 0   | 0   | [0]                       | 5                |
| 23  | halt |     | 0   | 0   | 0   | 0   | []                        | 4                |

Operational Stack Table:

| clk | shrink_stack | stack_pointer | first_underflow_element |
|----:|-------------:|--------------:|------------------------:|
| 0   | 0            | 4             | 0                       |
| 22  | 1            | 4             | 0                       |
| 1   | 0            | 5             | 0                       |
| 21  | 1            | 5             | 0                       |
| 2   | 0            | 6             | 0                       |
| 20  | 1            | 6             | 0                       |
| 3   | 0            | 7             | 0                       |
| 11  | 1            | 7             | 0                       |
| 12  | 0            | 7             | 0                       |
| 19  | 1            | 7             | 0                       |
| 4   | 0            | 8             | 42                      |
| 10  | 1            | 8             | 99                      |
| 14  | 0            | 8             | 77                      |
| 18  | 1            | 8             | 77                      |
| 5   | 0            | 9             | 43                      |
| 9   | 1            | 9             | 43                      |
| 16  | 0            | 9             | 78                      |
| 17  | 1            | 9             | 78                      |
| 6   | 0            | 10            | 44                      |
| 8   | 1            | 10            | 44                      |

Auxiliary Columns

The Op Stack Table has 2 auxiliary columns, rppa and ClockJumpDifferenceLookupClientLogDerivative.

  1. A Permutation Argument establishes that the rows of the Op Stack Table correspond to the rows of the Processor Table. The running product for this argument is contained in the rppa column.
  2. In order to achieve memory consistency, a Lookup Argument shows that all clock jump differences are contained in the clk column of the Processor Table. The logarithmic derivative for this argument is contained in the ClockJumpDifferenceLookupClientLogDerivative column.

Padding

The last row in the Op Stack Table is taken as the padding template row. Should the Op Stack Table be empty, the row (clk, shrink_stack, stack_pointer, first_underflow_element) = (0, 0, 16, 0) is used instead. In the template row, the shrink_stack indicator is set to 2, signifying padding. The template row is inserted below the last row until the desired padded height is reached.

Memory-Consistency

Memory-consistency follows from two more primitive properties:

  1. Contiguity of regions of constant memory pointer. Since in the Op Stack Table the stack_pointer can change by at most one per cycle, it is possible to enforce a full sorting using AIR constraints.
  2. Correct inner-sorting within contiguous regions. Specifically, the rows within each contiguous region of constant memory pointer should be sorted for clock cycle. This property is established by the clock jump difference Lookup Argument. In a nutshell, every difference of consecutive clock cycles that occurs within one contiguous block of constant memory pointer is shown itself to be a valid clock cycle through a separate cross-table argument.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements.

Initial Constraints

  1. The stack_pointer is the number of available stack registers, i.e., 16.
  2. If the row is not a padding row, the running product for the permutation argument with the Processor Table rppa starts off having accumulated the first row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤. Otherwise, it is the Permutation Argument's default initial, i.e., 1.
  3. The logarithmic derivative for the clock jump difference lookup ClockJumpDifferenceLookupClientLogDerivative is 0.

Initial Constraints as Polynomials

  1. stack_pointer - 16
  2. (shrink_stack - 2)·(rppa - (🪤 - 🍋·clk - 🍊·shrink_stack - 🍉·stack_pointer - 🫒·first_underflow_element))
    + (shrink_stack - 0)·(shrink_stack - 1)·(rppa - 1)
  3. ClockJumpDifferenceLookupClientLogDerivative

Consistency Constraints

None.

Transition Constraints

  1. Either
    • the stack_pointer increases by 1, or
    • the stack_pointer does not change AND the first_underflow_element does not change, or
    • the stack_pointer does not change AND the shrink stack indicator shrink_stack in the next row is 0.
  2. If the next row is not a padding row, the running product for the permutation argument with the Processor Table rppa absorbs the next row with respect to challenges 🍋, 🍊, 🍉, and 🫒 and indeterminate 🪤. Otherwise, the running product remains unchanged.
  3. If the current row is a padding row, then the next row is a padding row.
  4. If the next row is not a padding row and the op stack pointer stack_pointer does not change, then the logarithmic derivative for the clock jump difference lookup ClockJumpDifferenceLookupClientLogDerivative accumulates a factor (clk' - clk) relative to indeterminate 🪞. Otherwise, it remains unchanged.

Written as Disjunctive Normal Form, the same constraints can be expressed as:

  1. The stack_pointer increases by 1 or the stack_pointer does not change.
  2. The stack_pointer increases by 1 or the first_underflow_element does not change or the shrink stack indicator shrink_stack in the next row is 0.
  3. Both of the following:
    • the next row is a padding row or rppa has accumulated the next row, and
    • the next row is not a padding row or rppa remains unchanged.
  4. The current row is not a padding row or the next row is a padding row.
  5. All of the following:
    • the stack_pointer changes or the next row is a padding row or the logarithmic derivative accumulates a summand,
    • the stack_pointer remains unchanged or the logarithmic derivative remains unchanged, and
    • the next row is not a padding row or the logarithmic derivative remains unchanged.

Transition Constraints as Polynomials

  1. (stack_pointer' - stack_pointer - 1)·(stack_pointer' - stack_pointer)
  2. (stack_pointer' - stack_pointer - 1)·(first_underflow_element' - first_underflow_element)·(shrink_stack' - 0)
  3. (shrink_stack' - 2)·(rppa' - rppa·(🪤 - 🍋·clk' - 🍊·shrink_stack' - 🍉·stack_pointer' - 🫒·first_underflow_element'))
    + (shrink_stack' - 0)·(shrink_stack' - 1)·(rppa' - rppa)
  4. (shrink_stack - 0)·(shrink_stack - 1)·(shrink_stack' - 2)
  5. (stack_pointer' - stack_pointer - 1)·(shrink_stack' - 2)·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative) · (🪞 - clk' + clk) - 1)
    + (stack_pointer' - stack_pointer)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
    + (shrink_stack' - 0)·(shrink_stack' - 1)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)

Terminal Constraints

None.


1

frequently abbreviated as “Op Stack”

Random Access Memory Table

The purpose of the RAM Table is to ensure that the RAM is accessed in a consistent manner. That is, all mutation of RAM happens explicitly through instruction write_mem, and any invocations of instruction read_mem return the values last written.

The fundamental principle is identical to that of the Op Stack Table. The main difference is the absence of a dedicated stack pointer. Instead, op stack element st0 is used as RAM pointer ram_pointer.

If some RAM address is read from before it is written to, the corresponding value is not determined. This is one of the interfaces for non-deterministic input in Triton VM. Consecutive reads from any such address always return the same value (until overwritten via write_mem).

Main Columns

The RAM Table has 7 main columns:

  1. the cycle counter clk,
  2. the executed instruction_type – 0 for “write”, 1 for “read”, 2 for padding rows,
  3. RAM pointer ram_pointer,
  4. RAM value ram_value,
  5. helper variable "inverse of ram_pointer difference" iord,
  6. Bézout coefficient polynomial coefficient 0 bcpc0,
  6. Bézout coefficient polynomial coefficient 0 bcpc0, and
  7. Bézout coefficient polynomial coefficient 1 bcpc1.
Column iord helps with detecting a change of ram_pointer across two RAM Table rows. The function of iord is best explained in the context of sorting the RAM Table's rows, which is what the next section is about.

The Bézout coefficient polynomial coefficients bcpc0 and bcpc1 represent the coefficients of polynomials that are needed for the contiguity argument. This argument establishes that all regions of constant ram_pointer are contiguous.

Auxiliary Columns

The RAM Table has 6 auxiliary columns:

  1. RunningProductOfRAMP, accumulating next row's ram_pointer as a root whenever ram_pointer changes between two rows,
  2. FormalDerivative, the (evaluated) formal derivative of RunningProductOfRAMP,
  3. BezoutCoefficient0, the (evaluated) polynomial with main column bcpc0 as coefficients,
  4. BezoutCoefficient1, the (evaluated) polynomial with main column bcpc1 as coefficients,
  5. RunningProductPermArg, the Permutation Argument with the Processor Table, and
  6. ClockJumpDifferenceLookupClientLogDerivative, part of memory consistency.

Columns RunningProductOfRAMP, FormalDerivative, BezoutCoefficient0, and BezoutCoefficient1 are part of the Contiguity Argument.

Sorting Rows

In the RAM Table, the rows are arranged such that they

  1. form contiguous regions of ram_pointer, and
  2. are sorted by cycle counter clk within each such region.

One way to achieve this is to sort by ram_pointer first, clk second.

Coming back to iord: if the difference between ram_pointer in row i and row i + 1 is 0, then iord in row i is 0. Otherwise, iord in row i is the multiplicative inverse of the difference between ram_pointer in row i + 1 and ram_pointer in row i. In the last row, there being no next row, iord is 0.
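Computing iord from a sorted ram_pointer column is a direct application of this rule. A Python sketch using Fermat inversion in the B-field (illustration only):

```python
# Deriving the iord column from a sorted ram_pointer column. Repeated
# pointers (and the last row) get 0; otherwise iord is the B-field
# inverse of the pointer difference. Sketch for illustration only.
P = 0xFFFFFFFF00000001  # the Oxfoi prime

def iord_column(ram_pointers: list[int]) -> list[int]:
    col = []
    for cur, nxt in zip(ram_pointers, ram_pointers[1:]):
        diff = (nxt - cur) % P
        col.append(0 if diff == 0 else pow(diff, P - 2, P))  # Fermat inverse
    col.append(0)  # the last row has no successor
    return col
```

For the pointer column 42, 42, 43 this yields 0, 1, 0: the repeated pointer gives 0, the difference 1 is its own inverse, and the last row gets 0.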

An example of the mechanics can be found below. To increase visual clarity, stack registers holding value “0” are represented by an empty cell. For illustrative purposes only, we use six stack registers st0 through st5 in the example. Triton VM has 16 stack registers, st0 through st15.

Processor Table:

| clk | ci        | nia | st0 | st1 | st2 | st3 | st4 | st5 |
|----:|:----------|----:|----:|----:|----:|----:|----:|----:|
| 0   | push      | 20  |     |     |     |     |     |     |
| 1   | push      | 100 | 20  |     |     |     |     |     |
| 2   | write_mem | 1   | 100 | 20  |     |     |     |     |
| 3   | pop       | 1   | 101 |     |     |     |     |     |
| 4   | push      | 5   |     |     |     |     |     |     |
| 5   | push      | 6   | 5   |     |     |     |     |     |
| 6   | push      | 7   | 6   | 5   |     |     |     |     |
| 7   | push      | 8   | 7   | 6   | 5   |     |     |     |
| 8   | push      | 9   | 8   | 7   | 6   | 5   |     |     |
| 9   | push      | 42  | 9   | 8   | 7   | 6   | 5   |     |
| 10  | write_mem | 5   | 42  | 9   | 8   | 7   | 6   | 5   |
| 11  | pop       | 1   | 47  |     |     |     |     |     |
| 12  | push      | 42  |     |     |     |     |     |     |
| 13  | read_mem  | 1   | 42  |     |     |     |     |     |
| 14  | pop       | 2   | 41  | 9   |     |     |     |     |
| 15  | push      | 45  |     |     |     |     |     |     |
| 16  | read_mem  | 3   | 45  |     |     |     |     |     |
| 17  | pop       | 4   | 42  | 8   | 7   | 6   |     |     |
| 18  | push      | 17  |     |     |     |     |     |     |
| 19  | push      | 18  | 17  |     |     |     |     |     |
| 20  | push      | 19  | 18  | 17  |     |     |     |     |
| 21  | push      | 43  | 19  | 18  | 17  |     |     |     |
| 22  | write_mem | 3   | 43  | 19  | 18  | 17  |     |     |
| 23  | pop       | 1   | 46  |     |     |     |     |     |
| 24  | push      | 46  |     |     |     |     |     |     |
| 25  | read_mem  | 5   | 46  |     |     |     |     |     |
| 26  | pop       | 1   | 41  | 9   | 19  | 18  | 17  | 5   |
| 27  | pop       | 5   | 9   | 19  | 18  | 17  | 5   |     |
| 28  | push      | 42  |     |     |     |     |     |     |
| 29  | read_mem  | 1   | 42  |     |     |     |     |     |
| 30  | pop       | 2   | 41  | 9   |     |     |     |     |
| 31  | push      | 100 |     |     |     |     |     |     |
| 32  | read_mem  | 1   | 100 |     |     |     |     |     |
| 33  | pop       | 2   | 99  | 20  |     |     |     |     |
| 34  | halt      |     |     |     |     |     |     |     |

RAM Table:

| clk | type  | pointer | value | iord |
|----:|:------|--------:|------:|:-----|
| 10  | write | 42      | 9     | 0    |
| 13  | read  | 42      | 9     | 0    |
| 25  | read  | 42      | 9     | 0    |
| 29  | read  | 42      | 9     | 1    |
| 10  | write | 43      | 8     | 0    |
| 16  | read  | 43      | 8     | 0    |
| 22  | write | 43      | 19    | 0    |
| 25  | read  | 43      | 19    | 1    |
| 10  | write | 44      | 7     | 0    |
| 16  | read  | 44      | 7     | 0    |
| 22  | write | 44      | 18    | 0    |
| 25  | read  | 44      | 18    | 1    |
| 10  | write | 45      | 6     | 0    |
| 16  | read  | 45      | 6     | 0    |
| 22  | write | 45      | 17    | 0    |
| 25  | read  | 45      | 17    | 1    |
| 10  | write | 46      | 5     | 0    |
| 25  | read  | 46      | 5     | 54⁻¹ |
| 2   | write | 100     | 20    | 0    |
| 32  | read  | 100     | 20    | 0    |

Padding

The row used for padding the RAM Table is its last row, with the instruction_type set to 2.

If the RAM Table is empty, the all-zero row with the following modifications is used instead:

  • instruction_type is set to 2, and
  • bcpc1 is set to 1.

This ensures that the Contiguity Argument works correctly in the absence of any actual contiguous RAM pointer region.

The padding row is inserted below the RAM Table until the desired height is reached.

Row Permutation Argument

The permutation argument with the Processor Table establishes that the RAM Table's rows correspond to the Processor Table's sent and received RAM values, at the correct cycle counter and RAM address.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements.

Initial Constraints

  1. The first coefficient of the Bézout coefficient polynomial 0 bcpc0 is 0.
  2. The Bézout coefficient 0 bc0 is 0.
  3. The Bézout coefficient 1 bc1 is equal to the first coefficient of the Bézout coefficient polynomial bcpc1.
  4. The running product polynomial RunningProductOfRAMP starts with 🧼 - ram_pointer.
  5. The formal derivative starts with 1.
  6. If the first row is not a padding row, the running product for the permutation argument with the Processor Table RunningProductPermArg has absorbed the first row with respect to challenges 🍍, 🍈, 🍎, and 🌽 and indeterminate 🛋.
    Else, the running product for the permutation argument with the Processor Table RunningProductPermArg is 1.
  7. The logarithmic derivative for the clock jump difference lookup ClockJumpDifferenceLookupClientLogDerivative is 0.

Initial Constraints as Polynomials

  1. bcpc0
  2. bc0
  3. bc1 - bcpc1
  4. RunningProductOfRAMP - 🧼 + ram_pointer
  5. FormalDerivative - 1
  6. (RunningProductPermArg - 🛋 - 🍍·clk - 🍈·ram_pointer - 🍎·ram_value - 🌽·instruction_type)·(instruction_type - 2)
    + (RunningProductPermArg - 1)·(instruction_type - 1)·(instruction_type - 0)
  7. ClockJumpDifferenceLookupClientLogDerivative

Consistency Constraints

None.

Transition Constraints

  1. If the current row is a padding row, then the next row is a padding row.
  2. The “inverse of ram_pointer difference” iord is 0 or iord is the inverse of the difference between current and next row's ram_pointer.
  3. The ram_pointer changes or iord is the inverse of the difference between current and next row's ram_pointer.
  4. The ram_pointer changes or instruction_type is “write” or the ram_value remains unchanged.
  5. The bcpc0 can only change if the ram_pointer changes.
  6. The bcpc1 can only change if the ram_pointer changes.
  7. If the ram_pointer changes, the RunningProductOfRAMP accumulates next ram_pointer.
    Otherwise, it remains unchanged.
  8. If the ram_pointer changes, the FormalDerivative updates under the product rule of differentiation.
    Otherwise, it remains unchanged.
  9. If the ram_pointer changes, Bézout coefficient 0 bc0 updates according to the running evaluation rules with respect to bcpc0.
    Otherwise, it remains unchanged.
  10. If the ram_pointer changes, Bézout coefficient 1 bc1 updates according to the running evaluation rules with respect to bcpc1.
    Otherwise, it remains unchanged.
  11. If the next row is not a padding row, the RunningProductPermArg accumulates the next row.
    Otherwise, it remains unchanged.
  12. If the ram_pointer does not change and the next row is not a padding row, the ClockJumpDifferenceLookupClientLogDerivative accumulates the difference of clk.
    Otherwise, it remains unchanged.

Transition Constraints as Polynomials

  1. (instruction_type - 0)·(instruction_type - 1)·(instruction_type' - 2)
  2. (iord·(ram_pointer' - ram_pointer) - 1)·iord
  3. (iord·(ram_pointer' - ram_pointer) - 1)·(ram_pointer' - ram_pointer)
  4. (iord·(ram_pointer' - ram_pointer) - 1)·(instruction_type - 0)·(ram_value' - ram_value)
  5. (iord·(ram_pointer' - ram_pointer) - 1)·(bcpc0' - bcpc0)
  6. (iord·(ram_pointer' - ram_pointer) - 1)·(bcpc1' - bcpc1)
  7. (iord·(ram_pointer' - ram_pointer) - 1)·(RunningProductOfRAMP' - RunningProductOfRAMP)
    + (ram_pointer' - ram_pointer)·(RunningProductOfRAMP' - RunningProductOfRAMP·(ram_pointer'-🧼))
  8. (iord·(ram_pointer' - ram_pointer) - 1)·(FormalDerivative' - FormalDerivative)
    + (ram_pointer' - ram_pointer)·(FormalDerivative' - FormalDerivative·(ram_pointer'-🧼) - RunningProductOfRAMP)
  9. (iord·(ram_pointer' - ram_pointer) - 1)·(bc0' - bc0)
    + (ram_pointer' - ram_pointer)·(bc0' - bc0·🧼 - bcpc0')
  10. (iord·(ram_pointer' - ram_pointer) - 1)·(bc1' - bc1)
    + (ram_pointer' - ram_pointer)·(bc1' - bc1·🧼 - bcpc1')
  11. (RunningProductPermArg' - RunningProductPermArg·(🛋 - 🍍·clk' - 🍈·ram_pointer' - 🍎·ram_value' - 🌽·instruction_type'))·(instruction_type' - 2)
    + (RunningProductPermArg' - RunningProductPermArg)·(instruction_type' - 1)·(instruction_type' - 0)
  12. (iord·(ram_pointer' - ram_pointer) - 1)·(instruction_type' - 2)·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative) · (🪞 - clk' + clk) - 1)
    + (ram_pointer' - ram_pointer)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
    + (instruction_type' - 1)·(instruction_type' - 0)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)

Terminal Constraints

  1. The Bézout relation holds between RunningProductOfRAMP, FormalDerivative, bc0, and bc1.

Terminal Constraints as Polynomials

  1. RunningProductOfRAMP·bc0 + FormalDerivative·bc1 - 1
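A toy version of this Bézout relation can be computed with naive polynomial arithmetic over a small prime. The tiny prime and the concrete pointer values below are stand-ins for illustration, not Triton VM's field or implementation:

```python
# Toy illustration of the terminal Bezout relation: rp has the distinct
# RAM pointers as roots, fd is its formal derivative. Since rp is
# squarefree, gcd(rp, fd) = 1, so bc0, bc1 with rp*bc0 + fd*bc1 = 1 exist.
P = 101  # tiny stand-in prime; coefficient lists are lowest-degree first

def trim(f):                      # drop leading zero coefficients
    while f and f[-1] == 0:
        f.pop()
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % P
                 for i in range(n)])

def pmul(f, g):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return trim(out)

def pdivmod(f, g):
    r, q = f[:], [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], P - 2, P)    # invert g's leading coefficient
    while trim(r) and len(r) >= len(g):
        shift = len(r) - len(g)
        c = r[-1] * inv % P
        q[shift] = c
        for i, b in enumerate(g):
            r[shift + i] = (r[shift + i] - c * b) % P
    return trim(q), r

def pextgcd(f, g):
    """Return (gcd, s, t) with s*f + t*g = gcd."""
    r0, r1, s0, s1, t0, t1 = f, g, [1], [], [], [1]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, padd(s0, pmul([P - 1], pmul(q, s1)))  # s0 - q*s1
        t0, t1 = t1, padd(t0, pmul([P - 1], pmul(q, t1)))  # t0 - q*t1
    return r0, s0, t0

pointers = [3, 7, 9]              # distinct RAM pointers, stand-in values
rp = [1]
for a in pointers:
    rp = pmul(rp, [(-a) % P, 1])  # multiply by (X - a)
fd = trim([(i * c) % P for i, c in enumerate(rp)][1:])  # formal derivative
g, bc0, bc1 = pextgcd(rp, fd)
inv = pow(g[0], P - 2, P)         # normalize the constant gcd to 1
bc0 = [c * inv % P for c in bc0]
bc1 = [c * inv % P for c in bc1]
# terminal constraint: RunningProductOfRAMP*bc0 + FormalDerivative*bc1 - 1 == 0
assert padd(pmul(rp, bc0), pmul(fd, bc1)) == [1]
```

With the Oxfoi prime and the actual RAM pointers the mechanics are identical; only the coefficient arithmetic is heavier.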

Jump Stack Table

The Jump Stack Memory contains the underflow from the Jump Stack.

Main Columns

The Jump Stack Table consists of 5 columns:

  1. the cycle counter clk
  2. current instruction ci,
  3. the jump stack pointer jsp,
  4. the last jump's origin jso, and
  5. the last jump's destination jsd.
| Clock | Current Instruction | Jump Stack Pointer | Jump Stack Origin | Jump Stack Destination |
|:-----:|:-------------------:|:------------------:|:-----------------:|:----------------------:|

The rows are sorted by jump stack pointer jsp, then by cycle counter clk. The column jsd contains the destination of the stack-extending jump (call) as well as of the no-stack-change jump (recurse); the column jso contains the source of the stack-extending jump (call) or, equivalently, the destination of the stack-shrinking jump (return).

The AIR for this table guarantees that the return address held in any single cell of jump stack memory can only change through instruction call.

An example program, execution trace, and jump stack table are shown below.

Program:

| address | instruction |
|--------:|:------------|
| 0x00    | foo         |
| 0x01    | bar         |
| 0x02    | call        |
| 0x03    | 0xA0        |
| 0x04    | buzz        |
| 0x05    | bar         |
| 0x06    | call        |
| 0x07    | 0xB0        |
| 0x08    | foo         |
| 0x09    | bar         |
| 0xA0    | buzz        |
| 0xA1    | foo         |
| 0xA2    | bar         |
| 0xA3    | return      |
| 0xA4    | foo         |
| 0xB0    | foo         |
| 0xB1    | call        |
| 0xB2    | 0xC0        |
| 0xB3    | return      |
| 0xB4    | bazz        |
| 0xC0    | buzz        |
| 0xC1    | foo         |
| 0xC2    | bar         |
| 0xC3    | return      |
| 0xC4    | buzz        |

Execution trace:

| clk | ip   | ci     | nia    | jsp | jso  | jsd  | jump stack                   |
|----:|:-----|:-------|:-------|----:|:-----|:-----|:-----------------------------|
| 0   | 0x00 | foo    | bar    | 0   | 0x00 | 0x00 | [ ]                          |
| 1   | 0x01 | bar    | call   | 0   | 0x00 | 0x00 | [ ]                          |
| 2   | 0x02 | call   | 0xA0   | 0   | 0x00 | 0x00 | [ ]                          |
| 3   | 0xA0 | buzz   | foo    | 1   | 0x04 | 0xA0 | [(0x04, 0xA0)]               |
| 4   | 0xA1 | foo    | bar    | 1   | 0x04 | 0xA0 | [(0x04, 0xA0)]               |
| 5   | 0xA2 | bar    | return | 1   | 0x04 | 0xA0 | [(0x04, 0xA0)]               |
| 6   | 0xA3 | return | foo    | 1   | 0x04 | 0xA0 | [(0x04, 0xA0)]               |
| 7   | 0x04 | buzz   | bar    | 0   | 0x00 | 0x00 | [ ]                          |
| 8   | 0x05 | bar    | call   | 0   | 0x00 | 0x00 | [ ]                          |
| 9   | 0x06 | call   | 0xB0   | 0   | 0x00 | 0x00 | [ ]                          |
| 10  | 0xB0 | foo    | call   | 1   | 0x08 | 0xB0 | [(0x08, 0xB0)]               |
| 11  | 0xB1 | call   | 0xC0   | 1   | 0x08 | 0xB0 | [(0x08, 0xB0)]               |
| 12  | 0xC0 | buzz   | foo    | 2   | 0xB3 | 0xC0 | [(0x08, 0xB0), (0xB3, 0xC0)] |
| 13  | 0xC1 | foo    | bar    | 2   | 0xB3 | 0xC0 | [(0x08, 0xB0), (0xB3, 0xC0)] |
| 14  | 0xC2 | bar    | return | 2   | 0xB3 | 0xC0 | [(0x08, 0xB0), (0xB3, 0xC0)] |
| 15  | 0xC3 | return | buzz   | 2   | 0xB3 | 0xC0 | [(0x08, 0xB0), (0xB3, 0xC0)] |
| 16  | 0xB3 | return | bazz   | 1   | 0x08 | 0xB0 | [(0x08, 0xB0)]               |
| 17  | 0x08 | foo    | bar    | 0   | 0x00 | 0x00 | [ ]                          |

Jump Stack Table:

| clk | ci     | jsp | jso  | jsd  |
|----:|:-------|----:|:-----|:-----|
| 0   | foo    | 0   | 0x00 | 0x00 |
| 1   | bar    | 0   | 0x00 | 0x00 |
| 2   | call   | 0   | 0x00 | 0x00 |
| 7   | buzz   | 0   | 0x00 | 0x00 |
| 8   | bar    | 0   | 0x00 | 0x00 |
| 9   | call   | 0   | 0x00 | 0x00 |
| 17  | foo    | 0   | 0x00 | 0x00 |
| 3   | buzz   | 1   | 0x04 | 0xA0 |
| 4   | foo    | 1   | 0x04 | 0xA0 |
| 5   | bar    | 1   | 0x04 | 0xA0 |
| 6   | return | 1   | 0x04 | 0xA0 |
| 10  | foo    | 1   | 0x08 | 0xB0 |
| 11  | call   | 1   | 0x08 | 0xB0 |
| 16  | return | 1   | 0x08 | 0xB0 |
| 12  | buzz   | 2   | 0xB3 | 0xC0 |
| 13  | foo    | 2   | 0xB3 | 0xC0 |
| 14  | bar    | 2   | 0xB3 | 0xC0 |
| 15  | return | 2   | 0xB3 | 0xC0 |

Auxiliary Columns

The Jump Stack Table has 2 auxiliary columns, rppa and ClockJumpDifferenceLookupClientLogDerivative.

  1. A Permutation Argument establishes that the rows of the Jump Stack Table match with the rows in the Processor Table. The running product for this argument is contained in the rppa column.
  2. In order to achieve memory consistency, a Lookup Argument shows that all clock jump differences are contained in the clk column of the Processor Table. The logarithmic derivative for this argument is contained in the ClockJumpDifferenceLookupClientLogDerivative column.

Padding

A padding row is a direct copy of the Jump Stack Table's row with the highest value for column clk, called template row, with the exception of the cycle count column clk. In a padding row, the value of column clk is 1 greater than the value of column clk in the template row. The padding row is inserted right below the template row. These steps are repeated until the desired padded height is reached. In total, above steps ensure that the Permutation Argument between the Jump Stack Table and the Processor Table holds up.

Memory-Consistency

Memory-consistency follows from two more primitive properties:

  1. Contiguity of regions of constant memory pointer. Since the memory pointer for the Jump Stack Table, jsp, can change by at most 1 per cycle, it is possible to enforce a full sorting using AIR constraints.
  2. Correct inner-sorting within contiguous regions. Specifically, the rows within each contiguous region of constant memory pointer should be sorted by clock cycle. This property is established by the clock jump difference Lookup Argument. In a nutshell, every difference of consecutive clock cycles that occurs within one contiguous block of constant memory pointer is shown itself to be a valid clock cycle through a separate cross-table argument.
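A minimal sketch of these two checks on a claimed, sorted Jump Stack Table (Python; the row layout is illustrative):

```python
def is_memory_consistent_sorting(rows):
    """Check the two properties underlying memory-consistency:
    1. contiguity: jsp changes by 0 or 1 between consecutive rows, and
    2. inner-sorting: within a region of constant jsp, clk strictly increases."""
    for prev, cur in zip(rows, rows[1:]):
        jsp_diff = cur["jsp"] - prev["jsp"]
        if jsp_diff not in (0, 1):
            return False  # regions of constant memory pointer must be contiguous
        if jsp_diff == 0 and cur["clk"] <= prev["clk"]:
            return False  # not sorted by clock cycle within the region
    return True
```

The actual AIR cannot "return False"; instead, the first property is enforced by transition constraints on jsp, and the second by the clock jump difference Lookup Argument.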

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of the X-field.

Initial Constraints

  1. Cycle count clk is 0.
  2. Jump Stack Pointer jsp is 0.
  3. Jump Stack Origin jso is 0.
  4. Jump Stack Destination jsd is 0.
  5. The running product for the permutation argument with the Processor Table rppa has absorbed the first row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
  6. The logarithmic derivative for the clock jump difference lookup ClockJumpDifferenceLookupClientLogDerivative is 0.

Initial Constraints as Polynomials

  1. clk
  2. jsp
  3. jso
  4. jsd
  5. rppa - (🧴 - 🍇·clk - 🍅·ci - 🍌·jsp - 🍏·jso - 🍐·jsd)
  6. ClockJumpDifferenceLookupClientLogDerivative

Consistency Constraints

None.

Transition Constraints

  1. The jump stack pointer jsp increases by 1, or
  2. (jsp does not change and jso does not change and jsd does not change and the cycle counter clk increases by 1), or
  3. (jsp does not change and jso does not change and jsd does not change and the current instruction ci is call), or
  4. (jsp does not change and the current instruction ci is return), or
  5. (jsp does not change and the current instruction ci is recurse_or_return).
  6. The running product for the permutation argument rppa absorbs the next row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
  7. If the jump stack pointer jsp does not change, then the logarithmic derivative for the clock jump difference lookup ClockJumpDifferenceLookupClientLogDerivative accumulates a factor (clk' - clk) relative to indeterminate 🪞. Otherwise, it remains the same.

Written as Disjunctive Normal Form, the same constraints can be expressed as:

  1. The jump stack pointer jsp increases by 1 or the jump stack pointer jsp does not change
  2. The jump stack pointer jsp increases by 1 or the jump stack origin jso does not change or current instruction ci is return or recurse_or_return
  3. The jump stack pointer jsp increases by 1 or the jump stack destination jsd does not change or current instruction ci is return or recurse_or_return
  4. The jump stack pointer jsp increases by 1 or the cycle count clk increases by 1 or current instruction ci is call or current instruction ci is return or recurse_or_return
  5. The running product for the permutation argument rppa absorbs the next row with respect to challenges 🍇, 🍅, 🍌, 🍏, and 🍐 and indeterminate 🧴.
  6. For the logarithmic derivative of the clock jump difference lookup:
    • the jsp changes or the logarithmic derivative accumulates a summand, and
    • the jsp does not change or the logarithmic derivative does not change.

Transition Constraints as Polynomials

  1. (jsp' - (jsp + 1))·(jsp' - jsp)
  2. (jsp' - (jsp + 1))·(jso' - jso)·(ci - op_code(return))·(ci - op_code(recurse_or_return))
  3. (jsp' - (jsp + 1))·(jsd' - jsd)·(ci - op_code(return))·(ci - op_code(recurse_or_return))
  4. (jsp' - (jsp + 1))·(clk' - (clk + 1))·(ci - op_code(call))·(ci - op_code(return))·(ci - op_code(recurse_or_return))
  5. rppa' - rppa·(🧴 - 🍇·clk' - 🍅·ci' - 🍌·jsp' - 🍏·jso' - 🍐·jsd')
  6. (jsp' - (jsp + 1))·((ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative) · (🪞 - clk' + clk) - 1)
    + (jsp' - jsp)·(ClockJumpDifferenceLookupClientLogDerivative' - ClockJumpDifferenceLookupClientLogDerivative)
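To illustrate, the four disjunctive-normal-form polynomials can be evaluated on pairs of consecutive rows of the sorted Jump Stack Table; all of them vanish on valid transitions. The opcodes below are hypothetical stand-ins for illustration only, not Triton VM's actual opcodes (Python):

```python
# Hypothetical opcodes, for illustration only.
OP = {"call": 1, "return": 2, "recurse_or_return": 3,
      "foo": 10, "bar": 11, "buzz": 12}

def dnf_transition_polys(cur, nxt):
    jsp_step = nxt["jsp"] - (cur["jsp"] + 1)  # zero iff jsp increases by 1
    not_ret = (cur["ci"] - OP["return"]) * (cur["ci"] - OP["recurse_or_return"])
    return [
        jsp_step * (nxt["jsp"] - cur["jsp"]),
        jsp_step * (nxt["jso"] - cur["jso"]) * not_ret,
        jsp_step * (nxt["jsd"] - cur["jsd"]) * not_ret,
        jsp_step * (nxt["clk"] - (cur["clk"] + 1)) * (cur["ci"] - OP["call"]) * not_ret,
    ]

# Three consecutive transitions taken from the sorted Jump Stack Table above:
# a clock jump across instruction call, a regular cycle, and a jsp increase.
rows = [
    {"clk": 2, "ci": OP["call"], "jsp": 0, "jso": 0x00, "jsd": 0x00},
    {"clk": 7, "ci": OP["buzz"], "jsp": 0, "jso": 0x00, "jsd": 0x00},
    {"clk": 8, "ci": OP["bar"],  "jsp": 0, "jso": 0x00, "jsd": 0x00},
    {"clk": 3, "ci": OP["buzz"], "jsp": 1, "jso": 0x04, "jsd": 0xA0},
]
```

Each polynomial is a product of factors, one per disjunct; a transition satisfies the constraint as soon as any factor is zero.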

Terminal Constraints

None.

Hash Table

The instruction hash hashes the Op Stack's 10 top-most elements in one cycle. Similarly, the Sponge instructions sponge_init, sponge_absorb, and sponge_squeeze also all complete in one cycle. The main processor achieves this by using a hash coprocessor. The Hash Table is part of the arithmetization of that coprocessor, the other two parts being the Cascade Table and the Lookup Table. In addition to accelerating these hashing instructions, the Hash Table helps with program attestation by hashing the program.

Note: the Hash Table is not “aware” of instruction sponge_absorb_mem. Instead, the processor requests a “regular” sponge_absorb from the Hash Table, fetching the to-be-absorbed elements from RAM instead of the stack.

The arithmetization for instruction hash, the Sponge instructions sponge_init, sponge_absorb, and sponge_squeeze, and for program hashing are quite similar. The main differences are in updates to the state registers between executions of the pseudo-random permutation used in Triton VM, the permutation of Tip5. A summary of the four instructions' mechanics:

  • Instruction hash
    1. sets all the hash coprocessor's rate registers (state_0 through state_9) to equal the processor's stack registers st0 through st9,
    2. sets all the hash coprocessor's capacity registers (state_10 through state_15) to 1,
    3. executes the 5 rounds of the Tip5 permutation,
    4. overwrites the processor's stack registers st0 through st4 with 0, and
    5. overwrites the processor's stack registers st5 through st9 with the hash coprocessor's registers state_0 through state_4.
  • Instruction sponge_init
    1. sets all the hash coprocessor's registers (state_0 through state_15) to 0.
  • Instruction sponge_absorb
    1. overwrites the hash coprocessor's rate registers (state_0 through state_9) with the processor's stack registers st0 through st9, and
    2. executes the 5 rounds of the Tip5 permutation.
  • Instruction sponge_squeeze
    1. overwrites the processor's stack registers st0 through st9 with the hash coprocessor's rate registers (state_0 through state_9), and
    2. executes the 5 rounds of the Tip5 permutation.

Program hashing happens in the initialization phase of Triton VM. The to-be-executed program has no control over it. Program hashing is mechanically identical to performing instruction sponge_absorb as often as is necessary to hash the entire program. A notable difference is the source of the to-be-absorbed elements: they come from program memory, not the processor (which is not running yet). Once all instructions have been absorbed, the resulting digest is checked against the publicly claimed digest.

Due to the various similar but distinct tasks of the Hash Table, it has an explicit Mode register. The four separate modes are program_hashing, sponge, hash, and pad, and they evolve in that order. Changing the mode is only possible when the permutation has been applied in full, i.e., when the round number is 5. Once mode pad is reached, it is not possible to change the mode anymore. It is not possible to skip mode program_hashing: the program is always hashed. Skipping any or all of the modes sponge, hash, or pad is possible in principle:

  • if no Sponge instructions are executed, mode sponge will be skipped,
  • if no hash instruction is executed, mode hash will be skipped, and
  • if the Hash Table does not require any padding, mode pad will be skipped.
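The allowed mode evolutions can be sketched as a check on a sequence of Mode values (Python; the numeric encodings 1, 2, 3, 0 follow the Mode indicator values given in the main-column description below):

```python
# Mode encodings: program_hashing = 1, sponge = 2, hash = 3, pad = 0.
# Modes evolve in that order; all but program_hashing may be skipped.
MODE_ORDER = [1, 2, 3, 0]

def is_valid_mode_sequence(modes):
    if not modes or modes[0] != 1:
        return False  # the program is always hashed first
    indices = [MODE_ORDER.index(m) for m in modes]
    # positions in MODE_ORDER must be non-decreasing: no going back
    return all(a <= b for a, b in zip(indices, indices[1:]))
```

The real AIR enforces this ordering via the consistency and transition constraints on Mode; this sketch only captures the resulting set of legal sequences.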

The distinct modes translate into distinct sections in the Hash Table, which are recorded in order: First, the entire Sponge's transition of hashing the program is recorded. Then, the Hash Table records all Sponge instructions in the order the processor executed them. Then, the Hash Table records all hash instructions in the order the processor executed them. Lastly, as many padding rows as necessary are inserted. In total, this separation allows the processor to execute hash instructions without affecting the Sponge's state, and keeps program hashing independent from both.

Note that state_0 through state_3, corresponding to those states that are being split-and-looked-up in the Tip5 permutation, are not stored as a single field element. Instead, four limbs “highest”, “mid high”, “mid low”, and “lowest” are recorded in the Hash Table. This (basically) corresponds to storing the intermediate result of the split-and-lookup S-box applied to the state element. In the Hash Table, the resulting limbs are 16 bit wide, and hence, there are only 4 limbs; the split into 8-bit limbs happens in the Cascade Table. For convenience, this document occasionally refers to those states as if they were a single register. This is an alias for 2^48·state_i_highest + 2^32·state_i_mid_high + 2^16·state_i_mid_low + state_i_lowest.
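A sketch of the 16-bit limb decomposition and the single-register alias (Python; this shows only the splitting, not the lookup applied to the limbs):

```python
def split_into_limbs(value):
    """Split a 64-bit representative into four 16-bit limbs:
    (highest, mid_high, mid_low, lowest)."""
    assert 0 <= value < 1 << 64
    return ((value >> 48) & 0xFFFF, (value >> 32) & 0xFFFF,
            (value >> 16) & 0xFFFF, value & 0xFFFF)

def recompose(highest, mid_high, mid_low, lowest):
    """The single-register alias for the four limbs."""
    return (highest << 48) | (mid_high << 32) | (mid_low << 16) | lowest
```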

Main Columns

The Hash Table has 67 main columns:

  • The Mode indicator, as described above. It takes value
    • 1 for mode program_hashing,
    • 2 for mode sponge,
    • 3 for mode hash, and
    • 0 for mode pad.
  • Current instruction CI, holding the instruction the processor is currently executing. This column is only relevant for mode sponge.
  • Round number indicator round_no, which can be one of {0, 1, 2, 3, 4, 5}. The Tip5 permutation has 5 rounds, indexed 0 through 4. The round number 5 indicates that the Tip5 permutation has been applied in full.
  • 16 columns state_i_highest_lkin, state_i_mid_high_lkin, state_i_mid_low_lkin, state_i_lowest_lkin for the to-be-looked-up value of state_0 through state_3, each of which holds one 16-bit wide limb.
  • 16 columns state_i_highest_lkout, state_i_mid_high_lkout, state_i_mid_low_lkout, state_i_lowest_lkout for the looked-up value of state_0 through state_3, each of which holds one 16-bit wide limb.
  • 12 columns state_5 through state_15.
  • 4 columns state_i_inv establishing correct decomposition of state_0_*_lkin through state_3_*_lkin into 16-bit wide limbs.
  • 16 columns constant_i, which hold the round constant for the round indicated by round_no, or 0 if no round with this round number exists.

Auxiliary Columns

The Hash Table has 20 auxiliary columns:

  • RunningEvaluationReceiveChunk for the Evaluation Argument for copying chunks of size 10 from the Program Table. Relevant for program attestation.
  • RunningEvaluationHashInput for the Evaluation Argument for copying the input to the hash function from the processor to the hash coprocessor,
  • RunningEvaluationHashDigest for the Evaluation Argument for copying the hash digest from the hash coprocessor to the processor,
  • RunningEvaluationSponge for the Evaluation Argument for copying the 10 next to-be-absorbed elements from the processor to the hash coprocessor or the 10 next squeezed elements from the hash coprocessor to the processor, depending on the instruction,
  • 16 columns state_i_limb_LookupClientLogDerivative (for i ∈ {0, 1, 2, 3} and limb ∈ {highest, mid_high, mid_low, lowest}) establishing correct lookup of the respective limbs in the Cascade Table.

Padding

Each padding row is the all-zero row with the exception of

  • CI, which is the opcode of instruction hash,
  • state_i_inv for i ∈ {0, 1, 2, 3}, which is the inverse of 2^32 - 1, and
  • constant_i for i ∈ {0, …, 15}, which is the ith constant for round 0.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of the X-field.

Initial Constraints

  1. The Mode is program_hashing.
  2. The round number is 0.
  3. RunningEvaluationReceiveChunk has absorbed the first chunk of instructions with respect to indeterminate 🪣.
  4. RunningEvaluationHashInput is 1.
  5. RunningEvaluationHashDigest is 1.
  6. RunningEvaluationSponge is 1.
  7. For i ∈ {0, 1, 2, 3} and limb ∈ {highest, mid_high, mid_low, lowest}:
    state_i_limb_LookupClientLogDerivative has accumulated state_i_limb_lkin and state_i_limb_lkout with respect to challenges 🍒, 🍓 and indeterminate 🧺.

Initial Constraints as Polynomials

  1. Mode - 1
  2. round_no
  3. RunningEvaluationReceiveChunk - 🪣 - (🪑^10 + state_0·🪑^9 + state_1·🪑^8 + state_2·🪑^7 + state_3·🪑^6 + state_4·🪑^5 + state_5·🪑^4 + state_6·🪑^3 + state_7·🪑^2 + state_8·🪑 + state_9)
  4. RunningEvaluationHashInput - 1
  5. RunningEvaluationHashDigest - 1
  6. RunningEvaluationSponge - 1
  7. For i ∈ {0, 1, 2, 3} and limb ∈ {highest, mid_high, mid_low, lowest}:
    state_i_limb_LookupClientLogDerivative·(🧺 - 🍒·state_i_limb_lkin - 🍓·state_i_limb_lkout) - 1

Consistency Constraints

  1. The Mode is a valid mode, i.e., Mode ∈ {0, 1, 2, 3}.
  2. If the Mode is program_hashing, hash, or pad, then the current instruction is the opcode of hash.
  3. If the Mode is sponge, then the current instruction is a Sponge instruction.
  4. If the Mode is pad, then the round_no is 0.
  5. If the current instruction CI is sponge_init, then the round_no is 0.
  6. For i ∈ {10, …, 15}: If the current instruction CI is sponge_init, then register state_i is 0. (Note: the remaining registers, corresponding to the rate, are guaranteed to be 0 through the Evaluation Argument “Sponge” with the processor.)
  7. For i ∈ {10, …, 15}: If the round number is 0 and the current Mode is hash, then register state_i is 1.
  8. For i ∈ {0, …, 3}: ensure that the decomposition of state_i is unique. That is, if both high limbs of state_i are the maximum value, then both low limbs are 0.¹ To make the corresponding polynomials low degree, register state_i_inv holds the inverse-or-zero of the re-composed highest two limbs of state_i subtracted from their maximum value. Let state_i_hi_limbs_minus_2_pow_32 be an alias for that difference: state_i_hi_limbs_minus_2_pow_32 := 2^32 - 1 - 2^16·state_i_highest_lk_in - state_i_mid_high_lk_in.
    1. If the two high limbs of state_i are both the maximum possible value, then the two low limbs of state_i are both 0.
    2. The state_i_inv is the inverse of state_i_hi_limbs_minus_2_pow_32 or state_i_inv is 0.
    3. The state_i_inv is the inverse of state_i_hi_limbs_minus_2_pow_32 or state_i_hi_limbs_minus_2_pow_32 is 0.
  9. The round constants adhere to the specification of Tip5.

Consistency Constraints as Polynomials

  1. (Mode - 0)·(Mode - 1)·(Mode - 2)·(Mode - 3)
  2. (Mode - 2)·(CI - opcode(hash))
  3. (Mode - 0)·(Mode - 1)·(Mode - 3)
    ·(CI - opcode(sponge_init))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))
  4. (Mode - 1)·(Mode - 2)·(Mode - 3)·(round_no - 0)
  5. (CI - opcode(hash))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))·(round_no - 0)
  6. For i ∈ {10, …, 15}:
    (CI - opcode(hash))·(CI - opcode(sponge_absorb))·(CI - opcode(sponge_squeeze))
    ·(state_i - 0)
  7. For i ∈ {10, …, 15}:
    (round_no - 1)·(round_no - 2)·(round_no - 3)·(round_no - 4)·(round_no - 5)
    ·(Mode - 0)·(Mode - 1)·(Mode - 2)
    ·(state_i - 1)
  8. For i ∈ {0, …, 3}: define state_i_hi_limbs_minus_2_pow_32 := 2^32 - 1 - 2^16·state_i_highest_lk_in - state_i_mid_high_lk_in.
    1. (1 - state_i_inv · state_i_hi_limbs_minus_2_pow_32)·(2^16·state_i_mid_low_lk_in + state_i_lowest_lk_in)
    2. (1 - state_i_inv · state_i_hi_limbs_minus_2_pow_32)·state_i_inv
    3. (1 - state_i_inv · state_i_hi_limbs_minus_2_pow_32)·state_i_hi_limbs_minus_2_pow_32
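The uniqueness constraints above can be evaluated concretely over the B-field. A sketch (Python; this evaluates the three constraint polynomials for one state element, it is not the full AIR):

```python
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def inverse_or_zero(x):
    # Fermat inversion; 0 maps to 0.
    return pow(x, P - 2, P) if x % P != 0 else 0

def decomposition_constraints(highest, mid_high, mid_low, lowest):
    """Evaluate consistency constraints 8.1-8.3 for one state element's
    four 16-bit limbs; all three results must be 0 for a valid decomposition."""
    hi_limbs_minus_2_pow_32 = (2**32 - 1 - 2**16 * highest - mid_high) % P
    inv = inverse_or_zero(hi_limbs_minus_2_pow_32)
    selector = (1 - inv * hi_limbs_minus_2_pow_32) % P  # 1 iff high limbs maximal
    return (selector * (2**16 * mid_low + lowest) % P,
            selector * inv % P,
            selector * hi_limbs_minus_2_pow_32 % P)
```

When the two high limbs are both 0xFFFF, the selector becomes 1 and the first polynomial forces both low limbs to 0, which is exactly the uniqueness requirement.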

Transition Constraints

  1. If the round_no is 5, then the round_no in the next row is 0.
  2. If the Mode is not pad and the current instruction CI is not sponge_init and the round_no is not 5, then the round_no increments by 1.
  3. If the Mode in the next row is program_hashing and the round_no in the next row is 0, then receive a chunk of instructions with respect to challenges 🪣 and 🪑.
  4. If the Mode changes from program_hashing, then the Evaluation Argument of state_0 through state_4 with respect to indeterminate 🥬 equals the public program digest challenge, 🫑.
  5. If the Mode is program_hashing and the Mode in the next row is sponge, then the current instruction in the next row is sponge_init.
  6. If the round_no is not 5 and the current instruction CI is not sponge_init, then the current instruction does not change.
  7. If the round_no is not 5 and the current instruction CI is not sponge_init, then the Mode does not change.
  8. If the Mode is sponge, then the Mode in the next row is sponge or hash or pad.
  9. If the Mode is hash, then the Mode in the next row is hash or pad.
  10. If the Mode is pad, then the Mode in the next row is pad.
  11. If the round_no in the next row is 0 and the Mode in the next row is either program_hashing or sponge and the instruction in the next row is not sponge_init, then the capacity's state registers don't change.
  12. If the round_no in the next row is 0 and the current instruction in the next row is sponge_squeeze, then none of the state registers change.
  13. If the round_no in the next row is 0 and the Mode in the next row is hash, then RunningEvaluationHashInput accumulates the next row with respect to challenges 🧄₀ through 🧄₉ and indeterminate 🚪. Otherwise, it remains unchanged.
  14. If the round_no in the next row is 5 and the Mode in the next row is hash, then RunningEvaluationHashDigest accumulates the next row with respect to challenges 🧄₀ through 🧄₄ and indeterminate 🪟. Otherwise, it remains unchanged.
  15. If the round_no in the next row is 0 and the Mode in the next row is sponge, then RunningEvaluationSponge accumulates the next row with respect to challenges 🧅 and 🧄₀ through 🧄₉ and indeterminate 🧽. Otherwise, it remains unchanged.
  16. For i ∈ {0, 1, 2, 3} and limb ∈ {highest, mid_high, mid_low, lowest}:
    If the next round number is not 5 and the mode in the next row is not pad and the current instruction CI in the next row is not sponge_init, then state_i_limb_LookupClientLogDerivative has accumulated state_i_limb_lkin' and state_i_limb_lkout' with respect to challenges 🍒, 🍓 and indeterminate 🧺. Otherwise, state_i_limb_LookupClientLogDerivative remains unchanged.
  17. For r ∈ {0, …, 4}:
    If the round_no is r, the state registers adhere to the rules of applying round r of the Tip5 permutation.

Transition Constraints as Polynomials

  1. (round_no - 0)·(round_no - 1)·(round_no - 2)·(round_no - 3)·(round_no - 4)·(round_no' - 0)
  2. (Mode - 0)·(round_no - 5)·(CI - opcode(sponge_init))·(round_no' - round_no - 1)
  3. RunningEvaluationReceiveChunk' - 🪣·RunningEvaluationReceiveChunk - (🪑^10 + state_0·🪑^9 + state_1·🪑^8 + state_2·🪑^7 + state_3·🪑^6 + state_4·🪑^5 + state_5·🪑^4 + state_6·🪑^3 + state_7·🪑^2 + state_8·🪑 + state_9)
  4. (Mode - 0)·(Mode - 2)·(Mode - 3)·(Mode' - 1)·(🥬^5 + state_0·🥬^4 + state_1·🥬^3 + state_2·🥬^2 + state_3·🥬^1 + state_4 - 🫑)
  5. (Mode - 0)·(Mode - 2)·(Mode - 3)·(Mode' - 2)·(CI' - opcode(sponge_init))
  6. (round_no - 5)·(CI - opcode(sponge_init))·(CI' - CI)
  7. (round_no - 5)·(CI - opcode(sponge_init))·(Mode' - Mode)
  8. (Mode - 0)·(Mode - 1)·(Mode - 3)·(Mode' - 0)·(Mode' - 2)·(Mode' - 3)
  9. (Mode - 0)·(Mode - 1)·(Mode - 2)·(Mode' - 0)·(Mode' - 3)
  10. (Mode - 1)·(Mode - 2)·(Mode - 3)·(Mode' - 0)
  11. (round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)
    ·(Mode' - 3)·(Mode' - 0)
    ·(CI' - opcode(sponge_init))
    ·(🧄₁₀·(state_10' - state_10) + 🧄₁₁·(state_11' - state_11) + 🧄₁₂·(state_12' - state_12) + 🧄₁₃·(state_13' - state_13) + 🧄₁₄·(state_14' - state_14) + 🧄₁₅·(state_15' - state_15))
  12. (round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)
    ·(CI' - opcode(hash))·(CI' - opcode(sponge_init))·(CI' - opcode(sponge_absorb))
    ·(🧄₀·(state_0' - state_0) + 🧄₁·(state_1' - state_1) + 🧄₂·(state_2' - state_2) + 🧄₃·(state_3' - state_3) + 🧄₄·(state_4' - state_4)
    + 🧄₅·(state_5' - state_5) + 🧄₆·(state_6' - state_6) + 🧄₇·(state_7' - state_7) + 🧄₈·(state_8' - state_8) + 🧄₉·(state_9' - state_9)
    + 🧄₁₀·(state_10' - state_10) + 🧄₁₁·(state_11' - state_11) + 🧄₁₂·(state_12' - state_12) + 🧄₁₃·(state_13' - state_13) + 🧄₁₄·(state_14' - state_14) + 🧄₁₅·(state_15' - state_15))
  13. (round_no' - 0)·(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)
    ·(RunningEvaluationHashInput' - 🚪·RunningEvaluationHashInput - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4' - 🧄₅·state_5' - 🧄₆·state_6' - 🧄₇·state_7' - 🧄₈·state_8' - 🧄₉·state_9')
    + (round_no' - 0)·(RunningEvaluationHashInput' - RunningEvaluationHashInput)
    + (Mode' - 3)·(RunningEvaluationHashInput' - RunningEvaluationHashInput)
  14. (round_no' - 0)·(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)
    ·(Mode' - 0)·(Mode' - 1)·(Mode' - 2)
    ·(RunningEvaluationHashDigest' - 🪟·RunningEvaluationHashDigest - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4')
    + (round_no' - 5)·(RunningEvaluationHashDigest' - RunningEvaluationHashDigest)
    + (Mode' - 3)·(RunningEvaluationHashDigest' - RunningEvaluationHashDigest)
  15. (round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)·(round_no' - 5)
    ·(CI' - opcode(hash))
    ·(RunningEvaluationSponge' - 🧽·RunningEvaluationSponge - 🧅·CI' - 🧄₀·state_0' - 🧄₁·state_1' - 🧄₂·state_2' - 🧄₃·state_3' - 🧄₄·state_4' - 🧄₅·state_5' - 🧄₆·state_6' - 🧄₇·state_7' - 🧄₈·state_8' - 🧄₉·state_9')
    + (RunningEvaluationSponge' - RunningEvaluationSponge)·(round_no' - 0)
    + (RunningEvaluationSponge' - RunningEvaluationSponge)·(CI' - opcode(sponge_init))·(CI' - opcode(sponge_absorb))·(CI' - opcode(sponge_squeeze))
  16. For i ∈ {0, 1, 2, 3} and limb ∈ {highest, mid_high, mid_low, lowest}:
    (round_no' - 5)·(Mode' - 0)·(CI' - opcode(sponge_init))·((state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)·(🧺 - 🍒·state_i_limb_lkin' - 🍓·state_i_limb_lkout') - 1)
    + (round_no' - 0)·(round_no' - 1)·(round_no' - 2)·(round_no' - 3)·(round_no' - 4)
    ·(state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)
    + (CI' - opcode(hash))·(CI' - opcode(sponge_absorb))·(CI' - opcode(sponge_squeeze))
    ·(state_i_limb_LookupClientLogDerivative' - state_i_limb_LookupClientLogDerivative)
  17. The remaining constraints are left as an exercise to the reader. For hints, see the Tip5 paper.

Terminal Constraints

  1. If the Mode is program_hashing, then the Evaluation Argument of state_0 through state_4 with respect to indeterminate 🥬 equals the public program digest challenge, 🫑.
  2. If the Mode is not pad and the current instruction CI is not sponge_init, then the round_no is 5.

Terminal Constraints as Polynomials

  1. (Mode - 0)·(Mode - 2)·(Mode - 3)·(🥬^5 + state_0·🥬^4 + state_1·🥬^3 + state_2·🥬^2 + state_3·🥬 + state_4 - 🫑)
  2. (Mode - 0)·(CI - opcode(sponge_init))·(round_no - 5)
1

This is a special property of the Oxfoi prime.

Cascade Table

The Cascade Table helps arithmetizing the lookups necessary for the split-and-lookup S-box used in the Tip5 permutation. The main idea is to allow the Hash Table to look up limbs that are 16 bit wide even though the S-box is defined over limbs that are 8 bit wide. The Cascade Table facilitates the translation of limb widths. For the actual lookup of the 8-bit limbs, the Lookup Table is used. For a more detailed explanation and in-depth analysis, see the Tip5 paper.

Main Columns

The Cascade Table has 6 main columns:

| name | description |
|:-----|:------------|
| IsPadding | Indicator for padding rows. |
| LookInHi | The more significant 8 bits of the lookup input. |
| LookInLo | The less significant 8 bits of the lookup input. |
| LookOutHi | The more significant 8 bits of the lookup output. |
| LookOutLo | The less significant 8 bits of the lookup output. |
| LookupMultiplicity | The number of times the value is looked up by the Hash Table. |

Auxiliary Columns

The Cascade Table has 2 auxiliary columns:

  • HashTableServerLogDerivative, the (running sum of the) logarithmic derivative for the Lookup Argument with the Hash Table. In every row, the sum accumulates LookupMultiplicity / (🧺 - Combo) where 🧺 is a verifier-supplied challenge and Combo is the weighted sum of LookInHi · 2^8 + LookInLo and LookOutHi · 2^8 + LookOutLo with weights 🍒 and 🍓 supplied by the verifier.
  • LookupTableClientLogDerivative, the (running sum of the) logarithmic derivative for the Lookup Argument with the Lookup Table. In every row, the sum accumulates the two summands
    1. 1 / combo_hi where combo_hi is the verifier-weighted combination of LookInHi and LookOutHi, and
    2. 1 / combo_lo where combo_lo is the verifier-weighted combination of LookInLo and LookOutLo.
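The shape of such Lookup Arguments via logarithmic derivatives can be sketched as follows (Python; toy values, with the challenge drawn from the B-field instead of the X-field for simplicity):

```python
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime; real challenges live in the X-field

def field_inverse(x):
    return pow(x, P - 2, P)

def client_log_derivative(looked_up_values, challenge):
    """One summand 1/(challenge - v) per performed lookup."""
    return sum(field_inverse((challenge - v) % P) for v in looked_up_values) % P

def server_log_derivative(table, challenge):
    """One summand multiplicity/(challenge - v) per table entry."""
    return sum(m * field_inverse((challenge - v) % P)
               for v, m in table.items()) % P
```

The two sums agree exactly when every lookup performed by the client appears in the server's table with the claimed multiplicity; this is the identity both auxiliary columns accumulate row by row.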

Padding

Each padding row is the all-zero row with the exception of IsPadding, which is 1.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of the X-field.

Initial Constraints

  1. If the first row is not a padding row, then HashTableServerLogDerivative has accumulated the first row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, HashTableServerLogDerivative is 0.
  2. If the first row is not a padding row, then LookupTableClientLogDerivative has accumulated the first row with respect to challenges 🥦 and 🥒 and indeterminate 🪒. Else, LookupTableClientLogDerivative is 0.

Initial Constraints as Polynomials

  1. (1 - IsPadding)·(HashTableServerLogDerivative·(🧺 - 🍒·(2^8·LookInHi + LookInLo) - 🍓·(2^8 · LookOutHi + LookOutLo)) - LookupMultiplicity)
    + IsPadding · HashTableServerLogDerivative
  2. (1 - IsPadding)·(LookupTableClientLogDerivative·(🪒 - 🥦·LookInLo - 🥒·LookOutLo)·(🪒 - 🥦·LookInHi - 🥒·LookOutHi) - 2·🪒 + 🥦·(LookInLo + LookInHi) + 🥒·(LookOutLo + LookOutHi))
    + IsPadding·LookupTableClientLogDerivative

Consistency Constraints

  1. IsPadding is 0 or 1.

Consistency Constraints as Polynomials

  1. IsPadding·(1 - IsPadding)

Transition Constraints

  1. If the current row is a padding row, then the next row is a padding row.
  2. If the next row is not a padding row, then HashTableServerLogDerivative accumulates the next row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, HashTableServerLogDerivative remains unchanged.
  3. If the next row is not a padding row, then LookupTableClientLogDerivative accumulates the next row with respect to challenges 🥦 and 🥒 and indeterminate 🪒. Else, LookupTableClientLogDerivative remains unchanged.

Transition Constraints as Polynomials

  1. IsPadding·(1 - IsPadding')
  2. (1 - IsPadding')·((HashTableServerLogDerivative' - HashTableServerLogDerivative)·(🧺 - 🍒·(2^8·LookInHi' + LookInLo') - 🍓·(2^8 · LookOutHi' + LookOutLo')) - LookupMultiplicity')
    + IsPadding'·(HashTableServerLogDerivative' - HashTableServerLogDerivative)
  3. (1 - IsPadding')·((LookupTableClientLogDerivative' - LookupTableClientLogDerivative)·(🪒 - 🥦·LookInLo' - 🥒·LookOutLo')·(🪒 - 🥦·LookInHi' - 🥒·LookOutHi') - 2·🪒 + 🥦·(LookInLo' + LookInHi') + 🥒·(LookOutLo' + LookOutHi'))
    + IsPadding'·(LookupTableClientLogDerivative' - LookupTableClientLogDerivative)

Terminal Constraints

None.

Lookup Table

The Lookup Table helps arithmetizing the lookups necessary for the split-and-lookup S-box used in the Tip5 permutation. It works in tandem with the Cascade Table. In the language of the Tip5 paper, it is a “narrow lookup table.” This means it is always fully populated, independent of the actual number of lookups.

Correct creation of the Lookup Table is guaranteed through a public-facing Evaluation Argument: after sampling some challenge 🧹, the verifier computes the terminal of the Evaluation Argument over the list of all the expected lookup values with respect to challenge 🧹. The equality of this verifier-supplied terminal against the similarly computed, in-table part of the Evaluation Argument is checked by the Lookup Table's terminal constraint.
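The terminal the verifier computes is a running evaluation in the Horner sense, matching the initial and transition constraints below. A sketch (Python; challenge in the B-field for simplicity):

```python
P = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def evaluation_argument_terminal(values, challenge, initial=1):
    """Repeatedly update acc <- challenge*acc + value, starting from 1."""
    acc = initial
    for value in values:
        acc = (challenge * acc + value) % P
    return acc
```

Because each step multiplies the accumulator by the challenge before adding the next value, the terminal is sensitive to both the values and their order, which is what makes the argument bind the full list of lookup outputs.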

Main Columns

The Lookup Table has 4 main columns:

| name | description |
|:-----|:------------|
| IsPadding | Indicator for padding rows. |
| LookIn | The lookup input. |
| LookOut | The lookup output. |
| LookupMultiplicity | The number of times the value is looked up. |

Auxiliary Columns

The Lookup Table has 2 auxiliary columns:

  • CascadeTableServerLogDerivative, the (running sum of the) logarithmic derivative for the Lookup Argument with the Cascade Table. In every row, it accumulates the summand LookupMultiplicity / Combo where Combo is the verifier-weighted combination of LookIn and LookOut.
  • PublicEvaluationArgument, the running sum for the public evaluation argument of the Lookup Table. In every row, it accumulates LookOut.

Padding

Each padding row is the all-zero row with the exception of IsPadding, which is 1.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements, i.e., elements of the X-field.

Initial Constraints

  1. LookIn is 0.
  2. CascadeTableServerLogDerivative has accumulated the first row with respect to challenges 🍒 and 🍓 and indeterminate 🧺.
  3. PublicEvaluationArgument has accumulated the first LookOut with respect to indeterminate 🧹.

Initial Constraints as Polynomials

  1. LookIn
  2. CascadeTableServerLogDerivative·(🧺 - 🍒·LookIn - 🍓·LookOut) - LookupMultiplicity
  3. PublicEvaluationArgument - 🧹 - LookOut

Consistency Constraints

  1. IsPadding is 0 or 1.

Consistency Constraints as Polynomials

  1. IsPadding·(1 - IsPadding)

Transition Constraints

  1. If the current row is a padding row, then the next row is a padding row.
  2. If the next row is not a padding row, LookIn increments by 1. Else, LookIn in the next row is 0.
  3. If the next row is not a padding row, CascadeTableServerLogDerivative accumulates the next row with respect to challenges 🍒 and 🍓 and indeterminate 🧺. Else, CascadeTableServerLogDerivative remains unchanged.
  4. If the next row is not a padding row, PublicEvaluationArgument accumulates the next LookOut with respect to indeterminate 🧹. Else, PublicEvaluationArgument remains unchanged.

Transition Constraints as Polynomials

  1. IsPadding·(1 - IsPadding')
  2. ((1 - IsPadding')·(LookIn' - LookIn - 1))
    + IsPadding'·LookIn'
  3. (1 - IsPadding')·((CascadeTableServerLogDerivative' - CascadeTableServerLogDerivative)·(🧺 - 🍒·LookIn' - 🍓·LookOut') - LookupMultiplicity')
    + IsPadding'·(CascadeTableServerLogDerivative' - CascadeTableServerLogDerivative)
  4. (1 - IsPadding')·(PublicEvaluationArgument' - 🧹·PublicEvaluationArgument - LookOut')
    + IsPadding'·(PublicEvaluationArgument' - PublicEvaluationArgument)

Terminal Constraints

  1. PublicEvaluationArgument matches verifier-supplied challenge 🪠.

Terminal Constraints as Polynomials

  1. PublicEvaluationArgument - 🪠

U32 Table

The U32 Operations Table arithmetizes the coprocessor for “difficult” 32-bit unsigned integer operations. The two inputs to the U32 Operations Table are left-hand side (LHS) and right-hand side (RHS), usually corresponding to the processor's st0 and st1, respectively. (For more details see the arithmetization of the specific u32 instructions further below.)

To allow efficient arithmetization, a u32 instruction's result is constructed using multiple rows. The collection of rows arithmetizing the execution of one instruction is called a section. The U32 Table's sections are independent of each other. The processor's current instruction CI is recorded within the section and dictates its arithmetization.

Crucially, the rows of the U32 table are independent of the processor's clock. Hence, the result of the instruction can be transferred into the processor within one clock cycle.

Main Columns

| name | description |
|:-----|:------------|
| CopyFlag | The row to be copied between processor and u32 coprocessor. Marks the beginning of a section. |
| CI | Current instruction, the instruction the processor is currently executing. |
| Bits | The number of bits that LHS and RHS have already been shifted by. |
| BitsMinus33Inv | The inverse-or-zero of the difference between 33 and Bits. |
| LHS | Left-hand side of the operation. Usually corresponds to the processor's st0. |
| LhsInv | The inverse-or-zero of LHS. Needed to check whether LHS is unequal to 0. |
| RHS | Right-hand side of the operation. Usually corresponds to the processor's st1. |
| RhsInv | The inverse-or-zero of RHS. Needed to check whether RHS is unequal to 0. |
| Result | The result (or intermediate result) of the instruction requested by the processor. |
| LookupMultiplicity | The number of times the processor has executed the current instruction with the same arguments. |

An example U32 Table follows. Some columns are omitted for presentation reasons. All cells in the U32 Table contain elements of the B-field. For clearer illustration of the mechanics, the columns marked “₂” are presented in base 2, whereas columns marked “₁₀” are in the usual base 10.

| CopyFlag | CI | Bits | LHS₂ | RHS₂ | Result₂ | Result₁₀ |
|:---------|:---|-----:|-----:|-----:|--------:|---------:|
| 1 | and | 0 | 11000 | 11010 | 11000 | 24 |
| 0 | and | 1 | 1100 | 1101 | 1100 | 12 |
| 0 | and | 2 | 110 | 110 | 110 | 6 |
| 0 | and | 3 | 11 | 11 | 11 | 3 |
| 0 | and | 4 | 1 | 1 | 1 | 1 |
| 0 | and | 5 | 0 | 0 | 0 | 0 |
| 1 | pow | 0 | 10 | 101 | 100000 | 32 |
| 0 | pow | 1 | 10 | 10 | 100 | 4 |
| 0 | pow | 2 | 10 | 1 | 10 | 2 |
| 0 | pow | 3 | 10 | 0 | 1 | 1 |
| 1 | log_2_floor | 0 | 100110 | 0 | 101 | 5 |
| 0 | log_2_floor | 1 | 10011 | 0 | 101 | 5 |
| 0 | log_2_floor | 2 | 1001 | 0 | 101 | 5 |
| 0 | log_2_floor | 3 | 100 | 0 | 101 | 5 |
| 0 | log_2_floor | 4 | 10 | 0 | 101 | 5 |
| 0 | log_2_floor | 5 | 1 | 0 | 101 | 5 |
| 0 | log_2_floor | 6 | 0 | 0 | -1 | -1 |
| 1 | lt | 0 | 11111 | 11011 | 0 | 0 |
| 0 | lt | 1 | 1111 | 1101 | 0 | 0 |
| 0 | lt | 2 | 111 | 110 | 0 | 0 |
| 0 | lt | 3 | 11 | 11 | 10 | 2 |
| 0 | lt | 4 | 1 | 1 | 10 | 2 |
| 0 | lt | 5 | 0 | 0 | 10 | 2 |

The AIR verifies the correct update of each consecutive pair of rows. For most instructions, the current least significant bit of both LHS and RHS is eliminated between two consecutive rows. This eliminated bit is used to successively build the required result in the Result column. For instruction pow, only the least significant bit of RHS is eliminated, while LHS remains unchanged throughout the section.
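The successive elimination of least significant bits can be sketched in Python. The helper below is hypothetical (not part of Triton VM); it builds the section rows for instruction `and` and rebuilds the Result column bottom-up via Result = 2·Result' + LhsLsb·RhsLsb, where LhsLsb = LHS - 2·LHS' is the current row's least significant bit.

```python
def and_section(lhs: int, rhs: int):
    """Rows (CopyFlag, Bits, LHS, RHS) of one hypothetical `and` section."""
    rows, bits = [], 0
    while True:
        rows.append((1 if bits == 0 else 0, bits, lhs, rhs))
        if lhs == 0 and rhs == 0:
            break
        lhs, rhs, bits = lhs >> 1, rhs >> 1, bits + 1
    # Rebuild the Result column bottom-up: Result = 2*Result' + LhsLsb*RhsLsb.
    results = [0]  # the final all-zero row has Result 0
    for _, _, l, r in reversed(rows[:-1]):
        results.append(2 * results[-1] + (l & 1) * (r & 1))
    return rows, results[::-1]

rows, results = and_section(0b11000, 0b11010)
assert results == [24, 12, 6, 3, 1, 0]  # matches the example table above
assert all(l & r == res for (_, _, l, r), res in zip(rows, results))
```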

There are 6 instructions the U32 Table is “aware” of: split, lt, and, log_2_floor, pow, and pop_count. The instruction split uses the U32 Table for range checking only. Concretely, the U32 Table ensures that the instruction split's resulting “high bits” and “low bits” each fit in a u32. Since the processor does not expect any result from instruction split, the Result must be 0 else the Lookup Argument fails. Similarly, for instructions log_2_floor and pop_count, the processor always sets the RHS to 0.

For instruction xor, the processor requests the computation of the two arguments' and and converts the result using the following equality:

a xor b = a + b - 2·(a and b)

Credit for this trick goes, to the best of our knowledge, to Daniel Lubarov.
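The identity holds for all unsigned integers, since integer addition differs from bitwise xor exactly by twice the carries, and the carry bits are precisely a and b. A quick sketch:

```python
# a xor b == a + b - 2*(a and b): the `and` of two operands captures exactly
# the carry bits that distinguish integer addition from bitwise xor.
for a, b in [(0b11000, 0b11010), (0, 0), (2**32 - 1, 12345), (7, 9)]:
    assert a ^ b == a + b - 2 * (a & b)
```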

For the remaining u32 instruction div_mod, the processor triggers the creation of two sections in the U32 Table:

  • One section to ensure that the remainder r is smaller than the divisor d. The processor requests the result of lt by setting the U32 Table's CI register to the opcode of lt. This also guarantees that r and d each fit in a u32.
  • One section to ensure that the quotient q and the numerator n each fit in a u32. The processor needs no result, only the range checking capabilities, like for instruction split. Consequently, the processor sets the U32 Table's CI register to the opcode of split.
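The two sections amount to the following plain checks, sketched as a hypothetical helper (the function and its assertions are illustrative, not Triton VM code):

```python
def div_mod_checks(n: int, d: int):
    """Sketch of the two U32 Table sections that instruction div_mod triggers."""
    q, r = n // d, n % d
    # Section 1 (CI = opcode of lt): establishes r < d, range-checking r and d.
    assert r < d and r < 2**32 and d < 2**32
    # Section 2 (CI = opcode of split): range-checks q and n; Result must be 0.
    assert q < 2**32 and n < 2**32
    # The processor itself additionally enforces n = q*d + r.
    assert n == q * d + r
    return q, r

assert div_mod_checks(17, 5) == (3, 2)
```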

If the current instruction is lt, the Result can take on the values 0, 1, or 2, where

  • 0 means LHS >= RHS is definitely known in the current row,
  • 1 means LHS < RHS is definitely known in the current row, and
  • 2 means the result is unknown in the current row. This is only an intermediate result. It can never occur in the first row of a section, i.e., when CopyFlag is 1.
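The three-valued Result and its bottom-up resolution can be sketched in Python. This hypothetical helper mirrors the lt-related transition constraints (8 through 13 below): a decided result propagates upward, a differing bit pair decides the comparison, and equal bits keep the result at 2 except in the top row, where equality means LHS >= RHS.

```python
def lt_section_results(lhs: int, rhs: int):
    """Result column (top to bottom) of one hypothetical `lt` section."""
    rows, l, r = [], lhs, rhs
    while True:
        rows.append((l, r))
        if l == 0 and r == 0:
            break
        l, r = l >> 1, r >> 1
    if len(rows) == 1:
        return [0]                        # both operands 0: LHS >= RHS immediately
    results = [2]                         # bottom row: operands exhausted, undecided
    for i in range(len(rows) - 2, -1, -1):
        below = results[-1]
        lhs_lsb, rhs_lsb = rows[i][0] & 1, rows[i][1] & 1
        if below in (0, 1):
            cur = below                   # already decided further down; propagate
        elif (lhs_lsb, rhs_lsb) == (0, 1):
            cur = 1                       # this bit decides: LHS < RHS
        elif (lhs_lsb, rhs_lsb) == (1, 0):
            cur = 0                       # this bit decides: LHS >= RHS
        else:
            cur = 2 if i > 0 else 0       # equal bits: unknown, or equality at the top
        results.append(cur)
    return results[::-1]

assert lt_section_results(0b11111, 0b11011) == [0, 0, 0, 2, 2, 2]  # example table
assert lt_section_results(3, 5)[0] == 1                            # 3 < 5
```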

A new row with CopyFlag = 1 can only be inserted if

  1. LHS is 0 or the current instruction is pow, and
  2. RHS is 0.

It is impossible to create a valid proof of correct execution of Triton VM if Bits is 33 in any row.

Auxiliary Columns

The U32 Table has 1 auxiliary column, U32LookupServerLogDerivative. It corresponds to the Lookup Argument with the Processor Table, establishing that whenever the processor executes a u32 instruction, the following holds:

  • the processor's requested left-hand side is copied into LHS,
  • the processor's requested right-hand side is copied into RHS,
  • the processor's requested u32 instruction is copied into CI, and
  • the result Result is copied to the processor.

Padding

Each padding row is the all-zero row with the exception of

  • CI, which is the opcode of split, and
  • BitsMinus33Inv, which is (-33)⁻¹.

Additionally, if the U32 Table is non-empty before applying padding, the padding row's columns CI, LHS, LhsInv, and Result are taken from the U32 Table's last row.

Arithmetic Intermediate Representation

Let all household items (🪥, 🛁, etc.) be challenges, concretely evaluation points, supplied by the verifier. Let all fruit & vegetables (🥝, 🥥, etc.) be challenges, concretely weights to compress rows, supplied by the verifier. Both types of challenges are X-field elements.

Initial Constraints

  1. If the CopyFlag is 0, then U32LookupServerLogDerivative is 0. Otherwise, the U32LookupServerLogDerivative has accumulated the first row with multiplicity LookupMultiplicity and with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.

Initial Constraints as Polynomials

  1. (CopyFlag - 1)·U32LookupServerLogDerivative
    + CopyFlag·(U32LookupServerLogDerivative·(🧷 - 🥜·LHS - 🌰·RHS - 🥑·CI - 🥕·Result) - LookupMultiplicity)

Consistency Constraints

  1. The CopyFlag is 0 or 1.
  2. If CopyFlag is 1, then Bits is 0.
  3. BitsMinus33Inv is the inverse of (Bits - 33).
  4. LhsInv is 0 or the inverse of LHS.
  5. LHS is 0 or LhsInv is the inverse of LHS.
  6. RhsInv is 0 or the inverse of RHS.
  7. RHS is 0 or RhsInv is the inverse of RHS.
  8. If CopyFlag is 0 and the current instruction is lt and LHS is 0 and RHS is 0, then Result is 2.
  9. If CopyFlag is 1 and the current instruction is lt and LHS is 0 and RHS is 0, then Result is 0.
  10. If the current instruction is and and LHS is 0 and RHS is 0, then Result is 0.
  11. If the current instruction is pow and RHS is 0, then Result is 1.
  12. If CopyFlag is 0 and the current instruction is log_2_floor and LHS is 0, then Result is -1.
  13. If CopyFlag is 1 and the current instruction is log_2_floor and LHS is 0, the VM crashes.
  14. If CopyFlag is 0 and the current instruction is pop_count and LHS is 0, then Result is 0.
  15. If CopyFlag is 0, then LookupMultiplicity is 0.

Written in Disjunctive Normal Form, the same constraints can be expressed as:

  1. The CopyFlag is 0 or 1.
  2. CopyFlag is 0 or Bits is 0.
  3. BitsMinus33Inv is the inverse of (Bits - 33).
  4. LhsInv is 0 or the inverse of LHS.
  5. LHS is 0 or LhsInv is the inverse of LHS.
  6. RhsInv is 0 or the inverse of RHS.
  7. RHS is 0 or RhsInv is the inverse of RHS.
  8. CopyFlag is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or LHS is not 0 or RHS is not 0 or Result is 2.
  9. CopyFlag is 0 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or LHS is not 0 or RHS is not 0 or Result is 0.
  10. CI is the opcode of split, lt, pow, log_2_floor, or pop_count or LHS is not 0 or RHS is not 0 or Result is 0.
  11. CI is the opcode of split, lt, and, log_2_floor, or pop_count or RHS is not 0 or Result is 1.
  12. CopyFlag is 1 or CI is the opcode of split, lt, and, pow, or pop_count or LHS is not 0 or Result is -1.
  13. CopyFlag is 0 or CI is the opcode of split, lt, and, pow, or pop_count or LHS is not 0.
  14. CopyFlag is 1 or CI is the opcode of split, lt, and, pow, or log_2_floor or LHS is not 0 or Result is 0.
  15. CopyFlag is 1 or LookupMultiplicity is 0.

Consistency Constraints as Polynomials

  1. CopyFlag·(CopyFlag - 1)
  2. CopyFlag·Bits
  3. 1 - BitsMinus33Inv·(Bits - 33)
  4. LhsInv·(1 - LHS·LhsInv)
  5. LHS·(1 - LHS·LhsInv)
  6. RhsInv·(1 - RHS·RhsInv)
  7. RHS·(1 - RHS·RhsInv)
  8. (CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·(Result - 2)
  9. (CopyFlag - 0)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·(Result - 0)
  10. (CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(1 - RHS·RhsInv)·Result
  11. (CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(1 - RHS·RhsInv)·(Result - 1)
  12. (CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)·(Result + 1)
  13. CopyFlag·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS·LhsInv)
  14. (CopyFlag - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(1 - LHS·LhsInv)·(Result - 0)
  15. (CopyFlag - 1)·LookupMultiplicity

Transition Constraints

Even though they are never explicitly represented, it is useful to alias the LHS's and RHS's least significant bit, or “lsb.” Given two consecutive rows, the (current) least significant bit of LHS can be computed by subtracting twice the next row's LHS from the current row's LHS. These aliases, i.e., LhsLsb := LHS - 2·LHS' and RhsLsb := RHS - 2·RHS', are used throughout the following.

  1. If the CopyFlag in the next row is 1 and the current instruction is not pow, then LHS in the current row is 0.
  2. If the CopyFlag in the next row is 1, then RHS in the current row is 0.
  3. If the CopyFlag in the next row is 0, then CI in the next row is CI in the current row.
  4. If the CopyFlag in the next row is 0 and LHS in the current row is unequal to 0 and the current instruction is not pow, then Bits in the next row is Bits in the current row plus 1.
  5. If the CopyFlag in the next row is 0 and RHS in the current row is unequal to 0, then Bits in the next row is Bits in the current row plus 1.
  6. If the CopyFlag in the next row is 0 and the current instruction is not pow, then LhsLsb is either 0 or 1.
  7. If the CopyFlag in the next row is 0, then RhsLsb is either 0 or 1.
  8. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 0, then Result in the current row is 0.
  9. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 1, then Result in the current row is 1.
  10. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 2 and LhsLsb is 0 and RhsLsb is 1, then Result in the current row is 1.
  11. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 2 and LhsLsb is 1 and RhsLsb is 0, then Result in the current row is 0.
  12. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 2 and LhsLsb is RhsLsb and the CopyFlag in the current row is 0, then Result in the current row is 2.
  13. If the CopyFlag in the next row is 0 and the current instruction is lt and Result in the next row is 2 and LhsLsb is RhsLsb and the CopyFlag in the current row is 1, then Result in the current row is 0.
  14. If the CopyFlag in the next row is 0 and the current instruction is and, then Result in the current row is twice Result in the next row plus the product of LhsLsb and RhsLsb.
  15. If the CopyFlag in the next row is 0 and the current instruction is log_2_floor and LHS in the next row is 0 and LHS in the current row is not 0, then Result in the current row is Bits.
  16. If the CopyFlag in the next row is 0 and the current instruction is log_2_floor and LHS in the next row is not 0, then Result in the current row is Result in the next row.
  17. If the CopyFlag in the next row is 0 and the current instruction is pow, then LHS remains unchanged.
  18. If the CopyFlag in the next row is 0 and the current instruction is pow and RhsLsb in the current row is 0, then Result in the current row is Result in the next row squared.
  19. If the CopyFlag in the next row is 0 and the current instruction is pow and RhsLsb in the current row is 1, then Result in the current row is Result in the next row squared times LHS in the current row.
  20. If the CopyFlag in the next row is 0 and the current instruction is pop_count, then Result in the current row is Result in the next row plus LhsLsb.
  21. If the CopyFlag in the next row is 0, then U32LookupServerLogDerivative in the next row is U32LookupServerLogDerivative in the current row.
  22. If the CopyFlag in the next row is 1, then U32LookupServerLogDerivative in the next row has accumulated the next row with multiplicity LookupMultiplicity and with respect to challenges 🥜, 🌰, 🥑, and 🥕, and indeterminate 🧷.

Written in Disjunctive Normal Form, the same constraints can be expressed as:

  1. CopyFlag' is 0 or LHS is 0 or CI is the opcode of pow.
  2. CopyFlag' is 0 or RHS is 0.
  3. CopyFlag' is 1 or CI' is CI.
  4. CopyFlag' is 1 or LHS is 0 or CI is the opcode of pow or Bits' is Bits + 1.
  5. CopyFlag' is 1 or RHS is 0 or Bits' is Bits + 1.
  6. CopyFlag' is 1 or CI is the opcode of pow or LhsLsb is 0 or LhsLsb is 1.
  7. CopyFlag' is 1 or RhsLsb is 0 or RhsLsb is 1.
  8. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 1 or 2) or Result is 0.
  9. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 0 or 2) or Result is 1.
  10. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 0 or 1) or LhsLsb is 1 or RhsLsb is 0 or Result is 1.
  11. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 0 or 1) or LhsLsb is 0 or RhsLsb is 1 or Result is 0.
  12. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 0 or 1) or LhsLsb is unequal to RhsLsb or CopyFlag is 1 or Result is 2.
  13. CopyFlag' is 1 or CI is the opcode of split, and, pow, log_2_floor, or pop_count or (Result' is 0 or 1) or LhsLsb is unequal to RhsLsb or CopyFlag is 0 or Result is 0.
  14. CopyFlag' is 1 or CI is the opcode of split, lt, pow, log_2_floor, or pop_count or Result is twice Result' plus the product of LhsLsb and RhsLsb.
  15. CopyFlag' is 1 or CI is the opcode of split, lt, and, pow, or pop_count or LHS' is not 0 or LHS is 0 or Result is Bits.
  16. CopyFlag' is 1 or CI is the opcode of split, lt, and, pow, or pop_count or LHS' is 0 or Result is Result'.
  17. CopyFlag' is 1 or CI is the opcode of split, lt, and, log_2_floor, or pop_count or LHS' is LHS.
  18. CopyFlag' is 1 or CI is the opcode of split, lt, and, log_2_floor, or pop_count or RhsLsb is 1 or Result is Result' times Result'.
  19. CopyFlag' is 1 or CI is the opcode of split, lt, and, log_2_floor, or pop_count or RhsLsb is 0 or Result is Result' times Result' times LHS.
  20. CopyFlag' is 1 or CI is the opcode of split, lt, and, pow, or log_2_floor or Result is Result' plus LhsLsb.
  21. CopyFlag' is 1 or U32LookupServerLogDerivative' is U32LookupServerLogDerivative.
  22. CopyFlag' is 0 or the difference of U32LookupServerLogDerivative' and U32LookupServerLogDerivative times (🧷 - 🥜·LHS' - 🌰·RHS' - 🥑·CI' - 🥕·Result') is LookupMultiplicity'.

Transition Constraints as Polynomials

  1. (CopyFlag' - 0)·LHS·(CI - opcode(pow))
  2. (CopyFlag' - 0)·RHS
  3. (CopyFlag' - 1)·(CI' - CI)
  4. (CopyFlag' - 1)·LHS·(CI - opcode(pow))·(Bits' - Bits - 1)
  5. (CopyFlag' - 1)·RHS·(Bits' - Bits - 1)
  6. (CopyFlag' - 1)·(CI - opcode(pow))·LhsLsb·(LhsLsb - 1)
  7. (CopyFlag' - 1)·RhsLsb·(RhsLsb - 1)
  8. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 1)·(Result' - 2)·(Result - 0)
  9. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 2)·(Result - 1)
  10. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(LhsLsb - 1)·(RhsLsb - 0)·(Result - 1)
  11. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(LhsLsb - 0)·(RhsLsb - 1)·(Result - 0)
  12. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(1 - LhsLsb - RhsLsb + 2·LhsLsb·RhsLsb)·(CopyFlag - 1)·(Result - 2)
  13. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result' - 0)·(Result' - 1)·(1 - LhsLsb - RhsLsb + 2·LhsLsb·RhsLsb)·(CopyFlag - 0)·(Result - 0)
  14. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(Result - 2·Result' - LhsLsb·RhsLsb)
  15. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·(1 - LHS'·LhsInv')·LHS·(Result - Bits)
  16. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(pop_count))·LHS'·(Result' - Result)
  17. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(LHS' - LHS)
  18. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(RhsLsb - 1)·(Result - Result'·Result')
  19. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(log_2_floor))·(CI - opcode(pop_count))·(RhsLsb - 0)·(Result - Result'·Result'·LHS)
  20. (CopyFlag' - 1)·(CI - opcode(split))·(CI - opcode(lt))·(CI - opcode(and))·(CI - opcode(pow))·(CI - opcode(log_2_floor))·(Result - Result' - LhsLsb)
  21. (CopyFlag' - 1)·(U32LookupServerLogDerivative' - U32LookupServerLogDerivative)
  22. (CopyFlag' - 0)·((U32LookupServerLogDerivative' - U32LookupServerLogDerivative)·(🧷 - 🥜·LHS - 🌰·RHS - 🥑·CI - 🥕·Result) - LookupMultiplicity')

Terminal Constraints

  1. LHS is 0 or the current instruction is pow.
  2. RHS is 0.

Terminal Constraints as Polynomials

  1. LHS·(CI - opcode(pow))
  2. RHS

Table Linking

Triton VM's tables are linked together using a variety of different cryptographic arguments. The section on arithmetization incorporates all the necessary ingredients by defining the corresponding parts of the Arithmetic Execution Tables and the Arithmetic Intermediate Representation. However, that presentation doesn't make it easy to follow and understand the specific arguments in use. Therefore, the distinct “cross-table” or “table-linking” arguments are presented here in isolation.

Compressing Multiple Elements

Mathematically, all arguments used in Triton VM are about single elements of the finite field. In practice, it is very useful to argue about multiple elements. A common trick is to “compress” multiple elements into a single one using random weights. These weights are “supplied by the STARK verifier,” i.e., sampled using the Fiat-Shamir heuristic after the prover has committed to the elements in question. For example, if elements a, b, and c are to be incorporated into the cryptographic argument, then the prover

  • commits to a, b, and c,
  • samples weights f, g, and h using the Fiat-Shamir heuristic,
  • “compresses” the elements in question to d = f·a + g·b + h·c, and
  • uses d in the cryptographic argument.

In the following, all cryptographic arguments are presented using single field elements. Depending on the use case, this single element represents multiple, “compressed” elements.
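The compression step can be sketched as follows. The element names mirror the list above; `random.randrange` stands in for Fiat-Shamir sampling, and scalars mod p keep the sketch simple even though Triton VM's weights are X-field elements.

```python
import random

p = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime, modulus of the B-field

# The prover has committed to a, b, and c; the verifier then supplies weights.
a, b, c = 17, 255, 2**40
f, g, h = (random.randrange(1, p) for _ in range(3))  # stand-in for Fiat-Shamir

# A single compressed element represents the triple in the downstream argument:
# two different triples compress to different values with high probability.
d = (f * a + g * b + h * c) % p
```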

Permutation Argument

The Permutation Argument establishes that two lists A = (a_0, …, a_n) and B = (b_0, …, b_n) are permutations of each other. To achieve this, the lists' elements are interpreted as the roots of polynomials f_A(X) and f_B(X), respectively:

f_A(X) = ∏_{i=0}^{n} (X - a_i)        f_B(X) = ∏_{i=0}^{n} (X - b_i)

The two lists A and B are a permutation of each other if and only if the two polynomials f_A(X) and f_B(X) are identical. By the Schwartz–Zippel lemma, probing the polynomials in a single point establishes the polynomial identity with high probability.1

In Triton VM, the Permutation Argument is generally applied to show that the rows of one table appear in some other table without enforcing the rows' order in relation to each other. To establish this, the prover

  • commits to the main column in question,2
  • samples a random challenge α through the Fiat-Shamir heuristic,
  • computes the running product of f_A(α) and f_B(α) in the respective tables' auxiliary column.

For example, in Table A:

| main column A | auxiliary column A: running product |
|:--------------|:------------------------------------|
| 0 | α - 0 |
| 1 | (α - 0)·(α - 1) |
| 2 | (α - 0)·(α - 1)·(α - 2) |
| 3 | (α - 0)·(α - 1)·(α - 2)·(α - 3) |

And in Table B:

| main column B | auxiliary column B: running product |
|:--------------|:------------------------------------|
| 2 | α - 2 |
| 1 | (α - 2)·(α - 1) |
| 3 | (α - 2)·(α - 1)·(α - 3) |
| 0 | (α - 2)·(α - 1)·(α - 3)·(α - 0) |

It is possible to establish a subset relation by skipping over certain elements when computing the running product. The running product must incorporate the same elements in both tables. Otherwise, the Permutation Argument will fail.

An example of a subset Permutation Argument can be found between the U32 Table and the Processor Table.
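The mechanics can be sketched with scalars mod p (the real argument runs in the X-field; the helper is illustrative):

```python
import random

p = (1 << 64) - (1 << 32) + 1  # B-field modulus

def running_product(column, alpha):
    """Auxiliary column of a Permutation Argument: cumulative products of (alpha - element)."""
    acc, aux = 1, []
    for x in column:
        acc = acc * (alpha - x) % p
        aux.append(acc)
    return aux

alpha = random.randrange(p)    # stand-in for the Fiat-Shamir challenge
table_a = [0, 1, 2, 3]
table_b = [2, 1, 3, 0]         # same multiset, different order
# f_A and f_B have the same roots, so the terminal values agree.
assert running_product(table_a, alpha)[-1] == running_product(table_b, alpha)[-1]
```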


1

This depends on the length n of the lists A and B as well as the field size. For Triton VM, n < 2³². The polynomials f_A(X) and f_B(X) are evaluated over the auxiliary field with p³ ≈ 2¹⁹² elements. The false positive rate is therefore at most n / p³ < 2⁻¹⁶⁰.

Evaluation Argument

The Evaluation Argument establishes that two lists A = (a_0, …, a_n) and B = (b_0, …, b_n) are identical. To achieve this, the lists' elements are interpreted as the coefficients of polynomials f_A(X) and f_B(X), respectively:

f_A(X) = ∑_{i=0}^{n} a_i·X^{n-i}        f_B(X) = ∑_{i=0}^{n} b_i·X^{n-i}

The two lists A and B are identical if and only if the two polynomials f_A(X) and f_B(X) are identical. By the Schwartz–Zippel lemma, probing the polynomials in a single point establishes the polynomial identity with high probability.1

In Triton VM, the Evaluation Argument is generally used to show that (parts of) some row appear in two tables in the same order. To establish this, the prover

  • commits to the main column in question,2
  • samples a random challenge α through the Fiat-Shamir heuristic,
  • computes the running evaluation of f_A(α) and f_B(α) in the respective tables' auxiliary column.

For example, in both Table A and B:

| main column | auxiliary column: running evaluation |
|:------------|:-------------------------------------|
| 0 | 0 |
| 1 | 0·α + 1 |
| 2 | (0·α + 1)·α + 2 |
| 3 | ((0·α + 1)·α + 2)·α + 3 |

It is possible to establish a subset relation by skipping over certain elements when computing the running evaluation. The running evaluation must incorporate the same elements in both tables. Otherwise, the Evaluation Argument will fail.

Examples for subset Evaluation Arguments can be found between the Hash Table and the Processor Table.
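The running evaluation is Horner's rule at the challenge point. A scalar sketch (illustrative; the real argument runs in the X-field):

```python
import random

p = (1 << 64) - (1 << 32) + 1  # B-field modulus

def running_evaluation(column, alpha):
    """Auxiliary column of an Evaluation Argument: Horner evaluation at alpha."""
    acc, aux = 0, []
    for x in column:
        acc = (acc * alpha + x) % p
        aux.append(acc)
    return aux

alpha = random.randrange(p)
# Identical lists produce identical running evaluations in every row, while a
# mere permutation changes the terminal value with overwhelming probability.
assert running_evaluation([0, 1, 2, 3], alpha) == running_evaluation([0, 1, 2, 3], alpha)
```

Unlike the Permutation Argument, the order of the elements matters here, which is exactly why this argument shows that rows appear in two tables in the same order.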


1

This depends on the length n of the lists A and B as well as the field size. For Triton VM, n < 2³². The polynomials f_A(X) and f_B(X) are evaluated over the auxiliary field with p³ ≈ 2¹⁹² elements. The false positive rate is therefore at most n / p³ < 2⁻¹⁶⁰.

Lookup Argument

The Lookup Argument establishes that all elements of list A = (a_0, …, a_n) also occur in list B = (b_0, …, b_k). In this context, A contains the values that are being looked up, while B is the lookup table.1 Both lists A and B may contain duplicates. However, it is inefficient if B does, and B is therefore assumed not to contain duplicates.

The example at the end of this section summarizes the necessary computations for the Lookup Argument. The rest of the section derives those computations. The first step is to interpret both lists' elements as the roots of polynomials f_A(X) and f_B(X), respectively:

f_A(X) = ∏_{i=0}^{n} (X - a_i)        f_B(X) = ∏_{i=0}^{k} (X - b_i)

By counting the number of occurrences of unique elements and eliminating duplicates, f_A(X) can equivalently be expressed as:

f_A(X) = ∏_{i=0}^{k} (X - b_i)^{m_i}

The next step uses

  • the formal derivative f_B'(X) of f_B(X), and
  • the multiplicity-weighted formal derivative f_A^{(m)}(X) of f_A(X).

The former is the familiar formal derivative:

f_B'(X) = ∑_{j=0}^{k} ∏_{i ≠ j} (X - b_i)

The multiplicity-weighted formal derivative uses the lookup-multiplicities m_i as additional factors:

f_A^{(m)}(X) = ∑_{j=0}^{k} m_j·∏_{i ≠ j} (X - b_i)

Let g(X) be the greatest common divisor of f_A(X) and its formal derivative f_A'(X). The polynomial f_A(X)/g(X) has the same roots as f_A(X), but all roots have multiplicity 1. This polynomial is identical to f_B(X) if and only if all elements in list A also occur in list B.

A similar property holds for the formal derivatives: the polynomial f_A'(X)/g(X) is identical to f_A^{(m)}(X) if and only if all elements in list A also occur in list B. By solving for g(X) and equating, the following holds:

f_A(X)·f_A^{(m)}(X) = f_B(X)·f_A'(X)

Optimization through Logarithmic Derivatives

The logarithmic derivative of f(X) is defined as f'(X)/f(X). Furthermore, the equality of monic polynomials f(X) and g(X) is equivalent to the equality of their logarithmic derivatives.2 This allows rewriting above equation as:

f_A'(X)/f_A(X) = f_A^{(m)}(X)/f_B(X)

Using logarithmic derivatives for the equality check decreases the computational effort for both prover and verifier. Concretely, instead of four running products – one each for f_A, f_A^{(m)}, f_B, and f_A' – only two logarithmic derivatives are needed. Namely, considering once again list A containing duplicates, above equation can be written as:

∑_{i=0}^{n} 1/(X - a_i) = ∑_{i=0}^{k} m_i/(X - b_i)

To compute the sums, the lists A and B are recorded in main columns of the two respective tables. Additionally, the lookup multiplicity is recorded explicitly in a main column of the lookup table.

Example

In Table A:

| main column A | auxiliary column A: logarithmic derivative |
|:--------------|:-------------------------------------------|
| 0 | 1/(α - 0) |
| 2 | 1/(α - 0) + 1/(α - 2) |
| 2 | 1/(α - 0) + 2/(α - 2) |
| 1 | 1/(α - 0) + 2/(α - 2) + 1/(α - 1) |
| 2 | 1/(α - 0) + 3/(α - 2) + 1/(α - 1) |

And in Table B:

| main column B | multiplicity | auxiliary column B: logarithmic derivative |
|:--------------|:-------------|:-------------------------------------------|
| 0 | 1 | 1/(α - 0) |
| 1 | 1 | 1/(α - 0) + 1/(α - 1) |
| 2 | 3 | 1/(α - 0) + 1/(α - 1) + 3/(α - 2) |

It is possible to establish a subset relation by skipping over certain elements when computing the logarithmic derivative. The logarithmic derivative must incorporate the same elements with the same multiplicity in both tables. Otherwise, the Lookup Argument will fail.

An example for a Lookup Argument can be found between the Program Table and the Processor Table.
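The terminal equality of the two logarithmic derivatives can be checked directly with field arithmetic. This sketch uses the example tables above; `inv` is Fermat inversion, and the challenge is a stand-in for the verifier's indeterminate.

```python
import random

p = (1 << 64) - (1 << 32) + 1  # B-field modulus

def inv(x):
    return pow(x, p - 2, p)    # Fermat inversion in the B-field

alpha = random.randrange(3, p)       # stand-in for the verifier's indeterminate
looked_up = [0, 2, 2, 1, 2]          # list A: looked-up values, duplicates allowed
multiplicities = {0: 1, 1: 1, 2: 3}  # list B with per-row lookup multiplicity

lhs = sum(inv(alpha - a) for a in looked_up) % p
rhs = sum(m * inv(alpha - b) for b, m in multiplicities.items()) % p
assert lhs == rhs                    # the two logarithmic derivatives agree
```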


1

The lookup table may represent a mapping from one or more “input” elements to one or more “output” elements – see “Compressing Multiple Elements.”

2

See Lemma 3 in this paper for a proof.

Memory Consistency

Triton has three memory-like units: the RAM, the jump stack, and the op stack. Each such unit has a corresponding table. Memory-consistency is the property that whenever the processor reads a data element from these units, the value that it receives corresponds to the value sent the last time when that cell was written. Memory consistency follows from two intermediate properties.

  1. Contiguity of regions of constant memory pointer. After selecting from each table all the rows with any given memory pointer, the resulting sublist is contiguous, which is to say, there are no gaps and two such sublists can never interleave.
  2. Inner-sorting within contiguous regions. Within each such contiguous region, the rows are sorted in ascending order by clock cycle.

The contiguity of regions of constant memory pointer is established differently for the RAM Table than for the Op Stack or Jump Stack Tables. The Op Stack and Jump Stack Tables enjoy a particular property whereby the memory pointer can only ever increase or decrease by one (or stay the same). As a result, simple AIR constraints can enforce the correct sorting by memory pointer. In contrast, the memory pointer for the RAM Table can jump arbitrarily. As explained in the next section, a Contiguity Argument establishes contiguity.

The correct inner sorting is established the same way for all three memory-like tables. Their respective clock jump differences – differences of clock cycles within regions of constant memory pointer – are shown to be contained in the set of all clock cycles through Lookup Arguments. Under reasonable assumptions about the running time, this fact implies that all clock jumps are directed forwards, as opposed to backwards, which in turn implies that the rows are sorted by clock cycle.

The next sections elaborate on these constructions. A dedicated section shows that these two properties do indeed suffice to prove memory consistency.

Contiguity of Memory-Pointer Regions

Contiguity for Op Stack Table

In each cycle, the memory pointer for the Op Stack Table, op_stack_pointer, can only ever increase by one, remain the same, or decrease by one. As a result, it is easy to enforce that the entire table is sorted by memory pointer using one initial constraint and one transition constraint.

  • Initial constraint: op_stack_pointer starts with 16, so in terms of polynomials the constraint is op_stack_pointer - 16.
  • Transition constraint: the new op_stack_pointer is either the same as the previous or one larger. The polynomial representation for this constraint is (op_stack_pointer' - op_stack_pointer - 1) · (op_stack_pointer' - op_stack_pointer).

Contiguity for Jump Stack Table

Analogously to the Op Stack Table, the Jump Stack's memory pointer jsp can only ever decrease by one, remain the same, or increase by one, within each cycle. As a result, similar constraints establish that the entire table is sorted by memory pointer.

  • Initial constraint: jsp starts with zero, so in terms of polynomials the constraint is jsp.
  • Transition constraint: the new jsp is either the same as the previous or one larger. The polynomial representation for this constraint is (jsp' - jsp - 1) ⋅ (jsp' - jsp).

Contiguity for RAM Table

The Contiguity Argument for the RAM Table establishes that all RAM pointer regions start with distinct values. It is easy to ignore consecutive duplicates in the list of all RAM pointers using one additional main column. This allows identification of the RAM pointer values at the regions' boundaries, (ramp_0, …, ramp_k). The Contiguity Argument then shows that this list contains no duplicates. For this, it uses Bézout's identity for univariate polynomials. Concretely, the prover shows that for the polynomial f(X) with all elements of (ramp_0, …, ramp_k) as roots, i.e.,

f(X) = ∏_{i=0}^{k} (X - ramp_i)

and its formal derivative f'(X), there exist u(X) and v(X) with appropriate degrees such that

u(X)·f(X) + v(X)·f'(X) = 1.

In other words, the Contiguity Argument establishes that the greatest common divisor of f(X) and f'(X) is 1. This implies that all roots of f(X) have multiplicity 1, which holds if and only if there are no duplicate elements in the list (ramp_0, …, ramp_k).
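The underlying fact – a polynomial has no repeated roots if and only if gcd(f, f') is a nonzero constant – can be checked with a small polynomial-arithmetic sketch over the B-field. All helpers are hypothetical and operate on little-endian coefficient lists.

```python
p = (1 << 64) - (1 << 32) + 1  # the Oxfoi prime

def pmul(f, g):
    """Multiply polynomials given as little-endian coefficient lists over F_p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def derivative(f):
    """Formal derivative over F_p."""
    return [(i * c) % p for i, c in enumerate(f)][1:]

def pmod(f, g):
    """Remainder of f modulo g over F_p."""
    f = f[:]
    inv_lead = pow(g[-1], p - 2, p)
    while len(f) >= len(g) and any(f):
        coef = f[-1] * inv_lead % p
        for i in range(len(g)):
            f[len(f) - len(g) + i] = (f[len(f) - len(g) + i] - coef * g[i]) % p
        f.pop()
    while f and f[-1] == 0:
        f.pop()
    return f

def gcd_is_one(f):
    """True iff gcd(f, f') is a nonzero constant, i.e., f has no repeated roots."""
    g, h = f, derivative(f)
    while h:
        g, h = h, pmod(g, h)
    return len(g) == 1

def from_roots(roots):
    f = [1]
    for r in roots:
        f = pmul(f, [(-r) % p, 1])  # multiply by (X - r)
    return f

assert gcd_is_one(from_roots([3, 7, 42]))     # distinct boundary values
assert not gcd_is_one(from_roots([3, 7, 7]))  # a duplicate creates a shared factor
```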

The following columns and constraints are needed for the Contiguity Argument:

  • Main column iord and two deterministic transition constraints enable conditioning on a changed memory pointer.
  • Main columns bcpc0 and bcpc1 and two deterministic transition constraints contain and constrain the symbolic Bézout coefficient polynomials' coefficients.
  • Auxiliary column rpp is a running product similar to that of a conditioned permutation argument. A randomized transition constraint verifies the correct accumulation of factors for updating this column.
  • Auxiliary column fd is the formal derivative of rpp. A randomized transition constraint verifies the correct application of the product rule of differentiation to update this column.
  • Auxiliary columns bc0 and bc1 build up the Bézout coefficient polynomials based on the corresponding main columns, bcpc0 and bcpc1. Two randomized transition constraints enforce the correct build-up of the Bézout coefficient polynomials.
  • A terminal constraint takes the weighted sum of the running product and the formal derivative, where the weights are the Bézout coefficient polynomials, and equates it to one. This equation asserts the Bézout relation.

The following columns are used to establish memory consistency; columns not needed for this purpose are omitted. The relevant main columns are ramp, iord, bcpc0, and bcpc1; the auxiliary columns are rpp, fd, bc0, and bc1.

The values contained in the auxiliary columns are undetermined until the verifier's challenge α is known; before that happens, it is worthwhile to present the polynomial expressions in the indeterminate X, anticipating the substitution X ↦ α. The constraints are articulated relative to α.

The inverse of ramp difference iord takes the inverse of the difference between the current and next ramp values if that difference is non-zero, and zero otherwise. This rule corresponds to two transition constraint polynomials:

  • (ramp' - ramp) ⋅ ((ramp' - ramp) ⋅ iord - 1)
  • iord ⋅ ((ramp' - ramp) ⋅ iord - 1)

The running product rpp starts with α - ramp initially, which is enforced by an initial constraint. It accumulates a factor α - ramp' in every pair of rows where ramp ≠ ramp'. This evolution corresponds to one transition constraint: (ramp' - ramp) ⋅ (rpp' - rpp ⋅ (α - ramp')) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (rpp' - rpp)

Denote by f_rpp(X) the polynomial that accumulates a factor (X - ramp') in every pair of rows where ramp ≠ ramp'.

The column fd contains the “formal derivative” of the running product with respect to X. The formal derivative is initially 1, which is enforced by an initial constraint. The transition constraint applies the product rule of differentiation conditioned upon the difference in ramp being nonzero; in other words, if ramp = ramp' then the same value persists; but if ramp ≠ ramp' then fd is mapped as

fd ↦ fd·(X - ramp') + rpp.

This update rule is called the product rule of differentiation because, assuming ramp' ≠ ramp, then

d/dX rpp' = d/dX (rpp·(X - ramp')) = rpp + (X - ramp')·(d/dX rpp).

The transition constraint for fd is (ramp' - ramp) ⋅ (fd' - rpp - (α - ramp') ⋅ fd) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (fd' - fd).

The polynomials bc0(X) and bc1(X) are the Bézout coefficient polynomials satisfying the relation bc0 ⋅ rpp + bc1 ⋅ fd = 1. The prover finds bc0 and bc1 as the minimal-degree Bézout coefficients as returned by the extended Euclidean algorithm. Concretely, the degree of bc0 is smaller than the degree of fd, and the degree of bc1 is smaller than the degree of rpp.
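A sketch of how the prover can compute such coefficients: run the polynomial extended Euclidean algorithm, here over a toy prime field with placeholder polynomials (the real argument runs over the B-field). Polynomials are coefficient lists, lowest degree first.

```python
P = 17   # toy modulus; the real argument runs over the B-field

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return trim([(x + y) % P for x, y in zip(a, b)])

def scale(a, s):
    return trim([(x * s) % P for x in a])

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def divmod_poly(a, b):
    q, r = [], list(a)
    inv = pow(b[-1], -1, P)
    while r and len(r) >= len(b):
        s = (r[-1] * inv) % P
        d = len(r) - len(b)
        q = add(q, [0] * d + [s])
        r = add(r, scale([0] * d + list(b), P - s))
    return q, r

def xgcd(a, b):
    # returns (g, u, v) with u·a + v·b = g
    r0, r1, u0, u1, v0, v1 = list(a), list(b), [1], [], [], [1]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, add(u0, scale(mul(q, u1), P - 1))
        v0, v1 = v1, add(v0, scale(mul(q, v1), P - 1))
    return r0, u0, v0

# rpp = (X - 2)(X - 5) is squarefree; fd = 2X - 7 is its formal derivative.
rpp = mul([P - 2, 1], [P - 5, 1])
fd = [P - 7, 2]

g, u, v = xgcd(rpp, fd)
norm = pow(g[-1], -1, P)                       # make the gcd monic
g, bc0, bc1 = scale(g, norm), scale(u, norm), scale(v, norm)

assert g == [1]                                # gcd(rpp, fd) = 1
assert add(mul(bc0, rpp), mul(bc1, fd)) == [1] # Bézout relation
assert len(bc0) < len(fd) and len(bc1) < len(rpp)  # minimal-degree coefficients
```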

The (scalar) coefficients of the Bézout coefficient polynomials are recorded in main columns bcpc0 and bcpc1, respectively. The transition constraints for these columns enforce that the value in one such column can only change if the memory pointer ramp changes. However, unlike the conditional update rule enforced by the transition constraints of rp and fd, the new value is unconstrained. Concretely, the two transition constraints are:

  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc0' - bcpc0)
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc1' - bcpc1)

Additionally, bcpc0 must initially be zero, which is enforced by an initial constraint. This upper-bounds the degrees of the Bézout coefficient polynomials, which are built from main columns bcpc0 and bcpc1. Two transition constraints enforce the correct build-up of the Bézout coefficient polynomials:

  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bc0' - bc0) + (ramp' - ramp) ⋅ (bc0' - α ⋅ bc0 - bcpc0')
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bc1' - bc1) + (ramp' - ramp) ⋅ (bc1' - α ⋅ bc1 - bcpc1')

Additionally, bc0 must initially be zero, and bc1 must initially equal bcpc1. This is enforced by initial constraints bc0 and bc1 - bcpc1.
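The build-up rule is Horner's method in disguise: feeding in one coefficient per ramp change leaves the polynomial's evaluation at α in bc0. A minimal sketch; the modulus, challenge, and coefficients are all placeholders:

```python
P, ALPHA = 101, 7        # toy modulus and stand-in challenge
coeffs = [3, 0, 5, 9]    # toy coefficients, highest degree first

bc0 = 0                  # initial constraint: bc0 starts at zero
for c in coeffs:         # one step per change of the RAM pointer
    bc0 = (ALPHA * bc0 + c) % P   # build-up rule: bc0' = α·bc0 + bcpc0'

# The running value equals the polynomial evaluated at α (Horner's rule).
assert bc0 == (3 * ALPHA**3 + 0 * ALPHA**2 + 5 * ALPHA + 9) % P
```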

Lastly, the verifier checks the randomized AIR terminal constraint bc0 ⋅ rpp + bc1 ⋅ fd - 1.

Completeness.

For honest provers, the gcd of rpp and fd is guaranteed to be one. As a result, the protocol has perfect completeness.

Soundness.

If the table has at least one non-contiguous region, then rpp and fd share at least one factor. As a result, no Bézout coefficients bc0 and bc1 can exist such that bc0 ⋅ rpp + bc1 ⋅ fd = 1. The verifier therefore probes two unequal polynomials whose degree is bounded by the length of the execution trace. According to the Schwartz-Zippel lemma, the false positive probability is at most this degree bound divided by the size of the field.

Zero-Knowledge.

Since the contiguity argument is constructed using only AET and AIR, zero-knowledge follows from Triton VM's zk-STARK engine.

Summary of constraints

We present a summary of all constraints.

Initial

  • bcpc0
  • bc0
  • bc1 - bcpc1
  • fd - 1
  • rpp - (α - ramp)

Consistency

None.

Transition

  • (ramp' - ramp) ⋅ ((ramp' - ramp) ⋅ iord - 1)
  • iord ⋅ ((ramp' - ramp) ⋅ iord - 1)
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc0' - bcpc0)
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bcpc1' - bcpc1)
  • (ramp' - ramp) ⋅ (rpp' - rpp ⋅ (α - ramp')) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (rpp' - rpp)
  • (ramp' - ramp) ⋅ (fd' - rpp - (α - ramp') ⋅ fd) + (1 - (ramp' - ramp) ⋅ iord) ⋅ (fd' - fd)
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bc0' - bc0) + (ramp' - ramp) ⋅ (bc0' - α ⋅ bc0 - bcpc0')
  • (1 - (ramp' - ramp) ⋅ iord) ⋅ (bc1' - bc1) + (ramp' - ramp) ⋅ (bc1' - α ⋅ bc1 - bcpc1')

Terminal

  • bc0 ⋅ rpp + bc1 ⋅ fd - 1

1

See data structures and registers for explanations of the specific value 16.

Clock Jump Differences and Inner Sorting

The previous sections show how it is proven that in the Jump Stack, Op Stack, and RAM Tables, the regions of constant memory pointer are contiguous. The next step is to prove that within each contiguous region of constant memory pointer, the rows are sorted by clock cycle. That is the topic of this section.

The problem arises from clock jumps, the phenomenon where the clock cycle increases even though the memory pointer does not change. If arbitrary clock jumps were allowed, nothing would prevent a cheating prover from using a table whose rows do not reflect the chronological order of memory accesses, giving rise to an exploitable attack. So it must be shown that every clock jump is directed forward and not backward.

Our strategy is to show that the difference, i.e., the next clock cycle minus the current clock cycle, is itself a clock cycle. Recall that the Processor Table's clock cycles run from 0 to . Therefore, a forward directed clock jump difference is in , whereas a backward directed clock jump's difference is in . No other clock jump difference can occur. If , there is no overlap between sets and . As a result, in this regime, showing that a clock jump difference is in guarantees that the jump is forward-directed.

The set of values in the Processor Table's clock cycle column is . A standard Lookup Argument can show that the clock jump differences are elements of that column. Since all three memory-like tables look up the same property, the Processor Table acts as a single server as opposed to three distinct servers. Concretely, the lookup multiplicities of the three clients, i.e., the memory-like tables, are recorded in a single column cjd_mul. It contains the element-wise sum of the three distinct lookup multiplicities.
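The bookkeeping for cjd_mul can be sketched as follows; the toy tables and clock cycles are made up for illustration:

```python
from collections import Counter

processor_clk = list(range(8))     # toy Processor Table clock cycles 0..7

def clock_jump_differences(rows):
    # rows: (clk, memory_pointer), contiguous and regionally sorted
    return [b[0] - a[0] for a, b in zip(rows, rows[1:]) if a[1] == b[1]]

ram_rows = [(0, 5), (3, 5), (4, 5), (1, 9), (6, 9)]
op_stack_rows = [(0, 0), (2, 0), (5, 0)]
jump_stack_rows = [(0, 0), (7, 0)]

# cjd_mul: element-wise sum of the three tables' lookup multiplicities.
multiplicities = Counter()
for table in (ram_rows, op_stack_rows, jump_stack_rows):
    multiplicities.update(clock_jump_differences(table))
cjd_mul = [multiplicities[clk] for clk in processor_clk]

assert sum(cjd_mul) == 6                                 # six clock jumps in total
assert all(d in processor_clk for d in multiplicities)   # lookups can succeed
```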

Proof of Memory Consistency

Whenever the Processor Table reads a value "from" a memory-like table, this value appears nondeterministically and is unconstrained by the main table AIR constraints. However, there is a permutation argument that links the Processor Table to the memory-like table in question. The construction satisfies memory consistency if it guarantees that whenever a memory cell is read, its value is consistent with the last time that cell was written.

The above is too informal to provide a meaningful proof for. Let's put formal meanings on the proposition and premises, before reducing the former to the latter.

Let denote the Processor Table and denote the memory-like table. Both have height . Both have columns clk, mp, and val. For the column clk coincides with the index of the row. has another column ci, which contains the current instruction, which is write, read, or any. Obviously, mp and val are abstract names that depend on the particular memory-like table, just like write, read, and any are abstract instructions that depend on the type of memory being accessed. In the following math notation we use to denote the column name and to denote the value that the column might take in a given row.

Definition 1 (contiguity): The memory-like table is contiguous iff all sublists of rows with the same memory pointer mp are contiguous. Specifically, for any given memory pointer , there are no rows with a different memory pointer in between rows with memory pointer .

Definition 2 (regional sorting): The memory-like table is regionally sorted iff for every contiguous region of constant memory pointer, the clock cycle increases monotonically.

The symbol denotes the integer less than operator, after lifting the operands from the finite field to the integers.

Definition 3 (memory consistency): A Processor Table has memory consistency if whenever a memory cell at location is read, its value corresponds to the previous time the memory cell at location was written. Specifically, there are no writes in between the write and the read that give the cell a different value.

Theorem 1 (memory consistency): Let be a Processor Table. If there exists a memory-like table such that

  • selecting for the columns clk, mp, val, the two tables' lists of rows are permutations of each other; and
  • is contiguous and regionally sorted; and
  • has no changes in val that coincide with clock jumps;

then has memory consistency.

Proof. For every memory pointer value , select the sublist of rows in order. The way this sublist is constructed guarantees that it coincides with the contiguous region of where the memory pointer is also .

Iteratively apply the following procedure to : remove the bottom-most row if it does not correspond to a row that constitutes a counter-example to memory consistency. Specifically, let be the clock cycle of the previous row in .

  • If satisfies then by construction it also satisfies . As a result, row is not part of a counter-example to memory consistency. We can therefore remove the bottom-most row and proceed to the next iteration of the outermost loop.
  • If then we can safely ignore this row: if there is no clock jump, then the absence of a write-instruction guarantees that val cannot change; and if there is a clock jump, then by assumption, val cannot change. So set to the clock cycle of the row above it in and proceed to the next iteration of the inner loop. If there are no rows left for to index, then there is no possible counterexample for and so remove the bottom-most row of and proceed to the next iteration of the outermost loop.
  • The case cannot occur because by construction of , cannot change.
  • The case cannot occur because the list was constructed by selecting only elements with the same memory pointer.
  • This list exhausts the possibilities of condition (1).

When consists of only two rows, it can contain no counter-examples. By applying the above procedure, we can reduce every correctly constructed sublist to a list consisting of two rows. Therefore, for every , the sublist is free of counter-examples to memory consistency. Equivalently, is memory consistent.
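The premises of Theorem 1 lend themselves to executable checks. A sketch on a toy memory-like table, with the abstract instruction names from the text; the rows and values are made up for illustration:

```python
# Rows are (clk, mp, ci, val); "write"/"read" are the abstract instructions.
def contiguous(rows):
    # no memory pointer may reappear after a different pointer intervened
    seen, last = set(), None
    for _, mp, _, _ in rows:
        if mp != last and mp in seen:
            return False
        seen.add(mp)
        last = mp
    return True

def regionally_sorted(rows):
    # within each region of constant mp, clk increases monotonically
    return all(a[0] < b[0] for a, b in zip(rows, rows[1:]) if a[1] == b[1])

def no_val_change_on_clock_jump(rows):
    # a clock jump within a region must not coincide with a change in val
    for a, b in zip(rows, rows[1:]):
        clock_jump = b[1] == a[1] and b[0] != a[0] + 1
        if clock_jump and b[3] != a[3]:
            return False
    return True

table = [
    (0, 7, "write", 42), (3, 7, "read", 42), (9, 7, "read", 42),
    (1, 8, "write", 10), (2, 8, "write", 11), (6, 8, "read", 11),
]
assert contiguous(table)
assert regionally_sorted(table)
assert no_val_change_on_clock_jump(table)
```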

Program Attestation

“Program attestation” is the notion of cryptographically linking a program to the proof produced by that program's execution. Informally, program attestation allows Triton VM to prove claims of the form:

There exists a program with hash digest X that, given input Y, produces output Z.

Without program attestation, Triton VM would only be able to prove claims of the form:

There exists a program that, given input Y, produces output Z.

The proof system is zero-knowledge with respect to the actual program. That is, the verifier learns nothing about the program of which correct execution is proven, except the program's digest1. This property is generally desirable, for example to keep a specific machine learning model secret while being able to prove its correct execution. However, it can be equally desirable to reveal the program, for example to allow code auditing. In such cases, the program's source code can be revealed and digest equality can be checked.

As is the case throughout Triton VM, the used hash function is Tip5.

Mechanics

Program attestation is part of initializing Triton VM. That is, control is passed to the to-be-executed program only after the following steps have been performed:

  1. The instructions, located in program memory, are padded. The padding follows Tip5's rules for input of variable length2.
  2. The padded instructions are copied to the Hash Table.
  3. The Hash Table computes the program's digest by iterating Tip5 multiple times in Sponge mode.
  4. The program digest is copied to the bottom of the operational stack, i.e., stack elements st11 through st15.

The program digest is part of the generated proof3, and hence public. During proof verification, AIR constraints in the Hash Table establish that the claimed and the computed program digests are identical, and that the digest was computed integrally. This establishes verifiable integrity of the publicly claimed program digest. The figure below visualizes the interplay of tables and public data for proof verification.

Notably, a program has access to the hash digest of itself. This is useful for recursive verification: since the program digest is both part of the proof that is being verified and accessible during execution, the two can be checked for equality at runtime. This way, the recursive verifier can know whether it is actually recursing, or whether it is verifying a proof for some other program. After all, the hash digest of the verifier program cannot be hardcoded into the verifier program, because that would be a circular dependency.
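The attestation steps can be sketched as follows. SHA-256 stands in for Tip5, the assumed rate of 10 matches Tip5's padding rule, and the toy program and word encoding are illustrative only:

```python
import hashlib

RATE = 10   # assumed sponge rate, matching the padding rule

def pad(program_words):
    # one 1, then the minimal number of 0s up to a multiple of the rate
    padded = list(program_words) + [1]
    padded += [0] * (-len(padded) % RATE)
    return padded

def program_digest(program_words):
    # stand-in for iterating Tip5 in Sponge mode over the padded program
    data = b"".join(w.to_bytes(8, "little") for w in pad(program_words))
    return hashlib.sha256(data).hexdigest()

program = [11, 0, 42, 19]                  # toy instruction stream
claimed_digest = program_digest(program)   # published as part of the claim

# Code auditing: reveal the program, recompute, compare digests.
assert program_digest(program) == claimed_digest
assert program_digest([11, 0, 42, 20]) != claimed_digest
assert len(pad(program)) % RATE == 0
```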


1

Since hash functions are deterministic, a programmer desiring resistance against enumeration attacks might want to include blinding elements. This is easily possible in, for example, dead code, by including instructions like push random_element. The distribution from which random_element is sampled and the number of such instructions determine the security level against enumeration attacks.

2

Padding is one 1 followed by the minimal number of 0’s necessary to make the padded input length a multiple of the rate, which is 10. See also section 2.5 “Fixed-Length versus Variable-Length” in the Tip5 paper.

3

More specifically, of the claim, which itself is part of the proof.

Index Sampling

Task: given pseudorandomness, sample a fixed number of distinct but otherwise uniform indices from a given interval. This index sampling task is needed in several locations, not just in the STARK engine. Outside of the VM, this task can be achieved by rejection sampling until the list of collected samples has the requisite size. Inside the VM, we would like to avoid variable-length execution paths. This note explains how to do it using non-determinism.

Solution 1: Out-of-Order Divination

We want to sample exactly unique indices, but the sampler in the VM is allowed to divine counter values.

Outside VM

By rejection sampling until the list of samples has the requisite length, I mean something like this:

    function sample_indices(count, upper_bound, randomness):
        list <- []
        counter <- 0
        while length(list) < count:
            integer <- Hash(randomness || counter)
            counter <- counter + 1
            candidate_index <- integer % upper_bound
            if candidate_index in list:
                continue // reject
            else:
                list <- list || candidate_index
        return list

The loop needs to iterate at least count times. It is possible to parallelize this first step by separating the first count iterations from the variable number of follow-ups.

This Python-like pseudocode hides important complexity behind the list membership test. To expand this logic, we need sorting and consecutive deduplication.

    function sample_indices(count, upper_bound, randomness):
        list <- []
        counter <- 0
        while length(list) < count:
            integer <- Hash(randomness || counter)
            counter <- counter + 1
            candidate_index <- integer % upper_bound
            list <- sort(list || candidate_index)
            list.deduplicate()
        return list
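A runnable version of the out-of-VM sampler, with SHA-256 standing in for the VM's hash function; it also records the accepted counter values so they can later be divined inside the VM:

```python
import hashlib

def sample_indices(count, upper_bound, randomness):
    indices, counters = [], []
    counter = 0
    while len(indices) < count:
        preimage = randomness + counter.to_bytes(8, "little")
        integer = int.from_bytes(hashlib.sha256(preimage).digest(), "little")
        candidate = integer % upper_bound
        if candidate not in indices:       # reject duplicates
            indices.append(candidate)
            counters.append(counter)       # retained for in-VM divination
        counter += 1
    return indices, counters

indices, counters = sample_indices(16, 1 << 12, b"seed")
assert len(set(indices)) == 16
assert all(0 <= i < (1 << 12) for i in indices)
```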

Inside VM

In a nutshell, the VM nondeterministically guesses ("divines") the values of the counters out of order, checks that they are within allowed bounds, and then checks that the resulting list of indices is sorted and has no duplicates.

    function sample_indices(count, upper_bound, randomness):
        list <- []
        for i in [0:count):
            counter <- divine()
            assert(counter >= 0)
            assert(counter < count + margin)
            integer <- Hash(randomness || counter)
            index <- integer % upper_bound
            list <- list || index
            if i > 0:
                assert(list[i] < list[i-1])
        return list

The Virtual Machine supplies the right counters through divine(), having observed them from the out-of-VM execution.

The parameter margin should be large enough so that, with overwhelming probability, there is a set of count-many counters in [0:(count+margin)) that map to distinct indices. At the same time, the parameter margin should not be too large as it gives the malicious adversary more flexibility in selecting favorable indices.
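The interplay of out-of-VM search and in-VM verification can be simulated as follows. SHA-256 stands in for the hash; the counters are found by brute force outside the VM, then replayed in sorted-index order. Ascending order is chosen here, but any strict order rules out duplicates:

```python
import hashlib

def index_of(counter, upper_bound, randomness):
    preimage = randomness + counter.to_bytes(8, "little")
    return int.from_bytes(hashlib.sha256(preimage).digest(), "little") % upper_bound

count, upper_bound, randomness = 8, 1 << 10, b"seed"

# Out-of-VM: brute-force counters whose indices are pairwise distinct.
good_counters, c = [], 0
while len(good_counters) < count:
    known = {index_of(g, upper_bound, randomness) for g in good_counters}
    if index_of(c, upper_bound, randomness) not in known:
        good_counters.append(c)
    c += 1

# Replay in sorted-index order; this is what divine() would supply.
divined = sorted(good_counters, key=lambda k: index_of(k, upper_bound, randomness))

# In-VM: fixed-length loop with bounds and strict-sortedness checks.
indices = []
for i, counter in enumerate(divined):
    assert 0 <= counter < c                 # within the allowed counter range
    indices.append(index_of(counter, upper_bound, randomness))
    if i > 0:
        assert indices[i] > indices[i - 1]  # strictly sorted: no duplicates
assert len(indices) == count
```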

Failure Probability

In order for the honest user to fail to find a good counter set, the interval [0:(count+margin)) must contain at least margin+1 collisions under x -> Hash(randomness || x) % upper_bound. We continue by modeling this map as uniformly random.

The event of margin+1 or more collisions implies that margin+1 samples were drawn from a subset of at most count-1 marked indices. The probability of the implied event is where denotes upper_bound, denotes count, and denotes margin.

Given the count and the upper_bound, the appropriate value for margin that pushes this failure probability below is

For a concrete example, set and . Then needs to be at least 6.

Security Degradation

Suppose the user is malicious and hopes to conceal his fraud by selecting a set of indices that do not expose it. Suppose that the proportion of subsets of of size that are suitable for the adversary is . Then clearly with the standard old index sampling method the attacker's success probability is bounded by . The question is whether the improved index sampler enables a higher success probability (and if so, how much higher).

The attacker has access to at most subsets of of size . The probability that a given subset is suitable for the attack is , and so:

  • The probability that one subset is unsuitable for attack is .
  • The probability that all subsets are unsuitable for attack is . In this step we assume that whether a subset is suitable for attack is independent of whether a different subset is suitable for attack, even if the intersection of the two subsets is nonempty.
  • The probability that at least one subset is suitable for attack is .

The binomial formula expands We assume that each subsequent term is smaller in absolute value than the previous because dominates. Indeed, plugging the well-known upper bound on the binomial coefficient , we get and the assumption holds already when . For a concrete example, set and , then the left hand side of this expression is roughly whereas these parameters target a security level of .

Using this assumption (on the shrinking absolute value of each successive term) we can derive an expression to quantify the security degradation.

For a concrete example, set and . Then and so we lose about bits of security.

Solution 2: Stupid Safety Margin

How about we sample indices from the start, and use them all no matter what? We only need for security. The margin parameter is chosen such that finding more than collisions, which is required to undermine the security guarantee, is cryptographically improbable.

The benefit of this solution is that both the index samplers, i.e., inside and outside the VM, have a simple description. Furthermore, there is no longer a nonzero failure probability for honest users. The drawback is that more work is done than necessary, and the proofs are larger too.

So what is the right value of margin? It turns out that this probability matches the correctness error derived for the previous solution. For redundancy, this derivation is repeated here.

Sampling or more collisions requires hitting a marked subset of at most indices times. The probability of this event is therefore . This probability is smaller than when

For a concrete example, set and . Then needs to be at least 6.

Zero-Integral and Sum-Check

Let be a table of 1 column and rows, and let be any polynomial that agrees with when evaluated on , where is a generator of the subgroup of order . Group theory lets us prove and efficiently verify if .

Decompose into , where is the zerofier over , and where has degree at most . The table sums to zero if and only if integrates to zero over because

and this latter proposition is true if and only if the constant term of is zero.

Theorem. for a subgroup of order .

Let be a subgroup of . If then and also because the elements of come in pairs: . Therefore .

The map is a morphism of groups with . Therefore we have:

The polynomial has only one term whose exponent is , which is the constant term.
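A numeric check of the theorem in a toy setting: in F_17, the element 9 generates a subgroup of order 8, and the power sums over this subgroup vanish except when the exponent is a multiple of 8:

```python
# In F_17, g = 9 generates a multiplicative subgroup H of order 8.
P, g, n = 17, 9, 8
H = [pow(g, i, P) for i in range(n)]
assert len(set(H)) == n            # g really has order 8

# The sum of x^k over H is 0 unless n divides k, in which case it is |H|.
for k in range(2 * n + 1):
    total = sum(pow(x, k, P) for x in H) % P
    assert total == (n if k % n == 0 else 0)
```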

This observation gives rise to the following Polynomial IOP for verifying that a polynomial oracle integrates to 0 on a subgroup of order some power of 2.

Protocol Zero-Integral

  • Prover computes and .
  • Prover computes .
  • Prover sends , of degree at most , and , of degree at most to Verifier.
  • Verifier samples z uniformly at random from the field, queries all three polynomials in z, and receives their values.
  • Verifier tests .

This protocol can be adapted to show instead that a given polynomial oracle integrates to on the subgroup , giving rise to the well-known Sum-Check protocol. The adaptation follows from the Verifier's capacity to simulate from , where . This simulated polynomial is useful because

In other words, integrates to on iff integrates to on , and we already have a protocol to establish the latter claim.
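The reduction can be checked numerically in a toy field: subtracting the claimed sum divided by the subgroup's order yields a polynomial that integrates to zero, so the Zero-Integral protocol applies to it. The field, subgroup, and polynomial below are placeholders:

```python
# Toy field F_17; subgroup H of order 8 generated by 9.
P, g, n = 17, 9, 8
H = [pow(g, i, P) for i in range(n)]

def f(x):
    # an arbitrary toy polynomial claimed to sum to a over H
    return (3 * x**2 + 5 * x + 2) % P

a = sum(f(x) for x in H) % P       # the claimed sum
n_inv = pow(n, -1, P)

def f_hat(x):
    # the polynomial the verifier can simulate from f and the claim a
    return (f(x) - a * n_inv) % P

# f_hat integrates to zero over H, so Zero-Integral applies to it.
assert sum(f_hat(x) for x in H) % P == 0
```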