Dmytro Taranovsky

July 17, 2005

Extensively Expanded and Revised: June 27, 2017

**Abstract:** We use fast-growing finite and infinite sequences of natural numbers and more complicated constructs to define models of hypercomputation and interpret non-arithmetic predicates, with the strongest extensions reaching full second order arithmetical truth and beyond. Since the predicates are interpreted using properties of certain natural finite structures, they are arguably finitistic.

**Contents:**

1 Introduction

2 A Model of Hypercomputation

3 Strengthenings to Multiple Sequences

4 Finite Structures

5 $Σ^1_2$ Truth

6 Second Order Arithmetic

7 Beyond Second Order Arithmetic

7.1 Estimators with Transfinite Levels

7.2 Using Pattern Enumerators

7.3 Using Countable Infinity

**1 Introduction**

Finitism refers to definitions and arguments that do not use an actual infinity, though they may involve a potential infinity. The potential infinity refers to the fact that the natural numbers are unlimited, and that there are arbitrarily large natural numbers. Hyperfinitism by contrast deals only with "small" natural numbers, such as those whose binary representation is feasible.

The central question about finitism is which properties are definable and which claims are demonstrable finitistically. At one extreme is the view that finitistic properties are the recursive ones, and that finitistic demonstration does not go much beyond PRA. At the other extreme is the view that every definable property (including the truth predicate of set theory) of natural numbers can be defined finitistically, and that every true mathematical claim is finitistically demonstrable. On that view, infinite structures are useful as conceptual aids and for argument simplification, but can ultimately be replaced by finitistically definable finite analogues. Finitistic strong consistency proofs can involve either new finitistic properties or just new insights about the recursive properties. Most logicians hold a narrow view of finitistic definability because of the absence of evidence to the contrary. Plausibly finitistic ways to define properties of high complexity or to prove strong consistency statements were simply not known before this paper (or at least not as generally known or developed). One compromise view is that philosophically, there is a continuum of notions between being fully finitistic and using infinity, and our results help to elucidate that continuum. And one formalist (or under Platonism, metaphoric) view is that our notions are an alternative approach to infinity: Infinity can be expressed directly (using infinite sets) or indirectly as an idealization of large numbers, fast-growing sequences, and other constructs (as this paper develops). In any case, whether or not the definitions here are finitistic, they enrich our foundational understanding by interpreting statements involving infinity as properties of certain natural finite structures.

Hypercomputation refers to idealized computers computing non-recursive functions. Models of hypercomputation can be divided into two types. One uses oracles, either directly or through some auxiliary mechanism. The other uses an infinite amount of computation in finite time. In the models we propose (which are technically oracle based), finite computations correspond to infinite computations by using fast-growing sequences to convert unbounded searches into bounded ones.
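As a toy illustration of converting an unbounded search into a bounded one, consider the following sketch. The predicate `P`, the bound `A`, and the function name are illustrative, not from the text; the point is only that a bounded search gives the right answer whenever the bound majorizes the least witnesses.

```python
def bounded_jump(P, A, n):
    """Decide 'does there exist m with P(n, m)?' by searching only below
    A(n).  If A(n) exceeds the least witness whenever one exists (that is,
    A is 'sufficiently fast-growing' for P), the bounded search gives the
    same answer as the unbounded one."""
    return any(P(n, m) for m in range(A(n)))

# toy instance: 'machine n halts' stands for 'n has a nontrivial divisor'
P = lambda n, m: 2 <= m < n and n % m == 0
A = lambda n: n + 1   # grows fast enough for this particular P
composites = [n for n in range(2, 10) if bounded_jump(P, A, n)]
# composites == [4, 6, 8, 9]
```

Here `A(n) = n + 1` suffices because any witness for this `P` is below `n`; for non-recursive problems no recursive bound works, which is why the fast-growing sequence is taken as an oracle.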

In a way, this paper complements my paper [1], especially the last section, and its follow-up [2]. These papers use largeness to extend the language — of number theory in this paper, and of set theory in the other papers — though here the extensions are definable in preexisting mathematics.

For each notion here, the strategy is to intuitively define the notion using just finite numbers, give a formal definition using infinite sets, argue that the intuitive and formal notions agree, and prove the complexity of the formal notion. Precise notions will be intuitively defined by defining a vague notion and then defining the precise notion such that the definition does not depend on how vagueness is resolved. In the infinitary formal treatment, possibilities for the vague notion will form a directed system such that every sufficiently large choice works correctly.

We assume some understanding of set theory and second order arithmetic; relevant background can be found in a standard text such as [3] (especially for second order arithmetic).

The second section describes computation using a single sufficiently fast-growing infinite sequence. The third section discusses computation using multiple sequences. The fourth section presents similar results for finite sequences. The fifth section describes an arguably finitistic way to compute $Σ^1_2$ truth. The sixth section extends the results to full second order arithmetic, and the final section goes beyond second order arithmetic using transfinite levels of estimators (7.1), pattern enumerators (7.2), and countable infinity (7.3).

**2 A Model of Hypercomputation**

A powerful model of hypercomputation rests on a surprisingly mild physical assumption:

There is a type of event that occurs an unlimited number of times, but with frequency of occurrence decreasing sufficiently fast.

**Definition 2.1:** A language $L$ is recognized by a Turing machine with *the fast-growing sequence oracle* iff there is a total function $A$ such that for every $B$ with $B(n)≥A(n)$ (for every natural number $n$), the machine using $B$ as an oracle halts on input $S$ iff $S$ is in $L$.

A variation on the definition would be to require $A(n)$ to be sufficiently large relative to $(A(0),A(1),...,A(n-1))$, but the expressive power is the same as we can get the stronger version using $(A(0),A(A(0)), A(A(A(0))), ...)$. In this section, we will use "for every sufficiently fast-growing $B$, $φ(B)$" to mean $∃A ∀B (∀n B(n)≥A(n)) \, φ(B)$, but the *strict version* of "sufficiently fast-growing" $B$ requires $B(0)≥A(0) ∧ ∀n B(n+1)≥A(B(n))$. Also, for us, it is insufficient for $B$ to just be fast-growing asymptotically since the Turing machine must be chosen independent of $B$.

It is not known whether such machines are physically constructible. Currently, the biggest problem is surviving for the extremely long time required, rather than the apparent lack of the required physical events. In any case, studying the machines improves our understanding of recursion theory. Below, we prove that a language is recognized by such a hypercomputer iff it is $Π^1_1$. Thus, a language is decidable in the model iff it is hyperarithmetic. Note that $Π^1_1$ rather than $Σ^1_1$ is the correct analogue of recursively enumerable. Also, it appears likely that a set is definable in first order arithmetic with a sufficiently fast-growing function $A$ (independently of $A$) iff it is recursive in a finite hyperjump of 0.

**Theorem 2.2:** Recognizability for Turing machines with the fast-growing sequence oracle equals $Π^1_1$.

**Proof:** For a sufficiently fast-growing sequence $A$, a recursive relation has an infinite descending path through $n$ iff it has an infinite descending path through $n$ and then through a natural number less than $A(\text{max}(n, m))$ where $m$ is the length of the given recursive definition of the relation. By König's lemma, if the relation is well-founded, then the tree is finite, and thus the machine searching it will eventually discover that the relation is well-founded.

For the converse, for each machine and input in the theorem, the machine halts iff for every $A:ℕ→ℕ$ there is a halting computation such that the answers the machine receives are at least as large as the answers given by $A$.
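The bounded tree search in the proof can be sketched as follows. This is a minimal sketch with illustrative names (`rel(x, y)` means $y ≺ x$, `bound` plays the role of the oracle $A$, and `m` stands for the size of the relation's description); in Python an ill-founded run would hit the recursion limit rather than truly diverge.

```python
def search_restricted_tree(rel, x, bound, m):
    """Explore descending rel-paths through x, restricting each next
    element to lie below bound(max(x, m)).  If rel is well-founded, by
    Koenig's lemma the restricted tree is finite, so the search halts and
    thereby certifies well-foundedness; if rel is ill-founded and bound
    grows fast enough, the search recurses unboundedly."""
    for y in range(bound(max(x, m))):
        if rel(x, y):                     # y is rel-below x
            search_restricted_tree(rel, y, bound, m)

# '<' on the naturals is well-founded, so this call halts
search_restricted_tree(lambda x, y: y < x, 5, lambda n: n + 10, 3)
```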

If we do not require a Turing machine to act uniformly on all sufficiently fast-growing $g$, we have the following.

**Theorem 2.3:**

*(a)* The set of Turing machines that halt for some sufficiently fast-growing $g$ (that is $g(n)$ is sufficiently large relative to $n$ and the Turing machine) is $Π^1_1$-complete.

*(b)* The set of Turing machines that halt for every monotonic sufficiently fast-growing $g$ is $Π^1_1$-complete.

*(c)* The set of Turing machines that halt for every sufficiently fast-growing $g$ equals (under one-one reduction) the game quantifier for $Σ^0_2$-games on integers.

*Notes:*

* The theorem (and the proof) relativizes to a real parameter.

* As a corollary, (b) also holds for the class of $g$ with $g(0)$ sufficiently large and $g(n+1)$ sufficiently large relative to $g(n)$.

* The game quantifier is the set of codes of games (given a coding) for which the first player has a winning strategy.

* Lack of monotonicity makes the condition in (c) less natural. The bound in (c) is far beyond a finite iteration of the hyperjump.

**Proof:**

*(a)* Follows from the proof of Theorem 2.2.

*(b)* Given a Turing machine $T$, consider the following game:

Player II: $N$ (a number)

repeat:

Player I: next $N$ values of a sequence $f$

Player II: next $N$ values [alternatively, one value] of a monotonic sequence $g$ with each $g(n)≥f(n)$; new value of $N>0$.

First player wins iff $T$ halts using $g$.

This is an open game on integers, so the game quantifier (the set of $T$ for which player I wins) is $Π^1_1$. If $T$ halts for every sufficiently fast-growing monotonic $g$, then player I wins (by playing a sufficiently fast-growing $f$). Conversely, if for some sufficiently fast-growing monotonic $g$, $T$ does not halt, then player II wins because for every strategy for player I, player II can make $g$ into any monotonic function that is sufficiently fast-growing relative to the player I strategy. (Proof sketch: Given a well-behaved bound $h$ derived from player I strategy, and a desired $g(n)>h^{3n}(0)$, choose the least $N$ to get to a gap $g(n+1)>h(h(g(n)))$.)

*(c)* To compute a $Σ^0_2$ ($∃m∀n \, φ(m,n)$) game quantifier, modify the game by requiring each move to be below $g(2c)$, where $c$ codes the play so far, and modify the winning condition to $∃m∀n<g(2m+1) \, φ(m,n,\text{play})$, and halt if the first player has a winning strategy. Here we use $g$ being sufficiently large at even numbers, the possibility of $g$ at odd numbers being sufficiently large relative to $g$ at even numbers, determinacy, and König's lemma.

Conversely, a Turing machine $T$ halts for every sufficiently fast-growing $g$ iff the first player wins the following game: At move $n$, the first player plays $f(n)$ (thus defining $f$), while the second player may pass or play $g(m)≥f(m)$ where $m$ is the least such that $g(m)$ has not yet been determined. The first player wins if $T$ halts using $g$, or some $g(n)$ is undefined.

For readers familiar with forcing, another interesting quantifier is "for every generic sufficiently fast-growing sequence" which is equivalent to "for every Hechler real (as a sequence of integers) with a sufficiently long initial segment removed". For non-absolute properties, the two versions are (1) $V$-generic, and (2) (in line with this paper) generic for a sufficiently complete countable model (using Template Definition 4.2) but existing in $V$. Hechler reals satisfy every open property (from the list of properties in the ground model) that is dense on sufficiently fast-growing sequences. To the extent determinacy (or just Baire property) holds, all such reals have the same properties for properties (in the ground model) that are not altered by removing a finite initial segment of the sequence. The sequences are not monotonic (in a sense anything that can happen happens), but we can define variants for monotonic, strictly monotonic, and the strict version (as used beneath Definition 2.1).

**Computational Complexity**

For each machine, input, and set of possible $A$, we may define its computation tree, where we start at the root, each vertex is a query to $A$, and each branch is an answer to the query; and we may define the ordinal for a given machine and input as the ordinal depth of the computation tree for the set of sufficiently fast-growing $A$ (where 'sufficiently fast-growing' may depend on the machine and input). Intuitively, the ordinal says how long in the worst case the computation will take relative to the rate of growth of $A$. From there, the ordinal for a machine is the supremum of the ordinals across inputs (assuming the machine always halts for sufficiently fast-growing $A$), and the ordinal for a problem is the smallest ordinal for a machine (whose output is independent of the choice of a sufficiently fast-growing $A$) that solves the problem.

**Theorem 2.4:** For each recursive ordinal $α$, the ordinal for a problem is at most $α$ iff the problem is recursive in $0^{(α)}$ (αth Turing jump of 0).

**Proof:** To compute $0^{(α+1)}$, given an input $n$, compute $0^{(α)}$ up to $A(n)$, and then check whether machine $n$ halts on $0^{(α)}$ before getting out of bounds. The computation depth is $α+1$, as required (for limit $α$, $0^{(α)}$ can be computed as a sequence of lower Turing jumps).

Conversely, given a computation tree of depth $α$ for sufficiently fast-growing sequences, start with accepting halting positions (without checking rate of growth), and work backwards, and at each stage, mark positions such that for every sufficiently large answer, the resulting position was marked. For $α=β+n$, note that each level adds a $Δ^0_2$-quantifier, so $n$ levels merge into $Δ^0_{n+1}$ as required. (Each quantifier is $Δ^0_2$ since assuming the sequence so far was sufficiently fast-growing, and given that we measure at maximum depth, every sufficiently large choice for the next element will give the same acceptance value.)

Informally, we can speak of computational complexity on a much more fine-grained level as follows: A problem can be solved in time $T(n)$ if there is a Turing machine and a natural set $A$ such that the machine using $A$ solves the problem in time $T(n)$ and such that if $∀n \, B(n)≥A(n)$, the machine using $B$ correctly solves the problem. Thus, recursive languages have recursive complexity, arithmetic languages have arithmetic complexity, and so on. The converse might be false, and it is unclear what the complexity of the inverse Busy Beaver function should be. Unfortunately, we do not know how to define complexity formally because using unnatural $A$ we can code solutions to recursive problems as follows: If instance $k$ is true, set $A(2k)$ to a certain value, and otherwise make $A(2k)$ larger, and similarly with $k$ being false and $A(2k+1)$. One mitigation (which still breaks down at high complexity) would be to make $A$ sufficiently sparse, for example by requiring that $A$ is coded through a binary tape with sequential access such that every '1' is at a position $2^n$ (for some $n$) and that if a '1' is at position $2^n$, the next '1' cannot occur until position $2^{n^2}$. That way, for problems with easily verifiable solutions there is at most a polylogarithmic speed-up (relative to the running time) as we can try all possible initial oracle tape segments.
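The sparsity requirement just described can be checked directly. The following sketch is ours (the function name and the encoding of the tape as an increasing list of '1'-positions are assumptions):

```python
def sparse_ok(one_positions):
    """Check the sparsity condition from the text: every '1' sits at a
    position 2**n, and after a '1' at position 2**n the next '1' may not
    occur before position 2**(n*n).  Input: increasing list of the
    positions holding '1' on the oracle tape."""
    prev_n = None
    for p in one_positions:
        n = p.bit_length() - 1
        if 2 ** n != p:                    # not a power of two
            return False
        if prev_n is not None and p < 2 ** (prev_n * prev_n):
            return False                   # gap after previous '1' too small
        prev_n = n
    return True
```

For instance, positions $2^1, 2^4, 2^{16}$ are acceptable, while a '1' at $2^2$ followed by one at $2^3$ is not, since the next '1' must wait until position $2^4$.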

**3 Strengthenings to Multiple Sequences**

Still higher complexity is reachable by computers that use a sufficiently fast-growing function $A$ and a function $B$ that grows sufficiently fast relative to $A$. We can extend the model to allow more sequences, each of which growing sufficiently fast relative to the previous ones. We can even (not part of definition 3.1) allow transfinitely many sequences through the use of diagonalization; to handle such cases, an infinite alternation of quantifiers is defined to mean existence of a winning strategy in the corresponding game.

**Definition 3.1:** A language $L$ is recognizable by a Turing machine with the fast-growing level $k$ sequence oracle iff there is a Turing machine $T$ and a function $f: ℕ^ℕ→ℕ^ℕ$ such that for every $A_1,...,A_k$ with $∀i<k \, ∀n \, A_{i+1}(n)>f(A_i)(n)$ ($A_0$ is 0 here), $T$ halts on input $S$ with oracles $A_1,...,A_k$ iff $S$ is in $L$.

*Notes:*

* We use the term "level $k$" (or $k$-level) because for a number $n$ in the range, the maximum $i$ such that $n∈A_i$ is from 1 to $k$.

* As before, a strict version of fast-growing level $k$ sequences (that does not affect Definition 3.1) is to require $A_{i+1}(n+1)>f(A_i)(A_i(n))$ ($A_i(-1)$ being 0 here); one could also require $A_{i+1}⊂A_i$ (with each $A$ viewed as a set).

* An issue with using the definition is that whether (for example) $A_2(0)$ is big enough depends on $A_1$ as a whole (note that the domain of $f$ is $ℕ^ℕ$) and not just the values of $A_1$ that are below $A_2(0)$. We conjecture that this does not affect the expressive power, in that we would get the same expressive power by treating a fast-growing infinite level $k$ sequence as an infinite sequence all of whose sufficiently long initial segments are, in the sense of Definition 4.3(c), sufficiently long fast-growing level $k$ sequences.

**Proposition 3.2:** The set of winning positions in a $Σ^0_2$ game is recognizable using the fast-growing level 2 sequence oracle.

**Proof:** If a player has a winning strategy, then he has a winning strategy using moves less than $A_1$(position), where position includes the position in the game and a canonical description of the rules. Given this constraint on the moves, the second player has a winning strategy iff he has a strategy that enters the $n$th open set before time $A_2(n+\text{position})$. Given these constraints, the game becomes an open one with a finite number of moves per turn, and hence recognizable.

**Conjecture 3.3:** We conjecture the following to have equal complexity ($n>0$):

- Recognizability using sequences of level $n+1$.
- The game quantifier for level $n$ of the difference hierarchy of (lightface) $Δ^0_3$ sets (level 1 is $Σ^0_2$, level 2 is $Σ^0_2 \setminus Σ^0_2$, level 3 is $Σ^0_2 \setminus (Σ^0_2 \setminus Σ^0_2)$, and so on).
- Monotone inductive definitions for co-recognizable sets (relative to the current real number) for sequences of level $n$ (for $Π^0_1$ if $n=0$).

Moreover, we conjecture that allowing number quantification in the model (i.e. allowing arbitrary arithmetical formulas in place of Turing machines) corresponds to boldface (in terms of $Σ^1_1$ strength of the corresponding theory) or finitely iterated versions of these properties.

Furthermore, using the results in [4], we appear to have the following equivalence:

(a) $x$ is expressible using (2) for some $n$.

(b) $x$ is in the least $\text{L}_κ$ satisfying $∀n ∃κ_1,...,κ_n \, κ_1<...<κ_n<κ ∧ \text{L}_{κ_1}≺_1...≺_1 \text{L}_{κ_n}$ (these sets for each numeral $n$ and relativized to each real $y$ exactly capture the $Π^1_3$ strength of $Π^1_2-\text{CA}_0$).

(c) $x$ is recursively reducible to the set of trees accepted by a tree automaton (equivalently by a formula in S2S) (the reduction $f$ is such that $x(i)=1$ iff $f(i)$ is a (possibly infinite) tree that is accepted).

**Towards Proving the Conjecture**

Conjectured computation of recognizability for these machines using the game quantifier:

Let player I choose oracles $A_1,A_2,...,A_k$, at move $m$ disclosing $\{(x,i): A_i(x)=m\}$, and similarly for player II and $B_1,B_2,...,B_k$ with $∀i∀n \, A_i(n)≤B_i(n)$, and each $A_i$ and $B_i$ monotonic. Player I wins if the machine halts using $B_1,B_2,...,B_k$ as oracles or there is $i$ such that each of $A_1,...,A_i$ is infinite while $B_i$ is finite.

Conjectured computation of the game quantifier using recognizability for these machines:

To compute the game quantifier for the ($k-1$)th level of the lightface $Δ^0_3$ difference hierarchy (for the Baire space, which corresponds to level $k$ for the Cantor space), it is sufficient to consider games of the following form. Given a polynomial time computable $f:ℕ→\{0,...,k\}$ (which will determine the game rules), player one wins iff the lowest value of $k-f(m)$ seen infinitely often is even, where $m$ codes the sequence of moves (each move being 0 or 1) made so far. To recognize the game quantifier using the oracles, modify the game as follows. Halt at stage $A_i(x)$ if $x≥\mathrm{code}(f)$ and no value of $f$ that is $≥i$ occurred after stage $x$ (with each player's move being one stage). Player one wins iff $k-i$ is even (if there are multiple $i$ that simultaneously trigger the halt, pick the lowest one). If we never halt, player one wins (since value $k$ was seen infinitely often).

Relation with monotonic induction:

* The game quantifier can be computed using monotonic induction: Use the previous quantifier to find positions where player one can win while preventing the game from getting into a state labeled with $k$ (where player two wins if $k$ is seen infinitely often), and mark these positions as player one wins, and repeat with the modified game, continuing using monotonic induction until no new positions are added. The other positions are player two wins. (Side note: A winning strategy for player one would be to play the auxiliary game for the step where the current position was marked, and each time a previously marked position is reached, player one wins the auxiliary game and switches to the auxiliary game that led to the position being marked. For player two, the auxiliary game (this is not the auxiliary game for player one) is keeping to unmarked positions, with a win if a state labeled with $k$ is reached. For both players, if the auxiliary game does not halt at a finite step, the winning condition is the same as in the original game.)

* To simulate the multiple sequence oracle (conjectured), we can build the computation tree where each vertex corresponds to a query to $A_k$. Work backwards from halting configurations, and using the previous quantifier for each step, use monotonic induction to reach the initial state if it is halting.

For transfinite sets of sequences, the above constructions suggest correspondence with winning strategies for transfinite levels of $Δ^0_3$ difference hierarchy.

**4 Finite Structures**

The infinite structures have finite analogues. A key concept is that of a sufficiently fast-growing sufficiently long finite sequence. Sufficiently long is with respect to the rate of growth, and is preserved under extending the sequence. In computation, along with input, we will be given such a sequence, with the first element sufficiently large relative to the input size. Vagueness will be avoided because there is no error in interpreting "sufficiently" too strictly. What we are defining are not the sequences themselves, but the results of the computations — for computations that agree for all appropriate sequences (and thus will still agree if "sufficiently" is interpreted too strictly). The appropriate sequences can then be formally defined as the ones giving the correct results. Also, by using infinite sets, we can quantify over all possible definitions (like we did for the fast-growing sequence oracle) and thus formally define the results without using "sufficiently".

Let us now describe the construction in detail. In the definitions, 'vague' indicates dependency on a notion of 'sufficient'.

**Vague Definition 4.1:**

*(a)* A *sufficiently long fast-growing sequence* is a finite function $g:\{0,...,N\}→ℕ$ such that

- $g(0)$ is sufficiently large (relative to the problem size),

- $g(n+1)$ is sufficiently large relative to $g(n)$ ($n<N$), and

- $g$ or an initial segment of $g$ is sufficiently long relative to its rate of growth. The rate of growth is how large $g(0)$ is, and how large $g(n+1)$ is relative to $g(n)$ (for $n+1$ in the range of the initial segment).

*(b)* A sufficiently long fast-growing sequence or its extension is any finite extension of a sequence in (a).

*(c)* (generalization of (a)) A sufficiently long fast-growing level $k+1$ sequence is a sufficiently long finite sequence of sufficiently long fast-growing level $k$ sequences such that once the sequences are expanded to numbers, each number is sufficiently large relative to the previous one. Here, a level 0 sequence is a number sufficiently large relative to the problem size. "Sufficiently long" may depend on the sequence (specifically, required length is relative to the rate of growth or its analog) but if a sequence is sufficiently long, then so are all of its extensions.

*Note:* One useful representation of a level $k$ sequence is a function $g:\{0,...,n\}→\{0..k+1\}$ with $g(x)≥i$ iff $x$ is the last element (after expansion into a sequence of numbers) of an included level $i-1$ sequence (and $g(n)=k+1$; alternatively set $g(n)=k$ since with $g(n)=k$ extensions of the sequence are represented by extensions of $g$). For example, (((1,2,3),(4,5,6,7)),((10,11),(12,13,14))) would be represented as (0,1,1,2,1,1,1,3,0,0,1,2,1,1,4).
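The representation in the note can be computed mechanically. The sketch below (our encoding: a number is a level 0 sequence, a level $k+1$ sequence is a tuple of level $k$ sequences; the helper names are ours) reproduces the example given in the note.

```python
def flatten_rep(seq):
    """Encode a nested fast-growing sequence as the function g described
    above: g(x) >= i iff the number x is the last element (after
    expansion into numbers) of an included level i-1 sequence, with the
    final position marked k+1 for a level k sequence."""
    def last_number(s):
        return s if isinstance(s, int) else last_number(s[-1])
    def depth(s):
        return 0 if isinstance(s, int) else 1 + depth(s[0])
    marks = {}
    def walk(s, level):
        if isinstance(s, int):            # level 0 sequence: mark 1
            marks[s] = max(marks.get(s, 0), 1)
            return
        for sub in s:
            walk(sub, level - 1)
        end = last_number(s)              # s is a level `level` sequence
        marks[end] = max(marks.get(end, 0), level + 1)
    walk(seq, depth(seq))
    return tuple(marks.get(x, 0) for x in range(last_number(seq) + 1))

# the example from the note (a level 3 sequence)
rep = flatten_rep((((1, 2, 3), (4, 5, 6, 7)), ((10, 11), (12, 13, 14))))
# rep == (0, 1, 1, 2, 1, 1, 1, 3, 0, 0, 1, 2, 1, 1, 4)
```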

In one finitistic treatment, we assume that in certain contexts, 'sufficiently' is a primitive concept — it can be explained, partially axiomatized, and connected to other concepts, but not formally defined in terms of more primitive concepts. (On the other hand, one could argue that our uses of 'sufficiently' and other terms are a recharacterization of aspects of infinity.) Using that concept, given a particular Turing machine T, we can define a predicate $P$ as $P(n)$ holds iff $T$ accepts given input $n$ and a sufficiently large/complete/etc example (relative to $n$ and $T$) of the appropriate type, provided that $T$ is a machine that halts with the same (dependent on $n$) output for every $n$ and sufficient example. While without context, 'sufficient' is vague, the above usage is (at least arguably) precise because it does not depend on how the vagueness is resolved. One would then finitistically study the properties of $P$ and other such predicates, and develop axiom systems. However, with a little reasoning, many uses of "sufficiently" can also be formally defined using infinite sets, and we will use infinity as a shortcut and to unambiguously interpret the concepts in existing mathematics.

**Template Definition 4.2.**

*Assumption:* We are given a set of potential examples, and a set $S$ of possible notions of sufficiency: The notions must form a directed set under strictness, and each notion must be nonempty, or the definition fails. Alternatively, instead of $S$, we may be given $T$ that is equivalent to $S$ in that for every notion in $S$ there is an at least as strict notion in $T$ and vice versa.

*(a)* A property holds for every sufficient example iff there is a notion $Q$ in $S$ such that the property holds for every example in $Q$.

*(b)* A predicate $P$ is computable using a sufficient example (assuming the examples are reducible to integers) iff there is a Turing machine $T$ such that for every $n$, $T$ outputs $P(n)$ given $n$ and a sufficient example (that is for every sufficient example; sufficiency depends on $n$) as input.

*(c)* Given a well-defined language, a sentence using a sufficiently good example as a parameter is well-defined iff there is a valid notion $Q$ such that all elements of $Q$ agree on whether the sentence is true, which is then the truth value of the sentence.

To apply the definition (which we will sometimes (for example, in section 7.2) do implicitly), we need the set of possible notions of sufficiency. Here is one choice, noting that selection between equivalent (as used in 4.2) sets may be arbitrary.

**Definition 4.3:**

*(a)* A notion of sufficiently long fast-growing sequences is valid iff there is a function $f$ such that for every infinite sequence $g$ with $g(0)>f(0)$ and $g(n+1)>f(g(n))$, there is $m$ such that every initial segment of $g$ of length at least $m$ is sufficient.

*(b)* The extension for 4.1(b) is trivial. (A notion of sufficiently long fast-growing sequence or extension thereof is valid if it is the closure under finite extension of a valid notion in (a).)

*(c)* A notion of sufficiently long level $k+1$ (fast-growing) sequences is valid iff there is a function $f$ and a valid notion $P$ of sufficiently long level $k$ sequences such that for every infinite sequence $g$ of elements of $P$ with, when $g$ is expanded into a sequence of numbers, $g(0)>f(0)$ and $g(n+1)>f(g(n))$, there is $m$ such that every initial segment of $g$ of length at least $m$ is sufficient. For every $n$, $\{x:x≥n\}$ is a valid notion of level 0 sequences.
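The growth condition in 4.3(a) is easy to check for a given witness $f$ and finite sequence $g$; here is a minimal sketch (function and variable names are ours):

```python
def dominates(g, f):
    """Check the growth condition of Definition 4.3(a) for a finite
    sequence g: g(0) > f(0) and g(n+1) > f(g(n)) for each n."""
    if not g or g[0] <= f(0):
        return False
    return all(g[i + 1] > f(g[i]) for i in range(len(g) - 1))

f = lambda n: 2 * n + 1
dominates((2, 6, 14), f)   # True:  2 > 1, 6 > f(2) = 5, 14 > f(6) = 13
dominates((2, 5, 14), f)   # False: 5 is not greater than f(2) = 5
```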

**Theorem 4.4:**

*(a)* A predicate is computable using a sufficiently long fast-growing sequence iff it is $Δ^1_1$ in the hyperjump of 0.

*(b)* A predicate is computable using a sufficiently long fast-growing sequence or extension thereof iff it is $Δ^1_1$.

**Notes:**

* There should also be an analogous statement for 4.3(c).

* A finite sequence (as in (a)) is more expressive than the corresponding infinite sequence because it gives us a stopping point.

* For mass problems (where the correct answer is non-unique), the notion in 4.1(b) can be used to compute $Π^1_1$ separation (with the choice of the correct answer depending on the sequence). A weakening of 4.1(b) that does not allow that would be to store the sequence as a predicate so one cannot test whether one has reached the end.

**Proof:**

*(a)* The algorithm above (Theorem 2.2) for testing well-foundedness (with reaching the end of the sequence indicating an infinite path) will work correctly and compute the hyperjump of 0 up to a sufficiently large unspecified point in the sequence. In deciding a problem that is $Δ^1_1$ in the hyperjump of 0, use the partially computed hyperjump of 0 to check which of the two opposing sequences (for the sentence and its negation) terminates first: For inputs in the size limit, that will happen before that unspecified point.

Conversely, a machine that behaves uniformly halts iff for every growth rate $A$, there is a finite sequence $h$ growing faster than $A$ such that for every sufficiently-fast growing finite extension of $h$, the machine halts. Extensions of $h$ can be considered independently of $A$, so this is $Π^1_1$ in the hyperjump of 0.

*(b)* $Δ^1_1$ predicates are clearly computable. For the converse, for every rate of growth $g$, there is a finite sequence $h$ with $h(n+1)>g(h(n))$ such that the output is given on every continuation of $h$ (and one similarly has a $Σ^1_1$ description).

Without uniformity, we have the following.

**Proposition 4.5:** For every $Σ^1_1$ $φ$, $\{n: ∀S \, φ(n,S)\}$ is $Π^1_1$ in the hyperjump of 0 where ∀S quantifies over sufficiently long fast-growing sequences.

**Proof:** Consider the following game ($i$ is move number, $n$ is fixed):

First player: $A_1(i)$.

Second player: $A_2(i)≥A_1(i)$.

At any point, either player can request to stop.

The first player wins iff once both players request to stop, $φ(A_2)$ holds, or the first player requests to stop, but the second player never does.

This game is a concatenation of an open game (before the first player asks to stop) and a closed game afterwards (also, $φ$ can be tested using a closed game) without extra parameters, so the game quantifier is $Π^1_1$ in the hyperjump of 0. The first player can force $A_2$ to be some sufficiently long fast-growing sequence, and relative to a strategy for the first player, the second player can force $A_2$ to be an arbitrary sufficiently long fast-growing sequence, so the first player wins iff $∀S \, φ(n,S)$ above holds, as desired.

Intuitively, one can treat a sufficiently fast-growing sufficiently long sequence as if it is a definite object, but restricting reasoning to forms that remain valid; I am not sure if there is a reasonable formal system (perhaps analogous to nonstandard analysis) that corresponds to such a restriction.

If we allow the computers to produce sufficiently long level $k$ fast-growing sequences an unlimited number of times (for variable input length), this corresponds to an infinite sequence of level $k+1$ (but I could not verify whether this definition gives the same expressiveness as Definition 3.1). However, to make the (arguably) finitistic definition as concrete as possible (and to allow a more fine-grained hierarchy of notions), we limit ourselves to a single application (of the producer of the sequences), and to correctness with respect to an ordinary Turing machine using that application.

**Restrictions on Length**

(Disclaimer: Statements here were not formally verified.)

In addition to considering sufficiently long sequences, we can also consider fast-growing sequences $g$ with specific length conditions. Given a recursive well-founded relation '≺', a natural corresponding length condition is the following: there is no $x_0 ≻ x_1 ≻ ... ≻ x_n$ with $x_i < g(i)$, where $n+1$ is the length of $g$. Computability for sufficiently fast-growing $g$ with this length condition equals $0^{(α+1)}$ (or $0^{(α)}$ for finite $α$) where $α$ is the rank of '≺'.

For fixed finite $k$, $g$ having length $k$ corresponds to ordinal $k$ for '≺'. If $h$ is a fixed recursive function with $\lim_{n→∞} h(n) = ∞$, then $g$ having length $h(g(0))$ corresponds to (computability using) $0^{(ω+1)}$. The independence of expressiveness from $h$ may appear counterintuitive: if we increase the length of $g$ by 1, we can compute the halting problem for appropriate machines that use the shorter $g$, but we cannot determine which machines are valid (in the sense of being independent of the choice of sequence), hence halting reversal does not increase expressiveness. Going further, length $h(g(k))$ corresponds to $0^{(ω+k+1)}$, length $g(h(g(k)))$ to $0^{(ω⋅2 + k + 1)}$, $g^m(h(g(k)))$ to $0^{(ω⋅(m+1) + k + 1)}$ (the superscript indicates repetition), and $g^{g^m(h(g(k)))}(0)$ to $0^{(ω^2 + ω⋅m + k + 1)}$; similar expressions with nested superscripts reach the ordinals below $ω^3$.
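The length condition above is a concrete, checkable property of a finite sequence. The following toy sketch (all names are hypothetical) takes '≺' as a decidable predicate `prec(x, y)` meaning $x ≺ y$ and searches for a violating chain:

```python
# The length condition for a finite sequence g and well-founded '≺':
# there is NO chain x_0 ≻ x_1 ≻ ... ≻ x_n with x_i < g(i) and n+1 = len(g).

def has_full_chain(g, prec):
    """Search for x_0, ..., x_{len(g)-1} with x_i < g(i) and x_{i+1} ≺ x_i."""
    def extend(last, i):
        if i == len(g):
            return True          # chain reached full length
        return any(extend(x, i + 1)
                   for x in range(g[i])
                   if last is None or prec(x, last))
    return extend(None, 0)

def length_condition_holds(g, prec):
    return not has_full_chain(g, prec)
```

For example, with `prec(x, y)` being the usual order $x < y$ (rank $ω$), `g = [3, 3, 3]` fails the condition because the chain $2 ≻ 1 ≻ 0$ fits under its bounds, while `g = [1, 1, 1]` satisfies it.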

We now give an arguably finitistic definition of $Σ^1_2$ truth. Essentially, this section informally describes in detail a variation on level 2 estimators of the next section.

In the computational model, along with input, we are given a finite set of finite sequences. Each sequence is sufficiently long relative to itself, or relative to its rate of growth. (Optional) If $A$ is sufficiently long and $|A|≤|B|$ and $∀n<|A| \, B(n)≤A(n)$, then $B$ may be considered sufficiently long. (Optional) For the purposes of determining sufficient completeness (below), arbitrary conditions may be imposed on the sequences as long as they do not impose an upper bound on the rate of growth (as used here, an upper bound would be a function $f$ such that for every permitted sequence $A$, $∃n<|A| \, A(n)<f(n)$). Given these conditions, the set is sufficiently complete for the input size. Completeness is preserved under addition of new sequences or decrements to input size. We are not given the conditions on the sequences, but regardless of what they are, the algorithm below is guaranteed to be correct.

The algorithm aims to check whether the sentence encoded in the input has a transitive model. It enumerates potential initial segments (whose length equals the length of the longest sequence in the set of sequences we were given) of the truth predicate for the language with a distinguished symbol for a well-ordering of the elements. Each segment is checked for a quick (relative to its length) inconsistency with the sentence. If not found inconsistent, it is converted into a finite quasi-model, with elements coded by sentences of the right syntactic form: $∃x \, P(x)$ codes the least such $x$ if true, and 0 if false. Then, for every sequence $A$ (in the set of sequences we were given) with $A(0)$ larger than the input size, it tries to form a descending path $(a, b, c, …)$ of elements with $a<A(1), b<A(2), c<A(3), …$ . If the path reaches length $|A|-1$, the truth predicate is rejected; otherwise it is accepted by the sequence. The algorithm returns 'yes' iff a truth predicate segment is accepted by all sequences.
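The path-formation and acceptance step of the algorithm can be sketched as follows (a toy with a hypothetical interface: `below` is the descent relation extracted from the candidate truth-predicate segment, and `seqs` is the given finite set of sequences):

```python
def longest_descent(below, bounds):
    """Longest path a_1, a_2, ... with a_i < bounds[i-1] and below(a_{i+1}, a_i)."""
    def go(last, i):
        if i == len(bounds):
            return 0
        lengths = [1 + go(x, i + 1)
                   for x in range(bounds[i])
                   if last is None or below(x, last)]
        return max(lengths, default=0)
    return go(None, 0)

def accepted(A, below, input_size):
    if A[0] <= input_size:
        return True                       # sequence not used for this input
    # reject iff a descending path a < A[1], b < A[2], ... reaches length |A|-1
    return longest_descent(below, A[1:]) < len(A) - 1

def candidate_passes(below, seqs, input_size):
    # the algorithm keeps a candidate iff every given sequence accepts it
    return all(accepted(A, below, input_size) for A in seqs)
```

With `below(x, y)` being $x < y$, the sequence `[10, 3, 3, 3]` rejects the candidate (the path $2, 1, 0$ has length $3 = |A|-1$), whereas a candidate with no descents at all is accepted.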

To see that the computation is correct, note that if a sentence has a transitive model, then it has a transitive model of bounded complexity, and hence without sufficiently long descending sequences of the required type. If it does not have a transitive model, then in the complete binary tree of candidate truth predicate initial segments, every path will have an infinite descending sequence (or inconsistency). Thus, regardless of how strictly "sufficiently long" is defined, every path will have sufficiently long descending sequences; furthermore, excluding all sequences that are above an infinite path would be an upper bound on the rate of growth (as used above), which is prohibited. By König's lemma, a finite set of such sequences can cover all paths, and thus a sufficiently complete finite set will suffice.

The construction has an infinite version, namely a predicate or a tree on finite sequences of natural numbers. Consider a well-founded tree of such sequences (ordered by extension) such that every non-leaf node can be extended by any natural number, and if a sequence is a leaf node, then it is sufficiently long relative to itself (or relative to its rate of growth; the difference does not affect the expressive power), or any tree that is a well-founded extension of such a tree. Using such a tree, recognizability is $Π^1_2$ complete. (However, $Σ^1_2$ may be a better analogue of being recursively enumerable: The relevant parameter is not the halting time but the ordinal depth of the visited portion of the tree for nonhalting runs, which corresponds to the complexity of the least witness for a $Σ^1_2$ statement.) In one direction, the above computational procedure works, except that if the theory has a well-founded model, we do not halt. In the other direction, the machine halts iff for every well-founded tree $T$ and every non-halting computation corresponding to an extension $S$ of $T$, $S$ has an infinite path, which is $Π^1_2$.

We can extend definability by using an infinite sequence of numbers growing sufficiently fast relative to the tree. Recognizability should then correspond to having a $Π^1_2$ monotonic inductive definition. A stronger extension (likely still $Δ^1_3$) is to use an $n$-tuple of trees, with sequences in each tree sufficiently long relative to themselves, and to every previous tree.

**Vague Definition 6.1:**

A *level 0 estimator* is a pair $(a, b)$ with $b$ sufficiently large relative to $a$ and the problem size.

A *level n+1 estimator* is a sufficiently complete finite set of level $n$ estimators, where 'sufficient completeness' is defined such that it does not prevent the notion of level $n$ estimators from being arbitrarily strict.

The above definition is vague but the following is well-defined regardless of how the vagueness is resolved. (As written, Definition 6.2 is well-defined relative to the choice of an estimator and (for bad estimators) the choice of 'standard rules', but given Theorem 6.6, neither is relevant if we use Definitions 6.3 and 6.4.)

**Definition 6.2:** Computation of second order arithmetical truth:

* Use standard rules to convert the sentence (or its negation) to $Q X_1 Q X_2 … ∃ X_n ∀x ∃y>x \, P$ where $n>0$, each $Q$ is a quantifier, and $P$ is a bounded quantifier formula that does not use $x$ (and $P$ can even be chosen to be polynomial time computable), and each $X_i$ is a set of natural numbers.

* Pick any (it does not matter which one) level $n$ estimator on problem sizes at least as large as the sentence. The sentence is true iff it passes the estimator.

* A formula $∀x ∃y>x \, P(y, …)$ ($P$ as above) passes a level zero estimator $(a, b)$ iff $∃y (a≤y<b) P(y, …)$.

* A formula $∀Xφ(X)$ passes an estimator iff for every binary sequence $X$, $φ(X)$ passes at least one of the estimators included.

* A formula $∃Xφ(X)$ passes an estimator iff for some binary sequence $X$, $φ(X)$ passes all of the estimators included.

(Note: Because $P$ is a bounded quantifier formula and an estimator is finite, the search over $X$ is also finite.)
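The recursion in Definition 6.2 can be made concrete in a toy evaluator (a hypothetical simplification: we pretend only the first `M` bits of each set variable matter, so the search over $X$ is literally finite; a level 0 estimator is a pair $(a,b)$ and a level $n+1$ estimator is a finite list of level $n$ estimators):

```python
from itertools import product

M = 4  # toy cutoff on the relevant bits of each set variable X

def passes(formula, est, Xs=()):
    tag, body = formula
    if tag == 'E':    # ∀x ∃y>x P(y, Xs): passes (a, b) iff ∃y in [a, b) P
        a, b = est
        return any(body(y, Xs) for y in range(a, b))
    if tag == 'AX':   # ∀X φ: every X passes at least one included estimator
        return all(any(passes(body, e, Xs + (X,)) for e in est)
                   for X in product((0, 1), repeat=M))
    if tag == 'EX':   # ∃X φ: some X passes all included estimators
        return any(all(passes(body, e, Xs + (X,)) for e in est)
                   for X in product((0, 1), repeat=M))

# Toy matrix predicate: bit y mod M of the outermost set variable is 1.
P = lambda y, Xs: Xs[0][y % M] == 1
level1 = [(0, 4), (5, 9)]   # a toy level 1 estimator
```

Here `passes(('EX', ('E', P)), level1)` is true (take $X$ all ones, which passes both included level 0 estimators), while `passes(('AX', ('E', P)), level1)` is false ($X$ all zeros passes neither), mirroring the acceptance clauses above.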

To informally see that the description is correct, if a $Σ^1_n$ statement is true, then it has a witness (for the outermost existential quantifier) of bounded complexity. Level $n-1$ estimators will work correctly with that witness and hence the sentence. On the other hand, if a $Σ^1_n$ statement is false, then every potential witness will be intercepted by some level $n-1$ estimator, and hence the falsehood will be witnessed by a sufficiently complete set of such estimators. By induction on $n$, the method is correct.

To prove correctness, we first have to formalize what we want to prove. If we treat 'sufficiently' as a parameter in the definition, we naturally get the following:

**Definition 6.3:**

* A notion $R$ of 0-estimators is valid iff $R(a,b) ∧ c≤a ∧ b≤d ⇒ R(c,d)$ and $∀a∃b R(a,b)$.

* A notion $R$ of $n+1$ estimators is valid iff

(1) Every element of $R$ is finite. (A reasonable variation (immaterial for below): Also require elements to be non-empty.)

(2) $∪R$ is valid and $x∈R ∧y∈∪R ⇒ x∪\{y\}∈R$. (Equivalently, there is a valid notion $T$ of $n$-estimators such that every element of $R$ consists of elements of $T$, and adding an element of $T$ to an element of $R$ results in an element of $R$.)

(3) For every valid notion $S$ of $n$-estimators, there is an element of $R$ consisting of elements of $S$, i.e. $∃r ∈ R \,\, r⊂S$.
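Parts of Definition 6.3 are directly checkable on finite data (the genuinely universal clauses, $∀a∃b$ and clause (3), are not). A toy partial checker, with hypothetical representations (a level 0 notion as a set of pairs, a level $n+1$ notion as a set of frozensets):

```python
def monotone_ok(R0, bound):
    """Level 0 clause: R0(a,b) ∧ c ≤ a ∧ b ≤ d ⇒ R0(c,d), checked below 'bound'."""
    return all((c, d) in R0
               for (a, b) in R0
               for c in range(a + 1)
               for d in range(b, bound))

def closure_ok(R):
    """Clause (2): x ∈ R ∧ y ∈ ∪R ⇒ x ∪ {y} ∈ R (elements are frozensets)."""
    union = set().union(*R) if R else set()
    return all(x | frozenset({y}) in R for x in R for y in union)
```

For instance, the notion $\{(a,b) : b > 2a\}$ passes the monotonicity check, while the singleton $\{(3,7)\}$ fails it (it is missing $(0,7)$); a powerset-style level 1 notion passes the closure check, while one missing a union of its elements fails.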

**Definition 6.4:** A sentence (in second order arithmetic or another well-defined language) using a (sufficiently good) $n$-estimator as a parameter is well-defined iff there is a valid notion $R$ such that all elements of $R$ agree on whether the sentence is true, which is then the truth value of the sentence.

*Note:* This is simply Definition 4.2(c) applied to valid notions of estimators.

The truth value of a well-defined sentence is unambiguous:

**Proposition 6.5:**

**(a)** Every valid notion of $n$-estimators is non-empty.

**(b)** Given two valid notions $R$ and $S$ of $n$-estimators, $R∩S$ is also a valid notion.

**Proof:**

**(a)** This follows from 6.3(3) and the existence of the trivial notion of $n$-estimators that includes everything of the right syntactic form.

**(b)** Assuming $n>0$ ($n=0$ is easy), note that $∪R$ and $∪S$ are valid. Assume that the proposition applies for $n-1$, and let $T$ be an arbitrary valid notion of $n-1$ estimators. Thus, $T' = T ∩ ∪R ∩ ∪S$ is valid. There are $x∈R$ and $y∈S$ such that $x⊂T'$ and $y⊂T'$, and for every such $x,y$, we have $x∪y∈R∩S$, which satisfies (3) in the definition. Using a similar argument, $∪R ∩ ∪S = ∪(R∩S)$, and (2) follows.

**Theorem 6.6:** The above notion/computation of second order arithmetical truth is well-defined and agrees with the actual truth.

**Proof:**
Given problem size $s$, and a real number parameter $X$, consider the following notions:

$(a,b)$ is a level 0 $(s,X)$ estimator iff for every $φ$ in $\text{Z}_2$ with two free variables and length at most $2^{s+a}$, $∃c \, φ(c, X) ⇒ ∃c<b \, φ(c, X)$. For this notion, if working in just ZFC without projective determinacy, augment $\text{Z}_2$ with a function that uniformizes projective sets.

$x$ is a level $n+1$ $(s,X)$ estimator iff every element of $x$ is a level $n$ $(s,X)$ estimator and for every $Y⊂ℕ$, there is an element of $x$ that is a level $n$ $(s,X,Y)$ estimator (coding $(X,Y)$ by a single real number).

By induction and using König's lemma to get finiteness, these are valid notions of $n$-estimators. Given a level $n$ $(s,Y)$ estimator, computation of $Σ^1_n$ truth works for sentences of length at most $s$ that use $Y$ as a parameter. At the top level (and similarly for other levels):

true $∃Xφ(X)$ — there is a bounded complexity witness $X_0$, and all included estimators will accept $φ(X_0)$.

false $∃Xφ(X)$ — for every $X$, an $X$-estimator is included, and it will reject $φ(X)$.

true $∀Xφ(X)$ — for every $X$, an $X$-estimator is included, and it will accept $φ(X)$.

false $∀Xφ(X)$ — there is a bounded complexity counterexample $X_0$, and all included estimators will reject $φ(X_0)$.

*Variations on the definition:*

* Instead of 0-estimators, we could have started with level -1 estimators being sufficiently large numbers (or even with every number being a sufficiently good level -2 estimator), and $k$-estimators for $k>0$ should then still have the same power (with 0-estimators being essentially equivalent to $(a,b)$ with $\text{input size} ≪ a ≪ b$).

* If 'sufficiently complete' did not obey the 'arbitrarily strict' (for previous levels) qualification and we used a definition like Definition 3.1, the expressiveness using $k$-estimators ($k>0$) should collapse to that of being hyperarithmetic in the $k$th hyperjump of 0. (Essentially, enough level 0 estimators give us a sufficiently fast-growing sequence $S$, and levels 1 through $k$ estimators would give us $k$ special points (each sufficiently large relative to the previous points and the growth rate of $S$), including the final point of $S$.)

The estimators have an infinite version: A predicate that codes a sufficiently strict valid notion of $n$-estimators. Recognizability for machines using such predicates (where the machine must work for every sufficiently strict notion of $n$-estimators) appears to equal $Π^1_{n+1}$. Also, a single predicate suffices for all input sizes: Given input size $s$, the machine can pick $n$-estimators such that for every included level 0 estimator $(a,b)$, $b>s$, and then ignore 0-estimators $(a',b')$ with $a'<s$.

This completes our description at the level of second order arithmetic. What remains is finitistically studying which axioms to include in the system, and relating infinite statements to natural statements about finite structures. If, for example, projective determinacy for the results of the estimations is deemed a finitistically acceptable axiom schema, we will have a finitistic consistency proof of ZFC.

To go further, we can define transfinite levels of estimators. For an ordinal $α>0$ (alternatively, for consistency with $n$-estimators, limit $α$), an $α$-estimator is a sufficiently complete collection of estimators of lower levels, with $α$ marked in the estimator, and where 'sufficiently complete' is defined so as not to preclude notions of lower level estimators being arbitrarily strict. Level 0 estimators are as defined above (Definition 6.3 or an alternative).

**Definition 7.1.1:**

A notion $R$ of $α$-estimators ($α>0$) is valid iff:

(1) Every element of $R$ is finite.

(2) $∪R$ is a valid notion of $<α$-estimators.

(3) $x∈R ∧y∈∪R ⇒ x∪\{y\}∈R$

(4) For every valid notion $S$ of $<α$-estimators, there is an element of $R$ all of whose elements are elements of $S$.

A notion $R$ of $<α$-estimators is valid iff:

(5) For every $β<α$, the set of $β$-estimators in $R$, $R_β$, forms a valid notion.

(6) (Coherence) For every $β<α$, $R|β = ∪R_β$. (Here $β>0$ and $R|β = ∪_{γ<β}R_γ$. Informally, different levels of $R$ agree on which estimators of lower levels are valid.)

**Proposition 7.1.2:**

**(a)** For every ordinal $α$, there is a valid notion of $α$-estimators, and every valid notion is non-empty.

**(b)** For valid notions $R$ and $S$ of $α$-estimators, $R∩S$ is valid. Similarly, intersection of two valid notions of $<α$-estimators is valid. Also, for $α>0$, $∪(R∩S) = ∪R∩∪S$.

**Proof:**

**(a)** This follows from the existence of the trivial notion that includes every set of the right syntactic form. (Also, any valid notion of $<α$-estimators has a trivial corresponding notion of $α$-estimators.)

**(b)** Assume this holds for smaller $α$ (and $α=0$ is immediate). Let $R'$ be $∪R$ and $S'$ be $∪S$. Verify that $T' = R'∩S'$ is a valid notion of $<α$-estimators: (5) $T'_β = R'_β ∩ S'_β$, which is valid, and (6) $T'|β = R'|β ∩ S'|β$ $=$ $∪R'_β ∩ ∪S'_β$ $=$ $∪(R'_β ∩ S'_β) = ∪T'_β$. Thus, there are $r⊂T'$ with $r∈R$ and $s⊂T'$ with $s∈S$, so $r∪s∈R∩S$, and (using (3) for $R$ and $S$) every element of $T'$ is in $∪(R∩S)$, so $∪(R∩S) = ∪R∩∪S$. Finally, (4) holds because for a valid notion $U'$ of $<α$-estimators, $T'∩U'$ is also valid.

Thus, using Definition 4.2, we can meaningfully speak of 'for every sufficiently good $α$-estimator'.

Definability of reals using estimators of levels below $ω(α+1)$ appears to equal definability in $\mathrm{L}_α(ℝ)$, at least when $α$ is not too large. Note that for a definition using $α$-estimators not to (overtly) use infinity, we would need a coding of ordinals up to $α$ by integers.

**Estimators with Rational Levels**

Countable ordinals can be embedded into the rationals, and one extension of transfinite level estimators is to use estimators with rational levels. Level 0 estimators will be as before, and a higher level estimator will consist of the level and a sufficiently complete finite set of lower level estimators, but the treatment of 'sufficiently complete' is more subtle. Rational numbers are not well-ordered, but we work around this by, in effect, defining estimators for sufficiently complete (unspecified) well-ordered subsets of the nonnegative rationals.

Given a notion $R$ of $α$-estimators and a zero-preserving order-preserving injection $f:α→β$, there is a natural translation of $R$ to $β$, $R_f$:

If $γ$ is not in the range of $f$: every finite subset of $R_f|γ$ is included (as a level $γ$ estimator in $R_f$).

If $γ$ is in the range of $f$: a finite subset $r$ of $R_f|γ$ is included if after recursively deleting from $r$ all estimators of levels that are not in the range of $f$, removing duplicates, and applying the inverse of $f$ on the levels, we get an element of $R$.

The above definition works even if $β$ is a non-well-founded linear order, and in particular for $f:α→ℚ$. The set of notions of estimators derived from a notion of $α$-estimator and $f:α→ℚ$ (variable countable $α$ and $f$) forms a directed system under strictness, which allows Definition 4.2 to work.
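The deletion-and-relabeling step has a direct finite rendering under one literal reading (all representation choices here are hypothetical): an estimator is a pair `(level, body)`, where the body is an $(a,b)$ pair at level 0 and a frozenset of lower-level estimators otherwise, and $f$ is given as a dict for an order- and zero-preserving injection of levels.

```python
def project(est, f_inv):
    """Recursively delete members whose level is outside range(f), then
    relabel the remaining levels via f⁻¹ (frozensets remove duplicates)."""
    lvl, body = est
    if lvl == 0:
        return est
    kept = frozenset(project(e, f_inv) for e in body if e[0] in f_inv)
    return (f_inv[lvl], kept)

def in_translated(est, f, R):
    """Membership of 'est' in R_f at a level that is in the range of f."""
    f_inv = {v: k for k, v in f.items()}
    return est[0] in f_inv and project(est, f_inv) in R
```

For example, with $f = \{0↦0, 1↦2\}$, a level 2 candidate whose members are a level 0 estimator from $R$ together with arbitrary off-range level 1 material projects back onto the original element of $R$, so it is included.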

The expressiveness for computations that use sufficiently good estimators with rational levels probably equals $Δ_1(\{ℝ\})$ definability in $\mathrm{L}_α(ℝ)$ for the least $α$ such that $α$ is not $Δ_1(\{ℝ\})$ definable in $\mathrm{L}_α(ℝ)$ (and thus $α$ here is large but countable).

If the system used finite ordinals instead of countable ordinals, the expressiveness appears to equal the Turing degree of second order arithmetic. The difference is that (with finite ordinals) if we use (for example) a level 1 estimator to get estimators of level 1/2, 2/3, 3/4, 4/5, ..., we still get enough levels of estimators, but there is no assurance that an estimator of level $(n-1)/n$ is sufficiently complete relative to $n$ for large $n$ (or even non-empty, and we do not know when to stop considering higher level estimators).

A caveat (even for the full system above) is that if we use (for example) a level 1 estimator to get estimators of level 2/3, 3/5, 4/7, ..., then for a sufficiently large $n$, we may get an empty (or otherwise small) estimator for level $n/(2n-1)$. Of course, if $n$ is not too large (or if the completeness is relative to $n$), all included level $n/(2n-1)$ estimators would still be (essentially) sufficiently complete. There might be interesting notions of estimators with rational levels that handle such descending sequences differently (or other interesting notions of estimators with rational levels in general). Perhaps, such notions can be used to compute game quantifiers (and strategies) for games of countable length.

A (not necessarily finitistic) notion related to that of a sufficiently large Turing degree (or more closely, fast-growing infinite sequence of Turing degrees) is that of a sufficiently closed pattern enumerator.

**Vague Definition 7.2.1:** A sufficiently closed pattern enumerator is a function $f:ℕ→2^ℕ$ such that

- $f(n)(i)$ is character $i$ in the infinite binary string corresponding to pattern $n$. Numbering of patterns is permitted to be arbitrary.

- if $x∈2^ℕ$ follows a sufficiently simple pattern relative to $f(n)$, then $x$ is in the range of $f$.

- (optional for our purposes) if $x$ and $y$ are in the range of $f$, then so is their join $x_0,y_0,x_1,y_1,...$ .

For example, for some $n$, $f(n)(i)=0$ for all $i$, but the least such $n$ may be arbitrarily large. Also, without the optional condition, enumerators $A$ and $B$ can be trivially combined into an enumerator of patterns included in $A$ or $B$.
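The trivial combination mentioned above is just an interleaving of the two enumerations (a toy, typing an enumerator as a function $n ↦ (i ↦ \text{bit})$):

```python
def combine(A, B):
    """Enumerate exactly the patterns enumerated by A or by B."""
    return lambda n: A(n // 2) if n % 2 == 0 else B(n // 2)
```

Note that the combined function need not itself be sufficiently closed, which is why the optional join condition is a genuine extra requirement.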

Let $A_1, A_2, ..., A_n$ be sufficiently closed pattern enumerators such that $A_{i+1}$ includes $A_i$ as a pattern, and let $S$ be a sufficiently fast-growing sequence relative to the enumerators.

Assuming sufficient determinacy, we can recognize the set of winning positions of games on integers of length $ω^2 n$ with $Σ^0_1$ payoff as follows. Replace the first $ω$ moves of player I by a strategy $p_1$ for these moves, where $p_1$ (chosen by player I) is in $A_1$; replace the first $ω$ moves of player II by a strategy $s_1$ such that $p_1$ join $s_1$ is in $A_1$; and so on, replacing the first $ω^2$ moves with $p_1,s_1,p_2,s_2,...$ and reducing the game length to $ω^2 (n-1)$ (if $n>1$). We might not be able to go further with $A_1$ since $A_1$ does not recognize its behavior as a pattern, but we can use $A_2$ for the first $ω^2$ moves of the modified game, and so on, converting the game to one of length $ω$. We then constrain the move values using $S$, and if there is a strategy (for the first player) that wins at a finite stage, halt with 'accept'. We conjecture the converse to hold as well (assuming sufficient determinacy).

*Notes:*

* Without a sufficiently fast-growing sequence $S$, an unspecified sufficiently closed pattern enumerator is essentially useless to a Turing machine since it may be arbitrary up to an arbitrarily large finite point.

* Sufficiently closed pattern enumerators have a number of equivalent (in terms of expressiveness) definitions. For example, we may view them as pattern detectors with $f'(s)$ being the least $n$ such that $f(n)$ starts with $s$.

* To go further, we can use a transfinite sequence of pattern enumerators, with each enumerator including the sequence of enumerators of lower levels (combined into a binary sequence) as a pattern.

* If we used 'sufficiently simple pattern' instead of 'sufficiently simple pattern relative to $f(n)$', we should get the same expressiveness by using $ωα$ enumerators in place of $α$, while requiring each enumerator to include all sufficiently simple patterns with respect to the join of the lower level enumerators.

* Like fast-growing sequences, the enumerators (and chains of them with the fast-growing sequences) have a finite version: simply cut off the chain (coded as an infinite string) at a sufficiently large number relative to the chain and the problem size.
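The detector reformulation from the notes above is a simple search (a toy: the search terminates only because sufficient closure guarantees a match exists, not because of anything in the code):

```python
def detector(f):
    def f_prime(s):
        """Least n such that f(n) starts with the finite bit string s."""
        n = 0
        while True:
            if all(f(n)(i) == b for i, b in enumerate(s)):
                return n
            n += 1
    return f_prime
```

For instance, with the toy enumeration $f(n)(i) = $ bit $i$ of $n$ (so pattern $n$ is the binary expansion of $n$ padded with zeros), the detector maps the string $1,0,1$ to $5$.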

We can also consider what can be expressed using real numbers but without using an uncountable infinity. One way to increase expressiveness is to allow an infinite alternation of real quantifiers. Formally, $∃X_1 ∀X_2 …$ can be defined as true when the first player has a winning strategy in the game where the players take turns playing $X_i$ and the goal of the first player is to satisfy the formula after the quantifiers. Under reasonable axioms, this leads to expressiveness beyond L(ℝ). Technically, for every countable ordinal $α$, real numbers definable from a countable ordinal using $1+α+1$ real quantifiers are precisely those that are in the minimal iterable inner model with $α$ Woodin cardinals. Just $ω+2$ real quantifiers go beyond first order definability in L(ℝ).

This definition uses strategies, which are formally uncountable. However, we can avoid the uncountable by introducing a notion of $Y$ having a sufficiently large complexity (under Turing reducibility) relative to $X$. As before, vagueness is avoided because every sufficiently strict notion works. We obtain the necessary complexity by using a sequence of $α$ real numbers, each sufficiently complex relative to a join of the previous real numbers. Statements with infinite alternations of quantifiers can be decided by restricting the complexity of the strategies: For $γ$ levels of play, the strategies must be recursive in the $γ$th member of the sequence. By determinacy, and by sufficiency of the sequence, this does not affect who wins. We can go even further by using sequences with ordinal length sufficiently long relative to themselves. Reaching higher complexity is an endless endeavor.
