This page contains a list of past talks. Where available, there is a link to the speaker's slides and to a video of the talk on the seminar's YouTube channel.
Alessio Guglielmi (Bath), 15 December 2021 Totally Linear Proofs for Classical Logics Slides |
(joint work with Chris Barrett and Victoria Barrett) Subatomic proof systems in deep inference exploit the idea that atoms can be seen as superpositions of values, for example, true and false. Adopting this point of view leads to some dramatic simplifications of proof systems and their proof theory. A single, simple, linear inference shape, together with a certain notion of saturation, generates proof systems for several logics of interest. One of these is a conservative extension of propositional classical logic by decision trees. Despite expressing classical logic, this subatomic proof system is linear in an extreme sense: any notion of negation, contraction and weakening disappears, together with logical units themselves. This system, and how we can make it so extremely linear, is the main focus of the talk.
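Schematically, the single linear inference shape of subatomic systems is a medial-like scheme over two connectives (or atoms) \(\alpha\) and \(\beta\); the following rendering is based on earlier work in subatomic proof theory and is an illustration rather than necessarily the exact shape used in this talk:

\[
\frac{(A \mathbin{\alpha} B) \mathbin{\beta} (C \mathbin{\alpha} D)}{(A \mathbin{\beta} C) \mathbin{\alpha} (B \mathbin{\beta} D)}
\]

For instance, instantiating \(\alpha\) with an atom \(a\) and \(\beta\) with disjunction yields, under the standard interpretation, instances of contraction: this is, roughly, how a single linear shape can generate structural behaviour.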
The normalisation theory of our proof system can be developed at the subatomic level and then lifted to the standard level via an interpretation function. This way, we recover negation and all the other features without affecting the structural properties of proofs, such as cut-elimination. In the talk, we prove cut-elimination and other proof-theoretic results. The ideas are natural, simple, and mostly rely on two notions that appear to have very general applicability: projecting proofs around atoms and what we call the 'eversion' of formulae. More information about deep inference is at: http://alessio.guglielmi.name/res/cos.
Alexis Saurin (PPS & INRIA Paris), 17 November 2021 Virtuous circles in proofs Video |
In this seminar, I will consider proof systems for logics expressing inductive and coinductive properties, focusing on "circular" and non-wellfounded proofs. These derivations are finitely branching but admit infinitely deep branches, possibly subject to some regularity conditions. Circular derivations are closely related to "proofs by infinite descent" and must be equipped with a global condition preventing vicious circles in proofs.
After surveying several logics expressing (co)inductive properties and circular proofs, I will concentrate on the mu-calculus in a linear setting, that is, on linear logic extended with least and greatest fixed points.
In the spirit of structural proof-theory and of the Curry-Howard correspondence, we will not only be interested in the structure of provability but also in the structure of proofs themselves. I will survey several topics, notably the global validity condition, the productivity of cut-elimination, the impact of a careful treatment of structural rules, the question of proof invariants and the sequentiality of inferences.
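For concreteness, in linear logic with least and greatest fixed points (\(\mu\)MALL-style systems), a standard illustration expresses natural numbers as a least fixed point and streams as a greatest one:

\[
\mathsf{nat} \;=\; \mu X.\,(1 \oplus X) \qquad\qquad \mathsf{str}_A \;=\; \nu X.\,(A \otimes X)
\]

A circular proof producing a stream unfolds the \(\nu\) infinitely often, and the global validity condition requires, roughly, that every infinite branch be supported by such a fixed-point unfolding.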
Wilfried Sieg (Carnegie Mellon), 3 November 2021 Proofs as Objects Paper | Video |
In his recently discovered Twenty-Fourth Problem, Hilbert suggested investigating mathematical proofs and called in 1918 for a "theory of the specifically mathematical proof". Gentzen insisted in 1936 that "the objects of proof theory shall be the proofs carried out in mathematics proper". He viewed derivations in his natural deduction calculi as "formal images [Abbilder]" of such proofs. Clearly, these formal images were to represent significant structural features of the proofs from ordinary mathematical practice. The thorough formalization of mathematics encouraged in Mac Lane the idea that formal derivations could be the starting-points for a theory of mathematical proofs.
These ideas for a theory of mathematical proofs have been reawakened by a confluence of investigations on natural reasoning (in the tradition of Gentzen and Prawitz), interactive verifications of theorems, and implementations of mechanisms that search for proofs. At this intersection of proof theory with interactive and automated proof construction, one finds a promising avenue for exploring the structure of mathematical proofs. I will detail steps down this avenue: the formal representation of proofs in appropriate logical frames is akin to the representation of physical phenomena in mathematical theories; an important dynamic aspect is captured through the articulation of bi-directional and strategically guided procedures for constructing (normal) proofs.
References
Gentzen G (1936) Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen 112(1):493-565.
Hilbert D (1918) Axiomatisches Denken. Mathematische Annalen 78:405-415.
Mac Lane S (1934) Abgekürzte Beweise im Logikkalkül. Dissertation, Georg-August-Universität Göttingen.
Mac Lane S (1935) A logical analysis of mathematical structure. The Monist 45(1):118-130.
Thiele R (2003) Hilbert's twenty-fourth problem. The American Mathematical Monthly 110(1):1-24.
Fedor Pakhomov (Ghent & Steklov Institute), 20 October 2021 Fast growing hierarchies, ordinal collapsing, and Pi^1_1-CA_0 Video |
(joint work with Juan Pablo Aguilera and Andreas Weiermann) Weakly finite dilators are functions F from naturals to naturals that have a fixed extension to a dilator F: On -> On. We show that for a large class of natural ordinal notation systems alpha, the fast-growing function F_alpha can naturally be treated as a weakly finite dilator. More precisely, our construction works when alpha = D(omega) for a weakly finite dilator D. Thus we have an operator that maps a weakly finite dilator D to the weakly finite dilator F_{D(omega)}. We show that F_{D(omega)} can be naturally identified with the ordinal notation system based on a variant of the collapsing function psi, collapsing D(Omega). We show that the statement that this operator maps weakly finite dilators to weakly finite dilators is equivalent to Pi^1_1-CA_0.
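To fix intuitions, here is a tiny Python sketch of the first levels of the fast-growing hierarchy, under one common convention (conventions differ across the literature; the code and names are illustrative only, and the talk works with hierarchies indexed by much larger notation systems):

```python
# Fast-growing (Wainer) hierarchy below omega, under one common convention:
#   F_0(n) = n + 1,   F_{k+1}(n) = F_k^{n+1}(n),   F_omega(n) = F_n(n).

def F(k, n):
    """Fast-growing hierarchy at finite level k; grows explosively, keep inputs tiny."""
    if k == 0:
        return n + 1
    for _ in range(n + 1):          # apply F_{k-1} exactly (original n) + 1 times
        n = F(k - 1, n)
    return n

def F_omega(n):
    """First limit level: diagonalise over the finite levels."""
    return F(n, n)

print([F(1, n) for n in range(5)])  # [1, 3, 5, 7, 9]: F_1(n) = 2n + 1
print(F(2, 3))                      # 63: F_2(n) = 2^(n+1) * (n+1) - 1
print(F_omega(2))                   # 23 = F_2(2): growth quickly becomes Ackermannian
```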
Jeremy Avigad (Carnegie Mellon), 6 October 2021 The conservativity of weak König's lemma (a proof from the book) Video |
A celebrated result by Harvey Friedman is that WKL_0, a subsystem of second-order arithmetic based on Weak König's Lemma, is conservative over primitive recursive arithmetic (PRA) for Pi_2 sentences. Leo Harrington showed that it is conservative over RCA_0 for Pi^1_1 sentences, from which Friedman's result follows. It is known that in the worst case there is an iterated exponential increase in length when translating proofs from RCA_0 to PRA, and hence from WKL_0 to PRA. But Hájek and I independently showed that proofs can be translated from WKL_0 to RCA_0 without a dramatic increase in length. In this talk, I'll sketch the relevant background and present the nicest proof of this last result that I know of.
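Schematically, the two conservation results compose as follows (this is just the composition stated above, in display form):

\[
\mathrm{WKL}_0 \vdash \varphi \;\Longrightarrow\; \mathrm{RCA}_0 \vdash \varphi \quad (\varphi \in \Pi^1_1), \qquad \mathrm{RCA}_0 \vdash \varphi \;\Longrightarrow\; \mathrm{PRA} \vdash \varphi \quad (\varphi \in \Pi^0_2).
\]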
Antonina Kolokolova (Newfoundland), 30 June 2021 Learning from Bounded Arithmetic Slides | Video |
The central question of complexity theory -- what can (and cannot) be feasibly computed -- has a corresponding logical meta-question: what can (and cannot) be feasibly proven. While complexity theory studies the former, bounded arithmetic is one of the main approaches to the meta-question. There is a tight relation between theories of bounded arithmetic and corresponding complexity classes, allowing one to study what can be proven in, for example, "polynomial-time reasoning" and what power is needed to resolve complexity questions, with a number of both positive and negative provability results.
Here, we focus on the complexity of another meta-problem: learning to solve problems such as Boolean satisfiability. There is a range of ways to define "solving problems": at one extreme, the uniform setting asks for the existence of a fast (potentially randomized) algorithm; at the other extreme, one allows a potentially non-computable family of small Boolean circuits, one for each problem size. The complexity of learning can be recast as the complexity of finding a procedure that generates Boolean circuits solving the problem at a given size, if such a procedure (and such a family of circuits) exists.
First, inspired by the KPT witnessing theorem, a special case of Herbrand's theorem in bounded arithmetic, we develop an intermediate notion of uniformity that we call LEARN-uniformity. While non-uniform lower bounds are notoriously difficult, we can prove several unconditional lower bounds for this weaker notion of uniformity. Then, returning to the world of bounded arithmetic and using that notion of uniformity as a tool, we show the unprovability of several complexity upper bounds for both deterministic and randomized complexity classes. In particular, we give simpler proofs that the theory of polynomial-time reasoning PV does not prove that all of P is computable by circuits of a specific polynomial size, and that the theory V^1, a second-order counterpart of Buss's classic theory S^1_2, does not prove the same statement with NP in place of P.
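In one common formulation, the KPT witnessing theorem mentioned above says roughly the following: if a universal theory \(T\) proves \(\forall x\, \exists y\, \forall z\; \varphi(x,y,z)\) with \(\varphi\) open, then there are finitely many terms \(t_1, \ldots, t_k\) such that

\[
T \vdash \varphi(x, t_1(x), z_1) \,\vee\, \varphi(x, t_2(x, z_1), z_2) \,\vee\, \cdots \,\vee\, \varphi(x, t_k(x, z_1, \ldots, z_{k-1}), z_k),
\]

which can be read as a finite-round student-teacher (interactive) computation, the view underlying LEARN-uniformity.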
Finally, we leverage these ideas to show that bounded arithmetic "has trouble differentiating" between uniform and non-uniform frameworks: more specifically, we show that theories for polynomial-time and randomized polynomial-time reasoning cannot prove both a uniform lower bound and a non-uniform upper bound for NP. In particular, while it is possible that NP!=P yet all of NP is computable by families of polynomial-size circuits, this cannot be proven feasibly.
Hugo Herbelin (INRIA Paris), 16 June 2021 On the logical structure of choice and bar induction principles
We present an alternative, equivalent, generalised formulation GDC(A,B,T) of the axiom of choice (for T a predicate filtering the finite approximations of functions from A to B), of the form "coinductive T-approximability implies T-compatible choice function", such that:
- GDC(\(\mathbb N\),B,T) is essentially the axiom of dependent choices
- GDC(A,Bool,T) is essentially the statement of the Boolean prime filter theorem
- GDC(\(\mathbb N\),Bool,T) reduces to their common intersection, namely essentially weak König's lemma
- GDC(A,B,R\(^+\)), for R\(^+\) some specific filtering predicate defined from a relation R on A and B, is the statement of the general axiom of choice
Its syntactic contrapositive GBI(A,B,T) "T is barred implies T is inductively barred" is such that:
- GBI(\(\mathbb N\),B,T) is essentially bar induction (spelled out after this list), and also close to countable Zorn's lemma
- GBI(A,Bool,T) is essentially the statement of Tarski's completeness theorem, in the form "validity implies provability", for Scott's entailment relations
- GBI(\(\mathbb N\),Bool,T) reduces to their common intersection, namely essentially the weak fan theorem
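For orientation, in the special case \(A = \mathbb N\), where \(T\) is a predicate on finite sequences over \(B\), the two notions can be spelled out roughly as follows (a standard reading, not necessarily the paper's exact definitions):

\[
T \text{ is barred} \;\iff\; \text{for every } f : \mathbb N \to B \text{ there is an } n \text{ with } \langle f(0), \ldots, f(n-1) \rangle \in T,
\]

and \(T\) is inductively barred at a finite sequence \(u\) if either \(u \in T\) or, for every \(b \in B\), the extension of \(u\) by \(b\) is inductively barred. GBI then concludes that \(T\) is inductively barred at the empty sequence.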
We will also present further explorations aiming at integrating the full version of Zorn's lemma into the picture. Note that all equivalences will be considered through the lens of both constructive and linear logic.
This is joint work with Nuria Brede.
Lutz Straßburger (INRIA & LIX Paris), 2 June 2021 Towards a Combinatorial Proof Identity ArXiv paper presented at LICS 2021 |
In this talk I will explore the relationship between combinatorial proofs for first-order logic and a deep inference proof system for first-order logic. In particular, I will discuss the decomposition of a proof into a linear part and a resource management part. Based on this, I will argue that combinatorial proofs can serve as a way to define a universal notion of proof identity.
This talk is based on joint work with Dominic Hughes and Jui-Hsuan Wu (to be presented at LICS 2021).
Revantha Ramanayake (Groningen), 5 May 2021 Up and Down the Lambek Calculus Slides | Video |
I will present recent results using proof theory that establish decidability and complexity for many substructural logics. Specifically: for infinitely many axiomatic extensions of the commutative Full Lambek calculus with contraction/weakening, including the prominent fuzzy logic MTL. This work can be seen as extending Kripke's decidability argument for FLec (1959). I aim to isolate some of the proof-theoretic motifs that feature in this type of work. These include: refinement of backward and forward proof search, cut-free proof systems via structural rule extension, absorption of contraction rules, and upper bounding proof search via controlled bad sequences. Joint work with A.R. Balasubramanian and Timo Lang.
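For reference, the weakening and contraction rules at issue have, over the Full Lambek calculus, the following standard sequent forms:

\[
\frac{\Gamma, \Delta \Rightarrow C}{\Gamma, A, \Delta \Rightarrow C}\;(\mathrm{w}) \qquad\qquad \frac{\Gamma, A, A, \Delta \Rightarrow C}{\Gamma, A, \Delta \Rightarrow C}\;(\mathrm{c})
\]

Their uncontrolled interaction with cut is what makes decidability delicate; "absorption of contraction" refers, roughly, to building bounded forms of (c) into the logical rules so that proof search terminates.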
Greg Restall (Melbourne), 21 April 2021 Comparing Rules for Identity in Sequent Systems and Natural Deduction Slides | Video |
It is straightforward to treat the identity predicate in models for first-order predicate logic: the truth conditions for identity formulas are easy to state. Finding appropriate rules for identity in a sequent system or in natural deduction, on the other hand, leaves many questions open. Identity could be treated with introduction and elimination rules in natural deduction, or with left and right rules in a sequent calculus, as is standard for familiar logical concepts. Alternatively, since identity is a predicate and identity formulas are atomic, identity can be treated by way of axiomatic sequents rather than inference rules. In this talk, I will describe this phenomenon, explore the relationships between different formulations of the rules for the identity predicate, and attempt to account for some of the distinctive virtues of each formulation.
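For example, one standard axiomatic-sequent treatment (in the style of G3 calculi with equality; shown here as an illustration, not necessarily one of the exact systems compared in the talk) uses a reflexivity rule and a replacement rule:

\[
\frac{t = t, \Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta}\;(\mathrm{Ref}) \qquad\qquad \frac{\varphi(t), s = t, \varphi(s), \Gamma \Rightarrow \Delta}{s = t, \varphi(s), \Gamma \Rightarrow \Delta}\;(\mathrm{Repl})
\]

Because both rules only add atomic formulas to the antecedent, they cohabit well with admissibility of the structural rules and of cut.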
Andreas Weiermann (Ghent), 17 March 2021 Goodstein sequences and notation systems for natural numbers Video |
Termination theorems for Goodstein sequences provide canonical examples for the Gödel incompleteness of systems of arithmetic and set theory. The original Goodstein sequence is based on canonical notations for natural numbers using addition and exponentiation. In our exposition we consider more general forms of notations which are based on the Ackermann function and related functions and we use them to define Goodstein sequences. The resulting termination theorems will be independent of relatively strong systems of arithmetic. We prove some general results about the notation systems for natural numbers and try to explain how these results might be connected to the problem of canonical well orderings and to comparison theorems between the slow and fast growing hierarchies. The exposition is aimed at a general audience.
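To make the base-bumping mechanism concrete, here is a small Python sketch of the classical Goodstein process based on hereditary exponential notation (the function names are mine, for illustration; the generalisations in the talk replace this notation system with Ackermann-style ones):

```python
# Classical Goodstein process: write n in hereditary base b (addition and
# exponentiation only), replace every b by b+1, subtract 1, and repeat.

def bump(n, b):
    """Write n in hereditary base b, replace every b by b+1, and evaluate."""
    total, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # digit d at exponent e, with e itself rewritten hereditarily
            total += d * (b + 1) ** bump(e, b)
        e += 1
    return total

def goodstein(n, steps):
    """First `steps` values of the Goodstein sequence starting at n."""
    b, out = 2, []
    for _ in range(steps):
        out.append(n)
        if n == 0:
            break
        n = bump(n, b) - 1
        b += 1
    return out

print(goodstein(3, 8))   # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109] -- terminates astronomically later
```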
Dale Miller (Inria & IP Paris), 3 March 2021 Focusing Gentzen's LK proof system Slides | Video |
Gentzen's sequent calculi LK and LJ are landmark proof systems. They identify the structural rules of weakening and contraction as notable inference rules, and they allow for an elegant statement and proof of both cut-elimination and consistency for classical and intuitionistic logics. Among the undesirable features of those sequent calculi is the fact that their inference rules are very low-level and frequently permute over each other. As a result, large-scale structures within sequent calculus proofs are hard to identify. In this paper, we present a different approach to designing a sequent calculus for classical logic. Starting with Gentzen's LK proof system, we first examine the proof-search meaning of his inference rules and classify those rules as involving either don't-care nondeterminism or don't-know nondeterminism. Based on that classification, we design the focused proof system LKF, in which inference rules belong to one of two phases of proof construction depending on which flavor of nondeterminism they involve. We then prove that the cut rule and the general form of the initial rule are admissible in LKF. Finally, by showing that the rules of inference for LK are all admissible in LKF, we give a completeness proof for LKF provability with respect to LK provability. We shall also apply these properties of the LKF proof system to establish other meta-theoretic properties of classical logic, including Herbrand's theorem. This talk is based on a recent draft paper co-authored with Chuck Liang.
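To illustrate the classification with two of Gentzen's own right rules: the conjunction rule is invertible, so it can be applied eagerly (don't-care nondeterminism), while the disjunction rule commits to one disjunct (don't-know nondeterminism):

\[
\frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}{\Gamma \vdash A \wedge B, \Delta}\;(\wedge R) \qquad\qquad \frac{\Gamma \vdash A, \Delta}{\Gamma \vdash A \vee B, \Delta}\;(\vee R)
\]

In LKF, rules of the first kind populate the negative (invertible) phase and rules of the second kind the positive phase.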
Keita Yokoyama (JAIST), 17 February 2021 Forcing interpretation, conservation and proof size Slides |
The notion of relative interpretation is one of the essential tools of proof theory. In [1], Avigad demonstrated how forcing can be understood in the context of proof theory and how it is related to relative consistency or conservation proofs. Indeed, one may see that forcing is a generalization of a relative interpretation by means of the Kripke semantics.
Moreover, such an interpretation usually provides a feasible (polynomial) proof transformation, as discussed in [2]. In this talk, we will overview such ideas and show that many conservation theorems for systems of second-order arithmetic can be converted into non-speedup theorems.
[1] J. Avigad, Forcing in proof theory. Bull. Symbolic Logic 10 (2004), no. 3, 305-333.
[2] J. Avigad, Formalizing forcing arguments in subsystems of second-order arithmetic. Ann. Pure Appl. Logic 82 (1996), no. 2, 165-191.
[3] L. A. Kolodziejczyk, T. L. Wong and K. Yokoyama, Ramsey's theorem for pairs, collection, and proof size, submitted.
Sara Negri (Genova), 3 February 2021 A proof-theoretic approach to formal epistemology Slides |
(joint work with Edi Pavlovic) Ever since antiquity, attempts have been made to characterize knowledge through belief augmented by additional properties such as truth and justification. These characterizations have been challenged by Gettier counterexamples and their variants.
A modern proposal, known as defeasibility theory, characterizes knowledge through stability under revisions of beliefs on the basis of true or arbitrary information (Hintikka (1962), Stalnaker (1998)). A formal investigation of such a proposal calls for the methods of dynamic epistemic logic: well-developed semantic approaches to dynamic epistemic logic have been given through the plausibility models of Baltag and Smets (2008) and Pacuit (2013), but a corresponding proof theory is still in its early stages.
We shall recast plausibility models in terms of the more general neighbourhood models and develop on their basis complete proof systems, following a methodology introduced in Negri (2017) and developed for conditional doxastic notions in Girlando et al. (2018).
An inferential treatment of various epistemic and doxastic notions such as safe belief and strong belief will give a new way to study their relationships; among these, the characterization of knowledge as belief stable under arbitrary revision will be grounded through formal labelled sequent calculus derivations.
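For orientation, a typical forcing clause in (monotone) neighbourhood semantics has the following shape, where \(N(x)\) is the set of neighbourhoods of a world \(x\) (an illustration of the kind of models involved, not the paper's full definitions):

\[
x \Vdash \Box A \;\iff\; \exists a \in N(x)\ \forall y \in a\ (y \Vdash A).
\]

Plausibility models then arise as the special case where the neighbourhoods are generated by a plausibility ordering.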
Georg Moser (U Innsbruck), 20 January 2021 Herbrand Complexity and Hilbert's Epsilon Calculus Slides | Video |
In the talk I will make use of two results on the epsilon calculus over a language with equality. First, I'll show how the epsilon calculus can be used to derive upper bounds on the Herbrand complexity of first-order logic with equality, independent of the propositional structure of the first-order proof. Second, I'll indicate how to obtain (non-elementary) upper bounds on the Herbrand complexity if first-order logic is extended by equality axioms for epsilon terms. This follows from a careful study of the complexity of the epsilon elimination procedure in the presence of epsilon equality axioms.
This is joint work with Kenji Miyamoto.
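For background, the epsilon calculus replaces quantifiers by the \(\varepsilon\)-operator, where \(\varepsilon x\, A(x)\) is read as a canonical witness for \(A\) if one exists; its governing axioms are the critical formulas

\[
A(t) \rightarrow A(\varepsilon x\, A(x)),
\]

and \(\exists x\, A(x)\) can then be defined as \(A(\varepsilon x\, A(x))\).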
Matthias Baaz (TU Vienna), 16 December 2020 Cut Elimination as Error Correcting Device Slides | Video |
In mathematical proofs, axioms and intermediary results are often represented by their names. It is, however, undecidable whether such a description corresponds to an underlying proof. This implies that there is sometimes no recursive bound on the complexity of the simplest underlying proof in terms of the complexity of the abstract proof description; that is, the abstract proof description might be non-recursively simpler. This does not apply to cut-free proofs, however, where it is easy to decide whether there is a corresponding proof and easy to reconstruct a most general proof in case there is one. This means that cut elimination on an abstract proof description may be considered an error-correcting device. We compare various elimination procedures on abstract proof descriptions and describe their relation to the first epsilon theorem.
Ulrich Kohlenbach (TU Darmstadt), 2 December 2020 Proof Mining and the Lion-Man game Slides | Video |
We analyze, based on an interplay between ideas and techniques from logic and geometric analysis, a pursuit-evasion game. More precisely, we focus on a discrete lion and man game with an epsilon-capture criterion.
We prove that in uniformly convex bounded domains the lion always wins and, using ideas stemming from proof mining, we extract a uniform rate of convergence for the successive distances between the lion and the man. As a byproduct of our analysis, we study the relation among different convexity properties in the setting of geodesic spaces.
Joint work with Genaro López-Acedo and Adriana Nicolae.
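As a rough illustration of the discrete game, one can simulate a pursuit in the closed unit disc. The update rule below (the lion steps a fixed distance straight towards the man's current position, while the man runs along the boundary) is my toy discretisation, not the precise epsilon-capture game analysed in the talk:

```python
# Toy simulation of a discrete pursuit ("lion and man") game in the unit disc.
import math

def simulate(steps=60, s=0.1):
    lion = (0.0, 0.0)
    theta = 0.0
    for n in range(steps):
        theta += s                      # man moves arclength ~ s along the boundary
        man = (math.cos(theta), math.sin(theta))
        dx, dy = man[0] - lion[0], man[1] - lion[1]
        d = math.hypot(dx, dy)
        # lion moves up to distance s straight towards the man;
        # convexity of the disc keeps the lion inside the domain
        t = min(1.0, s / d) if d > 0 else 0.0
        lion = (lion[0] + t * dx, lion[1] + t * dy)
        if n % 10 == 0:
            print(f"step {n:3d}: lion-man distance {d:.4f}")

simulate()
```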
Albert Visser (Utrecht University), 18 November 2020 Fixed Points meet Löb's Rule Slides | Video |
The modal part of the work reported in this talk is in collaboration with Tadeusz Litak.
For a wide class of theories we have the Second Incompleteness Theorem and, what is more, Löb's rule, also in cases where the third Löb Condition L3 *provable implies provably provable* (aka 4) fails. We will briefly indicate some examples of this phenomenon. What happens when we do have Löb's Rule but not L3? It turns out that we still have a lot. For example, the de Jongh-Sambin-Bernardi Theorem on the uniqueness of fixed points remains valid. So, e.g., Gödel sentences are unique modulo provable equivalence. On the other hand, explicit definability of fixed points fails. An arithmetical example of the non-explicit-definability of the Gödel sentence is still lacking. (I do have an arithmetical example where the Gödel sentence is equivalent to the consistency of inconsistency but not to consistency.)
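For reference, the conditions at play, stated in their usual modal form:

\[
\mathrm{L1}: \ \vdash A \;\Rightarrow\; \vdash \Box A, \qquad \mathrm{L2}: \ \vdash \Box(A \to B) \to (\Box A \to \Box B), \qquad \mathrm{L3}: \ \vdash \Box A \to \Box\Box A,
\]

and Löb's rule: from \(\vdash \Box A \to A\), infer \(\vdash A\). The setting of the talk is theories validating L1, L2, and Löb's rule, but not necessarily L3.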
We discuss the relevant logic: the Henkin Calculus, to wit, K plus Löb's rule plus boxed fixed points. This logic turns out to be synonymous with the mu-calculus plus the minimal Henkin sentence, which expresses well-foundedness. So results concerning the mu-calculus, like uniform interpolation, can be transferred to the Henkin Calculus.
Valeria de Paiva (Topos Institute Berkeley), 4 November 2020 Benchmarking Linear Logic Slides | Video |
Benchmarking automated theorem proving (ATP) systems using standardized problem sets is a well-established method for measuring their performance, but the availability of such problem libraries for non-classical logics is very limited. So in work with Olarte, Pimentel, and Reis (2018) we proposed a library for benchmarking Girard's (propositional) intuitionistic linear logic, starting from Kleene's initial collection of intuitionistic theorems. However, rather than using these theorems to check the efficiency of different theorem provers for the logic, I wanted to investigate the structure of the space of proofs of linear theorems in different variants of linear logic. Having learned of Tarau's work on 'generating' intuitionistic implicational theorems, I suggested to him that we could compute the 'linear' implicational theorems. His program turned out to be very adept, and in a few hours it produced almost 8 billion "small" implicational linear theorems. Given this scale, we decided we could do some machine learning on intuitionistic linear logic, and the results seem impressive.
(This is joint work with Paul Tarau, presented by him at ICLP2020, CLA2020)
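To give a flavour of the objects being generated and counted, here is a small Python sketch (my illustration, unrelated to Tarau's actual programs) of a naive decision procedure for the implicational fragment of linear logic, in which every hypothesis must be used exactly once:

```python
from itertools import combinations

# Formulas: variables are strings; ('->', a, b) is linear implication a -o b.

def prove(gamma, goal):
    """Decide sequents of implicational linear logic (BCI): each hypothesis
    in gamma must be used exactly once."""
    # right rule for -o is invertible, so apply it eagerly
    if isinstance(goal, tuple):
        return prove(gamma + [goal[1]], goal[2])
    # identity: atomic goal, context is exactly that atom
    if gamma == [goal]:
        return True
    # left rule for -o: pick an implication, split the remaining context
    for i, h in enumerate(gamma):
        if isinstance(h, tuple):
            rest = gamma[:i] + gamma[i + 1:]
            for k in range(len(rest) + 1):
                for left in combinations(range(len(rest)), k):
                    g1 = [rest[j] for j in left]
                    g2 = [rest[j] for j in range(len(rest)) if j not in left]
                    if prove(g1, h[1]) and prove(g2 + [h[2]], goal):
                        return True
    return False

I = ('->', 'a', 'a')                                                   # a -o a
B = ('->', ('->', 'b', 'c'), ('->', ('->', 'a', 'b'), ('->', 'a', 'c')))  # composition
K = ('->', 'a', ('->', 'b', 'a'))                                      # needs weakening
print(prove([], I), prove([], B), prove([], K))  # True True False
```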
Michael Rathjen (University of Leeds), 21 October 2020 Far beyond Goodman's Theorem? Slides | Video |
Goodman's Theorem says that Heyting arithmetic in all finite types, HA^\omega, augmented by the axiom of choice for all finite types, AC(FT), is conservative over HA. This classical result has been extended to many other intuitionistic theories (e.g. by Beeson) and even to intuitionistic Zermelo-Fraenkel set theory with large set axioms (by Friedman and Scedrov). The question I'll be pondering in this talk is whether the finite types are the limit. What about adding the axiom of choice for all the transfinite dependent types as they appear in Martin-Löf type theory (actually already in Hilbert's "Über das Unendliche")?
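For reference, the finite-type axiom of choice AC(FT) is the schema, for all finite types \(\sigma, \tau\):

\[
\forall x^{\sigma}\, \exists y^{\tau}\, A(x,y) \;\rightarrow\; \exists f^{\sigma \to \tau}\, \forall x^{\sigma}\, A(x, f(x)).
\]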
Sam Buss (University of California at San Diego), 7 October 2020 Propositional proof systems and bounded arithmetic for logspace and nondeterministic logspace Slides | Video |
We discuss logics for proof systems that correspond to deterministic logspace computation and to nondeterministic logspace computation. The first component is propositional proof systems in which lines are either decision diagrams, represented as decision trees with extension variables, or nondeterministic decision diagrams, represented by decision trees with nondeterministic choices and extension variables. The second component is the bounded arithmetic theories VL and VNL.
This is part of a program of developing logics for weak classes of computational complexity. A highlight of this kind of work is tight relationships between propositional proof systems, (second-order) theories of bounded arithmetic, and computational complexity. Our results are in part revamped versions of prior theories for logspace and nondeterministic logspace due to Zambella, Cook, Nguyen, and Perron. The talk will survey past results and report on joint work in progress with Anupam Das and Alexander Knop.
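As a rough illustration of the kind of lines these proof systems manipulate, the following Python sketch evaluates a deterministic branching program (a decision diagram); the node encoding is my own, chosen for exposition. The evaluator tracks only the current node, which conveys why such objects pair naturally with logspace computation:

```python
# Evaluate a deterministic branching program (decision diagram).
# Node format (illustrative assumption): node = (variable index, low child,
# high child); leaves are the Booleans True and False.

def eval_bp(nodes, root, assignment):
    node = root
    while not isinstance(node, bool):
        var, lo, hi = nodes[node]
        node = hi if assignment[var] else lo
    return node

# x0 XOR x1 as a three-node diagram
nodes = {
    0: (0, 1, 2),
    1: (1, False, True),   # reached when x0 = 0: output x1
    2: (1, True, False),   # reached when x0 = 1: output not x1
}
print(eval_bp(nodes, 0, [True, False]))   # True  (1 XOR 0)
print(eval_bp(nodes, 0, [True, True]))    # False (1 XOR 1)
```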