Archive of events from 2021
-
I will introduce some of our research on Iris, a higher-order concurrent separation logic framework, implemented and verified in the Coq proof assistant, which can be used for mathematical reasoning about safety and correctness of concurrent higher-order imperative programs. Iris has been used for many different applications; see iris-project.org for a list of research papers. However, in this talk I will focus on the Iris base logic (and its semantics) and sketch how one may define useful program logics on top of the base logic. The base logic is a higher-order intuitionistic modal logic, which, in particular, supports the definition of recursive predicates and whose type of propositions is itself recursively defined.
-
Cyclic proof systems permit derivations to contain cycles, thereby serving as finite representations of non-wellfounded derivation trees. Such cyclic proof systems have been proposed for a broad range of logics, especially those with fixed-point features or inductively defined objects of reasoning. The soundness of such systems is commonly ensured by only considering those cyclic derivations as proofs which satisfy a so-called global trace condition.
In this talk, I will present a category-theoretic notion, the trace category, which generalizes the trace condition of many cyclic proof systems. Indeed, a handful of different trace categories are sufficient to capture the trace conditions of all cyclic proof systems from the literature I have examined so far. The arising abstract notion of cyclic proof allows for the derivation of generalized renditions of standard results of cyclic proof theory, such as the decidability of proof-checking and the regularizability of certain non-wellfounded proofs. It also opens the door to broader inquiries into the structural properties of trace-condition-based cyclic and non-wellfounded proof systems, some of which I will touch on in this talk, time permitting. The majority of this talk will be based on my Master’s thesis.
-
Cognitive scientists Spelke and Kinzler (2007) and Carey (2009) propose objects, actions, space and numbers as ‘core domains of knowledge’ that underpin the framework of concepts people use to describe and communicate about the world. Gärdenfors (2019, 2020) argues that humans make sense of domains by appealing to various types of invariances in sensory signals. In this talk, I present work by Quinon and Gärdenfors (manuscript) in which the aim is to extend the analysis in terms of invariances to the domain of numbers. We focus on several perspectives on these invariances: cognitive modeling, formal mathematical, and experimental.
As theoretical background, we assume that numbers are properties of collections (Simons 1982, 2007, 2011; Johansson 2015; Angere 2014). We observe that the domain of number is determined by two types of invariances. First, the concept of collection itself depends on being invariant under the location of its objects. Second, the determinant invariance of the domain of number is the fungibility of objects: If an object in a collection is exchanged for another object, the collection will still contain the same number of objects. Fungibility will be shown to be closely related to one-to-one correspondences.
We first introduce the concept of a collection and show how it differs from the concept of a set. Then we present the invariance of location of objects that applies to collections and we introduce fungibility as a second type of invariance. We illustrate our theoretical analysis by empirical material from experiments of developmental psychologists.
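The two invariances described above can be illustrated with a small sketch (the modelling of a collection as a Python list is my own illustration, not the authors' formalisation):

```python
# A minimal sketch: model a collection as a list of objects and check the
# two invariances discussed above. This formalisation is illustrative only.
from itertools import permutations

def same_number(c1, c2):
    """Two collections have the same number of objects exactly when a
    one-to-one correspondence (bijection) between them exists, i.e. when
    their sizes match."""
    return len(c1) == len(c2)

collection = ["apple", "pear", "plum"]

# Invariance under location: rearranging (relocating) the objects does not
# change the number of objects in the collection.
assert all(same_number(collection, list(p)) for p in permutations(collection))

# Fungibility: exchanging one object for a different object preserves the
# number of objects.
swapped = ["apple", "pear", "cherry"]
assert same_number(collection, swapped)
```

The `same_number` check makes the link between fungibility and one-to-one correspondences concrete: a swap is itself a bijection, so the count cannot change.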
This is joint work with Peter Gärdenfors (Lund).
-
In 1672 John Eliot, English Puritan educator and missionary, published The Logick Primer: Some Logical Notions to initiate the INDIANS in the knowledge of the Rule of Reason; and to know how to make use thereof [1]. This roughly 80-page pamphlet focuses on introducing basic syllogistic vocabulary and reasoning so that syllogisms can be created from texts in the Psalms, the gospels, and other New Testament books. The use of logic for proselytizing purposes is not distinctive: What is distinctive about Eliot’s book is that it is bilingual, written in both English and Wôpanâak (Massachusett), an Algonquian language spoken in eastern coastal and southeastern Massachusetts. It is one of the earliest bilingual logic textbooks, it is the only textbook that I know of in an indigenous American language, and it is one of the earliest printed attestations of the Massachusett language.
In this talk, I will:
- Introduce John Eliot and the linguistic context he was working in.
- Introduce the contents of the Logick Primer—vocabulary, inference patterns, and applications.
- Discuss notions of “Puritan” logic that inform this primer.
- Talk about the importance of his work in documenting and expanding the Massachusett language and the problems that accompany his colonial approach to this work.
References
[1] J.[ohn] E.[liot]. The Logick Primer: Some Logical Notions to initiate the INDIANS in the knowledge of the Rule of Reason; and to know how to make use thereof. Cambridge, MA: Printed by M.[armaduke] J.[ohnson], 1672.
-
I will demonstrate how Walukiewicz’ seminal proof of completeness for the propositional μ-calculus can be derived (and refined) from the cyclic proof theory of the logic, notably the uniform interpolation theorem for the logic.
-
Compared to standard Tarskian semantics, team semantics is a more expressive framework: it can express logical connectives, operations and atoms that cannot be expressed in Tarskian semantics. This includes branching quantifiers, notions of dependence and independence, trace quantification in linear-time temporal logic (LTL), and probabilistic notions from quantum mechanics.
Team semantics is based on the same notion of structure as Tarskian semantics, but instead of a single assignment satisfying a formula (or not), in team semantics a set, or a team, of assignments satisfies a formula (or not). In other words, the semantic value of a formula is lifted from a set of assignments (those that satisfy the formula) to a set of teams of assignments.
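A standard example of an atom available only at the team level is the dependence atom =(x, y), "y depends only on x". A minimal sketch (the encoding of assignments as dicts is mine, not tied to any particular system from the talk):

```python
# A team is a set of assignments, here encoded as a list of dicts. The
# dependence atom =(x, y) holds in a team iff any two assignments that
# agree on x also agree on y. This encoding is illustrative only.
def satisfies_dependence(team, x, y):
    return all(s[x] != t[x] or s[y] == t[y] for s in team for t in team)

team = [{"x": 0, "y": 1}, {"x": 0, "y": 1}, {"x": 1, "y": 2}]
assert satisfies_dependence(team, "x", "y")                      # y determined by x
assert not satisfies_dependence(team + [{"x": 0, "y": 5}], "x", "y")

# Downward closure: every subteam of a satisfying team also satisfies
# =(x, y) (here checked on initial segments), which is the closure
# property induced by the power set lifting.
assert all(satisfies_dependence(team[:i], "x", "y") for i in range(len(team)))
```

Note that no single assignment can witness or refute =(x, y) on its own; the atom is genuinely a property of the team.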
In almost all logical systems based on team semantics (with only one exception that I am aware of), this lifting operation is the power set operation, and as a result the set of teams satisfying a formula is closed downwards. This is often taken as a basic and foundational principle of team semantics.
In this talk I will discuss this principle and present some ideas on why, or why not, the power set operation is the most natural lift operation. By using other lift operations we can get a more powerful semantics, but, it seems, also a more complicated one.
References:
- Engström, F. (2012) “Generalized quantifiers in dependence logic”
- Nurmi, V. (2009) “Dependence Logic: Investigations into Higher-Order Semantics Defined on Teams”
- Väänänen, J. (2007) “Dependence logic: A new approach to independence friendly logic”
-
Semiring semantics of logical formulae generalises the classical Boolean semantics by permitting multiple truth values from certain semirings. In the classical Boolean semantics, a model of a formula assigns to each (instantiated) literal a Boolean value. K-interpretations, for a semiring K, generalize this by assigning to each such literal a value from K. We then interpret 0 as false and all other semiring values as nuances of true, which provide additional information, depending on the semiring. For example, the Boolean semiring over {0,1} corresponds to classical semantics, the Viterbi-semiring can model confidence scores, the tropical semiring is used for cost analysis, and min-max-semirings (A, max, min, a, b) for a totally ordered set (A,<) can model different access levels. Most importantly, semirings of polynomials, such as N[X], allow us to track certain literals by mapping them to different indeterminates. The overall value of the formula is then a polynomial that describes precisely what combinations of literals prove the truth of the formula.
This can also be used for strategy analysis in games. Evaluating formulae that define winning regions in a given game in an appropriate semiring of polynomials provides not only the Boolean information on who wins, but also tells us how they win and which strategies they might use. For this approach, the case of Büchi games is of special interest, not only due to their practical importance, but also because it is the simplest case where the logical definition of the winning region involves a genuine alternation of a greatest and a least fixed point. We show that, in a precise sense, semiring semantics provide information about all absorption-dominant strategies – strategies that win with minimal effort, and we discuss how these relate to positional and the more general persistent strategies. This information enables further applications such as game synthesis or determining minimal modifications to the game needed to change its outcome.
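The K-interpretation idea from the first paragraph can be sketched in a few lines: semiring addition interprets disjunction and semiring multiplication interprets conjunction. The encoding below is my own illustration, not the speakers' formalism:

```python
# A semiring is a carrier with two operations (plus for "or", times for
# "and") and their units. Only the positive fragment is treated here.
class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

boolean  = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
tropical = Semiring(min, lambda a, b: a + b, float("inf"), 0)  # cost analysis

def value(formula, interp, K):
    """Evaluate ('lit', name) / ('or', f, g) / ('and', f, g) in semiring K,
    where interp maps literal names to K-values."""
    tag = formula[0]
    if tag == "lit":
        return interp[formula[1]]
    f, g = value(formula[1], interp, K), value(formula[2], interp, K)
    return K.plus(f, g) if tag == "or" else K.times(f, g)

phi = ("and", ("lit", "p"), ("or", ("lit", "q"), ("lit", "r")))
print(value(phi, {"p": True, "q": False, "r": True}, boolean))  # True
print(value(phi, {"p": 2, "q": 5, "r": 3}, tropical))           # 2 + min(5, 3) = 5
```

The same evaluator run over a polynomial semiring would return, instead of a single number, the combinations of literals that establish the formula, which is exactly the tracking role of N[X] described above.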
-
Sets of uniqueness and their properties are traditionally investigated in Harmonic Analysis. The study of such sets has a long and illustrious history, witnessing fruitful interdisciplinary interactions, often enriching the subject with a vibrant fertility for crossover of ideas. In this talk, we set up the modern framework used to study such sets with particular emphasis on some (classical) descriptive set-theoretic aspects. We present some results concerning the family of closed sets of uniqueness of a locally compact Polish group - more concretely, we will talk about their complexity and the (non-)existence of a Borel basis.
-
We employ the lens provided by formal truth theory to study axiomatizations of PA (Peano Arithmetic). More specifically, let EA (Elementary Arithmetic) be the fragment I∆0 + Exp of PA, and CT−[EA] be the extension of EA by the commonly studied axioms of compositional truth CT−. The truth theory delivers a natural preorder on the set of axiomatizations: an axiomatization A is greater than or equal to an axiomatization B if and only if, over CT−[EA], the assertion “All axioms from A are true” implies “All axioms from B are true”. Our focus is dominantly on two types of axiomatizations, namely: (1) schematic axiomatizations that are deductively equivalent to PA, and (2) axiomatizations that are proof-theoretically equivalent to the canonical axiomatization of PA.
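Spelled out in symbols (my notation, not the speaker's; T is the truth predicate of CT−), the preorder reads:

```latex
% A is greater than or equal to B in the truth-theoretic preorder:
A \succeq B
  \iff
  \mathrm{CT}^{-}[\mathrm{EA}] \vdash
    \forall \varphi\,\bigl(\varphi \in A \to T(\varphi)\bigr)
    \rightarrow
    \forall \varphi\,\bigl(\varphi \in B \to T(\varphi)\bigr)
```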
The first part of the talk focuses on the axiomatizations of type (1). We adapt the argument by Visser and Pakhomov (“On a question of Krajewski’s”, JSL 84(1), 2019) to show that there is no weakest axiomatization of this form (even if the axiomatizations are ordered by relative interpretability). Secondly, we sketch an argument showing that such axiomatizations with the given ordering form a countably universal partial order. This part is based on our joint work with Ali Enayat, available at https://www.researchgate.net/publication/353496287_Axiomatizations_of_Peano_Arithmetic_A_truth-theoretic_view
In the second part of the talk we discuss axiomatizations of type (2). We narrow our attention to such axiomatizations A for which CT−[EA] + “All axioms from A are true” is a conservative extension of PA. We explain why such theories have very diverse metamathematical properties (e.g. large speed-up). To illustrate our methods we show that, with the given ordering, such axiomatizations do not form a lattice. This is work in progress.
-
Cyclic (or circular) proofs are now a common technique for demonstrating metalogical properties of systems incorporating (co)induction, including modal logics, predicate logics, type systems and algebras. Inspired by automata theory, cyclic proofs encode a form of self-dependency of which induction/recursion comprise special cases. An overarching question of the area, the so-called ‘Brotherston-Simpson conjecture’, asks to what extent the converse holds.
In this talk I will discuss a line of work that attempts to understand the expressivity of circular reasoning via forms of proof theoretic strength. Namely, I address predicate logic in the guise of first-order arithmetic, and type systems in the guise of higher-order primitive recursion, and establish a recurring theme: circular reasoning buys precisely one level of ‘abstraction’ over inductive reasoning.
This talk will be based on the following works:
-
Fitch-style modal lambda calculi (Borghuis 1994; Clouston 2018) provide a solution to programming necessity modalities in a typed lambda calculus by extending the typing context with a delimiting operator (denoted by a lock). The addition of locks simplifies the formulation of typing rules for calculi that incorporate different modal axioms, but obscures weakening and substitution, and requires tedious and seemingly ad hoc syntactic lemmas to prove normalization.
In this work, we take a semantic approach to normalization, called Normalization by Evaluation (NbE) (Berger and Schwichtenberg 1991), by leveraging the possible world semantics of Fitch-style calculi. We show that NbE models can be constructed for calculi that incorporate the K, T and 4 axioms, with suitable instantiations of the frames in their possible world semantics. In addition to existing results that handle beta reduction (or computational rules), our work also considers eta expansion (or extensional equality rules).
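To convey the general NbE recipe that the modal constructions refine, here is a minimal sketch for the plain (non-modal) untyped fragment: evaluate terms into a semantic domain of closures, then read values back as beta-normal syntax. The encoding is my own illustration; the Fitch-style calculi in the talk enrich both stages with locks and possible worlds:

```python
# Normalization by Evaluation for a tiny lambda calculus.
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
import itertools

fresh = itertools.count()  # supply of fresh variable names for readback

def evaluate(term, env):
    """Interpret syntax in a semantic domain: lambdas become Python
    closures; applications of non-functions stay as neutral syntax."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, a = term
    fv, av = evaluate(f, env), evaluate(a, env)
    return fv(av) if callable(fv) else ("app", fv, reify(av))

def reify(value):
    """Read a semantic value back into beta-normal syntax."""
    if callable(value):
        x = f"x{next(fresh)}"
        return ("lam", x, reify(value(("var", x))))
    return value

def normalize(term):
    return reify(evaluate(term, {}))

# The redex (\x. x) (\y. y) normalizes to the identity:
identity = ("lam", "x", ("var", "x"))
print(normalize(("app", identity, identity)))  # -> ('lam', 'x0', ('var', 'x0'))
```

The talk's contribution can be read against this template: the semantic domain is replaced by a possible world model whose frame conditions match the modal axioms (K, T, 4), and readback additionally handles the lock-delimited contexts and eta expansion.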
References:
- Borghuis, V.A.J. (1994). “Coming to terms with modal logic: on the interpretation of modalities in typed lambda-calculus”.
- Clouston, Ranald (2018). “Fitch-Style Modal Lambda Calculi”.
- Berger, Ulrich and Helmut Schwichtenberg (1991). “An Inverse of the Evaluation Functional for Typed lambda-calculus”.
-
In his address to the International Congress of Mathematics in Vancouver, 1974, Harvey Friedman launched a program where the aim would be to find the minimal set of axioms needed to prove theorems of ordinary mathematics. More often than not, it turned out that the axioms would then be provable from the theorems, and the subject was named Reverse Mathematics. In this talk we will survey some of the philosophy behind, and results of, the early reverse mathematics, based on the formalisation of mathematics within second order number theory.
In 2005, Ulrich Kohlenbach introduced higher order reverse mathematics, and we give a brief explanation of the what and why of Kohlenbach’s approach. In an ongoing project with Sam Sanders we have studied the strength of classical theorems of late 19th/early 20th century mathematics, partly within Kohlenbach’s formal typed theory and partly by their, in a generalised sense, constructive content. In the final part of the talk I will give some examples of results from this project, mainly from the perspective of higher order computability theory. No prior knowledge of higher order computability theory is needed.
-
In its modern formulation, Hilbert’s tenth problem asks to find a general algorithm which decides the solvability of Diophantine equations. While this problem was shown to be unsolvable due to the combined work of Davis, Putnam, Robinson and Matiyasevich, similar questions can be posed over domains other than the integers. Among the most important open questions in this area of research is whether a version of Hilbert’s tenth problem for Fp((t)), the field of formal Laurent series over the finite field Fp, is solvable or not.
The fact that this remains open stands in stark contrast to the fact that the first order theory of the much similar object Qp, the field of p-adic numbers, is completely understood thanks to the work by Ax, Kochen and, independently, Ershov. In light of this dichotomy, I will present new decidability results obtained during my doctoral research on extensions of Fp((t)). This work is motivated by recent progress on Hilbert’s tenth problem for Fp((t)) by Anscombe and Fehm and builds on previous decidability results by Kuhlmann.
-
When is a property of a model a logical property? According to the so-called Tarski-Sher criterion this is the case when the property is preserved by isomorphisms. We relate this to the model-theoretic characteristics of abstract logics in which the model class is definable, resulting in a graded concept of logicality (in the terminology of Sagi’s paper “Logicality and meaning”).
We consider which characteristics of logics, such as variants of the Löwenheim-Skolem Theorem, Completeness Theorem, and absoluteness, are relevant from the logicality point of view, continuing earlier work by Bonnay, Feferman, and Sagi. We suggest that a logic is the more logical the closer it is to first order logic, and offer a refinement of the result of McGee that logical properties of models can be expressed in L∞∞ if the expression is allowed to depend on the cardinality of the model, based on replacing L∞∞ by a “tamer” logic. This is joint work with Jouko Väänänen.
-
Almost exactly a thousand years ago a teenager known today as Avicenna lived in what is now Uzbekistan. He made a resolution to teach himself Aristotelian logic, armed with an Arabic translation of Aristotle and a century-old Arabic textbook of logic. A couple of years later, around his eighteenth birthday, he wrote a brief report of what he had learned.
Six months ago I started to examine this report - I suspect I am the first logician to do that. It contains many surprising things. Besides introducing some new ideas that readers of Avicenna know from his later works, it also identifies some specific points of modal logic where Avicenna was sure that Aristotle had made a mistake. People had criticised Aristotle’s logic before, but not at these points. At first Avicenna had no clear understanding of how to do modal logic, and it took him another thirty years to justify all the criticisms of Aristotle in his report. But meanwhile he discovered for himself how to defend a new logic by building new foundations. I think the logic itself is interesting, but the talk will concentrate on another aspect. These recent discoveries mean that Avicenna is the earliest known logician who creates new logics and tells us what he is doing, and why, at each stage along the way.
-
I will present work in progress (together with Graham Leigh) on a novel proof of the conservativity of the intuitionistic fix-point theory over Heyting arithmetic (HA), originally proved in full generality by Arai [1].
We make use of the work of van den Berg and van Slooten [2] on realizability in Heyting arithmetic over Beeson’s logic of partial terms (HAP). Let IF be the extension of Heyting arithmetic by fix-points, denoted \widehat{ID}^i_1 in the literature. The proof is divided into four parts: First we extend the inclusion of HA into HAP to IF into a similar theory IFP in the logic of partial terms. We then show that every theorem of this theory provably has a realizer in the theory IFP(Λ) of fix-points for almost negative operator forms only. Constructing a hierarchy stratifying the class of almost negative formulae and partial truth predicates for this hierarchy, we use Gödel’s diagonal lemma to show IFP(Λ) is interpretable in HAP. Finally we use the result of [2] that adding the schema of “self-realizability” for arithmetic formulae to HAP is conservative over HA. The result generalises the work presented at my half-time seminar 2020-08-28.
References
[1] Toshiyasu Arai. Quick cut-elimination for strictly positive cuts. Annals of Pure and Applied Logic, 162(10):807–815, 2011.
[2] Benno van den Berg and Lotte van Slooten. Arithmetical conservation results. Indagationes Mathematicae, 29:260–275, 2018.
-
Focusing is a general technique for syntactically compartmentalising the non-deterministic choices in a proof system, which not only improves proof search but also has the representational benefit of distilling sequent proofs into synthetic normal forms.
However, since focusing was traditionally specified as a restriction of the sequent calculus, the technique had not been transferred to logics that lack a (shallow) sequent presentation, as is the case for some modal or substructural logics.
With K. Chaudhuri and L. Straßburger, we extended the focusing technique to nested sequents, a generalisation of ordinary sequents, which allows us to capture all the logics of the classical and intuitionistic S5 cube in a modular fashion. This relied on an adequate polarisation of the syntax and an internal cut-elimination procedure for the focused system which in turn is used to show its completeness.
Recently, with A. Gheorghiu, we applied a similar method to the logic of Bunched Implications (BI), a substructural logic that freely combines intuitionistic logic and multiplicative linear logic. For this we had first to reformulate the traditional bunched calculus for BI using nested sequents, followed again by a polarised and focused variant that we show is sound and complete via a cut-elimination argument.
-
In the traditional, so-called Tarskian truth definition, the semantics of first order logic is defined with respect to an assignment of values to the free variables. A richer family of semantic concepts can be modelled if semantics is defined with respect to a set (a “team”) of such assignments. This is called team semantics.
Examples of semantic concepts available in team semantics but not in traditional Tarskian semantics are the concepts of dependence and independence. Dependence logic is an extension of first-order logic based on team semantics. It has emerged that teams appear naturally in several areas of sciences and humanities, which has made it possible to apply dependence logic and its variants to these areas. In my talk I will give a quick introduction to the basic ideas of team semantics and dependence logic as well as an overview of some new developments, such as quantitative analysis of team properties, a framework for a multiverse approach to set theory, and probabilistic independence logic inspired by the foundations of quantum mechanics.
-
In the field of axiomatic theories of truth, conservativity properties of theories are much investigated. Conservativity can be used to argue that, despite the well-known undefinability of truth, there is a sense in which a primitive truth predicate can be reduced to the resources of an underlying mathematical theory that provides basic syntactic structure to truth ascriptions.
Conservativity is typically proved model-theoretically, or proof-theoretically via the elimination of cuts on formulae containing truth (Tr-cuts). The original Tr-cut-elimination argument for the theory of Tarskian, compositional truth CT[B] by Halbach is not conclusive. This strategy has been corrected by Graham Leigh: Leigh supplemented Halbach’s strategy with the machinery of approximations (introduced by Kotlarski, Krajewski and Lachlan in the context of their M-Logic). In the talk we investigate a different, and arguably simpler way of supplementing Halbach’s original strategy. It is based on an adaptation of the Takeuti/Buss free-cut elimination strategy for first-order logic to the richer truth-theoretic context. If successful, the strategy promises to generalize to the type-free setting in a straightforward way. This is joint work with Luca Castaldo.
-
An account of inferences should take into account not only inferences from established premisses but also inferences made under assumptions. This makes it necessary to consider arguments, chains of inferences in which assumptions and variables may become bound.
An argument is valid when all its inferences are valid, and it then amounts to a proof in case it has no unbound assumptions or variables. The validity of an inference – not to be confused with the conclusion being a logical consequence of the premisses – seems in turn best explained in terms of proofs. This means that the concepts of valid inference and valid argument depend on each other and cannot be defined independently but have to be described by principles that state how they are related. A number of such principles will be proposed. It is conjectured that inferences that can be expressed in the language of first order intuitionistic predicate logic and are implied to be valid by these principles are all provable in that logic.
-
Conditional perfection is the phenomenon in which conditionals are strengthened to biconditionals. In some contexts, “If A, B” is understood as if it meant “A if and only if B.” I’ll present and discuss a series of experiments designed to test one of the most promising pragmatic accounts of conditional perfection.
This is the idea that conditional perfection is a form of exhaustification, triggered by a question that the conditional answers. If a speaker is asked how B comes about, then the answer “If A, B” is interpreted exhaustively as meaning that A is the only way to bring about B. Hence, “A if and only if B.” The evidence suggests that conditional perfection is a form of exhaustification, but not that it is triggered by a relationship to a salient question. (This is joint work with Fabrizio Cariani.)
-
Project presentations
-
A cyclic proof is a possibly infinite, but regular, derivation tree in which every infinite path satisfies a certain soundness criterion, the form of which depends on the logic under study. Circular and, more generally, non-wellfounded derivations are not traditionally regarded as formal proofs but merely as an intermediate machinery in proof-theoretic investigations.
They are, however, an important alternative to finitary proofs and in the last decade have helped break some important barriers in the proof theory of logics formalising inductive and co-inductive concepts. In this talk we focus on cyclic proofs for modal logics, ranging from Gödel-Löb logic to more expressive languages such as the modal mu-calculus, and reflect on how they can contribute to the development of the theory of fixed point modal logic.