Functional Programming Laboratory
Functional Programming Lab Seminars

The Functional Programming Laboratory Seminar programme for the 2013-14 academic year is being administered by Christian Sattler (cvs at cs.nott....). Please get in touch with him if you would like to present a talk or have any general comments!

Forthcoming seminars

### Friday 28th February 2014, C1, 09.00

Ralf Hinze (Oxford)

Effect Lego

There are two categorical approaches to universal algebra: monads and Lawvere theories. Both of them have been useful for structuring semantic descriptions, for integrating effects into pure languages, and for composing programs. In this talk I'll take a closer look at the second approach, review its main ideas, and revisit some old papers on monad transformers and combinator libraries. The revised programs and libraries are of pleasing simplicity. Furthermore, the approach seems to hold many promises for teaching principles of programming languages.

Previous seminars

### Friday 14th February 2014, C60, 16.00

Neel Krishnaswami (Birmingham)

Complete Bidirectional Type Inference for GADTs

GADTs, or generalized algebraic data types, are a feature found in functional languages like OCaml and Haskell. Essentially, they permit indexing data types with additional information, so that (for instance) lists may be indexed with their length, or term syntaxes with the type they will evaluate to.
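As a small illustration of this kind of indexing (my example, not the speaker's), here is a length-indexed list in Haskell, where the type rules out taking the head of an empty list:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Lists indexed by their length.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The index guarantees a non-empty argument, so no Maybe is needed
-- and the missing Nil case is provably unreachable.
vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x

main :: IO ()
main = print (vhead (Cons 'a' (Cons 'b' Nil)))
```

The same idea applies to term syntaxes indexed by the type they evaluate to.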

In this talk, I'll show how these types have a logical interpretation, in terms of a type-theoretic interpretation of the equality type originally introduced by Jean-Yves Girard and Peter Schroeder-Heister to model equality in first-order logic. This equality has a different (and stronger) elimination form than the equality type of Martin-Löf type theory, and leads to a different proof theory -- in particular, it is what justifies the practice of omitting needless cases from pattern-match statements.

I'll show how this leads to a type inference algorithm whose structure arises from the proof-theoretic notion of focalization. This makes it possible for us to not only give an efficient algorithm, but also to prove it sound and complete with respect to a natural declarative semantics. Along the way, we'll meet new versions of some familiar friends, such as principal types.

This is ongoing work with Joshua Dunfield.

### Friday 7th February 2014, C60, 16.00

Idris: General Purpose Programming with Dependent Types

Idris is an experimental functional programming language with full spectrum dependent types, meaning that types can be predicated on any value. Its syntax is influenced by Haskell. As well as full dependent types it supports records, type classes, tactic based theorem proving, totality checking, and an optimising compiler with a foreign function interface. One of the goals of the Idris project is to bring type-based program verification techniques to programming practitioners while still supporting efficient systems programming via an optimising compiler and interaction with external libraries.

In this talk I will introduce dependently typed programming using Idris, and demonstrate its features using several examples including verified compression by run-length encoding, management of state and side-effects and resource correctness proofs.
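For readers unfamiliar with the run-length encoding example, here is a plain (and unverified) Haskell sketch of the underlying functions; the point of the Idris version is that the round-trip property shown in `main` can be stated and proved in the type itself:

```haskell
import Data.List (group)

-- Collapse runs of equal elements into (count, element) pairs.
encode :: Eq a => [a] -> [(Int, a)]
encode = map (\g -> (length g, head g)) . group

-- Expand the pairs back into the original list.
decode :: [(Int, a)] -> [a]
decode = concatMap (\(n, x) -> replicate n x)

main :: IO ()
main = print (decode (encode "aaabcc") == "aaabcc")
```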

### Friday 6th December 2013, C60, 12.00

Nicola Gambino (Leeds)

Recent advances and open problems in Homotopy Type Theory and Univalent Foundations

Homotopy Type Theory and Voevodsky's Univalent Foundations of Mathematics programme have attracted a significant amount of interest and activity in recent years. A recent conference on these topics, held at the Centre de Recerca Matematica in Barcelona, provided a good overview of the state of the art on the subject. I will report on recent advances presented at the CRM conference and discuss some open problems in the field.

### Friday 29th November 2013, C60, 13.00

Richard Mortier (Nottingham)

Introducing Mirage

I will talk about Mirage, a framework for building what we have called unikernels: compact, efficient single-purpose virtual machines for building cloud services. I will elaborate our goals, discuss our design and some implementation details, present some performance results, and give a status update. Mirage was described in the ASPLOS'13 paper "Unikernels: Library Operating Systems for the Cloud".

### Friday 8th February 2013, C01, 16.00

Richard Bird (Oxford)

Some calculations with graphs and arrays

An informal talk, suitable for anyone who has done a first course in functional programming,  illustrating how to calculate a simple program using the laws of functional programming.  During the talk the audience will be given a chance to think about the problem and to come up with their own solution.
I will award a £20 prize for the best solution.

### Monday 21st January 2013, LT1, 12.00

Lennart Augustsson (Standard Chartered Bank)

Making Embedded Domain-Specific Languages Fly

Domain-specific languages (DSLs) enable domain experts themselves to realise the full potential of computers for solving problems in their domains without the time and cost overhead of involving programmers to develop tailor-made applications. The scope for deploying DSLs is as large and diverse as the domains where computers are or could be used to solve problems, and, with potentially huge payoffs, this is reflected in an upsurge of interest in what also has been termed Language-Oriented Programming. However, developing a high-quality, performant DSL is itself a costly proposition. In this talk, Lennart Augustsson will show how modern, typed, functional programming language technology, exemplified by Haskell, in combination with modern compiler technology (LLVM), can ease this task significantly by implementing a toy DSL as an embedding in Haskell in three steps: first, getting a typed DSL embedded in Haskell; then, executing such a language using an interpreter; finally, using LLVM to generate very efficient code for the DSL.

About the speaker: Lennart Augustsson is a computer scientist currently employed at Standard Chartered Bank in London. He has previously worked as a lecturer at Chalmers University of Technology as well as for a number of companies including Sandburst and Credit Suisse. His field of research is functional programming and implementations of functional languages. During his career he has done many different things: he has written about four Haskell compilers, designed and implemented a hardware design language (Bluespec), written USB device drivers, won the International Obfuscated C Code Contest three times, and designed packet-routing hardware. He also likes to watch total eclipses of the sun.

### Monday 3rd December 2012, C01 (Computer Science), 16.00

Eddy Westbrook (Rice)

Approximate Programming

Real-number programs are at the core of much of computer science. Such programs are almost universally implemented using finite-precision approximations, such as fixed- or floating-point numbers, for reasons of both computability and efficiency. Approximation does not have to stop with fixed- or floating-point numbers: recent research has shown that additional approximations --- such as omitting loop iterations or using less precise implementations of certain core functions --- can greatly improve performance while still maintaining reasonable error bounds in many situations. A key difficulty in using approximations, however, is verifying approximated programs. This requires proving that an approximated program is quantifiably correct with respect to the original, a more complex proof obligation than traditional program correctness. In this talk, a new semantics for relating programs and their approximations is given, which can capture all of these techniques. Ongoing work on approximate program synthesis, where approximate programs and their errors are derived from real-number specifications, is also discussed.

### Tuesday 28th August 2012, C60 (Computer Science), 13.00

Neil Sculthorpe (Kansas)

The HERMIT In The Machine

The importance of reasoning about and refactoring programs is a central tenet of functional programming. Yet our compilers and development toolchains only provide rudimentary support for these tasks. To address this, we have implemented HERMIT, a toolkit enabling informal but systematic transformation of Haskell programs from inside the Glasgow Haskell Compiler's optimisation pipeline. With HERMIT, users can experiment with optimisations and equational reasoning, while the tedious heavy lifting of performing the actual transformations is done for them.

HERMIT provides a transformation API that can be used to build higher-level rewrite tools. In this talk I will describe one HERMIT application -- a read-eval-print shell for performing transformations. I will also demonstrate using this shell to prototype an optimisation on a specific example, and discuss our initial experiences and remaining challenges.

### Friday 30th March 2012, C60 (Computer Science), 16.00

Conor McBride (Strathclyde)

Totality versus Turing Completeness

It's a well known fact that a total programming language can't encode its own interpreter. Some advocates and most detractors of total programming note that it comes at the cost of Turing completeness, another well known fact. These well known facts are about as true as the fact that "Haskell can't handle I/O." I shall talk about ways in which total languages can model the risk of nontermination, and show you a method I have found useful, inspired by notions of algebraic effect. If time permits, I shall sketch an appropriate effect-checking type discipline and/or show how to automate the construction of Bove-Capretta domain predicates by recursive computation over the relevant free monad.
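One standard construction in this spirit (a hedged sketch of mine, not necessarily the method of the talk) is the delay monad: essentially the free monad on a single "take a step" operation, which lets a total language represent possibly nonterminating computations and run them with explicit fuel:

```haskell
-- A computation that either has an answer now, or needs another step.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= k = k a
  Later d >>= k = Later (d >>= k)

-- A total runner: find the answer within n steps or honestly fail.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a) = Just a
runFor n (Later d)
  | n > 0     = runFor (n - 1) d
  | otherwise = Nothing

-- The Collatz iteration, whose termination is famously unproven,
-- expressed without any termination claim.
collatz :: Integer -> Delay Integer
collatz 1 = Now 1
collatz n = Later (collatz (if even n then n `div` 2 else 3 * n + 1))

main :: IO ()
main = print (runFor 1000 (collatz 27))
```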

### Thursday 15th March 2012, A08 (Business School South), 13.00

Cezar Ionescu (Institute of Climate Impact Research, Potsdam)

Calculation and Communication

"Vulnerability" is an important concept in global change studies. At the Potsdam Institute for Climate Impact Research (PIK), we have come up with a formal model of vulnerability, in order to compare different vulnerability assessments (a task called meta-analysis), but also in order to better structure and test the software used for these assessments. I'd like to present some aspects of this formalisation which are hopefully relevant and/or entertaining.

### Friday 2nd March 2012, C60 (Computer Science), 15.00

George Giorgidze (Tubingen)

The second half of my talk will be about how we use monad comprehensions in Database Supported Haskell (DSH). DSH is a Haskell library that allows execution of idiomatic, list processing program fragments on a relational database management system (RDBMS). DSH can be used to allow existing Haskell programs to operate on large scale data (e.g., larger than the available heap) or to query existing database resident data with Haskell. In addition to the standard list processing combinators, database-executable program fragments can also be written using the monad comprehension notation.
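As a reminder of the notation (using the ordinary list monad as a stand-in for DSH's database-executable fragments, so this is an illustration rather than DSH code), GHC's MonadComprehensions extension lets a relational-style join be written as a comprehension:

```haskell
{-# LANGUAGE MonadComprehensions #-}

employees :: [(String, Int)]
employees = [("ada", 1), ("grace", 2), ("alan", 1)]

depts :: [(Int, String)]
depts = [(1, "compilers"), (2, "databases")]

-- An equi-join written in monad comprehension notation; in DSH the same
-- notation compiles to SQL rather than running over in-memory lists.
byDept :: [(String, String)]
byDept = [ (name, dname)
         | (name, d)   <- employees
         , (d', dname) <- depts
         , d == d' ]

main :: IO ()
main = print byDept
```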

At the end of my talk, if time permits, I will briefly describe our ongoing work that uses Haskell's new generic deriving mechanism to extend the database-supported program execution to all algebraic data types.

### Wednesday 11th January 2012, C60 (Computer Science), 15.00

Algebraic Foundations to Effect-Dependent Optimisations

We present a general theory of Gifford-style type and effect annotations, where effect annotations are sets of effects. Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand and their equational theory. The key observation is that annotation effects can be identified with operational symbols.

We develop an annotated imperative and functional language with a kind of computations for every effect set; it can be thought of as a sequential, annotated intermediate language. We develop a range of validated optimisations (i.e., equivalences), generalising many existing ones and adding new ones. We classify these optimisations as structural, algebraic or abstract: structural optimisations always hold, algebraic ones depend on the effect theory at hand; and abstract ones depend on the global nature of the theory (we give modularly-checkable sufficient conditions for their validity).

Joint work with Gordon Plotkin, to appear in POPL'12.

Slides.

### Friday 2nd December 2011, A06 (BS-South), 15.00

Danel Ahman (Cambridge)

When is a Container a Comonad?

Abbott, Altenkirch, Ghani and others have taught us that many parameterised datatypes (set functors) can be usefully analysed via container representations in terms of a set of shapes and a set of positions in each shape.

Our work builds on the observation that datatypes often carry additional structure that containers alone do not account for. We introduce directed containers to capture the common situation where every position in a data structure determines another data structure, informally the sub-data structure rooted by that position. Some natural examples are non-empty lists and node-labelled trees, and data structures with a designated position (zippers).

While containers denote set functors via a fully-faithful functor, directed containers interpret fully faithfully into comonads. But more is true: every comonad whose underlying functor is a container is represented by a directed container. In fact, directed containers are the same as containers that are comonads.
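The non-empty list example can be checked concretely in Haskell (a sketch of mine, not the paper's Agda development): each position determines the suffix rooted there, and the suffix map is exactly comonadic duplication:

```haskell
-- Non-empty lists: a head element and a (possibly empty) tail.
data NE a = a :| [a]

toList :: NE a -> [a]
toList (x :| xs) = x : xs

-- Counit: the element at the root position.
extract :: NE a -> a
extract (x :| _) = x

-- Comultiplication: each position is sent to the sub-structure
-- (the suffix) rooted at that position.
duplicate :: NE a -> NE (NE a)
duplicate ne@(_ :| xs) = ne :| suffixes xs
  where
    suffixes []       = []
    suffixes (y : ys) = (y :| ys) : suffixes ys

main :: IO ()
main = print (map toList (toList (duplicate (1 :| [2, 3 :: Int]))))
```

The comonad laws, e.g. that extract after duplicate is the identity, can then be checked by hand.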

We have formalised our development in the dependently typed programming language Agda.

### Friday 25th November 2011, C60 (Computer Science), 15.00-16.00

Per Kristian Lehre (Nottingham)

Runtime Analysis of Evolutionary Algorithms

Evolutionary algorithms and related meta-heuristics are widely applied to solve real-world optimisation problems. However, the theoretical foundations behind these heuristics are rather weak.

One of the fundamental problems is to derive upper and lower bounds on their expected runtime. The runtime is in this context the random variable describing the number of candidate solutions the evolutionary algorithm investigates before a solution of acceptable quality is found. Runtime analysis can provide insights into how the performance of evolutionary algorithms depends on the parameter-settings of the algorithm, and on the characteristics of the underlying optimisation problem.

This talk is an introduction to runtime analysis of evolutionary algorithms. The first part covers the basic definitions, and gives an overview of the results that have been obtained. The second part focusses on the so-called exploration-exploitation balance in evolutionary algorithms. We will see how this balance has an exponentially large impact on the runtime. Finally, I will show how I use OCaml to experiment with evolutionary algorithms.

### Friday 4th November 2011, C60 (Computer Science), 15.00-16.15

Ki Yung Ahn (Portland State)

Nax: a normalising language supporting recursive types of arbitrary polarity with a hierarchy of Mendler-style iteration/recursion combinators

This talk is about the design of a small normalising language, Nax, named after Nax P. Mendler. In Nax, you are allowed to introduce arbitrary recursive types and eliminate them by principled iteration/recursion combinators in the style of Mendler. There is a rich family of such combinators whose termination properties are well studied, some of which are our own contribution (see our ICFP'11 paper).

The motivation for this work is to explore the design space of languages aiming for both (functional) programming and (logical) reasoning. Most typed functional programming languages allow formation of arbitrary recursive types with no guarantee of normalisation. Many of the typed logical reasoning systems are guaranteed to normalise but limit the formation of recursive types. Our language Nax is an attempt to establish a core language with the good properties of both worlds.
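Mendler-style iteration can be sketched in Haskell (Nax itself is a separate language, so this is only an approximation of the idea): the algebra receives an abstract handle for recursive calls instead of matching on the fixed point directly, which is what makes termination checkable in principle even for types of arbitrary polarity:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Fixed points of a functor, with no positivity restriction.
newtype Mu f = In (f (Mu f))

-- Mendler-style iteration: the algebra phi gets an abstract
-- recursive call of type r -> a, for an unknown r, so it can only
-- apply it to subterms handed to it by the functor.
mcata :: (forall r. (r -> a) -> f r -> a) -> Mu f -> a
mcata phi (In t) = phi (mcata phi) t

-- Example: natural numbers as a fixed point.
data NatF r = ZeroF | SucF r

zero :: Mu NatF
zero = In ZeroF

suc :: Mu NatF -> Mu NatF
suc = In . SucF

toInt :: Mu NatF -> Int
toInt = mcata (\rec n -> case n of
                           ZeroF  -> 0
                           SucF r -> 1 + rec r)

main :: IO ()
main = print (toInt (suc (suc (suc zero))))
```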

### Wednesday 12th January 2011, (Dearing Building B85), 11.00

Mirage: a multi-scale, functional operating system

Applications run on all kinds of environments these days: multicore desktops, virtual cloud infrastructures, smart-phones, and web browsers. These diverse environments make it worth rethinking the long-term future of our software stacks; do we really want to continue bundling gigabytes of general-purpose OS software with every single cloud image? Is there any point holding onto decades-old interfaces such as POSIX any more? How do advances in programming languages fit into the way we build operating systems?

I will introduce Mirage, a new operating system built in the statically type-safe OCaml functional language. Mirage compiles high-level functional source code directly into a variety of targets: microkernels that run directly on the Xen hypervisor, Javascript for web browsers, or embedded devices and general-purpose operating systems.

Mirage provides a consistent, simple programming API across all of these diverse backends, which makes it a powerful foundation for constructing safe, complex distributed systems across a heterogeneous set of resources.

### Friday 12th November 2010, C60, 15.00-16.00

Fredrik Nordvall Forsberg (Swansea)

Interpreting inductive-inductive definitions as indexed inductive definitions

Induction is a powerful and important principle of definition, especially so in dependent type theory and constructive mathematics. Induction-induction (named in reference to Dybjer and Setzer's induction-recursion) is an induction principle which gives the possibility to simultaneously introduce a set A together with an A-indexed set B (i.e. for every a : A, B(a) is a set). Both A and B(a) are inductively defined, and the constructors for A can also refer to B and vice versa.

In recent work, we have formalised inductive-inductive definitions and thus made meta-mathematical analysis of the theory possible. In this talk, I will sketch the first result of this kind. I will show that the theory of inductive-inductive definitions can be interpreted in the theory of indexed inductive definitions. In particular, this shows that both theories have the same proof-theoretical strength.

Slides.

### Wednesday 3rd November 2010, C60, 15.30-16.30

Maximally Dense Segments

Given a list of numbers, the task is to find a consecutive segment whose average (sum divided by length) is maximum. This "maximally dense segment" appears to be an easy generalisation of the well-known maximum segment sum problem, but its solution turns out to be much harder than it looks. We extend and illuminate some recent work on this problem with a formal development of a linear-time online algorithm. In this talk, I will mainly be giving an intuition of the problem and its algorithm using diagrams, and explain why this problem is hard.
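For contrast, the maximum segment sum problem that the talk generalises has a classic one-pass, linear-time solution (a standard textbook scheme, not the talk's algorithm); no comparably simple scan is available for maximal density:

```haskell
-- Maximum segment sum in one pass: scanl tracks the best sum of a
-- segment ending at the current element (clipped at 0, i.e. allowing
-- the empty segment), and the answer is the best of these.
mss :: [Int] -> Int
mss = maximum . scanl step 0
  where
    step best x = max 0 (best + x)

main :: IO ()
main = print (mss [31, -41, 59, 26, -53, 58, 97, -93, -23, 84])
```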

Slides.

### Tuesday 21st September 2010, C1, 15.00-16.00

Jonas Karlsson (Google)

A New Way of Thinking: Infinite Scalability and Rapid Scale Development (a bigger picture for NoSQL?)

Abstract

In this talk Jonas takes a retrospective look at the scalability work he has done at Google and in previous research. The focus is on how successful cloud applications are enabled by designing for "infinite scalability". The talk draws on NoSQL system design and implementation experience, and on insights gained by enabling and reviewing the scalability of numerous application designs. Specifically, necessary principles for designing successful scalable servers are discussed. At the end, some ideas, maybe controversial, will be discussed regarding programming languages, environments, and what the next stepping stone should be.

### Friday 11th June 2010, C60, 15.00-16.00

Valeria de Paiva (Cuil, Inc.)

Fibrational Versions of Dialectica Categories

Abstract

Some twenty years ago I wrote a thesis on Dialectica Categories, a categorical approach to Gödel's Dialectica Interpretation, which was published in 1958. Gödel's goal was to show the consistency of higher-order arithmetic, and his Dialectica Interpretation has been a fundamental tool in proof theory since its inception (cf. Avigad and Feferman, 1999). Recently several papers have been written about extensions and reformulations of that categorical work. In particular, in 2008 Bodil Biering defended a PhD thesis on a complete reformulation, dealing with a first-order version of the interpretation. In this talk I want to describe Biering's reformulation and its rationale. Time permitting, I would like to discuss other aspects of her thesis that, in my opinion, deserve further investigation.

### Friday 4th June 2010, C60, 15.00-16.00

Alexander Green (University of Nottingham)

The Quantum IO Monad: towards a formally verified functional quantum programming language

Abstract

The field of quantum computation provides us with a new model of computation that seems to challenge the extended Church-Turing thesis. Quantum computers make use of the strange phenomena that appear in quantum mechanics, and various algorithms have been discovered for such computers that give an exponential speed-up over the fastest known classical solutions. Shor's algorithm, for example, can be used to factorise large numbers with a complexity that is only polynomial in the number of bits of the number being factored. Programming languages are required for these quantum computers that can explicitly make use of the quantum mechanical effects. Functional programming languages are in a unique position for this task, as they can make use of high-level structures to model effectful computation. The Quantum IO Monad is designed as a structure to make the effects inherent in quantum computation, specifically measurements, explicit. Aside from the effectful aspects of measurement, quantum computations must be of a unitary nature; a reimplementation of QIO in Agda has given us a stronger foundation in which the unitary structures are formally verified as such. This talk shall give an introduction to QIO, as well as the underlying model of quantum computation. I hope to cover both implementations, as well as a look at a few of the more famous quantum algorithms written in QIO.

### Friday 7th May 2010, C34 (Exchange), 15.00-16.00

Peter Schuster (University of Leeds)

Unique Paths as Formal Points

(joint work with Thierry Coquand)

Abstract

A point-free formulation of the König Lemma for trees with uniformly at most one infinite path allows for a constructive proof without unique choice. This builds upon related work by Ishihara and Schwichtenberg.

### Friday 5th March 2010, C60, 16.00-17.00

Iman Poernomo (King's College London)

Correct-by-construction model transformations from partially ordered specifications in Coq

Abstract

In this talk I will outline an approach to the synthesis of provably correct model transformations within the Coq theorem prover, an implementation of Coquand and Huet's Inductive Calculus of Constructions.

Model transformations within the Model Driven Architecture (MDA) paradigm are meant to enable designers to focus most of their work on providing a robust, architecturally sound platform-independent model. Then, given a particular platform and associated metamodel choice, a designer applies a (possibly off-the-shelf) transformation to automatically obtain a specific, refining model. In this way, MDA eliminates some of the effort spent on making implementation decisions at the specification level and, ideally, results in a better generated platform-specific model than that obtained from a manual process.

MDA is increasingly being taken up by industry and, as a consequence, transformations of interest are becoming more complex and mission-critical. It is therefore essential to have maximal levels of trust in the correctness of model transformations. The informality of MDA as it currently stands makes the approach untrustworthy and dangerous. If model transformations are incorrect, the MDA process can result in software of a lower quality than that produced by traditional software development. A small number of errors in the composition of a number of complex transformations can easily lead to an exponential number of errors in the code that are difficult to trace and debug.

I will show how the impredicative theory of Coq, together with its treatment of inductive and coinductive types, lends itself to the synthesis of a wide range of model transformations, and discuss the practical benefits and potential scalability of our approach by means of a case study taken from industry.

### Friday 26th February 2010, C60, 16.00-17.00

Makoto Hamana (Gunma University, Japan)

Initial Algebra Semantics for Cyclic Sharing Structures

Abstract

"Tree-like" structures -- trees involving cycles and sharing -- appear very often in logic and computer science. They are usually treated as graphs, because there do not seem to exist clear inductive structures. I propose a simple term syntax for cyclic sharing structures that admits structural induction and recursion principles. I show that the obtained syntax is directly usable in Haskell and Agda, just as ordinary data structures such as lists and trees are. To achieve this goal, I use a categorical approach to initial algebra semantics in a presheaf category. I will also try to relate this with traced categorical models.

### Friday 19th February 2010, C60, 16.00-17.00

Andrew Pitts (University of Cambridge)

Structural Recursion with Pure Local Names

Abstract

I will discuss a new recursion principle for inductive data modulo alpha-equivalence of bound names that makes use of Odersky-style, effect-free local names when recursing over bound names. It is inspired by the nominal sets notion of "alpha-structural recursion". The new approach provides a pure calculus of total functions that still adequately represents alpha-structural recursion while avoiding the need to verify freshness side-conditions in definitions and computations. It arises from a new semantics of Odersky-style local names using nominal sets.

Slides.

### Friday 12th February 2010, C60, 16.00-17.00

Ulrich Berger (University of Swansea)

Program extraction from proofs: induction and coinduction

Abstract

In this talk I give two examples of program extraction from proofs in a constructive theory of inductive and coinductive definitions. The first belongs to the realm of computable analysis. A coinductive predicate C_0 is defined characterising those real numbers that are constructively "approximable". From proofs of closure properties of C_0 one extracts implementations of arithmetic operations with respect to the signed-digit representation of real numbers. In the second example I show how to extract monadic parser combinators (Hutton and Meijer) from proofs that certain labelled transition systems are finitely branching. While in the first example coinduction is central, here induction features prominently because finiteness is an inductive concept. Both examples have in common that the data types the extracted programs operate on (infinite digit streams, higher-order functions) are not present in the (source) proofs, which reside in simple extensions of first-order logic. This indicates that the prospect of replacing programming by the activity of proving is not as daunting as it seems, and that program extraction might therefore become an attractive method for producing formally certified programs in the future.
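The monadic parser combinators referred to above look as follows when written directly in Haskell (a Hutton-Meijer-style rendering of mine); the talk's point is that such combinators can instead be extracted from finiteness proofs:

```haskell
-- A parser is a function from input to a list of (result, rest) pairs.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser (\s -> [ (f a, rest) | (a, rest) <- p s ])

instance Applicative Parser where
  pure a = Parser (\s -> [(a, s)])
  Parser pf <*> Parser pa =
    Parser (\s -> [ (f a, s'') | (f, s') <- pf s, (a, s'') <- pa s' ])

instance Monad Parser where
  Parser p >>= k =
    Parser (\s -> concat [ runParser (k a) s' | (a, s') <- p s ])

-- Consume one character, failing on empty input.
item :: Parser Char
item = Parser (\s -> case s of { [] -> []; (c : cs) -> [(c, cs)] })

-- Consume one character satisfying a predicate.
sat :: (Char -> Bool) -> Parser Char
sat pr = do { c <- item; if pr c then return c else Parser (const []) }

-- One-or-more and zero-or-more repetitions (greedy, first result only).
many1 :: Parser a -> Parser [a]
many1 p = do { x <- p; xs <- many0 p; return (x : xs) }

many0 :: Parser a -> Parser [a]
many0 p = Parser (\s -> case runParser (many1 p) s of
                          []      -> [([], s)]
                          (r : _) -> [r])

number :: Parser Int
number = fmap read (many1 (sat (`elem` "0123456789")))

main :: IO ()
main = print (runParser number "42abc")
```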
Slides.

### Friday 5th February 2010, C60, 16.00-17.00

Stephanie Weirich (University of Pennsylvania)

Dependent Types and Program Equivalence

Abstract

Joint work with Limin Jia, Jianzhou Zhao, and Vilhelm Sjöberg.

The definition of type equivalence is one of the most important design issues for any typed language. In dependently-typed languages, because terms appear in types, this definition must rely on a definition of term equivalence. In that case, decidability of type checking requires decidability for the term equivalence relation.

Almost all dependently-typed languages require this term equivalence relation to be decidable. Some, such as Coq, Epigram or Agda, do so by employing analyses to force all programs to terminate. Conversely, others, such as DML, ATS, Omega, or Haskell, allow nonterminating computation, but do not allow those terms to appear in types. In both cases, decidable type checking comes at a cost, in terms of complexity and expressiveness.

In contrast, the benefits to be gained by decidable type checking are modest. Termination analyses allow dependently typed programs to verify total correctness properties. However, decidable type checking is not a prerequisite for type safety---and, in this context, type safety implies a weak form of correctness. Furthermore, decidability does not imply tractability.

Therefore, we take a different approach: instead of a fixed, decidable notion of term equivalence, we parameterize our type system with an abstract relation that is not necessarily decidable. We then design a novel set of typing rules that require only weak properties of this abstract relation in the proofs of the preservation and progress lemmas. This design provides flexibility: we compare valid instantiations of term equivalence, which range from beta-equivalence, to contextual equivalence, to some exotic equivalences.

### Friday 29th January 2010, C60, 16.00-17.00

Georg Struth (University of Sheffield)

Automated Synthesis and Verification of Imperative Programs

Abstract

Joint work with Rudolf Berghammer (University of Kiel).

I will present a new approach for automating the synthesis (construction) and verification of imperative programs. Based on the standard methods of Dijkstra, Gries and Hoare, it supports a game of proof and refutation with automated theorem provers, model search tools and computer algebra systems, combined with algebraic theories that have been designed and optimised for automation. While the algebras and the proof technology can largely be hidden behind an interface, developers can use lightweight relation-based modelling languages and rely on an operational understanding of binary relations.

I will first review some of the algebras used in the approach and discuss the technology behind the interface. I will then illustrate the approach through the automated synthesis and verification of some classical algorithms, e.g., Warshall's transitive closure algorithm and Szpilrajn's algorithm for computing linear extensions of partial orders. If time permits, I will report on ongoing work on the integration of an automated theorem prover and the verification of simple functional programs in Agda.

### Friday 4th December 2009, C60, 15.00-16.00

James Chapman (Institute of Cybernetics, Tallinn)

In this talk I will describe recent work with Thorsten Altenkirch and Tarmo Uustalu on a generalisation of monads from being defined on a category to being defined on a functor. I will introduce a number of examples (vector spaces, well-scoped lambda terms, and arrows), generalise some related constructions from standard monad theory (adjunctions, Kleisli category, etc.), and relate the two notions of monad.

### Friday 6th November 2009, C60, 15.00-16.00

Krzysztof Kapulkin (University of Warsaw)

The classifying weak ω-category of a type theory

Abstract

Starting with [HS98] it has become clear that higher categories provide natural semantics for intensional Martin-Löf Type Theory. An ω-category has not only objects (0-cells) and morphisms (1-cells), but also morphisms between morphisms (2-cells) and so on. All this data is organized with various kinds of composition. However, for the purpose of interpreting Martin-Löf Type Theory we have to consider weak ω-categories, that is, those where composition is not strictly associative.

Roughly speaking, the semantics in higher categories can be defined by interpreting types as objects [[A]] and terms x : A ⊢ t : B as morphisms [[t]] : [[A]] → [[B]]. The higher cells are given by identity types: a term ρ : Id(t, t′) is interpreted as a 2-cell [[ρ]] : [[t]] ⇒ [[t′]], a term χ : Id(ρ, ρ′) as an appropriate 3-cell, and so on.

In my talk I would like to present the results of Lumsdaine [Lum08] and Garner and van den Berg [GvdB08], who proved that with the interpretation as above every type carries a weak ω-category structure. Finally, I want to sketch a proposal for the construction of the classifying weak ω-category of a type theory according to [AKL09].

This is joint work with Steve Awodey and Peter LeFanu Lumsdaine (Carnegie Mellon University).

References

[AKL09] Steve Awodey, Chris Kapulkin, and Peter LeFanu Lumsdaine, The classifying weak ω-category of a type theory, work in progress.

[GvdB08] Richard Garner and Benno van den Berg, Types are weak ω-groupoids, submitted, 2008, arXiv:0812.0298.

[HS98] Martin Hofmann and Thomas Streicher, The groupoid interpretation of type theory, in: Twenty-Five Years of Constructive Type Theory (Venice, 1995), Oxford Logic Guides, vol. 36, Oxford Univ. Press, New York, 1998, pp. 83–111.

Slides.

### Wednesday 23rd September 2009, C60, 10.00-11.00

Tarmo Uustalu (Institute of Cybernetics, Tallinn)

Abstract

In programming language theory we encounter some strikingly monad-like structures that fail to be monads seemingly only for the reason that the underlying functor is not an endofunctor. We propose a notion of relative monad lifting this restriction. A good part of the theory of monads carries over to the relative case. Under meaningful conditions, a relative monad is a monoid in a (generally non-endo-) functor category and extends to a (standard) monad. Some examples of relative monads are the syntaxes of the untyped and typed versions of lambda calculus over finite contexts, and Hughes's arrow types mathematized à la Jacobs and Heunen. (Joint work with Thorsten Altenkirch and James Chapman.)

### Friday 28th August 2009, C60, 15.00-16.00

Dan Licata (Carnegie Mellon University)

A Universe of Binding and Computation

Abstract

In this talk, I will describe a logical framework supporting datatypes that mix binding and computation, implemented as a universe in the dependently typed programming language Agda 2. We represent binding pronominally, using well-scoped de Bruijn indices, so that types can be used to reason about the scoping of variables. We equip our universe with datatype-generic implementations of weakening, substitution, exchange, contraction, and subordination-based strengthening, so that programmers need not reimplement these operations for each individual language they define. In our mixed, pronominal setting, weakening and substitution hold only under some conditions on types, but we show that these conditions can be discharged automatically in many cases. Additionally, I will show how to program normalization-by-evaluation for the untyped λ-calculus in our framework, demonstrating that we can express detailed invariants about variable usage in a program's type while still writing clean and clear code.
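Well-scoped de Bruijn indices can be sketched even in plain Haskell (a nested-datatype rendering of mine, much weaker than the Agda universe described above): the type parameter tracks the set of free variables, so scoping errors become type errors:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Terms whose free variables are drawn from the type v; under a
-- lambda, the variable set grows by one (Nothing names the new one).
data Term v
  = Var v
  | App (Term v) (Term v)
  | Lam (Term (Maybe v))
  deriving (Functor, Show, Eq)

-- Weakening is just fmap: a term over v is also a term over Maybe v.
weaken :: Term v -> Term (Maybe v)
weaken = fmap Just

-- Substitution; under a binder the substitution itself is weakened,
-- so variable capture is impossible by construction.
subst :: (v -> Term w) -> Term v -> Term w
subst s (Var v)   = s v
subst s (App f a) = App (subst s f) (subst s a)
subst s (Lam b)   = Lam (subst s' b)
  where
    s' Nothing  = Var Nothing
    s' (Just v) = weaken (s v)

-- Beta-reduce the body of a lambda against an argument.
beta :: Term (Maybe v) -> Term v -> Term v
beta body arg = subst s body
  where
    s Nothing  = arg
    s (Just v) = Var v

main :: IO ()
main = print (beta (Var Nothing) (Var "x"))  -- (\y -> y) applied to x
```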

Seminars in 2008/2009