Computing with Markov Categories
Tallinna Õpetajate Maja, 26 February 2025
The meeting will take place at the Tallinn Teachers' House (Tallinna Õpetajate Maja) in the Ambassador Room.
Programme
- 9:00 - 9:30 Welcome coffee
- 9:30 - 10:15 Robin Piedeleu, UCL
- Title: A Complete Axiomatisation of Equivalence for Discrete Probabilistic Programming
- Abstract: We introduce a sound and complete equational theory capturing equivalence of discrete probabilistic programs, that is, programs extended with primitives for Bernoulli distributions and conditioning, to model distributions over finite sets of events. To do so, we translate these programs into a graphical syntax of probabilistic circuits, formalised as string diagrams, the two-dimensional syntax of symmetric monoidal categories. We then prove a first completeness result for the equational theory of the conditioning-free fragment of our syntax. Finally, we extend this result to a complete equational theory for the entire language. Note that these developments are also of interest for probability theory in Markov categories: our first result gives a presentation by generators and equations of the category of Markov kernels, restricted to objects that are powers of the two-element set.
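As a rough illustration of the kind of equivalence such an equational theory is meant to capture (a Python sketch of our own, not material from the talk, whose syntax and semantics are diagrammatic), here are two Bernoulli-plus-conditioning programs with the same denotation, checked by exhaustive enumeration:

```python
from fractions import Fraction
from itertools import product

def bernoulli(p):
    """All outcomes of a coin flip together with their weights."""
    return [(True, p), (False, 1 - p)]

def run(program):
    """Exhaustively enumerate a discrete program.  The program yields
    (value, weight, keep) triples; conditioning drops the runs with
    keep == False, and the surviving weights are renormalised."""
    total, dist = Fraction(0), {}
    for value, weight, keep in program():
        if keep:
            dist[value] = dist.get(value, Fraction(0)) + weight
            total += weight
    return {v: w / total for v, w in dist.items()}

# Program 1: flip two fair coins, condition on at least one being heads,
# and return the first coin.
def prog1():
    for (x, px), (y, py) in product(bernoulli(Fraction(1, 2)), repeat=2):
        yield x, px * py, x or y

# Program 2: a single coin of bias 2/3, with no conditioning.
def prog2():
    for x, px in bernoulli(Fraction(2, 3)):
        yield x, px, True

# Both programs denote the distribution {True: 2/3, False: 1/3}.
assert run(prog1) == run(prog2)
```

The equational theory of the talk is intended to let one derive such equivalences syntactically, on string diagrams, rather than by enumeration.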
- 10:15 - 11:00 Niels Voorneveld, Cybernetica
- Title: Metrics to the Limit: Iteration in Markov Categories
- Abstract: This is work in progress. In cryptography, security can be proven relative to a security parameter, where raising it should make the risk of a security failure arbitrarily small. The security parameter specifies the length of encryption keys and messages, as well as the time complexity of both security protocols and adversarial capabilities. Starting with a Markov category enriched with a distance metric, we look at how to define a category of parametric polynomial computations over that category. Taking the security parameter to the limit allows one to define a notion of asymptotic equivalence, useful for proving privacy properties of cryptographic protocols.
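As background for the phrase "asymptotic equivalence", one standard cryptographic reading, relative to a security parameter n, is sketched below in LaTeX; the talk's actual enriched-categorical definitions may well differ, so this is orientation only.

```latex
% Background only (an assumed standard reading, not the talk's definition):
% a function of the security parameter is negligible when it is eventually
% below every inverse polynomial, and two parametric families of computations
% are asymptotically equivalent when their distance (in the metric
% enrichment) is negligible.
\varepsilon \colon \mathbb{N} \to \mathbb{R}_{\ge 0} \text{ is negligible}
  \iff \forall c > 0.\ \exists N.\ \forall n \ge N.\ \varepsilon(n) < n^{-c},
\qquad
(f_n)_n \approx (g_n)_n
  \iff \bigl(n \mapsto d(f_n, g_n)\bigr) \text{ is negligible.}
```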
- 11:00 - 11:30 Coffee break
- 11:30 - 12:15 Mario Román, Oxford
- Title: Partial Markov categories
- Abstract: Partial Markov categories are a synthetic probabilistic inference framework, blending Markov categories with Cartesian Restriction Categories. They use a string diagrammatic syntax—with a formal
correspondence to programs—to reason about continuous and discrete probability, decision problems (Monty Hall, Newcomb’s), observations, Bayes’ theorem, normalisation, and both Pearl’s and Jeffrey’s updates in purely abstract categorical terms. The talk is based on joint work with Elena Di Lavore, Bart Jacobs, and Paweł Sobociński ("Partial Markov Categories", https://arxiv.org/pdf/2502.03477; and "A Simple Formal Language for Probabilistic Decision Theory", https://arxiv.org/pdf/2410.10643).
- 12:15 - 13:00 Bart Jacobs, Radboud
- Title: Update rules of Pearl and Jeffrey
- Abstract: This presentation describes the two different rules for probabilistic updating, associated with Pearl and with Jeffrey. The difference emerges when updating with multiple data items that one wishes to learn from (update with). It will be sketched how Jeffrey's rule is used in a classical algorithm in machine learning, namely Expectation Maximisation.
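For readers unfamiliar with the two rules, the following Python sketch contrasts a common formulation of them on a toy noisy-test example; the prior, channel, and numbers are our own illustration, not the speaker's material, and the talk's precise setting may differ. Pearl's rule pulls the soft evidence back along the channel as a predicate and updates once; Jeffrey's rule conditions on each sharp outcome separately and then mixes the posteriors with the evidence weights.

```python
def normalise(d):
    """Rescale a weight function on a finite set to a probability distribution."""
    total = sum(d.values())
    return {x: w / total for x, w in d.items()}

def pearl(prior, channel, evidence):
    """Pearl's rule: pull the evidence back along the channel as a predicate
    on the domain, multiply it into the prior, and renormalise once."""
    likelihood = {x: sum(channel[x][y] * evidence[y] for y in evidence)
                  for x in prior}
    return normalise({x: prior[x] * likelihood[x] for x in prior})

def jeffrey(prior, channel, evidence):
    """Jeffrey's rule: condition on each sharp observation separately,
    then mix the resulting posteriors with the evidence weights."""
    posterior = {x: 0.0 for x in prior}
    for y, w in evidence.items():
        point = normalise({x: prior[x] * channel[x][y] for x in prior})
        for x in prior:
            posterior[x] += w * point[x]
    return posterior

# Toy example: a rare disease and a noisy test, with soft evidence on the test.
prior = {"disease": 0.05, "healthy": 0.95}
channel = {"disease": {"pos": 0.9, "neg": 0.1},
           "healthy": {"pos": 0.2, "neg": 0.8}}
evidence = {"pos": 0.8, "neg": 0.2}

print(pearl(prior, channel, evidence))    # disease ≈ 0.109
print(jeffrey(prior, channel, evidence))  # disease ≈ 0.154
```

The two posteriors disagree, illustrating that the rules genuinely differ once the evidence is not a single sharp observation.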
- 13:00 - 14:30 Lunch
- 14:30 - 15:15 Eigil Rischel, TalTech
- Title: Conditional expectations in partial Markov categories
- Abstract: Conditional distributions are very well-explained by the
theory of Markov categories (indeed, this is more or less the main
idea behind the notion of Markov category). Open a book on classical
probability theory, however, and you will find far more space devoted
to the notion of conditional *expectation*. There are a number of
reasons for this, generally down to ways in which the concept is
better behaved than conditional distributions. However, conditional
expectation can be somewhat mysterious, since one conditions not on an
event or the result of a function but on a sub-sigma-algebra, and
hence interpreting the theory can be quite difficult. I will show how
to make sense of these mysteries - in fact, how they entirely dissolve
when viewed from a suitably abstract point of view - as well as how to
develop the theory of conditional expectations in the abstract setting
of partial Markov categories.
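For orientation, a reminder of the classical notion being reinterpreted (a textbook statement, included here as background rather than as part of the abstract):

```latex
% Classical background: for a probability space (\Omega, \mathcal{F}, \mathbb{P}),
% a sub-sigma-algebra \mathcal{G} \subseteq \mathcal{F}, and an integrable random
% variable X, the conditional expectation \mathbb{E}[X \mid \mathcal{G}] is the
% (almost surely unique) \mathcal{G}-measurable random variable satisfying
\int_G \mathbb{E}[X \mid \mathcal{G}] \, d\mathbb{P}
  \;=\; \int_G X \, d\mathbb{P}
  \qquad \text{for all } G \in \mathcal{G}.
```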
- 15:15 - 16:00 Elena Di Lavore, Oxford
- Title: Preorder enrichment in partial Markov categories
- Abstract:
I will define a preorder enrichment on partial Markov categories and show that it generalises that of cartesian restriction categories and of cartesian bicategories of relations.
When the preorder has a least element, it gives a canonical choice for conditionals and normalisations.
- 16:00 - 16:30 Coffee break
- 16:30 - 17:15 Callum Reader, TalTech
- Title: Optimal Transport from Enriched Category Theory
- Abstract:
Optimal transport is a subdiscipline of applied metric measure theory that found its footing during the Second World War and the following Cold War. It is interested in answering a very simple question: how do we move things around as cheaply as possible? At its core is the earth-mover's distance, a metric structure on probability measures that tells us the 'least cost to rearrange one measure into another'.
In this talk we take Lawvere's perspective that metric spaces are enriched categories, and show that ideas from optimal transport arise naturally from analogues in category theory. In particular, we show that probability measures can be thought of as presheaves, and that the earth-mover's distance is given by the natural transformation object between these presheaves. We explore how this gives rise to generalisations of earth-mover's distance in non-symmetric settings.
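For reference, the classical quantity in question is stated below in LaTeX, using standard definitions that are not specific to the talk: the earth-mover's (Kantorovich) distance between probability measures on a metric space, together with its Kantorovich-Rubinstein dual form, which holds under mild conditions.

```latex
% Classical definitions (background, not the talk's material): for
% probability measures \mu, \nu on a metric space (X, d),
W(\mu, \nu)
  \;=\; \inf_{\pi \in \Pi(\mu, \nu)} \int_{X \times X} d(x, y) \, d\pi(x, y)
  \;=\; \sup_{f \text{ 1-Lipschitz}}
        \left|\, \int_X f \, d\mu - \int_X f \, d\nu \,\right|,
% where \Pi(\mu, \nu) is the set of couplings of \mu and \nu, i.e. measures
% on X \times X with marginals \mu and \nu.
```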
- 17:15 - 18:00 Márk Széles, Radboud
- Title: Compositional Inference
- Abstract:
Inference is a fundamental reasoning technique in probability theory.
When applied to a large joint distribution, it involves updating
with evidence (conditioning) in one or more components (variables)
and computing the outcome in other components. When the joint
distribution is represented by a Bayesian network, the network
structure may be exploited to proceed in a compositional manner
— with great benefits. However, the main challenge is that updating
involves (re)normalisation, making it an operation that interacts
badly with other operations.
String diagrams are becoming popular as a graphical technique
for probabilistic (and quantum) reasoning. Conditioning has appeared in string diagrams,
in terms of a disintegration, using bent
wires and shaded (or dashed) normalisation boxes. It has become
clear that such normalisation boxes do satisfy certain compositional rules.
We take a decisive step in this development by
adding a removal rule to the formalism, for the deletion of shaded
boxes. Via this removal rule one can get rid of shaded boxes and
terminate an inference argument. We demonstrate via
(graphical) examples how the resulting compositional inference technique
can be used for Bayesian networks and causal reasoning.
This is joint work with Bart Jacobs and Dario Stein.
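The following Python sketch is a toy illustration of our own, not the authors' string-diagrammatic formalism: in a chain-shaped Bayesian network, evidence on the last node is pulled back edge by edge, exploiting the network structure, and a single normalisation at the very end produces the posterior, loosely echoing the idea that normalisation boxes can be manipulated compositionally and eventually removed.

```python
def backward(channel, message):
    """Pull a weight function on the codomain back along a channel
    (one message-passing step through the network)."""
    return {x: sum(p * message[y] for y, p in dist.items())
            for x, dist in channel.items()}

def normalise(d):
    """Rescale a weight function to a probability distribution."""
    total = sum(d.values())
    return {x: w / total for x, w in d.items()}

# A tiny chain-shaped Bayesian network  A -> B -> C.
prior_A = {"a0": 0.3, "a1": 0.7}
A_to_B = {"a0": {"b0": 0.9, "b1": 0.1}, "a1": {"b0": 0.4, "b1": 0.6}}
B_to_C = {"b0": {"c0": 0.2, "c1": 0.8}, "b1": {"c0": 0.7, "c1": 0.3}}

# Observe C = c1: pull the evidence back through the two edges and
# normalise exactly once, at the end.
evidence_C = {"c0": 0.0, "c1": 1.0}
evidence_B = backward(B_to_C, evidence_C)
evidence_A = backward(A_to_B, evidence_B)
posterior_A = normalise({a: prior_A[a] * evidence_A[a] for a in prior_A})
print(posterior_A)  # P(A | C = c1) ≈ {'a0': 0.391, 'a1': 0.609}
```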