Partitioning infinite matroids into circuits

Although this post is part of the ongoing series on infinite matroids, it begins with an old result about graphs. Suppose that you have a graph $G$, and you want to know whether you can partition it into an edge-disjoint union of cycles. If $G$ is finite then there is a straightforward necessary and sufficient condition: every vertex of $G$ should have even degree. Necessity is obvious, and to see sufficiency note that as long as $G$ has any edges it cannot be a forest, so it must contain some cycle; after deleting the edges of that cycle, every vertex still has even degree, and so we can repeat the process until we have exhausted all the edges of the graph.
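To make the finite argument concrete, here is a small sketch in plain Python (my own illustration, not anything from [3]). It repeatedly walks along uncovered edges until a vertex repeats, extracts the cycle between the two visits of that vertex, and continues until every edge is covered; the evenness of all degrees is exactly what guarantees that the walk never gets stuck.

```python
from collections import defaultdict

def cycle_decomposition(edges):
    """Partition the edge set of a finite multigraph into cycles, assuming
    every vertex has even degree.  `edges` is a list of (u, v) pairs; the
    result is a list of cycles, each given as a list of indices into `edges`."""
    adj = defaultdict(set)                 # vertex -> indices of incident edges
    for i, (u, v) in enumerate(edges):
        adj[u].add(i)
        adj[v].add(i)
    remaining = set(range(len(edges)))     # edges not yet covered by a cycle
    cycles = []
    while remaining:
        # walk along uncovered edges from an endpoint of some uncovered edge;
        # since all degrees are even, the walk can always continue until it
        # revisits a vertex, and the segment between the two visits is a cycle
        v = edges[next(iter(remaining))][0]
        walk, walk_edges = [], set()       # walk = [(vertex, edge index), ...]
        first_visit = {v: 0}
        while True:
            i = next(j for j in adj[v] if j in remaining and j not in walk_edges)
            walk.append((v, i))
            walk_edges.add(i)
            a, b = edges[i]
            v = b if v == a else a
            if v in first_visit:           # repeated vertex: extract the cycle
                cycle = [j for _, j in walk[first_visit[v]:]]
                cycles.append(cycle)
                remaining.difference_update(cycle)
                break
            first_visit[v] = len(walk)
    return cycles

# example: two triangles sharing the vertex 0 (every degree is even)
example = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(cycle_decomposition(example))        # e.g. [[0, 1, 2], [3, 4, 5]]
```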

What if $G$ is infinite? In that case it is no longer sufficient that all degrees are even. Indeed, consider the double ray, the two-way infinite path: every vertex has degree 2, but there isn’t even a single cycle. So we need a stronger condition. Rather than considering individual vertices, we should consider cuts. Since any cycle meets any cut in an even number of edges, if $G$ is an edge-disjoint union of cycles then every cut of $G$ must be a disjoint union of finite sets of even size, so must itself be even or infinite. In 1960, Nash-Williams showed that this condition is sufficient [3].

Theorem 1: Let $G$ be any graph. Then $G$ is an edge-disjoint union of cycles if and only if $G$ has no finite cuts of odd size.

In this post we will examine various generalisations of this result, as well as some of the basic ideas used to prove them, and we will finish up by looking at some intriguing open problems which remain. However, we will not look at Nash-Williams’ original proof, since that was long and technical, and over the years more efficient proofs have emerged (see for example [4],[5]).

Roughly speaking, the modern proofs work in the following way. We want to prove the result by induction on the cardinality of $G$. As the base case, we suppose that $G$ is countable. In this case, we can use an argument like the one above to recursively pick a sequence of edge-disjoint cycles $C_1, C_2,\ldots$ covering all edges of $G$: enumerate the edges of $G$ and at each step choose a cycle through the first edge not yet covered. Such a cycle always exists, because deleting finitely many edge-disjoint cycles removes an even number of edges from each cut, so the remaining graph still has no finite odd cuts, hence no bridges, and hence every remaining edge lies on a cycle.

For the induction step, we want to break $G$ up into edge-disjoint subgraphs of strictly smaller cardinality, which we will use to cover larger and larger chunks of $G$. We will make sure that each of these subgraphs $H$ has the following properties:

  1. Any finite cut of $H$ is also a cut of $G$
  2. Any finite cut of $G - E(H)$ is also a cut of $G$

The first of these properties is obviously useful, since it ensures in particular that $H$ has no finite odd cuts, so that we can apply the induction hypothesis to it. It is also easy to extend any infinite subgraph $X$ of $G$ to a subgraph $H$ of the same cardinality satisfying (1): repeatedly take a finite cut of the current subgraph which isn’t a cut of $G$, and either add all edges of the corresponding cut of $G$ (if that cut is finite) or add infinitely many of its edges (if that cut is infinite). Closing under this operation gives a subgraph satisfying (1).

The second property is used in the recursive construction of our partition of $G$ into subgraphs: it ensures that after we have taken $H$ as a partition class, the remainder still has no finite cuts of odd size, so we can continue finding more partition classes in the same way. Rather surprisingly, there is another operation, akin to that in the previous paragraph but more technical, such that by closing under it we can turn any subgraph of $G$ into one of the same cardinality satisfying (2). Even more surprisingly, the exact details of that operation need not be specified for the proof.

Why not? There is a very useful trick, often applicable in such situations, called the Elementary Submodel Method. It allows us to define a kind of universal closure operation which, when applied to a subgraph, closes it under almost anything we might have wanted. That seems miraculous. How could it possibly work? The idea is to make use of a result from logic, the (downward) Löwenheim-Skolem Theorem, which states that if we have some theory expressed in first-order logic, some model $M$ of that theory and some infinite subset $X$ of $M$, then we can find a submodel $N$ of $M$ including $X$ and of the same cardinality as $X$, such that any first-order statement about elements of $N$ holds in $N$ if and only if it holds in $M$. $N$ is then called an elementary submodel of $M$.

A first idea about how to apply this might be to take $M$ to be the graph $G$, considered as a model of the theory of graphs, and define $H$ to be the subgraph corresponding to $N$. Then for any construction we can express in the language of graphs, $H$ will be closed as a subgraph of $G$ under that construction. This is already fairly powerful, but unfortunately the operations under which we want to close subgraphs are often too complicated to be expressible using first-order logic in the language of graphs. So we need a more powerful idea.

What we do instead is to take $M$ to be the whole set-theoretic mathematical universe (considered as a model of the axioms of set theory). Given an infinite subgraph $X$ of $G$ we can now find an elementary submodel $N$ of $M$ including $X$ and containing $G$. If we define $H$ to be the intersection of $N$ with $G$ then it will really be closed under any operation we might think of. Any readers who are logicians might now be worried about whether it is really ok to step outside the set-theoretic universe and treat that whole universe as a mathematical object in this way. Would it then be contained in itself? This is a valid worry, but it can be circumvented by instead taking $M$ to be a large enough fragment of the set-theoretic universe, rather than the whole thing. 

For those reading about the Elementary Submodel Method for the first time, I understand that it can be a little mind-blowing; I remember that it took me a while to wrap my head around it. For the purposes of this post, all you need to remember is that it gives you a kind of universal closure operation, which (with a little more work) lets us push through the proof outlined above.

However, if at some point you find yourself working with a recursive construction in which you would like to close infinite graphs (or indeed matroids) under very technical operations, and perhaps it isn’t easy to see exactly what those operations should be, do consider whether the Elementary Submodel Method might be a helpful tool for you. A good introduction can be found in [4].

What we want to focus on for now is how the same method allows us to extend Nash-Williams’ result to infinite matroids. The obvious question to ask is when we can express the ground set of a matroid as a disjoint union of circuits. In the discussion above, we relied on the fact that every cycle in a graph meets every cut in an even number of edges. The corresponding property in the language of matroids is that every circuit-cocircuit intersection is even, or to put it another way, that the matroids we are working with are binary. And indeed we do get a corresponding result for infinite binary matroids, with a very similar proof:

Theorem 2: Let $M$ be a finitary binary matroid such that no finite cocircuit of $M$ has odd size. Then the ground set of $M$ is a disjoint union of circuits.

Note that this theorem applies only to finitary matroids, that is, matroids in which every circuit is finite, and not to infinite matroids in general. We will return to this point later. However, the theorem above can be generalised slightly beyond binary matroids. What we need to do is to make precise exactly what property of circuits in binary matroids is used in the base case of an argument of the kind outlined above.

Remember that the base case was that in which $M$ is countable. Our plan there was to recursively define a sequence of disjoint circuits $C_1, C_2, \ldots$ covering the ground set. To define $C_n$, we need that even after we have deleted the circuits $C_1, \ldots, C_{n-1}$ there is still at least one more circuit to choose as $C_n$. To ensure that we ultimately cover the whole ground set, we also need to be able to choose this new circuit through our favourite edge $e$ of $M$. We can formalise all of this as follows:

Definition: A matroid $M$ is finite matching extendable if for any finite list $C_1, \ldots, C_{n-1}$ of disjoint circuits of $M$ and any edge $e$ of $M$ not in any $C_i$ there is some circuit $C_n$ of $M$ containing $e$ and disjoint from all of $C_1, \ldots, C_{n-1}$.

So what we are using to prove the base case in our argument for finitary binary matroids is the fact that they are finite matching extendable. Komjáth, Milner and Polat showed [2] that this property also suffices for the general case.
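To see how finite matching extendability drives the countable base case, here is a minimal sketch in Python. The function `circuit_through` is a hypothetical oracle standing in for the property in the definition above: given an element and the finitely many elements covered so far, it returns a circuit through that element avoiding them. For an infinite ground set this is of course only an idealisation of the recursion, not something one could literally run.

```python
def partition_into_circuits(elements, circuit_through):
    """Idealised version of the countable base case.

    `elements` enumerates the ground set e_1, e_2, ...; `circuit_through(e, covered)`
    is a hypothetical oracle returning a circuit through e that avoids the finite
    set `covered` -- exactly what finite matching extendability promises, since
    `covered` is always a union of finitely many disjoint circuits.  Yields
    disjoint circuits which together cover every element of the enumeration.
    """
    covered = set()
    for e in elements:
        if e in covered:
            continue
        circuit = frozenset(circuit_through(e, frozenset(covered)))
        covered |= circuit
        yield circuit
```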

Theorem 3: Let $M$ be any finitary matroid which is finite matching extendable. Then the ground set of $M$ is a disjoint union of circuits.

Once again, Komjáth, Milner and Polat’s proof was tricky and technical. However, using the Elementary Submodel Method, Attila Joó and I were able to give a much shorter alternative proof. 

Instead of generalising Theorem 1 to matroids, we could also ask what happens for directed graphs. More precisely, the natural question in this context is which directed graphs can be expressed as an edge-disjoint union of directed cycles. After a little thought, it is clear that the following statement is the analog of Theorem 1 for directed graphs:

Theorem 4: Let $D$ be any directed graph such that for every cut $K$ of $D$ the sets of edges crossing $K$ in each direction have the same cardinality. Then $D$ is an edge-disjoint union of directed cycles.

The history of this statement is a little peculiar. After proving Theorem 1 in 1960, Nash-Williams claimed that his methods could also be extended to give Theorem 4. However, he did not explain how to do so, and over the decades nobody else was able to reconstruct such a proof. In 2017 Thomassen stated Theorem 4 as a conjecture in [5], and a few years later the conjecture was finally proved by Attila Joó [1] using the Elementary Submodel Method.

Since we could generalise Theorem 1 to matroids, it is natural to ask whether we can also generalise Theorem 4 to matroids. And indeed we can, namely to regular matroids. To understand the statement, we need to understand what the analog of a directed cycle in a regular matroid might be. A quick way to express the concept we need is by using an alternative characterisation of regular matroids due to White [6].

Definition: A signing of a matroid $M$ consists of a choice, for each circuit $C$ of $M$, of a function $f_C \colon C \to \{-1, 1\}$ and a choice, for each cocircuit $D$ of $M$, of a function $g_D \colon D \to \{-1, 1\}$, such that for any circuit $C$ and cocircuit $D$ we have $$\sum_{e \in C \cap D} f_C(e)g_D(e) = 0\,.$$ A signed matroid is a matroid together with a signing of that matroid.

Fact: A matroid $M$ is regular if and only if it has a signing.

For example, we can find a signing of the matroid associated to any graph $G$ by directing the edges of $G$ arbitrarily, defining each $f_C$ to be 1 on the edges directed in one sense around $C$ and -1 on the edges directed in the opposite sense, and defining each $g_D$ to be 1 on the edges crossing $D$ in one direction and -1 on the edges crossing $D$ in the opposite direction. The desired equation holds since each cycle crosses each cut the same number of times in each direction.
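As a quick sanity check of this construction (a toy example of my own, not taken from [6]), the snippet below fixes an orientation of a 4-cycle, takes the cut induced by a vertex bipartition, and verifies that $\sum_{e \in C \cap D} f_C(e)g_D(e) = 0$.

```python
def f_value(oriented_edge, traversal):
    """f_C(e): +1 if the fixed orientation of e agrees with the direction in
    which the cycle C traverses it, -1 otherwise."""
    return 1 if oriented_edge == traversal else -1

def g_value(oriented_edge, left_side):
    """g_D(e): +1 if e crosses the cut D from `left_side` to its complement,
    -1 if it crosses the other way."""
    tail, _ = oriented_edge
    return 1 if tail in left_side else -1

# a 4-cycle on the vertices 1, 2, 3, 4 with an arbitrary fixed orientation
oriented   = [(1, 2), (3, 2), (3, 4), (1, 4)]
# the cycle C traverses its edges in the order 1 -> 2 -> 3 -> 4 -> 1
traversals = [(1, 2), (2, 3), (3, 4), (4, 1)]
# the cut D induced by the vertex bipartition {1, 2} / {3, 4}
left_side = {1, 2}
cut_edges = {(3, 2), (1, 4)}

total = sum(f_value(e, t) * g_value(e, left_side)
            for e, t in zip(oriented, traversals) if e in cut_edges)
assert total == 0    # the cycle crosses the cut once in each direction
```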

In this context, it is clear what the analog of a directed cycle should be. We say that a circuit $C$ (or cocircuit $D$) of a signed matroid is directed if $f_C$ (or $g_D$) is constant. Attila and I were able to extend Theorem 4 to regular matroids as follows:

Theorem 5: Let $M$ be a finitary signed matroid such that for any cocircuit $D$ of $M$ the sets $g_D^{-1}(-1)$ and $g_D^{-1}(1)$ have the same cardinality. Then the ground set of $M$ is a disjoint union of directed circuits. 

All of the matroidal statements we have considered here were for finitary matroids. The reason for this, however, is not that they are known to be false beyond the finitary setting. Indeed, there are no known counterexamples even for general infinite matroids. I would be very interested in any ideas for how to prove or refute the corresponding statements for arbitrary infinite matroids.

However, there is an even more basic statement, from which the countable case of Theorem 5 quickly follows, which is also open for general infinite matroids. It is an analog of the Farkas Lemma for signed matroids, and it would probably be needed in any proof of Theorem 5 for general infinite matroids. So I would be extremely interested in any ideas on how to approach this problem.

Conjecture: Let $M$ be a (possibly infinite) signed matroid and $e$ an edge of $M$. Then $e$ is either contained in a directed circuit or a directed cocircuit. 

References:

[1] Attila Joó. “On partitioning the edges of an infinite digraph into directed cycles”. In: Advances in Combinatorics 2.8 (2021)
[2] Péter Komjáth, Eric Charles Milner, and Norbert Polat. “A compactness theorem for perfect matchings in matroids”. In: Journal of Combinatorial Theory, Series B 44.3 (1988), pp. 253–262.
[3] C. St. J. A. Nash-Williams. “Decomposition of graphs into closed and endless chains”. In: Proceedings of the London Mathematical Society 3.1 (1960), pp. 221–238.
[4] Lajos Soukup. “Elementary submodels in infinite combinatorics”. In: Discrete Mathematics 311.15 (2011), pp. 1585–1598.
[5] Carsten Thomassen. “Nash-Williams’ cycle-decomposition theorem”. In: Combinatorica 37.5 (2017), pp. 1027–1037.
[6] Neil White, ed. Matroid Applications. Encyclopedia of Mathematics and its Applications 40. Cambridge University Press, 1992.

How the infinite matroid got his axioms

In the first post in this series we raised the question of how to find a good notion of infinite matroids with duality. One way to look at the problem is like this: we can’t just define infinite matroids by using the usual axioms for finite matroids and allowing the ground set to be infinite. This fails horribly, because the various standard axiomatisations give rise to quite different notions. We can make them match up again by adding axioms saying that the matroid is finitary (all circuits are finite), but the class of matroids we get in this way isn’t closed under duality.

In this post we’re going to look at a just-so story about how one might solve this problem. We’ll have to wrestle with the axiomatisations until we get natural modifications of all of them which are once more equivalent, but we mustn’t be too violent: we don’t want to lose closure under minors or duality along the way.

A hopeful place to start is with the following Closure Axioms for the closure operator $\text{cl}$ of a finite matroid with ground set $E$:

(CL1) For all subsets $X$ of $E$, $X \subseteq \text{cl}(X)$.
(CL2) For all subsets $X$ of $E$, $\text{cl}(X) = \text{cl}(\text{cl}(X))$.
(CL3) For all subsets $X$ and $Y$ of $E$ with $X \subseteq Y$ we have $\text{cl}(X) \subseteq \text{cl}(Y)$.
(CL4) For any $a$ and $b$ in $E$ and any subset $Y$ of $E$ with $a \in \text{cl}(Y \cup \{b\}) \setminus \text{cl}(Y)$ we have $b \in \text{cl}(Y \cup \{a\}) \setminus \text{cl}(Y)$.

We can still consider such operators $\text{cl}: {\cal P}E \to {\cal P}E$ even on an infinite set $E$. They are called Idempotent-Exchange operators, or IE-operators for short (the second axiom is idempotence, and the fourth is called the exchange property) [H69]. This is a good place to start because there are very natural definitions of minors and duality for IE-operators.
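To get a feel for these axioms, here is a brute-force check (a toy illustration of my own) that the closure operator of the uniform matroid $U_{2,4}$, with ground set $\{0,1,2,3\}$, satisfies (CL1)-(CL4).

```python
from itertools import chain, combinations

E = frozenset(range(4))

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def cl(X):
    """Closure operator of the uniform matroid U_{2,4}: sets with at most one
    element are closed, and any larger set spans the whole ground set."""
    X = frozenset(X)
    return X if len(X) <= 1 else E

# brute-force check of (CL1)-(CL4) over all subsets of the ground set
for X in subsets(E):
    assert X <= cl(X)                              # (CL1)
    assert cl(cl(X)) == cl(X)                      # (CL2)
    for Y in subsets(E):
        if X <= Y:
            assert cl(X) <= cl(Y)                  # (CL3)
for Y in subsets(E):
    for a in E:
        for b in E:
            if a in cl(Y | {b}) - cl(Y):           # (CL4): the exchange property
                assert b in cl(Y | {a}) - cl(Y)
```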

The dual of $\text{cl}$ is defined as $\text{cl}^*\colon X \mapsto X \cup \{x \in E \mid x \not \in \text{cl}(E \setminus (X \cup \{x\}))\}$. Thus $\text{cl}^*$ has the following property: for any partition of $E$ as $X \dot \cup Y \dot \cup \{x\}$ we have $x \in \text{cl}^*(X) \leftrightarrow x \not \in \text{cl}(Y)$. This property, together with (CL1), characterises $\text{cl}^*$. Thus $\text{cl}^{**} = \text{cl}$. It isn’t hard to check that if $\text{cl}$ is the closure operator of a matroid then $\text{cl}^*$ is the closure operator of the dual matroid. $\text{cl}^*$ automatically satisfies (CL1), and it satisfies (CL3) if $\text{cl}$ does. Moreover, (CL4) for an operator is just a reformulation of (CL2) for its dual, so $\text{cl}^*$ satisfies (CL4) because $\text{cl}$ satisfies (CL2), and $\text{cl}^*$ satisfies (CL2) because $\text{cl}$ satisfies (CL4). Minors are also easy to define: for example, the restriction of $\text{cl}$ to a subset $E'$ of $E$ is given by $\text{cl}|_{E'} (X) = \text{cl}(X) \cap E'$.
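Continuing the same toy example, the sketch below implements the duality formula above and checks by brute force that $\text{cl}^{**} = \text{cl}$; since $U_{2,4}$ is self-dual, its dual operator in fact coincides with the original one.

```python
from itertools import chain, combinations

E = frozenset(range(4))

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def cl(X):
    """Closure operator of U_{2,4} again: X if |X| <= 1, otherwise all of E."""
    X = frozenset(X)
    return X if len(X) <= 1 else E

def dual(operator):
    """The dual IE-operator: X together with every x outside X which does not
    lie in the closure of the complement of X u {x}."""
    def operator_star(X):
        X = frozenset(X)
        return X | {x for x in E - X if x not in operator(E - (X | {x}))}
    return operator_star

cl_star = dual(cl)
for X in subsets(E):
    assert dual(cl_star)(X) == cl(X)   # cl** = cl
    assert cl_star(X) == cl(X)         # U_{2,4} is self-dual, so cl* = cl here
```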

If we try to relate IE-operators to the objects given by the other axiomatisations then we quickly run into problems. We can begin by saying that a set $I$ is $\text{cl}$-independent as long as there is no $x$ in $I$ with $x \in \text{cl}(I \setminus \{x\})$. Then we would like to define bases as maximal independent sets. The trouble is that there might not be any! For example, if $E$ is infinite and we define $\text{cl}(X)$ to be $X$ when $X$ is finite and $E$ when $X$ is infinite then the independent sets are exactly the finite sets, so there are no maximal independent sets.

How can we fix this? One rather silly idea would be to take the collection of IE-operators which have at least one base. Leaving all other problems aside, this collection isn’t minor-closed. Yet more brutally, we could require that $\text{cl}$ and all of its minors each have at least one base. By definition this class is minor-closed. It isn’t hard to see that it is also closed under duality.

This is in fact the right class of infinite matroids. They were discovered by Higgs, who called them B-matroids [H69]. Oxley showed that B-matroids form a very natural class: the class of B-matroids is the largest class of preindependence spaces on which duality and minors are well defined [O92]. (Preindependence spaces are the structures satisfying the usual independence axioms for finite matroids.)

The clincher, which not only confirms that B-matroids are the right class to work with but also makes it much easier to work with them, is the fact that they have multiple natural axiomatisations [BDKPW13]. At the start of this post, we set out to find those axiomatisations, and we have already found some suitable Closure Axioms (except that we have not yet precisely formulated `all minors have bases’ as an axiom). How could we go about finding the others? It turns out that, as well as slightly tweaking the axiomatisations of finite matroids, we’ll always need a new `all minors have bases’ axiom.

Let’s start by looking at the Independence Axioms for finite matroids:
(I1) The empty set is independent.
(I2) Every subset of an independent set is independent.
(I3) If $I$ and $J$ are independent and $J$ has more elements than $I$ then there is $x \in J \setminus I$ such that $I \cup \{x\}$ is still independent.

(I1) and (I2) don’t need to be changed, but the `more elements than’ clause in (I3) becomes less useful when applied to infinite sets. The key idea here is to replace `$J$ has more elements than $I$’ with the far weaker `$J$ is a maximal independent set but $I$ isn’t’ in (I3). Although it looks weaker, we don’t actually lose anything by making this change, because even after the modification (I1), (I2) and (I3) still axiomatise finite matroids.

The requirement that all minors should have at least one base doesn’t follow from (I1), (I2) and our modified (I3). So we need to add it as a new axiom. Formulating this axiom correctly is a subtle matter, because it seems that in order to have a sensible definition of contraction we must first be able to find bases of the sets we wish to contract. The key point is that any minor can be obtained in such a way that the set $I$ of elements that we contract is independent. The independent sets of such contractions are simply sets whose union with $I$ is independent. So we can formulate `all minors have bases’ as follows:

(IM) For any $I \subseteq X \subseteq E$ with $I$ independent, the collection of independent sets $J$ such that $I \subseteq J \subseteq X$ has a maximal element.

To find a base when we contract an arbitrary set $X$ using this axiom, we first have to find a base of $X$ itself. But (IM) allows us to do that too, so this doesn’t cause any problems. (I1), (I2) and the modified (I3), together with (IM), really do axiomatise B-matroids. The same idea works for bases: the usual base axioms together with (IM), applied to the collection of subsets of bases, give us another axiomatisation. We can also formulate a fifth Closure Axiom (CLM) in the same spirit, saying that the collection of $\text{cl}$-independent sets satisfies (IM). Once more, the collection of operators satisfying (CL1)-(CL4) and (CLM) is exactly the collection of B-matroids.
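Here is a brute-force sanity check of (I1), (I2), the modified (I3) and (IM) for the same toy matroid $U_{2,4}$ (again my own illustration). In the finite setting (IM) holds automatically, but the code at least shows how the axioms are to be read.

```python
from itertools import chain, combinations

E = frozenset(range(4))

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def independent(X):
    """Independent sets of the uniform matroid U_{2,4}: at most two elements."""
    return len(X) <= 2

def maximal_members(family):
    """Members of `family` that are not properly contained in another member."""
    return [X for X in family if not any(X < Y for Y in family)]

indep = [X for X in subsets(E) if independent(X)]

assert frozenset() in indep                                      # (I1)
assert all(independent(Y) for X in indep for Y in subsets(X))    # (I2)

# modified (I3): if J is a maximal independent set and I is not, then some
# element of J \ I can be added to I without destroying independence
bases = maximal_members(indep)
for I in indep:
    if I not in bases:
        for J in bases:
            assert any(independent(I | {x}) for x in J - I)

# (IM): for independent I contained in X, the independent sets between I and X
# have a maximal element -- automatic for finite matroids, but it is the axiom
# that does the real work in the infinite setting
for X in subsets(E):
    for I in subsets(X):
        if independent(I):
            between = [J for J in subsets(X) if I <= J and independent(J)]
            assert maximal_members(between)
```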

We won’t discuss the proof that all of these axiomatisations are equivalent (or more precisely cryptomorphic) here, since it is similar to the proof for finite matroids. Instead, we’ll finish by sketching how to get axiomatisations in terms of circuits or the rank function.

In the derivation of other axiomatisations from the circuit axioms, the circuit elimination axiom gets iterated. If one tries to make these proofs work in an infinite context, it turns out that circuit elimination has to be iterated infinitely often, and the proofs fall apart. What is needed is an axiom which allows for controlled iteration of circuit elimination. More precisely, it is enough to require that infinitely many different edges of the same circuit can all be eliminated simultaneously. This principle, the infinite circuit elimination axiom, can be formulated as follows:

(C3) Let $C$ be a circuit, $z$ an edge of $C$ and $X$ a subset of $C \setminus \{z\}$. For each $x \in X$ let $C_x$ be a circuit with $C_x \cap (X \cup \{z\}) = \{x\}$. Then there is a circuit $C'$ with $$z \in C' \subseteq \left(C \cup \bigcup_{x \in X}C_x\right)\setminus X\,.$$

Note that in the special case where $X$ consists of a single element, (C3) reduces to the familiar (strong) circuit elimination axiom. If we replace the usual circuit elimination axiom with (C3), and add an axiom saying that the collection of sets not including any circuit satisfies (IM), then we get an axiomatisation of infinite matroids in terms of their circuits.

The rank function requires the biggest jump from the finite axiomatisation. In order to get a handle on the relationships between sets of infinite rank we must axiomatise not in terms of the rank function but rather in terms of the relative rank function $r$, where $r(A | B)$ is intended to represent the number of elements we need to add to $B$ in order to span $A$ (in the finite case this is just $r(A \cup B) - r(B)$). Tweaking the usual rank axioms to corresponding statements about the relative rank function and, as usual, adding an axiom like (IM) gives yet another axiomatisation of infinite matroids.
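For finite matroids the relative rank is easy to compute from the ordinary rank function; the tiny sketch below does this for the toy matroid $U_{2,4}$. The finite-case formula $r(A|B) = r(A \cup B) - r(B)$ is my gloss on the intended meaning rather than a definition taken from [BDKPW13].

```python
def rank(X):
    """Rank function of the uniform matroid U_{2,4}: min(|X|, 2)."""
    return min(len(set(X)), 2)

def relative_rank(A, B):
    """r(A|B): the number of elements that must be added to B in order to
    span A.  For a finite matroid this is just r(A u B) - r(B)."""
    return rank(set(A) | set(B)) - rank(B)

assert relative_rank({0, 1, 2}, {3}) == 1   # one element of A spans A over {3}
assert relative_rank({0, 1}, set()) == 2    # starting from nothing we need two
assert relative_rank({0}, {1, 2}) == 0      # {1, 2} already spans everything
```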

I hope this has helped to explain why the axioms for infinite matroids look the way they do. Next time we’ll get a better feel for the meaning of these axioms by looking at some examples.

[BDKPW13] H. Bruhn, R. Diestel, M. Kriesell, R. A. Pendavingh and P. Wollan, Axioms for Infinite Matroids. Advances in Mathematics 239 (2013), pp. 18–46.
[H69] D. A. Higgs, Matroids and Duality. Colloq. Math. 20 (1969), pp. 215–220.
[O92] J. G. Oxley, Infinite Matroids. In: Matroid Applications (ed. N. White), Encyc. Math. Appl. 40 (CUP 1992), pp. 73–90.