This marks the fourth year in a row that a Matroid project was selected by SageMath for the Google Summer of Code program. Previous participants are Tara Fife, Chao Xu, and Jayant Apte.

The polymath projects are a kind of mathematical experiment, proposed by Tim Gowers, to see whether massively collaborative mathematics is possible. Work on a proof proceeds via a sequence of public comments on a blog. The general idea is to blurt out whatever idea comes into your head, allowing for rapid public dissemination.

Polymath 12 is hosted by Timothy Chow and you can click here to follow the progress or to contribute.

In that construction we used 2-sums to stick the matroids together. Suppose that we have a finite collection of matroids arranged in a tree, so that their ground sets overlap (always in single edges) if and only if they are adjacent in the tree. Then because 2-sums are associative we have an unambiguously defined 2-sum of that collection. In the previous post in this series, we saw that this construction also works if we allow the tree to be infinite, but that we have to specify a little extra information: the set $\Psi$ of ends of the tree which the circuits are allowed to use.

The same trick can be used for other kinds of associative sum of finite matroids. In this post we’ll see how it works for sums of representable matroids, and why that is useful for understanding the topological spaces associated to infinite graphs.

To see how the addition of representable matroids works, we need to look at them from a slightly unusual angle. Let’s say that we have a matroid $M$ on a ground set $E$, and suppose that we have vectors $\{v_e | e \in E\}$ in some vector space over a field $k$, giving us a representation of $M$ over $k$. Then we can capture all the essential information about this representation by forgetting the details of the vector space and focusing just on the linear relationships amongst the vectors. More formally, we say that an element $\lambda$ of the space $k^E$ is a *linear dependence* of the $v_e$ if $\sum_{e \in E} \lambda(e)v_e = 0$. Then the linear dependences form a subspace $V$ of $k^E$, and this subspace is enough to recover the matroid $M$; the circuits of $M$ are precisely the minimal nonempty supports of vectors in $V$. For those who prefer to think of representations in terms of matrices rather than families of vectors, the subspace we’re working with is just the orthogonal complement of the row space of the matrix.

So we can encode representations of matroids on $E$ over $k$ as subspaces of $k^E$. This way of seeing representations fits well with matroid duality, in that if $V \subseteq k^E$ represents a matroid $M$ on $E$ then the orthogonal complement $V^{\bot}$ represents the dual matroid $M^*$. If we define $M(V)$ to be the matroid whose circuits are the minimal nonempty supports of elements of $V$, then we can express this as $M(V^{\bot}) = (M(V))^*$.
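This can be tried by brute force over a small field. Here is a minimal Python sketch (my own, not from the post; the helper names `span` and `minimal_supports` are ad hoc), taking $k = \mathbb{F}_2$ and the cycle matroid of a triangle:

```python
from itertools import product

def span(gens, n):
    """All vectors (tuples over GF(2)) in the span of the generators."""
    vecs = set()
    for coeffs in product([0, 1], repeat=len(gens)):
        vecs.add(tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2
                       for i in range(n)))
    return vecs

def minimal_supports(vecs):
    """Minimal nonempty supports of a set of vectors: the circuits of M(V)."""
    supports = {frozenset(i for i, x in enumerate(v) if x) for v in vecs} - {frozenset()}
    return {S for S in supports if not any(T < S for T in supports)}

# The cycle matroid of a triangle is represented over GF(2) by the matrix
# with rows (1,0,1) and (0,1,1).  Its space of linear dependences V is
# spanned by (1,1,1); the row space is the orthogonal complement of V.
V = span([(1, 1, 1)], 3)
V_perp = span([(1, 0, 1), (0, 1, 1)], 3)

circuits = minimal_supports(V)         # circuits of M(V)
cocircuits = minimal_supports(V_perp)  # circuits of M(V^perp) = cocircuits of M(V)
print(circuits)    # the single triangle {0, 1, 2}
print(cocircuits)  # the three edge-pairs, i.e. the bonds of the triangle
```

The minimal supports of $V$ give the single circuit $\{0,1,2\}$, while those of $V^{\bot}$ give the three bonds, matching $M(V^{\bot}) = (M(V))^*$.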

The advantage of this perspective is that there is a natural way to glue together such subspaces, which we can use to build a self-dual gluing operation for represented matroids. Suppose that we have two sets $E_1$ and $E_2$ and subspaces $V_1$ and $V_2$ of $k^{E_1}$ and $k^{E_2}$, respectively. We now want to glue these together to give a subspace $V_1 \oplus V_2$ of $k^{E_1 \triangle E_2}$. As with the 2-sum, we throw away the `gluing edges’ in the overlap $E_1 \cap E_2$. The idea is to take pairs of vectors which match on the gluing edges, patch them together and throw away the part supported on the gluing edges. More precisely, we set $V_1 \oplus V_2 := \{v \mathord{\upharpoonright}_{E_1 \triangle E_2} | v \in k^{E_1 \cup E_2}, v \mathord{\upharpoonright}_{E_i} \in V_i\}$.

Like the 2-sum, this definition is self-dual in the sense that $(M(V_1 \oplus V_2))^* = M(V_1^{\bot} \oplus V_2^{\bot})$. It is also associative, in that if $V_1$, $V_2$ and $V_3$ are subspaces of $k^{E_1}$, $k^{E_2}$ and $k^{E_3}$ respectively and the sets $E_1$ and $E_3$ are disjoint then $(V_1 \oplus V_2) \oplus V_3 = V_1 \oplus (V_2 \oplus V_3)$. So if we have a finite collection of such representations on ground sets $E_t$ arranged in a finite tree, such that the ground sets only overlap if they are adjacent in the tree, then we have an unambiguous sum of all these subspaces.
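For finite ground sets over $\mathbb{F}_2$ this gluing can be computed by brute force. A small sketch (my own code; the helper `glue` is an ad hoc name): summing the dependence spaces of two triangles along a shared edge $e$ yields the 4-cycle, whose unique circuit is the full symmetric difference of the ground sets.

```python
from itertools import product

def glue(V1, E1, V2, E2):
    """V1 (+) V2 over GF(2): restrict to E1 triangle E2 those vectors on
    E1 union E2 whose restrictions to E1 and E2 lie in V1 and V2.
    Subspaces are given as sets of tuples indexed by sorted(Ei)."""
    E = sorted(set(E1) | set(E2))
    D = sorted(set(E1) ^ set(E2))  # keep only the non-gluing edges
    result = set()
    for vals in product([0, 1], repeat=len(E)):
        v = dict(zip(E, vals))
        if (tuple(v[e] for e in sorted(E1)) in V1
                and tuple(v[e] for e in sorted(E2)) in V2):
            result.add(tuple(v[e] for e in D))
    return result, D

# Two triangles sharing the gluing edge 'e': each dependence space is
# spanned over GF(2) by the all-ones vector on its three edges.
V1 = {(0, 0, 0), (1, 1, 1)}  # on E1 = {a, b, e}
V2 = {(0, 0, 0), (1, 1, 1)}  # on E2 = {c, d, e}
V, D = glue(V1, ['a', 'b', 'e'], V2, ['c', 'd', 'e'])
print(D)  # ['a', 'b', 'c', 'd']
print(V)  # {(0,0,0,0), (1,1,1,1)}: the 2-sum is the 4-cycle matroid
```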

Just as for 2-sums, we can also glue together infinite trees of represented matroids in this way, as long as we are careful to specify which ends of the tree the circuits are allowed to use. Formally, we do this as follows. Suppose that we have a tree $T$; a family of sets $E_t$ indexed by the nodes of $T$, such that $E_s \cap E_t$ is nonempty only if $s = t$ or $s$ and $t$ are adjacent in $T$; a family of subspaces $V_t \subseteq k^{E_t}$; and a Borel subset $\Psi$ of the set $\Omega(T)$ of ends of $T$. Then we can build a subspace of $k^{\bigtriangleup_{t}E_t}$ by setting

$\bigoplus^{\Psi}_t V_t := \{v \mathord{\upharpoonright}_{\bigtriangleup_t E_t} | v \in k^{\bigcup_t E_t}, v \mathord{\upharpoonright}_{E_t} \in V_t \text{ and } \Omega(T) \cap \overline{\{t | v \upharpoonright_{E_t} \neq 0\}} \subseteq \Psi\}$

and $M(\bigoplus^{\Psi}_t V_t)$ will be an infinite matroid.

What are these infinite sums good for? Well, if we have a representable matroid and we have a $k$-separation of that matroid then we can split it up as a sum of two matroids in this way such that there are fewer than $k$ gluing edges. We can use this to break problems about bigger matroids down into problems about smaller matroids. Similarly, if we have a nested collection of finite separations in an infinite matroid, cutting the ground set up into a tree of finite parts, then we can cut the matroid up into a sum of finite matroids and analyse its properties in terms of their properties. This kind of chopping up and reconstruction can also be helpful to show that the infinite object is a matroid in the first place.

Let’s see how that might work for a more concrete problem. Suppose that we have a locally finite graph $G$. Then we can build a topological space $|G|$ from it by formally adding its ends as new points at infinity (see for example [D10]). These spaces and their subspaces are key elements of topological infinite graph theory, which was where this series of posts started.

At first, it was hoped that these subspaces would have the nice property that if they are connected then they are path connected. But Agelos Georgakopoulos eventually found a counterexample to this claim [G07]. However, the set of ends used by the counterexample he constructed was topologically horrible, so we might still hope that if we have a connected subspace $X$ of $|G|$ such that the set $\Psi$ of ends contained in $X$ is topologically nice, then $X$ will be path-connected. Well, if we take `topologically nice’ to mean `Borel’, then the ideas above let us show that this is true.

We can do this by considering the matroid whose circuits are the edge sets of those topological circles in $|G|$ which only go through ends in $\Psi$. More precisely, we need to show that this really does give the circuit set of a matroid $M(G, \Psi)$. If we can do that, then we can argue as follows:

Let $P$ be the set of edges which, together with both endpoints, are completely included in $X$. Let $u$ and $v$ be vertices in $X$. Build a new graph $G + e$ with an extra edge $e$ joining $u$ to $v$. Then since $X$ is connected, there can be no cocircuit of $M(G + e, \Psi)$ which contains $e$ and is disjoint from $P$ (such a cocircuit would induce a topological separation of $X$ with $u$ and $v$ on opposite sides). So $e$ is not a coloop in the restriction of $M(G + e, \Psi)$ to $P \cup \{e\}$. Hence there is a circuit through $e$ in that matroid, and removing $e$ from the corresponding topological circle gives an arc from $u$ to $v$ through $X$ in $|G|$. So any two vertices are in the same path-component of $X$. Similar tricks show the same for ends and interior points of edges.

What this argument shows is that the connection between connectivity and path-connectivity is encoded in the statement that $M(G, \Psi)$ is a matroid. To prove that statement, we can build $M(G, \Psi)$ as the sum of an infinite tree of graphic matroids in the sense described above. First of all, since $G$ is locally finite, we can cut it up into an infinite tree of finite connected parts using disjoint finite separators. Then we define the torso corresponding to a part to consist of that part together with new edges forming complete graphs on each of the separators. This gives us an infinite tree of finite graphs, and the ends of the tree correspond precisely to the ends of $G$. Now we take the graphic matroids corresponding to those graphs, take the standard binary representations of those matroids, and glue them together along this tree, taking the ends in $\Psi$ to be allowed for circuits. And presto! We have built the matroid $M(G, \Psi)$.

The details of this argument are explained in [BC15].

I can’t resist mentioning that the matroids we’ve just built in a bottom-up way also have a top-down characterisation. Consider the class of matroids whose ground set is the set of edges of $G$, and in which every circuit is a topological circuit of $G$ and every cocircuit is a bond of $G$. Let’s call such matroids $G$-matroids.

For some graphs $G$, we can find $G$-matroids which are not of the form $M(G, \Psi)$. For example, in the following graph $Q$ we can define an equivalence relation on the (edge sets of) double rays by saying that two double rays are equivalent if they have finite symmetric difference. Then the set of finite circuits together with any collection of double rays closed under this equivalence relation gives the set of circuits of an infinite matroid.

The matroids I just described are a bit pathological, and they hover on the boundary between being binary and non-binary. None of them has a $U_{2,4}$-minor. They also still have the familiar property that any symmetric difference of two circuits is a disjoint union of circuits. But symmetric differences of *three* circuits might not be disjoint unions of circuits!

For example, there is such a matroid in which the first three sets depicted below are circuits, but the fourth, their symmetric difference, is not.

The problem is that these matroids are wild. This means that there are circuits and cocircuits which intersect in infinitely many elements. We only have a good theory of representability for tame matroids, those in which every intersection of a circuit with a cocircuit is finite. I hope to discuss this in more detail in a future post.

If we only consider tame $G$-matroids, then this proliferation of pathological matroids disappears. For the graph $Q$, for example, there are only 2 tame $Q$-matroids, namely the finite cycle matroid and the topological cycle matroid. Remarkably, for any locally finite graph $G$ it turns out that the tame $G$-matroids are precisely the matroids of the form $M(G, \Psi)$. So our top-down and bottom-up characterisations meet, and any matroid associated to a graph can be built up from finite parts in a way which mirrors the structure of that graph. The reasons for this correspondence go far beyond the scope of this post, but they can for example be found in [BC16].

Now that we’ve seen the standard ways to build infinite matroids and their relationship to infinite graphs, in the next post we’ll examine the most important open problem about them: the Infinite Matroid Intersection Conjecture.

[BC15] N. Bowler and J. Carmesin, Infinite Matroids and Determinacy of Games, preprint here.

[BC16] N. Bowler and J. Carmesin, The ubiquity of Psi-matroids, preprint here.

[D10] R. Diestel, Locally finite graphs with ends: a topological approach I–III, Discrete Math 311–312 (2010–11).

[G07] A. Georgakopoulos. Connected but not path-connected subspaces of infinite graphs, Combinatorica, 27(6) 683–698 (2007).

The version of this game where $G$ is the $5 \times 5$ grid was produced in the real world by Tiger Electronics, and has a bit of a cult following. For example, there is a dedicated Wikipedia page and this site (from which I borrowed the image below) even has the original user manuals.

The goal of this post is to prove the following theorem.

**Theorem.** Let $G=(V,E)$ be a graph for which all the lights are initially ON. Then it is always possible to press a sequence of vertices to switch all the lights OFF.

(Note that if not all the lights are ON initially, it will not always be possible to switch all the lights OFF. For example, consider just a single edge where only one light is ON initially.)

**Proof.** Evidently, there is no point in pressing a vertex more than once, and the sequence in which we press vertices does not matter. Thus, we are searching for a subset $S$ of vertices such that $|S \cap N[v]|$ is odd for all $v \in V$, where $N[v]$ is the *closed neighbourhood of $v$* (the neighbours of $v$ together with $v$ itself). We can encode the existence of such a set $S$ by doing linear algebra over the binary field $\mathbb{F}_2$.

Let $A$ be the $V \times V$ adjacency matrix of $G$, and let $B=A+I$. Note that the column corresponding to a vertex $v$ is the characteristic vector of the closed neighbourhood of $v$. Thus, the required set $S$ exists if and only if the all-ones vector $\mathbb{1}$ is in the column space of $B$. Since $B$ is symmetric, this is true if and only if $\mathbb{1}$ is in the row space of $B$. Let $B^+$ be the matrix obtained from $B$ by adjoining $\mathbb{1}$ as an additional row. Thus, we can turn all the lights OFF if and only if $B$ and $B^+$ have the same rank.

Since this is a matroid blog, let $M(B)$ and $M(B^+)$ be the column matroids of $B$ and $B^+$ (over $\mathbb{F}_2$). We will prove the stronger result that $M(B)$ and $M(B^+)$ are actually the same matroid. Every circuit of $M(B^+)$ is clearly a dependent set in $M(B)$. On the other hand, let $C \subseteq V$ be a circuit of $M(B)$. Then $\sum_{v \in C} \chi(N[v])=\mathbb{0}$, where $\chi(N[v])$ is the characteristic vector of the closed neighbourhood of $v$. Therefore, in the subgraph of $G$ induced by the vertices in $C$, each vertex has odd degree. By the Handshaking Lemma, it follows that $|C|$ is even, and so $C$ is also dependent in $M(B^+)$.
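The linear-algebra argument is easy to run in practice. Here is a sketch (my own code, not from the referenced article) that solves the all-ON instance on the $5 \times 5$ grid by Gaussian elimination over $\mathbb{F}_2$ and then verifies the resulting press-set directly:

```python
def solve_lights_out(n):
    """Find a press-set S with |S ∩ N[v]| odd for every vertex v of the
    n x n grid graph, via Gaussian elimination over GF(2) on B = A + I."""
    V = [(i, j) for i in range(n) for j in range(n)]
    idx = {v: k for k, v in enumerate(V)}
    m = len(V)
    rows = []
    for (i, j) in V:
        mask = 1 << idx[(i, j)]  # column of B = characteristic vector of N[v]
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if (i + di, j + dj) in idx:
                mask |= 1 << idx[(i + di, j + dj)]
        rows.append(mask | (1 << m))  # augment with the all-ones target
    pivots = []
    for col in range(m):
        pivot = next((r for r in range(len(pivots), m) if (rows[r] >> col) & 1), None)
        if pivot is None:
            continue
        r0 = len(pivots)
        rows[r0], rows[pivot] = rows[pivot], rows[r0]
        for r in range(m):
            if r != r0 and (rows[r] >> col) & 1:
                rows[r] ^= rows[r0]
        pivots.append(col)
    # Free variables are set to 0; pivot variables read off the target bit.
    return {V[col] for r, col in enumerate(pivots) if (rows[r] >> m) & 1}

S = solve_lights_out(5)
# Pressing S toggles every light an odd number of times, so all-ON -> all-OFF.
for (i, j) in [(i, j) for i in range(5) for j in range(5)]:
    toggles = sum((a, b) in S for (a, b) in
                  ((i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)))
    assert toggles % 2 == 1
print(len(S))
```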

**Reference**

Anderson, M., & Feil, T. (1998). Turning lights out with linear algebra. *Mathematics Magazine*, *71*(4), 300-303.

**Theorem 1.** If the set of characteristics $\chi(M)$ of a matroid $M$ contains infinitely many primes, then it contains zero.

One unusual aspect of Theorem 1 is that there are four proofs of it in the literature. All of them are attractive, and they call upon different (but related) sets of tools: Zorn’s lemma, compactness in first-order logic, ultraproducts, and Gröbner bases. In this post I will review all four proofs.

To start with, let $r$ be $r(M)$, and assume that $E(M)=\{1,\ldots, n\}$. Let $X$ be the set of variables $\{x_{i,j}\}_{1\leq i \leq r,\ 1\leq j \leq n}$, and let $R$ be the polynomial ring $\mathbb{Z}[X]$. We will let $A$ be the $r\times n$ matrix where the entry in row $i$ and column $j$ is $x_{i,j}$. Each $r$-element subset of $E(M)$ corresponds to an $r\times r$ submatrix of $A$, and thereby to a homogeneous polynomial in $R$, namely the determinant of that submatrix. Let $\mathcal{B}$ be the set of polynomials that correspond to bases of $M$, and let $\mathcal{N}$ contain the polynomials that correspond to dependent $r$-element subsets in $M$. Thus a representation of $M$ is a ring homomorphism from $R$ to a field that sends every polynomial in $\mathcal{N}$, but no polynomial in $\mathcal{B}$, to zero.
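For intuition, the determinant conditions defining $\mathcal{B}$ and $\mathcal{N}$ can be tested by brute force over small prime fields. The following sketch uses $U_{2,4}$ (my own example, not from the post; it is representable exactly over fields with at least three elements): here $\mathcal{N}$ is empty and $\mathcal{B}$ consists of all six $2\times 2$ determinants.

```python
from itertools import combinations, product

def representable_U24(p):
    """Brute force: is the uniform matroid U_{2,4} representable over
    GF(p)?  We search for a 2x4 matrix over GF(p) in which every 2x2
    determinant (the polynomials in the set B above) is nonzero mod p."""
    vectors = list(product(range(p), repeat=2))
    for cols in combinations(vectors, 4):
        if all((a[0] * b[1] - a[1] * b[0]) % p != 0
               for a, b in combinations(cols, 2)):
            return True
    return False

for p in (2, 3, 5):
    print(p, representable_U24(p))  # 2 False, 3 True, 5 True
```

Over GF(2) there are only three nonzero vectors in $k^2$, so four pairwise-independent columns cannot exist; over GF(3) and larger fields they can.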

The first proof of Theorem 1 is due to Peter Vámos. He published two articles in the proceedings entitled Möbius Algebras, produced by the University of Waterloo in 1971. The proof of Theorem 1 has sometimes been cited as coming from [6], but in fact the correct attribution is to [5]. His proof hinges on the following lemma. Recall that an ideal $P$ is *prime* if $ab\in P$ implies that $a\in P$ or $b\in P$.

**Lemma 1.** Let $I$ be an ideal in a commutative ring, $S$, and let $T$ be a multiplicatively closed subset of $S$ that is disjoint from $I$. There exists a prime ideal, $P$, such that $I\subseteq P$, and $P\cap T=\emptyset$.

*Proof*. We consider the partial order of ideals in $S$ that contain $I$ and that are disjoint from $T$. Assume that $J_{1}\subset J_{2}\subset J_{3}\subset \cdots$ is a chain in the order. It is easy to confirm that $J_{1}\cup J_{2}\cup J_{3}\cup \cdots$ is itself an ideal in the order. This means that every such chain of ideals has an upper bound, so we can apply Zorn’s Lemma. Therefore the partial order has a maximal element, $P$. It remains to show that $P$ is a prime ideal. Assume otherwise, so that the product $ab$ belongs to $P$, even though $a\notin P$ and $b\notin P$. The maximality of $P$ means that the ideal generated by $P$ and $a$ is not disjoint from $T$, so there is some element, $ca+p$ in $T\cap \langle P, a\rangle$. (Here $p\in P$, and $c$ is an element of $S$.) Similarly, there are elements $q\in P$ and $d\in S$ such that $db+q$ belongs to $T$. Since $T$ is multiplicatively closed, we see that $(ca+p)(db+q)=(cd)(ab)+(ca)q+(db)p+pq$ is in $T$. But this element is also in $P$, since $ab\in P$. Now we have a contradiction to the fact that $P$ and $T$ are disjoint. $\square$

**Proof 1.** Let $p_{1},p_{2},p_{3},\ldots$ be distinct primes such that $M$ is representable over a field of characteristic $p_{i}$ for each $i$. Let $\phi_{i}$ be a homomorphism from $R$ to a field of characteristic $p_{i}$ that takes all the polynomials in $\mathcal{N}$, and no polynomial in $\mathcal{B}$, to zero. Then the kernel, $\ker(\phi_{i})$, of $\phi_{i}$ is a prime ideal of $R$ containing $\mathcal{N}$, but disjoint from $\mathcal{B}$. Moreover, the integer $p_{i}$ is contained in $\ker(\phi_{i})$. In fact, the only integers in $\ker(\phi_{i})$ are the multiples of $p_{i}$, since if $\ker(\phi_{i})$ contains a non-multiple of $p_{i}$, then it contains the greatest common divisor of that non-multiple and $p_{i}$ (namely $1$). This would imply that $\ker(\phi_{i})$ is equal to $R$, which is impossible.

Assume that there is an element in both the ideal $\langle \mathcal{N}\rangle$ and the multiplicatively closed set generated by $\mathcal{B}\cup\mathbb{Z}$. Such an element is of the form $kf$, where $k$ is an integer, and $f$ is a product of polynomials from $\mathcal{B}$. Since $kf$ is in $\langle \mathcal{N}\rangle$, it is in each $\ker(\phi_{i})$, even though $f$ is not. Thus $k$ is in each ideal $\ker(\phi_{i})$. Thus $k$ is a multiple of each of the distinct primes $p_{1},p_{2},p_{3},\ldots$, and we have a contradiction. Now, by Lemma $1$, there is a prime ideal, $P$, of $R$ which contains $\langle \mathcal{N}\rangle$, and which avoids $\mathcal{B}\cup\mathbb{Z}$. Because $P$ is prime, the quotient ring, $R/P$, is an integral domain. Let $F$ be the field of fractions of $R/P$. The natural homomorphism from $R$ to $F$ gives us a representation of $M$. As $P$ does not contain any integer, it follows that $R/P$ has characteristic zero, and we are done. $\square$

The next proof of Theorem 1 makes use of the Compactness Theorem in first-order logic. It is due to Samuel Wagstaff and appeared two years after Vámos’s proof, in 1973 [7]. Wagstaff does not supply many details — perhaps the account in his PhD thesis (upon which the article is based) is more explicit. But I believe that my version below captures his key ideas.

**Proof 2.** We construct a first-order language of vector spaces. We have a countable supply of variables, $x_{1},x_{2},x_{3},\ldots$, and unary predicates $\operatorname{Scal}$ and $\operatorname{Vec}$. In our interpretation, these predicates will indicate that a variable stands for a scalar or a vector. We also have constants, $0_{S}$, $1_{S}$, and $0_{V}$, which are interpreted as the identities of the vector space. The functions $+_{S}$, $\times_{S}$, $+_{V}$, and $\times_{V}$, are interpreted as the binary operations of the field of scalars, as addition of vectors, and as scalar multiplication. The only binary relation we need is equality. Now, in this language, we can express the axioms for fields and for vector spaces. To illustrate, the compatibility of scalar and vector multiplication would be expressed with the sentence \begin{multline*}\forall x_{1}\forall x_{2}\forall x_{3} (\operatorname{Scal}(x_{1})\land \operatorname{Scal}(x_{2})\land \operatorname{Vec}(x_{3}) \to\\ ((x_{1}\times_{S}x_{2})\times_{V}x_{3}=x_{1}\times_{V}(x_{2}\times_{V}x_{3}))).\end{multline*} We can assert the representability of $M$ by requiring that there exist $x_{1},x_{2},\ldots, x_{n}$ such that $\operatorname{Vec}(x_{i})$ holds for each $i$. For each subset $X\subseteq E(M)=\{1,\ldots, n\}$, we include a sentence asserting that the corresponding subset of $\{x_{1},x_{2},\ldots, x_{n}\}$ is linearly independent if $X$ is independent in $M$, and otherwise asserting that it is linearly dependent. Let us illustrate how this can be accomplished by writing the formula which asserts that $\{x_{1},x_{2},x_{3}\}$ is linearly independent. Let $y_{1}, y_{2}, y_{3}$ be variables that are not used elsewhere in our sentence. 
The formula we need is as follows: \begin{multline*}\forall y_{1}\forall y_{2}\forall y_{3} (\operatorname{Scal}(y_{1})\land \operatorname{Scal}(y_{2})\land \operatorname{Scal}(y_{3}) \land (y_{1}\ne 0_{S} \lor y_{2}\ne 0_{S} \lor y_{3}\ne 0_{S})\to\\ (y_{1}\times_{V}x_{1}) +_{V} (y_{2}\times_{V} x_{2}) +_{V} (y_{3}\times_{V}x_{3})\ne 0_{V}).\end{multline*} We simply perform this trick for each subset of $E(M)$.

Now we have a finite collection, $\mathcal{T}$, of sentences which can be satisfied if and only if $M$ is representable over a field. For each prime, $p$, let $S_{p}$ be the sentence asserting that the sum of $p$ copies of $1_{S}$ is equal to $0_{S}$. Thus $M$ is representable over a field of characteristic $p$ if and only if $\mathcal{T}\cup\{S_{p}\}$ can be satisfied. Assume that $M$ is not representable over a field with characteristic zero. Then the collection $\mathcal{T}\cup\{\neg S_{p}\colon p\ \text{is a prime}\}$ is inconsistent: it cannot be satisfied by any model. The Compactness Theorem tells us that there is a finite subcollection which is inconsistent. Therefore we can assume that $\mathcal{T}\cup\{\neg S_{p} \colon p\ \text{is a prime and}\ p\leq N\}$ has no model, for some integer $N$. This implies that there can be no representation of $M$ over a field with characteristic greater than $N$, so we have proved the contrapositive of Theorem 1. $\square$

The next proof appeared in 1989, and comes courtesy of Imre Leader [3]. I think it might be my favourite out of the four. It uses the notion of an ultraproduct.

**Proof 3.** Let $F_{1},F_{2},F_{3},\ldots$ be an infinite sequence of fields such that $M$ is representable over each $F_{i}$ and $\operatorname{char}(F_{i+1})>\operatorname{char}(F_{i})$. Let $\mathcal{U}$ be a non-principal ultrafilter of the positive integers. This means that $\mathcal{U}$ is a collection of subsets of positive integers, and $\mathcal{U}$ satisfies: (1) any superset of a member of $\mathcal{U}$ is also in $\mathcal{U}$; (2) if $U,V\in\mathcal{U}$, then $U\cap V\in \mathcal{U}$; and (3) if $U$ is any subset of $\mathbb{Z}^{+}$, then exactly one of $U$ or $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. Non-principal ultrafilters cannot be described explicitly, but we know that they exist by an application of Zorn’s Lemma. If $U$ is a finite subset of $\mathbb{Z}^{+}$, then $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. We consider sequences $(x_{1},x_{2},x_{3},\ldots)$, where each $x_{i}$ is an element of $F_{i}$. We declare $(x_{1},x_{2},x_{3},\ldots)$ and $(y_{1},y_{2},y_{3},\ldots)$ to be equivalent if $\{i\in\mathbb{Z}^{+}\colon x_{i}=y_{i}\}$ is a member of $\mathcal{U}$. We perform componentwise addition and multiplication on sequences. It is not too difficult to see that these operations respect our notion of equivalence, and that the equivalence classes of sequences form a field, $F$, under componentwise addition and multiplication. Moreover, the characteristic of $F$ is zero. We consider a sequence of representations of $M$ over the fields $F_{1},F_{2},F_{3},\ldots$. This leads to a representation of $M$ over $F$ in the obvious way: we simply “stack” the infinite sequence of matrices, and produce a matrix whose entries are infinite sequences of numbers. $\square$

Finally, we come full circle! In 2003, Vámos (along with Rosemary Baines) published another proof of Theorem 1 [2]. The proof is actually an easy consequence of the algorithm that Stefan mentioned in a previous post, and the main ingredient is the use of Gröbner bases. I will tweak the proof slightly, for the sake of simplicity.

**Proof 4.** Let $f$ be the product of all polynomials in $\mathcal{B}$. We introduce a new “dummy” variable, $y$. From now on, we will operate in the ring $\mathbb{Z}[X\cup\{y\}]$. Note that an evaluation of the variables in $X\cup\{y\}$ will make $yf-1$ zero if and only if it makes each polynomial in $\mathcal{B}$ non-zero. Let $I$ be the ideal of $\mathbb{Z}[X\cup\{y\}]$ that is generated by $\mathcal{N}\cup\{yf-1\}$. Thus a representation of $M$ is really a ring homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field that takes every polynomial in $I$ to zero.

Let $G$ be a strong Gröbner basis of $I$ (see [1]). Thus $I$ is the ideal generated by $G$. In addition, we have an ordering on products of variables (the lexicographic order will suffice). The leading term of a polynomial is the highest non-zero term under this ordering. For $G$ to be a strong Gröbner basis, we require that if $h$ is in $I$, then there is a polynomial $g\in G$ such that the leading term of $g$ divides the leading term of $h$. Assume that $G$ contains an integer, $N$. If $\phi$ is a homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field, $F$, such that $\ker(\phi)$ contains $I$, then $N$ is taken to zero by $\phi$. This means that the characteristic of $F$ is a prime dividing $N$, so in particular it is at most $N$, and $M$ is not representable over infinitely many prime characteristics. Therefore we will now assume that $G$ contains no integer.

Let $I_{\mathbb{Q}}$ be the ideal of $\mathbb{Q}[X\cup\{y\}]$ generated by $\mathcal{N}\cup\{yf-1\}$. Assume that $1\in I_{\mathbb{Q}}$. Then $1$ can be expressed as a polynomial combination of the polynomials in $\mathcal{N}\cup\{yf-1\}$. By multiplying this combination through by the denominators of the coefficients, we see that $I$ contains an integer, $N$. Thus $G$ contains a polynomial whose leading term divides the leading term of $N$, which is $N$ itself. But in the lexicographic ordering, a constant polynomial is below all non-constant polynomials, from which we deduce that $G$ contains a constant polynomial, contradicting our conclusion from the previous paragraph. Therefore $1\notin I_{\mathbb{Q}}$, which means that $I_{\mathbb{Q}}$ is contained in a maximal ideal, $P$. Note that $P$ contains no nonzero constant polynomial. Now $\mathbb{Q}[X\cup\{y\}]/P$ is a field, and the natural homomorphism from $\mathbb{Q}[X\cup\{y\}]$ to this field leads to a representation of $M$ over a field of characteristic zero. $\square$
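The final step of Proof 4 can be illustrated computationally. Here is a toy sketch (my own example, assuming SymPy is available; the polynomials below are simple stand-ins, not an actual matroid ideal): a Gröbner basis over $\mathbb{Q}$ detects whether $1 \in I_{\mathbb{Q}}$.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Relations forcing x = y = 0 while also demanding x*y = 1 are
# inconsistent over Q, so the reduced Groebner basis is just {1}:
G_bad = groebner([x + y, x - y, x*y - 1], x, y, order='lex')
print(G_bad.exprs)  # [1]

# Relations x - 2 and x*y - 1 are satisfiable (x = 2, y = 1/2): the
# basis contains no constant, so 1 is not in the ideal and a
# characteristic-zero solution exists.
G_ok = groebner([x - 2, x*y - 1], x, y, order='lex')
print(all(not g.is_constant() for g in G_ok.exprs))  # True
```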

**References.**

[1] W. W. Adams and P. Loustaunau. An introduction to Gröbner bases. Graduate Studies in Mathematics, 3. American Mathematical Society, Providence, RI, 1994.

[2] R. Baines and P. Vámos. An algorithm to compute the set of characteristics of a system of polynomial equations over the integers. J. Symbolic Comput. 35 (2003), no. 3, 269–279.

[3] I. Leader. A short proof of a theorem of Vámos on matroid representations. Discrete Math. 75 (1989), no. 1-3, 315–317.

[4] R. Rado. Note on independence functions. Proc. London Math. Soc. (3) 7 (1957), 300–320.

[5] P. Vámos. A necessary and sufficient condition for a matroid to be linear. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 162–169. Univ. Waterloo, Waterloo, Ont., 1971.

[6] P. Vámos. Linearity of matroids over division rings. Notes by G. Roulet. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 170–174. Univ. Waterloo, Waterloo, Ont., 1971.

[7] S. S. Wagstaff Jr. Infinite matroids. Trans. Amer. Math. Soc. 175 (1973), 141–153.

**Definition 1.** A *partial field* is a pair $\mathbb{P} = (R,G)$ of a commutative ring $R$ and a subgroup $G$ of the invertible elements of $R$ such that $-1 \in G$.

**Definition 2.** A matrix $A$ over $R$ is a *(strong)* $\mathbb{P}$-*matrix* if every square submatrix has a determinant in $G\cup \{0\}$.

**Definition 3.** An $r\times n$ matrix $A$ over $R$ is a *weak* $\mathbb{P}$-*matrix* if every $r\times r$ square submatrix has a determinant in $G\cup \{0\}$, and at least one such submatrix has nonzero determinant.

One can check that, if $D$ is a submatrix with nonzero determinant as in the latter definition, then $D^{-1}A$ represents the same matroid, and is a strong $\mathbb{P}$-matrix. The following was Theorem 10 in Part I:

**Proposition 4.** *If $A$ is a weak $\mathbb{P}$-matrix with columns labeled by a set $E$, then $\mathcal{B} = \{ B \subseteq E : |B| = r, \det(A[B]) \neq 0\}$ is the set of bases of a matroid $M$, denoted $M = M[A]$.*
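Definitions 2 and 3 and Proposition 4 are easy to test computationally for the *regular* partial field $(\mathbb{Z}, \{1,-1\})$. A sketch (my own code; helper names are ad hoc), using a small totally unimodular matrix:

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion along the first row
    (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_strong_P_matrix(A, group):
    """Definition 2 for the partial field (Z, group): every square
    submatrix must have determinant in group ∪ {0}."""
    r, n = len(A), len(A[0])
    return all(det([[A[i][j] for j in cols] for i in rows]) in group | {0}
               for k in range(1, min(r, n) + 1)
               for rows in combinations(range(r), k)
               for cols in combinations(range(n), k))

def bases(A):
    """Proposition 4: the r-subsets of columns whose r x r submatrix has
    nonzero determinant are the bases of M[A]."""
    r, n = len(A), len(A[0])
    return {cols for cols in combinations(range(n), r)
            if det([[A[i][j] for j in cols] for i in range(r)]) != 0}

A = [[1, 0, 1, 1],
     [0, 1, 0, 1]]  # totally unimodular, hence a (Z, {1, -1})-matrix
print(is_strong_P_matrix(A, {1, -1}))  # True
print(sorted(bases(A)))  # columns 0 and 2 are parallel, so {0, 2} is not a basis
```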

Today, we are interested in the following question: given a matroid $M$, can we find a partial field $\mathbb{P}_M$ that is a “best fit” for $M$? The qualities we are looking for are:

- $M$ is representable over $\mathbb{P}_M$;
- We can find (information about) other representations of $M$;
- We can compute (with) $\mathbb{P}_M$;
- The representation of $M$ over $\mathbb{P}_M$ is unique.

The fourth property turns out to be impractical, but we can get the first three. This post is based on Section 4 of [PvZ10]; see also Section 3.3 of my thesis [vZ09].

Let $M$ be a rank-$r$ matroid with ground set $E$. Fix a basis $B$ of $M$. First, let $D^\#$ be the *$B$-fundamental-circuit incidence matrix* (see [Oxl11, p.182]). Recall that any representation of $M$ of the form $[I\ D]$ will have the same zero/nonzero pattern in $D$ as in $D^\#$, and that we can scale rows and columns of $D$ to make certain nonzero entries equal to $1$. Let $T$ be a maximal set of coordinates that we can scale to be $1$ (see [Oxl11, Theorem 6.4.7] for details).

Next, for each $x \in B$ and $y \in E-B$ introduce a variable $a_{xy}$; also introduce variables $i_{B'}$ for every basis $B'$ of $M$. Let $\mathcal{Y}$ be the set of all these variables, and consider the ring of polynomials $\mathbb{Z}[\mathcal{Y}]$. Let $\hat D$ be the $B\times (E-B)$ matrix with entries $a_{xy}$, and let $\hat A = [I\ \hat D]$.

Now consider the ideal $I_{M,B,T}$ in $\mathbb{Z}[\mathcal{Y}]$ generated by the following relations:

- $\det(\hat A[Z])$ for all $r$-subsets $Z\subseteq E$ that are nonbases of $M$;
- $\det(\hat A[Z])\,i_Z - 1$ for all bases $Z$ of $M$;
- $a_{xy} - 1$ for all $xy \in T$.

Finally, we set $\mathbb{B}_M = \mathbb{Z}[\mathcal{Y}] / I_{M,B,T}$ and the partial field

$$\mathbb{P}_M = (\mathbb{B}_M, \langle \{-1\}\cup \{ i_{B'} : B' \text{ basis of } M\}\rangle),$$

where $\langle\cdot\rangle$ denotes “multiplicative group generated by”. Note that this formalism is nothing other than introducing notation for the steps you would already take to produce a representation of a given abstract matroid. We have the following nice properties (which I won’t prove):

**Proposition 5.** *Let $M$ be a matroid and $\mathbb{P}_M$, $\hat A$, etc. be as above.*

- *The partial field $\mathbb{P}_M$ does not depend on the choice of $B$ or $T$.*
- *If $\mathbb{P}_M$ is not the trivial partial field (that is, if $1 \neq 0$), then $M$ is represented over $\mathbb{P}_M$ by the image $A$ of $\hat A$ under the obvious map from $\mathbb{Z}[\mathcal{Y}]$ to $\mathbb{Z}[\mathcal{Y}] / I_{M,B,T}$.*
- *If $M$ is represented over a (partial) field $\mathbb{P}$ by a matrix $A'$, then there is a partial field homomorphism $\phi:\mathbb{P}_M\to\mathbb{P}$ with $\phi(A) = A'$.*

Here a *partial field homomorphism* $\phi: (R_1, G_1) \to (R_2, G_2)$ is a ring homomorphism $\phi:R_1 \to R_2$ such that $\phi(G_1) \subseteq G_2$. Such homomorphisms preserve matroids, i.e. $M[A] = M[\phi(A)]$ if $A$ is a $\mathbb{P}$-matrix. The third property earns $\mathbb{P}_M$ the name *universal* partial field: *every* representation of $M$ can be obtained from it!
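To make the construction above concrete, here is a small SymPy sketch of the determinant relations that generate $I_{M,B,T}$, for a toy rank-$2$ matroid on $E=\{1,2,3,4\}$ in which $3$ and $4$ are parallel (so $\{3,4\}$ is the only $2$-element nonbasis). The matroid, the variable names, and the omission of the $i_{B’}$ and scaling relations are my own simplifications for illustration.

```python
# Sketch: the determinant generators of I_{M,B,T} for a toy matroid.
# Basis B = {1,2}; variables a_xy for x in B, y in E - B.
from itertools import combinations
import sympy as sp

a13, a14, a23, a24 = sp.symbols('a13 a14 a23 a24')
Ahat = sp.Matrix([[1, 0, a13, a14],
                  [0, 1, a23, a24]])   # [I | D-hat] with respect to B = {1,2}

nonbases = [(3, 4)]                    # 3 and 4 are parallel
bases = [Z for Z in combinations(range(1, 5), 2) if Z not in nonbases]

def subdet(Z):
    """Determinant of the columns of Ahat indexed by Z (1-based)."""
    return Ahat[:, [z - 1 for z in Z]].det()

# Relations forcing nonbases to be dependent: det(Ahat[Z]) = 0.
nonbasis_relations = [subdet(Z) for Z in nonbases]
# For each basis Z, det(Ahat[Z]) must become invertible; in the ideal this
# would be encoded as det(Ahat[Z]) * i_Z - 1 for a fresh variable i_Z.
basis_dets = {Z: subdet(Z) for Z in bases}

print(nonbasis_relations)   # the single relation a13*a24 - a14*a23
print(basis_dets[(1, 3)])   # a23, which must therefore be invertible
```

Quotienting by these relations (plus the $i_{B’}$ and scaling relations) is exactly the passage to $\mathbb{B}_M$ described above.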

The *set of characteristics* of a matroid $M$ is the set of all $p$ (where $p = 0$ or $p$ is prime) such that $M$ has a representation over some field of characteristic $p$. It is known that the set of characteristics can be any finite subset not containing $0$, and any infinite subset containing $0$ (see [Oxl11, Section 6.8]). In [BV03], Baines and Vámos gave an algorithm to compute the set of characteristics from the ideal $I_{M,B,T}$ defined above. A brief summary is as follows:

- Compute a Gröbner basis $G$ over the integers for $I_{M,B,T}$.
- If $G$ contains $1$, then $M$ is not representable.
- If $G$ contains a constant $k > 1$, then the prime factors of $k$ are exactly the characteristics over which $M$ is representable.
- Otherwise, let $\gamma$ be the least common multiple of the leading coefficients of the polynomials in $G$. Compute the Gröbner basis for the ideal generated by $G \cup \{\gamma\}$, and let $k$ be its constant member. Then $M$ is representable over characteristic $0$ and all prime characteristics, *except* those dividing $\gamma$ but not $k$.

Note that Gröbner bases are notoriously expensive to compute and can take up huge amounts of memory, but for small examples this works well. The Gröbner basis generates the same ideal as the one we started with, and these computations often give a simpler representation of the partial field.
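As a down-to-earth illustration of how integer arithmetic detects characteristics (with the Gröbner basis step done by hand), consider the Fano matroid with its usual $0/1$ representation $[I\ D]$ viewed over the integers: every nonbasis determinant is divisible by $2$, and one of them is exactly $\pm 2$, which is what forces characteristic $2$. This is only a sanity check I wrote for this post, not the algorithm of [BV03]:

```python
# Integer 3x3 subdeterminants of the standard Fano representation.
from itertools import combinations
from functools import reduce
from math import gcd

# Columns of [I | D] for the Fano matroid F_7, as integer vectors.
cols = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]

def det3(a, b, c):
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
            - a[1] * (b[0]*c[2] - b[2]*c[0])
            + a[2] * (b[0]*c[1] - b[1]*c[0]))

# The 3-point lines (nonbases) of F_7, with 1-based column labels.
lines = [{1,2,4}, {1,3,5}, {2,3,6}, {1,6,7}, {2,5,7}, {3,4,7}, {4,5,6}]

nonbasis_dets = [det3(*(cols[i-1] for i in sorted(L))) for L in lines]
nonzero = [abs(d) for d in nonbasis_dets if d != 0]

# Every nonbasis determinant is 0 or +-2, so all of them vanish mod p
# exactly when p = 2 ...
print(nonbasis_dets)
print(reduce(gcd, nonzero))   # 2

# ... while every basis keeps an odd (hence nonzero mod 2) determinant.
assert all(det3(*(cols[i-1] for i in Z)) % 2 == 1
           for Z in combinations(range(1, 8), 3) if set(Z) not in lines)
```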

The universal partial field of $\text{PG}(2,q)$ is $\text{GF}(q)$. The non-Fano matroid and $P_8$ both have the *dyadic* partial field as their universal partial field, as do the ternary Dowling geometries. The *Betsy Ross* matroid can be shown to yield the *Golden Ratio* partial field (that is, it is representable over $\text{GF}(4)$ and $\text{GF}(5)$), and the matroid represented by the following diagram is representable over the partial field $\mathbb{P}_4$; it is the source of Conjecture 15 in Part I. The matroid was obtained from Gordon Royle, out of his database of matroids with up to 9 elements, through a query regarding matroids with only one representation over $\text{GF}(5)$.

Many people have devoted attention to the theory of generic representations of a matroid. In [PvZ10] we discuss a construction of the universal partial field that does not rely on the choice of a special basis and scaling set. This construction is built on the idea of a *bracket ring*, introduced by White [Whi75], where we introduce a variable for each $r$-tuple of elements of $E$, followed by relations that make these $r$-tuples behave like determinants (alternating, 0 for repeated columns, 0 for nonbases, and relations encoding basis exchange). In addition to White’s construction we introduce symbols to make the bracket of each basis invertible.

Dress and Wenzel [DW89] introduce the *Tutte Group*, which again introduces a symbol for each basis, but only takes *multiplicative* relations into account. This group has received a considerable amount of attention. In particular the *torsion* of this group can give information on characteristics over which $M$ is not representable. Dress and Wenzel give a number of constructions of their group, based on various matroid axiomatizations.

I’ll conclude with some (computational) questions I’ve recently been asking myself.

- Can we employ scaling techniques as in the definition of $\mathbb{P}_M$ to obtain a (quotient of the) Tutte-group that is faster to compute?
- Can we combine computations of the Tutte-group with Gröbner basis techniques for a faster computation of the set of characteristics, or to determine whether $M$ is representable over any given finite field?

Unfortunately, $M$ is not always uniquely representable over $\mathbb{P}_M$ (up to partial field automorphisms, row- and column-scaling). The only obstacles I know involve partial fields $\mathbb{P}_M$ with an infinite set of cross ratios, such as the “near-regular-mod-2” partial field. Perhaps unique representability is recovered when the set of cross ratios is finite?

[BV03] R. Baines, P. Vámos, *An algorithm to compute the set of characteristics of a system of polynomial equations over the integers.* J. Symbolic Comput. 35, pp. 269-279 (2003).

[DW89] A.W.M. Dress, W. Wenzel, *Geometric Algebra for Combinatorial Geometries. *Adv. in Math. 77, pp. 1-36 (1989).

[Oxl11] J. Oxley, *Matroid Theory. Second Edition. *Oxford University Press (2011).

[PvZ10] R.A. Pendavingh, S.H.M. van Zwam, *Confinement of matroid representations to subsets of partial fields. *J. Comb. Th. Ser. B, vol. 100, pp. 510-545 (2010).

[Whi75] N.L. White. *The bracket ring of a combinatorial geometry. I. *Trans. Amer. Math. Soc. 214, pp. 233-248 (1975).

[vZ09] S.H.M. van Zwam, *Partial Fields in Matroid Theory. *PhD Thesis, Eindhoven University of Technology (2009).


This semester, I’ve been doing a reading course with Stefan van Zwam, the bulk of which involved reading interesting papers about tangles. This post highlights some of my favorite ideas so far. We start with an example that essentially comes from Reinhard Diestel and Geoff Whittle’s paper “Tangles and the Mona Lisa” [1]. The goal is to illustrate the intuition behind some of the definitions related to tangles. Precise definitions for connectivity systems and tangles as related to pictures can be found in [1]. Here is a picture of my brother Daniel and me on a ferry outside Seattle.

If this picture were printed out and cut into two pieces, then we could sometimes decide that one piece was less important than the other. For instance, if we cut along the red lines below, it is clear that the center piece, while missing some of the beautiful scenery, is the important part of the picture from the perspective of capturing the human subjects of the picture.

If, however, we cut the picture with a squiggly cut, as below, then perhaps neither piece is unimportant. A *connectivity system* $K=(E(K),\lambda_K)$ consists of a finite set, $E(K)$, together with a *connectivity function* $\lambda_K:2^{E(K)}\rightarrow\mathbb{R}$ that gives us a way to rank the size of cuts. As such, we expect $\lambda_K(X)=\lambda_K(E(K)-X)$ for all subsets $X$ of $E(K)$; that is, we expect $\lambda_K$ to be *symmetric*. We also want our connectivity function to be *submodular*, that is, $\lambda_K(X)+\lambda_K(Y)\geq \lambda_K(X\cap Y)+\lambda_K(X\cup Y)$ for all subsets $X$ and $Y$ of $E(K)$. Any function which is symmetric and submodular is a connectivity function.

If $\lambda_1$ and $\lambda_2$ are connectivity functions on the same set, then it is straightforward to check that $\lambda_1+\lambda_2$ is one as well and, for a positive constant $c$, so are $c\cdot\lambda_1$ and $\lambda_1+c$. If $K=(E,\lambda_K)$ and $K’=(E,\lambda_{K’})$ are connectivity systems, then we say that $K’$ is a *tie breaker* for $K$ if $\lambda_{K’}(X)<\lambda_{K’}(Y)$ whenever $\lambda_K(X)<\lambda_K(Y)$, and $\lambda_{K’}(X)\not=\lambda_{K’}(Y)$ unless $X=Y$ or $X=E-Y$. Geelen, Gerards, and Whittle’s “Tangles, tree-decompositions and grids in matroids” [2] proves the following.
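These definitions are easy to check by brute force on a small example. The snippet below (my own sanity check, not from [2]) verifies that $\lambda(X)=r(X)+r(E-X)-r(M)$, which we will see below is the usual connectivity function of a matroid, is symmetric and submodular for the uniform matroid $U_{2,4}$:

```python
# Brute-force check that lambda(X) = r(X) + r(E-X) - r(M) is a
# connectivity function for the uniform matroid U_{2,4}.
from itertools import combinations

E = frozenset(range(4))
r = lambda X: min(len(X), 2)                  # rank function of U_{2,4}
lam = lambda X: r(X) + r(E - X) - r(E)

subsets = [frozenset(c) for k in range(5) for c in combinations(E, k)]

# Symmetric: lambda(X) = lambda(E - X).
assert all(lam(X) == lam(E - X) for X in subsets)
# Submodular: lambda(X) + lambda(Y) >= lambda(X & Y) + lambda(X | Y).
assert all(lam(X) + lam(Y) >= lam(X & Y) + lam(X | Y)
           for X in subsets for Y in subsets)
# Closure under addition: lam + lam is again symmetric and submodular.
mu = lambda X: lam(X) + lam(X)
assert all(mu(X) + mu(Y) >= mu(X & Y) + mu(X | Y)
           for X in subsets for Y in subsets)
print("lambda is a connectivity function on U_{2,4}")
```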

**Lemma 1** *All connectivity functions have tie breakers.*

*Proof.* Let $K=(E,\lambda)$ be a connectivity system. We can assume that $E=\{1, 2, \ldots, n\}$. Define $\lambda_L$ by $\lambda_L(X)=\sum_{i\in X}2^i$ for $X\subseteq \{1, 2,\ldots, n-1\}$, and $\lambda_L(X)=\lambda_L(E-X)$ for $X$ containing $n$. We need to show that $\lambda_L$ is submodular. If $X,Y\subseteq\{1, 2,\ldots, n-1\}$, then

\begin{align*}

\lambda_L(X)+\lambda_L(Y) & =\sum_{i\in X}2^i+\sum_{i\in Y}2^i=\sum_{i\in X\cap Y}2^i+\sum_{i\in X-Y}2^i+\sum_{i\in Y}2^i=\sum_{i\in X\cap Y}2^i+\sum_{i\in X\cup Y}2^i\\

& =\lambda_L(X\cap Y)+\lambda_L(X\cup Y).

\end{align*}

If $X\subseteq\{1,2,\ldots,n-1\}$ and $n\in Y$, then

\begin{align*}

\lambda_L(X)+\lambda_L(Y) & =\sum_{i\in X}2^i+\sum_{i\not\in Y}2^i=\sum_{i\in X\cap Y}2^i+\sum_{i\in X-Y}2^i+\sum_{i\in X-Y}2^i+\sum_{i\not\in X\cup Y}2^i\\

& \geq\sum_{i\in X\cap Y}2^i+\sum_{i\not\in X\cup Y}2^i=\lambda_L(X\cap Y)+\lambda_L(X\cup Y).

\end{align*}

The case where $n\in X$ and $Y\subseteq\{1,2,\ldots,n-1\}$ is symmetric. If $n\in X$ and $n\in Y$, then

\begin{align*}

\lambda_L(X)+\lambda_L(Y) & =\lambda_L(E-X)+\lambda_L(E-Y)\\

& =\lambda_L((E-X)\cap(E-Y))+\lambda_L((E-X)\cup(E-Y))\\

& =\lambda_L(E-(X\cup Y))+\lambda_L(E-(X\cap Y))=\lambda_L(X\cup Y)+\lambda_L(X\cap Y).

\end{align*}

Thus $\lambda_L$ is a connectivity function. Now let $\lambda'(X)=2^n\cdot\lambda(X)+\lambda_L(X)$. It is straightforward to check that $K’=(E,\lambda’)$ is a tie breaker for $K$.

$\square$
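The construction in this proof is easy to test by brute force. The following check (mine, for illustration) takes the connectivity function of $U_{2,4}$ on $E=\{1,2,3,4\}$ and verifies that $\lambda’=2^n\cdot\lambda+\lambda_L$ is a connectivity function that breaks all ties:

```python
# Verify the tie breaker lambda' = 2^n * lambda + lambda_L from the proof,
# for the connectivity function of U_{2,4} on E = {1,...,4} (so n = 4).
from itertools import combinations

n = 4
E = frozenset(range(1, n + 1))
r = lambda X: min(len(X), 2)
lam = lambda X: r(X) + r(E - X) - r(E)

def lam_L(X):
    if n in X:
        X = E - X
    return sum(2 ** i for i in X)

lam2 = lambda X: 2 ** n * lam(X) + lam_L(X)

subsets = [frozenset(c) for k in range(n + 1) for c in combinations(E, k)]

# lambda' is symmetric and submodular ...
assert all(lam2(X) == lam2(E - X) for X in subsets)
assert all(lam2(X) + lam2(Y) >= lam2(X & Y) + lam2(X | Y)
           for X in subsets for Y in subsets)
# ... respects strict inequalities of lambda ...
assert all(lam2(X) < lam2(Y)
           for X in subsets for Y in subsets if lam(X) < lam(Y))
# ... and takes equal values only on complementary pairs.
assert all(X == Y or X == E - Y
           for X in subsets for Y in subsets if lam2(X) == lam2(Y))
print("lambda' is a tie breaker")
```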

A *tangle* can be thought of as a collection $\mathcal{T}$ of less important (or *small*) pieces. So far, we expect that the tangle should only make decisions about relatively simple cuts, and that the tangle should make a decision for all of the simple cuts. If we decide that the two pieces cut out of the picture below are both small, then we don’t want the part of the picture that is left to be contained in a small piece. More precisely, if $K$ is a connectivity system, then a collection $\mathcal{T}$ of subsets of $E(K)$ is a tangle of order $\theta$ in $K$ if

- (T1) $\lambda_K(A) < \theta$, for all $A \in \mathcal{T}$.
- (T2) For each separation $(A,B)$ of order less than $\theta$, either $A\in\mathcal{T}$ or $B\in\mathcal{T}$.
- (T3) If $A,B,C\in\mathcal{T}$, then $A\cup B\cup C \not=E(K)$.
- (T4) $E(K)-e$ is not in $\mathcal{T}$, for each $e\in E(K)$.
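Taking these axioms at face value, one can enumerate all tangles of a small connectivity system by brute force. For the connectivity function of $U_{2,4}$ (my example, not one from [1] or [2]) and $\theta=2$, the search below finds exactly one tangle: the empty set together with the four singletons.

```python
# Brute-force enumeration of the order-2 tangles of U_{2,4},
# using (T1)-(T4) exactly as stated above.
from itertools import combinations

E = frozenset(range(4))
r = lambda X: min(len(X), 2)
lam = lambda X: r(X) + r(E - X) - r(E)

subsets = [frozenset(c) for k in range(5) for c in combinations(E, k)]

def is_tangle(T, theta):
    T = set(T)
    if any(lam(A) >= theta for A in T):                        # (T1)
        return False
    for A in subsets:                                          # (T2)
        if lam(A) < theta and A not in T and (E - A) not in T:
            return False
    for A in T:                                                # (T3)
        for B in T:
            for C in T:
                if A | B | C == E:
                    return False
    return all(E - frozenset([e]) not in T for e in E)         # (T4)

theta = 2
small = [A for A in subsets if lam(A) < theta]   # the only possible members
tangles = [T for k in range(len(small) + 1)
           for T in combinations(small, k) if is_tangle(T, theta)]
print(len(tangles))   # 1: the empty set plus the four singletons
```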

For some types of cuts, there might be different opinions about which part is less important. For instance, in the following picture, the matroid community would probably say that the part containing my brother was slightly less important, because the other half contains someone who at least knows the definition of a matroid, while my brother’s company would probably say that the part that contains me is slightly less important.

My mother would not consider either part unimportant, as we are both somewhat meaningful to her. Since the tangle induced by the opinion of the matroid community disagrees with the one induced by the opinion of Daniel’s company about which half of the separation above is less important, that separation is called a *distinguishing separation*. Since the tangle induced by my mother’s opinions (that is, a piece is only unimportant if it avoids me and avoids Daniel) is a subset of the matroid community tangle (where a piece is unimportant if it avoids me) and a subset of the company tangle (where a piece is unimportant if it avoids Daniel), it is called a *truncation* of either of them. That is, $\mathcal{T}’$ is a *truncation* of $\mathcal{T}$ if $\mathcal{T}’\subseteq\mathcal{T}$. If a tangle is not a proper truncation of another tangle, then it is a *maximal* tangle.

The above definition of a connectivity function includes the usual connectivity function of a matroid $M$, namely, $\lambda(X)=r(X)+r(E-X)-r(M)$. One of the main results of [2] gives us information about maximal tangles in a matroid. Before we state the theorem, we need to introduce a bit more notation. If $K=(E,\lambda)$ is a connectivity system, $T$ is a tree, and $\mathcal{P}$ is a function from $E$ to $V(T)$, then we say that $(T,\mathcal{P})$ is a *tree decomposition* of $E$. For a subset $X$ of $V(T)$, we define $\mathcal{P}(X)=\{e:\mathcal{P}(e)\in X\}$, and for a subtree $T’$ of $T$, we define $\mathcal{P}(T’)=\mathcal{P}(V(T’))$. We say that a separation $(A,B)$ of $K$ is *displayed* by $(T,\mathcal{P})$ if $A=\mathcal{P}(T’)$ for some subtree $T’$ of $T$ obtained by deleting an edge of $T$ and letting $T’$ be one of the resulting connected components. The order of a separation $(A,B)$ is $\lambda(A)=\lambda(B)$.

**Theorem 1**

*Let $K$ be a connectivity system, and let $\mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_n$ be maximal tangles in $K$. Then there is a tree decomposition $(T,\mathcal{P})$ of $E(K)$ with $V(T)=[n]$ such that the following hold.*

- For each $i\in V(T)$ and $e\in E(T)$, if $T’$ is the component of $T-e$ containing $i$, then $\mathcal{P}(T’)$ is not in $\mathcal{T}_i$.
- For each pair of distinct vertices $i$ and $j$ of $T$, there is a minimum-order distinguishing separation for $\mathcal{T}_i$ and $\mathcal{T}_j$ displayed by $T$.

Before we sketch a proof of this theorem, we give an example that illustrates the statement of the theorem.

Consider the matroid $M=([12],\mathcal{I})$ illustrated above. We first need to determine what the maximal tangles are. Let $S_1=\{9,10\}$, $S_2=\{11,12\}$, and $S_3=[8]$. We will show that an order-1 tangle of $M$ is the union of $\{S_i,S_j,S_i\cup S_j\}$ and the set of singletons, where $i$ and $j$ are distinct members of $\{1,2,3\}$.

Let $\mathcal{T}$ be an order-1 tangle of $M$. The sets with connectivity 1 consist of the 12 singletons, their complements, and $S_1$, $S_2$, $S_3$, and their complements. By (T3), at most two of $S_1, S_2, S_3$ are in $\mathcal{T}$. Assume that $\{i,j,k\}=\{1,2,3\}$, and that $S_k$ is not in $\mathcal{T}$. Since $(S_i\cup S_j,S_k)$ is an order-1 separation of $M$, by (T2) $S_i\cup S_j$ is in $\mathcal{T}$. By (T3) (and the fact that there are at least 3 elements of $\mathcal{T}$), we get that neither $S_j\cup S_k$ nor $S_i\cup S_k$ is in $\mathcal{T}$, so by (T2), $S_i$ and $S_j$ are in $\mathcal{T}$.

Since singletons must all be in every tangle of order at least 1, the result follows.

The sets with connectivity 2 either contain $\{1,2,3,4\}$ and avoid $\{5,6,7,8\}$ or contain $\{5,6,7,8\}$ and avoid $\{1,2,3,4\}$. Arguing as above, we get that the order-2 tangles of $M$ are $\{X:\lambda(X)\leq 2 \text{ and } S\not\subseteq X\}$ for $S\in\{\{1,2,3,4\}, \{5,6,7,8\}\}$. We note that the order-1 tangle with $\{S_i,S_j\}=\{S_1,S_2\}$ is a truncation of both of the order-2 tangles, and that the other order-1 tangles can be represented as $\{X:\lambda(X)\leq 1 \text{ and } \{1,2, 3, 4\}\not\subseteq X\}$ and $\{X:\lambda(X)\leq 1 \text{ and } \{5,6, 7, 8\}\not\subseteq X\}$. Since there are no separations of order 3 or more in $M$, the four maximal tangles of $M$ are $\mathcal{T}_S=\{X:\lambda(X)\leq \lambda(S) \text{ and } S\not\subseteq X\}$, for $S\in\{\{1,2,3,4\},\{5,6,7,8\},\{9,10\},\{11,12\}\}$. For $S \in \{\{1,2,3,4\}, \{9,10\},\{11,12\}\}$ and any other $S’$ among these four sets, a minimum-order distinguishing separation for $\mathcal{T}_S$ and $\mathcal{T}_{S’}$ is $(S,[12]-S)$, which is displayed by the tree, $T$, below.

The above argument showing that if $S_i\cup S_j$ is in $\mathcal{T}$, then each of $S_i$ and $S_j$ are in $\mathcal{T}$, can be generalized to show the following.

**Lemma 2**

*If $A$ is in a tangle $\mathcal{T}$ of $K$ of order $\theta$, and $B\subseteq A$ with $\lambda_K(B)\leq\theta$, then $B\in\mathcal{T}$.*

To prove Theorem 1, we actually prove the following slightly stronger result also from [2].

**Theorem 2**

*Let $K$ be a connectivity system, and let $\mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_n$ be tangles in $K$, none a truncation of another. Then there is a tree decomposition $(T,\mathcal{P})$ of $E(K)$ with $V(T)=[n]$ such that the following hold.*

- For each $i\in V(T)$ and $e\in E(T)$, if $T’$ is the component of $T-e$ containing $i$, then $\mathcal{P}(T’)$ is not in $\mathcal{T}_i$.
- For each pair of distinct vertices $i$ and $j$ of $T$, there is a minimum-order distinguishing separation for $\mathcal{T}_i$ and $\mathcal{T}_j$ displayed by $T$.

**Proof Sketch**

If $K$ is a connectivity system and $K’$ is a tie breaker for $K$, then it is easy to see that a tangle $\mathcal{T}$ of $K$ is also a tangle of $K’$, so it is safe to assume that $K$ is its own tie breaker.

Since $K$ is its own tie breaker, for each distinct $i$ and $j$ in $[n]$, there is a minimum-order separation $(X_{ij},Y_{ij})$ of $K$ distinguishing $\mathcal{T}_i$ and $\mathcal{T}_j$, where $X_{ij}\in\mathcal{T}_i$. Using some well known results (whose statements require terminology which we haven’t introduced), it can be shown that there is a tree decomposition $(T,\mathcal{P})$ of $E(K)$ such that each separation $(X_{ij},Y_{ij})$ is displayed by $(T,\mathcal{P})$, that these are the only separations displayed by $(T,\mathcal{P})$, and that each edge of $T$ displays a proper and distinct separation. It remains to show that there is a bijection from $\{\mathcal{T}_1,\ldots,\mathcal{T}_n\}$ to the vertices of $T$ satisfying the conclusion of Theorem 2.

For $i\in[n]$, let $\chi_i$ be the set of subsets $X$ of $V(T)$ such that $E(K)-\mathcal{P}[X]\in\mathcal{T}_i$ and such that $(E(K)-\mathcal{P}[X],\mathcal{P}[X])$ is displayed by $(T,\mathcal{P})$. Each member of $\chi_i$ induces a subtree of $T$, and, by (T3), each two members of $\chi_i$ intersect, so the intersection of the members of $\chi_i$ is non-empty. Call this intersection $V_i$. Each edge of $T$ that leaves $V_i$ displays a separation $(A,B)$ with $\mathcal{P}[V_i]\subseteq A$ and $B\in\mathcal{T}_i$. The proof concludes by showing that the $V_i$’s partition the vertices of $T$ into singletons, since this shows that $|V(T)|=n$, and since vertices in $V_i$ satisfy the first condition of the theorem.

For each $i\not=j$, the set $\mathcal{P}(V_i)$ is contained in $Y_{ij}$ and the set $\mathcal{P}(V_j)$ is contained in $Y_{ji}=X_{ij}$, so the sets $V_1, \ldots, V_n$ are disjoint.

Suppose that $w\in V_i$. We need to show that $\{w\}=V_i$. Choose the edge $e$ incident with $w$ that displays the separation $(X_{ij},Y_{ij})$ of strictly largest order. Since we are assuming that $K$ is its own tie breaker, there is a strictly largest order. Now, each of the other edges incident with $w$ displays a separation of order less than the order displayed by $e$, and hence, less than the orders of $\mathcal{T}_i$ and $\mathcal{T}_j$. Then, by the definition of $(X_{ij},Y_{ij})$, none of these separations distinguish $\mathcal{T}_i$ and $\mathcal{T}_j$. By the definition of $V_i$ and the fact that $X_{ij}$ is in $\mathcal{T}_i$, we know that $\mathcal{P}(w)$ is not a subset of $X_{ij}$, so it is a subset of $Y_{ij}$. Then by Lemma 2, for each of the separations displayed by the edges other than $e$ incident to $w$, the set not containing $\mathcal{P}(w)$ is in $\mathcal{T}_j$, and hence it is in $\mathcal{T}_i$. Thus, $V_i=\{w\}$.

$\square$

The complete details of the last proof can be found in [2] along with a corollary which bounds the number of maximal tangles.

As hinted at in the example, tangles can end up pointing to “highly” connected regions of the matroid. This is useful in structure theory because the highly connected regions usually contain the structure that we care about, and a tangle can be used to identify where deletions and contractions can be made while preserving the structure. This idea is utilized, for example, in [3, 4]. Tangles in general grew out of the definition of tangles of graphs [5], which were used in the proof that finite graphs are well-quasi-ordered by the minor relation.

[1] R. Diestel, and G. Whittle, Tangles and the Mona Lisa, arXiv:1603.06652v2 [math.CO]

[2] J. Geelen, B. Gerards, and G. Whittle, Tangles, tree-decompositions and grids in matroids, J. Combin. Theory Ser. B 99 (2009) 657-667

[3] J. Geelen, and S. van Zwam, Matroid 3-connectivity and branch width, arXiv:1107.3914v2 [math.CO]

[4] P. Nelson, and S. van Zwam, Matroids representable over fields with a common subfield, arXiv:1401.7040v1 [math.CO]

[5] N. Robertson and P. D. Seymour, Graph minors, X. Obstructions to tree-decomposition. J. Combin. Theory Ser. B, 52(2):153-190, 1991.

In his talk at the recent workshop in Eindhoven, Immanuel Albrecht noted that each matroid in the appendix of examples in James Oxley’s book *Matroid Theory* is designated as either being a gammoid or not, except for $P_8^=$. In this post, we show that $P_8^=$ is not a gammoid. The ideas used may apply more widely.

Gammoids are minors of transversal matroids, so in this section, we sketch the items about transversal matroids and their duals that we use. Those who know the characterization of transversal and cotransversal matroids in Theorems 2 and 3 might prefer to omit this introductory section.

We start with a bipartite graph with vertex classes $S$ and $T$. We will use the example below, with $S=\{a,b,c,d,s,t,u\}$.

A *partial transversal* is a subset of $S$, such as $\{a,b,u\}$ above, that can be matched with a subset of the vertices in $T$ via edges in the graph. The partial transversals are the independent sets of the resulting *transversal matroid*.
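Deciding whether a given subset of $S$ is a partial transversal is a bipartite matching computation. Here is a minimal augmenting-path matcher on a made-up bipartite graph (not the one in the figure, whose edge set I won't reproduce here):

```python
# Independence in a transversal matroid = existence of a matching.
def max_matching(adj, X):
    """Size of a maximum matching of the vertices in X into T."""
    match = {}                                 # T-vertex -> S-vertex

    def augment(x, seen):
        for t in adj.get(x, ()):
            if t not in seen:
                seen.add(t)
                if t not in match or augment(match[t], seen):
                    match[t] = x
                    return True
        return False

    return sum(augment(x, set()) for x in X)

def independent(adj, X):
    """X is a partial transversal iff all of X can be matched at once."""
    return max_matching(adj, X) == len(X)

# Toy example: a and b compete for 1; b, c and d share 2.
adj = {'a': [1], 'b': [1, 2], 'c': [2], 'd': [2]}
print(independent(adj, {'a', 'b'}))       # True:  a-1, b-2
print(independent(adj, {'a', 'c', 'd'})) # False: c and d compete for 2
```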

For $X\subseteq S$, let $N_G(X)$ denote the set of neighbors of $X$ in $G$, that is,

$$N_G(X) = \{v\in T\,:\, v \text{ is adjacent to at least one } x\in X\}.$$ A set $X$ in a matroid $M$ is *cyclic* if $X$ is a union of circuits, that is, $M|X$ has no coloops. The cyclic flats in the example are $\emptyset$, $\{a,c,s,t\}$, $\{b,d,t,u\}$, $\{a,d,s,u\}$, $\{b,c,s,u\}$, and $S$. In this example, the rank of each cyclic flat ($0$, $3$, $3$, $3$, $3$, and $4$, respectively) is its number of neighbors. This illustrates the lemma below.

**Lemma 1.** *Let $G$ be a bipartite graph on $S\cup T$ that represents $M$. If $X$ is a cyclic set of rank $k$ in $M$, then $|N_G(X)|=k$.*

*Proof.* A basis of $X$ can be matched to the vertices in some $k$-element subset $T’$ of $T$. Let $G’$ be the induced subgraph of $G$ on $S\cup T’$, so $G’$ gives a rank-$k$ transversal matroid $M’$ on $S$, and $r_{M’}(X)=k$.

We first show that $M’|X=M|X$. Independent sets in $M’|X$ are independent in $M|X$. If the converse failed, then some circuit $C$ of $M’|X$ would be independent in $M|X$. Since $C$ can be matched in $G$ but not in $G’$, some $a\in C$ is adjacent to a vertex $v\in T-T’$. Extend $C-a$ to a basis $B$ of $M’|X$. Now $r_{M’}(X)=k$, so matching $B$ with the vertices in $T’$ and $a$ with $v$ gives the contradiction $r_M(X)>k$. Thus, $M’|X=M|X$.

We now get that $N_G(X)=T’$, for if instead some $b\in X$ were adjacent to some vertex $v\in T-T’$, then the same type of matching argument, using a basis $B$ of $M|X$ with $b\not\in B$ (which exists since $b$ is not a coloop of $M|X=M’|X$), would yield a contradiction. Thus, $|N_G(X)|=k$. $\square$

This proof adapts to show that we can always choose $G$ so that $|T|=r(M)$.
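Lemma 1 is also easy to test exhaustively on a small instance. The bipartite graph below is again made up for illustration (it is not the pictured one); for every cyclic set $X$ the check confirms $|N_G(X)| = r(X)$:

```python
# Check |N_G(X)| = r(X) for every cyclic set X of a small
# transversal matroid on S = {1,...,5} with T = {'A','B'}.
from itertools import combinations

adj = {1: ['A'], 2: ['A'], 3: ['B'], 4: ['B'], 5: ['A', 'B']}
S = set(adj)

def rank(X):
    """Rank = size of a maximum matching of X into T."""
    match = {}
    def augment(x, seen):
        for t in adj[x]:
            if t not in seen:
                seen.add(t)
                if t not in match or augment(match[t], seen):
                    match[t] = x
                    return True
        return False
    return sum(augment(x, set()) for x in X)

def neighbours(X):
    return set().union(*(adj[x] for x in X)) if X else set()

def is_cyclic(X):
    # X is cyclic iff M|X has no coloops: deleting any x preserves the rank.
    return all(rank(X - {x}) == rank(X) for x in X)

for k in range(len(S) + 1):
    for c in combinations(S, k):
        X = set(c)
        if is_cyclic(X):
            assert len(neighbours(X)) == rank(X)
print("Lemma 1 verified for this example")
```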

The bipartite graph $G$ on $S\cup T$ is an induced subgraph of a bipartite graph $G’$ on $S’\cup T$ in which, for each $x\in T$, there is a $y\in S’$ with $N_{G’}(y)=\{x\}$. Such an extension of our example above is shown below; the red vertices have been added.

The resulting matroid $M’$ is an extension of $M$.

Pick a subset $B$ of $S’$ (for example, the red and green vertices above) for which, for each $x\in T$, there is one $y\in B$ with $N_{G’}(y)=\{x\}$. Clearly $B$ is a basis of $M’$. Moreover, if $X$ is a cyclic flat of $M’$, then $X\cap B$ is a basis of $X$ since $r(X) = |N_{G’}(X)|$ by Lemma 1, so the flat $X$ must contain the elements of $B$ that are adjacent to the vertices in $N_{G’}(X)$. Thus, $r_{M’}(X) = |X\cap B|$. It is not hard to show, more generally, that for any cyclic flats $X_1,X_2,\ldots,X_n$ of $M’$,

\begin{equation*}

r_{M’}(X_1\cup X_2\cup \cdots \cup X_n )= \bigl|B\cap (X_1\cup X_2\cup

\cdots \cup X_n)\bigr|

\end{equation*}

and

\begin{equation*}

r_{M’}(X_1\cap X_2\cap \cdots \cap X_n) = \bigl|B\cap (X_1\cap X_2\cap

\cdots \cap X_n)\bigr|.

\end{equation*}

From these equations and inclusion/exclusion, it follows that

\begin{equation*}

r_{M’}(X_1\cap X_2\cap \cdots \cap X_n) =

\sum_{\emptyset\neq J\subseteq[n]} (-1)^{|J|+1}r_{M’}\bigl(\bigcup_{j\in J}X_j\bigr).

\end{equation*}

A transversal matroid might not have the type of basis that we used to derive the equalities above, but we do get the inequality in Theorem 2 below. To see this, delete $S’-S$ to get the original transversal matroid $M$. The ranks of unions of cyclic flats of $M$ are the same as for their closures in $M’$, but the ranks of intersections may be less. This proves the half of Theorem 2 that we will use. (For a proof of the converse, see [1].) John Mason proved this result for cyclic sets, and Aubrey Ingleton refined it to cyclic flats. We let $\cup\mathcal{F}$ denote the union of a family of sets, and we use $\cap\mathcal{F}$ similarly.

**Theorem 2.** *A matroid $M$ is transversal if and only if for all nonempty sets $\mathcal{A}$ of cyclic flats of $M$,*

\begin{equation*}

r(\cap\mathcal{A})\leq \sum_{\mathcal{F}\subseteq \mathcal{A}} (-1)^{|\mathcal{F}|+1} r(\cup\mathcal{F}).

\end{equation*}

We will use the dual of this result, which we state next. Duals of transversal matroids are called *cotransversal matroids* or *strict gammoids*.

**Theorem 3.** *A matroid $M$ is cotransversal if and only if for all sets $\mathcal{A}$ of cyclic flats of $M$,*

\begin{equation}

r(\cup \mathcal{A}) \leq \sum_{\mathcal{F}\subseteq \mathcal{A}\,:\,\mathcal{F}\ne\emptyset}

(-1)^{|\mathcal{F}|+1}r(\cap \mathcal{F}).\qquad\qquad (1)

\end{equation}

Restrictions of transversal matroids are transversal, so any gammoid (a minor of a transversal matroid) is a contraction of a transversal matroid. The set we contract can be taken to be independent since $M/X =M\backslash (X-B)/B$ if $B$ is a basis of $M|X$, so any gammoid is a nullity-preserving contraction of a transversal matroid. The class of gammoids is closed under duality, so we get Lemma 4 below.

**Lemma 4.** *Any gammoid has a rank-preserving extension to a cotransversal matroid.*

Because of this lemma, below we focus exclusively on extensions that are rank-preserving.

We will also use the corollary below of the following theorem of Ingleton and Piff [2].

**Theorem 5.** *A matroid of rank at most three is a gammoid if and only if it is cotransversal.*

**Corollary 6.** *Let $M$ be a rank-$r$ matroid with $r\geq 3$. If $H_1$, $H_2$, $H_3$, and $H_4$ are cyclic hyperplanes, any two of which intersect in a flat of rank $r-2$, and each set of three or four of them intersects in the same flat $X$ of rank $r-3$, then $M$ is not a gammoid.*

*Proof.* In $M/X$, the sets $H_i-X$, for $1\leq i\leq 4$, are cyclic lines. (Contraction does not create coloops.) The rank of the union of these four cyclic lines is $3$, but the alternating sum in inequality (1) is $4\cdot 2 - 6\cdot 1 = 2$, so $M/X$ is not cotransversal by Theorem 3. Thus, since $r(M/X)=3$, it is not a gammoid by Theorem 5, so $M$ is not a gammoid. $\square$

To show that $P_8^=$ is not a gammoid, we focus on a particular failure of inequality (1) in $P_8^=$ and show that between $P_8^=$ and any extension $M’$ of $P_8^=$ in which the counterpart of that particular inequality holds, we have a single-element extension of $P_8^=$ to which Corollary 6 applies. Thus, any such $M’$ has a restriction that is not a gammoid, so $M’$ is not cotransversal, and so $P_8^=$ is not a gammoid by Lemma 4.

To define $P_8^=$, we first briefly discuss $P_8$, which we get by placing points at the vertices of a cube and twisting the top of the cube an eighth turn. Label the points in the top and bottom planes of $P_8$ as shown below (the second diagram shows the view from above). We get $P_8^=$ from $P_8$ by relaxing the circuit-hyperplanes $\{a,b,c,d\}$ and $\{s,t,u,v\}$.

From this, we see that the cyclic flats of $P_8^=$, besides the empty set and the whole set, are the following planes. In each, we put what we call its *diagonal* in bold.

$$\{\mathbf{a},\mathbf{c},u,v\}

\quad\{\mathbf{a},\mathbf{c},s,t\} \quad

\{\mathbf{b},\mathbf{d},s,v\}\quad

\{\mathbf{b},\mathbf{d},t,u\}$$

$$\{a,b,\mathbf{t},\mathbf{v}\}

\quad\{c,d,\mathbf{t},\mathbf{v}\}\quad

\{a,d,\mathbf{s},\mathbf{u}\}\quad

\{b,c,\mathbf{s},\mathbf{u}\}$$

Observe that each cyclic plane $X$ intersects five others in exactly two points (this includes all four cyclic planes that are not in the same row as $X$), and the remaining two cyclic planes in one point each (and those are different points). Thus, the number of sets of two cyclic planes that intersect in two points is $8\cdot 5/2 = 20$, and the number of sets of two cyclic planes that intersect in a single point is $8\cdot 2/2 = 8$. No triple of cyclic planes intersects in two points. Also, each point is in exactly four cyclic planes, so the number of sets of three planes that intersect in a singleton is $8\cdot 4=32$, and the number of sets of four planes that intersect in a singleton is $8$.

Let $\mathcal{A}$ be the set of all eight cyclic planes. The term on the left side of inequality (1) is $4$. On the right side, the counting in the previous paragraph gives $$8\cdot 3 - 20\cdot 2 - 8 + 32 - 8 = 0.$$ Thus, inequality (1) fails.
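The counting above can be double-checked mechanically from the list of cyclic planes. In the snippet below (a check I wrote for this post), the only matroid-theoretic input is that each plane has rank $3$ and that an intersection of two or more planes is a set of at most two points, whose rank is its size since $P_8^=$ is simple:

```python
# Verify the counts and the failed instance of inequality (1) for P_8^=.
from itertools import combinations
from functools import reduce

planes = [frozenset(p) for p in
          ['acuv', 'acst', 'bdsv', 'bdtu', 'abtv', 'cdtv', 'adsu', 'bcsu']]

def common(F):
    return reduce(frozenset.__and__, F)

pairs2 = sum(1 for F in combinations(planes, 2) if len(common(F)) == 2)
pairs1 = sum(1 for F in combinations(planes, 2) if len(common(F)) == 1)
triples1 = sum(1 for F in combinations(planes, 3) if len(common(F)) == 1)
quads1 = sum(1 for F in combinations(planes, 4) if len(common(F)) == 1)
print(pairs2, pairs1, triples1, quads1)   # 20 8 32 8

# Right-hand side of inequality (1) with A = all eight cyclic planes.
rhs = 0
for k in range(1, len(planes) + 1):
    for F in combinations(planes, k):
        rank = 3 if k == 1 else len(common(F))   # rank of the intersection
        rhs += (-1) ** (k + 1) * rank
print(rhs)   # 0, while the left-hand side is r(union) = 4
```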

Let $M’$ be a rank-preserving extension of $P_8^=$ in which the counterpart of this instance of inequality (1) holds. (If there is no such $M’$, then $P_8^=$ is not a gammoid, as we aim to show.) Think of constructing $M’$ by a sequence of single-element extensions. If each of these single-element extensions added a point to at most two of the cyclic planes of $P_8^=$, or parallel to a point of $P_8^=$, then the counterpart of this instance of inequality (1) would fail; thus, not all extensions are of these types. Focus on one point, say $e$, that is added to at least three cyclic planes of $P_8^=$ and not parallel to an element of $P_8^=$, and consider the single-element extension of $P_8^=$ formed by restricting $M’$ to $E(P_8^=)\cup e$. We show that, up to symmetry, there are only two such single-element extensions of $P_8^=$. As we see below, in both options, $e$ is added to exactly three cyclic planes, not more.

First consider adding $e$ to two cyclic planes that have the same diagonal, say, by symmetry, $\{a,c,u,v\}$ and $\{a,c,s,t\}$. We cannot add $e$ to $\{a,b,t,v\}$ since this plane intersects the other two in lines that share a point: adding $e$ to all three planes would give $r(\{a,v,e\})=2=r(\{a,t,e\})$, so either $e$ is parallel to $a$ (which we discarded) or $\{a,t,v\}$ would be a $3$-circuit, contrary to the structure of $P_8^=$. Similar reasoning eliminates adding $e$ to any plane in the second row. The only candidates left are $\{b,d,s,v\}$ and $\{b,d,t,u\}$, and we cannot add $e$ to both since then $\{a,c,e\}$ and $\{b,d,e\}$ would be lines that intersect in $e$, but $a,b,c,d$ are not coplanar.

Now assume that no two planes to which we add $e$ have the same diagonal. We must have a pair of sets in the same row but with different diagonals; by symmetry, we can use $\{a,c,u,v\}$ and $\{b,d,s,v\}$. An argument like the one above shows that the only other planes to which we can add $e$ are $\{a,d,s,u\}$ or $\{b,c,s,u\}$, and we cannot add $e$ to both since they have the same diagonal.

Thus, between $P_8^=$ and $M’$ we have a single-element extension $M$ of $P_8^=$ in which a point $e$ is added to exactly three of the original cyclic planes. By the argument above, up to symmetry, there are two cases to consider: the extended cyclic planes are either

- $\{a,c,u,v,e\}$, $\{a,c,s,t,e\}$, and $\{b,d,s,v,e\}$, or
- $\{a,c,u,v,e\}$, $\{b,d,s,v,e\}$, and $\{a,d,s,u,e\}$.

In either case, the cyclic planes of $M$ that contain $v$, that is, $$\{a,c,u,v,e\}, \quad \{b,d,s,v,e\},\quad \{a,b,t,v\}, \quad\{c,d,t,v\},$$ satisfy the hypothesis of Corollary 6, so $M$ is not a gammoid. Thus, $M’$ is not cotransversal, so $P_8^=$ is not a gammoid by Lemma 4. Indeed, we have the result below.

**Proposition 7.** *The matroid $P_8^=$ is an excluded minor for the class of gammoids.*

To prove this, first check that $P_8^=$ is self-dual. With that and the symmetry of $P_8^=$, it suffices to check that $P_8^=\backslash v$ is a gammoid. One can check that it is the transversal matroid in our example in the introductory section.

[1] J. Bonin, J.P.S. Kung, and A. de Mier, Characterizations of transversal and fundamental transversal matroids, *Electron. J. Combin.* **18** (2011) #P106.

[2] A.W. Ingleton and M.J. Piff, Gammoids and transversal matroids, *J. Combinatorial Theory Ser. B* **15** (1973) 51-68.

Rudi and Jorn wrote a nice post earlier this year about questions in asymptotic matroid theory, and beautiful new results they’ve obtained in this area. While reading one of their papers on this topic, I saw that they restated the conjecture that almost all matroids on $n$ elements are non-representable. This was first explicitly written down by Mayhew, Newman, Welsh and Whittle [1] but earlier alluded to by Brylawski and Kelly [2] (in fact, the latter authors claim that the problem is an ‘exercise in random matroids’ but give no clue how to complete it). Indeed I would argue that most of us would independently come up with the same conjecture after thinking about these questions for a few minutes; surely representable matroids are vanishingly rare among all matroids!

In any case, reading this conjecture reminded me that, like many ‘obvious’ statements in asymptotic matroid theory, it was still open, and seemed somewhat hard to approach with existing techniques. I’m happy to say that this is no longer the case; as it happened, I discovered a short proof that I will now give in this post. The proof is also on the arXiv [3]; there, it is written with the bounds as tight as possible. Here, I will relax the calculations a little to make the proof more accessible, and use more elementary bounds so the entire argument is self-contained. The theorem we prove is the following:

**Theorem:** For $n \ge 10$, the number of representable matroids with ground set $[n] = \{1,\dotsc,n\}$ is at most $2^{2n^3}$.

The number of matroids on $n$ elements is well-known to be doubly exponential in $n$, so the above gives the ‘almost all’ statement we need, in fact confirming the intuition that representable matroids are *extremely* rare among matroids in general. The bound in the proof can in fact be improved to something of the form $2^{n^3(1/4 + o(1))}$, and I believe the true count has this form; see [3] for more of a discussion.

Our path to the proof is indirect; we proceed by considering a more general question on ‘zero-patterns’ of polynomials, in the vein of [4]. Let $f_1, \dotsc, f_N$ be integer polynomials in variables $x_1, \dotsc, x_m$. Write $\|f\|$ for the absolute value of the largest coefficient of a polynomial $f$, which we call its *height*; it is fairly easy to prove that $\|f + g\| \le \|f\| + \|g\|$ and that $\|fg\| \le \binom{\deg(f) + \deg(g)}{\deg(f)} \|f\|\ \|g\|$ for all $f$ and $g$. We will map these polynomials to various fields; for a field $\mathbb{K}$ and a polynomial $f$, write $f^{\mathbb{K}}$ for the polynomial in $\mathbb{K}[x_1, \dotsc, x_m]$ obtained by mapping each coefficient of $f$ to an element of $\mathbb{K}$ using the natural homomorphism $\phi\colon \mathbb{Z} \to \mathbb{K}$.
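These height inequalities are easy to sanity-check by machine. Below is a small sketch (Python; the dictionary representation of polynomials and the helper names are mine, for illustration only) that multiplies two polynomials and verifies the height bound on the product.

```python
from itertools import product
from math import comb

# A polynomial in m variables is stored as {exponent_tuple: nonzero_coeff}.
def pmul(f, g):
    # Multiply two polynomials by convolving their coefficient dictionaries.
    h = {}
    for (ea, ca), (eb, cb) in product(f.items(), g.items()):
        e = tuple(x + y for x, y in zip(ea, eb))
        h[e] = h.get(e, 0) + ca * cb
    return {e: c for e, c in h.items() if c}

def height(f):
    # ||f||: largest coefficient in absolute value.
    return max((abs(c) for c in f.values()), default=0)

def degree(f):
    # Total degree of f.
    return max((sum(e) for e in f), default=0)

# Check ||fg|| <= binom(deg f + deg g, deg f) * ||f|| * ||g||
# on f = 2x + 3y and g = x - y + 5 (two variables).
f = {(1, 0): 2, (0, 1): 3}
g = {(1, 0): 1, (0, 1): -1, (0, 0): 5}
fg = pmul(f, g)
bound = comb(degree(f) + degree(g), degree(f)) * height(f) * height(g)
assert height(fg) <= bound
```

Here $fg = 2x^2 + xy + 10x - 3y^2 + 15y$ has height $15$, comfortably below the bound $\binom{2}{1}\cdot 3\cdot 5 = 30$.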

Given a field $\mathbb{K}$ and some $u_1, \dotsc, u_m \in \mathbb{K}$, the polynomials $f_i^{\mathbb{K}}(u_1, \dotsc, u_m)$ all take values in $\mathbb{K}$, and in general some will be zero and some nonzero in $\mathbb{K}$. We are interested in the number of different ways this can happen, where we allow both the field $\mathbb{K}$ and the $u_j$ to be chosen arbitrarily; to this end, we say a set $S \subseteq [N]$ is *realisable* with respect to the polynomials $f_1, \dotsc, f_N$ if there is a field $\mathbb{K}$ and there are values $u_1, \dotsc, u_m \in \mathbb{K}$ such that \[S = \{i \in [N]: f_i^{\mathbb{K}}(u_1, \dotsc, u_m) \ne 0_{\mathbb{K}}\}.\] In other words, $S$ is realisable if and only if, after mapping to some field and substituting some values into the arguments, $S$ is the support of the list $f_1, \dotsc, f_N$. We will get to the matroid application in a minute; for now, we prove a lemma that bounds the number of different realisable sets:

**Lemma:** Let $c,d$ be integers and let $f_1, \dotsc, f_N$ be integer polynomials in $x_1, \dotsc, x_m$ with $\deg(f_i) \le d$ and $\|f_i\| \le c$ for all $i$. If an integer $k$ satisfies

\[ 2^k > (2kc(dN)^d)^{N\binom{Nd+m}{m}}, \] then there are at most $k$ realisable sets for $(f_1, \dotsc, f_N)$.

**Proof:** If not, then there are distinct realisable sets $S_1, \dotsc, S_k \subseteq [N]$. For each $i \in [k]$, define a polynomial $g_i$ by $g_i(x_1, \dotsc, x_m) = \prod_{j \in S_i}f_j(x_1, \dotsc, x_m)$. Clearly $\deg(g_i) \le Nd$, and since each $g_i$ is the product of at most $N$ different $f_j$, we use our upper bound on the height of a product to get

\[ \|g_i\| \le c^N \binom{dN}{d}^N \le (c(dN)^d)^N =: c'. \]

Now, for each $I \subseteq [k]$, let $g_I = \sum_{i \in I} g_i$. Each $g_I$ is an integer polynomial in $m$ variables of degree at most $Nd$, so it has at most $D := \binom{Nd+m}{m}$ coefficients, each of absolute value at most $kc'$. The number of possibilities for $g_I$ is therefore at most $(2kc'+1)^{D} \le (2kc(dN)^d)^{ND}$, which by hypothesis is less than $2^k$, the number of sets $I$. So we have a collision – there exist distinct sets $I,I’ \subseteq [k]$ such that $g_I = g_{I’}$. By removing common elements we can assume that $I$ and $I’$ are disjoint.

Let $\ell \in I \cup I’$ be chosen so that $|S_{\ell}|$ is as small as possible. We can assume that $\ell \in I$. Since the set $S_{\ell}$ is realisable, there is a field $\mathbb{K}$ and there are values $u_1, \dotsc, u_m \in \mathbb{K}$ such that $S_{\ell} = \{i \in [N]: f_{i}^{\mathbb{K}}(u_1, \dotsc, u_m) \ne 0_{\mathbb{K}}\}$. So $g^{\mathbb{K}}_{\ell}(u_1, \dotsc, u_m)$, by the definition of $g_{\ell}$, is a product of nonzero elements of $\mathbb{K}$, so is nonzero. For each $t \in (I \cup I’) - \{\ell\}$, on the other hand, since $|S_t| \ge |S_\ell|$ and $S_t \ne S_\ell$ there is some $j \in S_t - S_\ell$, which implies that the zero term $f^{\mathbb{K}}_j(u_1, \dotsc, u_m)$ shows up in the product $g^{\mathbb{K}}_t(u_1, \dotsc, u_m)$. It follows from these two observations that

\[ 0_{\mathbb{K}} \ne g^{\mathbb{K}}_I(u_1, \dotsc, u_m) = g^{\mathbb{K}}_{I’}(u_1, \dotsc, u_m) = 0_{\mathbb{K}}, \] which is a contradiction.

Why is this relevant to representable matroids? Because representing a rank-$r$ matroid $M$ with ground set $[n]$ is equivalent to finding an $r \times n$ matrix $[x_{i,j}]$ over some field, for which the $r \times r$ determinants corresponding to bases of $M$ are nonzero and the other determinants are zero. In other words, a matroid $M$ is representable if and only if the set $\mathcal{B}$ of bases of $M$ is realisable with respect to the polynomials $(f_A \colon A \in \binom{[n]}{r})$, where $f_A$ is the integer polynomial in the $rn$ variables $[x_{ij} \colon i \in [r],j \in [n]]$ that is the determinant of the $r \times r$ submatrix of $[x_{ij}]$ with column set $A$. Thus, the number of rank-$r$ representable matroids on $n$ elements is the number of realisable sets with respect to these $f_A$.
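To make this correspondence concrete, here is a brute-force sketch (Python, standard library only; the function names are mine) that enumerates the realisable sets for a toy case: all patterns of nonzero $r \times r$ minors of $r \times n$ matrices over $\text{GF}(2)$, i.e. the possible base families of $\text{GF}(2)$-representable rank-$r$ matroids on $n$ elements (plus the empty pattern coming from rank-deficient matrices).

```python
from itertools import combinations, product

def gf2_det_nonzero(rows):
    # Gauss-Jordan elimination over GF(2); True iff the matrix is invertible.
    m = [list(r) for r in rows]
    n = len(m)
    for c in range(n):
        piv = next((i for i in range(c, n) if m[i][c]), None)
        if piv is None:
            return False
        m[c], m[piv] = m[piv], m[c]
        for i in range(n):
            if i != c and m[i][c]:
                m[i] = [a ^ b for a, b in zip(m[i], m[c])]
    return True

def realisable_base_sets(r, n):
    # Every pattern of nonzero r x r minors over all r x n matrices
    # over GF(2): each nonempty pattern is the base family of a
    # GF(2)-representable rank-r matroid on n elements.
    patterns = set()
    for entries in product([0, 1], repeat=r * n):
        mat = [entries[i * n:(i + 1) * n] for i in range(r)]
        patterns.add(frozenset(
            A for A in combinations(range(n), r)
            if gf2_det_nonzero([[mat[i][j] for j in A] for i in range(r)])))
    return patterns

patterns = realisable_base_sets(2, 3)
```

For $r = 2$, $n = 3$ every subset of the three possible pairs occurs as a pattern; over larger fields one can realise more patterns, which is why the Lemma quantifies over all fields at once.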

To bound this quantity, we apply the Lemma, for which we need to understand the parameters $N,m,c,d$. Now $m = rn \le n^2$ is just the number of variables, and $N = \binom{n}{r} \le 2^n$ is the number of $r \times r$ submatrices. (We can in fact assume that $N = 2^n$ and $m = n^2$ by introducing dummy polynomials and variables.) Finally, since the $f_A$ are determinants, we have $\deg(f_A) = r \le n$ and $\|f_A\| = 1$ for all $A$, so $(N,m,c,d) = (2^n,n^2,1,n)$ will do. To apply the lemma, it suffices to find a $k$ for which

\[ 2^k > (2kc(dN)^d)^{N\binom{Nd+m}{m}}, \] or in other words, \[k > N\binom{Nd+m}{m}\log_2(2kc(dN)^d).\] If you are happy to believe that $k = 2^{2n^3}/2n$ satisfies this, then you can skip the next two estimates, but for the sticklers among us, here they are:

Using $(N,m,d,c) = (2^n,n^2,n,1)$ and $n \ge 10$ we have \[N\binom{Nd+m}{m} \le 2^n(2^{n+1}n)^{n^2} = 2^{n^2(n+1 + \log_2(n)) + n} \le 2^{2n^3}/6n^4.\] (Here we need that $n^3 > n^2(1+\log_2 n) + n + \log_2(6n^4)$, which holds for $n \ge 10$.) Similarly, for $k = 2^{2n^3}/(2n)$ we have $2kc < 2^{2n^3}$, so

\[\log_2(2kc(dN)^d) < \log_2(2^{2n^3}(n2^n)^n) < 2n^3 + n^2 + n \log_2 n < 3n^3.\] Combining these estimates, we see that $k = 2^{2n^3}/2n$ satisfies the hypotheses of the lemma, so this $k$ is an upper bound on the number of rank-$r$ representable matroids on $n$ elements. This is only a valid bound for each particular $r$, but that is what our extra factor of $2n$ was for; the rank $r$ can take at most $n+1$ values, so the number of representable matroids on $[n]$ in total is at most $(n+1)k < 2nk = 2^{2n^3}$. This completes the proof of the main theorem.
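For sceptics with a computer handy, both estimates above can be checked mechanically; a short sketch (Python, standard library only):

```python
import math

def estimates_hold(n):
    # (i)  n^3 > n^2(1 + log2 n) + n + log2(6 n^4), which gives
    #      N * binom(Nd + m, m) <= 2^(2n^3) / (6 n^4);
    # (ii) 2n^3 + n^2 + n log2 n < 3n^3, the bound on log2(2kc(dN)^d).
    i = n ** 3 > n ** 2 * (1 + math.log2(n)) + n + math.log2(6 * n ** 4)
    ii = 2 * n ** 3 + n ** 2 + n * math.log2(n) < 3 * n ** 3
    return i and ii

# Both inequalities hold for every n from 10 up (checked here to 200).
ok = all(estimates_hold(n) for n in range(10, 201))
```

(For $n$ as small as $2$ the first inequality fails, which is why the theorem carries the hypothesis $n \ge 10$.)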

[1] D. Mayhew, M. Newman, D. Welsh, and G. Whittle, *On the asymptotic proportion of connected matroids,* European J. Combin. 32 (2011), 882-890.

[2] T. Brylawski and D. Kelly, *Matroids and combinatorial geometries*, Carolina Lecture Series, University of North Carolina Department of Mathematics, Chapel Hill, N.C. (1980).

[3] P. Nelson, *Almost all matroids are non-representable*, arXiv:1605.04288 [math.CO].

[4] L. Rónyai, L. Babai and M. K. Ganapathy, *On the number of zero-patterns of a sequence of polynomials*, J. Amer. Math. Soc. 14 (2001), 717–735.

The Department of Mathematics in the College of Science and Mathematics at California State University, Fresno seeks applicants for a tenure-track, academic year position starting at the level of Assistant Professor. The successful candidate will teach, supervise and advise undergraduate and graduate students according to departmental needs; conduct scholarly research in the area(s) of Combinatorial/Discrete Geometry or Algebraic Geometry resulting in peer-reviewed publications, presentations and external grant submissions; and engage in service-related activities. Primary teaching responsibilities include mathematics courses at both the undergraduate and graduate levels, and project/thesis advising of students in the Masters of Arts in Mathematics program. Special consideration will be given to applicants with research interests overlapping current areas of research in the department.

[…]

all applicants must submit an application online at

http://www.fresnostate.edu/adminserv/hr/jobs/

no later than December 1, 2016, when we begin reviewing applications.

Interested individuals can find out more about California State University, Fresno by going to the university website at

The university also has a website for New/Prospective Faculty at

http://www.fresnostate.edu/adminserv/hr/jobs/valley/index.html

that provides information not only on the university but also on the greater Fresno-Clovis Metropolitan Area. Vacancy announcements for all full-time faculty positions can be located at

In a previous post, Dr. Pivotto posted about multimatroids. Her post includes a definition of delta-matroid, and a natural way that delta-matroids arise in the context of multimatroid theory. I recommend her post to readers interested in multimatroids, which generalize delta-matroids. I will discuss delta-matroids in this post, their discovery and natural ways that a mathematician may innocently stumble into their wonderful world.

Delta-matroids were first studied by Andre Bouchet [BG]. I use $X\bigtriangleup Y$ to denote symmetric difference of sets $X$ and $Y$, which is equal to $(X\cup Y)-(X\cap Y)$. To get a delta-matroid, take a finite set $E$ and a collection of subsets $\mathcal{F}$, called feasible sets, satisfying the following.

I) $\mathcal{F}\neq \emptyset$.

II) If $F,F’\in \mathcal{F}$ and $e\in F\bigtriangleup F’$, then there exists $f\in F\bigtriangleup F’$ such that $F\bigtriangleup \{e,f\}$ is in $\mathcal{F}$.

Then $D=(E,\mathcal{F})$ is a delta-matroid.

It is worth noting that the feasible sets of a delta-matroid can have different cardinalities. Taking all of the feasible sets of smallest cardinality gives the bases of a matroid, namely the lower matroid for $D$. Likewise the feasible sets with maximum cardinality give the bases of the upper matroid of $D$. No other collections of feasible sets of a given size are guaranteed to comprise the bases of a matroid.
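To make this concrete, here is a brute-force sketch (Python; representing a delta-matroid by its list of feasible sets, with names of my own choosing) that checks the two axioms above and extracts the bases of the lower and upper matroids.

```python
def is_delta_matroid(feasible):
    # Symmetric exchange axiom: for all feasible F, F' and e in F △ F',
    # some f in F △ F' (possibly f = e) has F △ {e, f} feasible.
    F = [frozenset(s) for s in feasible]
    if not F:
        return False
    return all(any(A ^ {e, f} in F for f in A ^ B)
               for A in F for B in F for e in A ^ B)

def lower_and_upper_bases(feasible):
    # Feasible sets of minimum / maximum cardinality: these are the
    # bases of the lower / upper matroid, respectively.
    F = [frozenset(s) for s in feasible]
    lo, hi = min(map(len, F)), max(map(len, F))
    return [A for A in F if len(A) == lo], [A for A in F if len(A) == hi]

# A small delta-matroid whose feasible sets have three different sizes.
D = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]
```

Here `is_delta_matroid(D)` holds, the lower matroid has the single basis $\emptyset$, and the upper matroid the single basis $\{a,b\}$; replacing $\{a\}$ by $\{c\}$ breaks the exchange axiom.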

Taking minors in delta-matroids is modeled well by considering the bases of a matroid minor. Take $e\in E$. As long as $e$ is not in every feasible set (that is, $e$ is not a coloop), the deletion of $e$ from $D$, written $D\backslash e$, is the delta-matroid $(E-e,\{F \mid F\in\mathcal{F}\text{ and }e\notin F\})$. As long as $e$ is in at least one feasible set (that is, $e$ is not a loop), the contraction of $e$ from $D$, written $D/e$, is the delta-matroid $(E-e,\{F-e \mid F\in \mathcal{F}\text{ and }e\in F\})$. If $e$ is a loop or coloop, then $D\backslash e=D/e$.
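A minimal sketch of the two minor operations (Python, again representing a delta-matroid by its list of feasible sets; the names are illustrative):

```python
def dm_delete(feasible, e):
    # D \ e: keep the feasible sets avoiding e. Defined when e is not a
    # coloop, i.e. at least one feasible set misses e.
    assert any(e not in F for F in feasible), "e is a coloop"
    return [F for F in feasible if e not in F]

def dm_contract(feasible, e):
    # D / e: take the feasible sets containing e and remove e from them.
    # Defined when e is not a loop, i.e. some feasible set contains e.
    assert any(e in F for F in feasible), "e is a loop"
    return [F - {e} for F in feasible if e in F]

D = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]
```

For this $D$, deleting $b$ leaves $\{\emptyset, \{a\}\}$ and contracting $b$ leaves $\{\{a\}\}$.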

There are several natural ways to get to delta-matroids. They keep showing up, like the page where you die in a choose-your-own-adventure book. *The stairs grow dimmer and dimmer as you walk down the stone staircase into darkness. You hear what may be screams in the distance. You finally reach a closed door and hold your candle up to read the label, scrawled in blood.* The label on the door in this metaphor is “delta-matroids,” and they are not as scary as I portrayed them in that story.

0) Choose your own adventure by proceeding to the appropriate section.

1) “I studied embedded graphs and now I see delta-matroids everywhere.”

2) “Partial duality brings me to delta-matroids.”

3) “I left out a basis axiom when defining matroids. Ahoy, delta-matroids!”

4) “Circle graphs seemed like fun. Until they hatched into delta-matroids.”

5) “C’mon, skew symmetric matrices. This can’t end in delta-matroids. Or can it?”

6) “DNA recombination in ciliates is my cup of tea. Who knew I was brewing delta-matroids?”

7) “I abandon this quest and run away.”

1) *“I studied embedded graphs and now I see delta-matroids everywhere.”*

One way to arrive at delta-matroids is by considering cellularly embedded graphs, which I like to think of as ribbon graphs, following [EMM].

To get a cellularly embedded graph, start with a surface (compact, connected 2-manifold), then put vertices (points) and edges (curves between vertices) onto the surface so that no edges cross and each face (unbroken piece of the surface enclosed by edges and vertices) is basically a disk. That is, no face contains a handle or cross-cap.

The particular embedding of a graph encodes more information than the abstract graph, which just encodes adjacencies. There’s an order to the edges incident with a vertex as you circumnavigate the vertex in the embedding, but not in the abstract graph. If you take the matroid of an embedded graph, then you lose the extra information stored in the embedding and you wind up with the matroid of the abstract graph. For example, a pair of loops is indistinguishable from a pair of loops, even though the first pair of loops are embedded in a sphere so that the graph has three faces, and the second pair of loops is embedded in a torus so that the graph has one face. By looking at the matroid of an embedded graph, you can’t even tell if the graph is embedded in an orientable surface or a non-orientable surface. So matroids are the wrong object to model embedded graphs.

Here is a figure by Steven Noble, where $\mathcal{R}$ is the set of ribbon graphs. The correspondence between graphs and matroids is akin to the correspondence between ribbon graphs and question mark. Likewise, graphs are to embedded graphs as matroids are to question mark. Andre Bouchet showed that delta-matroids are the question mark.

To get a ribbon graph, begin with a cellularly embedded graph, cut around the vertices and edges, and throw away the faces. The vertices have become disks, and the edges have become ribbons connecting disks. Each missing face counts as a boundary component in the ribbon graph. We have not lost any of the information from our embedding, since the faces were just disks and can be glued back along the boundary components to return to the original presentation. Spanning forests in a ribbon graph are exactly what you expect, and the edge sets of spanning forests of a ribbon graph give the bases of a matroid. To get a quasi-tree, we are allowed to delete edges (remove ribbons) from our ribbon graph so that we leave behind a ribbon graph with exactly as many boundary components as the original graph had connected components. Note that each spanning forest is a quasi-tree. The edge sets of quasi-trees are the feasible sets of a delta-matroid. The reader may take a break to draw a ribbon graph with quasi-trees of multiple sizes. For more information along these lines, I refer you to [CMNR1] or [CMNR2].

You may be familiar with the mutually enriching relationship between graphs and matroids. There appears to be a similar mutually enriching relationship between ribbon graphs and delta-matroids. Tutte said, “If a theorem about graphs can be expressed in terms of edges and circuits only it probably exemplifies a more general theorem about matroids.” To alter this quote for our purposes, we say, “If a theorem about ribbon graphs can be expressed in terms of edges and quasi-trees only it probably exemplifies a more general theorem about delta-matroids.”

Protip: Not every delta-matroid can be represented by a ribbon graph. Geelen and Oum gave an excluded minor characterization for ribbon-graphic delta-matroids in [GO].

2) *“Partial duality brings me to delta-matroids.”*

A planar graph has a nice, well-defined dual. Not all graphs have well-defined duals. A graph that is not planar that is cellularly embedded in a surface has a well-defined dual, but the dual depends on the surface. The matroid of a graph has a well-defined dual, as do all matroids. Matroids are nice and general in that sense. The notion of partial duality was developed by Chmutov [CG] in the context of embedded graphs, which can be viewed as ribbon graphs, as discussed in ending (1). To get the dual from a ribbon graph, replace the boundary components with vertices, and the vertices with boundary components. Now the ribbons still link up the vertices, but they are short and thick, rather than being long and ribbony. In fact, one way to look at taking a dual is to focus on the ribbon edges, and simply switch the parts of each edge that are incident with vertices with the parts of the edge that are incident with boundary components. Furthermore, there’s nothing particularly special about switching parts of the ribbon edges for the entire ribbon graph, rather than just a subset of the edges. We use $G^A$ to denote the partial dual of a ribbon-graph, $G$, with respect to the edge set $A$.

Here is a drawing of a partial dual for a ribbon graph that Iain Moffatt drew. Actually, it is a slide from a talk by Steven Noble using Iain Moffatt’s figures, with a dash of copyright infringement. Luckily, this is being used for educational purposes and I’m not being paid for this. Unless Spielberg buys the movie rights to this. Then I will cut Noble and Moffatt in on the profits.

Partial duality is also natural enough in matroids, but the partial dual of a matroid is rarely a matroid. Recall that $X\bigtriangleup Y$ denotes symmetric difference of sets $X$ and $Y$, which is equal to $(X\cup Y)-(X\cap Y)$. A matroid $M=(E,\mathcal{B})$ defined in terms of its bases has a dual that may be written $(E,\{E\bigtriangleup B \mid B\in\mathcal{B}\})$. The dual of a matroid is a matroid. Now, for a set $A\subseteq E$, the entity $(E,\{A\bigtriangleup B\mid B\in\mathcal{B}\}):=M*A$ is the partial dual with respect to $A$. There is a way to make sure that the partial dual with respect to $A$ is a matroid. The following result is Theorem 3.10 in [CMNR2].

*Theorem.* *Let $M=(E,\mathcal{B})$ be a matroid and $A$ be a subset of $E$. Then $M*A$ is a matroid if and only if $A$ is separating or $A\in\{\emptyset,E\}$.*

Whenever $A$ is not empty or the ground set of a component of the matroid, then the partial dual with respect to $A$ is *(scrawled in blood)* a delta-matroid! Matroids may be too abstract for most human beings, but they are not quite abstract enough to accommodate partial duality, which is a natural notion generalizing from ribbon graphs. Delta-matroids are the right object, and we tend to view the set of partial duals of a delta-matroid as all belonging to the same equivalence class, just as matroid theorists often view a matroid and its dual as belonging to one equivalence class.
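On basis (or feasible-set) collections, partial duality is one line of code; this sketch (Python, with illustrative names) uses $M = U_{1,2}$, a connected matroid whose only separating sets are $\emptyset$ and $E$: dualising by a single element produces feasible sets of different sizes, so the result is a delta-matroid but not a matroid.

```python
def partial_dual(bases, A):
    # M * A: symmetric-difference every basis (feasible set) with A.
    A = frozenset(A)
    return {frozenset(B) ^ A for B in bases}

# U_{1,2}: two parallel elements, bases {a} and {b}.
U12 = {frozenset({'a'}), frozenset({'b'})}

full_dual = partial_dual(U12, {'a', 'b'})   # M * E is the ordinary dual
part = partial_dual(U12, {'a'})             # feasible sets of sizes 0 and 2
```

Here `full_dual` is again $U_{1,2}$ (it is self-dual), while `part` is $\{\emptyset, \{a,b\}\}$: not equicardinal, hence not the basis family of any matroid.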

3) *“I left out a basis axiom when defining matroids. Ahoy, delta-matroids!”*

For a set $E$ and a collection $\mathcal{B}$ of subsets of $E$, the collection $\mathcal{B}$ forms the set of bases of a matroid $(E,\mathcal{B})$ exactly when the following hold.

I) $\mathcal{B}\neq \emptyset$.

II) If $B,B’\in \mathcal{B}$ and $e\in B\bigtriangleup B’$, then there exists $f\in B\bigtriangleup B’$ such that $B\bigtriangleup \{e,f\}$ is in $\mathcal{B}$.

III) The sets in $\mathcal{B}$ are equicardinal.

Omit (III) and hello, sailor! You have the definition of a delta-matroid! Just change the word “bases” to the phrase “feasible sets.”

4) *“Circle graphs seemed like fun. Until they hatched into delta-matroids.”*

You approach the nest full of circle graphs with the stealth and speed of a mongoose, only to discover they are cracking open, each containing an even delta-matroid, where even will be defined in a moment. You should have known that a circle graph is a ribbon-graph in disguise, and a ribbon-graph is, in turn, just a dressed-up delta-matroid. Geelen and Oum used this relationship in [GO] to find an excluded-minor characterization for ribbon-graphic delta-matroids.

A delta-matroid is even exactly when all of its feasible sets have the same parity. They do not all have to have even parity; odd parity is also fine in an even delta-matroid, as long as a single parity is shared by all feasible sets. Maybe a better title would be a monogamous delta-matroid, but maybe not. A circle graph is easy to view as a ribbon-graph. To get a circle graph, you start with a circle, and draw chords (straight lines) across it, and then you check to see which ones cross each other. Your circle graph has a vertex for each chord, and an edge between each pair of vertices corresponding to chords that cross. Go back to the chord diagram and fatten up your chords into ribbons, which cross each other. Where two chords cross, just let one ribbon go over the other; we don’t restrict ourselves to two dimensions. It doesn’t matter which ribbon is higher than the other, but don’t put any twists into the edges. Now view the circle as a big vertex. Your chord diagram has become a ribbon-graph. It is worth noting that the ribbon-graph corresponds to a graph embedded in an orientable surface.

The edges in your circle graph now correspond to pairs of intertwined loops in your ribbon-graph. By intertwined, I mean that the two loops, $a$ and $b$, are incident with a single vertex so that, when you circumnavigate the vertex they share, you hit $a$, $b$, $a$, and then $b$; rather than $a$, $a$, $b$, and $b$. Now the feasible sets of your delta-matroid include the empty set (because your vertex has a single boundary component), no single-element sets, and each pair $\{v,w\}$ where $vw$ is an edge in your circle graph. Check this by drawing a big vertex and two interlaced ribbon-graph loops and tracing around the boundary components. You will find there’s only one boundary component. The rest of the feasible sets of the delta-matroid come from the remaining quasi-trees in the ribbon-graph, but you’ll find that, mysteriously, there are no quasi-trees with an odd number of edges. Ribbon-graphs from orientable surfaces give even delta-matroids. Bouchet showed that even ribbon-graph delta-matroids also come naturally from 4-regular directed graphs. For more information along these lines, see Section 4.2 and Section 5.2 of [CMNR1].

Bouchet and Duchamp showed in [BD] that ribbon-graphs correspond to a subset of binary delta-matroids, which will be considered in (5). They did this by giving an excluded minor characterization for binary delta-matroids. In [GO], Geelen and Oum built on the work of Bouchet [BC] in the area of circle graphs and found pivot-minor-minimal non-circle-graphs. As an application of this they obtained the excluded minors for ribbon-graphic delta-matroids.

5) *“C’mon, skew symmetric matrices. This can’t end in delta-matroids. Or can it?”*

A lot of matroid theorists enjoy representable matroids, which have matrix representations. Delta-matroids do not disappoint. Take an $E\times E$ skew-symmetric matrix over your favorite field. For $A\subseteq E$, consider the $A\times A$ submatrix obtained by restricting to the rows and columns labeled by elements in $A$. If this submatrix is non-singular, then put $A$ into the collection $\mathcal{F}$. Guess what $(E,\mathcal{F})$ is. A delta-matroid! Every ribbon-graphic delta-matroid has a partial dual that has a binary matrix representation. If you pick a field with characteristic other than two, then your delta-matroids representable over that field will be even. This follows from the nature of skew-symmetric matrices. For more information along these lines, see Section 5.7 in [CMNR1].
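Over $\text{GF}(2)$, where symmetric and skew-symmetric coincide, this construction takes a few lines. The sketch below (Python; the helper names are mine) feeds in the adjacency matrix of a single edge, i.e. the circle graph of two crossing chords, and recovers the delta-matroid $\{\emptyset, \{0,1\}\}$ of two interlaced loops from ending (4).

```python
from itertools import combinations

def gf2_nonsingular(m):
    # Gauss-Jordan elimination over GF(2); the empty matrix counts
    # as nonsingular.
    m = [list(row) for row in m]
    n = len(m)
    for c in range(n):
        piv = next((i for i in range(c, n) if m[i][c]), None)
        if piv is None:
            return False
        m[c], m[piv] = m[piv], m[c]
        for i in range(n):
            if i != c and m[i][c]:
                m[i] = [a ^ b for a, b in zip(m[i], m[c])]
    return True

def matrix_delta_matroid(M):
    # Feasible sets: A with the principal A x A submatrix nonsingular.
    n = len(M)
    return [frozenset(A)
            for k in range(n + 1)
            for A in combinations(range(n), k)
            if gf2_nonsingular([[M[i][j] % 2 for j in A] for i in A])]

edge = [[0, 1],
        [1, 0]]
feas = matrix_delta_matroid(edge)
```

Both $1 \times 1$ principal submatrices are $[0]$ and hence singular, so the only feasible sets are $\emptyset$ and $\{0,1\}$, an even delta-matroid as promised.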

6) *“DNA recombination in ciliates is my cup of tea. Who knew I was brewing delta-matroids?”*

The title to this section may sound like a good pick-up line, but I have had no success with it. Ciliates (phylum Ciliophora) are single-celled organisms that experience nuclear dimorphism. Their cells each contain two nuclei, which contain different, but related, genomes. The DNA recombination in ciliates has something to do with 4-regular graphs, which can be thought of as medial graphs of ribbon graphs. I’m out of my depth here, so I will refer you to the amazing work of people who know more about this subject. Jonoska and Saito put together a book on biomathematics that is on my reading list. I’ll highlight in particular an article by Brijder and Hoogeboom [BH] in this book for more delta-matroids. While you’re waiting for your local library to order that book, I suggest checking out [AJS] by Angeleska, Jonoska, and Saito.

7) *“I abandon this quest and run away.”*

Very well, you decide to abandon this quest and run away. You drop your axe. You put down your boomerang. You throw away your ninja stars. You retire the commander of your armies, and donate your blowtorches to charity. You turn from the Siren-like call of the delta-matroids, but what is that sound? Is the song growing stronger even as you run away? Yes, delta-matroids seem to be in front of you every direction you face. After a meltdown or two, you pull yourself together and return to (0), resolved to pick a different course of action.

***

[AJS] A. Angeleska, N. Jonoska, and M. Saito. DNA recombination through assembly graphs. *Discrete Applied Mathematics*. **157:14** (2009) 3020–3037.

[BC] A. Bouchet, Circle graph obstructions, *J. Combin. Theory Ser. B.* **60** (1994) 107–144.

[BG] A. Bouchet, Greedy algorithm and symmetric matroids, *Math. Program.* **38** (1987) 147– 159.

[BD] A. Bouchet and A. Duchamp, Representability of delta-matroids over GF(2), *Linear Algebra Appl.* **146** (1991) 67–78.

[BH] R. Brijder and H. Hoogeboom, The algebra of gene assembly in ciliates. In: N. Jonoska and M. Saito (eds.) *Discrete and Topological Models in Molecular Biology.* Natural Computing Series, Springer, Heidelberg (2014) 289–307.

[CG] S. Chmutov, Generalized duality for graphs on surfaces and the signed Bollobás–Riordan polynomial, *J. Combin. Theory Ser. B.* **99** (2009) 617–638.

[CMNR1] C. Chun, I. Moffatt, S. Noble, and R. Rueckriemen, Matroids, delta-matroids, and embedded graphs, arXiv:1403.0920.

[CMNR2] C. Chun, I. Moffatt, S. Noble, and R. Rueckriemen, On the interplay between embedded graphs and delta-matroids, arXiv:1602.01306.

[EMM] J. Ellis-Monaghan and I. Moffatt, *Graphs on surfaces: Dualities, Polynomials, and Knots*, Springer, (2013).

[GO] J. Geelen, S. Oum, Circle graph obstructions under pivoting. *J. Graph Theory.* **61** (2009) 1–11.

**Definitions**

To define multimatroids we need a bit of setup. Let $K$ be a partition of a finite set $U$; in our context we call each class $k \in K$ a *skew class* and we say that two elements form a *skew pair* if they are contained in the same skew class. A *subtransversal* of $K$ is a subset of $U$ that contains at most one element from each skew class. A *transversal* is a maximal subtransversal (i.e. a subset of $U$ that contains exactly one element from each skew class). Denote by $\mathcal{S}(K)$ and $\mathcal{T}(K)$ the set of all subtransversals and transversals of $K$ respectively. Now let $r \colon \mathcal{S}(K) \rightarrow \mathbb{N}$ be a function such that the following hold:

- For every transversal $T$, the restriction of $r$ to $T$ induces a matroid on $T$.
- For every subtransversal $S$ and any skew pair $\{x,y\}$ in a skew class disjoint from $S$, we have $r(S\cup x)-r(S)+r(S\cup y)-r(S)\geq 1$.

In other words, property 2. is saying that if we have a subtransversal $S$ and a skew class $k$ not covered by $S$, then $r(S \cup x)=r(S)+1$ for all but possibly one $x \in k$.

**Some properties and constructions**

A multimatroid in which every skew class has size $p$ is called a $p$-matroid. From the definition it’s immediate that a 1-matroid is simply a matroid. There is another way of obtaining a multimatroid (in fact, a 2-matroid) from a matroid, as follows. Say that $N=(E,r)$ is a matroid and $N^*=(E,r^*)$ is its dual. Make two copies $E_1$ and $E_2$ of $E$, where, for every $e\in E$ we denote by $e_i$ the copy of $e$ in $E_i$ and similarly, for every subset $A \subseteq E$ we denote by $A_i$ the subset of $E_i$ corresponding to $A$ (and write $r(A_i)$ for $r(A)$, and similarly for $r^*$). Define a 2-matroid as follows: the ground set is $E_1 \cup E_2$, the skew classes are all pairs of the form $\{e_1,e_2\}$ for $e \in E$ and, for every subtransversal $S$, the rank of $S$ in the 2-matroid is $r(S\cap E_1) + r^*(S \cap E_2)$. Bouchet showed that not only is this a multimatroid, but in fact we can do a similar construction for any set of mutually orthogonal matroids $N_1,\ldots, N_n$. He also showed that, conversely, if $T_1$ and $T_2$ are two disjoint transversals of a multimatroid $M$, then the matroids induced on $T_1$ and on $T_2$ are orthogonal (see [B98]).
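Here is a brute-force sketch of this construction for $U_{1,2}$ and its dual (Python; the encoding of the copy of $e$ in $E_i$ as a pair $(e, i)$ and all names are my own), which also verifies the covering property from the definition.

```python
E = {0, 1}
r = lambda A: min(len(A), 1)                       # rank of U_{1,2}
r_dual = lambda A: len(A) + r(E - set(A)) - r(E)   # rank of the dual matroid

def rank2(S):
    # Rank of a subtransversal of the 2-matroid: r on the E1-part
    # plus r* on the E2-part.
    return (r({e for (e, i) in S if i == 1})
            + r_dual({e for (e, i) in S if i == 2}))

def subtransversals(classes):
    # All subsets of the ground set meeting each skew class at most once.
    if not classes:
        yield frozenset()
        return
    for S in subtransversals(classes[1:]):
        yield S
        for x in classes[0]:
            yield S | {x}

def covering_property(classes, rank):
    # Property 2: for each subtransversal S and skew class k disjoint
    # from S, rank(S + x) = rank(S) + 1 for all but at most one x in k.
    for S in subtransversals(classes):
        for k in classes:
            if k & S:
                continue
            if sum(1 for x in k if rank(S | {x}) == rank(S)) > 1:
                return False
    return True

skew = [frozenset({(e, 1), (e, 2)}) for e in E]
ok = covering_property(skew, rank2)
```

For instance, starting from the subtransversal $\{0_1\}$, adding $1_1$ leaves the rank at $1$ while adding $1_2$ raises it to $2$, so the one permitted exception in property 2 really occurs.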

Later in the post I’ll explain how Delta matroids correspond to 2-matroids, but first let’s talk about general multimatroids a bit more.

I defined multimatroids in terms of rank and as a matroid theorist the first question that pops to mind is whether we can talk about other matroid-like objects, such as independent sets, bases, etc. The answer is yes, but only while being extra careful. For example, we need to keep in mind that these terms only make sense when talking about subtransversals, not any subset of the ground set that you could think of. As one may expect, a subtransversal $S$ is *independent* if $r(S)=|S|$, otherwise it’s *dependent*. The *bases* of a multimatroid are the maximal independent sets and (by 2. above) if every skew class has size at least 2 then every base is a transversal. Finally a *circuit* is a minimal dependent subtransversal. In [B97] Bouchet gave equivalent definitions of multimatroids in terms of bases, independent sets and circuits.

One can also define minor operations for multimatroids, as in [B98], but here things are a bit different than for matroids: we don’t have a deletion and a contraction operation, but just a minor operation. Let $x$ be an element of a multimatroid $M=(U,K,r)$ and $k_x$ be the skew class containing $x$. Then the multimatroid $M|x$ has ground set $U’=U-k_x$, skew classes $K’=K-\{k_x\}$ and, for every subtransversal $S$ of $K’$ we have $r'(S)=r(S\cup x)-r(x)$. This looks a bit like contracting $x$ and deleting all other elements in $k_x$. In fact, if $M$ is the multimatroid obtained from a matroid $N$ and its dual as above, then $M|x_1$ is the multimatroid obtained from $N/x$ and its dual, while $M|x_2$ is the multimatroid obtained from $N\backslash x$ and its dual.

**Delta matroids**

First let me define what Delta matroids are. A Delta matroid is a pair $(E,\mathcal{F})$ where $E$ is a finite set and $\mathcal{F}$ is a nonempty family of subsets of $E$ (called *feasible sets*) satisfying the following property: if $F,F’\in \mathcal{F}$ and $x \in F\Delta F’$, then there exists $y \in F \Delta F’$ such that $F\Delta \{x,y\} \in \mathcal{F}$. Here $x$ could be equal to $y$; if the exchange can always be achieved with $y \neq x$, what we get is an *even Delta matroid*. The terminology is a bit unfortunate here, but basically a Delta matroid is even if all feasible sets have the same parity.

Now say that $(E,\mathcal{F})$ is a Delta matroid. As before, make two copies $E_1$ and $E_2$ of $E$ and define a 2-matroid as follows: the ground set is $E_1 \cup E_2$, the skew classes are all pairs of the form $\{e_1,e_2\}$ for $e \in E$ and the set of bases of the 2-matroid is $\{F_1 \cup (E_2-F_2) : F \in \mathcal{F}\}$. Then what we get is a 2-matroid and in fact all 2-matroids may be obtained this way from Delta matroids.
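The symmetric exchange axiom is easy to test by brute force on small examples. The following Python sketch (a naive check of our own, not an efficient algorithm) verifies it for a small family of feasible sets:

```python
def is_delta_matroid(feasible):
    """Brute-force check of the symmetric exchange axiom: for all feasible
    F, F' and every x in the symmetric difference F ^ F', there must be
    some y (possibly y = x) in F ^ F' with F ^ {x, y} feasible."""
    fams = {frozenset(F) for F in feasible}
    for F in fams:
        for Fp in fams:
            for x in F ^ Fp:
                if not any(frozenset(F ^ {x, y}) in fams for y in F ^ Fp):
                    return False
    return True

# The family {emptyset, {1}, {2}} is a Delta matroid (not an even one,
# since its feasible sets do not all have the same parity).
print(is_delta_matroid([set(), {1}, {2}]))   # True
# Two disjoint feasible sets admitting no valid exchange fail the axiom.
print(is_delta_matroid([{1}, {2, 3}]))       # False
```

Note that when `x == y`, the exchange `F ^ {x, y}` simply toggles the single element `x`, matching the convention in the axiom above.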

**References**

[B97] A. Bouchet, *Multimatroids I. Coverings by independent sets*. SIAM J. Discrete Math 10 (1997) 626-646.

[B98] A. Bouchet, *Multimatroids II. Orthogonality, minors and connectivity*. Electron. J. Combinatorics 5 (1998) R8.

This post is an informal discussion of an ongoing project to find the excluded minors for the classes of matroids that arise from two partial fields, the *Hydra-5 partial field* and the *2-regular partial field*. Stefan’s earlier posts on partial fields and stabilizers are highly recommended as background.

The Hydra-5 partial field, $\mathbb{H}_5$, was introduced by Pendavingh and Van Zwam [PZ10]. The class of $\mathbb{H}_5$-representable matroids can essentially be thought of as the class of matroids having six inequivalent representations over $\text{GF}(5)$. The excluded minors for this class of matroids are the first step in a 5-step process to find the excluded minors for the class of $\text{GF}(5)$-representable matroids, as described in [MWZ12].

The 2-regular partial field, $\mathbb{U}_2$, was introduced by Semple [S99]. Representations over this partial field are the natural generalisation of `totally unimodular’ and `totally near-unimodular’ representations over the regular and near-regular partial fields. It is therefore natural to ask if $\mathbb{U}_2$ captures the class of matroids representable over all fields of size at least four, but sadly that’s not the case. However, we do expect that the excluded minors for the class of $\mathbb{U}_2$-representable matroids will shed light on the following problem.

**Problem 1.** *Find the partial field $\mathbb{P}$ such that the set of $\mathbb{P}$-representable matroids is exactly the set of matroids representable over $\text{GF}(4)$ and all larger fields.*

The classes of matroids that arise from $\mathbb{H}_5$ and $\mathbb{U}_2$ are closely related (the latter is a subset of the former), and it seems that any differences in the excluded-minor analyses can be confined to finite case checks.

Broadly speaking, we follow the strategy used in Geelen, Gerards and Kapoor’s excluded-minor characterisation of the $\text{GF}(4)$-representable matroids [GGK00]. The first step is to construct a candidate matrix for an excluded minor $M$. That is, a candidate for a representation of $M$ that we construct by piecing together representations for some of its minors, say $M\backslash a$ and $M\backslash b$. The construction of this candidate representation is complicated by the possibility of inequivalent representations, that is, the minors $M\backslash a$ and $M\backslash b$ could have representations that we cannot piece together.

The way around the problem of inequivalent representations is to keep a certain minor and sufficient connectivity. We have a class of $\mathbb{P}$-representable matroids for some fixed partial field $\mathbb{P}$. A matroid $N$ is called a *strong stabilizer* for the class of $\mathbb{P}$-representable matroids if every $\mathbb{P}$-representation of $N$ extends uniquely to a $\mathbb{P}$-representation of any 3-connected $\mathbb{P}$-representable matroid having $N$ as a minor. In [GGK00], the strong stabilizer is $U_{2,4}$. For the partial fields $\mathbb{H}_5$ and $\mathbb{U}_2$, the strong stabilizer is $U_{2,5}$ (and its dual $U_{3,5}$).

We can therefore construct a candidate matrix $A$ for the excluded minor $M$. Since $M$ is an excluded minor, there must be some difference between the subdeterminants of $A$ and the corresponding subsets of $M$. In fact, we can choose $A$ so that this difference is certified by a $2\times 2$ subdeterminant of $A$ (see [MWZ12]). We call the corresponding set of elements an *incriminating set* for $(M,A)$.

Since an excluded minor $M$ is minimal with respect to being non-representable, we deduce that there are no elements that can be removed (in the right way relative to the candidate matrix $A$) keeping all three of: the strong stabilizer $N$ minor, sufficient connectivity, and the incriminating set. Considering only the elements that are required to keep the $N$-minor, it follows that $M$ has some minor $M’$ with the property that $M’\backslash e$ or $M’/e$ has no $N$-minor for every element $e$ of $M’$. Matroids with this property are said to be *$N$-fragile*, and we can think of them as the foundation for building excluded minors.

In [GGK00], it is shown that the $\text{GF}(4)$-representable $U_{2,4}$-fragile matroids are whirls. Here, we are interested in the structure of the $\mathbb{H}_5$- and $\mathbb{U}_2$-representable $\{U_{2,5}, U_{3,5}\}$-fragile matroids.

A matroid has *path width $3$* if there is an ordering $(e_1, e_2, \ldots, e_n)$ of its ground set such that $\{e_1, \ldots, e_t\}$ is $3$-separating for all $t$. *Gluing a wheel to $M$* is the process of taking the generalized parallel connection of $M$ with $M(\mathcal{W}_n)$ along a triangle $T$, and deleting any subset of $T$ containing the rim element. We proved the following.

**Theorem 1.** [CMWZ16] *Let $\mathbb{P} \in \{\mathbb{H}_5, \mathbb{U}_2\}$. Let $M$ be a $3$-connected $\{U_{2,5}, U_{3,5}\}$-fragile $\mathbb{P}$-representable matroid with $|E(M)| \geq 10$. Then either*

- *$M$ or $M^*$ can be obtained by gluing up to three wheels to $U_{2,5}$; or*
- *$M$ has path width $3$.*

In fact, we can describe the structure of the matroids in this class in much more detail, using the concept of generalized $\Delta$-$Y$ exchange.

The idea of the proof of Theorem 1 is to bound the size of a minimal counterexample to at most 12 elements, then obtain a contradiction by finding all of the $\mathbb{H}_5$-representable $\{U_{2,5}, U_{3,5}\}$-fragile matroids up to 12 elements and showing they have the right structure. The exhaustive search was made feasible by the development of Sage Matroid.

The foremost issue is another fragility problem. We would like to use Theorem 1 to determine the minimal $\{U_{2,5}, U_{3,5}\}$-fragile $\mathbb{P}$-representable matroids that could be used to build an excluded minor. This is relatively simple for whirls in the $\text{GF}(4)$ case, and also for the matroids in the first case of Theorem 1, using the structure to find `pivots’ in all but the smallest cases. However, there are families of fragile matroids of path width $3$ where these pivoting arguments are unsuccessful. These families of matroids have a `flan’-type path of $3$-separations.

Beyond that, there is an intertwining problem to solve to bridge the gap from the fragile minor to the incriminating set. This is attacked using blocking sequences in [GGK00]. An approach that we hope will simplify the problem is to make stronger connectivity assumptions on $M\backslash a,b$. We proved the following.

**Theorem 2.** [COSW16] *Let $M$ be a sufficiently large excluded minor for the class of matroids representable over $\mathbb{H}_5$ or $\mathbb{U}_2$. Then $M$ has a $\{U_{2,5}, U_{3,5}\}$-minor, and if $M$ has a pair of elements $a,b$ such that $M\backslash a,b$ is $3$-connected with a $\{U_{2,5}, U_{3,5}\}$-minor, then $M\backslash a,b$ is a $\{U_{2,5}, U_{3,5}\}$-fragile matroid.*

Roughly, the strategy for proving Theorem 2 is to assume there are elements that are `not fragile’, that is, elements we can remove to keep the stabilizer minor and the incriminating set. Removing such elements must destroy the $3$-connectivity, so we are able to deduce that $M\backslash a,b$ has a certain path of $3$-separations. Analysing this structure we find pivots if $M\backslash a,b$ is sufficiently large.

To date, we have no guarantee that a $3$-connected deletion pair exists, but progress is being made towards finding such a pair. Williams [W15] proved the following theorem. We say that a pair $a,b$ of elements of a $3$-connected matroid $M$ is *detachable* if either $M\backslash a,b$ or $M/a,b$ is $3$-connected.

**Theorem 3.** [W15] *Let $M$ be a $3$-connected matroid with $|E(M)|\geq 12$. Then one of the following holds.*

- *$M$ is a spike; or*
- *$M$ contains a detachable pair; or*
- *There is a matroid, obtained by performing a single $\Delta$-$Y$ operation on $M$, that contains a detachable pair.*

The next step in this direction is to obtain a `splitter’ analogue of Theorem 3 that preserves a copy of a minor $N$.

[COSW16] B. Clark, J. Oxley, C. Semple, G. Whittle. *Excluded minors are almost fragile.* Submitted.

[CMWZ16] B. Clark, D. Mayhew, G. Whittle, S.H.M. van Zwam. *The structure of $\{U_{2,5}, U_{3,5}\}$-fragile matroids.* SIAM J. Discrete Math., to appear.

[GGK00] J. Geelen, A.M.H. Gerards, A. Kapoor. *The Excluded Minors for GF(4)-Representable Matroids.* J. Combin. Theory Ser. B, 79, 247-299, 2000.

[MWZ12] D. Mayhew, G. Whittle, S.H.M. van Zwam. *Stability, Fragility, and Rota’s Conjecture.* J. Combin. Theory Ser. B, 102(3):760-783, 2012.

[PZ10] R.A. Pendavingh, S.H.M. van Zwam. *Confinement of matroid representations to subsets of partial fields.* J. Combin. Theory Ser. B, 100(6):510-545, 2010.

[S99] C. Semple. *On maximum-sized k-regular matroids.* Graphs Combin., 15, 441-462, 1999.

[W15] A. Williams. *Detachable pairs in 3-connected matroids.* PhD thesis, Victoria University of Wellington, 2015.

Let’s start with a familiar way to stick matroids together: via 2-sums. Suppose that we have matroids $M_1$ and $M_2$ with ground sets $E_1$ and $E_2$, and suppose that these ground sets meet in only a single edge $e$. For the sake of simplicity, we’ll assume that $e$ isn’t a loop or a coloop in either matroid. Now we can build a new matroid $M_1 \oplus_2 M_2$ on the ground set $E_1 \triangle E_2$, by taking the circuits to be sets of the following 3 kinds:

- circuits of $M_1$ not containing $e$
- circuits of $M_2$ not containing $e$
- sets of the form $C_1 \cup C_2 -e$, where $C_1$ and $C_2$ are circuits of $M_1$ and $M_2$ respectively, both containing $e$.
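The three rules above translate directly into code. Here is a small Python sketch (the function name is ours) that builds the circuit set of $M_1 \oplus_2 M_2$ from the circuits of the two parts:

```python
def two_sum_circuits(circuits1, circuits2, e):
    """Circuits of the 2-sum of M1 and M2 along their common element e
    (assumed to be neither a loop nor a coloop in either matroid)."""
    c1 = {frozenset(C) for C in circuits1}
    c2 = {frozenset(C) for C in circuits2}
    out = {C for C in c1 if e not in C}                 # kind 1
    out |= {C for C in c2 if e not in C}                # kind 2
    out |= {(C1 | C2) - {e}                             # kind 3
            for C1 in c1 if e in C1 for C2 in c2 if e in C2}
    return out

# Two triangles glued along the shared edge e give a 4-cycle.
tri1 = [{'a', 'b', 'e'}]   # circuits of the cycle matroid of a triangle
tri2 = [{'e', 'c', 'd'}]
print(two_sum_circuits(tri1, tri2, 'e') == {frozenset('abcd')})  # True
```

The same function can be iterated along a path of matroids, and doing so in either bracketing order gives the same circuit set, matching the associativity discussed below.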

These three kinds of circuit are illustrated below:

This operation is associative, in the sense we might expect: suppose we have 3 matroids $M_1$, $M_2$ and $M_3$ on ground sets $E_1$, $E_2$ and $E_3$. Suppose that $E_1 \cap E_2$ and $E_2 \cap E_3$ each contain just a single edge, and that $E_1 \cap E_3$ is empty. Then the two matroids $M_1 \oplus_2 (M_2 \oplus_2 M_3)$ and $(M_1 \oplus_2 M_2) \oplus_2 M_3$ are equal. This is easy to see by considering the six kinds of circuits which can arise in the sum, as shown here:

Usually associativity means that it doesn’t matter how we bracket a list of things that we want to add together, but in the case of 2-sums the geometry is a little different. The kind of configuration we might want to stick together with 2-sums in general is not a list of matroids, but rather a tree of matroids. More precisely, suppose we have a finite tree $T$, and that for each node $t$ of $T$ we have a matroid $M_t$ with ground set $E_t$. Suppose further that for distinct nodes $t$ and $t’$ of $T$ the intersection $E_t \cap E_{t’}$ either consists of a single edge $e_{tt’}$, if $t$ and $t’$ are neighbours in $T$, or else is empty. Let’s call a structure like this a finite tree of matroids. Then there is an unambiguously defined 2-sum of our tree of matroids, whose ground set is the union of all the sets $E_t$ but without the `gluing edges’ $e_{tt’}$. A typical circuit in such a 2-sum of a tree is shown here:

More precisely, each circuit of the 2-sum is supported on some subtree $S$ of $T$. It is determined by a choice, for each node $t$ of $S$, of a circuit $C_t$ of $M_t$. This circuit $C_t$ should contain precisely those gluing edges $e_{tt’}$ of $M_t$ such that $t’$ is also in $S$. Given such a subtree $S$ and such a family of circuits $C_t$, we get a circuit of the 2-sum whose edges are the non-gluing edges of the circuits $C_t$.

All of this works just as well for sticking together infinite matroids as for finite ones. But the real power of this technique for building new infinite matroids comes when we consider 2-sums of infinitely many matroids at once.

So what happens if we try to stick together an infinite tree of matroids in the way outlined above? To understand the kinds of difficulty which can arise, let’s first of all consider the simplest possible infinite tree: an infinite ray. We certainly want to allow circuits of the following kind:

But what about ones which go all the way to the end of the ray:

Should we allow them?

It turns out that if we allow them, then we get an infinite matroid. But we also get a different infinite matroid by banning them: by only allowing those circuits whose underlying tree $S$ is finite. So we have two different options for how to build a matroid by sticking together an infinite ray of matroids. We say that the circuits are *allowed to use the end* of the ray in the first of these matroids, but not in the second.

We could just decide to fix one of these two as the `correct’ 2-sum of the ray of matroids, but it turns out that that would be a bad idea. To see why, we should consider the interaction of 2-sums with duality. If we take the dual of the 2-sum of 2 matroids, then it turns out to be the same as the 2-sum of their duals. It follows that the same is true for finite trees of matroids: the dual of the 2-sum of a tree of matroids is the 2-sum of the duals of those matroids. This is certainly a property which we would like to keep for infinite 2-sums.

Now suppose that we have a ray of matroids, and we take the variant of the 2-sum where the circuits are not allowed to use the end, then take the dual of that 2-sum. What we get is the 2-sum of the duals of the matroids, but this time in the version where the circuits are allowed to use the end. So these two different constructions of the 2-sum of a ray are dual to each other, and we shouldn’t privilege either of them.

Formally, we take the decision about whether the circuits should be allowed to use the end of the ray to be part of the data used to specify the sum, in addition to the choices of the matroids lying along the ray. More generally, suppose that we have an infinite tree of matroids. Then in order to specify a 2-sum for this tree of matroids, we must decide, for each end of the tree, whether the circuits are allowed to use that end. Here the ends of the tree can be identified with the rays from a fixed root.

More precisely, suppose that we have a (possibly infinite) tree $T$, and that for each node $t$ of $T$ we have a matroid $M_t$ with ground set $E_t$. Suppose further that for distinct nodes $t$ and $t’$ of $T$ the intersection $E_t \cap E_{t’}$ either consists of a single edge $e_{tt’}$, if $t$ and $t’$ are neighbours in $T$, or else is empty. Suppose further that we have a subset $\Psi$ of the set of ends of $T$, which we intend to be the set of ends which we will allow circuits to use. Then we might hope to define the 2-sum of this tree of matroids as follows:

Each circuit of the 2-sum will be supported on some subtree $S$ of $T$, such that all rays in $S$ go to ends in $\Psi$. The circuit is determined by a choice, for each node $t$ of $S$, of a circuit $C_t$ of $M_t$. This circuit $C_t$ should contain precisely those gluing edges $e_{tt’}$ of $M_t$ such that $t’$ is also in $S$. Given such a subtree $S$ and such a family of circuits $C_t$, we get a circuit of the 2-sum whose edges are the non-gluing edges of the circuits $C_t$.

So the circuits would look something like this:

Unfortunately, this construction does not always give a matroid. The reason why not is related to some tricky set-theoretical issues, so we will not discuss it here. But due to a very deep and beautiful set-theoretic result called Borel Determinacy, we know that this construction will give a matroid if the set $\Psi$ is topologically nice enough (if it is a Borel set) [BC15].

There are also certain infinite trees of matroids such that the above construction will give a matroid for any set $\Psi$ of ends. This has the immediate consequence that there are a lot of infinite matroids – as many as there could possibly be. A matroid is determined by a set of subsets of its ground set, so there could be no more than $2^{2^{\aleph_0}}$ matroids on a countable set. And in fact, there are that many non-isomorphic matroids on a countable set, because there are that many possibilities for the set $\Psi$.

This means that when an infinite matroid is constructed from a tree of matroids together with a set $\Psi$ of ends, most of the information is encoded in the set $\Psi$ – if the matroids in the tree are finite, then there are only $2^{\aleph_0}$ possibilities for the tree of matroids, but there are $2^{2^{\aleph_0}}$ possibilities for $\Psi$. This huge reservoir of extra information hidden at the ends means that countable matroids behave in many ways like uncountable graphs. For example, the constructions which show that infinite graphs in general are not well-quasi-ordered under the minor relation already work for countable matroids, even though they do not work for countable graphs [BC15].

The construction we are considering also plays a key role for the reconstruction of infinite matroids from their decompositions into 3-connected parts. It is known that any finite matroid is canonically expressible as a 2-sum of a finite tree of matroids, each of which is either 3-connected or else consists of a single circuit or cocircuit [CE80, S80]. This result extends directly to infinite matroids: any matroid can be canonically expressed as a 2-sum (in the above sense) of a tree of matroids, each of which is either 3-connected or else consists of a single circuit or cocircuit [ADP15, BC16]. As with the finite result, this implies that it often suffices to prove a result only for 3-connected matroids in order to be able to deduce it for all matroids.

The construction we have considered was based on the 2-sum, one of the simplest possible ways to stick matroids together. Next time, we shall see that similar ideas work to give infinitary versions of other more complicated ways of sticking matroids together, and the surprisingly close connections between these infinitary sums and the matroids naturally arising in infinite graphs.

[ADP15] E. Aigner-Horev, R. Diestel, L. Postle. The structure of 2-separations of infinite matroids, to appear in JCTB, available here.

[BC15] N. Bowler and J. Carmesin, Infinite Matroids and Determinacy of Games, submitted to LMS, available here.

[BC16] N. Bowler and J. Carmesin, The ubiquity of Psi-matroids, preprint, available here.

[CE80] W. H. Cunningham and J. Edmonds, A combinatorial decomposition theory, Canad. J. Math. 32 (1980), no. 3, 734–765.

[S80] P. D. Seymour, Decomposition of regular matroids, J. Combin. Theory Ser. B 28 (1980), no. 3, 305–359.

The topic of this post is to introduce a new parameter $\beta$ measuring the complexity of the facets of the matching polytope of a graph, which also extends to binary matroids. We give a simple efficient algorithm deciding whether a binary matroid satisfies $\beta\leq 1$. This is the first non-trivial case after bipartiteness, which is equivalent to $\beta=0$. The results presented here are joint work with András Sebő, and appeared in [1].

The matching polytope of a graph plays an important role in Graph Theory and Combinatorial Optimization. The methods involved in the study of its structure and properties initiated many relations between polyhedra, min-max theorems and polynomial-time algorithms.

A *matching* of a graph is a set of pairwise non-incident edges. The incidence vector of a matching $M$ of a graph $G$ is the 0-1 vector $\chi^{M}$ of $\mathbb{R}^{E(G)}$ defined for every $e\in E(G)$ by: $\chi^{M}(e)=1$ if and only if $e\in M$. The *matching polytope* of $G$, denoted by $\mathsf{MATCH}(G)$, is the convex hull of the incidence vectors of the matchings of $G$. As a polyhedron, it is the set of solutions of a finite number of linear inequalities over $\mathbb{R}^{E(G)}$. Basically, information on the inequalities describing this polytope is useful as it allows applying linear programming tools (for example the duality theorem, the Ellipsoid method, etc.).
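To make the definitions concrete, here is a small Python sketch (brute force, our own naming) that enumerates the matchings of a graph and lists their incidence vectors, i.e. the points whose convex hull is $\mathsf{MATCH}(G)$:

```python
from itertools import combinations

def matchings(edges):
    """All matchings of a graph, given as a list of vertex-pair edges."""
    result = [frozenset()]
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            # a matching consists of pairwise non-incident edges
            if all(set(e).isdisjoint(f) for e, f in combinations(sub, 2)):
                result.append(frozenset(sub))
    return result

def incidence_vector(matching, edges):
    return tuple(int(e in matching) for e in edges)

# The triangle has four matchings: the empty one and each single edge.
triangle = [(1, 2), (2, 3), (1, 3)]
vecs = [incidence_vector(M, triangle) for M in matchings(triangle)]
print(sorted(vecs))
# [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

Note that every one of these vectors satisfies $\sum_{e} x_e \leq 1 = \frac{3-1}{2}$, which is the inequality that the triangle itself contributes in Theorem 1 below.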

The following result, due to Edmonds and Pulleyblank [5], gives a complete description of the matching polytope using linear inequalities. A graph $H$ is *factor-critical* if for every $v\in V(H)$, the graph obtained from $H$ by deleting $v$ and every edge incident to $v$ has a perfect matching (a matching covering every vertex).

**Theorem 1. **For every graph $G$:

\begin{equation*}\label{fonda-eqn-MATCH}
\mathsf{MATCH}(G):=\left\{x\in\mathbb{R}^{E(G)}\colon\begin{array}{ll}
x\geq 0, & \\
\displaystyle \sum_{e\ \text{incident to}\ v}x_e\leq 1 & \forall v\in V(G), \\
\displaystyle \sum_{e\in E(H)}x_e\leq \frac{|V(H)|-1}{2} & \forall H\ \text{2-connected factor-critical induced subgraph of}\ G
\end{array}\right\}.
\end{equation*}

Furthermore, Edmonds [4] shows that each inequality associated with a 2-connected factor-critical induced subgraph $H$ appears in every description of $ \mathsf{MATCH}(G)$.

An *ear-decomposition* of a graph $G$ is a sequence $(P_0,\ldots,P_k)$ of a circuit $P_0$ and paths (with two distinct ends) $P_1,\ldots,P_k$ of $G$ such that $E(G)=E(P_0)\cup\ldots\cup E(P_k)$ and, for every $i\in\left\{1,\ldots,k\right\}$, $P_i$ meets $P_0\cup\ldots\cup P_{i-1}$ exactly in its two ends; the $P_i$ are the *ears* of the decomposition (notice that we do not allow an ear to attach at a single vertex). The decomposition is *odd* if all ears have an odd number of edges. Lovász [7] proved:

**Theorem 2. **A graph is 2-connected and factor-critical if and only if it has an odd ear-decomposition.

It follows from Menger’s theorem that a graph admits an ear-decomposition if and only if it is 2-connected. In addition, all ear-decompositions of a 2-connected graph $G$ have the same number $|E(G)|-|V(G)|+1$ of ears (indeed, deleting a single edge from each ear of an arbitrary ear-decomposition yields a spanning tree), hence the *number of ears* of $G$ is well-defined.

This and Lovász’s result suggest the following measure of complexity of the matching polytope:

**Definition 3. **For each graph $G$, let $\beta(G)$ denote the largest number of ears of a 2-connected factor-critical subgraph of $G$.

For example, $\beta(G)=0$ if and only if $G$ is a bipartite graph, and $\beta(G)\leq 1$ if and only if $G$ does not contain three pairwise internally-disjoint paths with the same ends, such that exactly two of them have an odd number of edges.
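To illustrate Definition 3, $\beta$ can be computed by brute force on very small graphs: enumerate edge subsets, keep those inducing a 2-connected factor-critical subgraph, and record $|E(H)|-|V(H)|+1$. The following Python sketch (naive, exponential, and entirely our own naming) is only meant to make the definition concrete:

```python
from itertools import combinations

def _matchings(edges):
    ms = [()]
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            if all(set(e).isdisjoint(f) for e, f in combinations(sub, 2)):
                ms.append(sub)
    return ms

def _has_perfect_matching(vertices, edges):
    return any(2 * len(m) == len(vertices) for m in _matchings(edges))

def _factor_critical(vertices, edges):
    # deleting any vertex (and its incident edges) leaves a perfect matching
    return all(_has_perfect_matching(vertices - {v},
                                     [e for e in edges if v not in e])
               for v in vertices)

def _connected(vertices, edges):
    if not vertices:
        return False
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack += [u for e in edges if v in e for u in e if u != v]
    return seen == set(vertices)

def _two_connected(vertices, edges):
    # at least 3 vertices, connected, and no cutvertex
    return (len(vertices) >= 3 and _connected(vertices, edges) and
            all(_connected(vertices - {v}, [e for e in edges if v not in e])
                for v in vertices))

def beta(edges):
    """Largest number of ears |E(H)|-|V(H)|+1 over 2-connected
    factor-critical subgraphs H; 0 if there is no such subgraph."""
    best = 0
    for k in range(3, len(edges) + 1):
        for sub in combinations(edges, k):
            vs = {v for e in sub for v in e}
            if _two_connected(vs, sub) and _factor_critical(vs, sub):
                best = max(best, len(sub) - len(vs) + 1)
    return best

print(beta([(1, 2), (2, 3), (3, 4), (1, 4)]))  # 0: C4 is bipartite
print(beta([(1, 2), (2, 3), (1, 3)]))          # 1: a triangle is an odd circuit
```

As discussed below, no algorithm remotely this naive is known to scale: even deciding $\beta \geq k$ for fixed $k$ currently goes through the Parity Minor Algorithm.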

The Parity Minor Algorithm [6] implies that for each fixed $k$, determining whether a graph $G$ satisfies $\beta(G)\geq k$ can be done in polynomial time. However, the proof in [6] is built upon elaborate techniques of the Graph Minor Project, and is oriented towards generality. This suggests searching for better-adapted algorithms. In this direction, Bruhn and Schaudt [2] provided a direct solution to test $\beta\leq 1$ efficiently for graphs with maximum degree 3.

In the rest of this post, we extend $\beta$ to binary matroids and characterize those which satisfy $\beta\leq 1$. As a consequence, we obtain an elementary polynomial-time algorithm for testing whether a binary matroid $M$ satisfies $\beta(M)\leq 1$ or finding an obstruction, only using ear-decompositions and basic computations in the cycle space (mod 2). This completely avoids Graph Minors and implies a rather simple algorithm deciding $\beta\leq 1$ in graphs, with no restriction on the degree.

Also, applying our result to the co-graphic case yields an unexpected consequence: determining whether a graph has two odd bonds meeting in an even number of edges can be carried out efficiently.

Our motivation in studying $\beta$ comes from the recognition problem for $h$-perfect graphs. A graph is *$h$-perfect* if the convex hull of the incidence vectors of its stable sets (sets of pairwise non-adjacent vertices) is completely described by non-negativity and rank-inequalities of cliques and odd circuits. The class of $h$-perfect graphs is a superclass of the class of perfect graphs, and shares a number of similar properties with the latter (see the second volume of Schrijver’s Combinatorial Optimization for further details).

Whereas perfection can be checked in polynomial-time, it is still open whether the same holds for $h$-perfection. Testing the $h$-perfection of a line graph $L(G)$ is essentially equivalent to checking $\beta(G)\leq 1$, and thus our results on $\beta$ directly imply a simple algorithm recognizing $h$-perfect line graphs.

An *ear-decomposition* of a matroid $M$ is a sequence $(C_1,\ldots,C_k)$ of circuits of $M$ such that: $C_1\cup \ldots \cup C_k=E(M)$ and for each $i\in\left\{2,\ldots,k \right\}$, $C_i$ meets both $\cup_{j<i}C_j$ and its complement, and $C_i\setminus (\cup_{j<i}C_j)\neq\emptyset$ is a circuit of the contraction $M/(\cup_{j<i}C_j)$. The ear-decomposition is *odd* if the sets $C_i\setminus (\cup_{j<i}C_j)$ (the *ears*) all have odd cardinality (for $i\in\left\{1,\ldots,k\right\}$).

A matroid is *factor-critical* if it has an odd ear-decomposition. It is easy to check that a matroid has an ear-decomposition if and only if it is connected, and that all the ear-decompositions of a connected binary matroid have the same number $|E(M)|-\mathsf{rk}(M)$ of ears. Hence, for each binary matroid $M$, we may define $\beta(M)$ as the largest number of ears of a factor-critical restriction of $M$. It is straightforward to check that the values of $\beta$ for a graph and its circuit matroid are equal.

For algorithmic considerations, we assume that binary matroids are given by a linear representation or an independence oracle; complexity refers to the number of required calls to this oracle.

The first important observation is the following (see [1] for a proof).

**Lemma 4. **A connected binary matroid $M$ satisfies $\beta(M)\geq 2$ if and only if it has two odd circuits which meet in an even number of elements. Furthermore, a factor-critical restriction of $M$ with two ears can be constructed from two such odd circuits in polynomial-time.

This states that we have to check the parity of the intersection of every pair of odd circuits of $M$ to certify that $\beta(M)\leq 1$. In fact, we only need to check it for pairs from a certain basis of the cycle space of $M$.

A *cycle* of a matroid is a union of disjoint circuits, and it is *odd* if it has odd cardinality. The set of (incidence vectors of) cycles of a binary matroid $M$ is a subspace of $\mathbb{F}_2^{E(M)}$ (this characterizes binary matroids), called the *cycle space* of $M$ and denoted by $\mathcal{C}(M)$. Clearly, $\mathcal{C}(M)$ is spanned by the circuits of $M$ and $\dim \mathcal{C}(M)=|E(M)|-\mathsf{rk}(M)$ (consider the set of fundamental circuits of a basis). A set $S$ of cycles of $M$ is a *cycle basis* of $M$ if the incidence vectors of the elements of $S$ form a basis of $\mathcal{C}(M)$.
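The dimension claim is easy to verify computationally over GF(2). The sketch below (our own naive Gaussian elimination, nothing matroid-specific) checks that the fundamental circuits of a spanning tree of $K_4$ span a cycle space of dimension $|E(M)|-\mathsf{rk}(M)=6-3=3$:

```python
def gf2_rank(vectors):
    """Rank over GF(2) of a list of 0/1 vectors, by Gaussian elimination.
    Each new vector is reduced against the stored pivots before insertion."""
    basis = {}                      # pivot position -> reduced vector
    for v in vectors:
        v = list(v)
        for p, b in basis.items():
            if v[p]:
                v = [x ^ y for x, y in zip(v, b)]
        if any(v):
            basis[v.index(1)] = v
    return len(basis)

# Edges of K4 in the order 12, 13, 14, 23, 24, 34; spanning tree {12, 13, 14}.
# Fundamental circuits of the three non-tree edges:
c23 = (1, 1, 0, 1, 0, 0)   # {12, 13, 23}
c24 = (1, 0, 1, 0, 1, 0)   # {12, 14, 24}
c34 = (0, 1, 1, 0, 0, 1)   # {13, 14, 34}
t234 = (0, 0, 0, 1, 1, 1)  # the triangle {23, 24, 34} is their GF(2) sum
print(gf2_rank([c23, c24, c34]))        # 3 = |E| - rk
print(gf2_rank([c23, c24, c34, t234]))  # still 3: t234 is dependent
```
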

A cycle basis $B$ is *odd* if all its cycles are odd, and it is *totally odd* if moreover all the cycles of $B$ pairwise-intersect on an odd number of elements.

We now prove the main ingredient of our characterization. The fact that we have only circuits in the basis obtained is crucial.

**Lemma 5. **Each connected non-bipartite binary matroid has an odd cycle basis formed by circuits only. It can be constructed in polynomial time.

*Proof.* Let $M$ be a connected and non-bipartite binary matroid. Let $M_p$ be the binary matroid obtained by adding successively an all-0 column and an all-1 line to a matrix representation of $M$, and let $p$ be the new element of $M_p$.

Lehman’s matroid-port theorem easily implies that each element of $M$ belongs to an odd circuit (see [8]). It follows straightforwardly that $M_p$ is a connected matroid.

Furthermore, a theorem of Coullard and Hellerstein states that for each connected matroid $N$ and every $e\in E(N)$, there exists an ear-decomposition of $N$ whose circuits all contain $e$ and which can be found in polynomial-time [3].

Now, consider an ear-decomposition $\left\{C_1,\ldots,C_k \right\}$ of $M_p$ such that all the $C_i$ contain $p$. It is easy to check that $\left\{C_1\setminus \left\{p\right\},\ldots,C_k\setminus \left\{p\right\}\right\}$ is an odd cycle basis of $M$, formed by circuits only. $\blacksquare$

The following statement is a direct consequence of the well-known fact that, for two subsets $S_1,S_2$ of a set $S$: the sum of the incidence vectors of $S_1$ and $S_2$ in $\mathbb{F}_2^{S}$ is the incidence vector of $S_1\Delta S_2$ and their product is the parity of $ |S_1\cap S_2|$ (with respect to the standard bilinear form on $\mathbb{F}_2^{S}$).
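In code, this standard fact reads as follows (a tiny self-contained check, with our own variable names):

```python
S1, S2 = {1, 2, 3}, {2, 3, 4, 5}
ground = sorted(S1 | S2 | {6})
v1 = [int(x in S1) for x in ground]
v2 = [int(x in S2) for x in ground]

# The GF(2) sum of incidence vectors is the incidence vector of S1 Δ S2.
sym = [a ^ b for a, b in zip(v1, v2)]
print({x for x, bit in zip(ground, sym) if bit})   # {1, 4, 5} = S1 Δ S2
# The standard bilinear form gives the parity of |S1 ∩ S2|.
print(sum(a & b for a, b in zip(v1, v2)) % 2)      # 0, since |{2, 3}| = 2
```
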

**Lemma 6. **If a binary matroid has a totally odd cycle basis, then all its odd cycles pairwise-intersect in an odd number of elements.

We can finally prove our characterization.

**Theorem 7. **Let $M$ be a connected non-bipartite binary matroid. The following statements are equivalent.

1. $\beta(M)\leq 1$,
2. $M$ has a totally odd cycle basis formed by circuits only,
3. each odd cycle basis of $M$ is totally odd.

*Proof. *We first prove that $ (1)\Rightarrow (2) $. Since $\beta(M)\leq 1$ and as $M$ is connected and non-bipartite, Lemma 5 shows that $M$ has an odd cycle basis $B$ whose elements are circuits. Lemma 4 implies that the elements of $B$ pairwise-intersect in an odd number of elements. That is, $B$ is totally odd.

The implication $ (2)\Rightarrow (3) $ straightforwardly follows from Lemma 6.

We finally prove $ (3)\Rightarrow (1) $. Suppose that each odd cycle basis of $M$ is totally odd. Since $M$ is connected and non-bipartite, Lemma 5 shows that $M$ has an odd cycle basis $B$. Since $B$ is totally odd, Lemma 6 implies that all odd cycles, and in particular all odd circuits of $M$, pairwise-intersect in an odd number of elements. By Lemma 4, we get $\beta(M)\leq 1$. $\blacksquare$

Now, testing whether a connected non-bipartite binary matroid $M$ satisfies $\beta(M)\leq 1$ can be done as follows: build an odd cycle basis $B$ of $M$ formed by circuits only (with Lemma 5), and compute the parities of the intersections of pairs of elements of $B$; there is only a polynomial number of such pairs, since $|B|=\dim \mathcal{C}(M)=|E(M)|-\mathsf{rk}(M)$.

If two elements of $B$ meet in an even number of elements, then Lemma 4 shows that $\beta(M)\geq 2$ and yields a factor-critical restriction of $M$ with exactly two ears. Otherwise, $B$ is totally odd and Theorem 7 implies that $\beta(M)\leq 1$.
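The final parity check of the algorithm amounts to a handful of GF(2) dot products. A minimal sketch, assuming the odd circuit basis has already been computed (via Lemma 5) and is given as 0/1 incidence vectors (the function name is ours):

```python
from itertools import combinations

def totally_odd(basis):
    """Check that all cycles in an odd cycle basis pairwise-intersect in an
    odd number of elements (all mod-2 dot products equal to 1)."""
    assert all(sum(v) % 2 == 1 for v in basis), "basis must consist of odd cycles"
    return all(sum(a & b for a, b in zip(u, v)) % 2 == 1
               for u, v in combinations(basis, 2))

# Three triangles of K4 (edge order 12, 13, 14, 23, 24, 34) form an odd
# circuit basis; any two of them share exactly one edge, so beta(M(K4)) <= 1.
k4_basis = [(1, 1, 0, 1, 0, 0), (1, 0, 1, 0, 1, 0), (0, 1, 1, 0, 0, 1)]
print(totally_odd(k4_basis))                                  # True
# Two disjoint odd circuits certify beta >= 2 (Lemma 4).
print(totally_odd([(1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1)]))  # False
```

Since a basis has $|E(M)|-\mathsf{rk}(M)$ elements, only polynomially many pairs are examined, as noted above.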

Clearly, a binary matroid $M$ satisfies $\beta(M)\leq 1$ if and only if every non-bipartite block of $M$ satisfies this condition. The blocks of $M$ can be easily found in polynomial-time, and hence we need only one more subroutine to finish the algorithm: deciding in polynomial-time whether a connected binary matroid is bipartite. This can be carried out using the following straightforward proposition, which generalizes the bipartiteness test of graphs.

**Proposition 8. **Let $M$ be a connected binary matroid. The following statements are equivalent.

1. $M$ is bipartite,
2. there exists a cycle basis of $M$ containing only even circuits,
3. each cycle basis of $M$ contains only even cycles.

Hence, testing bipartiteness only requires building a cycle basis formed by circuits (from the fundamental circuits of a basis of $M$, for example) and checking whether all its circuits are even. If so, the matroid is bipartite and otherwise we find an odd circuit.

$\beta$ can be used as a parameter to separate on, for properties of the matching polytope (for example, the integer decomposition property [1]). The complexity of computing $\beta$ for graphs is apparently not known.

Clearly, the property $\beta(G)\geq k$ is in $\mathsf{NP}$ (as factor-criticality can be tested using a maximum-matching algorithm), but we do not know if it belongs to $\mathsf{co}$-$\mathsf{NP}$. Also, it is not clear whether the Parity Minor algorithm can be circumvented to check efficiently, for fixed $k\geq 3$, whether a graph $G$ satisfies $\beta(G)\geq k$.

For binary matroids, the situation is even less clear: results of Szegedy and Szegedy [9] show that testing whether a binary matroid is factor-critical can be done in randomized polynomial-time, but a deterministic algorithm is still missing, even for co-graphic matroids. Finally, it is not clear whether $\beta\leq 1$ can be decided efficiently for other interesting classes of matroids (our approach does not directly extend to anything else).

[1] Y. Benchetrit and A. Sebő. *Ear-decompositions and the complexity of the matching polytope.* arXiv preprint, 2015.

[2] H. Bruhn and O. Schaudt. *Claw-free t-perfect graphs can be recognised in polynomial time. *In Jon Lee and Jens Vygen, editors, IPCO, volume 8494 of LNCS, pages 404–415. 2014.

[3] C. Coullard and L. Hellerstein. *Independence and port oracles for matroids, with an application to computational learning theory.* Combinatorica, 16(2):189–208, 1996.

[4] J. Edmonds. *Maximum matching and a polyhedron with 0, 1 vertices.* J. Res. Nat. Bur. Standards, 69 B:125–130, 1965.

[5] J. Edmonds and W. Pulleyblank. *Facets of 1-matching polyhedra.* In Hypergraph Seminar of Columbus, 1974.

[6] K. Kawarabayashi, B. Reed, and P. Wollan. *The graph minor algorithm with parity conditions.* In FOCS, 2011 IEEE 52nd Annual Symposium, pages 27–36. IEEE, 2011.

[7] L. Lovász. *A note on factor-critical graphs.* Studia Sci. Math. Hungar., 7:279–280, 1972.

[8] J. Oxley. *Matroid Theory.* Oxford University Press, 1992.

[9] B. Szegedy and C. Szegedy. *Symplectic spaces and ear-decomposition of matroids.* Combinatorica, 26(3):353–377, 2006.