Google Summer of Code

As you might know, SageMath is a software system for mathematical computation. Built on Python, it has extensive libraries for numerous areas of mathematics. One of these areas is Matroid Theory, as has been exhibited several times on this blog.

Google Summer of Code is a program where Google pays students to work on open-source software during the summer.

Once again, SageMath has been selected as a mentoring organization for the Google Summer of Code. We’ve had students work on aspects of the Matroid Theory functionality for the past four years. Maybe this year, YOU can join those illustrious ranks! Check out the call for proposals and ideas list. Read the instructions on both pages carefully. Applications open on March 12, so it’s a good idea to start talking to potential mentors and begin writing your proposal!

Matroids over Partial Hyperstructures

Guest Post by Matt Baker.

In this post I’d like to explain a new generalization of matroids developed in this recent paper with Nathan Bowler (referred to henceforth as [BB]). We show that by working over certain algebraic structures which we call tracts, linear subspaces, matroids, valuated matroids, oriented matroids, and matroids over partial fields all become special cases of a single general concept. Actually, there are at least two natural notions of matroids over tracts, which we call weak and strong matroids, but in many cases of interest (such as all the ones mentioned above) the two notions coincide.

Two important special cases of tracts are hyperfields and partial fields. We assume familiarity with the theory of partial fields (see for example this post and its sequels by Stefan van Zwam).


Roughly speaking, a hyperfield is an algebraic structure satisfying similar axioms to a field, but where addition is allowed to be multi-valued. Like fields, hyperfields come equipped with an additive identity element called zero and a negation map $x \mapsto -x$. However, one does not require that the hypersum of $ x$ and $ -x$ is zero, only that zero is contained in the hypersum. Before giving a precise axiomatic definition, let us give some motivating examples.

  1. (Fields) Any field $ K$ is in particular a hyperfield.
  2. (The Krasner hyperfield) The Krasner hyperfield $ {\mathbb K}$ records the arithmetic of “zero” versus “nonzero”. Specifically, define $ {\mathbb K} = \{ 0,1 \}$ together with the binary operation $ 0 \odot 0 = 0, 0 \odot 1 = 0, 1 \odot 1 = 1$ and the binary hyperoperation $ 0 \boxplus 0 = \{ 0 \}, 0 \boxplus 1 = \{ 1 \}, 1 \boxplus 1 = \{ 0,1 \}$. If we think of $ 1$ as representing “nonzero”, the hyperaddition rules just say that zero plus zero is zero and zero plus something nonzero is always nonzero, but the sum of two things which are nonzero could be either zero or nonzero. The negation operator is the identity map, since negative zero is zero and the negative of something nonzero is again nonzero.
  3. (The hyperfield of signs) The hyperfield of signs $ {\mathbb S}$ records the arithmetic of “zero”, “positive”, and “negative”, represented by the symbols $ 0, 1, -1$, respectively. The product $ x \odot y$ is defined in the obvious way for $ x,y \in {\mathbb S} := \{ 0, 1, -1 \}$. Addition is also defined in the “obvious” way except we have $ 1 \boxplus -1 = \{ 0, 1, -1 \}$ since the sum of a positive number and a negative number could be either zero, positive, or negative. The negation operator takes $ 0$ to itself and interchanges $ 1$ and $ -1$.
  4. (The tropical hyperfield) The tropical hyperfield $ {\mathbb T}$ records the arithmetic of valuations. More precisely, if $ v : K \to {\mathbb T} := {\mathbb R} \cup \{ \infty \}$ is the valuation map on a valued field $ K$ with value group $ {\mathbb R}$, the hypersum $ a \boxplus b$ consists of all possible values of $ v(\alpha+\beta)$ where $ \alpha,\beta$ are elements of $ K$ with $ v(\alpha)=a$ and $ v(\beta)=b$. (Note that the axioms for a valuation tell us that $ v(\alpha \cdot\beta) = a + b$.) Concretely, this means that we should define $ a \odot b := a + b$ (with the evident conventions when one of $ a,b$ equals $ \infty$), and we define $ a \boxplus b := {\rm min}(a,b)$ if $ a \neq b$ and $ a \boxplus a := \{ z \in {\mathbb T} \; : \; z \geq a \}$. The multiplicative identity element is $ 0$ and the additive identity element is $ \infty$. The negation operator is the identity map, since the unique value of $ b$ such that $ \infty \in a \boxplus b$ is $ b = a$.
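The two finite examples above can be written down directly. Here is a minimal Python sketch (illustrative only, not SageMath library code; the function names are mine) of hyperaddition in the Krasner hyperfield and the hyperfield of signs:

```python
# Illustrative sketch: hyperaddition in the Krasner hyperfield K = {0, 1}
# and the hyperfield of signs S = {0, 1, -1}. Hyperaddition is multi-valued,
# so each function returns a frozenset of elements.

def krasner_add(a, b):
    """a boxplus b in K: zero acts as the identity, and 1 boxplus 1 = {0, 1}."""
    if a == 0:
        return frozenset({b})
    if b == 0:
        return frozenset({a})
    return frozenset({0, 1})

def sign_add(a, b):
    """a boxplus b in S: 1 boxplus -1 = {0, 1, -1}, otherwise single-valued."""
    if a == 0:
        return frozenset({b})
    if b == 0:
        return frozenset({a})
    return frozenset({a}) if a == b else frozenset({0, 1, -1})

# Zero is contained in x boxplus -x, as the axioms require:
print(0 in krasner_add(1, 1), 0 in sign_add(1, -1))  # True True
```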

The above examples all have an additional property not shared by all hyperfields: they are doubly distributive (see below for the definition). Here are two examples of hyperfields which are not doubly distributive:

5. (The triangle hyperfield) Let $ {\mathbb V}$ be the set $ {\mathbb R}_{\geq 0}$ of nonnegative real numbers with the usual multiplication and the hyperaddition rule $ a \boxplus b := \{ c \in {\mathbb R}_{\geq 0} \; : \; |a-b| \leq c \leq a+b \}.$ In other words, $ a \boxplus b$ is the set of all real numbers $ c$ such that there exists a (possibly degenerate) Euclidean triangle with side lengths $ a, b, c$. Then $ {\mathbb V}$ is a hyperfield.

6. (The phase hyperfield) The phase hyperfield $ {\mathbb P}$ records the arithmetic of phases of complex numbers. If $ \pi : {\mathbb C} \to {\mathbb P} := S^1 \cup \{ 0 \}$ is the map taking 0 to 0 and $ z \in {\mathbb C}^*$ to $ z/|z|$, and if $ a,b \in {\mathbb P}$, the hypersum $ a \boxplus b$ consists of all possible values of $ \pi(\alpha + \beta)$ where $ \alpha,\beta$ are elements of $ {\mathbb C}$ with $ \pi(\alpha)=a$ and $ \pi(\beta)=b$. More precisely, multiplication in $ {\mathbb P}$ is defined as usual, and the hyperaddition law is defined for $ x,y \neq 0$ by setting $ x \boxplus -x := \{ 0, x, -x \}$ and $ x \boxplus y := \{ \frac{\alpha x + \beta y}{\| \alpha x + \beta y \|} \; | \; \alpha, \beta \in {\mathbb R}_{>0} \}$ otherwise.


Definition of hyperrings and hyperfields

A hyperoperation on a set $ S$ is a map $ \boxplus$ from $ S \times S$ to the collection of non-empty subsets of $ S$.

If $ A,B$ are non-empty subsets of $ S$, we define $ A \boxplus B := \bigcup_{a \in A, b \in B} (a \boxplus b)$ and we say that $ \boxplus$ is associative if $ a \boxplus (b \boxplus c) = (a \boxplus b) \boxplus c$ for all $ a,b,c \in S$.

A (commutative) hyperring is a set $ R$ together with:

  1. A commutative and associative binary operation $ \odot$
  2. A commutative and associative binary hyperoperation $ \boxplus$
  3. Distinguished elements $ 0,1 \in R$

such that:

  1. $ (R, \odot, 1)$ is a commutative monoid.
  2. $ 0 \odot x = x \odot 0 = 0$ and $ 0 \boxplus x = x \boxplus 0 = \{ x \}$ for all $ x \in R.$
  3. For every $ x \in R$ there is a unique element of $ R$ (denoted $ -x$) such that $ 0 \in x\boxplus -x.$
  4. $ a \odot (x \boxplus y) = (a \odot x) \boxplus (a \odot y)$ for all $ a,x,y \in R.$

A hyperring $ R$ is called a hyperdomain if $ xy=0$ in $ R$ implies that $ x=0$ or $ y=0$.

A hyperring $ R$ is called doubly distributive if it satisfies $ (a \boxplus b)\odot (c \boxplus d) = (a\odot c) \boxplus (a \odot d) \boxplus (b \odot c) \boxplus (b\odot d)$ for all $ a,b,c,d \in R$.
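For a finite hyperfield, double distributivity can be checked by brute force. Here is a sketch (my own check, not taken from [BB]) verifying it for the hyperfield of signs:

```python
# Brute-force check that the hyperfield of signs is doubly distributive:
# (a [+] b) (*) (c [+] d) = (ac) [+] (ad) [+] (bc) [+] (bd) for all a,b,c,d.
from itertools import product

S = {0, 1, -1}

def hadd(a, b):
    """Hyperaddition in S, returned as a set."""
    if a == 0:
        return {b}
    if b == 0:
        return {a}
    return {a} if a == b else set(S)

def hadd_sets(A, B):
    """Extend boxplus to subsets: union of the elementwise hypersums."""
    return {c for a in A for b in B for c in hadd(a, b)}

def mul_sets(A, B):
    """Elementwise product of subsets."""
    return {a * b for a in A for b in B}

doubly_distributive = all(
    mul_sets(hadd(a, b), hadd(c, d))
    == hadd_sets(hadd_sets(hadd(a * c, a * d), {b * c}), {b * d})
    for a, b, c, d in product(S, repeat=4)
)
print(doubly_distributive)  # True: all 81 quadruples satisfy the identity
```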

A hyperring $ F$ is called a hyperfield if $ 0 \neq 1$ and every non-zero element of $ F$ has a multiplicative inverse.

Partial hyperfields

A partial hyperfield is a pair $ (G,R)$, where $ G$ is a subgroup of the group of units of a hyperdomain $ R$ such that $ -1 \in G$ and $ G$ generates $ R$ as a hyperring.

We set $ P := G \cup \{ 0 \}$, and denote the partial hyperfield $ (G,R)$ simply by $ P$ when no confusion is likely to arise.

Partial hyperfields generalize both hyperfields and partial fields in a natural way.  When $ R$ is a ring, $ P$ is just a partial field, and when $ G = R \backslash \{ 0 \}$ is a group, $ P$ is just a hyperfield.

A partial hyperfield is called doubly distributive if the ambient hyperring $ R$ is doubly distributive.


Partial hyperfields are special cases of still more general objects called tracts, which appear to be a natural setting for matroid theory.

We define a tract to be an abelian group $ G$ (written multiplicatively), together with a subset $ N_G$ of the group semiring $ {\mathbb N}[G]$ satisfying:

(T0) The zero element of $ {\mathbb N}[G]$ belongs to $ N_G$.
(T1) The identity element 1 of $ G$ is not in $ N_G$.
(T2) There is a unique element $ \epsilon$ of $ G$ with $ 1 + \epsilon \in N_G$.
(T3) $ N_G$ is closed under the natural action of $ G$ on $ {\mathbb N}[G]$.

We set $ F := G \cup \{ 0 \} \subset {\mathbb N}[G]$, and refer to the tract $ (G,N_G)$ simply as $ F$ when no confusion is likely to arise.  We will also sometimes write $ F^\times$ instead of $ G$, and $ -1$ instead of $ \epsilon$.

One thinks of $ N_G$ as those linear combinations of elements of $ G$ which “sum to zero” (the $ N$ in $ N_G$ stands for “null”).
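As a concrete instance (my own encoding, for illustration), consider the tract attached to the hyperfield of signs: a formal sum of elements of $ G = \{ \pm 1 \}$ is null exactly when it is empty or involves both signs.

```python
# Sketch: the null set N_G of the tract attached to the hyperfield of signs.
# A formal N-linear combination of G = {1, -1} is encoded as a tuple of
# group elements, with repetition playing the role of coefficients.

def is_null(formal_sum):
    """A sum of signs can equal zero iff it is empty or mixes both signs."""
    if len(formal_sum) == 0:
        return True                 # (T0): the zero element is null
    return 1 in formal_sum and -1 in formal_sum

print(is_null(()))                  # (T0) holds
print(is_null((1,)))                # (T1): 1 alone is not null
print(is_null((1, -1)))             # (T2): epsilon = -1
print(is_null((1, 1, -1)))          # 0 lies in the hypersum 1 [+] 1 [+] -1
```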

Tracts from partial hyperfields

We can associate a tract to a partial hyperfield $ (G,R)$ by declaring that a formal sum $ \sum a_i g_i \in {\mathbb N}[G]$ belongs to $ N_G$ if and only if $ 0 \in \boxplus a_i g_i$ in $ R$.

We note, for the experts, that one can associate a tract to any fuzzy ring in the sense of Dress and Dress-Wenzel, and that if $ P$ is a doubly distributive partial hyperfield there is an associated fuzzy ring whose realization as a tract coincides with the realization of $ P$ itself as a tract.

Grassmann-Plücker functions over tracts

Now let $ F$ be a tract and let $ E = \{ 1,\ldots,m \}$ be a finite set.  A function $ \varphi : E^r \to F$ is called a Grassmann-Plücker function if it is nonzero, alternating (meaning that $ \varphi(x_1,\ldots,x_i, \ldots, x_j, \ldots, x_r)= -\varphi(x_1,\ldots,x_j, \ldots, x_i, \ldots, x_r)$ and $ \varphi(x_1,\ldots, x_r) = 0$ if $ x_i = x_j$ for some $ i \neq j$), and it satisfies the following generalization of the classical Grassmann-Plücker relations:

(GP) For any two subsets $ X := \{ x_1,\ldots,x_{r+1} \}$ and $ Y := \{ y_1,\ldots,y_{r-1} \}$ of $ E$,
$ \sum_{k=1}^{r+1}  (-1)^k \varphi(x_1,x_2,\ldots,\hat{x}_k,\ldots,x_{r+1}) \cdot \varphi(x_k,y_1,\ldots,y_{r-1}) \in N_G.$

If $ F=K$ is a field then a projective equivalence class of Grassmann-Plücker functions $ \varphi : E^r \to K$ is the same thing as an $ r$-dimensional subspace of $ K^m$.  This is essentially just the assertion that the usual Grassmannian variety is cut out by the Plücker equations.

On the other hand:

Proposition 1: If $ F = {\mathbb K}$ is the Krasner hyperfield, a projective equivalence class of Grassmann-Plücker functions $ \varphi : E^r \to {\mathbb K}$ is the same thing as a rank $ r$ matroid on $ E$.

Indeed, if $ \varphi : E^r \to {\mathbb K}$ is a nonzero alternating function and $ {\mathbf B}_\varphi$ denotes the set of $ r$-element subsets $ \{ e_1,\ldots, e_r \}$ of $ E$ such that $ \varphi(e_1,\ldots,e_r) \neq 0$, it is easy to see that $ {\mathbf B} := {\mathbf B}_\varphi$ satisfies the Grassmann-Plücker relations (GP) if and only if

(BE) Given $ B,B' \in {\mathbf B}$ and $ b \in B \backslash B'$, there exists $ b' \in B' \backslash B$ such that both $ (B \cup \{ b' \}) \backslash \{ b \}$ and $ (B' \cup \{ b \}) \backslash \{ b' \}$ are in $ {\mathbf B}$.

But condition (BE) is well-known to be equivalent to the usual Basis Exchange property for matroids!  In other words, $ {\mathbf B}_\varphi$ is the set of bases of a rank $ r$ matroid $ M_{\varphi}$ on $ E$.  Conversely, given such a matroid $ M$, we can define $ \varphi_M : E^r \to {\mathbb K}$ by setting $ \varphi_M(e_1,\ldots,e_r) = 1$ if $ \{ e_1,\ldots,e_r \}$ is a basis of $ M$ and $ \varphi_M(e_1,\ldots,e_r) = 0$ if not.  By the exchange property (BE), the function $ \varphi_M(e_1,\ldots,e_r)$ will satisfy (GP).
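The correspondence above is easy to check by machine on a small example. Here is a sketch (using $ U_{2,4}$; the helper names are mine) of the indicator function $ \varphi_M$ over the Krasner hyperfield, together with a direct test of the exchange condition (BE):

```python
# Sketch: phi_M over the Krasner hyperfield for M = U_{2,4}, plus a direct
# check of the exchange condition (BE) on its set of bases.
from itertools import combinations

E = range(4)
bases = {frozenset(B) for B in combinations(E, 2)}  # every 2-set is a basis

def phi_M(e1, e2):
    """Indicator Grassmann-Pluecker function: 1 on bases, 0 otherwise."""
    return 1 if e1 != e2 and frozenset({e1, e2}) in bases else 0

def satisfies_BE(bases):
    """Condition (BE): symmetric exchange for every B, B' and b in B - B'."""
    return all(
        any((B1 - {b}) | {c} in bases and (B2 - {c}) | {b} in bases
            for c in B2 - B1)
        for B1 in bases for B2 in bases for b in B1 - B2
    )

print(satisfies_BE(bases), phi_M(0, 1), phi_M(1, 1))
```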

Motivated by Proposition 1, if $ F$ is a tract we define a strong $ F$-matroid (or strong matroid over $ F$) of rank $ r$ on $ E$ to be a projective equivalence class of Grassmann-Plücker functions $ \varphi : E^r \to F$.  Thus strong matroids over fields are the same as linear subspaces, and strong matroids over the Krasner hyperfield are the same as matroids in the usual sense.   (By a matroid over a partial hyperfield $ F$, we mean a matroid over the corresponding tract.)  One can also show that strong matroids over a partial field $ P$ are the same as matroids representable over $ P$ in the usual sense.

We have the following additional interesting examples of (strong) matroids over tracts:

Proposition 2: If $ F = {\mathbb S}$ is the hyperfield of signs, a matroid over $ {\mathbb S}$ is the same thing as an oriented matroid.

Proposition 3: If $ F = {\mathbb T}$ is the tropical hyperfield, a matroid over $ {\mathbb T}$ is the same thing as a valuated matroid.

Proposition 4: If $ F = {\mathbb U}_0$ is the regular partial field $ (\{ \pm1 \}, {\mathbb Z})$, a matroid over $ {\mathbb U}_0$ is the same thing as a regular matroid.

Weak matroids over tracts

It is also of interest to consider objects which satisfy a weaker version of (GP), where we consider only the three-term Grassmann-Plücker relations. Specifically, a weak $ F$-matroid is a projective equivalence class of nonzero alternating functions $ \varphi : E^r \to F$ such that (a) $ {\mathbf B}_\varphi$ is the set of bases for a (classical) matroid on $ E$, and (b) $ \varphi$ satisfies (GP) for all $ X,Y$ with $ |X \backslash Y| = 3$.

It will turn out that in Propositions 1-4 above, strong and weak $ F$-matroids are the same.

Circuit axioms for matroids over tracts

Let $ {\mathcal C}$ be a collection of pairwise incomparable nonempty subsets of $ E$. We say that $ C_1,C_2 \in {\mathcal C}$ form a modular pair in $ {\mathcal C}$ if $ C_1 \neq C_2$ and $ C_1 \cup C_2$ does not properly contain a union of two distinct elements of $ {\mathcal C}$.

If $ F$ is a tract and $ X \in F^m$, we write $ \underline{X}$ for the support of $ X$, i.e., the set of $ i \in \{ 1,\ldots,m \}$ such that $ X_i \neq 0$. If $ {\mathcal C} \subset F^m$ and $ X,Y \in {\mathcal C}$, we say that $ X,Y$ are a modular pair in $ {\mathcal C}$ if $ \underline{X},\underline{Y}$ are a modular pair in $ \underline{\mathcal C} := \{ \underline{X} \; : \; X \in {\mathcal C} \}.$
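These definitions translate directly into code. A minimal sketch (the helper names and the example signatures are mine; the example vectors are plausible signed circuits of $ U_{1,3}$ over the hyperfield of signs), with vectors in $ F^m$ given as tuples:

```python
# Sketch: supports and modular pairs, with vectors in F^m given as tuples.

def support(X):
    """Indices where X is nonzero."""
    return frozenset(i for i, x in enumerate(X) if x != 0)

def is_modular_pair(X, Y, circuits):
    """supp(X) and supp(Y) must differ, and supp(X) u supp(Y) must not
    properly contain the union of two distinct supports from the family."""
    sX, sY = support(X), support(Y)
    if sX == sY:
        return False
    union = sX | sY
    supports = {support(Z) for Z in circuits}
    return not any(A != B and (A | B) < union   # '<' is proper containment
                   for A in supports for B in supports)

# Example signatures with supports {0,1}, {1,2}, {0,2}:
C = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]
print(is_modular_pair(C[0], C[1], C))
```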

Theorem 1: Let $ F$ be a tract and let $ E = \{ 1,\ldots,m \}$. There is a natural bijection between weak $ F$-matroids on $ E$ and collections $ {\mathcal C} \subset F^m$ satisfying:

(C0) $ 0 \not\in {\mathcal C}$.
(C1) If $ X \in {\mathcal C}$ and $ \alpha \in F^\times$, then $ \alpha \cdot X \in {\mathcal C}$.
(C2) [Incomparability] If $ X,Y \in {\mathcal C}$ and $ \underline{X} \subseteq \underline{Y}$, then there exists $ \alpha \in F^\times$ such that $ X = \alpha \cdot Y$.
(C3) [Modular Elimination] If $ X,Y \in {\mathcal C}$ are a modular pair in $ {\mathcal C}$ and $ e \in E$ is such that $ X_e= -Y_e \neq 0$, there exists $ Z \in {\mathcal C}$ such that $ Z_e=0$ and $ X_f + Y_f - Z_f \in N_G$ for all $ f \in E$.

We call $ {\mathcal C}$ the set of $ F$-circuits of the weak $ F$-matroid $ M$.

In [BB], there is also a stronger version of the circuit elimination axiom (C3) which gives a cryptomorphic characterization of strong $ F$-matroids in terms of circuits. Let’s say that a family of atomic elements of a lattice is modular if the height of their join in the lattice is the same as the size of the family. If $ \mathcal C$ is a subset of $ F^m$, a modular family of elements of $ \mathcal C$ is one such that the supports give a modular family of elements in the lattice of unions of supports of elements of $ \mathcal C$.

Then there is a natural bijection between strong $ F$-matroids on $ E$ and collections $ {\mathcal C} \subset F^m$ satisfying (C0),(C1),(C2), and the following stronger version of the modular elimination axiom (C3):

[Strong modular elimination] Suppose $ X_1,\ldots,X_k$ and $ X$ are $ F$-circuits of $ M$ which together form a modular family of size $ k+1$ such that $ \underline X \not \subseteq \bigcup_{1 \leq i \leq k} \underline X_i$, and for $ 1 \leq i \leq k$ let $ e_i \in (X \cap X_i) \setminus \bigcup_{\substack{1 \leq j \leq k \\ j \neq i}} X_j$ be such that $ X(e_i) = -X_i(e_i) \neq 0$. Then there is an $ F$-circuit $ Z$ such that $ Z(e_i) = 0$ for $ 1 \leq i \leq k$ and $ X_1(f) + \cdots + X_k(f) + X(f) - Z(f) \in N_G$ for every $ f \in E$.


There is a duality theory for matroids over tracts which generalizes the known duality theories for matroids, oriented matroids, valuated matroids, etc., and which corresponds to orthogonal complementation for matroids over fields.

Let $ F$ be a tract. The inner product of two vectors $ X,Y \in F^m$ is $ X \cdot Y := \sum_{i=1}^m X_i \cdot Y_i.$ We call $ X$ and $ Y$ orthogonal, denoted $ X \perp Y$, if $ X \cdot Y \in N_G$. If $ S \subset F^m$, we denote by $ S^\perp$ the orthogonal complement of $ S$, i.e., the set of all $ X \in F^m$ such that $ X \perp Y$ for all $ Y \in S$.
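Over the hyperfield of signs, for instance, $ X \cdot Y \in N_G$ means the termwise products are all zero or include both a positive and a negative term. A sketch (the particular circuit/cocircuit pair below is my own example, taken from the rank-2 oriented matroid of the vectors $ e_1, e_2, e_1+e_2$):

```python
# Sketch: orthogonality over the hyperfield of signs. A formal sum of signs
# is null iff it has no nonzero term or has terms of both signs.

def is_null(terms):
    nonzero = [t for t in terms if t != 0]
    return not nonzero or (1 in nonzero and -1 in nonzero)

def perp(X, Y):
    """X perp Y iff the formal sum of the termwise products is null."""
    return is_null([x * y for x, y in zip(X, Y)])

# Signed circuit X (from the relation e1 + e2 - (e1+e2) = 0) and signed
# cocircuit Y (values of the second coordinate functional on the vectors):
X = (1, 1, -1)
Y = (0, 1, 1)
print(perp(X, Y))  # True
```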

If $ M$ is a (weak or strong) $ F$-matroid on $ E$ whose collection of circuits is denoted $ {\mathcal C}$, then $ \underline{M} := \{ \underline{X} \; : \; X \in {\mathcal C} \}$ is the collection of circuits of a matroid in the usual sense on $ E$, which we call the underlying matroid of $ M$.

Theorem 2: Let $ F$ be a tract, and let $ M$ be a (weak, resp. strong) $ F$-matroid of rank $ r$ on $ E=\{ 1,\ldots,m \}$ with $ F$-circuit set $ {\mathcal C}$ and Grassmann-Plücker function $ \varphi.$
Then there is a (weak, resp. strong) $ F$-matroid $ M^*$ of rank $ m-r$ on $ E$, called the dual $ F$-matroid of $ M$, with the following properties:

1. The $ F$-circuits of $ M^*$ are the elements of $ {\mathcal C}^\perp$ of minimal non-empty support.

2. $ M^*$ has the Grassmann-Plücker function $ \varphi^*(x_1,\ldots,x_{m-r}) = {\rm sign}(x_1,\ldots,x_{m-r},x_1',\ldots,x_r') \varphi(x_1',\ldots,x_r'),$ where $ x_1',\ldots,x_r'$ is any ordering of $ E \backslash \{ x_1,\ldots,x_{m-r} \}.$

3. The underlying matroid of $ M^*$ is the dual of the underlying matroid of $ M$.

4. $ M^{**} = M$.

The deepest part of Theorem 2 is the fact that the elements of $ {\mathcal C}^\perp$ of minimal non-empty support satisfy the circuit elimination axiom (C3) (or its strong variant).

Dual pair axioms for matroids over tracts

One can give another cryptomorphic characterization of $ F$-matroids using the notion of dual pairs. It is perhaps the simplest of all descriptions of matroids over tracts, but it presupposes that one already knows what a (usual) matroid is.

Let $ M$ be a (classical) matroid on $ E$. We call a subset $ {\mathcal C}$ of $ F^m$ an $ F$-signature of $ M$ if it is closed under multiplication by nonzero elements of $ F$ and $ \underline{\mathcal C} := \{ \underline{X} \; : \; X \in {\mathcal C} \}$ is the set of circuits of $ M$.

We call $ ({\mathcal C},{\mathcal D})$ a dual pair of $ F$-signatures of $ M$ if:

(DP1) $ {\mathcal C}$ is an $ F$-signature of $ M$.
(DP2) $ {\mathcal D}$ is an $ F$-signature of the dual matroid $ M^*$.
(DP3) $ {\mathcal C} \perp {\mathcal D}$, meaning that $ X \perp Y$ for all $ X \in {\mathcal C}$ and $ Y \in {\mathcal D}$.

Theorem 3: Let $ F$ be a tract and let $ E = \{ 1,\ldots,m \}$. There is a natural bijection between strong $ F$-matroids on $ E$ and matroids $ M$ on $ E$ together with a dual pair of $ F$-signatures of $ M$.

An analogous theorem holds for weak $ F$-matroids if we only require that (DP3) holds when $ |\underline{X} \cap \underline{Y}| \leq 3$.

Vectors, perfection, and double distributivity

Given a tract $ F$ and a strong $ F$-matroid $ M$ on $ E$ with $ F$-circuit set $ \mathcal C$ and $ F$-cocircuit set $ \mathcal C^*$, a vector of $ M$ is an element of $ F^m$ which is orthogonal to everything in $ \mathcal C^*$. Similarly a covector of $ M$ is an element of $ F^m$ which is orthogonal to everything in $ \mathcal C$. We say that $ F$ is perfect if, for any strong matroid $ M$ over $ F$, all vectors are orthogonal to all covectors.

Theorem 4: Let $ F$ be a tract.

1. If $ F$ is perfect, then the notions of weak and strong $ F$-matroids coincide.

2. Every doubly distributive partial hyperfield is perfect.

As a consequence, we see that the notions of weak and strong $ F$-matroids coincide for doubly distributive partial hyperfields $ F$. This holds, in particular, when $ F=P$ is a partial field or when $ F$ is the Krasner hyperfield, the tropical hyperfield, or the hyperfield of signs.

There are examples in [BB] which show that weak and strong $ F$-matroids do not coincide when $ F$ is either the triangle hyperfield or the phase hyperfield.

Some directions for future research

There are many things one would like to know about matroids over tracts which aren’t yet well-understood. Here are some concrete problems which come to mind:

  1. Does the theory developed in [BB] also work for tracts in which multiplication is not assumed to be commutative? It would be nice, for example, if one could fold the theory of representations of matroids over skew partial fields (as developed in this paper by Pendavingh and van Zwam) into our framework.
  2. Laura Anderson has developed a theory of vectors and covectors for matroids over hyperfields. It would be interesting to resolve some of the conjectures in her paper, for example the Elimination Conjecture 7.2 or Conjecture 6.4 concerning flats.
  3. Can one extend the Lift Theorem from this paper by Pendavingh and van Zwam to matroids over tracts?
  4. Can one give an “algebraic” characterization of perfect tracts?

We can make Problem #4 a bit more precise. For any perfect tract, the matroids which are strongly representable over the tract are the same as the weakly representable ones, so they are determined by the set $ N_G^{(3)}$ of elements of $ N_G$ which are sums of at most 3 terms. On the other hand, any set of 3-term relations in $ N_G$ can be extended to a perfect tract by just including everything of size at least 4 in $ N_G$. The problem is that this includes a lot of extra stuff in $ N_G$ which might not be needed for strong representability. This motivates looking at the smallest tract extending $ N_G^{(3)}$ which could possibly be perfect, namely the set of all combinations which appear as sums in the GP relations (or the orthogonality relations) for weak matroids over $ N_G^{(3)}$. Could it be that if this tract is perfect then it must satisfy certain algebraic conditions which are also sufficient to guarantee perfection?

Lots of Ingleton Matroids


In 1971, Aubrey Ingleton [1] showed that the rank function of representable matroids satisfies additional linear inequalities that do not follow from the usual rank axioms. The inequality he observed now bears his name. It states that, for all sets $A,B,C,D$ in a representable matroid $M$,
\[r(A\cup B) + r(A \cup C) + r(A \cup D) + r(B \cup C) + r(B \cup D) \ge r(A) + r(B) + r(A \cup B \cup C) + r(A \cup B \cup D) + r(C \cup D).\]

As difficult to typeset as it may be, this is an intriguing fact. The problem of characterizing which matroids are representable is difficult; various authors [2][3] have considered whether it is possible to extend one of the usual axiom systems in a natural way to capture precisely the representable matroids. Ingleton’s inequality suggests the kind of extra axiom that might need to be added to the rank axioms when thinking along these lines. Perhaps more importantly, the Ingleton inequality gives a concise way to certify that a matroid is not representable; if $(A,B,C,D)$ is a $4$-tuple in a matroid $M$ violating the inequality, then $M$ is non-representable. This has been useful to many authors who wish to construct non-representable matroids, in particular in a result of Mayhew, Newman, Welsh and Whittle [4] that constructs a very rich family of excluded minors for the class of real-representable matroids, all violating Ingleton’s inequality.

Finally, the Ingleton inequality is closely related to everyone’s favourite non-representable matroid, the Vámos matroid $V_8$. This rank-$4$ matroid has eight elements and ground set $E$ that is the disjoint union of four two-element sets $P_1,P_2,P_3,P_4$, in which the dependent four-element sets are precisely the pairs $P_i \cup P_j$ with $i < j$ and $(i,j) \ne (3,4)$. The tuple $(A,B,C,D) = (P_1,P_2,P_3,P_4)$ violates the inequality in this case: the left-hand and right-hand sides are $15$ and $16$ respectively. On the other hand, satisfying Ingleton’s inequality for all choices of $A,B,C,D$ is not enough to imply representability; for example, the non-Desargues matroid satisfies Ingleton’s inequality but is still non-representable, as is the direct sum of the Fano and non-Fano matroids.
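The computation for $V_8$ can be replicated mechanically. A sketch (my own code) using the rank function of $V_8$: sets of size at most $3$ are independent, a $4$-set has rank $3$ exactly when it is one of the five dependent $4$-sets, and every larger set has rank $4$.

```python
# Sketch: evaluate both sides of Ingleton's inequality for the Vamos
# matroid at (A,B,C,D) = (P1,P2,P3,P4).
from itertools import combinations

P1, P2, P3, P4 = [frozenset(s) for s in ({0,1}, {2,3}, {4,5}, {6,7})]
pairs = [P1, P2, P3, P4]
dep4 = {pairs[i] | pairs[j] for i, j in combinations(range(4), 2)
        if (i, j) != (2, 3)}          # the five dependent 4-sets of V_8

def rank(S):
    """Rank in V_8: only the five sets in dep4 are dependent 4-sets."""
    S = frozenset(S)
    if len(S) <= 3:
        return len(S)
    if len(S) == 4:
        return 3 if S in dep4 else 4
    return 4

lhs = rank(P1|P2) + rank(P1|P3) + rank(P1|P4) + rank(P2|P3) + rank(P2|P4)
rhs = rank(P1) + rank(P2) + rank(P1|P2|P3) + rank(P1|P2|P4) + rank(P3|P4)
print(lhs, rhs)   # 15 16: Ingleton's inequality fails
```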


This post is about a recent result [7] I’ve obtained with Jorn van der Pol that answers what we consider to be a natural question: how `close’ is the class of matroids that satisfy Ingleton’s inequality to the class of representable matroids? For convenience, we will call a matroid Ingleton if Ingleton’s inequality is satisfied for all choices of $(A,B,C,D)$. It might not be immediately clear what the answer should be. Some (weak) positive evidence is the fact that the Ingleton matroids are closed under minors and duality (see [5], Lemmas 3.9 and 4.5) and that $V_8$ is non-Ingleton. However, I think the natural educated guess goes in the other direction; Ingleton’s inequality seems too coarse to capture, even approximately, a notion as intricate as linear representability. Indeed, Mayhew, Newman and Whittle [3] have shown that it is impossible to define the class of representable matroids by adding any finite list of rank inequalities to the usual rank axioms, let alone a single inequality.

Our result confirms this suspicion.

Theorem 1: For all sufficiently large $n$, the number of Ingleton matroids with ground set $\{1,\dotsc,n\}$ is at least $2^{\frac{1.94 \log n}{n^2}\binom{n}{n/2}}$.

To view this result in context, the lower bound should be compared to the counts of representable matroids and of all matroids. On a fixed ground set $[n] = \{1,\dotsc, n\}$, the number of representable matroids is at most $2^{n^3/4}$ for all $n \ge 12$ [6], and the number of matroids is at least $2^{\tfrac{1}{n}\binom{n}{n/2}}$. (Both bounds are in fact correct counts up to a constant factor in the exponent.) The first expression is singly exponential in $n$, having the form $2^{\mathrm{poly}(n)}$, while the second is doubly exponential, having the form $2^{2^n/\mathrm{poly}(n)}$. The lower bound in our theorem is of the second type, showing that the Ingleton matroids asymptotically dwarf the representable matroids in number. In other words, knowing that a matroid is Ingleton tells you essentially nothing about whether it is representable. (In fact, our techniques show that the number of rank-$4$ Ingleton matroids on $[n]$ is asymptotically larger than the number of all representable matroids on $[n]$, which seems surprising.) The ideas in the proof of the above theorem are simple, and we obtain a nice excluded minor result on the way; I briefly sketch them below. For the full proof, see [7].

Ingleton Sparse Paving Matroids

Our proof actually constructs a large number of Ingleton matroids of a specific sort: sparse paving. These matroids, which have come up in previous matroidunion posts, play a very special role in the landscape of all matroids; they are matroids that, while having a somewhat trivial structure, are conjectured to comprise almost all matroids. For the definition, it is easier to talk about the nonbases of a matroid than its bases. Let $\binom{[n]}{r}$ denote the collection of $r$-element subsets of $[n]$. Given a rank-$r$ matroid $M$ on $[n]$, call a dependent set in $\binom{[n]}{r}$ a nonbasis of $M$.

A rank-$r$ matroid $M$ is sparse paving if for any two nonbases $U_1,U_2$ of $M$, we have $|U_1 \cap U_2| < r-1$. Equivalently, no two nonbases of $M$ differ by a single exchange, or every nonbasis of $M$ is a circuit-hyperplane of $M$. In fact, this condition itself implies that the matroid axioms are satisfied; given any collection $\mathcal{K}$ of $r$-element subsets of $[n]$, if no two sets in $\mathcal{K}$ intersect in exactly $r-1$ elements, then $\mathcal{K}$ is the set of nonbases of a sparse paving matroid on $[n]$. Thus, an easy way to guarantee that a set $\mathcal{K}$ is actually the set of nonbases of a matroid is to prove that no two of its members intersect in exactly $r-1$ elements.
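The intersection criterion is a one-liner to verify. A sketch (function and variable names mine), tested on the five nonbases of the Vámos matroid:

```python
# Sketch: a family of r-sets is the nonbasis family of a sparse paving
# matroid iff no two of its members meet in exactly r - 1 elements.
from itertools import combinations

def is_sparse_paving_family(family, r):
    family = [frozenset(U) for U in family]
    assert all(len(U) == r for U in family)
    return all(len(U1 & U2) < r - 1 for U1, U2 in combinations(family, 2))

pairs = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
vamos_nonbases = [set(pairs[i]) | set(pairs[j])
                  for i, j in combinations(range(4), 2) if (i, j) != (2, 3)]
print(is_sparse_paving_family(vamos_nonbases, 4))          # True
print(is_sparse_paving_family([{0,1,2,3}, {0,1,2,4}], 4))  # False
```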

Our key lemma gives a simpler way to understand Ingleton’s inequality for sparse paving matroids. In general, it is very hard to mentally juggle the $A$’s, $B$’s, $C$’s and $D$’s while working with the inequality, but for sparse paving matroids, things are much simpler.

Lemma 1: If $M$ is a rank-$r$ sparse paving matroid, then $M$ is Ingleton if and only if there do not exist pairwise disjoint sets $P_1,P_2,P_3,P_4,K$ where $|P_1| = |P_2| = |P_3| = |P_4| = 2$ and $|K|=r-4$, such that the five $r$-element sets $K \cup P_i \cup P_j$ with $i < j$ and $(i,j) \ne (3,4)$ are nonbases, while the set $K \cup P_3 \cup P_4$ is a basis.

This statement may look technical, but it should also look familiar. If $P_1,P_2,P_3,P_4,K$ are sets as above, then the minor $N = (M / K)|(P_1\cup P_2 \cup P_3 \cup P_4)$ is an eight-element sparse paving matroid having a partition $(P_1,P_2,P_3,P_4)$ into two-element sets, where precisely five of the six sets $P_i \cup P_j$ are nonbases, and the last is a basis. This is a structure very similar to that of the Vámos matroid. Call an eight-element, rank-$4$ matroid $N$ having such a property Vámos-like. (Such an $N$ need not be precisely the Vámos matroid, as there may be four-element sets of other forms that are also nonbases of $N$). In any Vámos-like matroid, $(A,B,C,D) = (P_1,P_2,P_3,P_4)$ will violate Ingleton’s inequality. We can restate Lemma 1 as follows.

Lemma 1 (simplified): If $M$ is a sparse paving matroid, then $M$ is Ingleton if and only if $M$ has no Vámos-like minor.
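In rank $4$ the set $K$ in Lemma 1 is empty, so the condition can be checked by brute force over quadruples of disjoint pairs. A sketch (function and variable names mine):

```python
# Sketch: the Lemma 1 test for a rank-4 sparse paving matroid, given its
# set of nonbases (K is empty when r = 4).
from itertools import combinations

def is_ingleton_rank4(nonbases, ground):
    pairs = [frozenset(p) for p in combinations(ground, 2)]
    for quad in combinations(pairs, 4):
        if len(frozenset().union(*quad)) != 8:
            continue                          # the four pairs must be disjoint
        for P3, P4 in combinations(quad, 2):  # choose the roles of P3, P4
            P1, P2 = (p for p in quad if p not in (P3, P4))
            five = [P1 | P2, P1 | P3, P1 | P4, P2 | P3, P2 | P4]
            if all(s in nonbases for s in five) and (P3 | P4) not in nonbases:
                return False                  # Vamos-like configuration found
    return True

pairs = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
vamos_nb = {frozenset(pairs[i]) | frozenset(pairs[j])
            for i, j in combinations(range(4), 2) if (i, j) != (2, 3)}
print(is_ingleton_rank4(vamos_nb, range(8)))  # False: V_8 is not Ingleton
```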

There are evidently only finitely many Vámos-like matroids, since they have eight elements; in fact, thanks to Dillon Mayhew and Gordon Royle’s excellent computational work [8], we know all $39$ of them; as well as the Vámos matroid itself, they include the matroid AG$(3,2)^-$ obtained by relaxing a circuit-hyperplane of the rank-$4$ binary affine geometry. It is easy to show that the sparse paving matroids are themselves a minor-closed class with excluded minors $U_{1,1} \oplus U_{0,2}$ and $U_{0,1} \oplus U_{2,2}$. Combined with Lemma 1, this gives us a nice excluded minor theorem:

Theorem 2: There are precisely $41$ excluded minors for the class of Ingleton sparse paving matroids: the $39$ Vámos-like matroids, as well as $U_{1,1} \oplus U_{0,2}$ and $U_{0,1} \oplus U_{2,2}$.

The Proof

Armed with Lemma 1, we can now take a crack at proving our main theorem. For simplicity, we will prove a slightly weaker result, with a worse constant of $0.2$ and no logarithmic factor in the exponent. The stronger result is obtained by doing some counting tricks a bit more carefully. The good news is that the proof of the weaker theorem is short enough to fit completely in this post.

Theorem 3: For all sufficiently large $n$, there are at least $2^{0.2n^{-2}\binom{n}{n/2}}$ Ingleton sparse paving matroids with ground set $[n]$.

Let $n$ be large and let $r = \left\lfloor n/2 \right\rfloor$ and $N = \binom{n}{r}$. Let $c = 0.4$. We will take a uniformly random subset $\mathcal{X}$ of $\left\lfloor c n^{-2}\binom{n}{r}\right\rfloor$ of the $\binom{n}{r}$ sets in $\binom{[n]}{r}$. We then hope that $\mathcal{X}$ is the set of nonbases of an Ingleton sparse paving matroid. If it is not, we remove some sets in $\mathcal{X}$ so that it is.

We consider two possibilities that, together, encompass both ways $\mathcal{X}$ can fail to be the set of nonbases of an Ingleton sparse paving matroid. They are:

  1. $\mathcal{X}$ is not the set of nonbases of a sparse paving matroid. (That is, there are sets $U_1,U_2 \in \mathcal{X}$ whose intersection has size $r-1$.)
  2. $\mathcal{X}$ is the set of nonbases of a sparse paving matroid, but this matroid fails to be Ingleton. (That is, there are pairwise disjoint subsets $P_1,P_2,P_3,P_4,K$ of $[n]$ with $|P_1| = |P_2| = |P_3| = |P_4| = 2$ and $|K|=r-4$, such that at least five of the six $r$-element sets $K \cup P_i \cup P_j$, $1 \le i < j \le 4$, are nonbases.)

Let $a(\mathcal{X}),b(\mathcal{X})$ denote the number of times each of these two types of failure occurs. The condition in (2) is slightly coarser than Lemma 1 requires ("at least five" rather than "exactly five"), for reasons we will see in a minute. So $a(\mathcal{X})$ is the number of pairs $(U_1,U_2)$ with $|U_1 \cap U_2| = r-1$ and $U_1,U_2 \in \mathcal{X}$, and $b(\mathcal{X})$ is the number of $5$-tuples $(P_1,P_2,P_3,P_4,K)$ satisfying the condition in (2).
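The sample-then-prune strategy can be sketched in code. This toy version (my own illustration, not from the paper) handles only failures of type (1), deleting one set from each bad pair to leave the nonbases of a sparse paving matroid:

```python
import random
from itertools import combinations

def sampled_sparse_paving_nonbases(n, c=0.4, seed=0):
    """Toy sketch: sample X of size ~ c * n^-2 * binom(n, r) uniformly,
    then delete one set from each pair meeting in r - 1 elements."""
    random.seed(seed)
    r = n // 2
    r_sets = [frozenset(s) for s in combinations(range(n), r)]
    size = max(1, int(c * n ** -2 * len(r_sets)))
    X = set(random.sample(r_sets, size))
    for U1, U2 in combinations(list(X), 2):
        if U1 in X and U2 in X and len(U1 & U2) == r - 1:
            X.discard(U2)
    # no two remaining sets meet in r - 1 elements, so X is the set of
    # nonbases (circuit-hyperplanes) of a sparse paving matroid
    return X
```

The full argument additionally prunes two sets from each type-(2) failure, which the expectation bound below shows costs at most half of $\mathcal{X}$.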

Claim: If $n$ is large, then $\mathbf{E}(a(\mathcal{X}) + 2b(\mathcal{X})) < \tfrac{1}{2}|\mathcal{X}|$.

Proof of claim: The number of pairs $U_1,U_2$ intersecting in $r-1$ elements is at most $\binom{n}{r}r(n-r) < n^2\binom{n}{r}$. For each such pair, the probability that $U_1,U_2 \in \mathcal{X}$ is at most $(|\mathcal{X}|/\binom{n}{r})^2 \le c^2n^{-4}$. Therefore \[\mathbf{E}(a(\mathcal{X})) \le c^2n^{-2}\binom{n}{r} = (1+o(1))c|\mathcal{X}|.\]

The number of $5$-tuples $(P_1,P_2,P_3,P_4,K)$ of pairwise disjoint sets with $|K| = r-4$ and $|P_i| = 2$ is at most $\binom{n}{2}^4\binom{n-8}{r-4} < \tfrac{n^8}{16}\binom{n}{r}$. For each such tuple, the probability that at least five of the sets $K \cup P_i \cup P_j$ lie in $\mathcal{X}$ is at most $6(|\mathcal{X}|/\binom{n}{r})^5 \le 6c^5n^{-10}$. Therefore
\[\mathbf{E}(b(\mathcal{X})) \le \frac{n^8}{16}\binom{n}{r}\cdot 6c^5 n^{-10} = (1+o(1))\tfrac{3}{8}c^4|\mathcal{X}|.\]
By linearity of expectation, the claim now holds since $c = 0.4$ gives $c + \tfrac{3}{4}c^4 < \tfrac{1}{2}$.
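As a quick sanity check on the choice of constant (just the arithmetic, spelled out):

```python
# E(a + 2b) is asymptotically (c + (3/4) c^4) |X|; with c = 0.4 this
# coefficient is 0.4 + 0.75 * 0.0256 = 0.4192, safely below 1/2.
c = 0.4
coefficient = c + 0.75 * c ** 4
assert coefficient < 0.5
```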

Now, let $\mathcal{X}_0$ be a set of size $\left\lfloor c n^{-2}\binom{n}{r}\right\rfloor$ for which $a(\mathcal{X}_0) + 2b(\mathcal{X}_0) < \tfrac{1}{2}|\mathcal{X}_0|$; the claim guarantees that one exists. We now remove from $\mathcal{X}_0$ one of the two sets $U_1,U_2$ for each pair contributing to $a(\mathcal{X}_0)$, and two of the sets $K \cup P_i \cup P_j$ for each tuple $(P_1,P_2,P_3,P_4,K)$ contributing to $b(\mathcal{X}_0)$. This leaves a subset $\mathcal{X}_0'$ of $\mathcal{X}_0$ of size at least $\tfrac{1}{2}|\mathcal{X}_0| \approx \tfrac{1}{2}cn^{-2}\binom{n}{r} = 0.2n^{-2}\binom{n}{r}$. By construction and Lemma 1, $\mathcal{X}_0'$ is the set of nonbases of a rank-$r$ Ingleton sparse paving matroid on $[n]$. However (and this is where condition (2) needed to be slightly coarser than Lemma 1 requires), so is every subset of $\mathcal{X}_0'$. This puts the size of $\mathcal{X}_0'$ in the exponent; there are therefore at least $2^{0.2n^{-2}\binom{n}{n/2}}$ Ingleton sparse paving matroids on $[n]$, as required. $\square$


  1. A.W. Ingleton. Representation of matroids. In D.J.A. Welsh, editor, Combinatorial mathematics and its applications (Proceedings of a conference held at the Mathematical Institute, Oxford, from 7-10 July, 1969). Academic Press, 1971.
  2. P. Vámos. The missing axiom of matroid theory is lost forever. J. London Math. Soc. 18 (1978), 403-408.
  3. D. Mayhew, M. Newman and G. Whittle. Yes, the “missing axiom” of matroid theory is lost forever. arXiv:1412.8399.
  4. D. Mayhew, M. Newman and G. Whittle. On excluded minors for real-representability. J. Combin. Theory Ser. B 66 (2009), 685-689.
  5. A. Cameron. Kinser inequalities and related matroids. Master’s Thesis, Victoria University of Wellington. Also available at arXiv:1401.0500.
  6. P. Nelson. Almost all matroids are non-representable. arXiv:1605.04288.
  7. P. Nelson and J. van der Pol. Doubly exponentially many Ingleton matroids. arXiv:1710.01924.
  8. D. Mayhew and G.F. Royle. Matroids with nine elements. J. Combin. Theory Ser. B 98 (2008), 882-890.

Clutters III

A long time ago I started a series of posts about clutters. The most recent post followed the textbook by Gérard Cornuéjols in defining several important classes, and introduced a Venn diagram showing the relationships between them.


In this post we will spend a little more time discussing this diagram.

We let $H=(S,\mathcal{A})$ be a clutter. This means that $S$ is a finite set, and the members of $\mathcal{A}$ are subsets of $S$, none of which is properly contained in another. We let $M$ stand for the incidence matrix of $H$. This means that the columns of $M$ are labelled by the elements of $S$, and the rows are labelled by members of $\mathcal{A}$, where an entry of $M$ is one if and only if the corresponding element of $S$ is contained in the corresponding member of $\mathcal{A}$. Any entry of $M$ that is not one is zero. Let $w$ be a vector in $\mathbb{R}^{S}$ with non-negative values. We have two fundamental linear programs:

(1) Find $x\in \mathbb{R}^{S}$ that minimises $w^{T}x$ subject to the constraints $x\geq \mathbf{0}$ and $Mx\geq \mathbf{1}$.

The vectors $\mathbf{0}$ and $\mathbf{1}$ have all entries equal to zero and one, respectively. When we write that real vectors $a$ and $b$ with the same number of entries satisfy $a\geq b$, we mean that each entry of $a$ is at least equal to the corresponding entry in $b$.

(2) Find $y\in\mathbb{R}^{\mathcal{A}}$ that maximises $y^{T}\mathbf{1}$ subject to the constraints $y\geq \mathbf{0}$ and $y^{T}M\leq w$.
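To make (1) and (2) concrete, here is a small worked example with a clutter of my own choosing (the "triangle": $S=\{1,2,3\}$ with members $\{1,2\},\{2,3\},\{3,1\}$) and $w=\mathbf{1}$. By weak LP duality, any feasible $y$ for (2) satisfies $y^{T}\mathbf{1} \le w^{T}x$ for any feasible $x$ for (1), so exhibiting feasible solutions with equal objective values certifies that both are optimal:

```python
from fractions import Fraction as F

S = [1, 2, 3]
A = [{1, 2}, {2, 3}, {3, 1}]
# incidence matrix: rows indexed by members of A, columns by elements of S
M = [[1 if s in member else 0 for s in S] for member in A]

w = [F(1)] * 3
x = [F(1, 2)] * 3   # candidate solution to (1)
y = [F(1, 2)] * 3   # candidate solution to (2)

# feasibility: Mx >= 1 and y^T M <= w (with x, y >= 0)
assert all(sum(M[i][j] * x[j] for j in range(3)) >= 1 for i in range(3))
assert all(sum(y[i] * M[i][j] for i in range(3)) <= w[j] for j in range(3))

# equal objective values: both are 3/2, so each solution is optimal
assert sum(w[j] * x[j] for j in range(3)) == sum(y) == F(3, 2)
```

Note that the common optimum $3/2$ is fractional: for this clutter neither program has an integral optimal solution of value $3/2$, which is exactly the kind of integrality failure the definitions below are probing.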

Lemma 1. Any clutter with the Max Flow Min Cut property also has the packing property.

Proof. Let $H$ be a clutter with the Max Flow Min Cut property. This means that for any choice of vector $w$ with non-negative integer entries, the programs (1) and (2) both have optimal solutions with integer values.

We will show that $H$ has the packing property. According to the definition in the literature, this means that for any choice of vector $w$ with entries equal to $0$, $1$, or $+\infty$, there are optimal solutions to (1) and (2) with integer entries. I think there is a problem with this definition. Assume that $r$ is a row of $M$, and every member of the support of $r$ receives a weight of $+\infty$ in the vector $w$. Then (2) cannot have an optimal solution: if $y$ is a purported optimal solution, we can improve it by adding $1$ to the entry of $y$ that corresponds to the row $r$. The convention is that if entry $i$ of $w$ is $+\infty$, then any solution $x$ to (1) must have entry $i$ equal to $0$. Again we have a problem, for if $r$ is a row of $M$ whose entire support is weighted $+\infty$, then (1) has no solution: if $x$ were a solution, it would be zero in every entry in the support of $r$, meaning that $Mx$ would have a zero entry.

The literature is unanimous in saying that $H$ has the packing property if (1) and (2) both have integral optimal solutions for any choice of vector $w$ with entries $0$, $1$, and $+\infty$. As far as I can see, this means that no clutter with $\mathcal{A}$ non-empty has the packing property: simply declare $w$ to be the vector with all entries equal to $+\infty$. Then neither (1) nor (2) has an optimal solution at all. I think the way to recover the definition is to say that whenever $w$ with entries $0$, $1$, or $+\infty$ is chosen in such a way that (1) and (2) have solutions, both have optimal solutions that are integral. This is the definition that I will use here.

After this detour, we return to our task, and assume that $H$ has the Max Flow Min Cut property. Assume that $w$ is a vector with entries equal to $0$, $1$, or $+\infty$, and that (1) and (2) both have solutions. This means that any row in $M$ has a member of its support which is not weighted $+\infty$ by $w$. We obtain the vector $u$ by replacing each $+\infty$ entry in $w$ with an integer that is greater than $|S|$. Now because $H$ has the Max Flow Min Cut property, it follows that there are optimal integral solutions, $x$ and $y$, to (1) and (2) (relative to the vector $u$). We will show that $x$ and $y$ are also optimal solutions to (1) and (2) relative to the vector $w$.

We partition $S$ into $S_{0}$, $S_{1}$, and $S_{+\infty}$ according to whether an element of $S$ receives a weight of $0$, $1$, or $+\infty$ in $w$. We have assumed that no member of $\mathcal{A}$ is contained in $S_{+\infty}$. By this assumption, the vector $z\in \mathbb{Z}^{S}$ that is zero in each entry of $S_{+\infty}$ and one everywhere else is a solution to (1). Moreover, $w^{T}z=u^{T}z\leq |S|$. It now follows that $x$ must be zero in every entry of $S_{+\infty}$, for otherwise $u^{T}x>|S|\geq u^{T}z$, and $x$ is not an optimal solution to (1) relative to $u$. Since $x$ is integral and optimal, we may assume that every entry of $x$ is either one or zero. If $x$ is not an optimal solution to (1) relative to $w$, then let $x'$ be an optimal solution with $w^{T}x' < w^{T}x$. By the convention above, $x'$ must be zero in every entry of $S_{+\infty}$. Therefore $u^{T}x'=w^{T}x' < w^{T}x=u^{T}x$, and we have a contradiction to the optimality of $x$. Thus $x$ is an optimal solution to (1) relative to the $\{0,1,+\infty\}$-vector $w$.

Now for problem (2). Since $y$ is integral and non-negative, and $y^{T}M\leq w$, where every member of $\mathcal{A}$ contains an element of $S_{0}$ or $S_{1}$, it follows that each entry of $y$ must be either one or zero. Let $z$ be any solution of (2) relative to $w$. Exactly the same argument shows that each entry of $z$ is between zero and one. Therefore $z^{T}\mathbf{1}\leq y^{T}\mathbf{1}$ so $y$ is an optimal solution to (2).

We have shown that relative to the vector $w$, both (1) and (2) have optimal solutions that are integral. Hence $H$ has the packing property. $\square$

Lemma 2. A clutter with the packing property packs.

Proof. This one is easy. In order to prove that $H$ packs, we merely need to show that (1) and (2) have optimal integral solutions when $w$ is the vector with all entries equal to one. But this follows immediately from our revised definition of clutters with the packing property. $\square$
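When $w=\mathbf{1}$, the integral versions of (1) and (2) compute the covering number $\tau(H)$ (the fewest elements meeting every member of $\mathcal{A}$) and the packing number $\nu(H)$ (the most pairwise disjoint members), and $H$ packs precisely when these are equal. A brute-force sketch (my own illustration, suitable only for tiny clutters):

```python
from itertools import combinations

def tau(S, A):
    # smallest subset T of S meeting every member of A
    for k in range(len(S) + 1):
        for T in combinations(S, k):
            if all(set(T) & set(member) for member in A):
                return k

def nu(A):
    # largest collection of pairwise disjoint members of A
    for k in range(len(A), 0, -1):
        for sub in combinations(A, k):
            if all(not (set(a) & set(b))
                   for a, b in combinations(sub, 2)):
                return k
    return 0
```

For instance, the triangle clutter with members $\{1,2\},\{2,3\},\{3,1\}$ has $\tau = 2$ but $\nu = 1$, so it does not pack.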

The final containment we should show is that clutters with the packing property are ideal. Idealness means that (1) has an optimal integral solution for every non-negative vector $w\in \mathbb{R}^{S}$. This proof is difficult, so I will leave it for a future post. Usually it is proved using a theorem due to Lehman [Leh].

Theorem (Lehman). The clutter $H$ is ideal if and only if (1) has an optimal integral solution for all choices of vector $w\in\{0,1,+\infty\}^{S}$.

Question. Is there a short proof that clutters with the packing property are ideal? One that does not rely on Lehman’s (quite difficult) theorem?

We will conclude with some examples showing that various containments are proper.

Let $C_{3}^{2}$ and $C_{3}^{2+}$ be the clutters with incidence matrices
Let $Q_{6}$ and $Q_{6}^{+}$ be the clutters with incidence matrices

Exercise. Check that:

  1. $C_{3}^{2}$ is not ideal and does not pack,
  2. $C_{3}^{2+}$ packs, but is not ideal,
  3. $Q_{6}$ is ideal, but does not pack,
  4. $Q_{6}^{+}$ is ideal and packs, but does not have the packing property.

This leaves one cell in the Venn diagram without a clutter: the clutters with the packing property that do not have the Max Flow Min Cut property. In fact, Conforti and Cornuéjols [CC] have conjectured that no such clutter exists.

Conjecture (Conforti and Cornuéjols). A clutter has the packing property if and only if it has the Max Flow Min Cut property.

[CC] M. Conforti and G. Cornuéjols, Clutters that Pack and the Max Flow Min Cut Property: A Conjecture, The Fourth Bellairs Workshop on Combinatorial Optimization, W.R. Pulleyblank and F.B. Shepherd eds. (1993).

[Leh] A. Lehman, On the width-length inequality. Mathematical Programming 16 (1979), 245–259.