Four proofs of a theorem by Vámos

Let $\chi(M)$ be the characteristic set of the matroid $M$; that is, \[\chi(M)=\{\operatorname{char}(\mathbb{F})\colon M\ \text{is representable over the field}\ \mathbb{F}\},\] where $\operatorname{char}(\mathbb{F})$ is the characteristic of $\mathbb{F}$. It is well known that $0\in\chi(M)$ if and only if $\chi(M)$ contains infinitely many primes. One direction is due to Rado [4]. The following theorem contains the other direction.

Theorem 1. If $\chi(M)$ contains infinitely many primes, then it contains zero.

One unusual aspect of Theorem 1 is that there are four proofs of it in the literature. All of them are attractive, and they call upon different (but related) sets of tools: Zorn’s lemma, compactness in first-order logic, ultraproducts, and Gröbner bases. In this post I will review all four proofs.

To start with, let $r$ be $r(M)$, and assume that $E(M)=\{1,\ldots, n\}$. Let $X$ be the set of variables $\{x_{i,j}\}_{1\leq i \leq r,\ 1\leq j \leq n}$, and let $R$ be the polynomial ring $\mathbb{Z}[X]$. We will let $A$ be the $r\times n$ matrix where the entry in row $i$ and column $j$ is $x_{i,j}$. Each $r$-element subset of $E(M)$ corresponds to an $r\times r$ submatrix of $A$, and thereby to a homogeneous polynomial in $R$, namely the determinant of that submatrix. Let $\mathcal{B}$ be the set of polynomials that correspond to bases of $M$, and let $\mathcal{N}$ contain the polynomials that correspond to dependent $r$-element subsets in $M$. Thus a representation of $M$ is a ring homomorphism from $R$ to a field that sends every polynomial in $\mathcal{N}$, but no polynomial in $\mathcal{B}$, to zero.
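To make the construction concrete, here is a small sketch in sympy (my own choice of example, not part of the original argument): the rank-2 matroid on $\{1,2,3\}$ in which elements 1 and 2 are parallel, so that the bases are $\{1,3\}$ and $\{2,3\}$, and $\{1,2\}$ is the only dependent $2$-element subset.

```python
# A sketch of the construction above for a tiny example (my own choice
# of matroid): the rank-2 matroid on {1,2,3} with 1 and 2 parallel.
from sympy import Matrix, Symbol

r, n = 2, 3
# the generic r x n matrix A, with the variable x_{i,j} in row i, column j
A = Matrix(r, n, lambda i, j: Symbol(f'x{i + 1}{j + 1}'))

def det_of(cols):
    # the determinant polynomial of the r x r submatrix on these columns
    return A[:, [c - 1 for c in cols]].det()

B = [det_of(c) for c in [(1, 3), (2, 3)]]  # bases: must map to non-zero
N = [det_of(c) for c in [(1, 2)]]          # dependent sets: must map to zero
```

Here `N[0]` is the polynomial $x_{1,1}x_{2,2}-x_{1,2}x_{2,1}$, which any representation must send to zero because $\{1,2\}$ is dependent.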

The first proof of Theorem 1 is due to Peter Vámos. He published two articles in the proceedings entitled Möbius Algebras, produced by the University of Waterloo in 1971. The proof of Theorem 1 has sometimes been cited as coming from [6], but in fact the correct attribution is to [5]. His proof hinges on the following lemma. Recall that a proper ideal, $P$, is prime if $ab\in P$ implies that $a\in P$ or $b\in P$.

Lemma 1. Let $I$ be an ideal in a commutative ring, $S$, and let $T$ be a multiplicatively closed subset of $S$ that is disjoint from $I$. Then there exists a prime ideal, $P$, such that $I\subseteq P$ and $P\cap T=\emptyset$.

Proof. We consider the partial order of ideals in $S$ that contain $I$ and that are disjoint from $T$. Assume that $J_{1}\subset J_{2}\subset J_{3}\subset \cdots$ is a chain in the order. It is easy to confirm that $J_{1}\cup J_{2}\cup J_{3}\cup \cdots$ is itself an ideal in the order. This means that every such chain of ideals has an upper bound in the order, so we can apply Zorn’s Lemma. Therefore the partial order has a maximal element, $P$. It remains to show that $P$ is a prime ideal. Assume otherwise, so that the product $ab$ belongs to $P$, even though $a\notin P$ and $b\notin P$. The maximality of $P$ means that the ideal generated by $P$ and $a$ is not disjoint from $T$, so there is some element, $ca+p$, in $T\cap \langle P, a\rangle$. (Here $p\in P$, and $c$ is an element of $S$.) Similarly, there are elements $q\in P$ and $d\in S$ such that $db+q$ belongs to $T$. Since $T$ is multiplicatively closed, we see that $(ca+p)(db+q)=(cd)(ab)+(ca)q+(db)p+pq$ is in $T$. But this element is also in $P$, since $ab\in P$. Now we have a contradiction to the fact that $P$ and $T$ are disjoint. $\square$

Proof 1. Let $p_{1},p_{2},p_{3},\ldots$ be distinct primes such that $M$ is representable over a field of characteristic $p_{i}$ for each $i$. Let $\phi_{i}$ be a homomorphism from $R$ to a field of characteristic $p_{i}$ that takes all the polynomials in $\mathcal{N}$, and no polynomial in $\mathcal{B}$, to zero. Then the kernel, $\ker(\phi_{i})$, of $\phi_{i}$ is a prime ideal of $R$ containing $\mathcal{N}$, but disjoint from $\mathcal{B}$. Moreover, the integer $p_{i}$ is contained in $\ker(\phi_{i})$. In fact, the only integers in $\ker(\phi_{i})$ are the multiples of $p_{i}$, since if $\ker(\phi_{i})$ contains a non-multiple of $p_{i}$, then it contains the greatest common divisor of that non-multiple and $p_{i}$ (namely $1$). This would imply that $\ker(\phi_{i})$ is equal to $R$, which is impossible.

Assume that there is an element in both the ideal $\langle \mathcal{N}\rangle$ and the multiplicatively closed set generated by $\mathcal{B}$ and the non-zero integers. Such an element is of the form $kf$, where $k$ is a non-zero integer, and $f$ is a product of polynomials from $\mathcal{B}$. Since $kf$ is in $\langle \mathcal{N}\rangle$, it is in each $\ker(\phi_{i})$, even though $f$ is not. As each $\ker(\phi_{i})$ is prime, $k$ is in each ideal $\ker(\phi_{i})$. Thus $k$ is a multiple of each of the distinct primes $p_{1},p_{2},p_{3},\ldots$, and since a non-zero integer has only finitely many prime divisors, we have a contradiction. Now, by Lemma 1, there is a prime ideal, $P$, of $R$ which contains $\langle \mathcal{N}\rangle$, and which avoids $\mathcal{B}$ and the non-zero integers. Because $P$ is prime, the quotient ring, $R/P$, is an integral domain. Let $F$ be the field of fractions of $R/P$. The natural homomorphism from $R$ to $F$ gives us a representation of $M$. As $P$ contains no non-zero integer, it follows that $R/P$ has characteristic zero, and we are done. $\square$

The next proof of Theorem 1 makes use of the Compactness Theorem in first-order logic. It is due to Samuel Wagstaff and appeared two years after Vámos’s proof, in 1973 [7]. Wagstaff does not supply many details — perhaps the account in his PhD thesis (on which the article is based) is more explicit. But I believe that my version below captures his key ideas.

Proof 2. We construct a first-order language of vector spaces. We have a countable supply of variables, $x_{1},x_{2},x_{3},\ldots$, and unary predicates $\operatorname{Scal}$ and $\operatorname{Vec}$. In our interpretation, these predicates will indicate that a variable stands for a scalar or a vector. We also have constants, $0_{S}$, $1_{S}$, and $0_{V}$, which are interpreted as the identities of the field of scalars and of the vector space. The functions $+_{S}$, $\times_{S}$, $+_{V}$, and $\times_{V}$ are interpreted as the binary operations of the field of scalars, as addition of vectors, and as scalar multiplication. The only binary relation we need is equality. Now, in this language, we can express the axioms for fields and for vector spaces. To illustrate, the compatibility of scalar and vector multiplication would be expressed with the sentence \begin{multline*}\forall x_{1}\forall x_{2}\forall x_{3} (\operatorname{Scal}(x_{1})\land \operatorname{Scal}(x_{2})\land \operatorname{Vec}(x_{3}) \to\\ ((x_{1}\times_{S}x_{2})\times_{V}x_{3}=x_{1}\times_{V}(x_{2}\times_{V}x_{3}))).\end{multline*} We can assert the representability of $M$ by requiring that there exist $x_{1},x_{2},\ldots, x_{n}$ such that $\operatorname{Vec}(x_{i})$ holds for each $i$. For each subset $X\subseteq E(M)=\{1,\ldots, n\}$, we include a sentence asserting that the corresponding subset of $\{x_{1},x_{2},\ldots, x_{n}\}$ is linearly independent if $X$ is independent in $M$, and otherwise asserting that it is linearly dependent. Let us illustrate how this can be accomplished by writing the formula which asserts that $\{x_{1},x_{2},x_{3}\}$ is linearly independent. Let $y_{1}, y_{2}, y_{3}$ be variables that are not used elsewhere in our sentence.
The formula we need is as follows: \begin{multline*}\forall y_{1}\forall y_{2}\forall y_{3} (\operatorname{Scal}(y_{1})\land \operatorname{Scal}(y_{2})\land \operatorname{Scal}(y_{3}) \land (y_{1}\ne 0_{S} \lor y_{2}\ne 0_{S} \lor y_{3}\ne 0_{S})\to\\ (y_{1}\times_{V}x_{1}) +_{V} (y_{2}\times_{V} x_{2}) +_{V} (y_{3}\times_{V}x_{3})\ne 0_{V}).\end{multline*} We simply perform this trick for each subset of $E(M)$.

Now we have a finite collection, $\mathcal{T}$, of sentences which can be satisfied if and only if $M$ is representable over a field. For each prime, $p$, let $S_{p}$ be the sentence asserting that the sum of $p$ copies of $1_{S}$ is equal to $0_{S}$. Thus $M$ is representable over a field of characteristic $p$ if and only if $\mathcal{T}\cup\{S_{p}\}$ can be satisfied. Assume that $M$ is not representable over a field with characteristic zero. Then the collection $\mathcal{T}\cup\{\neg S_{p}\colon p\ \text{is a prime}\}$ is inconsistent: it cannot be satisfied by any model. The Compactness Theorem tells us that there is a finite subcollection which is inconsistent. Therefore we can assume that $\mathcal{T}\cup\{\neg S_{p} \colon p\ \text{is a prime and}\ p\leq N\}$ has no model, for some integer $N$. This implies that there can be no representation of $M$ over a field with characteristic greater than $N$, so we have proved the contrapositive of Theorem 1. $\square$

The next proof appeared in 1989, and comes courtesy of Imre Leader [3]. I think it might be my favourite out of the four. It uses the notion of an ultraproduct.

Proof 3. Let $F_{1},F_{2},F_{3},\ldots$ be an infinite sequence of fields such that $M$ is representable over each $F_{i}$ and $\operatorname{char}(F_{i+1})>\operatorname{char}(F_{i})$. Let $\mathcal{U}$ be a non-principal ultrafilter of the positive integers. This means that $\mathcal{U}$ is a collection of subsets of positive integers, and $\mathcal{U}$ satisfies: (1) any superset of a member of $\mathcal{U}$ is also in $\mathcal{U}$; (2) if $U,V\in\mathcal{U}$, then $U\cap V\in \mathcal{U}$; and (3) if $U$ is any subset of $\mathbb{Z}^{+}$, then exactly one of $U$ or $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. Non-principal ultrafilters cannot be described explicitly, but we know that they exist by an application of Zorn’s Lemma. If $U$ is a finite subset of $\mathbb{Z}^{+}$, then $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. We consider sequences $(x_{1},x_{2},x_{3},\ldots)$, where each $x_{i}$ is an element of $F_{i}$. We declare $(x_{1},x_{2},x_{3},\ldots)$ and $(y_{1},y_{2},y_{3},\ldots)$ to be equivalent if $\{i\in\mathbb{Z}^{+}\colon x_{i}=y_{i}\}$ is a member of $\mathcal{U}$. We perform componentwise addition and multiplication on sequences. It is not too difficult to see that these operations respect our notion of equivalence, and that the equivalence classes of sequences form a field, $F$, under these operations. Moreover, the characteristic of $F$ is zero: since the characteristics of the fields $F_{i}$ are strictly increasing, for any prime $p$ the sum of $p$ copies of $1$ is non-zero in all but finitely many coordinates, and the corresponding cofinite set of indices belongs to $\mathcal{U}$. We consider a sequence of representations of $M$ over the fields $F_{1},F_{2},F_{3},\ldots$. This leads to a representation of $M$ over $F$ in the obvious way: we simply “stack” the infinite sequence of matrices, and produce a matrix whose entries are infinite sequences of numbers. Each determinant corresponding to a basis of $M$ is non-zero in every coordinate, and each determinant corresponding to a dependent set is zero in every coordinate, so the stacked matrix does represent $M$ over $F$. $\square$

Finally, we come full circle! In 2003, Vámos (along with Rosemary Baines) published another proof of Theorem 1 [2]. The proof is actually an easy consequence of the algorithm that Stefan mentioned in a previous post, and the main ingredient is the use of Gröbner bases. I will tweak the proof slightly, for the sake of simplicity.

Proof 4. Let $f$ be the product of all polynomials in $\mathcal{B}$. We introduce a new “dummy” variable, $y$. From now on, we will operate in the ring $\mathbb{Z}[X\cup\{y\}]$. Note that an evaluation of the variables in $X\cup\{y\}$ will make $yf-1$ zero if and only if it makes each polynomial in $\mathcal{B}$ non-zero. Let $I$ be the ideal of $\mathbb{Z}[X\cup\{y\}]$ that is generated by $\mathcal{N}\cup\{yf-1\}$. Thus a representation of $M$ is really a ring homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field that takes every polynomial in $I$ to zero.

Let $G$ be a strong Gröbner basis of $I$ (see [1]). Thus $I$ is the ideal generated by $G$. In addition, we have an ordering on products of variables (the lexicographic order will suffice). The leading term of a polynomial is the highest non-zero term under this ordering. For $G$ to be a strong Gröbner basis, we require that if $h$ is in $I$, then there is a polynomial $g\in G$ such that the leading term of $g$ divides the leading term of $h$. Assume that $G$ contains an integer, $N$. If $\phi$ is a homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field, $F$, such that $\ker(\phi)$ contains $I$, then $N$ is taken to zero by $\phi$. This means that the characteristic of $F$ is at most $N$, so $M$ is not representable over infinitely many prime characteristics. Therefore we will now assume that $G$ contains no integer.

Let $I_{\mathbb{Q}}$ be the ideal of $\mathbb{Q}[X\cup\{y\}]$ generated by $\mathcal{N}\cup\{yf-1\}$. Assume that $1\in I_{\mathbb{Q}}$. Then $1$ can be expressed as a polynomial combination of the polynomials in $\mathcal{N}\cup\{yf-1\}$. By multiplying this combination through by the denominators of the coefficients, we see that $I$ contains an integer, $N$. Thus $G$ contains a polynomial whose leading term divides the leading term of $N$, which is $N$ itself. But in the lexicographic ordering, a constant polynomial is below all non-constant polynomials, from which we deduce that $G$ contains a constant polynomial, contradicting our conclusion from the previous paragraph. Therefore $1\notin I_{\mathbb{Q}}$, which means that $I_{\mathbb{Q}}$ is contained in a maximal ideal, $P$. Note that $P$ contains no non-zero constant polynomial, since the non-zero constants are units. Now $\mathbb{Q}[X\cup\{y\}]/P$ is a field, and the natural homomorphism from $\mathbb{Q}[X\cup\{y\}]$ to this field leads to a representation of $M$ over a field of characteristic zero. $\square$
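The membership test $1\in I_{\mathbb{Q}}$ is easy to experiment with. Below is a sketch (my own toy example, computed with sympy over $\mathbb{Q}$ rather than with a strong Gröbner basis over $\mathbb{Z}$): the rank-1 matroid on $\{1,2\}$ whose only basis is $\{1\}$, so that $\mathcal{N}=\{x_{1,2}\}$ and $f=x_{1,1}$.

```python
# Sketch of the 1-in-I_Q test from Proof 4 (my own toy example),
# using sympy Groebner bases over Q.
from sympy import symbols, groebner

x11, x12, y = symbols('x11 x12 y')

# For the rank-1 matroid on {1,2} in which 2 is a loop:
# N = {x12}, f = x11, so I_Q = <x12, y*x11 - 1>.
G = groebner([x12, y*x11 - 1], y, x11, x12, order='lex')
representable_char0 = (G.exprs != [1])   # 1 is not in I_Q: char 0 works

# By contrast, an inconsistent system has reduced Groebner basis [1]:
x = symbols('x')
G_bad = groebner([x + 1, x - 1], x, order='lex')   # ideal contains 2
```

Here `G_bad.exprs == [1]` signals that the second system has no solution over any field of characteristic zero, mirroring the dichotomy in the proof.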


[1] W. W. Adams and P. Loustaunau. An introduction to Gröbner bases. Graduate Studies in Mathematics, 3. American Mathematical Society, Providence, RI, 1994.

[2] R. Baines and P. Vámos. An algorithm to compute the set of characteristics of a system of polynomial equations over the integers. J. Symbolic Comput. 35 (2003), no. 3, 269–279.

[3] I. Leader. A short proof of a theorem of Vámos on matroid representations. Discrete Math. 75 (1989), no. 1-3, 315–317.

[4] R. Rado. Note on independence functions. Proc. London Math. Soc. (3) 7 (1957), 300–320.

[5] P. Vámos. A necessary and sufficient condition for a matroid to be linear. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 162–169. Univ. Waterloo, Waterloo, Ont., 1971.

[6] P. Vámos. Linearity of matroids over division rings. Notes by G. Roulet. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 170–174. Univ. Waterloo, Waterloo, Ont., 1971.

[7] S. S. Wagstaff Jr. Infinite matroids. Trans. Amer. Math. Soc. 175 (1973), 141–153.

Clutters II

The last time I wrote on this subject, I made the case that clutters are of fairly natural interest to matroid theorists, since they are combinatorial objects with a duality involution, along with operations of deletion and contraction that are swapped by duality. They also arise fairly naturally in integer programming. In one formulation, integer programming is the problem of finding an integral vector $x$ that minimises the quantity $w^{T}x$ ($w$ is a fixed vector of weights) subject to the constraints $x\geq \mathbf{0}$ and $Mx\geq b$, where $M$ is a matrix of constraints. (Every entry of $\mathbf{0}$ is zero, and all comparisons of vectors are done entrywise – $x\geq \mathbf{0}$ means that every entry of $x$ is non-negative.)

Many optimisation problems in combinatorics can be expressed as integer programs. In particular, several problems are equivalent to integer programs where $b$ is equal to the vector of all ones, and $M$ contains only zeroes and ones. It’s an easy exercise to show that finding a minimum-size vertex cover in a graph is one of these problems. So is finding a minimum-size $s$-$t$ cut in a network.

In these cases, we may as well assume that the support of no row in $M$ properly contains the support of another row: if $r$ and $r'$ are rows of $M$, and $r$ is equal to $1$ in every position in which $r'$ is equal to $1$, then we might as well remove the row $r$. This does not change the solution space of the program at all. In other words, we may as well assume that the rows of $M$ are the characteristic vectors of the edges in a clutter.

So clutters arise from combinatorial optimisation problems, via integer programming, and this leads to the following Venn diagram.


My aim today is modest: to define these classes of clutters. The illustration, along with essentially all the content found here, is in Combinatorial Optimization: Packing and Covering, by Gérard Cornuéjols.

First of all, let $H=(S,\mathcal{A})$ be a clutter, so that $S$ is a finite set, and the members of $\mathcal{A}$ are subsets of $S$, none of which is properly contained in another. Let $M(H)$ be the incidence matrix of $H$. That is, the columns of $M(H)$ are labelled by the elements of $S$, and the rows are labelled by members of $\mathcal{A}$. An entry of $M(H)$ is one if and only if the corresponding element of $S$ is contained in the corresponding member of $\mathcal{A}$, otherwise it is zero.

Now assume that $w$ and $x$ are $|S|\times 1$ vectors with rows labelled by elements of $S$ (in the same order as the columns of $M(H)$), with the entries of $w$ non-negative. The vector $w$ can be seen as assigning a weight to each element of $S$. We will consider the following integer program.

(1) Minimise $w^{T}x$ subject to the constraints $x\geq \mathbf{0}$ and $M(H)x\geq \mathbf{1}$.

(Here $\mathbf{1}$ is the vector of all ones.)

If $x$ is an integer solution to this problem, then there is no harm in assuming that the entries of $x$ are all $0$ or $1$. (If any entry is greater than $1$, then replace it with $1$. Then $M(H)x\geq \mathbf{1}$ will still hold, and $w^{T}x$ will not have increased.) So an integer solution can be seen as a characteristic vector of a subset of $S$. This subset must intersect every member of $\mathcal{A}$ (by virtue of the constraint $M(H)x\geq \mathbf{1}$). Thus we are looking for a minimum-weight hitting set, and (1) can be thought of as the relaxation of the hitting set problem that allows non-integral solutions.
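To make this concrete, here is a stdlib-only sketch (my own example, not from the post) using the clutter of edges of the triangle $K_{3}$, where an integral solution of (1) with $w=\mathbf{1}$ is a minimum vertex cover.

```python
# Brute-force sketch (my own example): the edges of K3 as a clutter.
from itertools import combinations

S = [1, 2, 3]
A = [{1, 2}, {1, 3}, {2, 3}]   # edges of K3; no edge contains another

def min_hitting_set(S, A):
    # smallest subset of S meeting every edge of A
    for k in range(len(S) + 1):
        for X in combinations(S, k):
            if all(set(X) & a for a in A):
                return set(X)

cover = min_hitting_set(S, A)   # a minimum vertex cover, of size 2

# The fractional point x = (1/2, 1/2, 1/2) also satisfies M(H)x >= 1,
# with objective value 3/2 < 2, so for this clutter the optimum of the
# relaxation (1) is not attained by an integral vector.
x = {s: 0.5 for s in S}
fractional_feasible = all(sum(x[s] for s in a) >= 1 for a in A)
```

So $K_{3}$ already exhibits an integrality gap between the relaxation and the hitting set problem.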

Next we consider the dual problem. Here $y$ is a $|\mathcal{A}|\times 1$ vector with entries labelled by the members of $\mathcal{A}$.

(2) Maximise $y^{T}\mathbf{1}$ subject to the constraints $y\geq \mathbf{0}$ and $y^{T}M(H)\leq w$.

In this problem we are thinking of $w$ as a capacity on each element of $S$. The vector $y$ assigns a non-negative flow value to each edge in $\mathcal{A}$, and the constraint $y^{T}M(H)\leq w$ says that the sum of flows over the edges that contain an element $s\in S$ cannot exceed the capacity that $w$ assigns to $s$. A solution to (2) maximises the sum of the flow values.

Now we can define the four classes in the Venn diagram.

$H$ has the Max Flow Min Cut property if, for every non-negative integer vector $w$, the programs (1) and (2) both have optimal solutions, $x$ and $y$, that are integral.

$H$ has the packing property if, for every vector $w$ whose entries are $0$, $1$, or $+\infty$, the programs (1) and (2) both have optimal solutions that are integral. In program (1), a weight of $+\infty$ means that we try to never include that element of $S$ in our hitting set. A weight of $0$ means that we can include that element for free.

$H$ packs if programs (1) and (2) have optimal solutions that are integral vectors when $w$ is set to be $\mathbf{1}$.

$H$ is ideal if, for any choice of vector $w$ with non-negative entries, not necessarily integral, program (1) has an optimal solution that is an integral vector.
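For example (my own brute-force computation, consistent with the definitions above), the triangle $K_{3}$ does not pack: with $w=\mathbf{1}$, the best integral solution of (1) is a vertex cover of size $2$, while the best integral solution of (2) is a matching, which has size $1$.

```python
# Brute-force check (my own example) that K3 does not pack.
from itertools import combinations

S = [1, 2, 3]
A = [{1, 2}, {1, 3}, {2, 3}]   # K3 as a clutter

# best integral solution of (1) with w = 1: a minimum hitting set
primal = min(len(X) for k in range(len(S) + 1)
             for X in combinations(S, k)
             if all(set(X) & a for a in A))

# best integral solution of (2) with w = 1: a maximum set of edges
# using each element at most once (0/1 entries suffice here, since
# every edge is non-empty and every capacity is 1)
dual = max(len(F) for k in range(len(A) + 1)
           for F in combinations(range(len(A)), k)
           if all(sum(1 for i in F if s in A[i]) <= 1 for s in S))

does_pack = (primal == dual)   # False: 2 != 1
```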

Next time I will say more about the relations between these properties. For now, I will just note that, rather unfortunately, there really are clutters that pack, but which do not have the packing property. On the other hand, the shaded area of the Venn diagram is believed to be empty. So it is conjectured that a clutter has the Max Flow Min Cut property if and only if it has the packing property. Cornuéjols offers a bounty of $5000 for a resolution of this conjecture (or one of 17 others) before 2020.

Clutters I

I have been trying to firm up my feeling for the theory of clutters. To that end, I have been working through proofs of some elementary lemmas. For my future use, as much as anything else, I will post some of that material here.

A clutter is a pair $H=(S,\mathcal{A})$, where $S$ is a finite set, and $\mathcal{A}$ is a collection of subsets of $S$ satisfying the constraint that $A,A'\in\mathcal{A}$ implies $A\not\subset A'$. In other words, a clutter is a hypergraph satisfying the constraint that no edge is properly contained in another. For this reason we will say that the members of $\mathcal{A}$ are edges of $H$. Clutters are also known as Sperner families, because of Sperner’s result establishing that if $|S|=n$, then
\[|\mathcal{A}|\leq \binom{n}{\lfloor n/2\rfloor}.\]

Clutters abound in ‘nature’: the circuits, bases, or hyperplanes in a matroid; the edge-sets of Hamilton cycles, spanning trees, or $s$-$t$ paths in a graph. Even a simple (loopless with no parallel edges) graph may be considered as a clutter: just consider each edge of the graph to be a set of two vertices, and in this way an edge of the clutter. There is one example that is particularly important for this audience: let $M$ be a matroid on the ground set $E$ with $\mathcal{C}(M)$ as its family of circuits, and let $e$ be an element of $E$. We define $\operatorname{Port}(M,e)$ to be the clutter
\[(E-e,\{C-e\colon e\in C\in \mathcal{C}(M)\})\]
and such a clutter is said to be a matroid port.

If $H=(S,\mathcal{A})$ is a clutter, then we define the blocker of $H$ (denoted by $b(H)$) as follows: $b(H)$ is a clutter on the set $S$, and the edges of $b(H)$ are the minimal members of the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$. Thus a subset of $S$ is an edge of $b(H)$ if and only if it is a minimal subset that has non-empty intersection with every edge of $H$. Note that if $\mathcal{A}=\{\}$, then vacuously, $|X\cap A|\geq 1$ for all $A\in \mathcal{A}$, no matter what $X$ is. The minimal such $X\subseteq S$ is the empty set, so $b((S,\{\}))$ should be $(S,\{\emptyset\})$. Similarly, if $\mathcal{A}=\{\emptyset\}$, then the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$ is empty, so $b((S,\{\emptyset\}))$ should be $(S,\{\})$. The clutter with no edges and the clutter with only the empty edge are known as trivial clutters.

Our first lemma was noted by Edmonds and Fulkerson in 1970.

Lemma. Let $H=(S,\mathcal{A})$ be a clutter. Then $b(b(H))=H$.

Proof. If $H$ is trivial, the result follows by the discussion above. Therefore we will assume that $H$ has at least one edge and that the empty set is not an edge. This implies that $b(H)$ and $b(b(H))$ are also non-trivial. Let $A$ be an edge of $H$. Now every edge of $b(H)$ has non-empty intersection with $A$, by the definition of $b(H)$. Since $A$ is a set intersecting every edge of $b(H)$, it contains a minimal such set. Thus $A$ contains an edge of $b(b(H))$.

Now let $A'$ be an edge of $b(b(H))$. Assume that $A'$ contains no edge of $H$: in other words, assume that every edge of $H$ has non-empty intersection with $S-A'$. Then $S-A'$ contains a minimal subset that has non-empty intersection with every edge of $H$; that is, $S-A'$ contains an edge of $b(H)$. This edge has no element in common with $A'$. As $A'$ is an edge of $b(b(H))$, this contradicts the definition of a blocker. Hence $A'$ contains an edge of $H$.

Let $A$ be an edge of $H$. By the previous paragraphs, $A$ contains $A'$, an edge of $b(b(H))$, and $A'$ contains $A^{\prime\prime}$, an edge of $H$. Now $A^{\prime\prime}\subseteq A'\subseteq A$ implies $A^{\prime\prime}=A$, and hence $A=A'$. Thus $A$ is also an edge of $b(b(H))$. Similarly, if $A'$ is an edge of $b(b(H))$, then $A^{\prime\prime}\subseteq A\subseteq A'$, where $A'$ and $A^{\prime\prime}$ are edges of $b(b(H))$, and $A$ is an edge of $H$. This implies $A'=A^{\prime\prime}=A$, so $A'$ is an edge of $H$. As $H$ and $b(b(H))$ have identical edges, they are the same clutter. $\square$

If $H=(S,\mathcal{A})$ is a simple graph (so that each edge has cardinality two), then the edges of $b(H)$ are the minimal vertex covers. In the case of matroid ports, the blocker operation behaves exactly as we would expect an involution to do$\ldots$

Lemma. Let $M$ be a matroid and let $e$ be an element of $E(M)$. Then \[b(\operatorname{Port}(M,e))=\operatorname{Port}(M^{*},e).\]

Proof. Note that if $e$ is a coloop of $M$, then $\operatorname{Port}(M,e)$ has no edges, and if $e$ is a loop, then $\operatorname{Port}(M,e)$ contains only the empty edge. In these cases, the result follows from earlier discussion. Now we can assume that $e$ is neither a loop nor a coloop of $M$. Let $A$ be an edge in $\operatorname{Port}(M^{*},e)$, so that $A\cup e$ is a cocircuit of $M$. Since a circuit and a cocircuit cannot meet in the set $\{e\}$, it follows that $A$ has non-empty intersection with every circuit of $M$ that contains $e$, and hence with every edge of $\operatorname{Port}(M,e)$. Now $A$ contains a minimal set with this property, so $A$ contains an edge of $b(\operatorname{Port}(M,e))$.

Conversely, let $A'$ be an edge of $b(\operatorname{Port}(M,e))$. Assume that $e$ is not in the coclosure of $A'$. By a standard matroid exercise this means that $e$ is in the closure of $E(M)-(A'\cup e)$. Let $C$ be a circuit contained in $E(M)-A'$ that contains $e$. Then $C-e$ is an edge of $\operatorname{Port}(M,e)$ that is disjoint from $A'$. This contradicts the fact that $A'$ is an edge of the blocker. Therefore $e$ is in the coclosure of $A'$, so there is a cocircuit $C^{*}$ contained in $A'\cup e$ that contains $e$. Therefore $A'$ contains the edge, $C^{*}-e$, of $\operatorname{Port}(M^{*},e)$.

In exactly the same way as the previous proof, we can demonstrate that $b(\operatorname{Port}(M,e))$ and $\operatorname{Port}(M^{*},e)$ have identical edges. $\square$
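Both lemmas are easy to check computationally on small examples. Here is a brute-force sketch (my own code; the matroid is $U_{2,4}$, which is self-dual, and its circuits are exactly the $3$-element subsets):

```python
# Brute-force blocker and matroid port (my own sketch, for small S).
from itertools import combinations

def blocker(S, A):
    # minimal subsets of S meeting every member of A
    hitting = [set(X) for k in range(len(S) + 1)
               for X in combinations(S, k)
               if all(set(X) & a for a in A)]
    return [X for X in hitting if not any(Y < X for Y in hitting)]

def port(circuits, e):
    # Port(M, e) = {C - e : e in C in C(M)}; these sets are
    # automatically incomparable, so they form a clutter
    return [set(C) - {e} for C in circuits if e in C]

E = [1, 2, 3, 4]
circuits = [set(C) for C in combinations(E, 3)]   # circuits of U_{2,4}

H = port(circuits, 4)   # a clutter on S = {1, 2, 3}
S = [1, 2, 3]

# U_{2,4} is self-dual, so b(Port(M, 4)) should equal Port(M*, 4) = H,
# and the blocker should be an involution: b(b(H)) = H.
assert sorted(map(sorted, blocker(S, H))) == sorted(map(sorted, H))
assert sorted(map(sorted, blocker(S, blocker(S, H)))) == sorted(map(sorted, H))
```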

This last fact should be attractive to matroid theorists: clutters have a notion of duality that coincides with matroid duality. There is also a notion of minors. Let $H=(S,\mathcal{A})$ be a clutter and let $s$ be an element of $S$. Define $H\backslash s$, known as $H$ delete $s$, to be
\[(S-s,\{A\colon A\in \mathcal{A},\ s\notin A\})\]
and define $H/s$, called $H$ contract $s$, to be
\[(S-s,\{A-s\colon A\in \mathcal{A},\ A'\in \mathcal{A}\Rightarrow A'-s\not\subset A-s\}).\]
It is very clear that $H\backslash s$ and $H/s$ are indeed clutters. Any clutter produced from $H$ by a (possibly empty) sequence of deletions and contractions is a minor of $H$.

We will finish with one more elementary lemma.

Lemma. Let $H=(S,\mathcal{A})$ be a clutter, and let $s$ be an element in $S$. Then

  1. $b(H\backslash s) = b(H)/s$, and
  2. $b(H/s) = b(H)\backslash s$.

Proof. We note that it suffices to prove the first statement: imagine that the first statement holds. Then
\[b(b(H)\backslash s)=b(b(H))/s=H/s\]
which implies that
\[b(H)\backslash s=b(b(b(H)\backslash s))=b(H/s)\]
and that therefore the second statement holds.

If $H$ has no edge, then neither does $H\backslash s$, so $b(H\backslash s)$ has only the empty edge. Also, $b(H)$ and $b(H)/s$ have only the empty edge, so the result holds. Now assume $H$ has only the empty edge. Then $H\backslash s$ has only the empty edge, so $b(H\backslash s)$ has no edges. Also, $b(H)$ and $b(H)/s$ have no edges. Hence we can assume that $H$ is nontrivial, and therefore so is $b(H)$.

If $s$ is in every edge of $H$, then $H\backslash s$ has no edges, so $b(H\backslash s)$ has only the empty edge. Also, $\{s\}$ is an edge of $b(H)$, so $b(H)/s$ has only the empty edge. Therefore we can now assume that some edge of $H$ does not contain $s$, and that therefore $H\backslash s$ is non-trivial and $\{s\}$ is not an edge of $b(H)$.

As $b(H)$ has at least one edge we can let $A$ be an arbitrary edge of $b(H)/s$, and as $\{s\}$ is not an edge of $b(H)$, it follows that $A$ is non-empty. Since $s$ is not in every edge of $H$, we can let $A'$ be an arbitrary edge of $H\backslash s$. Hence $A'$ is an edge of $H$. As $H$ is non-trivial, $A'$ is non-empty. If $A$ is an edge of $b(H)$, then certainly $A$ and $A'$ have non-empty intersection. Otherwise, $A\cup s$ is an edge of $b(H)$, so $A\cup s$ and $A'$ have non-empty intersection. As $A'$ does not contain $s$, it follows that $A$ and $A'$ have non-empty intersection in any case. This shows that every edge of $b(H)/s$ intersects every edge of $H\backslash s$, and thus every edge of $b(H)/s$ contains an edge of $b(H\backslash s)$.

As $H\backslash s$ is non-trivial, so is $b(H\backslash s)$. We let $A'$ be an arbitrary edge of $b(H\backslash s)$ and note that $A'$ is non-empty. Let $A$ be an arbitrary edge of $H$, so that $A$ is non-empty. If $s\notin A$, then $A$ is an edge of $H\backslash s$, so $A'\cap A\ne\emptyset$. If $s$ is in $A$, then $(A'\cup s)\cap A\ne\emptyset$. This means that $A'\cup s$ intersects every edge of $H$, so it contains an edge of $b(H)$ and therefore $A'=(A'\cup s)-s$ contains an edge of $b(H)/s$. We have shown that every edge of $b(H\backslash s)$ contains an edge of $b(H)/s$ and now the rest is easy. $\square$
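These identities are also easy to verify by brute force on a small example (my own sketch, implementing the blocker, deletion and contraction exactly as defined above):

```python
# Brute-force check of b(H\s) = b(H)/s and b(H/s) = b(H)\s
# on a small example clutter (my own sketch).
from itertools import combinations

def blocker(S, A):
    hitting = [set(X) for k in range(len(S) + 1)
               for X in combinations(S, k)
               if all(set(X) & a for a in A)]
    return [X for X in hitting if not any(Y < X for Y in hitting)]

def delete(A, s):
    # edges avoiding s
    return [set(a) for a in A if s not in a]

def contract(A, s):
    # minimal members of {A - s : A in A}
    shrunk = {frozenset(set(a) - {s}) for a in A}
    return [set(a) for a in shrunk
            if not any(set(b) < set(a) for b in shrunk)]

def same(A1, A2):
    return sorted(map(sorted, A1)) == sorted(map(sorted, A2))

S = [1, 2, 3, 4]
A = [{1, 2}, {2, 3}, {3, 4}]   # a small clutter H on S
for s in S:
    S2 = [t for t in S if t != s]
    assert same(blocker(S2, delete(A, s)), contract(blocker(S, A), s))
    assert same(blocker(S2, contract(A, s)), delete(blocker(S, A), s))
```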

Seymour’s 1-flowing conjecture III

A long time ago, I gave some background to the definition of a 1-flowing matroid. Assume $M$ is a matroid on the ground set $E$, and $e$ is an element in $E$. Take $c_{x}$ to be a non-negative integral capacity assigned to each element, $x$, in $E-e$. A flow is an assignment of a non-negative real, $f_{C}$, to each circuit, $C$, that contains $e$, with the constraint that if $x$ is in $E-e$, then the sum of $f_{C}$ over all circuits that contain $e$ and $x$ does not exceed $c_{x}$. We say that $M$ is $e$-flowing if, for every assignment of capacities, there is a flow whose value (sum over all circuits containing $e$) is equal to
\[\min\left\{\sum_{x\in C^{*}-e} c_{x}\mid C^{*}\ \text{is a cocircuit containing}\ e\right\}.\]
If $M$ is $e$-flowing for every $e$ in $E$, then $M$ is $1$-flowing.

We are motivated to study $1$-flowing matroids because problems that should really be intractable, magically become easy to solve when we consider such matroids. Consider the case of a graph with distinct vertices $s$ and $t$. We might ask for the largest number of edge-disjoint paths from $s$ to $t$. Now this resembles a difficult problem: we are asking to find a largest-possible pairwise-disjoint subfamily from a family of sets (the edge-sets of $s-t$ paths). Finding the maximum number of pairwise-disjoint sets is famously an NP-hard problem. However, as is well known, the problem of finding edge-disjoint $s-t$ paths can be solved using network-flow techniques by assigning a capacity of one to every edge. This really reflects the fact (discussed earlier) that graphic matroids are $1$-flowing. The fact that the maximum value of a flow through a network is equal to the minimum capacity of a cut gives enough structure to the problem to render it feasible.
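The network-flow computation is easy to sketch in code. Below is a minimal unit-capacity Ford–Fulkerson implementation (my own illustration, not from the post), applied to a small graph with exactly two edge-disjoint $s$-$t$ paths:

```python
# Edge-disjoint s-t paths via unit-capacity max flow (my own sketch).
from collections import defaultdict, deque

def max_edge_disjoint_paths(edges, s, t):
    # model each undirected edge as one unit of capacity per direction
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow   # no augmenting path: max flow = min cut
        v = t
        while parent[v] is not None:   # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

graph = [('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't'), ('a', 'b')]
```

Here `max_edge_disjoint_paths(graph, 's', 't')` returns $2$, which matches the minimum edge cut (the two edges at $s$), as the max-flow min-cut theorem guarantees.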

In my most recent post, we can see another intimation that difficult problems may become tractable when dealing with $1$-flowing matroids. Let $M$ be a matroid on the ground set $E$, where $e$ is an element in $E$. We assign a real-valued variable $a_{x}$ to each element $x\in E-e$, and we insist that $a_{x}$ takes only non-negative values. In addition, for each circuit, $C$, that contains $e$, we require that the sum of variables, taken over all elements in $C-e$, must be at least one. Consider the set, $P$, of points in the Euclidean space $\mathbb{R}^{E-e}$ corresponding to assignments of values to variables such that these constraints are satisfied. Since $P$ is an intersection of half-spaces, it is a polyhedron. Now $M$ is $e$-flowing if and only if the $0$-dimensional extreme points (that is, the vertices) of $P$ have entirely integral coordinates. This means that integer programming problems associated with $P$ are in fact amenable to linear programming techniques. In other words, if there is a linear objective function which assigns a cost to each point in $P$, and we wish to find the lowest-cost point in $P$ with integer coordinates, then we can relax the integrality constraint, since running the relaxed linear program will return an integral point in any case. As integer programming is NP-hard, and linear programming is solvable in polynomial time, this gives us some intuition as to why ostensibly difficult problems are easy for $1$-flowing matroids.

Proposition. The class of $1$-flowing matroids is closed under minors.

Proof. Let $M$ be a $1$-flowing matroid on the ground set $E$, and let $e$ be an element in $E$. Let $f$ be in $E-e$. We will show that $M\backslash f$ is $e$-flowing. Assume that each element $x\in E(M\backslash f)-e$ has been assigned a capacity $c_{x}$. We assign exactly the same capacities to these elements in $M$, and in addition we assign $f$ a capacity of $0$. Now every cocircuit of $M\backslash f$ is of the form $C^{*}-f$ where $C^{*}$ is a cocircuit of $M$, or is equal to a cocircuit of $M$. It follows that
\[\min\left\{\sum_{x\in C^{*}-e} c_{x}\mid e\in C^{*}\in\mathcal{C}^{*}(M)\right\}\leq\min\left\{\sum_{x\in C^{*}-e} c_{x}\mid e\in C^{*}\in\mathcal{C}^{*}(M\backslash f)\right\}.\]
If this is a strict inequality, then there is some cocircuit $C^{*}$ in $M$ that contains $e$, where the sum of capacities in $C^{*}-e$ is less than the minimum such sum in $M\backslash f$. Obviously $C^{*}$ is not a cocircuit of $M\backslash f$, so $f$ is in $\operatorname{cl}^{*}(C^{*})$. Thus some cocircuit $C^{*}_{0}$ of $M$ satisfies $f\in C^{*}_{0}\subseteq C^{*}\cup f$. But then $C^{*}_{0}-f$ is a cocircuit of $M\backslash f$ with total capacity at most the total capacity of $C^{*}$, a contradiction. Therefore the two minima are equal.

Since $M$ is $e$-flowing, there is a flow with value equal to this common minimum. We construct a flow for $M\backslash f$ with the same value by simply omitting all circuits that contain both $e$ and $f$: such circuits carry zero flow in any case, since $f$ has a capacity of $0$. This shows that $M\backslash f$ is $e$-flowing.

Next we consider $M/f$. Assume that we assign the capacity $c_{x}$ to each $x$ in $E(M/f)-e$. Let $d$ be
\[\min\left\{\sum_{x\in C^{*}-e} c_{x}\mid e\in C^{*}\in\mathcal{C}^{*}(M/f)\right\}.\]
In $M$ we assign exactly the same capacities, and we also assign $f$ a capacity of $d$. It follows immediately that the minimum sum of capacities, taken over all cocircuits that contain $e$, is exactly the same in both matroids. Since $M$ is $1$-flowing, there is a flow whose value is equal to this minimum. For every circuit, $C$, of $M$ that contains $e$, there is a circuit, $\theta(C)$, of $M/f$ such that $\theta(C)\subseteq C$ and $e\in\theta(C)$. To construct a flow in $M/f$, we give each circuit of the form $\theta(C)$ the sum of the values given to all of its pre-images under $\theta$. Every other circuit receives a value of zero. This is enough to demonstrate that $M/f$ is $e$-flowing. The rest follows by an easy induction. $\square$

Now we can hope to characterise the class of $1$-flowing matroids via its excluded minors. We might also note that the class is closed under duality. (The proof of this fact uses linear programming duality, and can be constructed from Theorem 1.17 in [Cor2001] and (3.1) in [Sey1977].) Thus the set of excluded minors is closed under duality.

Let $M$ be isomorphic to $U_{2,4}$, and let $e$ be an element of $E(M)$. We assign a capacity of one to each element of $E(M)-e$. Every cocircuit containing $e$ has a total capacity of two. Can we find a flow with this value? This would mean assigning values, $f_{1}$, $f_{2}$, and $f_{3}$, to the three circuits containing $e$, in such a way that $f_{1}+f_{2}+f_{3}=2$. However, the capacity constraints require that the sum of any two of $f_{1}$, $f_{2}$, and $f_{3}$ is at most one. Summing these three constraints shows that $2(f_{1}+f_{2}+f_{3})\leq 3$, so $f_{1}+f_{2}+f_{3}\leq 3/2$, and there can be no solution to this system. Therefore $U_{2,4}$ is not $1$-flowing, and hence every $1$-flowing matroid is binary. The converse does not hold, as $AG(3,2)$ is not $1$-flowing (and is in fact an excluded minor for the class). Seymour discovered two more excluded minors: $T_{11}$ and its dual. Here $T_{11}$ is the lift matroid of the even-cycle graph obtained from $K_{5}$ by making every edge negative.
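We can double-check the $U_{2,4}$ computation against the polyhedral criterion from earlier. With variables $(x_{a},x_{b},x_{c})$ for the elements other than $e$, the circuits through $e$ force each pairwise sum of variables to be at least one. The sketch below (my own illustration) enumerates the vertices of $P$ by solving every $3\times 3$ system of tight constraints; the fractional vertex $(1/2,1/2,1/2)$ shows up, confirming that $U_{2,4}$ is not $e$-flowing.

```python
from fractions import Fraction as F
from itertools import combinations

def solve3(rows):
    """Solve a 3x3 linear system over the rationals by Gauss-Jordan
    elimination; rows are (a1, a2, a3, b).  Returns None if singular."""
    m = [list(r) for r in rows]
    for col in range(3):
        piv = next((r for r in range(col, 3) if m[r][col] != 0), None)
        if piv is None:
            return None
        m[col], m[piv] = m[piv], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(3):
            if r != col and m[r][col] != 0:
                m[r] = [v - m[r][col] * w for v, w in zip(m[r], m[col])]
    return tuple(m[r][3] for r in range(3))

# P for U_{2,4}: the circuits through e are {e,a,b}, {e,a,c}, {e,b,c}.
constraints = [
    ((F(1), F(1), F(0)), F(1)),   # x_a + x_b >= 1
    ((F(1), F(0), F(1)), F(1)),   # x_a + x_c >= 1
    ((F(0), F(1), F(1)), F(1)),   # x_b + x_c >= 1
    ((F(1), F(0), F(0)), F(0)),   # non-negativity
    ((F(0), F(1), F(0)), F(0)),
    ((F(0), F(0), F(1)), F(0)),
]

vertices = set()
for triple in combinations(constraints, 3):
    pt = solve3([(*a, b) for (a, b) in triple])
    if pt is not None and all(
            sum(ai * xi for ai, xi in zip(a, pt)) >= b
            for (a, b) in constraints):
        vertices.add(pt)

print(sorted(vertices))
# the fractional vertex (1/2, 1/2, 1/2) appears among the four vertices
```

The other three vertices, $(1,1,0)$, $(1,0,1)$, and $(0,1,1)$, are integral; a single fractional vertex is already enough to break the $e$-flowing property.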

Now we have arrived at the titular, and extremely lovely, conjecture by Seymour:

Conjecture (Seymour 1981). The excluded minors for the class of $1$-flowing matroids are $U_{2,4}$, $AG(3,2)$, $T_{11}$, and $T_{11}^{*}$.

[Cor2001] G. Cornuéjols, Combinatorial optimisation: Packing and covering. CBMS-NSF Regional Conference Series in Applied Mathematics, 74. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.

[Sey1977] P. D. Seymour, The matroids with the max-flow min-cut property. J. Combinatorial Theory Ser. B 23 (1977) 189-222.

[Sey1981] P. D. Seymour, Matroids and multicommodity flows. European J. Combin. 2 (1981) 257-290.