Henry Crapo: A Brief Reminiscence

A guest post by James Oxley.

Henry in Wellington in 2007

Henry Crapo died on September 3, 2019 at the age of 86. He made foundational contributions to matroid theory. Among his notable achievements were the first text in matroid theory, written jointly with Gian-Carlo Rota; the introduction of the Tutte polynomial and the beta invariant for matroids; the analysis of single-element extensions of matroids; the identification of the critical problem for matroids; the definition of an attractive non-commutative way to combine two matroids; and a catalogue of all matroids on at most eight elements. He was a very friendly and generous man who, in addition to his many influential publications, hosted intimate and stimulating conferences at his home in the south of France. He will be greatly missed.

This short remembrance will highlight some of Henry’s most important contributions to matroid theory. In 1964, Henry completed his Ph.D. dissertation at MIT. According to the online Mathematics Genealogy Project, Henry had two advisors: Gian-Carlo Rota and Kenneth Myron Hoffman, although more will be said about this below. Rota’s name will be well known to matroid theorists (or “combinatorial geometers” as he would have preferred), but Hoffman’s is unlikely to be, as he worked in functional analysis.

Henry was one of the attendees at the first conference in matroid theory, which was organized by Jack Edmonds and held at the National Bureau of Standards in Washington, D.C. in 1964. Bill Tutte was another notable attendee at that conference, where he delivered his Lectures on Matroids. Of that year, Tutte wrote, “To me that was the year of the Coming of the Matroids. Then and there the theory of matroids was proclaimed to the mathematical world. And outside the halls of lecture there arose the repeated cry: ‘What the hell is a matroid?’”

Henry’s first paper [3], which was on single-element extensions of matroids, appeared in the proceedings of that conference, which were published in 1965 in the Proceedings of the National Bureau of Standards. In that paper, Henry noted that, when extending a matroid $M$ by an element $e$ to produce a new matroid $N$, one needs only to consider the flats $F$ of $M$ such that $F\cup e$ is a flat of $N$ with $r(F \cup e) = r(F)$. The set ${\mathcal M}$ of such flats $F$ is called a modular cut. Thus, for example, if ${\mathcal M}$ is empty, then $e$ is added as a coloop, whereas if ${\mathcal M}$ consists of all of the flats of $M$, then $e$ is added as a loop. The extension $N$ is uniquely determined by the modular cut ${\mathcal M}$. Thus a study of the single-element extensions of $M$ is a study of the modular cuts of $M$. Henry proved that a subset ${\mathcal F}$ of the flats of $M$ is a modular cut if and only if every flat containing a member of ${\mathcal F}$ is also in ${\mathcal F}$, and, whenever $X$ and $Y$ are in ${\mathcal F}$ and $r(X) + r(Y) = r(X \cup Y) + r(X \cap Y)$, the flat $X \cap Y$ is also in ${\mathcal F}$.
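Crapo’s two conditions are easy to check by machine. The following sketch (my own code and naming, using a rank oracle for the small uniform matroid $U_{2,3}$; the matroid choice is purely illustrative) computes the flats and tests whether a given family of flats is a modular cut.

```python
from itertools import combinations

# A minimal sketch: a matroid is given by its ground set E and a rank oracle,
# here U_{2,3}.  We verify Crapo's characterization of modular cuts.
E = frozenset({0, 1, 2})

def rank(A):                       # rank function of the uniform matroid U_{2,3}
    return min(len(A), 2)

def closure(A):
    return frozenset(x for x in E if rank(A | {x}) == rank(A))

def flats():
    return {closure(frozenset(A)) for k in range(len(E) + 1)
            for A in combinations(E, k)}

def is_modular_cut(F):
    """Crapo's conditions: F is up-closed among the flats, and contains X ∩ Y
    whenever X, Y in F form a modular pair."""
    for X in F:
        for G in flats():
            if X <= G and G not in F:      # up-closure fails
                return False
    for X in F:
        for Y in F:
            if rank(X) + rank(Y) == rank(X | Y) + rank(X & Y):
                if (X & Y) not in F:       # the intersection of flats is a flat
                    return False
    return True

print(is_modular_cut(flats()))     # all flats: e is added as a loop
print(is_modular_cut(set()))       # empty cut: e is added as a coloop
print(is_modular_cut({f for f in flats() if frozenset({0}) <= f}))  # e parallel to 0
```

Each of the three families printed is a genuine modular cut, so all three lines print `True`; the third one adds $e$ in parallel to the point $0$.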

The pictures in Figures 1 and 2 are from that first matroid conference and were kindly supplied to me by Bill Pulleyblank. In the first one, Henry is in the front row on the extreme left. Slightly to his left in the row behind him is Gian-Carlo Rota. Bill Tutte is in the same row as Rota but two along from him. Jack Edmonds is in the back row, to Tutte’s immediate right. That photo also includes Tutte’s students, Neil Robertson and Ron Mullin, as well as Ray Fulkerson and Dijen Ray-Chaudhuri. Since the reader may enjoy trying to identify those people, their locations will not be revealed until the end of this note.

Figure 1: Participants in the first matroid theory conference.

Figure 2: The matroid theory conference participants seated.

Henry’s second and third papers [4, 5] appeared in Volumes 1 and 2 of the newly founded Journal of Combinatorial Theory. His third paper A higher invariant for matroids introduced what Brylawski [2, p.252] called “Crapo’s Betsy invariant”. More formally, for a matroid $M$, the beta invariant is defined by $$ \beta (M) = (-1)^{r(M)} \sum_{A \subseteq E(M)} (-1)^{|A|} r(A).$$

Henry proved that, in a matroid $M$, for any element $e$ other than a loop or a coloop, $$ \beta (M) = \beta(M\backslash e) + \beta(M/e).$$ Since $\beta(U_{0,1}) = 0$ and $\beta(U_{1,1}) = 1$, a straightforward induction argument gives that $\beta (M) \ge 0$ for all matroids $M$. Henry proved that, when $M$ has at least two elements, $M$ is connected if and only if $\beta(M)$ is positive. He also showed, again when $M$ has at least two elements, that $\beta(M) = \beta(M^*)$. Then, using Tutte’s excluded-minor characterization of regular matroids, Henry proved the following result.
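The defining sum for $\beta$ is easy to evaluate directly. The sketch below (my own code; the two test matroids are illustrative choices, not from Henry’s paper) computes $\beta$ from a rank oracle and checks the connectivity criterion on a connected and a disconnected example.

```python
from itertools import combinations

# beta(M) = (-1)^{r(M)} * sum over A ⊆ E of (-1)^{|A|} r(A),
# computed from a rank oracle.
def beta(E, rank):
    r = rank(E)
    total = sum((-1) ** len(A) * rank(frozenset(A))
                for k in range(len(E) + 1)
                for A in combinations(E, k))
    return (-1) ** r * total

E = frozenset(range(4))
u24 = lambda A: min(len(A), 2)               # rank function of U_{2,4}
print(beta(E, u24))                          # 2: positive, and U_{2,4} is connected

# A disconnected example: U_{1,1} ⊕ U_{1,1} (two coloops), rank(A) = |A|.
two_coloops = lambda A: len(A)
print(beta(frozenset({0, 1}), two_coloops))  # 0: beta vanishes, as it must
```

The outputs $2$ and $0$ agree with Henry’s theorem: on at least two elements, $\beta(M) > 0$ exactly when $M$ is connected.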

Theorem. A matroid $M$ is regular if and only if $\beta(N_4) \le 1$ for all $4$-element minors $N_4$ of $M$, and $\beta(N_7) \le 2$ for all $7$-element minors $N_7$ of $M$.

Henry’s 1969 paper [6] The Tutte polynomial introduced the Tutte polynomial for matroids by building on Tutte’s 1954 paper [14] A contribution to the theory of chromatic polynomials, which introduced what we now call the Tutte polynomial for graphs. In a footnote in Henry’s paper, he observes that Sections 3 and 4 of his paper “constitute a rewriting, with new proofs, of the main theorems in the author’s doctoral dissertation `On the Theory of Combinatorial Independence’, submitted to the Massachusetts Institute of Technology in June, 1964, under the supervision of Professor Gian-Carlo Rota.” Note that there is no mention here of Henry’s second advisor, Kenneth Hoffman, leading one to wonder what role he played.

Tutte himself studied the Tutte polynomial for representable matroids, which he called “nets”, in his 1948 Cambridge Ph.D. thesis [13]. Since representability was not important in Tutte’s definition, he had effectively treated the Tutte polynomial for all matroids although he never published this work. The Tutte polynomial for matroids has been extensively studied by many authors in the half century since Henry’s paper formally introducing it. It can be defined as follows:
$$T(M;x,y) = \sum_{A \subseteq E} (x-1)^{r(E) - r(A)}(y-1)^{|A| - r(A)}.$$ Thus $T(U_{0,1};x,y) = y$ and $T(U_{1,1};x,y) = x$. This polynomial obeys the following two basic recursions: $$T(M;x,y) = T(M|\{e\};x,y) T(M\backslash e;x,y) ~~~\text{when $e$ is a loop or a coloop,}$$ and $$T(M;x,y) = T(M\backslash e;x,y) + T(M/ e;x,y) ~~~\text{otherwise.}$$
It follows from these that we can write $$T(M;x,y) = \sum_{i,j\ge 0} t_{ij}x^iy^j$$ where $t_{ij} \ge 0$ for all $i$ and $j$. It turns out that the beta invariant shows up among these coefficients. Specifically, when $M$ has at least two elements, $$\beta(M) = t_{10} = t_{01}.$$
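The corank-nullity sum above can be expanded mechanically to recover the coefficients $t_{ij}$. Here is a sketch (my own code; the test matroid $U_{2,4}$ is an illustrative choice) that expands each term with the binomial theorem and confirms $t_{10} = t_{01} = \beta$ on this example.

```python
from itertools import combinations
from collections import defaultdict
from math import comb

# Expand each (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)} term and collect the
# coefficients t_{ij} of x^i y^j.
def tutte_coeffs(E, rank):
    r = rank(E)
    t = defaultdict(int)
    for k in range(len(E) + 1):
        for A in combinations(E, k):
            a = r - rank(frozenset(A))       # corank of A
            b = k - rank(frozenset(A))       # nullity of A
            for i in range(a + 1):
                for j in range(b + 1):
                    t[(i, j)] += comb(a, i) * comb(b, j) * (-1) ** (a + b - i - j)
    return {ij: c for ij, c in t.items() if c}

E = frozenset(range(4))
rank = lambda A: min(len(A), 2)              # U_{2,4}
t = tutte_coeffs(E, rank)
# T(U_{2,4}; x, y) = x^2 + 2x + 2y + y^2, so t_10 = t_01 = 2 = beta(U_{2,4}).
print(t[(1, 0)] == t[(0, 1)] == 2)           # True
```

The computed polynomial $x^2 + 2x + 2y + y^2$ also shows the non-negativity of the $t_{ij}$ on this example.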

Beginning in 1964, Gian-Carlo Rota published a series of papers “On the foundations of combinatorial geometry.” The most widely cited paper in that series is Part I: Theory of Möbius functions [12]. Part II, joint with Henry, is Combinatorial geometries and was published in 1970. That 25-page paper [7] was followed in the same year by a monograph whose full title is On the Foundations of Combinatorial Theory: Combinatorial Geometries. Preliminary Edition. This was the first text in matroid theory and, as such, was very influential. A year later, Tutte published An Introduction to the Theory of Matroids, which was effectively a reprinting of his 1965 Lectures on Matroids, but that book [16] did not attract nearly the same attention as Crapo and Rota’s book. When Crapo and Rota decided to update their book, it began as a joint project with Neil White [18, p.xv]. It turned into a series of three books [18, 19, 20], which were edited by Neil White and which contained chapters by a large number of different authors including two in the first volume by Henry. The approach to matroid theory taken by Crapo and Rota was a very geometric one and much of the focus was on the lattice of flats. Studying such geometric lattices is, of course, equivalent to studying simple matroids. With hindsight, the major drawback of this approach is that the geometric lattice of the dual of a matroid $M$ is not easily obtained from the geometric lattice of $M$. The fundamental role of duality and the link that it provides between deletion and contraction would seem to have been a major factor in the subsequent shift in focus in the study of matroids away from the study of geometric lattices. Crapo and Rota’s book has many attractive features. One in particular is the introduction of the critical problem for matroids. 
The goal of that problem was to provide a common framework within which one could view a number of problems in extremal combinatorics, including what was then the Four Colour Problem, the Five-Flow Conjecture [14], and Tutte’s Tangential $2$-Block Conjecture [15]. For a matroid $M$, the characteristic (or chromatic) polynomial is defined by $$p(M;\lambda) = \sum_{A \subseteq E} (-1)^{|A|}\lambda^{r(M) - r(A)}.$$ This is an evaluation of the Tutte polynomial: $$p(M;\lambda) = (-1)^{r(M)}T(M;1-\lambda,0).$$ If $G$ is a graph with $k(G)$ connected components and cycle matroid $M(G)$, the chromatic polynomial $P_G(\lambda)$ of the graph satisfies $$P_G(\lambda) = \lambda^{k(G)}p(M(G);\lambda).$$ Moreover, if $M$ is loopless and its underlying simple matroid is $M'$, then $p(M';\lambda) = p(M;\lambda)$.
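The relation between the two polynomials can be spot-checked numerically. The sketch below (my own code; $G = K_3$, so $M(G) = U_{2,3}$ and $k(G) = 1$, is an illustrative choice) evaluates $p(M(G);\lambda)$ from the rank oracle and compares with the chromatic polynomial of the triangle.

```python
from itertools import combinations

# p(M; λ) = sum over A ⊆ E of (-1)^{|A|} λ^{r(M)-r(A)}, from a rank oracle.
def char_poly(E, rank, lam):
    r = rank(E)
    return sum((-1) ** len(A) * lam ** (r - rank(frozenset(A)))
               for k in range(len(E) + 1)
               for A in combinations(E, k))

E = frozenset(range(3))
rank = lambda A: min(len(A), 2)              # M(K_3) = U_{2,3}

# Check P_G(λ) = λ^{k(G)} p(M(G); λ) for G = K_3, where k(G) = 1 and
# P_{K_3}(λ) = λ(λ-1)(λ-2).
for lam in range(1, 6):
    assert lam * (lam - 1) * (lam - 2) == lam ** 1 * char_poly(E, rank, lam)
print("P_G(lambda) = lambda^k(G) * p(M(G); lambda) checked for lambda = 1..5")
```

Here $p(M(K_3);\lambda) = (\lambda-1)(\lambda-2)$, and multiplying by $\lambda^{k(G)} = \lambda$ recovers the chromatic polynomial of the triangle.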

Now let $M$ be a rank-$r$ simple $GF(q)$-representable matroid and view $M$ as a restriction of the projective geometry $PG(r-1,q)$. When $q$ is at least five, the embedding of $M$ in $PG(r-1,q)$ need not be unique. The critical exponent (or critical number) $c(M;q)$ of $M$ is $r-k$, where $k$ is the rank of the largest projective subspace of $PG(r-1,q)$ that contains no element of $E(M)$. In particular, recalling that the affine geometry $AG(r-1,q)$ is obtained from $PG(r-1,q)$ by deleting the elements of a hyperplane of the latter, we see that $c(M;q) = 1$ if and only if $M$ is a restriction of $AG(r-1,q)$. It is natural to expect that $c(M;q)$ may depend on the embedding of $M$ in $PG(r-1,q)$. However, the following attractive result of Crapo and Rota [8] proves that it does not.

Theorem. Let $M$ be a rank-$r$ simple $GF(q)$-representable matroid. Then
$$c(M;q) = \min\{j \ge 1: p(M;q^j) > 0\}.$$
One consequence of this is that a simple graph $G$ is bipartite if and only if the critical exponent $c(M(G);2)$ of its cycle matroid is one. More generally, the chromatic number $\chi(G)$ of $G$ and the critical exponent of $M(G)$ obey the following: $$q^{c(M;q) - 1} < \chi(G) \le q^{c(M;q)}.$$ The study of the behaviour of the critical exponent of matroids has recently been enjoying a renaissance; it is now more commonly known as the critical number. Peter Nelson [11] has recently written a very approachable survey of some of this work.
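The Crapo–Rota formula is pleasant to compute with. The following sketch (my own code; the test case is the Fano matroid $F_7 = PG(2,2)$, with the standard labelling of its seven lines) finds $c(M;q)$ as the least $j \ge 1$ with $p(M;q^j) > 0$.

```python
from itertools import combinations

# The Fano matroid F_7: rank 3, seven points, seven 3-point lines.
LINES = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
E = frozenset(range(1, 8))

def rank(A):
    if len(A) <= 2:
        return len(A)
    if len(A) == 3 and set(A) in LINES:
        return 2
    return 3                                  # F_7 has rank 3 and no 4-point lines

def char_poly(lam):
    return sum((-1) ** len(A) * lam ** (3 - rank(frozenset(A)))
               for k in range(8) for A in combinations(E, k))

def critical(q):
    """c(M;q) = min{ j >= 1 : p(M; q^j) > 0 } (Crapo-Rota)."""
    j = 1
    while char_poly(q ** j) <= 0:
        j += 1
    return j

print(critical(2))   # 3: p(F_7; 2) = p(F_7; 4) = 0 but p(F_7; 8) = 168 > 0
```

Here $p(F_7;\lambda) = (\lambda-1)(\lambda-2)(\lambda-4)$, so $c(F_7;2) = 3$; this reflects the fact that $F_7$ is one of Tutte’s tangential $2$-blocks over $GF(2)$.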

In 1973, with John Blackburn and Denis Higgs, Henry published A Catalogue of Combinatorial Geometries [1]. The preprint of that had an extraordinary cover, which is reprinted in Figure 3. The catalogue was assembled by performing sequences of single-element extensions. This paper of Henry and his collaborators seems to have been the first to do extensive computation involving small matroids. Today, thanks to Rudi Pendavingh and Stefan van Zwam, we have a wonderful matroid package in SageMath.

Figure 3: Can you identify what this says? The answer appears below.

I met Henry when I was working with Tom Brylawski in North Carolina probably in 1980. By then, Henry’s research focus was on structural rigidity, a topic that occupied his mind for many years. Henry did return to work on matroids from time to time after that. I conclude this reminiscence with one such return, in 2005, when Henry and Bill Schmitt [9] introduced a beautiful and useful matroid construction. Let $M_1$ and $M_2$ be matroids on disjoint sets $E_1$ and $E_2$. One easy way to combine $M_1$ and $M_2$ is to take their direct sum, a matroid of rank $r(M_1) + r(M_2)$. Crapo and Schmitt defined a different matroid on $E_1 \cup E_2$ of rank $r(M_1) + r(M_2)$. The free product $M_1 \Box M_2$ of $M_1$ and $M_2$ has as its bases all subsets $B$ of $E_1 \cup E_2$ of cardinality $r(M_1) + r(M_2)$ such that $B\cap E_1$ is independent in $M_1$ while $B\cap E_2$ is spanning in $M_2$. This operation is non-commutative. Indeed, $M_1 \Box M_2 = M_2 \Box M_1$ if and only if both $M_1$ and $M_2$ have rank zero, or both $M_1^*$ and $M_2^*$ have rank zero. This operation has a number of attractive properties. For example, $$(M_1 \Box M_2)^* = M_2^* \Box M_1^*.$$ Moreover, given $|E_1|$, the matroid $M_1 \Box M_2$ uniquely determines $M_1$ and $M_2$ up to isomorphism.

Theorem. For matroids $M_1$, $M_2$, $N_1$, and $N_2$, if $M_1 \Box M_2 \cong N_1 \Box N_2$ and $|E(M_1)| = |E(N_1)|$, then $M_1 \cong N_1$ and $M_2 \cong N_2$.
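The basis description of the free product is easy to experiment with. The sketch below (my own code; the uniform matroids $U_{1,2}$ and $U_{2,3}$ are an illustrative choice, not an example from the paper) lists the bases of $M_1 \Box M_2$, observes the non-commutativity, and checks the duality rule $(M_1 \Box M_2)^* = M_2^* \Box M_1^*$.

```python
from itertools import combinations

# B is a basis of M1 □ M2 exactly when |B| = r(M1) + r(M2), B ∩ E1 is
# independent in M1, and B ∩ E2 is spanning in M2.
def free_product_bases(E1, rk1, r1, E2, rk2, r2):
    E = E1 | E2
    return {B for B in map(frozenset, combinations(sorted(E), r1 + r2))
            if rk1(B & E1) == len(B & E1)     # B ∩ E1 independent in M1
            and rk2(B & E2) == r2}            # B ∩ E2 spanning in M2

E1, E2 = frozenset('ab'), frozenset('cde')
uniform = lambda r: (lambda A: min(len(A), r))  # rank oracle of U_{r,n}

FP = free_product_bases(E1, uniform(1), 1, E2, uniform(2), 2)  # U_{1,2} □ U_{2,3}
PF = free_product_bases(E2, uniform(2), 2, E1, uniform(1), 1)  # U_{2,3} □ U_{1,2}
print(len(FP), len(PF))      # 7 9: the free product is non-commutative

# Duality: the bases of (M1 □ M2)* are the complements of the bases of
# M1 □ M2, and should equal the bases of M2* □ M1* = U_{1,3} □ U_{1,2}.
dual = {(E1 | E2) - B for B in FP}
print(dual == free_product_bases(E2, uniform(1), 1, E1, uniform(1), 1))  # True
```

Since $U_{1,2}$ and $U_{2,3}$ have different numbers of free-product bases in the two orders (7 versus 9), the two free products are not even equal as sets of bases, in line with the non-commutativity noted above.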

In 1969, Dominic Welsh [17] made a natural and seemingly innocuous conjecture about the number $f(n)$ of non-isomorphic matroids on an $n$-element set. Thirty-five years later, the conjecture was settled essentially simultaneously by Manoel Lemos [10] and by Crapo and Schmitt [9] using quite different methods. Crapo and Schmitt’s proof of Dominic’s conjecture was derived by applying the last theorem.

Theorem. For all non-negative integers $m$ and $n$, $$f(m+n) \ge f(m)f(n).$$
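This superadditivity is easy to spot-check against the known small counts. The values below are quoted from the literature as I recall them (the count for eight elements comes from the Blackburn–Crapo–Higgs catalogue) and should be double-checked before being relied upon.

```python
# Numbers of non-isomorphic matroids on n = 0, ..., 8 elements, as quoted
# in the literature (an assumption on my part; verify before relying on them).
f = [1, 2, 4, 8, 17, 38, 98, 306, 1724]

# Crapo and Schmitt's theorem: f(m+n) >= f(m) f(n).
for m in range(9):
    for n in range(9 - m):
        assert f[m + n] >= f[m] * f[n]
print("f(m+n) >= f(m) f(n) holds for all m + n <= 8")
```

For instance, $f(8) = 1724 \ge 17 \cdot 17 = f(4)f(4)$, with equality nowhere in this range except $f(2) = f(1)^2$ and $f(3) = f(1)f(2)$.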

Henry liked to host small meetings at his home in La Vacquerie in the south of France. These were very stimulating meetings. The mathematics was exciting and the food was superb, with Henry hiring a chef for the week. Henry had a wonderful cellar I am told, and he shared extensively from it during these meetings. In addition, he arranged for local cultural events in the evenings including music recitals and interactive puppet shows. The experience was unforgettable for all lucky enough to enjoy it. At the only such meeting I attended, Henry gave up his own bedroom for my use. He was a most generous man. Those of us fortunate enough to have known him will miss him greatly; and all of us have his extensive mathematical legacy to continue to enrich our lives.

Answers to the open questions

In the photograph in Figure 1, Ray Fulkerson is in the back row on the extreme right; next to him is Neil Robertson; in the row in front of that, Ron Mullin is on the extreme right and Dijen Ray-Chaudhuri is next to him. In the photograph in Figure 2, Jack Edmonds is at the right-hand end of the table; two along from him is Gian-Carlo Rota; three along from him are, in order, Bill Tutte, Ray Fulkerson, and Dijen Ray-Chaudhuri. In the second row, starting at the left-hand end, the first three people are Ron Mullin, Neil Robertson, and Henry Crapo. In Figure 3, the title of the preprint is The Henry Crapo Group Presents The Incredible Catalogue of 8 Point Geometries. See Single Element Extensions Grow Before Your Eyes.


[1] Blackburn, John E.; Crapo, Henry H.; Higgs, Denis A. A catalogue of combinatorial geometries. Math. Comp. 27 (1973), 155–166.

[2] Brylawski, Thomas H. A decomposition for combinatorial geometries. Trans. Amer. Math. Soc. 171 (1972), 235–282.

[3] Crapo, Henry H. Single-element extensions of matroids. J. Res. Nat. Bur. Standards Sect. B 69B (1965), 55–65.

[4] Crapo, Henry H. The Möbius function of a lattice. J. Combinatorial Theory 1 (1966), 126–131.

[5] Crapo, Henry H. A higher invariant for matroids. J. Combinatorial Theory 2 (1967), 406–417.

[6] Crapo, Henry H. The Tutte polynomial. Aequationes Math. 3 (1969), 211–229.

[7] Crapo, Henry H.; Rota, Gian-Carlo. On the foundations of combinatorial theory. II. Combinatorial geometries. Studies in Appl. Math. 49 (1970), 109–133.

[8] Crapo, Henry H.; Rota, Gian-Carlo. On the Foundations of Combinatorial Theory: Combinatorial Geometries. Preliminary Edition. M.I.T. Press, Cambridge, Mass.-London, 1970.

[9] Crapo, Henry; Schmitt, William. The free product of matroids. European J. Combin. 26 (2005), 1060–1065.

[10] Lemos, Manoel. On the number of non-isomorphic matroids. Adv. in Appl. Math. 33 (2004), 733–746.

[11] Nelson, Peter Colouring without colours: graphs and matroids. Lond. Math. Soc. Newsl. No. 482 (2019), 25–29.

[12] Rota, Gian-Carlo. On the foundations of combinatorial theory. I. Theory of Möbius functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2 (1964), 340–368.

[13] Tutte, W. T. An algebraic theory of graph colorings. Ph.D. thesis. University of Cambridge, 1948.

[14] Tutte, W. T. A contribution to the theory of chromatic polynomials. Canadian J. Math. 6 (1954), 80–91.

[15] Tutte, W. T. Lectures on matroids. J. Res. Nat. Bur. Standards Sect. B 69B (1965), 1–47.

[16] Tutte, W. T. Introduction to the Theory of Matroids. American Elsevier, New York, 1971.

[17] Welsh, D. J. A. Combinatorial problems in matroid theory. Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969) pp. 291–306, Academic Press, London, 1971.

[18] White, Neil (editor). Theory of Matroids. Cambridge University Press, Cambridge, 1986.

[19] White, Neil (editor). Combinatorial Geometries. Cambridge University Press, Cambridge, 1987.

[20] White, Neil (editor). Matroid Applications. Cambridge University Press, Cambridge, 1992.

Decomposition-width for matroids

In this post I want to discuss material I have been working on with Daryl Funk, Mike Newman, and Geoff Whittle. In particular, I’m going to discuss a parameter for matroids called decomposition-width. This terminology has been used by Dan Král [Kra12] and Yann Strozecki [Str10, Str11]. We didn’t discover their work until after we had developed our own notion of decomposition-width, so our definition looks quite different from theirs, although it is equivalent. We have chosen to adopt their terminology.

Decomposition-width has a very natural motivation if you are familiar with matroids representable over finite fields, and matroid branch-width. Consider the following geometric illustration of the binary matroid $AG(3,2)$. The ground set has been partitioned into the sets $U$ and $V$. Let $X$ stand for the set of points coloured purple, and let $X'$ stand for the set of orange points. In the left-hand diagram, $V$ can distinguish between $X$ and $X'$. By this I mean that there is a subset $Z\subseteq V$ (we colour the points in $Z$ green) such that $X\cup Z$ is a circuit while $X'\cup Z$ is independent. However, in the right-hand diagram, no subset of $V$ can distinguish $X$ and $X'$ in this way. Geometrically, this is because $X$ and $X'$ span exactly the same subset of the three-point line that lies in the spans of both $U$ and $V$ in the ambient binary space.

In general, let $M$ be a matroid on the ground set $E$, and let $(U,V)$ be a partition of $E$. We define the equivalence relation $\sim_{U}$ on subsets of $U$. We write $X\sim_{U} X'$ to mean that whenever $Z$ is a subset of $V$, both $X\cup Z$ and $X'\cup Z$ are independent, or neither is. This is clearly an equivalence relation.

Now we consider branch-width and decomposition-width. A decomposition of a matroid, $M=(E,\mathcal{I})$, consists of a pair $(T,\varphi)$, where $T$ is a binary tree (by this I mean that every vertex has degree one or three), and $\varphi$ is a bijection from $E$ to the set of leaves of $T$. If $e$ is an edge of $T$ joining vertices $u$ and $v$, then let $U_{e}$ be the subset containing elements $z\in E$ such that the path in $T$ from $\varphi(z)$ to $u$ does not contain $v$. Define $V_{e}$ symmetrically. We say that $U_{e}$ and $V_{e}$ are displayed by the decomposition. Define $\operatorname{bw}(M;T,\varphi)$ to be the maximum of $r(U_{e})+r(V_{e})-r(M)+1$, where the maximum is taken over all edges $e$ with end-vertices $u$ and $v$. Now I will define $\operatorname{dw}(M;T,\varphi)$ to be the maximum number of equivalence classes under the relation $\sim_{U_{e}}$, where we again take the maximum over all displayed sets $U_{e}$. The branch-width of $M$ is the minimum of $\operatorname{bw}(M;T,\varphi)$, where the minimum is taken over all decompositions $(T,\varphi)$. We define the decomposition-width of $M$ in the same way: as the minimum value taken by $\operatorname{dw}(M;T,\varphi)$. We write $\operatorname{bw}(M)$ and $\operatorname{dw}(M)$ for the branch- and decomposition-widths of $M$.
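As a small sanity check of these definitions, here is a sketch (my own code; the matroid $U_{2,4}$ and the particular decomposition are illustrative choices) that evaluates $\operatorname{bw}(M;T,\varphi)$ and $\operatorname{dw}(M;T,\varphi)$ for the cubic tree on four leaves whose displayed sets are the singletons, their complements, and the pair $\{a,b\}$ with its complement.

```python
from itertools import combinations

E = frozenset('abcd')
rank = lambda A: min(len(A), 2)              # U_{2,4}

def subsets(S):
    return [frozenset(A) for k in range(len(S) + 1)
            for A in combinations(sorted(S), k)]

def indep(A):
    return rank(A) == len(A)

def num_classes(U):
    """Classes of ~_U: X ~ X' iff X∪Z and X'∪Z agree in independence
    for every Z ⊆ E − U."""
    V = E - U
    sig = {X: tuple(indep(X | Z) for Z in subsets(V)) for X in subsets(U)}
    return len(set(sig.values()))

# Displayed sets of the cubic tree with leaves a, b, c, d and middle edge
# separating {a,b} from {c,d} (each edge displays a set and its complement).
displayed = [frozenset(s) for s in
             ('a', 'bcd', 'b', 'acd', 'c', 'abd', 'd', 'abc', 'ab', 'cd')]

bw = max(rank(U) + rank(E - U) - rank(E) + 1 for U in displayed)
dw = max(num_classes(U) for U in displayed)
print(bw, dw)        # 3 3: consistent with Proposition 1 below, dw(M) >= bw(M)
```

For the middle edge, $\sim_{\{a,b\}}$ has the three classes $\{\emptyset\}$, $\{\{a\},\{b\}\}$, $\{\{a,b\}\}$, matching $\lambda(\{a,b\})+1 = 3$.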

The notion of decomposition-width is clearly motivated by matroids over finite fields, but I won’t discuss those applications now. Instead we will continue to look at more abstract properties of decomposition-width. Král proved this next result for matroids representable over finite fields.

Proposition 1. Let $M$ be a matroid. Then $\operatorname{dw}(M)\geq \operatorname{bw}(M)$.

Proof. Let $E$ be the ground set of $M$, and let $U$ be a subset of $E$. Recall that $\lambda(U)$ is $r(U)+r(E-U)-r(M)$. We will start by proving that $\sim_{U}$ has at least $\lambda(U)+1$ equivalence classes. Define $V$ to be $E-U$. Let $B_{V}$ be a basis of $M|V$, and let $B$ be a basis of $M$ that contains $B_{V}$. Then $B\cap U$ is independent in $M|U$, and
\[r(U)-|B\cap U| = r(U)-(|B|-|B_{V}|) = r(U)+r(V)-r(M) = \lambda(U).\]
Therefore we let $(B\cap U)\cup\{x_{1},\ldots, x_{\lambda(U)}\}$ be a basis of $M|U$, where $x_{1},\ldots, x_{\lambda(U)}$ are distinct elements of $U-B$. Next we construct a sequence of distinct elements, $y_{1},\ldots, y_{\lambda(U)}$ from $B_{V}$ such that $(B-\{y_{1},\ldots, y_{i}\})\cup\{x_{1},\ldots, x_{i}\}$ is a basis of $M$ for each $i\in\{0,\ldots, \lambda(U)\}$. We do this recursively. Let $C$ be the unique circuit contained in\[(B-\{y_{1},\ldots, y_{i}\})\cup\{x_{1},\ldots, x_{i}\}\cup x_{i+1}\] and note that $x_{i+1}$ is in $C$. If $C$ contains no elements of $B_{V}$, then it is contained in $(B\cap U)\cup\{x_{1},\ldots, x_{\lambda(U)}\}$, which is impossible. So we simply let $y_{i+1}$ be an arbitrary element in $C\cap B_{V}$.

We complete the claim by showing that $(B\cap U)\cup\{x_{1},\ldots,x_{i}\}$ and $(B\cap U)\cup\{x_{1},\ldots, x_{j}\}$ are inequivalent under $\sim_{U}$ whenever $i< j$. Indeed, if $Z=B_{V}-\{y_{1},\ldots, y_{i}\}$, then $(B\cap U)\cup\{x_{1},\ldots, x_{i}\}\cup Z$ is a basis of $M$, and is properly contained in $(B\cap U)\cup\{x_{1},\ldots, x_{j}\}\cup Z$, so the last set is dependent, and we are done. Now assume for a contradiction that $\operatorname{bw}(M)>\operatorname{dw}(M)$. Let $(T,\varphi)$ be a decomposition of $M$ such that if $U$ is any set displayed by an edge of $T$, then $\sim_{U}$ has at most $\operatorname{dw}(M)$ equivalence classes. There is some edge $e$ of $T$ displaying a set $U_{e}$ such that $\lambda(U_{e})+1>\operatorname{dw}(M)$, for otherwise this decomposition of $M$ certifies that
$\operatorname{bw}(M)\leq \operatorname{dw}(M)$. But $\sim_{U_{e}}$ has at least $\lambda(U_{e})+1$ equivalence classes by the previous claim. As $\lambda(U_{e})+1>\operatorname{dw}(M)$, this contradicts our choice of $(T,\varphi)$. $\square$

Král states the next result without proof.

Proposition 2. Let $x$ be an element of the matroid $M$. Then $\operatorname{dw}(M\backslash x) \leq \operatorname{dw}(M)$ and
$\operatorname{dw}(M/x) \leq \operatorname{dw}(M)$.

Proof. Let $(T,\varphi)$ be a decomposition of $M$ and assume that whenever $U$ is a displayed set, then $\sim_{U}$ has no more than $\operatorname{dw}(M)$ equivalence classes. Let $T'$ be the tree obtained from $T$ by deleting $\varphi(x)$ and contracting an edge so that every vertex in $T'$ has degree one or three. Let $U$ be any subset of $E(M)-x$ displayed by $T'$. Then either $U$ or $U\cup x$ is displayed by $T$. Let $M'$ be either $M\backslash x$ or $M/x$. We will show that in $M'$, the number of equivalence classes under $\sim_{U}$ is no greater than the number of classes under $\sim_{U}$ or $\sim_{U\cup x}$ in $M$. Let $X$ and $X'$ be representatives of distinct classes under $\sim_{U}$ in $M'$. We will be done if we can show that these representatives correspond to distinct classes in $M$. Without loss of generality, we can assume that $Z$ is a subset of $E(M)-(U\cup x)$ such that $X\cup Z$ is independent in $M'$, but $X'\cup Z$ is dependent. If $M'=M\backslash x$, then $X\cup Z$ is independent in $M$ and $X'\cup Z$ is dependent, and thus we are done. So we assume that $M'=M/x$. If $U$ is displayed by $T$, then we observe that $X\cup (Z\cup x)$ is independent in $M$, while $X'\cup (Z\cup x)$ is dependent. On the other hand, if $U\cup x$ is displayed, then $(X\cup x)\cup Z$ is independent in $M$ and $(X'\cup x)\cup Z$ is dependent. $\square$

When we combine Propositions 1 and 2, we see that the class of matroids with decomposition-width at most $k$ is a minor-closed subclass of the matroids with branch-width at most $k$. The class of matroids with branch-width at most $k$ has finitely many excluded minors [GGRW03]. In contrast to this, Mike and I convinced ourselves that there are classes of the form $\{M\colon \operatorname{dw}(M) \leq k\}$ with infinitely many excluded minors. I guess we’d had a couple of beers by that point, but I think our argument holds up. I’ll eventually add that argument to this post. If anyone presents a proof in the comments before I do then I will buy them a drink at the next SIGMA meeting.


[GGRW03] J. F. Geelen, A. M. H. Gerards, N. Robertson, and G. P. Whittle. On the excluded minors for the matroids of branch-width $k$. J. Combin. Theory Ser. B 88 (2003), no. 2, 261–265.

[Kra12] D. Král. Decomposition width of matroids. Discrete Appl. Math. 160 (2012), no. 6, 913–923.

[Str10] Y. Strozecki. Enumeration complexity and matroid decomposition. Ph.D. thesis, Université Paris Diderot (2010).

[Str11] Y. Strozecki. Monadic second-order model-checking on decomposable matroids. Discrete Appl. Math. 159 (2011), no. 10, 1022–1039.

Clutters III

A long time ago I started a series of posts about clutters. The most recent post followed the textbook by Gérard Cornuéjols in defining several important classes, and introduced a Venn diagram showing the relationships between them.


In this post we will spend a little more time discussing this diagram.

We let $H=(S,\mathcal{A})$ be a clutter. This means that $S$ is a finite set, and the members of $\mathcal{A}$ are subsets of $S$, none of which is properly contained in another. We let $M$ stand for the incidence matrix of $H$. This means that the columns of $M$ are labelled by the elements of $S$, and the rows are labelled by members of $\mathcal{A}$, where an entry of $M$ is one if and only if the corresponding element of $S$ is contained in the corresponding member of $\mathcal{A}$. Any entry of $M$ that is not one is zero. Let $w$ be a vector in $\mathbb{R}^{S}$ with non-negative values. We have two fundamental linear programs:

(1) Find $x\in \mathbb{R}^{S}$ that minimises $w^{T}x$ subject to the constraints $x\geq \mathbf{0}$ and $Mx\geq \mathbf{1}$.

The vectors $\mathbf{0}$ and $\mathbf{1}$ have all entries equal to zero and one, respectively. When we write that real vectors $a$ and $b$ with the same number of entries satisfy $a\geq b$, we mean that each entry of $a$ is at least equal to the corresponding entry in $b$.

(2) Find $y\in\mathbb{R}^{\mathcal{A}}$ that maximises $y^{T}\mathbf{1}$ subject to the constraints $y\geq \mathbf{0}$ and $y^{T}M\leq w$.
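The pair of programs is easy to experiment with by brute force on a tiny example. The sketch below (my own code; I use the clutter whose members are the three edges of a triangle, which I believe is the clutter called $C_{3}^{2}$ later in this post, an assumption on my part) compares the integral and fractional optima for $w = \mathbf{1}$.

```python
from itertools import product

# Incidence matrix of the clutter on S = {0,1,2} with members
# {0,1}, {1,2}, {0,2} (the edges of a triangle), and w = 1.
M = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]

def covers(x):       # feasibility for (1): Mx >= 1
    return all(sum(r[i] * x[i] for i in range(3)) >= 1 for r in M)

def packs(y):        # feasibility for (2): y^T M <= w = 1
    return all(sum(y[j] * M[j][i] for j in range(3)) <= 1 for i in range(3))

# Brute force over 0/1 vectors (larger integer entries never help here).
min_cover = min(sum(x) for x in product(range(2), repeat=3) if covers(x))
max_pack = max(sum(y) for y in product(range(2), repeat=3) if packs(y))
print(min_cover, max_pack)    # 2 1: the integral optima disagree

# The fractional optimum of both programs is 3/2, attained at (1/2, 1/2, 1/2).
half = (0.5, 0.5, 0.5)
print(covers(half), packs(half))    # True True
```

The integral minimum of (1) is $2$ while the integral maximum of (2) is $1$, so this clutter does not pack; the common fractional optimum $3/2$ sits strictly between them.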

Lemma 1. Any clutter with the Max Flow Min Cut property also has the packing property.

Proof. Let $H$ be a clutter with the Max Flow Min Cut property. This means that for any choice of vector $w$ with non-negative integer entries, the programs (1) and (2) both have optimal solutions with integer values.

We will show that $H$ has the packing property. According to the definition in the literature, this means that for any choice of vector $w$ with entries equal to $0$, $1$, or $+\infty$, there are optimal solutions to (1) and (2) with integer entries. I think there is a problem with this definition. Assume that $r$ is a row of $M$, and every member of the support of $r$ receives a weight of $+\infty$ in the vector $w$. Then (2) cannot have an optimal solution. If $y$ is a purported optimal solution, then we can improve it by adding $1$ to the entry of $y$ that corresponds to row $r$. The convention is that if entry $i$ of $w$ is $+\infty$, then entry $i$ of any solution $x$ to (1) must be $0$. Again, we have a problem, for if $r$ is a row of $M$, and the entire support of $r$ is weighted $+\infty$, then (1) has no solution: if $x$ were a solution, then it would be zero in every entry in the support of $r$, meaning that $Mx$ has a zero entry.

The literature is unanimous in saying that $H$ has the packing property if (1) and (2) both have integral optimal solutions for any choice of vector $w$ with entries $0$, $1$, and $+\infty$. As far as I can see, this means that any clutter with $\mathcal{A}$ non-empty does not have the packing property: simply declare $w$ to be the vector with all entries equal to $+\infty$. Then neither (1) nor (2) has an optimal solution at all. I think the way to recover the definition is to say that whenever a vector $w$ with entries $0$, $1$, or $+\infty$ is chosen in such a way that (1) and (2) have solutions, both have optimal solutions that are integral. This is the definition that I will use here.

After this detour, we return to our task, and assume that $H$ has the Max Flow Min Cut property. Assume that $w$ is a vector with entries equal to $0$, $1$, or $+\infty$, and that (1) and (2) both have solutions. This means that any row in $M$ has a member of its support which is not weighted $+\infty$ by $w$. We obtain the vector $u$ by replacing each $+\infty$ entry in $w$ with an integer that is greater than $|S|$. Now because $H$ has the Max Flow Min Cut property, it follows that there are optimal integral solutions, $x$ and $y$, to (1) and (2) (relative to the vector $u$). We will show that $x$ and $y$ are also optimal solutions to (1) and (2) relative to the vector $w$.

We partition $S$ into $S_{0}$, $S_{1}$, and $S_{+\infty}$ according to whether an element of $S$ receives a weight of $0$, $1$, or $+\infty$ in $w$. We have assumed that no member of $\mathcal{A}$ is contained in $S_{+\infty}$. We note that if $z\in \mathbb{Z}^{S}$ is a vector which is equal to zero for each element of $S_{+\infty}$, and one everywhere else, then $z$ is a solution to (1), by this assumption. Moreover, $w^{T}z=u^{T}z\leq |S|$. Now it follows that $x$ must be zero in every entry in $S_{+\infty}$, for otherwise $u^{T}x>|S|$, and therefore $x$ is not an optimal solution to (1) relative to $u$. Since $x$ is integral and optimal, it follows that we can assume every entry is either one or zero. If $x$ is not an optimal solution to (1) relative to $w$, then we let $z$ be an optimal solution with $w^{T}z < w^{T}x$. But by convention, $z$ must be zero in every entry of $S_{+\infty}$. Therefore $u^{T}z=w^{T}z < w^{T}x=u^{T}x$, and we have a contradiction to the optimality of $x$. Thus $x$ is an optimal solution to (1) relative to the $\{0,1,+\infty\}$-vector $w$.

Now for problem (2). Since $y$ is integral and non-negative, and $y^{T}M\leq w$, where every member of $\mathcal{A}$ contains an element of $S_{0}$ or $S_{1}$, it follows that each entry of $y$ must be either one or zero. Let $z$ be any solution of (2) relative to $w$. Exactly the same argument shows that each entry of $z$ is between zero and one. By weak duality, $z^{T}\mathbf{1}$ is at most the optimal value of (1) relative to $w$, which is at most $|S|$; hence, in each column indexed by $S_{+\infty}$, the corresponding entry of $z^{T}M$ is at most $|S|$, which is less than the corresponding entry of $u$, while in the remaining columns $z^{T}M\leq w=u$. Thus $z$ is also a solution to (2) relative to $u$, and the optimality of $y$ relative to $u$ gives $z^{T}\mathbf{1}\leq y^{T}\mathbf{1}$, so $y$ is an optimal solution to (2).

We have shown that relative to the vector $w$, both (1) and (2) have optimal solutions that are integral. Hence $H$ has the packing property. $\square$

Lemma 2. A clutter with the packing property packs.

Proof. This one is easy. In order to prove that $H$ packs, we merely need to show that (1) and (2) have optimal integral solutions when $w$ is the vector with all entries equal to one. But this follows immediately from our revised definition of clutters with the packing property. $\square$

The final containment we should show is that clutters with the packing property are ideal. Idealness means that (1) has an optimal integral solution for all non-negative vectors $w\in \mathbb{R}^{S}$. This proof is difficult, so I will leave it for a future post. Usually we prove it by using a theorem due to Lehman [Leh].

Theorem (Lehman). The clutter $H$ is ideal if and only if (1) has an optimal integral solution for all choices of vector $w\in\{0,1,+\infty\}^{S}$.

Question. Is there a short proof that clutters with the packing property are ideal? One that does not rely on Lehman’s (quite difficult) theorem?

We will conclude with some examples showing that various containments are proper.

Let $C_{3}^{2}$ and $C_{3}^{2+}$ be the clutters with incidence matrices
Let $Q_{6}$ and $Q_{6}^{+}$ be the clutters with incidence matrices

Exercise. Check that:

  1. $C_{3}^{2}$ is not ideal and does not pack,
  2. $C_{3}^{2+}$ packs, but is not ideal,
  3. $Q_{6}$ is ideal, but does not pack,
  4. $Q_{6}^{+}$ is ideal and packs, but does not have the packing property.
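These checks are small enough to do by brute force. As a sketch, the following Python computes the minimum cover size $\tau$ and the maximum packing size $\nu$ for $Q_{6}$, assuming the standard definition of $Q_{6}$ as the clutter of triangles of $K_{4}$ (the incidence matrix above should match this up to reordering).

```python
from itertools import combinations

# Q_6, by its standard definition (assumed here): the ground set is the
# six edges of K_4, and the members are its four triangles.
edges = ['12', '13', '14', '23', '24', '34']
triangles = [{'12', '13', '23'}, {'12', '14', '24'},
             {'13', '14', '34'}, {'23', '24', '34'}]

def tau(ground, members):
    """Minimum size of a cover: a subset of the ground set meeting every member."""
    for k in range(len(ground) + 1):
        for cover in combinations(ground, k):
            if all(m & set(cover) for m in members):
                return k

def nu(members):
    """Maximum size of a packing: a collection of pairwise disjoint members."""
    for k in range(len(members), 0, -1):
        for pack in combinations(members, k):
            if all(a.isdisjoint(b) for a, b in combinations(pack, 2)):
                return k
    return 0

print(tau(edges, triangles), nu(triangles))  # 2 1
```

Since any two triangles of $K_{4}$ share an edge, $\nu = 1$, while two disjoint edges are needed to meet all four triangles, so $\tau = 2 > \nu$: as item 3 claims, $Q_{6}$ does not pack.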

This leaves one cell in the Venn diagram without a clutter: the clutters with the packing property that do not have the Max Flow Min Cut property. In fact, Conforti and Cornuéjols [CC] have speculated that no such clutter exists.

Conjecture (Conforti and Cornuéjols). A clutter has the packing property if and only if it has the Max Flow Min Cut property.

[CC] M. Conforti and G. Cornuéjols, Clutters that Pack and the Max Flow Min Cut Property: A Conjecture, The Fourth Bellairs Workshop on Combinatorial Optimization, W.R. Pulleyblank and F.B. Shepherd eds. (1993).

[Leh] A. Lehman, On the width-length inequality, Mathematical Programming 16 (1979), no. 1, 245–259.

Four proofs of a theorem by Vámos

Let $\chi(M)$ be the characteristic set of the matroid $M$; that is, \[\chi(M)=\{\operatorname{char}(\mathbb{F})\colon M\ \text{is representable over the field}\ \mathbb{F}\},\] where $\operatorname{char}(\mathbb{F})$ is the characteristic of $\mathbb{F}$. It is well known that $0\in\chi(M)$ if and only if $\chi(M)$ contains infinitely many primes. One direction is due to Rado [4]. The following theorem contains the other direction.
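For a concrete illustration of a finite characteristic set (my own example, not one from the post), recall that the Fano matroid $F_{7}$ has $\chi(F_{7})=\{2\}$. The sketch below takes the standard integer matrix whose columns are the seven non-zero vectors over $\operatorname{GF}(2)$ and counts, modulo a few primes, how many $3$-element sets of columns have non-vanishing determinant. Only modulo $2$ does it have the $28$ bases of $F_{7}$; in odd characteristic the matrix represents the non-Fano matroid, with $29$ bases.

```python
from itertools import combinations

# Columns of the standard representation of the Fano plane.
cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1),
        (1, 0, 1), (1, 1, 0), (1, 1, 1)]

def det3(a, b, c):
    """Determinant of the 3x3 integer matrix with columns a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

for p in (2, 3, 5):
    bases = sum(1 for t in combinations(cols, 3) if det3(*t) % p != 0)
    print(p, bases)   # 28 bases mod 2, but 29 mod 3 and mod 5: this
                      # matrix represents F_7 only in characteristic 2
```

The one triple whose determinant is $\pm 2$ (columns $4$, $5$, $6$) is dependent exactly in characteristic $2$, which is what pins $\chi(F_{7})$ down to $\{2\}$.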

Theorem 1. If $\chi(M)$ contains infinitely many primes, then it contains zero.

One unusual aspect of Theorem 1 is that there are four proofs of it in the literature. All of them are attractive, and they call upon different (but related) sets of tools: Zorn’s lemma, compactness in first-order logic, ultraproducts, and Gröbner bases. In this post I will review all four proofs.

To start with, let $r$ be $r(M)$, and assume that $E(M)=\{1,\ldots, n\}$. Let $X$ be the set of variables $\{x_{i,j}\}_{1\leq i \leq r,\ 1\leq j \leq n}$, and let $R$ be the polynomial ring $\mathbb{Z}[X]$. We will let $A$ be the $r\times n$ matrix where the entry in row $i$ and column $j$ is $x_{i,j}$. Each $r$-element subset of $E(M)$ corresponds to an $r\times r$ submatrix of $A$, and thereby to a homogeneous polynomial in $R$, namely the determinant of that submatrix. Let $\mathcal{B}$ be the set of polynomials that correspond to bases of $M$, and let $\mathcal{N}$ contain the polynomials that correspond to dependent $r$-element subsets in $M$. Thus a representation of $M$ is a ring homomorphism from $R$ to a field that sends every polynomial in $\mathcal{N}$, but no polynomial in $\mathcal{B}$, to zero.
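This setup is easy to experiment with. The sketch below (using sympy; the tiny matroid is my own illustrative example, not one from the post) builds the matrix $A$ of indeterminates for a rank-$2$ matroid on three elements in which elements $2$ and $3$ are parallel, and extracts the polynomials in $\mathcal{B}$ and $\mathcal{N}$.

```python
import sympy as sp

# Build the r x n matrix A of indeterminates for a small example:
# a rank-2 matroid on E = {1,2,3} in which {2,3} is a circuit.
r, n = 2, 3
A = sp.Matrix(r, n, lambda i, j: sp.Symbol(f'x{i+1}{j+1}'))

def minor(cols):
    """The polynomial attached to an r-subset of E: the determinant
    of the corresponding r x r submatrix of A."""
    return A[:, list(cols)].det()

B = [minor(c) for c in [(0, 1), (0, 2)]]   # bases {1,2} and {1,3}
N = [minor(c) for c in [(1, 2)]]           # the dependent set {2,3}

# A representation over a field F is an evaluation of the x_{ij} in F
# that sends every polynomial in N, and none in B, to zero.
print(N[0])
```

Here $N[0]$ is the $2\times 2$ determinant $x_{12}x_{23}-x_{13}x_{22}$, the one polynomial that any representation must kill.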

The first proof of Theorem 1 is due to Peter Vámos. He published two articles in the proceedings entitled Möbius Algebras, produced by the University of Waterloo in 1971. The proof of Theorem 1 has sometimes been cited as coming from [6], but in fact the correct attribution is [5]. His proof hinges on the following lemma. Recall that an ideal $P$ is prime if $ab\in P$ implies that $a\in P$ or $b\in P$.

Lemma 1. Let $I$ be an ideal in a commutative ring $S$, and let $T$ be a multiplicatively closed subset of $S$ that is disjoint from $I$. Then there is a prime ideal, $P$, such that $I\subseteq P$ and $P\cap T=\emptyset$.

Proof. We consider the partial order of ideals of $S$ that contain $I$ and are disjoint from $T$. If $J_{1}\subseteq J_{2}\subseteq J_{3}\subseteq \cdots$ is a chain in this order, then it is easy to confirm that $J_{1}\cup J_{2}\cup J_{3}\cup \cdots$ is itself an ideal in the order. Thus every chain has an upper bound, so we can apply Zorn's Lemma: the partial order has a maximal element, $P$. It remains to show that $P$ is a prime ideal. Assume otherwise, so that the product $ab$ belongs to $P$ even though $a\notin P$ and $b\notin P$. The maximality of $P$ means that the ideal generated by $P$ and $a$ is not disjoint from $T$, so there is some element, $ca+p$, in $T\cap \langle P, a\rangle$. (Here $p\in P$, and $c$ is an element of $S$.) Similarly, there are elements $q\in P$ and $d\in S$ such that $db+q$ belongs to $T$. Since $T$ is multiplicatively closed, the product $(ca+p)(db+q)=(cd)(ab)+(ca)q+(db)p+pq$ is in $T$. But this element is also in $P$, since $ab\in P$ and $p,q\in P$. Now we have a contradiction to the fact that $P$ and $T$ are disjoint. $\square$

Proof 1. Let $p_{1},p_{2},p_{3},\ldots$ be distinct primes such that $M$ is representable over a field of characteristic $p_{i}$ for each $i$. Let $\phi_{i}$ be a homomorphism from $R$ to a field of characteristic $p_{i}$ that takes every polynomial in $\mathcal{N}$, and no polynomial in $\mathcal{B}$, to zero. Then the kernel, $\ker(\phi_{i})$, of $\phi_{i}$ is a prime ideal of $R$ containing $\mathcal{N}$ and disjoint from $\mathcal{B}$. Moreover, the integer $p_{i}$ is contained in $\ker(\phi_{i})$. In fact, the only integers in $\ker(\phi_{i})$ are the multiples of $p_{i}$: if $\ker(\phi_{i})$ contained an integer that is not a multiple of $p_{i}$, then it would contain the greatest common divisor of that integer and $p_{i}$, namely $1$. This would imply that $\ker(\phi_{i})$ is equal to $R$, which is impossible.

Assume that there is an element in both the ideal $\langle \mathcal{N}\rangle$ and the multiplicatively closed set generated by $\mathcal{B}\cup(\mathbb{Z}\setminus\{0\})$. Such an element has the form $kf$, where $k$ is a non-zero integer and $f$ is a product of polynomials from $\mathcal{B}$. Since $kf$ is in $\langle \mathcal{N}\rangle$, it is in each $\ker(\phi_{i})$, even though $f$ is not. Thus $k$ is in each ideal $\ker(\phi_{i})$, so $k$ is a multiple of each of the distinct primes $p_{1},p_{2},p_{3},\ldots$, and we have a contradiction. Now, by Lemma 1, there is a prime ideal, $P$, of $R$ which contains $\langle \mathcal{N}\rangle$, and which avoids $\mathcal{B}\cup(\mathbb{Z}\setminus\{0\})$. Because $P$ is prime, the quotient ring, $R/P$, is an integral domain. Let $F$ be the field of fractions of $R/P$. The natural homomorphism from $R$ to $F$ gives us a representation of $M$. As $P$ contains no non-zero integer, it follows that $R/P$ has characteristic zero, and we are done. $\square$

The next proof of Theorem 1 makes use of the Compactness Theorem in first-order logic. It is due to Samuel Wagstaff and appeared two years after Vámos’s proof, in 1973 [7]. Wagstaff does not supply many details; perhaps the account in his PhD thesis (upon which the article is based) is more explicit. But I believe that my version below captures his key ideas.

Proof 2. We construct a first-order language of vector spaces. We have a countable supply of variables, $x_{1},x_{2},x_{3},\ldots$, and unary predicates $\operatorname{Scal}$ and $\operatorname{Vec}$. In our interpretation, these predicates will indicate that a variable stands for a scalar or a vector. We also have constants, $0_{S}$, $1_{S}$, and $0_{V}$, which are interpreted as the identity elements of the field of scalars and of the vector space. The functions $+_{S}$, $\times_{S}$, $+_{V}$, and $\times_{V}$ are interpreted as the two binary operations of the field of scalars, addition of vectors, and scalar multiplication, respectively. The only binary relation we need is equality. Now, in this language, we can express the axioms for fields and for vector spaces. To illustrate, the compatibility of scalar and vector multiplication would be expressed with the sentence \begin{multline*}\forall x_{1}\forall x_{2}\forall x_{3} (\operatorname{Scal}(x_{1})\land \operatorname{Scal}(x_{2})\land \operatorname{Vec}(x_{3}) \to\\ ((x_{1}\times_{S}x_{2})\times_{V}x_{3}=x_{1}\times_{V}(x_{2}\times_{V}x_{3}))).\end{multline*} We can assert the representability of $M$ by requiring that there exist $x_{1},x_{2},\ldots, x_{n}$ such that $\operatorname{Vec}(x_{i})$ holds for each $i$. For each subset $X\subseteq E(M)=\{1,\ldots, n\}$, we include a sentence asserting that the corresponding subset of $\{x_{1},x_{2},\ldots, x_{n}\}$ is linearly independent if $X$ is independent in $M$, and otherwise asserting that it is linearly dependent. Let us illustrate how this can be accomplished by writing the formula which asserts that $\{x_{1},x_{2},x_{3}\}$ is linearly independent. Let $y_{1}, y_{2}, y_{3}$ be variables that are not used elsewhere in our sentence.
The formula we need is as follows: \begin{multline*}\forall y_{1}\forall y_{2}\forall y_{3} (\operatorname{Scal}(y_{1})\land \operatorname{Scal}(y_{2})\land \operatorname{Scal}(y_{3}) \land (y_{1}\ne 0_{S} \lor y_{2}\ne 0_{S} \lor y_{3}\ne 0_{S})\to\\ (y_{1}\times_{V}x_{1}) +_{V} (y_{2}\times_{V} x_{2}) +_{V} (y_{3}\times_{V}x_{3})\ne 0_{V}).\end{multline*} We simply perform this trick for each subset of $E(M)$.

Now we have a finite collection, $\mathcal{T}$, of sentences which can be satisfied if and only if $M$ is representable over a field. For each prime, $p$, let $S_{p}$ be the sentence asserting that the sum of $p$ copies of $1_{S}$ is equal to $0_{S}$. Thus $M$ is representable over a field of characteristic $p$ if and only if $\mathcal{T}\cup\{S_{p}\}$ can be satisfied. Assume that $M$ is not representable over a field with characteristic zero. Then the collection $\mathcal{T}\cup\{\neg S_{p}\colon p\ \text{is a prime}\}$ is inconsistent: it cannot be satisfied by any model. The Compactness Theorem tells us that there is a finite subcollection which is inconsistent. Therefore we can assume that $\mathcal{T}\cup\{\neg S_{p} \colon p\ \text{is a prime and}\ p\leq N\}$ has no model, for some integer $N$. This implies that there can be no representation of $M$ over a field with characteristic greater than $N$, so we have proved the contrapositive of Theorem 1. $\square$

The next proof appeared in 1989, and comes courtesy of Imre Leader [3]. I think it might be my favourite out of the four. It uses the notion of an ultraproduct.

Proof 3. Let $F_{1},F_{2},F_{3},\ldots$ be an infinite sequence of fields such that $M$ is representable over each $F_{i}$ and $\operatorname{char}(F_{i+1})>\operatorname{char}(F_{i})$. Let $\mathcal{U}$ be a non-principal ultrafilter of the positive integers. This means that $\mathcal{U}$ is a collection of subsets of positive integers satisfying: (1) any superset of a member of $\mathcal{U}$ is also in $\mathcal{U}$; (2) if $U,V\in\mathcal{U}$, then $U\cap V\in \mathcal{U}$; and (3) if $U$ is any subset of $\mathbb{Z}^{+}$, then exactly one of $U$ and $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. Non-principal ultrafilters cannot be described explicitly, but we know that they exist by an application of Zorn’s Lemma. If $U$ is a finite subset of $\mathbb{Z}^{+}$, then $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. We consider sequences $(x_{1},x_{2},x_{3},\ldots)$, where each $x_{i}$ is an element of $F_{i}$, and we declare $(x_{1},x_{2},x_{3},\ldots)$ and $(y_{1},y_{2},y_{3},\ldots)$ to be equivalent if $\{i\in\mathbb{Z}^{+}\colon x_{i}=y_{i}\}$ is a member of $\mathcal{U}$. We add and multiply sequences componentwise. It is not too difficult to see that these operations respect our notion of equivalence, and that the equivalence classes of sequences form a field, $F$. Moreover, the characteristic of $F$ is zero: for any prime $p$, the sum of $p$ copies of the identity of $F$ is non-zero in all but finitely many coordinates, and so is not equivalent to the zero sequence. Now consider a sequence of representations of $M$ over the fields $F_{1},F_{2},F_{3},\ldots$. This leads to a representation of $M$ over $F$ in the obvious way: we simply “stack” the infinite sequence of matrices, producing a matrix whose entries are infinite sequences of numbers. $\square$

Finally, we come full circle! In 2003, Vámos (along with Rosemary Baines) published another proof of Theorem 1 [2]. The proof is actually an easy consequence of the algorithm that Stefan mentioned in a previous post, and the main ingredient is the use of Gröbner bases. I will tweak the proof slightly, for the sake of simplicity.

Proof 4. Let $f$ be the product of all polynomials in $\mathcal{B}$. We introduce a new, “dummy”, variable, $y$. From now on, we will operate in the ring $\mathbb{Z}[X\cup\{y\}]$. Note that an evaluation of the variables in $X\cup\{y\}$ can make $yf-1$ zero if and only if it makes each polynomial in $\mathcal{B}$ non-zero. Let $I$ be the ideal of $\mathbb{Z}[X\cup\{y\}]$ that is generated by $\mathcal{N}\cup\{yf-1\}$. Thus a representation of $M$ is really a ring homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field that takes every polynomial in $I$ to zero.

Let $G$ be a strong Gröbner basis of $I$ (see [1]). Thus $I$ is the ideal generated by $G$. In addition, we have an ordering on products of variables (the lexicographic order will suffice). The leading term of a polynomial is its highest non-zero term under this ordering. For $G$ to be a strong Gröbner basis, we require that whenever $h$ is in $I$, there is a polynomial $g\in G$ such that the leading term of $g$ divides the leading term of $h$. Assume that $G$ contains a non-zero integer, $N$. If $\phi$ is a homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field, $F$, such that $\ker(\phi)$ contains $I$, then $N$ is taken to zero by $\phi$. This means that the characteristic of $F$ is a prime divisor of $N$, so $M$ is representable over only finitely many prime characteristics, contrary to our hypothesis. Therefore we will now assume that $G$ contains no integer.

Let $I_{\mathbb{Q}}$ be the ideal of $\mathbb{Q}[X\cup\{y\}]$ generated by $\mathcal{N}\cup\{yf-1\}$. Assume that $1\in I_{\mathbb{Q}}$. Then $1$ can be expressed as a polynomial combination of the polynomials in $\mathcal{N}\cup\{yf-1\}$. By multiplying this combination through by the denominators of its coefficients, we see that $I$ contains a non-zero integer, $N$. Thus $G$ contains a polynomial whose leading term divides the leading term of $N$, which is $N$ itself. But in the lexicographic ordering, a constant polynomial is below every non-constant polynomial, so this polynomial in $G$ must itself be an integer, contradicting our conclusion from the previous paragraph. Therefore $1\notin I_{\mathbb{Q}}$, which means that $I_{\mathbb{Q}}$ is contained in a maximal ideal, $P$. Note that $P$ contains no non-zero constant polynomial. Now $\mathbb{Q}[X\cup\{y\}]/P$ is a field, and the natural homomorphism from $\mathbb{Q}[X\cup\{y\}]$ to this field leads to a representation of $M$ over a field of characteristic zero. $\square$
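Here is a small sketch of the computation lurking behind this proof, on my toy rank-$2$ matroid on $\{1,2,3\}$ with $\{2,3\}$ dependent. I use sympy's Gröbner basis routine over $\mathbb{Q}$ rather than a strong Gröbner basis over $\mathbb{Z}$, so it only illustrates the final paragraph: the reduced basis of $I_{\mathbb{Q}}$ is not $\{1\}$, so the matroid is representable in characteristic zero.

```python
import sympy as sp

# Toy example: rank-2 matroid on {1,2,3} in which {2,3} is a circuit.
x11, x12, x13, x21, x22, x23, y = sp.symbols('x11 x12 x13 x21 x22 x23 y')

d12 = x11*x22 - x12*x21   # basis {1,2}
d13 = x11*x23 - x13*x21   # basis {1,3}
d23 = x12*x23 - x13*x22   # dependent set {2,3}

# I_Q is generated by N together with y*f - 1, where f = d12*d13.
gens = [d23, y*d12*d13 - 1]
G = sp.groebner(gens, x11, x12, x13, x21, x22, x23, y, order='grevlex')

# If the reduced Groebner basis were [1], the ideal would be the whole
# ring, and the matroid would have no representation over any field of
# characteristic zero.  Whether the basis is [1] does not depend on the
# monomial order, so grevlex (usually faster than lex) suffices here.
print(1 in G.exprs)  # False
```

A representation in characteristic zero does exist for this matroid, e.g. $A = \begin{pmatrix}1&0&0\\0&1&1\end{pmatrix}$, so the basis could not have been $\{1\}$.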


[1] W. W. Adams and P. Loustaunau. An introduction to Gröbner bases. Graduate Studies in Mathematics, 3. American Mathematical Society, Providence, RI, 1994.

[2] R. Baines and P. Vámos. An algorithm to compute the set of characteristics of a system of polynomial equations over the integers. J. Symbolic Comput. 35 (2003), no. 3, 269–279.

[3] I. Leader. A short proof of a theorem of Vámos on matroid representations. Discrete Math. 75 (1989), no. 1-3, 315–317.

[4] R. Rado. Note on independence functions. Proc. London Math. Soc. (3) 7 (1957), 300–320.

[5] P. Vámos. A necessary and sufficient condition for a matroid to be linear. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 162–169. Univ. Waterloo, Waterloo, Ont., 1971.

[6] P. Vámos. Linearity of matroids over division rings. Notes by G. Roulet. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 170–174. Univ. Waterloo, Waterloo, Ont., 1971.

[7] S. S. Wagstaff Jr. Infinite matroids. Trans. Amer. Math. Soc. 175 (1973), 141–153.