# Polymath 12: Rota’s Basis Conjecture

Polymath 12 has just launched, and the topic is Rota’s Basis Conjecture!  See this guest post by Jan Draisma for background on the conjecture.

The polymath projects are a mathematical experiment proposed by Tim Gowers to see whether massively collaborative mathematics is possible.  A proof is developed through a sequence of public comments on a blog, and the general idea is to share whatever ideas come to mind, allowing for rapid public dissemination.

Polymath 12 is hosted by Timothy Chow and you can click here to follow the progress or to contribute.

# Building matroids from infinite graphs

Today we’ll be looking at infinite matroids again. We started this series by examining the question of how infinite matroids could be defined. With a suitable definition in hand, we took a quick tour through the zoo of known examples. Then we took a closer look at one very flexible way to build infinite matroids: by sticking together infinite trees of matroids.

In that construction we used 2-sums to stick the matroids together. Suppose that we have a finite collection of matroids arranged in a tree, so that their ground sets overlap (always in single edges) if and only if they are adjacent in the tree. Then because 2-sums are associative we have an unambiguously defined 2-sum of that collection. In the previous post in this series, we saw that this construction also works if we allow the tree to be infinite, but that we have to specify a little extra information: the set $\Psi$ of ends of the tree which the circuits are allowed to use.

The same trick can be used for other kinds of associative sum of finite matroids. In this post we’ll see how it works for sums of representable matroids, and why that is useful for understanding the topological spaces associated to infinite graphs.

To see how the addition of representable matroids works, we need to look at them from a slightly unusual angle. Let’s say that we have a matroid $M$ on a ground set $E$, and suppose that we have vectors $\{v_e | e \in E\}$ in some vector space over a field $k$, giving us a representation of $M$ over $k$. Then we can capture all the essential information about this representation by forgetting the details of the vector space and focusing just on the linear relationships amongst the vectors. More formally, we say that an element $\lambda$ of the space $k^E$ is a linear dependence of the $v_e$ if $\sum_{e \in E} \lambda(e)v_e = 0$. Then the linear dependences form a subspace $V$ of $k^E$, and this subspace is enough to recover the matroid $M$; the circuits of $M$ are precisely the minimal nonempty supports of vectors in $V$. For those who prefer to think of representations in terms of matrices rather than families of vectors, the subspace we’re working with is just the orthogonal complement of the row space of the matrix.

So we can encode representations of matroids on $E$ over $k$ as subspaces of $k^E$. This way of seeing representations fits well with matroid duality, in that if $V \subseteq k^E$ represents a matroid $M$ on $E$ then the orthogonal complement $V^{\bot}$ represents the dual matroid $M^*$. If we define $M(V)$ to be the matroid whose circuits are the minimal nonempty supports of elements of $V$, then we can express this as $M(V^{\bot}) = (M(V))^*$.
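To make this concrete, here is a small computational sketch (my own illustration; the helper names are mine, and $k = \mathbb{F}_2$ is chosen for simplicity). It reads off the circuits of $M(V)$ as the minimal nonempty supports of vectors in $V$, and checks the duality $M(V^{\bot}) = (M(V))^*$ by verifying that the bases of $M(V^{\bot})$ are exactly the complements of the bases of $M(V)$:

```python
from itertools import combinations, product

def span(gens, n):
    """All GF(2) linear combinations of the generator vectors (tuples of 0/1)."""
    vecs = set()
    for coeffs in product([0, 1], repeat=len(gens)):
        vecs.add(tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 2
                       for i in range(n)))
    return vecs

def circuits(V, n):
    """Minimal nonempty supports of vectors in the subspace V."""
    supports = {frozenset(i for i in range(n) if v[i]) for v in V} - {frozenset()}
    return {S for S in supports if not any(T < S for T in supports)}

def bases(circs, n):
    """Bases of the matroid with circuit set circs: maximal circuit-free sets."""
    indep = [set(I) for r in range(n + 1) for I in combinations(range(n), r)
             if not any(C <= set(I) for C in circs)]
    rank = max(len(I) for I in indep)
    return {frozenset(I) for I in indep if len(I) == rank}

n = 4
gens = [(1, 1, 1, 0), (0, 0, 1, 1)]          # generators of V
V = span(gens, n)
# orthogonal complement: all vectors orthogonal to every generator
Vperp = {v for v in product([0, 1], repeat=n)
         if all(sum(v[i] * g[i] for i in range(n)) % 2 == 0 for g in gens)}

B  = bases(circuits(V, n), n)
Bp = bases(circuits(Vperp, n), n)
# duality: bases of M(V^perp) are complements of bases of M(V)
assert Bp == {frozenset(range(n)) - b for b in B}
print(sorted(sorted(C) for C in circuits(V, n)))
```

Here $V$ is spanned by $(1,1,1,0)$ and $(0,0,1,1)$, and the computed circuits of $M(V)$ are $\{2,3\}$, $\{0,1,2\}$ and $\{0,1,3\}$.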

The advantage of this perspective is that there is a natural way to glue together such subspaces, which we can use to build a self-dual gluing operation for represented matroids. Suppose that we have two sets $E_1$ and $E_2$ and subspaces $V_1$ and $V_2$ of $k^{E_1}$ and $k^{E_2}$, respectively. We now want to glue these together to give a subspace $V_1 \oplus V_2$ of $k^{E_1 \triangle E_2}$. As with the 2-sum, we throw away the ‘gluing edges’ in the overlap $E_1 \cap E_2$. The idea is to take pairs of vectors which match on the gluing edges, patch them together and throw away the part supported on the gluing edges. More precisely, we set $V_1 \oplus V_2 := \{v \mathord{\upharpoonright}_{E_1 \triangle E_2} | v \in k^{E_1 \cup E_2}, v \mathord{\upharpoonright}_{E_i} \in V_i\}$.

Like the 2-sum, this definition is self-dual in the sense that $(M(V_1 \oplus V_2))^* = M(V_1^{\bot} \oplus V_2^{\bot})$. It is also associative, in that if $V_1$, $V_2$ and $V_3$ are subspaces of $k^{E_1}$, $k^{E_2}$ and $k^{E_3}$ respectively and the sets $E_1$ and $E_3$ are disjoint then $(V_1 \oplus V_2) \oplus V_3 = V_1 \oplus (V_2 \oplus V_3)$. So if we have a finite collection of such representations on ground sets $E_t$ arranged in a finite tree, such that the ground sets only overlap if they are adjacent in the tree, then we have an unambiguous sum of all these subspaces.
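As a sanity check on the definition (again my own toy computation, with names of my choosing), we can glue the $\mathbb{F}_2$ cycle spaces of two triangles along a single gluing edge; the result should be the cycle space of their 2-sum, a cycle of length four:

```python
from itertools import product

# Ground sets: E1 = {a, b, g}, E2 = {g, c, d}; g is the single gluing edge.
E1, E2 = ["a", "b", "g"], ["g", "c", "d"]
E = sorted(set(E1) | set(E2))                  # E1 ∪ E2
D = sorted(set(E1) ^ set(E2))                  # symmetric difference E1 △ E2

# V_i = GF(2) cycle space of a triangle on E_i: the zero and all-ones vectors.
V1 = {(0, 0, 0), (1, 1, 1)}
V2 = {(0, 0, 0), (1, 1, 1)}

def restrict(v, ground, part):
    return tuple(v[ground.index(e)] for e in part)

# V1 ⊕ V2: restrict to E1 △ E2 those vectors on E1 ∪ E2 whose
# restrictions to E1 and E2 lie in V1 and V2 respectively.
glued = {restrict(v, E, D)
         for v in product([0, 1], repeat=len(E))
         if restrict(v, E, E1) in V1 and restrict(v, E, E2) in V2}

# Expect the cycle space of a 4-cycle on {a, b, c, d}, whose unique
# circuit is the whole edge set.
assert glued == {(0, 0, 0, 0), (1, 1, 1, 1)}
print(D)
```

The only nonzero glued vector is the all-ones vector on $\{a,b,c,d\}$, so the unique circuit of the glued matroid is the 4-cycle, exactly as the 2-sum of two triangles predicts.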

Just as for 2-sums, we can also glue together infinite trees of represented matroids in this way, as long as we are careful to specify which ends of the tree the circuits are allowed to use. Formally, we do this as follows. Suppose that we have a tree $T$, a family of sets $E_t$ indexed by the nodes of $T$, such that $E_s \cap E_t$ is only nonempty if $s = t$ or $s$ and $t$ are adjacent in $T$, a family of subspaces $V_t \subseteq k^{E_t}$ and a Borel subset $\Psi$ of the set $\Omega(T)$ of ends of $T$. Then we can build a subspace of $k^{\bigtriangleup_{t}E_t}$ by setting

$\bigoplus^{\Psi}_t V_t := \{v \mathord{\upharpoonright}_{\bigtriangleup_t E_t} | v \in k^{\bigcup_t E_t}, v \mathord{\upharpoonright}_{E_t} \in V_t \text{ and } \Omega(T) \cap \overline{\{t | v \upharpoonright_{E_t} \neq 0\}} \subseteq \Psi\}$

and $M(\bigoplus^{\Psi}_t V_t)$ will be an infinite matroid.

What are these infinite sums good for? Well, if we have a representable matroid and we have a $k$-separation of that matroid then we can split it up as a sum of two matroids in this way such that there are fewer than $k$ gluing edges. We can use this to break problems about bigger matroids down into problems about smaller matroids. Similarly, if we have a nested collection of finite separations in an infinite matroid, cutting the ground set up into a tree of finite parts, then we can cut the matroid up into a sum of finite matroids and analyse its properties in terms of their properties. This kind of chopping up and reconstruction can also be helpful to show that the infinite object is a matroid in the first place.

Let’s see how that might work for a more concrete problem. Suppose that we have a locally finite graph $G$. Then we can build a topological space $|G|$ from it by formally adding its ends as new points at infinity (see for example [D10]). These spaces and their subspaces are key elements of topological infinite graph theory, which was where this series of posts started.

At first, it was hoped that these subspaces would have the nice property that if they are connected then they are path-connected. But Agelos Georgakopoulos eventually found a counterexample to this claim [G07]. However, the set of ends used by the counterexample he constructed was topologically horrible, so we might still hope that if we have a connected subspace $X$ of $|G|$ such that the set $\Psi$ of ends contained in $X$ is topologically nice, then $X$ will be path-connected. Well, if we take ‘topologically nice’ to mean ‘Borel’, then the ideas above let us show that this is true.

We can do this by considering the matroid whose circuits are the edge sets of those topological circles in $|G|$ which only go through ends in $\Psi$. More precisely, we need to show that this really does give the circuit set of a matroid $M(G, \Psi)$. If we can do that, then we can argue as follows:

Let $P$ be the set of edges which, together with both endpoints, are completely included in $X$. Let $u$ and $v$ be vertices in $X$. Build a new graph $G + e$ with an extra edge $e$ joining $u$ to $v$. Then since $X$ is connected, there can be no cocircuit of $M(G + e, \Psi)$ which contains $e$ and is disjoint from $P$ (such a cocircuit would induce a topological separation of $X$ with $u$ and $v$ on opposite sides). So $e$ is not a coloop in the restriction of $M(G + e, \Psi)$ to $P \cup \{e\}$. Hence there is a circuit through $e$ in that matroid, and removing $e$ from the corresponding topological circle gives an arc from $u$ to $v$ through $X$ in $|G|$. So any two vertices are in the same path-component of $X$. Similar tricks show the same for ends and interior points of edges.

What this argument shows is that the connection between connectivity and path-connectivity is encoded in the statement that $M(G, \Psi)$ is a matroid. To prove that statement, we can build $M(G, \Psi)$ as the sum of an infinite tree of graphic matroids in the sense described above. First of all, since $G$ is locally finite, we can cut it up into an infinite tree of finite connected parts using disjoint finite separators. Then we define the torso corresponding to a part to consist of that part together with new edges forming complete graphs on each of the separators. This gives us an infinite tree of finite graphs, and the ends of the tree correspond precisely to the ends of $G$. Now we take the graphic matroids corresponding to those graphs, take the standard binary representations of those matroids, and glue them together along this tree, taking the ends in $\Psi$ to be allowed for circuits. And presto! We have built the matroid $M(G, \Psi)$.

The details of this argument are explained in [BC15].

I can’t resist mentioning that the matroids we’ve just built in a bottom-up way also have a top-down characterisation. Consider the class of matroids whose ground set is the set of edges of $G$, and in which every circuit is a topological circuit of $G$ and every cocircuit is a bond of $G$. Let’s call such matroids $G$-matroids.

For some graphs $G$, we can find $G$-matroids which are not of the form $M(G, \Psi)$. For example, in the following graph $Q$ we can define an equivalence relation on the (edge sets of) double rays by saying that two double rays are equivalent if they have finite symmetric difference. Then the set of finite circuits together with any collection of double rays closed under this equivalence relation gives the set of circuits of an infinite matroid.

The matroids I just described are a bit pathological, and they hover on the boundary between being binary and non-binary. None of them has a $U_{2,4}$-minor. They also still have the familiar property that any symmetric difference of two circuits is a disjoint union of circuits. But symmetric differences of three circuits might not be disjoint unions of circuits!

For example, there is such a matroid in which the first three sets depicted below are circuits, but the fourth, their symmetric difference, is not.

The problem is that these matroids are wild. This means that there are circuits and cocircuits which intersect in infinitely many elements. We only have a good theory of representability for tame matroids, those in which every intersection of a circuit with a cocircuit is finite. I hope to discuss this in more detail in a future post.

If we only consider tame $G$-matroids, then this proliferation of pathological matroids disappears. For the graph $Q$, for example, there are only two tame $Q$-matroids, namely the finite cycle matroid and the topological cycle matroid. Remarkably, for any locally finite graph $G$ it turns out that the tame $G$-matroids are precisely the matroids of the form $M(G, \Psi)$. So our top-down and bottom-up characterisations meet, and any matroid associated to a graph can be built up from finite parts in a way which mirrors the structure of that graph. The reasons for this correspondence go far beyond the scope of this post, but they can for example be found in [BC16].

Now that we’ve seen the standard ways to build infinite matroids and their relationship to infinite graphs, in the next post we’ll examine the most important open problem about them: the Infinite Matroid Intersection Conjecture.

[BC15] N. Bowler and J. Carmesin, Infinite Matroids and Determinacy of Games, preprint here.
[BC16] N. Bowler and J. Carmesin, The ubiquity of Psi-matroids, preprint here.
[D10] R. Diestel, Locally finite graphs with ends: a topological approach I–III, Discrete Math 311–312 (2010–11).
[G07] A. Georgakopoulos. Connected but not path-connected subspaces of infinite graphs, Combinatorica, 27(6) 683–698 (2007).

# The Lights Out Game

In this short post, I will discuss a cute graph theory problem called the Lights Out Game. The set-up is as follows.  We are given a graph $G=(V,E)$, where we consider each vertex as a light.  At the beginning of the game, some subset of the vertices are ON, and the rest of the vertices are OFF.  We are allowed to press any vertex $v$, which has the effect of switching the state of $v$ and all of the neighbours of $v$.  The goal is to determine if it is possible to press some sequence of vertices to turn off all the lights.

The version of this game where $G$ is the $5 \times 5$ grid was produced in the real world by Tiger Electronics, and has a bit of a cult following.  For example, there is a dedicated wikipedia page and this site (from which I borrowed the image below) even has the original user manuals.

The goal of this post is to prove the following theorem.

Theorem. Let $G=(V,E)$ be a graph for which all the lights are initially ON.  Then it is always possible to press a sequence of vertices to switch all the lights OFF.

(Note that if not all the lights are ON initially, it will not always be possible to switch all the lights OFF.  For example, consider just a single edge where only one light is ON initially.)

Proof. Evidently, there is no point in pressing a vertex more than once, and the sequence in which we press vertices does not matter.  Thus, we are searching for a subset $S$ of vertices such that $|S \cap N[v]|$ is odd for all $v \in V$, where $N[v]$ is the closed neighbourhood of $v$ (the neighbours of $v$ together with $v$ itself).  We can encode the existence of such a set $S$ by doing linear algebra over the binary field $\mathbb{F}_2$.

Let $A$ be the $V \times V$ adjacency matrix of $G$, and let $B=A+I$. Note that the column of $B$ corresponding to a vertex $v$ is the characteristic vector of the closed neighbourhood of $v$. Thus, the required set $S$ exists if and only if the all ones vector $\mathbb{1}$ is in the column space of $B$.  Since $B$ is symmetric, this is true if and only if $\mathbb{1}$ is in the row space of $B$.  Let $B^+$ be the matrix obtained from $B$ by adjoining $\mathbb{1}$ as an additional row. Thus, we can turn all the lights OFF if and only if $B$ and $B^+$ have the same rank.

Since this is a matroid blog, let $M(B)$ and $M(B^+)$ be the column matroids of $B$ and $B^+$ (over $\mathbb{F}_2$). We will prove the stronger result that $M(B)$ and $M(B^+)$ are actually the same matroid. Every circuit of $M(B^+)$ is clearly a dependent set in $M(B)$.  On the other hand, let $C \subseteq V$ be a circuit of $M(B)$.  Then $\sum_{v \in C} \chi(N[v])=\mathbb{0}$, where $\chi(N[v])$ is the characteristic vector of the closed neighbourhood of $v$. Therefore, in the subgraph of $G$ induced by the vertices in $C$, each vertex has odd degree.  By the Handshaking Lemma, it follows that $|C|$ is even.  But then the extra coordinate that the adjoined row $\mathbb{1}$ contributes to the sum of the corresponding columns of $B^+$ is $|C| \bmod 2 = 0$, so these columns also sum to zero, and $C$ is dependent in $M(B^+)$.
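The proof translates directly into an algorithm: build $B = A + I$ over $\mathbb{F}_2$ and solve $Bx = \mathbb{1}$ by Gaussian elimination. The following sketch (my own illustration; the function names are mine) does this for the $5 \times 5$ grid of the Tiger Electronics game, and verifies that pressing the vertices in the resulting set turns every light off:

```python
def solve_gf2(M, b):
    """Solve M x = b over GF(2) by Gauss-Jordan elimination; M is a list of rows."""
    rows = [row[:] + [bi] for row, bi in zip(M, b)]   # augmented matrix
    n = len(M[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [(x + y) % 2 for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in rows[r:]):              # inconsistent system
        return None
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = rows[i][-1]
    return x

# B = A + I for the 5x5 grid graph; vertex v = N*row + col.
N = 5
def nbhd(v):
    r, c = divmod(v, N)
    closed = [(r, c)] + [(r + dr, c + dc) for dr, dc in ((1,0),(-1,0),(0,1),(0,-1))]
    return {N * x + y for x, y in closed if 0 <= x < N and 0 <= y < N}

B = [[1 if u in nbhd(v) else 0 for u in range(N * N)] for v in range(N * N)]
S = solve_gf2(B, [1] * (N * N))
assert S is not None                                   # all-ON is always solvable
# pressing every v with S[v] = 1 toggles each light an odd number of times
assert all(sum(S[u] for u in nbhd(v)) % 2 == 1 for v in range(N * N))
```

The final assertion is exactly the condition from the proof: $|S \cap N[v]|$ is odd for every vertex $v$, so every light flips from ON to OFF.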

Reference

Anderson, M., & Feil, T. (1998). Turning lights out with linear algebra. Mathematics Magazine, 71(4), 300-303.

# Four proofs of a theorem by Vámos

Let $\chi(M)$ be the characteristic set of the matroid $M$; that is, $\chi(M)=\{\operatorname{char}(\mathbb{F})\colon M\ \text{is representable over the field}\ \mathbb{F}\},$ where $\operatorname{char}(\mathbb{F})$ is the characteristic of $\mathbb{F}$. It is well known that $0\in\chi(M)$ if and only if $\chi(M)$ contains infinitely many primes. One direction is due to Rado [4]. The following theorem contains the other direction.

Theorem 1. If $\chi(M)$ contains infinitely many primes, then it contains zero.
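Before looking at the proofs, it may help to have two standard examples of characteristic sets in mind (these are well-known facts, not taken from the papers cited here). The Fano matroid $F_{7}$ is representable over a field $\mathbb{F}$ if and only if $\operatorname{char}(\mathbb{F})=2$, while the non-Fano matroid $F_{7}^{-}$ is representable if and only if $\operatorname{char}(\mathbb{F})\neq 2$. Hence $\chi(F_{7})=\{2\}$ and $\chi(F_{7}^{-})=\{0\}\cup\{p\colon p\ \text{an odd prime}\}$. Both are consistent with Theorem 1: $\chi(F_{7}^{-})$ contains infinitely many primes and indeed contains zero, while $\chi(F_{7})$ omits zero but contains only one prime.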

One unusual aspect of Theorem 1 is that there are four proofs of it in the literature. All of them are attractive, and they call upon different (but related) sets of tools: Zorn’s lemma, compactness in first-order logic, ultraproducts, and Gröbner bases. In this post I will review all four proofs.

To start with, let $r$ be $r(M)$, and assume that $E(M)=\{1,\ldots, n\}$. Let $X$ be the set of variables $\{x_{i,j}\}_{1\leq i \leq r,\ 1\leq j \leq n}$, and let $R$ be the polynomial ring $\mathbb{Z}[X]$. We will let $A$ be the $r\times n$ matrix where the entry in row $i$ and column $j$ is $x_{i,j}$. Each $r$-element subset of $E(M)$ corresponds to an $r\times r$ submatrix of $A$, and thereby to a homogeneous polynomial in $R$, namely the determinant of that submatrix. Let $\mathcal{B}$ be the set of polynomials that correspond to bases of $M$, and let $\mathcal{N}$ contain the polynomials that correspond to dependent $r$-element subsets in $M$. Thus a representation of $M$ is a ring homomorphism from $R$ to a field that sends every polynomial in $\mathcal{N}$, but no polynomial in $\mathcal{B}$, to zero.
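To see these polynomials in action, here is a small check of my own (not from the articles discussed here), using the standard $0/1$ matrix whose columns are the seven nonzero vectors of $\mathbb{F}_2^{3}$. For the Fano matroid, the polynomials in $\mathcal{N}$ are the determinants of the seven dependent triples (the lines), and evaluating them at this matrix gives zero modulo $2$ but not always modulo $3$, so this particular evaluation represents the Fano matroid only in characteristic two:

```python
from itertools import combinations

# Columns of the standard 0/1 matrix for the Fano matroid F7:
# the seven nonzero vectors of GF(2)^3.
cols = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)]

def det3(a, b, c):
    """Integer determinant of the 3x3 matrix with columns a, b, c."""
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          - b[0] * (a[1]*c[2] - a[2]*c[1])
          + c[0] * (a[1]*b[2] - a[2]*b[1]))

# Determinants of all 35 triples of columns; the seven lines of F7
# are the dependent triples, i.e. the polynomials in N.
dets = {T: det3(*(cols[i] for i in T)) for T in combinations(range(7), 3)}
lines = [(0,1,3), (0,2,4), (1,2,5), (0,5,6), (1,4,6), (2,3,6), (3,4,5)]

assert all(dets[L] % 2 == 0 for L in lines)                   # N vanishes mod 2
assert all(dets[T] % 2 != 0 for T in dets if T not in lines)  # B survives mod 2
assert any(dets[L] % 3 != 0 for L in lines)                   # but N does not vanish mod 3
```

In fact no evaluation of the variables over a field of odd or zero characteristic works for the Fano matroid, which is representable only in characteristic two.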

The first proof of Theorem 1 is due to Peter Vámos. He published two articles in the proceedings entitled Möbius Algebras, produced by the University of Waterloo in 1971. The proof of Theorem 1 has sometimes been cited as coming from [6], but in fact the correct attribution is to [5]. His proof hinges on the following lemma. Recall that a proper ideal, $P$, is prime if $ab\in P$ implies that $a\in P$ or $b\in P$.

Lemma 1. Let $I$ be an ideal in a commutative ring, $S$, and let $T$ be a multiplicatively closed set of $S$ that is disjoint from $I$. There exists a prime ideal, $P$, such that $I\subseteq P$, and $P\cap T=\emptyset$.

Proof. We consider the partial order of ideals in $S$ that contain $I$ and that are disjoint from $T$. Assume that $J_{1}\subset J_{2}\subset J_{3}\subset \cdots$ is a chain in the order. It is easy to confirm that $J_{1}\cup J_{2}\cup J_{3}\cup \cdots$ is itself an ideal in the order. This means that every such chain of ideals has an upper bound in the order, so we can apply Zorn’s Lemma. Therefore the partial order has a maximal element, $P$. It remains to show that $P$ is a prime ideal. Assume otherwise, so that the product $ab$ belongs to $P$, even though $a\notin P$ and $b\notin P$. The maximality of $P$ means that the ideal generated by $P$ and $a$ is not disjoint from $T$, so there is some element, $ca+p$ in $T\cap \langle P, a\rangle$. (Here $p\in P$, and $c$ is an element of $S$.) Similarly, there are elements $q\in P$ and $d\in S$ such that $db+q$ belongs to $T$. Since $T$ is multiplicatively closed, we see that $(ca+p)(db+q)=(cd)(ab)+(ca)q+(db)p+pq$ is in $T$. But this element is also in $P$, since $ab\in P$. Now we have a contradiction to the fact that $P$ and $T$ are disjoint. $\square$

Proof 1. Let $p_{1},p_{2},p_{3},\ldots$ be distinct primes such that $M$ is representable over a field of characteristic $p_{i}$ for each $i$. Let $\phi_{i}$ be a homomorphism from $R$ to a field of characteristic $p_{i}$ that takes all the polynomials in $\mathcal{N}$, and no polynomial in $\mathcal{B}$, to zero. Then the kernel, $\ker(\phi_{i})$, of $\phi_{i}$ is a prime ideal of $R$ containing $\mathcal{N}$, but disjoint from $\mathcal{B}$. Moreover, the integer $p_{i}$ is contained in $\ker(\phi_{i})$. In fact, the only integers in $\ker(\phi_{i})$ are the multiples of $p_{i}$, since if $\ker(\phi_{i})$ contains a non-multiple of $p_{i}$, then it contains the greatest common divisor of that non-multiple and $p_{i}$ (namely $1$). This would imply that $\ker(\phi_{i})$ is equal to $R$, which is impossible.

Assume that there is an element in both the ideal $\langle \mathcal{N}\rangle$ and the multiplicatively closed set generated by $\mathcal{B}\cup\mathbb{Z}$. Such an element is of the form $kf$, where $k$ is an integer, and $f$ is a product of polynomials from $\mathcal{B}$. Since $kf$ is in $\langle \mathcal{N}\rangle$, it is in each $\ker(\phi_{i})$, even though $f$ is not. Thus $k$ is in each ideal $\ker(\phi_{i})$. Thus $k$ is a multiple of each of the distinct primes $p_{1},p_{2},p_{3},\ldots$, and we have a contradiction. Now, by Lemma $1$, there is a prime ideal, $P$, of $R$ which contains $\langle \mathcal{N}\rangle$, and which avoids $\mathcal{B}\cup\mathbb{Z}$. Because $P$ is prime, the quotient ring, $R/P$, is an integral domain. Let $F$ be the field of fractions of $R/P$. The natural homomorphism from $R$ to $F$ gives us a representation of $M$. As $P$ does not contain any integer, it follows that $R/P$ has characteristic zero, and we are done. $\square$

The next proof of Theorem 1 makes use of the Compactness Theorem in first-order logic. It is due to Samuel Wagstaff and appeared two years after Vámos’s proof, in 1973 [7]. Wagstaff does not supply many details — perhaps the account in his PhD thesis (upon which the article is based) is more explicit. But I believe that my version below captures his key ideas.

Proof 2. We construct a first-order language of vector spaces. We have a countable supply of variables, $x_{1},x_{2},x_{3},\ldots$, and unary predicates $\operatorname{Scal}$ and $\operatorname{Vec}$. In our interpretation, these predicates will indicate that a variable stands for a scalar or a vector. We also have constants, $0_{S}$, $1_{S}$, and $0_{V}$, which are interpreted as the identities of the vector space. The functions $+_{S}$, $\times_{S}$, $+_{V}$, and $\times_{V}$, are interpreted as the binary operations of the field of scalars, as addition of vectors, and as scalar multiplication. The only binary relation we need is equality. Now, in this language, we can express the axioms for fields and for vector spaces. To illustrate, the compatibility of scalar and vector multiplication would be expressed with the sentence \begin{multline*}\forall x_{1}\forall x_{2}\forall x_{3} (\operatorname{Scal}(x_{1})\land \operatorname{Scal}(x_{2})\land \operatorname{Vec}(x_{3}) \to\\ ((x_{1}\times_{S}x_{2})\times_{V}x_{3}=x_{1}\times_{V}(x_{2}\times_{V}x_{3}))).\end{multline*} We can assert the representability of $M$ by requiring that there exist $x_{1},x_{2},\ldots, x_{n}$ such that $\operatorname{Vec}(x_{i})$ holds for each $i$. For each subset $X\subseteq E(M)=\{1,\ldots, n\}$, we include a sentence asserting that the corresponding subset of $\{x_{1},x_{2},\ldots, x_{n}\}$ is linearly independent if $X$ is independent in $M$, and otherwise asserting that it is linearly dependent. Let us illustrate how this can be accomplished by writing the formula which asserts that $\{x_{1},x_{2},x_{3}\}$ is linearly independent. Let $y_{1}, y_{2}, y_{3}$ be variables that are not used elsewhere in our sentence. 
The formula we need is as follows: \begin{multline*}\forall y_{1}\forall y_{2}\forall y_{3} (\operatorname{Scal}(y_{1})\land \operatorname{Scal}(y_{2})\land \operatorname{Scal}(y_{3}) \land (y_{1}\ne 0_{S} \lor y_{2}\ne 0_{S} \lor y_{3}\ne 0_{S})\to\\ (y_{1}\times_{V}x_{1}) +_{V} (y_{2}\times_{V} x_{2}) +_{V} (y_{3}\times_{V}x_{3})\ne 0_{V}).\end{multline*} We simply perform this trick for each subset of $E(M)$.

Now we have a finite collection, $\mathcal{T}$, of sentences which can be satisfied if and only if $M$ is representable over a field. For each prime, $p$, let $S_{p}$ be the sentence asserting that the sum of $p$ copies of $1_{S}$ is equal to $0_{S}$. Thus $M$ is representable over a field of characteristic $p$ if and only if $\mathcal{T}\cup\{S_{p}\}$ can be satisfied. Assume that $M$ is not representable over a field with characteristic zero. Then the collection $\mathcal{T}\cup\{\neg S_{p}\colon p\ \text{is a prime}\}$ is inconsistent: it cannot be satisfied by any model. The Compactness Theorem tells us that there is a finite subcollection which is inconsistent. Therefore we can assume that $\mathcal{T}\cup\{\neg S_{p} \colon p\ \text{is a prime and}\ p\leq N\}$ has no model, for some integer $N$. This implies that there can be no representation of $M$ over a field with characteristic greater than $N$, so we have proved the contrapositive of Theorem 1. $\square$

The next proof appeared in 1989, and comes courtesy of Imre Leader [3]. I think it might be my favourite out of the four. It uses the notion of an ultraproduct.

Proof 3. Let $F_{1},F_{2},F_{3},\ldots$ be an infinite sequence of fields such that $M$ is representable over each $F_{i}$ and $\operatorname{char}(F_{i+1})>\operatorname{char}(F_{i})$. Let $\mathcal{U}$ be a non-principal ultrafilter of the positive integers. This means that $\mathcal{U}$ is a collection of subsets of positive integers, and $\mathcal{U}$ satisfies: (1) any superset of a member of $\mathcal{U}$ is also in $\mathcal{U}$; (2) if $U,V\in\mathcal{U}$, then $U\cap V\in \mathcal{U}$; and (3) if $U$ is any subset of $\mathbb{Z}^{+}$, then exactly one of $U$ or $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. Non-principal ultrafilters cannot be described explicitly, but we know that they exist by an application of Zorn’s Lemma. If $U$ is a finite subset of $\mathbb{Z}^{+}$, then $\mathbb{Z}^{+}-U$ is in $\mathcal{U}$. We consider sequences $(x_{1},x_{2},x_{3},\ldots)$, where each $x_{i}$ is an element of $F_{i}$. We declare $(x_{1},x_{2},x_{3},\ldots)$ and $(y_{1},y_{2},y_{3},\ldots)$ to be equivalent if $\{i\in\mathbb{Z}^{+}\colon x_{i}=y_{i}\}$ is a member of $\mathcal{U}$. We perform componentwise addition and multiplication on sequences. It is not too difficult to see that these operations respect our notion of equivalence, and that the equivalence classes of sequences form a field, $F$. Moreover, the characteristic of $F$ is zero: for any prime $p$, at most one of the fields $F_{i}$ has characteristic $p$, so the set of indices at which the sum of $p$ copies of $1$ vanishes is finite and hence not in $\mathcal{U}$. We consider a sequence of representations of $M$ over the fields $F_{1},F_{2},F_{3},\ldots$. This leads to a representation of $M$ over $F$ in the obvious way: we simply “stack” the infinite sequence of matrices, and produce a matrix whose entries are infinite sequences of numbers. $\square$

Finally, we come full circle! In 2003, Vámos (along with Rosemary Baines) published another proof of Theorem 1 [2]. The proof is actually an easy consequence of the algorithm that Stefan mentioned in a previous post, and the main ingredient is the use of Gröbner bases. I will tweak the proof slightly, for the sake of simplicity.

Proof 4. Let $f$ be the product of all polynomials in $\mathcal{B}$. We introduce the new, “dummy”, variable, $y$. From now on, we will operate in the ring $\mathbb{Z}[X\cup\{y\}]$. Note that an evaluation of the variables in $X\cup\{y\}$ will make $yf-1$ zero if and only if it makes each polynomial in $\mathcal{B}$ non-zero. Let $I$ be the ideal of $\mathbb{Z}[X\cup\{y\}]$ that is generated by $\mathcal{N}\cup\{yf-1\}$. Thus a representation of $M$ is really a ring homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field that takes every polynomial in $I$ to zero.

Let $G$ be a strong Gröbner basis of $I$ (see [1]). Thus $I$ is the ideal generated by $G$. In addition, we have an ordering on products of variables (the lexicographic order will suffice). The leading term of a polynomial is the highest non-zero term under this ordering. For $G$ to be a strong Gröbner basis, we require that if $h$ is in $I$, then there is a polynomial $g\in G$ such that the leading term of $g$ divides the leading term of $h$. Assume that $G$ contains an integer, $N$. If $\phi$ is a homomorphism from $\mathbb{Z}[X\cup\{y\}]$ to a field, $F$, such that $\ker(\phi)$ contains $I$, then $N$ is taken to zero by $\phi$. This means that the characteristic of $F$ divides $N$, so $M$ is representable over only finitely many prime characteristics. Therefore we will now assume that $G$ contains no integer.

Let $I_{\mathbb{Q}}$ be the ideal of $\mathbb{Q}[X\cup\{y\}]$ generated by $\mathcal{N}\cup\{yf-1\}$. Assume that $1\in I_{\mathbb{Q}}$. Then $1$ can be expressed as a polynomial combination of the polynomials in $\mathcal{N}\cup\{yf-1\}$. By multiplying this combination through by the denominators of the coefficients, we see that $I$ contains an integer, $N$. Thus $G$ contains a polynomial whose leading term divides the leading term of $N$, which is $N$ itself. But in the lexicographic ordering, a constant polynomial is below all non-constant polynomials, from which we deduce that $G$ contains a constant polynomial, contradicting our conclusion from the previous paragraph. Therefore $1\notin I_{\mathbb{Q}}$, which means that $I_{\mathbb{Q}}$ is contained in a maximal ideal, $P$. Note that $P$ contains no constant polynomial. Now $\mathbb{Q}[X\cup\{y\}]/P$ is a field, and the natural homomorphism from $\mathbb{Q}[X\cup\{y\}]$ to this field leads to a representation of $M$ over a field of characteristic zero. $\square$

References.

[1] W. W. Adams and P. Loustaunau. An introduction to Gröbner bases. Graduate Studies in Mathematics, 3. American Mathematical Society, Providence, RI, 1994.

[2] R. Baines and P. Vámos. An algorithm to compute the set of characteristics of a system of polynomial equations over the integers. J. Symbolic Comput. 35 (2003), no. 3, 269–279.

[3] I. Leader. A short proof of a theorem of Vámos on matroid representations. Discrete Math. 75 (1989), no. 1-3, 315–317.

[4] R. Rado. Note on independence functions. Proc. London Math. Soc. (3) 7 (1957), 300–320.

[5] P. Vámos. A necessary and sufficient condition for a matroid to be linear. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 162–169. Univ. Waterloo, Waterloo, Ont., 1971.

[6] P. Vámos. Linearity of matroids over division rings. Notes by G. Roulet. Möbius algebras (Proc. Conf., Univ. Waterloo, Waterloo, Ont., 1971), pp. 170–174. Univ. Waterloo, Waterloo, Ont., 1971.

[7] S. S. Wagstaff Jr. Infinite matroids. Trans. Amer. Math. Soc. 175 (1973), 141–153.