Infinite matroids

Guest post by Reinhard Diestel.

This is the first of what might become an irregular series of posts on infinite matroids, written on the invitation of Irene Pivotto for the Union. This first piece tells the story of how finite matroids and infinite graphs conspired to get us to think about how to axiomatize infinite matroids. How we ended up solving an old problem we had not even known about – and how it came to be that infinite matroid theory is suddenly exploding. It is meant to be light reading, with some deeper hints to ponder. Future posts, to be written by Nathan Bowler, are likely to be more mathematical and less anecdotal, as they follow up the various open ends of the emerging theory.

Back in 1966, Rado [6] asked whether there was a way to axiomatize matroids in such a way that infinite matroids, too, would have duals. A big challenge, given what was known. But we knew none of this.

What we did know was graphs. In particular, infinite graphs [5]. And these were luring us – towards matroids.

It had all begun with a dream, many years earlier. As an undergraduate studying with Halin, I had heard of the ends of an infinite graph: limit points at infinity such as the three red dots in this graph:

[Figure: ThreeCycleRedDots]

There is a combinatorial definition of ends that divorces the notion of convergence appealed to in this example from the topology of the plane: an end is taken to be an equivalence class of rays, that is, of 1-way infinite paths, where two rays are considered equivalent if no finite set of vertices separates them. As is easy to check, the graph above has three such classes, each of which corresponds to one of the red dots, in that its rays converge to that dot in the plane.

Now here was my dream, inspired by this picture. There are a number of theorems about cycles in finite graphs that do not readily generalize to infinite graphs; Hamilton cycles are an obvious example. Might 'infinite cycles' such as the perimeter circle of the above graph come to the rescue and give us infinite analogues of finite cycle theorems? In our example, the perimeter circle would be an infinite Hamilton cycle. And the formal definition of an infinite cycle would be: a cyclic sequence alternating between double rays and ends, with 'is an element of' as incidence. What fun!

But when I tried this in earnest several years later, with my student Daniela Kühn, we soon got bogged down. It turned out that infinite cycles such as the above did not always work: we had to iterate to allow cycles like

[Figure: iterated]

Then for the same reasons we had to iterate again, iterate transfinitely – and ultimately failed. The end of the story [4] was that topology, of all things, came to the rescue.
Even for graphs with abstract ends, not necessarily planar, there is a natural topology in which rays converge to 'their' ends. And it was the edge sets of circles in this topology that turned out to be the right notion of infinite cycle in locally finite infinite graphs, in the sense of allowing us to generalize those finite graph theorems. But these circles could be wild: containing infinitely many double rays arranged like the rationals, and continuum many ends! The perimeter circle of the planar graph below is an example:

[Figure: WildCircle]

And now matroids began to call. One of the finite graph theorems that wouldn’t generalize to infinite graphs naively was the tree packing theorem of Nash-Williams and Tutte. But we were able to prove an infinite tree packing theorem for the naturally adapted topological notion of tree: a set of edges that connects all the vertices but whose topological closure does not contain a circle. Shouldn’t there be a notion of infinite matroid hiding behind this, one in which these topological trees would be the bases, and edge sets of topological circles the circuits?

The usual matroid axioms, together with

(I4) An infinite set is independent as soon as all its finite subsets are independent

as known from vector spaces, do not cover such matroids: a circuit has all its proper subsets, and in particular all its finite subsets, independent, so under (I4) an infinite circuit would itself have to be independent. Circuits can therefore not be infinite. So we looked for new axioms: axioms for infinite matroids that would default to the known finite ones for finite structures, but which would also allow for infinite circuits such as those topological ones.

It was fun trying – slowly, experimenting as much as thinking, and unaware that others had tried before us. Eventually, we came up with five sets of infinite matroid axioms [2] – in terms of independent sets, bases, circuits, closure and rank – that worked in the way I had hoped: they made the edge sets of topological circles of a locally finite infinite graph into the circuits of a matroid, and our topological spanning trees into the bases of that matroid. (In due course, even a base packing theorem emerged that implied our topological tree packing theorem, as hoped for [1]. But that was much later.) Thus, for an infinite graph there were now two different cycle matroids: one whose circuits were the finite cycles, and another whose circuits were the edge sets of topological circles, finite or infinite. And each of these had a dual: the cocircuits of the first were all the bonds (finite or infinite), those of the latter precisely the finite bonds [3].

In particular: it turned out, with hindsight, that those 'infinite cycles' which had given us such pain to find, and which we were able to nail down only with the help of topology, had a combinatorial description after all: as the minimal non-empty sets of edges not meeting any finite bond in just a single edge. It was thus the matroids lurking in the background which – once found – provided that much-sought combinatorial description of infinite cycles that would make theorems about cycles in finite graphs generalize to infinite graphs!

To top it all, it turned out that we had solved that ancient problem of Rado’s. Or more precisely: that we discovered that Higgs and Oxley had solved it long before us. But this is a story for a later blog, in which we shall look at the infinite axioms more explicitly and give more concrete examples of infinite matroids.

To wind up this one, let me say a little about that traditional axiom (I4): why it was a natural one to suggest at the time matroids were invented, but also why it is – from a modern perspective – a somewhat simple-minded one.

Axiom (I4) makes independence into a property of an infinite combinatorial structure that depends only on its finite substructures. Such properties can typically be verified by an application of Zorn's Lemma, or by a so-called compactness argument: one that uses the fact, proved by Tychonoff in the 1930s, that an infinite product of compact spaces (here typically finite ones) is again compact. Compactness was en vogue at the time and emerged in various guises, and so it must have been natural then to specify the infinite independent sets of a matroid as determined by the finite ones as in (I4).

But let's look at this problem again now, put in this way: given the collection of finite subsets of some given set that we wish to call independent, which infinite subsets should we grant the same status?

Axiom (I4) gives a straightforward answer: take them all. All, that is, that have a chance of complying with that other intended axiom, that subsets of independent sets shall be independent. And Rado, one of the champions of compactness and other equivalents of the axiom of choice, saw that this was too crude: in order for infinite matroids to allow for duality as we know it from finite ones, we have to choose as the infinite independent sets a carefully balanced subclass of all those allowed by (I4). To find this subclass, and to describe it by axioms both subtle and simple enough to yield an interesting theory of infinite matroids with duality, was his 1966 challenge.

It now seems that we are finally there. With those newly found axioms, infinite matroid theory can at last be developed in line with its finite counterpart that has raced so successfully ahead. The train is already gaining momentum, inviting you to jump on – but this, too, will be a story for later posts.

[1] E. Aigner-Horev, J.-O. Fröhlich and J. Carmesin, Infinite matroid union, arXiv:1111.0602.

[2] H. Bruhn, R. Diestel, M. Kriesell, R. Pendavingh and P. Wollan, Axioms for infinite matroids, Adv. Math. 239 (2013), 18–46.

[3] H. Bruhn and R. Diestel, Infinite matroids in graphs, in the Infinite Graph Theory special volume of Discrete Math. 311 (2011), 1461–1471.

[4] R. Diestel, The cycle space of an infinite graph, Comb. Probab. Computing 14 (2005), 59–79.

[5] R. Diestel, Locally finite graphs with ends: a topological approach I–III, Discrete Math. 311–312 (2010–11).

[6] R. Rado, Abstract linear dependence, Colloq. Math. 14 (1966), 257–264.

Neil White: 1945 – 2014

Guest post by Gary Gordon

Those with memories of Neil White are invited to share them in the comments below.

Neil White passed away on Aug. 11, 2014. Neil was an inspiring teacher and one of the key contributors to the revitalization of matroid theory in the 1970’s and 80’s. He published on a variety of topics, but most of his work was characterized by the way it combined different areas of mathematics, especially combinatorial geometry and algebra. His co-authored book Oriented Matroids (with A. Bjorner, M. Las Vergnas, B. Sturmfels and G. Ziegler) from 1993, with a 2nd edition published in 1999, is the standard reference for this topic. His book Coxeter Matroids (co-authored with A. Borovik and I. Gelfand) and several papers he wrote on this topic are typical of his breadth: these objects draw on classical results in algebra, geometry and combinatorics.

Neil, a Michigan native, received his undergraduate degree from Michigan State University and his PhD from Harvard. Neil wrote his dissertation under the direction of G.-C. Rota in 1972. Neil’s doctoral thesis examined the bracket ring defined by a matroid, and this algebraic approach influenced much of his future work. Neil was one of several young PhDs in Cambridge in the early 1970’s, and this group (including Ken Baclawski, Tom Brylawski, Curtis Greene, Richard Stanley, Walter Whiteley, and Tom Zaslavsky, among others) contributed significantly to a resurgence of interest in matroids.

Neil is best remembered in the matroid community for editing the seminal trio of volumes in Cambridge's Encyclopedia of Mathematics series: Theory of Matroids (1986), Combinatorial Geometries (1987) and Matroid Applications (1992). The wide range of topics and the clear organization are a testament to Neil's vision. Neil also wrote three chapters for these volumes that remain essential references today.

In addition to his work as an editor and expositor, Neil made significant contributions to invariant theory, the combinatorics of bar-and-body frameworks and oriented matroids. MathSciNet lists nearly 600 citations for his 53 publications, and this list does not include citations to the Encyclopedia of Mathematics series he edited. Reading through his list of published work gives an indication of the depth and breadth of his contributions.

Neil spent his career at the University of Florida, retiring in 2008. He was a dedicated and very inspiring teacher, teaching combinatorics, algebra and a variety of other subjects to both undergraduates and graduates. His courses were challenging, but he gave students the tools to solve difficult problems. He was also in charge of the Putnam team preparation for a time. He was always an excellent problem solver, finishing in the top 20 nationally on the Putnam while he was an undergraduate.

From a personal standpoint, Neil was a good friend and a calm, positive presence. He was my teacher for some 10 courses from 1975 to 1977 at the University of Florida, and his approach to mathematics and problem solving had a strong influence on me. His teaching notes were exceptionally clear; I have used his notes on ordinary and exponential generating functions and the Möbius function in my own classes. I do not believe I would be a mathematician if it were not for Neil.

Neil had wide interests outside of mathematics. He was an early advocate of the analytic approach to baseball, and he played a version of simulation baseball for 30 years. He played bridge, read widely, and volunteered his time for local organizations. He also had a good sense of humor: after starting my first job in the 1980's, he wrote to me, evidently at my request. The "letter" consisted of one word: "Regularly."

Neil White was a first-rate mathematician, a clear expositor, an inspiring teacher and a genuinely decent and humane person. I will miss him.

Neil White’s obituary appears in The Gainesville Sun.

Sudoku Matroids and Graph Colouring Verification

As this is my first post, I feel obliged to say the obligatory Hello World! With that aside, the problem I am going to discuss comes from the popular game Sudoku. Suppose that we are given a filled Sudoku $S$ (which we cannot see), and we wish to verify its correctness. To do so, we are granted access to an oracle which can tell us if $S$ is consistent on any row, column or block. Consistent simply means that each number from 1 to 9 appears exactly once.

Question 1.  How many checks are necessary to verify that $S$ is correct?

Certainly, 27 checks suffice, but can we do better? Here's a short proof that we can.

Proposition 2.  Every Sudoku can be verified with at most 21 checks.

Proof.  It will be convenient to fix some notation. Let

$$K:= \{r_1, \dots, r_9\} \cup \{c_1, \dots, c_9\} \cup \{b_1, \dots, b_9\}$$

be the set of rows, columns and blocks of a Sudoku. By convention, the blocks are labelled as you (an English reader) would read the words along a page. Now suppose that a Sudoku $S$ is consistent on $b_1, b_2, b_3, r_1$ and $r_2$. We claim that this implies $S$ is also consistent on $r_3$.

Suppose not. Then some number, say 1, appears at least twice in $r_3$. However, since $S$ is consistent on $r_1$ and $r_2$, 1 appears exactly once in each of $r_1$ and $r_2$. Thus, 1 appears at least 4 times in $b_1 \cup b_2 \cup b_3$. By the Pigeonhole Principle, 1 appears at least twice in one of $b_1, b_2$ or $b_3$, which is a contradiction. By symmetry, the same argument applies to the other two bands and to the three stacks of blocks, so consistency on all nine blocks and on the first two rows of each band and the first two columns of each stack forces consistency everywhere. Thus, to verify $S$ it suffices to check the 21 members of $K \setminus (\{r_3, r_6, r_9\} \cup \{c_3, c_6, c_9\})$. $\square$
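
To make the scheme concrete, here is a minimal Python sketch of the 21-check verification (my own illustration, not part of the original argument; the helper names are made up). The oracle is modelled as a function that tests whether nine given cells contain each of 1 to 9 exactly once, and it is queried only on the 21 members of $K \setminus (\{r_3, r_6, r_9\} \cup \{c_3, c_6, c_9\})$.

```python
def consistent(cells, grid):
    """Oracle: do the nine given cells contain each of 1..9 exactly once?"""
    return sorted(grid[r][c] for r, c in cells) == list(range(1, 10))

# The 27 lines of K, as lists of (row, column) coordinates (0-indexed).
rows   = [[(r, c) for c in range(9)] for r in range(9)]
cols   = [[(r, c) for r in range(9)] for c in range(9)]
blocks = [[(3 * (b // 3) + r, 3 * (b % 3) + c) for r in range(3) for c in range(3)]
          for b in range(9)]

def verify_21(grid):
    """Check K minus the last row of each band and the last column of each stack."""
    checks = ([rows[i] for i in range(9) if i % 3 != 2]
              + [cols[j] for j in range(9) if j % 3 != 2]
              + blocks)
    assert len(checks) == 21
    return all(consistent(line, grid) for line in checks)
```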

Can we do better than 21 checks? At first this seems rather tricky. One can use information-theoretic considerations to prove lower bounds. For example, at least 9 checks are necessary: each check covers only 9 of the 81 cells, so with fewer checks there would be a cell $x$ whose row, column and block are all unchecked, and an altered entry at $x$ would go undetected. However, it seems difficult to get to a lower bound of 21 in this way. See this MathOverflow question for someone trying to carry this out.

At this point though, our hero Matroid Theory comes to the rescue. That is, somewhat remarkably, there is a matroid structure underlying this problem, which I will now describe. Define a subset $V$ of $K$ to be a verifier if every Sudoku which is consistent on $V$ is consistent everywhere.

Theorem 3.  The set of minimal (under inclusion) verifiers $\mathcal{V}$ is the set of bases of a matroid on $K$.

There is a nice proof of this fact by Emil Jeřábek as an answer to the same MathOverflow question above.  It turns out that this matroid is actually representable (over $\mathbb{Q}$) and that the proof is also valid for the more general $n \times n$ versions of Sudoku. We will not say anything more about the proof due to space limitations.  However, using the fact that $M=(K, \mathcal{V})$ is a matroid, it is now easy to show that 21 is in fact the answer to Question 1.

Proposition 4.  The minimum number of checks needed to verify the correctness of a Sudoku is 21.

Proof.  Translated into matroid theory language, the original question is simply asking what the rank of $M$ is. We have already shown that $B:=K \setminus (\{r_3, r_6, r_9\} \cup \{c_3, c_6, c_9\})$ is a verifier, so it suffices to show that it is a minimal verifier. Certainly, we cannot remove a block from $B$ and remain verifying, since there would then be a completely unchecked cell. On the other hand, for any two rows (or columns) $a$ and $b$ in the same band, it is easy to construct a Sudoku which is consistent everywhere except $a$ and $b$. Thus, we also cannot remove a row or column from $B$ and remain verifying. $\square$
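
The Sudoku alluded to in the last step of the proof is easy to construct explicitly. The sketch below (my own; the base grid is just one arbitrarily chosen valid grid) swaps two entries that share a column and a block but lie in different rows of the same band, so that exactly those two rows become inconsistent while every other row, column and block is untouched as a multiset.

```python
def lines(grid):
    """Yield (kind, index, values) for all 27 rows, columns and blocks."""
    for i in range(9):
        yield 'row', i, [grid[i][j] for j in range(9)]
        yield 'col', i, [grid[j][i] for j in range(9)]
    for b in range(9):
        r0, c0 = 3 * (b // 3), 3 * (b % 3)
        yield 'block', b, [grid[r0 + r][c0 + c] for r in range(3) for c in range(3)]

# A valid Sudoku: row i is the cyclic shift of 1..9 by 3*i + i//3.
valid = [[(3 * i + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
assert all(sorted(v) == list(range(1, 10)) for _, _, v in lines(valid))

bad = [row[:] for row in valid]
bad[0][0], bad[1][0] = bad[1][0], bad[0][0]   # two rows of the top band, same column and block

for kind, i, vals in lines(bad):
    if sorted(vals) != list(range(1, 10)):
        print('inconsistent:', kind, i + 1)   # prints: row 1 and row 2 only
```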

Of course, there is nothing special about the Sudoku graph (the graph on the 81 cells in which two cells are adjacent if they share a row, column or block; its maximal cliques are exactly the 27 rows, columns and blocks, and a filled grid is correct precisely when it is a proper 9-colouring), and we can attempt to play this game on an arbitrary graph. Let $G$ be a graph and suppose that we wish to verify the correctness of a (not necessarily proper) $\chi(G)$-colouring $C$ of $G$. Again we are given access to an oracle which can tell us whether $C$ is consistent on any maximal clique of $G$, where consistent now means that all vertices of the clique receive distinct colours. As before, we can define the set family $(K(G), \mathcal{V}(G))$, where $K(G)$ is the set of maximal cliques of $G$ and $\mathcal{V}(G)$ is the family of minimal verifiers.

We now arrive at our main problem.

Problem 5.  Characterize the graphs $G$ for which $(K(G), \mathcal{V}(G))$ is a matroid.

We now show that there are many such graphs.

Proposition 6.  For all bipartite graphs $G$, $(K(G), \mathcal{V}(G))$ is a matroid.

Proof.  We may assume that $G$ is connected. Since $G$ is bipartite, its set of maximal cliques is simply its set of edges. Let $T \subseteq E(G)$ be a minimal verifier. First, $(V(G), T)$ must be connected: otherwise, take the proper $2$-colouring of $G$ given by its bipartition and swap the two colours on one component of $(V(G), T)$; the result is still consistent on every edge of $T$, but it is inconsistent on some edge of $G$ leaving that component. On the other hand, if a $2$-colouring of $G$ is consistent on a spanning tree, then it is forced to be the bipartition of $G$ (up to swapping the two colours), and so it is consistent everywhere. Thus every spanning tree is a verifier, and by minimality $T$ is a spanning tree. Hence $(K(G), \mathcal{V}(G))$ is actually the graphic matroid of $G$. $\square$
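
As a sanity check on a small example, the following brute-force sketch (my own; the 6-cycle $C_6$ is just a convenient bipartite test case) lists the minimal verifiers of $C_6$ and confirms that they are exactly its spanning trees.

```python
from itertools import product, combinations

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]      # the 6-cycle C_6, a bipartite graph

def ok(col, e):                                   # oracle: endpoints coloured differently?
    return col[e[0]] != col[e[1]]

cols2 = list(product(range(2), repeat=n))         # all 2-colourings of the vertices

def verifies(F):                                  # every 2-colouring passing F is proper
    return all(all(ok(c, e) for e in edges) for c in cols2 if all(ok(c, e) for e in F))

verifiers = [set(F) for k in range(n + 1) for F in combinations(edges, k) if verifies(F)]
minimal = [F for F in verifiers if not any(S < F for S in verifiers)]
print(sorted(map(sorted, minimal)))
# the six 5-edge sets, i.e. exactly the spanning trees (Hamilton paths) of C_6
```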

It would be nice if there were a unified proof that worked for both the bipartite case and the Sudoku graphs. Such a proof would have to utilize some common structure of these two classes, since it is not true that for all graphs $G$, $(K(G), \mathcal{V}(G))$ is a matroid.

Proposition 7.  If $G=K_{2,2,2}$, then $(K(G), \mathcal{V}(G))$ is not a matroid.

Proof.  It will be convenient to regard $K_{2,2,2}$ as the octahedron $O$. Let $1234$ be the middle 4-cycle of $O$ and let $t$ and $b$ be the top and bottom vertices, so that the maximal cliques of $O$ are its eight triangular faces. Now, it is easy to see that $B_1:=\{t12, t34, b23\}$ and $B_2:=\{t23, t14, b34\}$ are minimal verifiers. On the other hand, basis exchange fails for $B_1$ and $B_2$: for instance, for $e = b23$ there is no $f \in B_2$ for which $(B_1 \setminus \{e\}) \cup \{f\}$ is a verifier. $\square$
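
Since the octahedron is small, all of the claims in this proof can be checked by brute force. The sketch below (my own, using the labels $t, 1, 2, 3, 4, b$ from the proof and $3 = \chi(O)$ colours) enumerates the verifiers among the eight triangular faces, extracts the minimal ones, and tests the exchange property between $B_1$ and $B_2$.

```python
from itertools import product, combinations

V = ['t', '1', '2', '3', '4', 'b']
faces = [('t','1','2'), ('t','2','3'), ('t','3','4'), ('t','4','1'),
         ('b','1','2'), ('b','2','3'), ('b','3','4'), ('b','4','1')]

def ok(col, face):                      # oracle: the three vertices get distinct colours
    return len({col[v] for v in face}) == 3

colourings = [dict(zip(V, c)) for c in product(range(3), repeat=6)]

def is_verifier(F):                     # every 3-colouring consistent on F is proper
    return all(all(ok(col, f) for f in faces)
               for col in colourings if all(ok(col, f) for f in F))

verifiers = [set(F) for k in range(len(faces) + 1)
             for F in combinations(faces, k) if is_verifier(F)]
minimal = [F for F in verifiers if not any(S < F for S in verifiers)]

B1 = {('t','1','2'), ('t','3','4'), ('b','2','3')}
B2 = {('t','2','3'), ('t','4','1'), ('b','3','4')}
print(B1 in minimal, B2 in minimal)     # True True: both are minimal verifiers
for e in B1 - B2:                       # try to exchange e for some element of B2
    print(e, any(((B1 - {e}) | {f}) in minimal for f in B2 - B1))
# every line prints False: no exchange is possible, so basis exchange fails
```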

There are also many interesting questions that one can ask about the framework we have set up for graph colouring verification.  For example, we can define the verification number of a graph $G$ to be the minimum size of a verifier (this is the same thing as the size of a minimal verifier if $(K(G), \mathcal{V}(G))$ is a matroid).

Question 8.  Among all $n$-vertex graphs, which ones have the largest verification number?

I was not able to find much in the literature, so any comments or references would be appreciated. Thanks for reading and see you in $3 + \epsilon$ months!

Acknowledgements.  Parts of this post are based on contributions from François Brunault, Emil Jeřábek and Zack Wolske on MathOverflow and from Ross Kang, Rohan Kapadia, Peter Nelson and Irene Pivotto at Winberie’s in Princeton.

Valuations on matroid base polytopes

Guest post by Joseph Kung.

We will give a short account of the $\mathcal{G}$-invariant from the point of view of a classical matroid theorist. The $\mathcal{G}$-invariant is a universal valuation on matroid base polytopes. Much of the theory holds for polymatroids and even more general submodular functions, but we will focus on matroids. This account is based on the work of H. Derksen and A. Fink (singly and jointly). My version differs in details and emphases from theirs. Also, I try to explain how bases of quasi-symmetric functions are used, as far as I can. I may be seriously misrepresenting the work of Derksen and Fink and I apologize in advance if I have done so.

For a finite set $E$, let $\mathbb{R}^E$ be the $|E|$-dimensional real vector space with coordinates labeled by the elements of $E$ and standard basis vectors $e_i$, $i \in E$. The (matroid) base polytope $Q(M)$ of the matroid $M(E)$ is the convex polytope in $\mathbb{R}^E$ obtained by taking the convex closure of the indicator vectors of the bases of $M$, that is,
\[
Q(M) = \mathrm{conv}\left\{ \sum_{b \in B} e_b: B \,\,\text{is a basis of}\,\,M \right\}.
\]
A (base polytope) decomposition is a decomposition $Q(M) = Q(M_1) \cup Q(M_2)$ where (a) both $Q(M_1)$ and $Q(M_2)$ are base polytopes of matroids $M_1$ and $M_2$, and (b) the intersection $Q(M_1) \cap Q(M_2)$ is itself a base polytope and is a face of both $Q(M_1)$ and $Q(M_2)$.

A function $v$ defined on base polytopes is a (matroid base polytope) valuation if
\[
v(Q(M)) = v(Q(M_1)) + v(Q(M_2)) - v(Q(M_1) \cap Q(M_2))
\]
when $Q(M) = Q(M_1) \cup Q(M_2)$ is a base polytope decomposition.

As a base polytope is the convex closure of indicator vectors of bases, any reasonable function defined on matroids which is a sum over bases should be a valuation. In particular the Tutte polynomial (as a sum of monomials defined by internal and external activities over all bases) is a valuation. (See the paper of Ardila, Fink, and Rincon.) In this sense, valuations are generalizations of the Tutte polynomial.

Derksen defined a valuation, the $\mathcal{G}$-invariant, in the following way. Let $M(E)$ be a rank-$r$ matroid on the set $E$, labeled as $\{1,2,\ldots,d\}$, where $d =|E|$. A permutation $\pi = x_1x_2 \ldots x_d$ of $E$ defines the sequence of non-negative integers
\[
(r_1, r_2 - r_1, r_3 - r_2, \ldots, r_d - r_{d-1}),
\]
where
\[
r_j = r(\{x_1,x_2,\ldots,x_{j}\}).
\]
This sequence is called the rank sequence $r(\pi)$ of the permutation $\pi$. The $\mathcal{G}$-invariant is defined by
\[
\mathcal{G}(M) = \sum_{\pi} b_{r(\pi)},
\]
where the sum ranges over all $d!$ permutations of $E$ and $b_{r(\pi)}$ is the element indexed by the rank sequence $r(\pi)$ in some chosen basis of the quasi-symmetric functions.
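
As a concrete illustration (my own sketch, not taken from the papers cited below), the definition can be evaluated by brute force from a rank oracle: tally the rank sequences over all $d!$ permutations, and read the counts as the coefficients of $\mathcal{G}(M)$ with respect to whichever basis $b_\alpha$ one chooses.

```python
from itertools import permutations
from collections import Counter

def g_invariant(rank, ground):
    """Tally the rank sequences r(pi) over all permutations pi of the ground set."""
    counts = Counter()
    for pi in permutations(ground):
        seq, prev = [], 0
        for j in range(1, len(pi) + 1):
            r = rank(pi[:j])
            seq.append(r - prev)
            prev = r
        counts[tuple(seq)] += 1
    return counts

# Example: the uniform matroid U_{3,6}, whose rank function is min(|A|, 3).
print(g_invariant(lambda A: min(len(A), 3), range(1, 7)))
# Counter({(1, 1, 1, 0, 0, 0): 720})
```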

We must digress here and explain what quasi-symmetric functions are. A formal power series $f(\underline{x})$ in the variables $x_1,x_2, \ldots$ is quasi-symmetric if $f$ has bounded degree and, for every (finite) sequence of non-negative integers $(\alpha_1, \alpha_2, \ldots, \alpha_k)$, the coefficients of the monomials $x^{\alpha_1}_{i_1} x^{\alpha_2}_{i_2} \cdots x^{\alpha_k}_{i_k}$, where $i_1 < i_2 < \cdots < i_k,$ are all the same. Put another way, $f$ is quasi-symmetric if and only if it is a finite linear combination of the monomial quasi-symmetric functions
\[
m_{(\alpha_1, \alpha_2, \ldots, \alpha_k)} := \sum_{i_1 < i_2 < \cdots < i_k} x_{i_1}^{\alpha_1} x^{\alpha_2}_{i_2} \cdots x^{\alpha_k}_{i_k}.
\]
In particular, a quasi-symmetric function is determined by an array of coefficients indexed by sequences of non-negative integers and one may choose any convenient basis for quasi-symmetric functions to define the $\mathcal{G}$-invariant. Each basis gives a different function $f(\underline{x})$ but all of them contain the same information about the matroid $M$. Thus, the $\mathcal{G}$-invariant is really an equivalence class of power series, defined up to a choice of basis of quasi-symmetric functions. Alternatively, we can simply think of the $\mathcal{G}$-invariant as an array of coefficients indexed by sequences of non-negative integers which may be transformed by certain change-of-basis matrices.

For matroids, rank sequences are sequences of $0$'s and $1$'s with exactly $r$ $1$'s, where $r$ is the rank of the matroid. Using the formula for the rank function of the dual $M^*$, one checks that the rank sequence in $M^*$ of the reversed permutation $x_d x_{d-1} \cdots x_1$ is obtained from the rank sequence of $\pi = x_1 x_2 \cdots x_d$ in $M$ by reversing it and switching $0$'s with $1$'s. Since the defining sum ranges over all permutations, $\mathcal{G}(M^*)$ can therefore be obtained from $\mathcal{G}(M)$ by the corresponding relabelling of basis elements, that is, by a change of basis of quasi-symmetric functions.

The elements of the permutation where the $1$'s occur form a basis of $M$. Thus each permutation determines a basis, and the $\mathcal{G}$-invariant is a sum over bases (each counted with multiplicity). It is not surprising that it is a base polytope valuation.

Theorem (Derksen and Fink). The $\mathcal{G}$-invariant is a “universal” valuation on matroid base polytopes, in the sense that every base polytope valuation is an “evaluation” of the $\mathcal{G}$-invariant. In particular, the Tutte polynomial is an “evaluation” of the $\mathcal{G}$-invariant.

The proof is complicated. See the paper of Derksen and Fink cited at the end of this note.

As an example, consider the uniform matroid $U_{3,6}$ on the set $\{1,2,3,4,5,6\}$. All $3$-subsets are bases. The permutation $123456$ gives the rank sequence $(1,1,1,0,0,0)$, and so does every other permutation. Hence, using the basis of monomial quasi-symmetric functions,
\[
\mathcal{G}(U_{3,6})=720m_{(1,1,1,0,0,0)}=720\sum_{i_1 < i_2 < i_3} x_{i_1}x_{i_2}x_{i_3}.
\]
For comparison,
\begin{multline*}
T(U_{3,6};x,y) = (x-1)^3+ 6 (x-1)^2 +15 (x-1) + 20 + 15(y-1) + 6(y-1)^2 + (y-1)^3
\\
= x^3 + 3x^2 + 6x + 6y + 3y^2 + y^3.
\end{multline*}
Here is where I might be totally wrong. The only way I can see of getting $T$ from $\mathcal{G}$ is to do a change of basis and assign different values to basis elements. So “evaluation” as used in the theorem means something more general.

What information does the $\mathcal{G}$-invariant contain? Since almost all matroids are conjectured to be paving matroids, a first-order answer might be obtained by calculating the $\mathcal{G}$-invariant for paving matroids. This is a simple calculation. I think R. T. Tugger was the first to do this.

Most of you would know what a paving matroid is, but I need to establish notation. Recall that a (Hartmanis) $k$-partition $\underline{H}$ of the set $E$ is a collection of subsets called blocks satisfying three conditions:

(a) the union of all the blocks equals $E$;

(b) every block has size at least $k$;

(c) every subset of size $k$ in $E$ is contained in exactly one block.

The blocks with more than $k$ elements are non-trivial and those with exactly $k$ elements are trivial. Let $\underline{H}$ be an $(r-1)$-partition. The paving matroid $\mathrm{Pav}(\underline{H})$ is the matroid of rank $r$ with the following flats: the entire set $E,$ the blocks of $\underline{H},$ and all subsets of size $r-2$ or less. Roughly speaking, all the dependencies occur at the top.

Proposition. The invariant $\mathcal{G}(\mathrm{Pav}(\underline{H}))$ can be computed from $|E|$, $r$ and the multiset $\{ |H|: H \,\,\text{is a non-trivial block}\}$.

Proof. Let $|E|=d$ and let $\pi = x_1,x_2, \ldots, x_d$ be a permutation of $E$. Since every subset of size $r-1$ is independent, the rank sequence of $\pi$ starts with $r-1$ $1$'s. The remaining $1$ occurs in position $i, \, i \ge r.$ If $\{x_1, x_2, \ldots,x_{r-1}\}$ is a trivial block, then $i = r$. If not, then $\{x_1, x_2, \ldots,x_{r-1}\}$ spans a non-trivial block $H$ and $i$ can vary from $r$ to $|H|+1$. Indeed, the index is $i$ if and only if $\{x_1,x_2, \ldots,x_{i-1}\} \subseteq H$ and $x_i \notin H$. Hence, in the sum for the $\mathcal{G}$-invariant, a trivial block contributes
\[
(r-1)!(d-r+1)! m_{(1^r0^{d-r})}.
\]
On the other paw, a non-trivial block contributes
\[
\sum_{i=r}^{|H|+1} \frac {|H|!} {(|H| - i + 1)!} \,(|E| - |H|)\, (|E| - i)!\; m_{(1^{r-1}0^{i-r}10^{d-i})}.
\]
Here $(1^{r-1}0^{i-r}10^{d-i}) = (1,1,\ldots,1,0,0,\ldots,0,1,0,0,\ldots,0)$ is the $0$-$1$ sequence with $1$'s in the first $r-1$ positions and in position $i$. Note that the number of trivial blocks can be calculated from $|E|$, $r$ and the sizes of the non-trivial blocks.

As an example, let $P$ be the paving matroid on $123456$ defined by the $2$-partition $\{1234, 456, 15, 16, 25, 26, 35, 36\}$. Geometrically, $P$ is the rank-$3$ matroid consisting of a $4$-point line $1234$ and a $3$-point line $456$ intersecting at the common point $4$. Then
\[
\mathcal{G}(P) = 48\, m_{(1,1,0,0,1,0)} + 132\, m_{(1,1,0,1,0,0)} + 540\, m_{(1,1,1,0,0,0)}.
\]
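
These coefficients can be double-checked by brute force, in the same spirit as the earlier sketch; the following self-contained snippet (mine, not from the original post) tallies the rank sequences of all $720$ permutations using the rank function of $P$.

```python
from itertools import permutations
from collections import Counter

LINES = [{1, 2, 3, 4}, {4, 5, 6}]      # the two non-trivial blocks (lines) of P

def rank(A):
    """Rank in P: sets of size <= 2 are independent; larger sets have rank 2
    if they lie on one of the two lines, and rank 3 otherwise."""
    A = set(A)
    if len(A) <= 2:
        return len(A)
    return 2 if any(A <= L for L in LINES) else 3

counts = Counter()
for pi in permutations(range(1, 7)):
    seq, prev = [], 0
    for j in range(1, 7):
        r = rank(pi[:j])
        seq.append(r - prev)
        prev = r
    counts[tuple(seq)] += 1

print(counts)
# Counter({(1, 1, 1, 0, 0, 0): 540, (1, 1, 0, 1, 0, 0): 132, (1, 1, 0, 0, 1, 0): 48})
```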

The analog of the proposition holds for the Tutte polynomial. For a paving matroid, the $\mathcal{G}$-invariant contains exactly the same information about the matroid as the Tutte polynomial. In particular, the $\mathcal{G}$-invariant and the Tutte polynomial have the same asymptotic power to distinguish matroids. However, the $\mathcal{G}$-invariant can distinguish pairs of matroids that the Tutte polynomial cannot. For examples, see the paper of Billera, Jia, and Reiner.

R. T. Tugger conjectures that one can use the argument in the chapter of Brylawski and Oxley in “Matroid Applications” (Exercise 9, p. 198) to show that for any integer $N$, there are at least $N$ non-isomorphic matroids having the same $\mathcal{G}$-invariant.

Although it is a sum over a lot of permutations, calculating the $\mathcal{G}$-invariant feels somehow more elegant than calculating the Tutte polynomial. (Yes, I like the definition!) As an exercise, the reader might imagine calculating $\mathcal{G}(\mathrm{PG}(r-1,q))$. The bases of a projective geometry all behave in the same way, so one can easily write down the terms in the sum for the $\mathcal{G}$-invariant.

For each basis, the sum over the quasi-symmetric functions determined by its rank sequences summarizes in some way how the basis interacts with the other elements of the matroid. This includes its internal and external activity. It would be very interesting to derive internal and external activities directly from the quasi-symmetric function of a basis. I have no idea how to do this and will be happy if anyone can tell me.

The $\mathcal{G}$-invariant has applications. I am not the person to write about them, but I hope this initial attempt will motivate someone else to tell us about them.

Here are four papers out of many I could have cited:

F. Ardila, A. Fink, F. Rincon, Valuations for matroid polytope subdivisions, Canad. J. Math. 62 (2010), 1228–1245.

L.J. Billera, N. Jia, V. Reiner, A quasisymmetric function for matroids, European J. Combin. 30 (2009), 1727–1757.

H. Derksen, Symmetric and quasi-symmetric functions associated to polymatroids, arXiv:0801.4393.

H. Derksen, A. Fink, Valuative invariants for polymatroids, Adv. Math. 225 (2010), 1840–1892.

An introduction to quasi-symmetric functions can be found in the book Enumerative Combinatorics II by Richard Stanley.