Counting Matroids by Entropy

Guest post by Rudi Pendavingh and Jorn van der Pol


Bounding the entropy of a random variable often gives surprisingly strong bounds on the cardinality of its support. With Nikhil Bansal [BPvdP14], we gave a short entropy argument to obtain a bound on the number of matroids on groundset $\{1,2,\ldots,n\}$ that is quite close to the best known upper bound.

Roughly, the method works by bounding the number of matroids of rank 2, and using an entropy argument to derive a bound for the number of matroids of rank $r$. The same method works for a number of other minor-closed classes, possibly after bounding the number of matroids of rank 3 (rather than 2) in the class under consideration.

In this blog post, we show how entropy can be used for counting problems in general and for counting matroids in particular, and we give a new proof for the following theorem from [PvdP15].

Theorem. If $N=U_{2,k}$ or $N=U_{3,6}$, then asymptotically almost all matroids have an $N$-minor.


Let $X$ be a random variable taking its values in a set $\mathcal{X}$ (we will always assume that $\mathcal{X}$ is a finite set). For $x \in \mathcal{X}$, let us write $P(x) = \mathbb{P}(X=x)$ for the probability that $X=x$. The entropy of $X$ is defined as
$$\mathcal{H}(X) = -\sum_{x \in \mathcal{X}} P(x) \log P(x),$$
where $\log$ denotes the binary logarithm, and for convenience we write $0\log 0 = 0$.

As an example, let us consider the case where $X$ takes the values 0 and 1 with probability $1-p$ and $p$, respectively. The entropy of $X$ is given by the binary entropy function,
$$\mathcal{H}(X) = H(p) := -p \log p - (1-p) \log (1-p).$$
If $p=1/2$, $X$ is a purely random bit of information, and we have $\mathcal{H}(X) = 1$. If $p=0$ or $p=1$, there is no randomness involved, and $\mathcal{H}(X) = 0$.

More generally, the entropy of $X$ measures the amount of information that is gained by learning the value of $X$. This may provide some intuition for the following properties of $\mathcal{H}(X)$.

  • Boundedness: $\mathcal{H}(X) \le \log |\mathcal{X}|$.
  • Subadditivity: $\mathcal{H}(X,Y) \le \mathcal{H}(X) + \mathcal{H}(Y)$.

Boundedness holds with equality if and only if $X$ has the uniform distribution on $\mathcal{X}$, i.e. $P(x) = 1/|\mathcal{X}|$ for all $x$. This makes entropy very useful for counting problems. If the random variable $X$ has the uniform distribution over $\mathcal{X}$, then $|\mathcal{X}| = 2^{\mathcal{H}(X)}$. So, bounds on the entropy of $X$ translate directly to bounds on $|\mathcal{X}|$.
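These facts are easy to check numerically. Below is a small Python sanity check (the helper names `entropy` and `H` are our own, not from any library): it confirms $H(1/2)=1$, $H(0)=0$, and that the uniform distribution attains the boundedness bound, so that $|\mathcal{X}| = 2^{\mathcal{H}(X)}$.

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def H(p):
    """The binary entropy function."""
    return entropy([p, 1 - p])

assert H(0.5) == 1.0   # a purely random bit carries one bit of information
assert H(0.0) == 0.0   # no randomness, no information

# Boundedness: H(X) <= log2 |X|, with equality for the uniform distribution,
# so for uniform X we recover |X| = 2^H(X) -- entropy bounds are counting bounds.
size = 8
assert abs(entropy([1 / size] * size) - math.log2(size)) < 1e-12
```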

In our applications, the space $\mathcal{X}$ consists of binary vectors, indexed by some set $J$. This allows us to define projections $X_T = (X_j : j \in T)$ for $T\subseteq J$.

Our main tool is the following result, which was first obtained in [CFGS86] in a different form. A proof and further discussion can also be found in [AS08].

Shearer’s Entropy Lemma. If $X$ is a random variable taking values in $\mathcal{X} \subseteq \{0,1\}^J$, and $T_1, T_2, \ldots, T_m$ is a collection of subsets of $J$ such that each $j \in J$ is contained in at least $k$ of the $T_i$, then
$$\mathcal{H}(X) \le \frac{1}{k} \sum_{i=1}^m \mathcal{H}(X_{T_i}).$$

Intuitively, the lemma asserts that if every bit of information is contained in at least $k$ of the projected random variables, then the total amount of information contained in the projections is at least $k$ times the amount of information in $X$. Note that subadditivity follows as a special case of Shearer’s Lemma.
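A small example makes the lemma concrete. Take $J=\{1,2,3\}$, let the $T_i$ be the three 2-element subsets of $J$ (so every index is covered $k=2$ times), and let $X$ be uniform on the even-weight vectors in $\{0,1\}^3$. The sketch below (with our own ad-hoc helpers) verifies $\mathcal{H}(X)=2 \le \tfrac{1}{2}\sum_i \mathcal{H}(X_{T_i})=3$:

```python
import math
from collections import Counter
from itertools import combinations

def entropy_of_samples(samples):
    """Entropy (base 2) of the empirical distribution of a list of outcomes."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# X uniform on the even-weight vectors of {0,1}^3
X = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

covers = list(combinations(range(3), 2))   # T_1, T_2, T_3: each index lies in k = 2 of them
k = 2

h_X = entropy_of_samples(X)   # = log2 4 = 2, since X is uniform on 4 outcomes
rhs = sum(entropy_of_samples([tuple(x[j] for j in T) for x in X]) for T in covers) / k

assert h_X == 2.0
assert h_X <= rhs   # Shearer: H(X) <= (1/k) * sum_i H(X_{T_i})
```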

Matroid counting

In what follows, we will write $\mathbb{M}$ for the class of all matroids, $\mathbb{M}(E,r)$ for the class of matroids on groundset $E$ of rank $r$, and so on. $\mathcal{M}$ will be a subclass of matroids that is always closed under isomorphism, and possibly satisfies some additional properties.

As $\mathcal{M}$ is closed under isomorphism, $|\mathcal{M}\cap\mathbb{M}(E,r)|$ depends on $E$ only through its cardinality, and we will write $m_\mathcal{M}(n,r) = |\mathcal{M}\cap\mathbb{M}([n],r)|$.

The following result illustrates Shearer’s Entropy Lemma. As its proof is short, we will include it here.

Lemma. If $\mathcal{M}$ is closed under taking submatroids, then $$\frac{\log\left(1+m_{\mathcal{M}}(n,r)\right)}{  \binom{n}{r}} \leq \frac{\log \left(1+m_{\mathcal{M}}(n-1,r)\right)}{  \binom{n-1}{r}}.$$

Proof: If $M$ is a matroid with set of bases $\mathcal{B}$, and $e\in E(M)$ is not a coloop of $M$, then $\{B \in \mathcal{B} : e\not\in B\}$ is the set of bases of $M\backslash e$. We will apply Shearer’s Entropy Lemma in such a way that its projections correspond to deletions.

Let $E$ be a set of cardinality $n$. We encode $M\in\mathcal{M}\cap\mathbb{M}(E,r)$ by the incidence vector of its set of bases. This is the binary vector $\chi$, indexed by the $r$-subsets of $E$, such that $\chi(B) = 1$ if and only if $B$ is a basis of $M$. Our space $\mathcal{X} \equiv \mathcal{X}(E,r)$ consists of all incidence vectors corresponding to $M\in\mathcal{M}\cap\mathbb{M}(E,r)$, as well as the all-zero vector. Thus, $|\mathcal{X}| = 1+m_\mathcal{M}(n,r)$, and if $X$ has the uniform distribution on $\mathcal{X}$, then $\mathcal{H}(X) = \log\left(1+ m_\mathcal{M}(n,r) \right)$.

If $e$ is not a coloop of $M$, then the incidence vector of $M\backslash e$ is obtained from $\chi$ by restricting $\chi$ to the entries avoiding $e$. (If $e$ is a coloop, this operation yields the all-zero vector, which is the reason that we need to include the all-zero vector in $\mathcal{X}$.)

We make two observations.

  • For $e \in E$, let $T_e = \binom{E-e}{r}$. It follows from the previous paragraph that for each $x \in \mathcal{X}(E,r)$, the projection $x_{T_e}$ lies in $\mathcal{X}(E-e,r)$.
  • Each $r$-subset of $E$ is contained in $|E|-r$ different $T_e$.

An application of Shearer’s Entropy Lemma, followed by an application of the Boundedness Property shows
$$\mathcal{H}(X) \le \frac{1}{n-r} \sum_{e \in E} \mathcal{H}(X_{T_e}) \le \frac{n}{n-r} \log \left(1+ m_\mathcal{M}(n-1,r) \right),$$
which concludes the proof. $\square$

Repeated application of the dual statement of the above lemma (deletion in a matroid corresponds to contraction in its dual) yields the following variant:

Matroid Entropy Lemma. If $\mathcal{M}$ is closed under contraction, then for all $t \le r$
$$\frac{\log\left(1+m_{\mathcal{M}}(n,r)\right)}{\binom{n}{r}} \le \frac{\log\left(1+m_{\mathcal{M}}(n-t,r-t) \right)}{\binom{n-t}{r-t}}.$$

This variant enables us to lift upper bounds on the number of matroids of a fixed rank $s$  to upper bounds on the number of matroids in any rank $r\geq s$.

The first natural application of this scheme is counting matroids in general, so the case $\mathcal{M}=\mathbb{M}$. We will abbreviate $m(n,r):=m_{\mathbb{M}}(n,r)$, and consider the matroids of fixed rank $s=2$. Since each matroid of rank 2 on $n$ elements determines a nontrivial partition of $n+1$ elements, we have $1+m(n,2)\leq (n+1)^n$. Hence, $$\frac{\log (1+m(n,r))}{  \binom{n}{r}} \leq \frac{\log (1+m(n-r+2,2))}{\binom{n-r+2}{2}}\leq \frac{2\log(n-r+3)}{n-r+1}.$$ It is then straightforward to derive the following for $m(n):=\sum_r m(n,r)$.
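For the smallest cases, both the base-case bound and the monotonicity of the normalized log-counts can be verified by brute force: a nonempty family of 2-subsets of the ground set satisfying the basis-exchange axiom is exactly the set of bases of a rank-2 matroid. A rough Python sketch (our own code, feasible only for tiny $n$):

```python
import math
from itertools import combinations

def is_basis_family(bases):
    """Check the basis-exchange axiom for a family of 2-subsets (as frozensets)."""
    for B1 in bases:
        for B2 in bases:
            for x in B1 - B2:
                if not any((B1 - {x}) | {y} in bases for y in B2 - B1):
                    return False
    return True

def count_rank2_matroids(n):
    """Brute-force count of matroids of rank 2 on ground set {1, ..., n}."""
    pairs = [frozenset(p) for p in combinations(range(1, n + 1), 2)]
    count = 0
    for r in range(1, len(pairs) + 1):
        for sub in combinations(pairs, r):
            if is_basis_family(frozenset(sub)):
                count += 1
    return count

m = {n: count_rank2_matroids(n) for n in (3, 4, 5)}
print(m)  # {3: 7, 4: 36, 5: 171}

# The normalized log-counts are non-increasing in n, as the deletion lemma
# predicts, and the partition encoding gives 1 + m(n,2) <= (n+1)^n:
f = lambda n: math.log2(1 + m[n]) / math.comb(n, 2)
assert f(5) <= f(4) <= f(3)
assert all(1 + m[n] <= (n + 1) ** n for n in m)
```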

Theorem (Bansal, Pendavingh, Van der Pol 2014).

$$\log m(n) \leq O\left(\binom{n}{n/2}\frac{\log(n)}{n}\right)\text{ as }n\rightarrow\infty$$

For comparison, we state the following lower bound.

Theorem (Knuth 1974;  Graham, Sloane 1980). $\log m(n) \geq \binom{n}{n/2}/n$.

So our entropy bound on $\log m(n)$ is off by a factor $\log n$, which is not too bad given the simplicity of the argument. The $\log n$ factor will not go away if we use a fixed rank $s>2$ for the base case. The following paraphrases a result due to Bennett and Bohman (see [Kee15]).

Theorem. For any fixed $s\geq 3$, we have  $$\frac{\log m(n,s)}{\binom{n}{s}}\geq \frac{\log(e^{1-s}(n-s+1))}{n-s+1}$$ for all sufficiently large $n$.

We defer the details of these lower bounds to a later post.

Further applications

To show that asymptotically almost all matroids have a certain fixed matroid $N$ as a minor, it suffices to find an upper bound on the number of matroids without such a minor which is vanishing compared to the above lower bound on $m(n)$. We next show how entropy counting gives a sufficient upper bound in case $N=U_{2,k}$ or $N=U_{3,6}$. Let $$Ex(N):= \{M \in \mathbb{M}: M\not \geq N\}.$$ We first consider a fixed $N=U_{2,k}$, and bound $m'(n,r):=m_{Ex(U_{2,k})}(n,r)$.

Each matroid of rank 2 on $n$ elements without a $U_{2,k}$-minor corresponds to a partition of $n+1$ items into at most $k$ parts, so that $1+m'(n,2)\leq k^{n+1}$. This gives $$\frac{\log (1+m'(n,r))}{  \binom{n}{r}} \leq \frac{\log (1+m'(n-r+2,2))}{\binom{n-r+2}{2}}\leq \frac{2\log(k)}{n}(1+o(1)),$$ which does not suffice to push the upper bound on $m_{Ex(U_{2,k})}(n):=\sum_r m'(n, r)$ below the lower bound on $m(n)$.

At the time of writing [PvdP15], we made this same calculation and then gave up on using the matroid entropy lemma to produce the upper bound. Instead, we developed cover complexity to do the job. But recently we noted that we were wrong to give up on entropy so soon: it works beautifully if we proceed from matroids of fixed rank $s=3$ rather than $2$.

It is easy to show that a simple matroid of rank 3 without $U_{2,k}$ as a minor has at most $1+(k-1)(k-2)$ points (fix a point $p$: every other point lies on one of the lines through $p$; contracting $p$ shows that there are at most $k-1$ such lines, and each of them contains at most $k-2$ points besides $p$). Hence, there is a global upper bound $c_k$ on the number of such (simple) matroids. Since each matroid is fully determined by its simplification and the assignment of its non-loop elements to parallel classes, we obtain the upper bound $1+m'(n,3)\leq c_k (k^2)^n$. It follows that $$\frac{\log (1+m'(n,r))}{  \binom{n}{r}} \leq \frac{\log (1+m'(n-r+3,3))}{\binom{n-r+3}{3}}\leq \frac{12\log(k)}{n^2}(1+o(1)).$$ That bound does suffice:

Theorem (Pendavingh, Van der Pol 2015).  For any fixed $k$, we have $$\log m_{Ex(U_{2,k})}(n) \leq O\left(\binom{n}{n/2}/n^2\right)\text{ as } n\rightarrow \infty$$

Hence $\log m_{Ex(U_{2,k})}(n)\leq o(\log m(n))$, and it follows that asymptotically almost all matroids on $n$ elements have $U_{2,k}$ as a minor.

To bound the matroids without $U_{3,6}$ as a minor, it will also suffice to consider simple matroids in base rank 3.

Lemma. There is a constant $n_0$ so that if $M$ is a simple matroid of rank 3 on $E$ without a $U_{3,6}$-minor, and $|E|\geq n_0$, then there are lines $\ell_1$ and $\ell_2$ and a point $p$ of $M$ so that $E= \ell_1\cup\ell_2\cup\{p\}$.

Our shortest argument works for $n_0=56$. You may enjoy improving that constant.

The structural description of the lemma implies a bound on the number of simple matroids without $U_{3,6}$ in rank 3, which again is enough to prove that asymptotically almost all matroids on $n$ elements have $U_{3,6}$ as a minor.


When applying the matroid entropy lemma, a key issue seems to be the choice of the fixed rank $s$ for the base case. The lower we pick $s$, the easier it is to prove the base case, but a higher $s$ gives better bounds. And as we found out to our embarrassment, we sometimes need to go to a sufficiently high $s$ before our bounds are any good.

So far we have not ventured beyond base rank $s=3$, which of course comes with an attractive arena of points-and-lines-with-your-favourite-property. We think this perhaps means that the entropy counting lemma has not been used to its full potential. To illustrate,  we offer the following conjectures.

Matroids without the 3-whirl

Conjecture. Asymptotically almost all matroids have the 3-whirl $W^3$ as a minor.

It is a theorem that almost all matroids are 3-connected [OSWW13], and we hazard the guess that most of those 3-connected matroids will have $M(K_4)$ or $W^3$ as a minor. Having $M(K_4)$ as a minor seems strictly more of a ‘coincidence’ than having $W^3$ as a minor, because $M(K_4)$ has all the hyperplanes of $W^3$, plus one extra. So if at least one of $W^3$ and $M(K_4)$ is a minor of almost all matroids, and the former is more likely than the latter, our conjecture becomes quite believable.

We could not prove a sufficient upper bound on $m_{Ex(W^3)}(n,s)$ in fixed rank $s=3$, but $s=4$ is open.

Matroids without a fixed uniform minor

We have very recently established the following [PvdP16].

Theorem. Let $N$ be a uniform matroid. Asymptotically almost all matroids have $N$ as a minor.

The technique we used for this is much more involved than the above entropy method, and it does not give good bounds in fixed rank. The following is open.

Conjecture. Let $N$ be a uniform matroid. For any $c$ there exists an $s$ so that $$\log\left(1+m_{Ex(N)}(n,s)\right) / \binom{n}{s} \leq \frac{c}{n}$$ for all sufficiently large $n$.

If this conjecture is true for some $c<1/2$, it would yield an alternative proof of our theorem.

Oriented matroids

Conjecture. Asymptotically almost all matroids are not orientable.

This conjecture may not appeal to all of you. An enumeration of small matroids reveals that the majority is, in fact, orientable [MMIB12].

Let $\bar{m}(n,r)$ denote the number of oriented matroids on $E=\{1,\ldots, n\}$ and of rank $r$. There is a perfect analogue of the matroid entropy lemma for oriented matroids, with an entirely analogous proof (you use chirotopes rather than incidence vectors of base sets, otherwise the proof is identical).

Oriented Matroid Entropy Lemma. For all $t \le r$
$$\frac{\log\left(1+\bar{m}(n,r)\right)}{\binom{n}{r}} \le \frac{\log\left(1+\bar{m}(n-t,r-t) \right)}{\binom{n-t}{r-t}}.$$

If $\log \bar{m}(n)\leq (1-\epsilon)\log m(n)$ for sufficiently large $n$, then the number of oriented and hence the number of orientable matroids will be vanishing compared to the number of matroids, which would prove the conjecture. It would suffice to establish:

Conjecture. There is an $s$ and a $c<1/2$ such that $$\frac{\log\left(1+\bar{m}(n,s)\right)}{ \binom{n}{s}} \leq \frac{c}{n}$$ for all sufficiently large $n$.

Except for the constant $c<1/2$, that does not seem too much to ask, given the following theorem [ERS86].

Theorem. For each $s$ there is a $c$ such that $$\frac{\log \bar{m}(n,s)} { \binom{n}{s} }\leq \frac{c}{n}$$ for all sufficiently large $n$.

Note that this theorem already implies that for any fixed $s$, we have $$\log \bar{m}(n,s)\leq o(\log m(n,s))$$ for $n\rightarrow\infty$, since $\log m(n,s)/\binom{n}{s}\sim \log (n)/n$ as $n\rightarrow\infty$.

Finally, a construction due to Felsner and Valtr [FV11] shows that $$\log \bar{m}(n,3)\geq 0.1887 n^2,$$ i.e. $\log \bar{m}(n,3)/ \binom{n}{3} \geq 1.13/n.$ So base rank $s=3$ will not do, but $s=4$ is open…


[AS08] Noga Alon and Joel Spencer, The probabilistic method, 3rd edition.

[BPvdP14] N. Bansal, R.A. Pendavingh, and J.G. van der Pol. An entropy argument for counting matroids. Journal of Combinatorial Theory B, 109:258-262, 2014.

[CFGS86] F. R. K. Chung, R. L. Graham, P. Frankl, and J. B. Shearer. Some intersection theorems for ordered sets and graphs. Journal of Combinatorial Theory A, 43(1):23-37, 1986.

[ERS86] H. Edelsbrunner, J. O’Rourke, and R. Seidel. Constructing arrangements of lines and hyperplanes with applications. SIAM Journal on Computing 15 (2): 341–363, 1986 doi:10.1137/0215024.

[FV11] S. Felsner and P. Valtr. Coding and Counting Arrangements of Pseudolines. Discrete & Computational Geometry, 46:405

[Kee15] P. Keevash. Counting designs.  Arxiv preprint 1504.02909

[MMIB12] Y. Matsumoto, S. Moriyama, H. Imai and D. Bremner. Matroid enumeration for incidence geometry. Discrete and Computational Geometry. Vol. 47, issue 1, pp. 17-43, 2012.

[OSWW13] J. Oxley, C. Semple, L. Warshauer, and D. Welsh. On properties of almost all matroids. Adv. Appl. Math., 50(1):115–124, 2013.

[PvdP15] R.A. Pendavingh and J.G. van der Pol. Counting matroids in minor-closed classes. Journal of Combinatorial Theory B, 111:126–147, 2015.

[PvdP16] R.A. Pendavingh and J.G. van der Pol. On the number of bases in almost all matroids. Arxiv preprint 1602.04763

Matroid Prophet Inequalities and Mechanism Design

Guest post by Bart de Keijzer.

This blog post is about a result in optimal stopping theory that relates to matroids, and an application of this result to mechanism design (a field of game theory).

The Classical Prophet Inequality

In the theory of optimal stopping, we deal with problems where a gambler observes a sequence of unknown and possibly random values, and has to decide when to take a particular action in order to optimize some objective value.  In a previous post, we have already seen the secretary problem, which is an example of a problem studied in optimal stopping theory. In that problem, we face a finite sequence of $n$ numbers, and each time a new number arrives we have to decide irrevocably whether we want to choose that number or not.

I will discuss in this post a slightly different problem from optimal stopping theory, where the permutation is not random, but the numbers are. We are given a sequence $X_1, \ldots, X_n$ of independent non-negative random variables, and the gambler observes this sequence. When he observes a number, he has to make an irrevocable decision about whether or not to stop. If he decides to stop, he receives a payoff equal to the number on which he chose to stop. If he decides not to stop, then the gambler simply moves on to the next number in the sequence. The classical prophet inequality of Krengel, Sucheston, and Garling [4] tells us that the gambler can get a payoff that is half of the expected maximum.

Theorem 1.  If $X_1, \ldots, X_n$ are independent non-negative random variables, then there exists a stopping rule $R$ such that $\mathbf{E}[R(X_1,\ldots,X_n)] \geq \mathbf{E}[\max\{X_i : 1 \leq i \leq n\}]/2$.

The stopping rule $R$ that achieves this particular guarantee is simple: stop at the first value that is at least half of the expected maximum. The name prophet inequality comes from the fact that the gambler can achieve a payoff that is at least half of the value that would be picked by a prophet who is completely foresighted. Many variations and generalizations of this result have appeared in the literature. See [2] for an (old) survey.
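The rule is easy to test empirically. The following Python sketch (the distributions are chosen arbitrarily for illustration, and all names are ours) estimates $\mathbf{E}[\max]$ by Monte Carlo and then runs the half-of-expected-maximum threshold rule:

```python
import random

random.seed(0)
TRIALS = 200_000

def mean(f):
    """Monte Carlo estimate of E[f()]."""
    return sum(f() for _ in range(TRIALS)) / TRIALS

# n independent non-negative random values; uniform on [0, i] purely for illustration
dists = [lambda i=i: random.uniform(0, i) for i in range(1, 6)]

def stop_at_first_above(threshold):
    """Stop at the first value that is at least the threshold; payoff 0 if none is."""
    for d in dists:
        x = d()
        if x >= threshold:
            return x
    return 0.0

emax = mean(lambda: max(d() for d in dists))       # the prophet's expected payoff
payoff = mean(lambda: stop_at_first_above(emax / 2))

print(f"E[max] ≈ {emax:.3f}, threshold rule ≈ {payoff:.3f}")
assert payoff >= emax / 2   # the prophet inequality guarantee
```

For these particular distributions the rule does noticeably better than the guaranteed factor $1/2$, which is typical: the factor $1/2$ is only tight for carefully constructed instances.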

Sequential Posted Price Mechanisms

We now move to a related scenario that is studied in mechanism design. Suppose that multiple identical products are being sold to a set of $n$ interested buyers. Each buyer is interested in obtaining at most one copy of the item, and each buyer has a valuation for the item that is for sale. The valuation of a buyer $i$ is expressed in terms of a real number which we denote by $v_i$, and is drawn from an independent probability distribution $D_i$. The valuations are privately held by the buyers and the auctioneer does not know them. However, the auctioneer does know the distributions $D_1, \ldots, D_n$.

The auctioneer wants to sell the item through a very simple process called a sequential posted price (SPP) mechanism. This means that the auctioneer makes take-it-or-leave-it offers to the buyers one by one. When a buyer $i$ receives an offer of $p_i$, he can choose to accept it or reject it. If he accepts, the buyer pays price $p_i$ and gets an item. The utility of buyer $i$ is then $v_i - p_i$. If the buyer rejects, he pays nothing, gets no item, and the auctioneer proceeds to the next buyer. The auctioneer makes at most one offer to each buyer, and the buyers are assumed to optimize their utility, so that a buyer will accept an offer if and only if his valuation exceeds the price of the offer.

SPP mechanisms are studied because they are an abstraction of a sales mechanism that is frequently encountered in the real world. The popularity of posted price mechanisms is due to their simplicity, their transparency toward the participating buyers, and the efficiency with which they can be implemented.

The question that we are interested in is how well we can approximate the optimal social welfare by means of an SPP mechanism. The social welfare of an allocation is defined as the sum of valuations of all the buyers who get an item. In case there is only one unit for sale, this setting is very similar to the prophet inequality setting, and in fact we can use the prophet inequality to obtain a good approximation to the social welfare. The optimum social welfare is $\mathbf{E}[\max\{v_i : 1 \leq i \leq n\}]$, and the prophet inequality tells us that the auctioneer is able to achieve half of that by offering every buyer a price of $\mathbf{E}[\max\{v_i : 1 \leq i \leq n\}]/2$.

If there is an infinite supply or multiple copies of the item for sale, the original prophet inequality cannot be applied anymore.  Having multiple copies or infinite supply essentially means that we have a different allocation constraint on the sets of buyers to which we may allocate an item. In these cases the family of sets of buyers that may simultaneously receive an item is a uniform matroid. Let us be even more ambitious, and assume that we have an arbitrary matroid constraint on the set of buyers that may get an item.

Chawla, Hartline, Malec and Sivan [1] prove that in this case there still exists an SPP mechanism $M$ that achieves a $1/2$-approximation to the optimal social welfare. The mechanism they propose assumes that the order in which to make offers to the buyers can be controlled. The mechanism $M$ works as follows.

  • For every buyer $i$, let $r_i$ be the probability that buyer $i$ gets an item in the social-welfare-optimizing allocation. (Note that this allocation is random, because the valuations are random: for every vector of valuations of the buyers there is a (possibly distinct) optimal allocation.) For buyer $i$, set $p_i$ to be the price such that $\mathbf{Pr}[v_i \geq p_i] = r_i$.
  • Iteratively offer the next buyer $i$ a take-it-or-leave-it price $p_i$. Make the offers in order of non-increasing $p_i$, and only make $i$ an offer in case allocating them an item will yield an independent set in the matroid.

The following can then be proved.

Theorem 2.  The expected social welfare of $M$ is at least half of the expected optimum social welfare.

The above can be seen as a prophet-inequality-type result for a setting where the gambler/auctioneer can select an independent set in a given matroid, instead of only a single item. However, the result requires that the gambler can choose the order of the random variables in advance.
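For a concrete feel, here is a Monte Carlo sketch of the SPP mechanism in the symmetric special case of $k$ identical items (a rank-$k$ uniform matroid) with i.i.d. uniform valuations; by symmetry all posted prices coincide, so the offer order is irrelevant here. The parameter choices and names are ours:

```python
import random

random.seed(1)
n, k, trials = 10, 3, 100_000   # 10 buyers, k identical items: a rank-k uniform matroid

# With i.i.d. uniform(0,1) valuations, each buyer is in the welfare-optimal
# allocation (the top k) with probability r_i = k/n, so the posted price
# solving Pr[v_i >= p_i] = r_i is the same for every buyer: p = 1 - k/n.
price = 1 - k / n

mech_welfare = opt_welfare = 0.0
for _ in range(trials):
    v = [random.random() for _ in range(n)]
    opt_welfare += sum(sorted(v, reverse=True)[:k])   # the prophet's social welfare
    sold = 0
    for vi in v:                      # take-it-or-leave-it offers, one buyer at a time
        if sold < k and vi >= price:  # buyer accepts iff valuation is at least the price
            mech_welfare += vi
            sold += 1

mech_welfare /= trials
opt_welfare /= trials
print(f"SPP welfare ≈ {mech_welfare:.3f}, optimal welfare ≈ {opt_welfare:.3f}")
assert mech_welfare >= opt_welfare / 2   # the 1/2-approximation guarantee
```

In this symmetric instance the mechanism in fact recovers well over half of the optimum; the worst cases for the $1/2$ guarantee come from highly asymmetric valuation distributions.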

Matroid Prophet Inequalities

Kleinberg and Weinberg [3] strengthen the above result by proving that it is not needed to have control over the order in which to offer the prices to the buyers.
This yields a true generalization of the original prophet inequality and strengthens the above result on SPP mechanisms.

Theorem 3.  Suppose that $X_1, \ldots, X_n$ is a sequence of values, arriving one by one, drawn from independent non-negative distributions. If there is a matroid constraint $\mathcal{M}$ over the index set of the values that may be selected, then there exists a selection rule such that
$\mathbf{E}[\text{ Total value of the selected elements }] \geq \mathbf{E}[\max\{\sum_{i \in I} X_i : I \in \mathcal{M}\}]/2$.

The selection rule that achieves this sets a specific threshold value for each number that arrives, and selects the arriving element if and only if its value exceeds the threshold and the element can be added without violating the matroid constraint. The threshold at iteration $i$ may depend on the numbers that have arrived in previous iterations. The authors prove the existence of the desired threshold values in three steps.

  • They first define the notion of $\alpha$-balancedness, which is a property that a threshold-based selection rule may possess. Let $T_1, \ldots, T_n$ be a sequence of thresholds, where $T_i$ is the threshold value that is used in the $i$th iteration. Let $A$ be the set of elements that would be selected when using these thresholds, and let $B$ be the random set that a prophet selects (i.e., a basis of the matroid that maximizes the total value after observing the values). Let $R(A)$ be the (also random) subset of $B \setminus A$ of maximum total value such that $A \cup R(A)$ is a basis. The threshold rule is $\alpha$-balanced iff: (i.) For every realization of $T_1, \ldots, T_n$, the total threshold corresponding to the values in $A$ is at least $1/\alpha$ times the expected total value of the elements in $B \setminus R(A)$. Informally this means that the value of the set of elements that a foresighted prophet selects instead of $A$ is within a $1/\alpha$ factor of the value of the set selected by the threshold selection rule.
    (ii.) For every realization of $T_1, \ldots, T_n$, let $V$ be any set disjoint from $A$ such that $A \cup V$ is a matroid basis. Then the total threshold corresponding to the values in $V$ must be at most $(1 - 1/\alpha)$ times the expected total value of $R(A)$. Informally, this means that the total weight of the elements rejected by the threshold rule is small (no more than $(1 - 1/\alpha)$ times the value of the set $R(A)$ that a prophet would add in expectation).
  • They then prove that using $\alpha$-balanced thresholds yields a selection rule that achieves an expected value within $1/\alpha$ times the expected optimum value.
  • Finally they prove that there exist $2$-balanced thresholds. The threshold for the $i$th value is set to $T_i = \frac{1}{2}\mathbf{E}[w(R(A_{i-1})) - w(R(A_{i-1} \cup \{i\}))]$, where $w(S)$ denotes the total value of the elements in $S$. Note that these thresholds are actually adaptive: the way $T_i$ is set depends on the set $A_{i-1}$ that was selected among the first $i-1$ elements.


[1] Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proceedings of the Forty-second ACM Symposium on Theory of Computing (STOC), pages 311–320. ACM, 2010.

[2] Theodore P. Hill and Robert P. Kertz. A survey of prophet inequalities in optimal stopping theory. Contemporary Mathematics, 125, 1992.

[3] Robert Kleinberg and Seth Matthew Weinberg. Matroid prophet inequalities. In Proceedings of the Forty-fourth ACM Symposium on Theory of Computing (STOC), pages 123–136. ACM.

[4] Ulrich Krengel and Louis Sucheston. On semiamarts, amarts, and processes with finite value. Advances in Probability Related Topics.