This is a guest post by Zahra Parsaeian. This post is a follow-up to Tony Huynh’s 2015 blog post on the matroid secretary problem, focusing on the special case of laminar matroids. Zahra highlights the last decade’s steady progress on laminar matroids and outlines the recent greedy algorithm that brings the competitive ratio down to 4.75.
From Secretaries to Laminar Matroids
The classical secretary problem asks for a stopping rule that maximises the probability of hiring the best of $n$ applicants, who appear in uniformly random order. The well-known threshold strategy—observe the first $n/e$ applicants, then pick the first candidate better than all previous ones—succeeds with probability approaching $1/e$, i.e., it achieves a competitive ratio of $e$.
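The threshold strategy is easy to check empirically. Here is a minimal Monte Carlo sketch (function names and parameters are my own, chosen for illustration):

```python
import random

def secretary_success(n, trials=100_000, rng=random):
    """Estimate the probability that the n/e threshold rule hires the best."""
    cutoff = round(n / 2.718281828459045)   # observe the first ~n/e applicants
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))              # rank 0 = best applicant
        rng.shuffle(ranks)                  # uniformly random arrival order
        best_seen = min(ranks[:cutoff])     # benchmark set by the sample
        hired = None
        for r in ranks[cutoff:]:            # hire first applicant beating it
            if r < best_seen:
                hired = r
                break
        wins += (hired == 0)                # success iff the best was hired
    return wins / trials

print(secretary_success(50))                # close to 1/e ≈ 0.368
```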
The matroid secretary problem (MSP), introduced by Babaioff, Immorlica & Kleinberg (2007), generalises this set-selection game: instead of choosing a single element, we must pick an independent set of elements in an (unknown) weighted matroid that arrive online. The grand conjecture is that every matroid admits an $O(1)$-competitive algorithm.
A particularly structured—and surprisingly rich—family of matroids is the laminar matroids. Here the ground set $E$ is organised by a laminar family $\mathcal{F}$ (any two sets are either disjoint or nested), and each $B \in \mathcal{F}$ comes with a capacity $c(B)$; a subset $S \subseteq E$ is independent if $|S \cap B| \leq c(B)$ for every $B$. Partition matroids are the simplest laminar matroids, but many “hierarchical quota” constraints in practice are laminar as well.
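The independence test above is a one-liner in code. A toy sketch (the family and capacities below are made-up assumptions, purely for illustration):

```python
# Hypothetical laminar family over ground set {0,...,5}: any two blocks
# are disjoint or nested, and each block B carries a capacity c(B).
laminar = {
    frozenset(range(6)): 3,     # whole ground set: at most 3 elements
    frozenset({0, 1, 2}): 2,    # nested block: at most 2
    frozenset({0, 1}): 1,       # innermost block: at most 1
    frozenset({3, 4}): 1,
}

def is_independent(S, family):
    """S is independent iff |S ∩ B| <= c(B) for every block B."""
    return all(len(S & B) <= cap for B, cap in family.items())

print(is_independent({0, 2, 3}, laminar))   # True
print(is_independent({0, 1, 3}, laminar))   # False: {0,1} exceeds capacity 1
```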
Why single them out? Laminar matroids already capture the core difficulty of MSP while admitting extra structure that can be exploited algorithmically.
A Decade of Improving Constants
Tony’s 2015 survey listed the first constant-competitive algorithm for laminar matroids, due to Im & Wang (competitive ratio $\approx 5333.33$). Since then the record book has been rewritten multiple times:
| Year | Authors | Technique | Competitive ratio |
| --- | --- | --- | --- |
| 2011 | Im & Wang | Reduction to partition matroids + “sample and price” | $16000/3 \approx 5333.33$ |
| 2013 | Jaillet, Soto & Zenklusen | Improved reduction | $3\sqrt{3}\,e \approx 14.12$ |
| 2016 | Ma, Tang & Wang | Simulated greedy | $9.6$ |
| 2018 | Soto, Turkieltaub & Verdugo | Forbidden sets | $3\sqrt{3} \approx 5.196$ |
| 2024 | Huang, Parsaeian & Zhu | Plain greedy | $4.75$ |
| 2024 | Bérczi, Livanos, Soto & Verdugo | Labeling scheme (tight for greedy) | $1/(1 - \ln 2) \approx 3.257$ |
Progress has come from increasingly delicate analyses, often at the price of added algorithmic complexity. The 2024 greedy result bucks that trend: a simpler algorithm with a better constant.
Greedy—But With Better Timing
Before we describe the algorithm, recall the standard arrival model: each element independently receives a uniformly random arrival time in $[0,1]$, yielding a random order of appearance that our online algorithm must process.
Our algorithm really is the textbook greedy rule, decorated with a single time threshold $t_0$.
- Sampling phase. Ignore all elements that arrive before time $t_0$ (we use $t_0 = 0.7$).
- Selection phase. When an element $e$ arrives at time $t > t_0$, inspect the elements seen so far. If $e$ belongs to the offline optimum of the already-arrived instance and adding it keeps the set independent, accept it.
That is all! There are no prices, buckets, or recursive calls—just a look-ahead greedy algorithm.
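The two-phase rule can be sketched in a few lines. The instance and all names below are illustrative assumptions, not code from the paper; the offline optimum is computed by the standard matroid greedy over the elements seen so far:

```python
import random

def is_independent(S, family):
    """Laminar independence: |S ∩ B| <= c(B) for every block B."""
    return all(len(S & B) <= cap for B, cap in family.items())

def offline_optimum(elements, weights, family):
    """Standard matroid greedy: scan by decreasing weight, keep what fits."""
    opt = set()
    for e in sorted(elements, key=lambda x: -weights[x]):
        if is_independent(opt | {e}, family):
            opt.add(e)
    return opt

def timed_greedy(weights, family, t0=0.7, rng=random):
    times = {e: rng.random() for e in weights}   # i.i.d. Uniform[0,1] arrivals
    seen, picked = set(), set()
    for e in sorted(weights, key=times.get):     # process in arrival order
        seen.add(e)
        if times[e] <= t0:
            continue                             # sampling phase: observe only
        # Selection phase: accept e iff it lies in the offline optimum of the
        # already-arrived instance and keeps the picked set independent.
        if e in offline_optimum(seen, weights, family) \
                and is_independent(picked | {e}, family):
            picked.add(e)
    return picked

family = {frozenset(range(4)): 2, frozenset({0, 1}): 1}
weights = {0: 5.0, 1: 4.0, 2: 3.0, 3: 2.0}
print(timed_greedy(weights, family))             # some independent set
```

Recomputing the optimum at every arrival is done here for clarity; an implementation would maintain it incrementally.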
Why does it work? Greedy alone is not new: the 9.6-competitive algorithm of Ma et al. also used greedy, but their acceptance rule only looked at the sample window. The key novelty is to test membership in the current optimum, which becomes increasingly selective as more elements arrive.
The analysis hinges on two observations:
- Independence of arrival times. Viewing arrival times as i.i.d. uniform variables decouples the combinatorial structure from time.
- Order statistics $\rightarrow$ Gamma distribution. The spacings between consecutive uniform arrival times are nearly exponential, so for each laminar constraint $B$ the time window covering the last $c(B)$ optimal elements of $B$ is approximately Gamma-distributed. Bounding the tail of this distribution shows that every optimal element survives with probability $\geq 1/4.75$.
The final ratio is therefore 4.75-probability-competitive, which also implies 4.75-utility-competitive. Moreover, the algorithm operates in the ordinal model (it only needs relative weight rankings), aligning with applications where exact weights are costly to elicit.
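The order-statistics observation is easy to sanity-check numerically. The following Monte Carlo sketch (illustrative only, not from the paper) measures the window from the $k$-th-latest of $n$ uniform arrivals to the horizon $1$: a sum of $k$ near-exponential spacings, hence approximately $\mathrm{Gamma}(k, n)$ with mean $k/n$ for large $n$:

```python
import random
import statistics

def last_k_window(n, k, rng=random):
    """Time from the k-th-latest of n uniform arrival times to the horizon 1."""
    u = sorted(rng.random() for _ in range(n))
    return 1.0 - u[-k]      # length of the window covering the last k arrivals

n, k = 200, 5
mean = statistics.mean(last_k_window(n, k) for _ in range(50_000))
print(mean)                 # close to the Gamma(k, n) mean k/n = 0.025
```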
Greedy’s limit: the 3.257 barrier
Bérczi et al. recently introduced a labeling scheme framework and proved that the best achievable competitive ratio for any greedy algorithm on laminar matroids is $\frac{1}{1 - \ln 2} \approx 3.257$. This is tight: they present a greedy variant that achieves it, and show no greedy algorithm can do better.
How close are we to “optimal”?
Laminar matroids are among the richest matroid classes for which a constant competitive ratio is known. A natural next step is to sharpen the constant—can we reach the golden goal of $e$? On the structural side, extending the greedy-with-look-ahead idea to regular or binary matroids looks tantalising (the last conjecture in Tony’s post is still wide open). The recent structure theorems for minor-closed classes may reopen that door.
Takeaways
- Simplicity can win. A one-line greedy rule beats a sophisticated forbidden-sets construction.
- Timing matters. The threshold $t_0 = 0.7$ balances the information gathered during the sampling phase against the risk of discarding high-value elements that arrive early.
- Open questions abound. Improving the constant for laminar matroids (or proving a lower bound!), and generalising to wider matroid families, remain fertile ground.
I hope this short note complements Tony’s earlier exposition and sparks fresh interest in the laminar MSP. Feedback, questions, and ideas are most welcome—please share them in the comments or reach out directly.
This post was updated on May 26, 2025 to include the result in the recent Bérczi et al. paper.