
The menu complexity of “one-and-a-half-dimensional” mechanism design

Raghuvansh R. Saxena Department of Computer Science, Princeton University, rrsaxena@cs.princeton.edu.    Ariel Schvartzman Department of Computer Science, Princeton University, acohenca@cs.princeton.edu.    S. Matthew Weinberg Department of Computer Science, Princeton University, smweinberg@princeton.edu.
Abstract

We study the menu complexity of optimal and approximately-optimal auctions in the context of the “FedEx” problem, a so-called “one-and-a-half-dimensional” setting where a single bidder has both a value and a deadline for receiving an item [FGKK16]. The menu complexity of an auction is equal to the number of distinct (allocation, price) pairs that a bidder might receive [HN13]. We show the following when the bidder has $n$ possible deadlines:

  • Exponential menu complexity is necessary to be exactly optimal: There exist instances where the optimal mechanism has menu complexity $\geq 2^n-1$. This matches exactly the upper bound provided by Fiat et al.’s algorithm, and resolves one of their open questions [FGKK16].

  • Fully polynomial menu complexity is necessary and sufficient for approximation: For all instances, there exists a mechanism guaranteeing a multiplicative $(1-\epsilon)$-approximation to the optimal revenue with menu complexity $O\left(n^{3/2}\sqrt{\frac{\min\{n/\epsilon,\ln(v_{\max})\}}{\epsilon}}\right)=O(n^2/\epsilon)$, where $v_{\max}$ denotes the largest value in the support of integral distributions.

  • There exist instances where any mechanism guaranteeing a multiplicative $(1-O(1/n^2))$-approximation to the optimal revenue requires menu complexity $\Omega(n^2)$.

Our main technique is the polygon approximation of concave functions [Rot92], and our results here should be of independent interest. We further show how our techniques can be used to resolve an open question of [DW17] on the menu complexity of optimal auctions for a budget-constrained buyer.

1 Introduction

It is by now quite well understood that optimal mechanisms are far from simple: they may be randomized [Tha04, BCKW10, HN13], behave non-monotonically [HR15, RW15], and be computationally hard to find [CDW13, DDT14, CDP+14, Rub16]. To cope with this, much recent attention has shifted to the design of simple, but approximately optimal mechanisms (e.g. [CHK07, CHMS10, HN12, BILW14]). However, the majority of these works take a binary view on simplicity, developing simple mechanisms that guarantee constant-factor approximations. Only recently have researchers started to explore the tradeoff space between simplicity and optimality through the lens of menu complexity.

Hart and Nisan first proposed the menu complexity as one quantitative measure of simplicity, which captures the number of different outcomes that a buyer might see when participating in a mechanism [HN13]. For example, the mechanism that offers only the grand bundle of all items at price $p$ (or nothing at price 0) has menu complexity 1. The mechanism that offers any single item at price $p$ (or nothing at price 0) has menu complexity $n$, and randomized mechanisms could have infinite menu complexity.

Still, all results to date regarding menu complexity have really been more qualitative than quantitative. For example, only just now is the state-of-the-art able to show that for a single additive bidder with independent values for multiple items and all $\varepsilon>0$, the menu complexity required for a $(1-\varepsilon)$-approximation is finite [BGN17] (and even reaching this point was quite non-trivial). On the quantitative side, the best known positive results for a single additive or unit-demand bidder with independent item values require menu complexity $\exp(n)$ for a $(1-\varepsilon)$-approximation, but the best known lower bounds have yet to rule out that $\mathrm{poly}(n)$ menu complexity suffices for a $(1-\varepsilon)$-approximation in either case. In this context, our work provides the first nearly-tight quantitative bounds on menu complexity in any multi-dimensional setting.

1.1 One-and-a-half dimensional mechanism design

The setting we consider is the so-called “FedEx Problem,” first studied in [FGKK16]. Here, there is a single bidder with a value $v$ for the item and a deadline $i$ for receiving it, and the pair $(v,i)$ is drawn from an arbitrarily correlated distribution where the number of possible deadlines is finite ($n$). The buyer’s value for receiving the item by her deadline is $v$, and her value for receiving the item after her deadline (or not at all) is 0. While technically a two-dimensional problem, optimal mechanisms for the FedEx problem don’t suffer the same undesirable properties as “truly” two-dimensional problems. Still, the space of optimal mechanisms is considerably richer than single-dimensional problems (hence the colloquial term “one-and-a-half dimensional”). More specifically, while the optimal mechanism might be randomized, it has menu complexity at most $2^n-1$, and there is an inductive closed-form solution describing it. Additionally, there is a natural condition on each $F_i$ (the marginal distribution of $v$ conditioned on $i$) guaranteeing that the optimal mechanism is deterministic (and therefore has menu complexity $\leq n$). (This condition is called “decreasing marginal revenues,” and is satisfied by distributions with CDF $F$ and PDF $f$ such that $x\cdot f(x)-1+F(x)$ is monotone non-decreasing.)

A number of recent (and not-so-recent) works examine similar settings such as when the buyer has a value and budget [LR96, CG00, CMM11, DW17], or a value and a capacity [DHP17], and observe similar structure on the optimal mechanism. Such settings are quickly gaining interest within the algorithmic mechanism design community as they are rich enough for optimal mechanisms to be highly non-trivial, but not quite so chaotic as truly multi-dimensional settings.

1.2 Our results

We study the menu complexity of optimal and approximately optimal mechanisms for the FedEx problem. Our first result proves that the $2^n-1$ upper bound on the menu complexity of the optimal mechanism provided by Fiat et al.’s algorithm is exactly tight:

Theorem 1.1.

For all $n$, there exist instances of the FedEx problem on $n$ days where the menu complexity of the optimal mechanism is $2^n-1$.

From here, we turn to approximation and prove our main results. First, we show that fully polynomial menu complexity suffices for a $(1-\varepsilon)$-approximation. The guarantee below is always $O(n^2/\varepsilon)$, but is often improved for specific instances. Below, if the FedEx instance happens to have integral support and the largest value is $v_{\max}$, we can get an improved bound (but if the support is continuous or otherwise non-integral, we can just take the $n/\varepsilon$ term instead). (Actually, our bounds can be improved to replace $v_{\max}$ with many other quantities that are always $\leq v_{\max}$ and that remain well-defined for continuous distributions; more on this in Section 4.)

Theorem 1.2.

For all instances of the FedEx problem on $n$ days, there exists a mechanism of menu complexity $O\left(n\sqrt{\frac{\min\{n/\varepsilon,\ln(v_{\max})\}}{\varepsilon/n}}\right)$ guaranteeing a $(1-\varepsilon)$-approximation to the optimal revenue.

In Theorem 1.2, observe that for any fixed instance, as $\varepsilon\rightarrow 0$, our bound grows like $O(1/\sqrt{\varepsilon})$ (because eventually $n/\varepsilon$ will exceed $\ln(v_{\max})$). Similarly, our bound is always $O(n^2/\varepsilon)$ for any $v_{\max}$. Both of these dependencies are provably tight for our approach (discussed shortly in Section 1.3), and in general tight up to a factor of $\sqrt{n\log n}$. (The gap of $\sqrt{n\log n}$ comes as our upper bound approach requires that we lose at most $\varepsilon\mathsf{OPT}/n$ “per day,” while our lower bound approach shows that any mechanism with lower menu complexity loses at least $\varepsilon\mathsf{OPT}$ on some day.)

Theorem 1.3.

For all $n$, there exists an instance of the FedEx problem on $n$ days with $v_{\max}=O(n)$, such that the menu complexity of every $(1-O(1/n^2))$-optimal mechanism is $\Omega(n^2)$.

We consider Theorems 1.2 and 1.3 to be our main results, with Theorem 1.1 motivating the study of approximation in the first place. Taken together, the picture provided by these results is the following:

  • Exactly optimal mechanisms can require exponential menu complexity (Theorem 1.1), while $(1-\varepsilon)$-approximate mechanisms exist with fully polynomial menu complexity (Theorem 1.2).

  • The menu complexity required to guarantee a $(1-\varepsilon)$-approximation is nailed down within a multiplicative $\sqrt{n\log n}$ gap, and lies in $\left[\Omega\left(\sqrt{n/\log n}\cdot\sqrt{\frac{\min\{n/\varepsilon,\ln(v_{\max})\}}{\varepsilon/n}}\right),\ O\left(n\cdot\sqrt{\frac{\min\{n/\varepsilon,\ln(v_{\max})\}}{\varepsilon/n}}\right)\right]$ (lower bound: Theorem 1.3, upper bound: Theorem 1.2).

1.3 Our techniques

We’ll provide an intuitive proof overview for each result in the corresponding technical section, but we briefly want to highlight one aspect of our approach that should be of independent interest.

It turns out that the problem of revenue maximization with bounded menu complexity really boils down to a question of how well piece-wise linear functions with a bounded number of segments can approximate concave functions (we won’t get into details of why this is the case until Section 4). This is a quite well-studied problem called polygon approximation (e.g. [Rot92, YG97, BHR91]). Questions asked here are typically of the form “for a concave function $f$ and interval $[0,v_{\max}]$ such that $f'(0)=1$, $f'(v_{\max})=0$, what is the minimum number of segments a piece-wise linear function $g$ must have to guarantee $f(x)\geq g(x)\geq f(x)-\varepsilon$ for all $x\in[0,v_{\max}]$?”

The answer to the above question is $\Theta(\sqrt{v_{\max}/\varepsilon})$ [Rot92, YG97]. This bound certainly suffices for our purposes to get some bound on the menu complexity of $(1-\varepsilon)$-approximate auctions, but it would be much weaker than what Theorem 1.2 provides (we’d have linear instead of logarithmic dependence on $v_{\max}$, and no option to remove $v_{\max}$ from the picture completely). Interestingly though, for our application absolute additive error doesn’t tightly characterize what we need (again, we won’t get into why this is the case until Section 4). Instead, we are really looking for the following kind of guarantee, which is a bit of a hybrid between additive and multiplicative: for a concave function $f$ and interval $[0,v_{\max}]$ such that $f'(0)=1$, $f'(v_{\max})=0$, what is the minimum number of segments a piece-wise linear function $g$ must have to guarantee $f(x)\geq g(x)\geq f(x)-\varepsilon-\varepsilon(f(v_{\max})-f(0))$?

At first glance it seems like this really shouldn’t change the problem at all: why don’t we just redefine $\varepsilon':=\varepsilon(1+f(v_{\max})-f(0))$ and plug into the upper bounds of Rote for $\varepsilon'$? This is indeed completely valid, and we could again chase through and obtain some weaker version of Theorem 1.2 that also references additional parameters in unintuitive ways. But it turns out that for all examples in which this $\Omega(\sqrt{v_{\max}/\varepsilon})$ dependence is tight, there is actually quite a large gap between $f(0)$ and $f(v_{\max})$, and a greatly improved bound is possible (which replaces the linear dependence on $v_{\max}$ with logarithmic dependence, and provides an option to remove $v_{\max}$ from the picture completely at the cost of worse dependence on $\varepsilon$).

Theorem 1.4.

For any concave function $f$ and any $\varepsilon>0$ such that $f'(0)\leq 1$, $f'(v_{\max})\geq 0$, there exists a piece-wise linear function $g$ such that $f(x)\geq g(x)\geq f(x)-\varepsilon(1+f(v_{\max})-f(0))$ with $\Theta(\sqrt{\ln(v_{\max})/\varepsilon})$ segments, and this is tight.

If one wishes to remove the dependence on $v_{\max}$, then one can replace the bound with $\Theta(1/\varepsilon)$, which is also tight (among bounds that don’t depend on $v_{\max}$).

The proof of Theorem 1.4 is self-contained and appears in Section 4. Both the statement of Theorem 1.4 and our proof will be useful for future work on menu complexity, and possibly outside of mechanism design as well: to the best of our knowledge these kinds of hybrid guarantees haven’t been previously considered. (Interestingly, and completely unrelated to this work, hybrid additive-multiplicative approximations for core problems in online learning have also found use in other recent directions in AGT [DJF16, SBN17].)

1.4 Related work

Menu complexity. Initial results on menu complexity prove that for a single additive or unit-demand bidder with arbitrarily correlated item values over just 2 items, there exist instances where the optimal (randomized, with infinite menu complexity) mechanism achieves infinite revenue, while any mechanism of menu complexity $C$ achieves revenue $\leq C$ (so no finite approximation is possible with bounded menu complexity) [BCKW10, HN13]. This motivated follow-up work subject to assumptions on the distributions, such as a generalized hazard rate condition [WT14], or independence across item values [DDT13, BGN17]. Even for a single bidder with independent values for two items, the optimal mechanism could have uncountable menu complexity [DDT13], motivating the study of approximately optimal mechanisms subject to these assumptions. Only just recently did we learn that the menu complexity is indeed finite for this setting [BGN17].

It is also worth noting that other notions of simplicity have been previously considered as well, such as the sample complexity (how many samples from a distribution are required to learn an approximately optimal auction?). Here, quantitative bounds are known for the single-item setting (where the menu complexity question is trivial: optimal mechanisms have menu complexity 1) [CR14, HMR15, DHP16, GN17], but again only binary bounds are known for the multi-item setting: few samples suffice for a constant-factor approximation if values are independent [MR15, MR16], while exponentially many samples are required when values are arbitrarily correlated [DHN14]. In comparison to the works of the previous paragraphs, we are the first to nail down “the right” quantitative menu complexity bounds in any multi-dimensional setting.

One-and-a-half dimensional mechanism design. One-and-a-half dimensional settings have been studied for decades by economists, the most notable example possibly being that of a single buyer with a value and a budget [LR96, CG00]. Recently, such problems have become popular within the AGT community as optimal auctions are more involved than single-dimensional settings, but not quite so chaotic as truly multidimensional settings [FGKK16, DW17, DHP17]. Each of these works focuses exclusively on exactly optimal mechanisms (and exclusively on positive results). In comparison, our work is both the first to prove lower bounds on the complexity of (approximately) optimal mechanisms in these settings, and the first to provide nearly-optimal mechanisms that are considerably less complex.

Polygon approximation. Prior work on polygon approximation is vast, and includes, for instance, core results on univariate concave functions [Rot92, BHR91, YG97], the study of multi-variate functions [Bro08, GG09, DDY16], and even applications in robotics [BGO07]. The more recent work has mostly been pushing toward better guarantees for higher dimensional functions. To the best of our knowledge, the kinds of guarantees we target via Theorem 1.4 haven’t been previously considered, and could prove more useful than absolute additive guarantees for some applications.

1.5 Organization

In Section 2, we formally describe the FedEx problem and recap the main result of [FGKK16]. In Section 3 we present an instance of the FedEx problem whose menu complexity for optimal auctions is exponential, the worst possible. In Section 4 we present a mechanism that guarantees a $(1-\varepsilon)$ fraction of the optimal revenue with a menu complexity of $O(n^2/\varepsilon)$. We also explain the connection between approximate auctions and polygon approximation. In Section 5 we present an instance of the FedEx problem that requires a menu complexity of $\Omega(n^2)$ in order to approximate the revenue within $1-O(1/n^2)$. In Section 7 we use similar techniques to those of Section 3 to construct an example resolving an open question of [DW17]. (Specifically, [DW17] ask whether the optimal mechanism for a single buyer with a private budget and a regular value distribution conditioned on each possible budget is deterministic. The answer is yes if we replace “regular” with “decreasing marginal revenues,” or “private budget” with “public budget.” We show that the answer is no in general: the optimal mechanism, even subject to regularity, could be randomized.)

2 Preliminaries

We consider a single bidder whose type depends on two parameters: a value $v$ and a deadline $i\in[n]$. Deterministic outcomes that the seller can award are just a day $\in[n]$ on which to ship the item, or to not ship the item at all (and the seller may also randomize over these outcomes). A buyer receives value $v$ if the item is shipped by her deadline, and 0 if it is shipped after her deadline (or not at all).

The types $(v,i)$ are drawn from a known (possibly correlated) distribution $\mathcal{F}$. Let $q_i$ denote the probability that the bidder’s deadline is $i$ and $\mathcal{F}_i$ the marginal distribution of $v$ conditioned on a deadline of $i$. For simplicity of exposition, in several parts of this paper we’ll assume that $\mathcal{F}$ is supported on $\{0,1,\ldots,v_{\max}\}\times\{1,\ldots,n\}$. This assumption is w.l.o.g., and all results extend to continuous distributions, or distributions with arbitrary discrete support if desired [CDW16].

In Appendix A, we provide the standard linear program whose solution yields the revenue-optimal auction for the FedEx problem. We only note here the relevant incentive compatibility constraints (observed in [FGKK16]). First, note that w.l.o.g. whenever the buyer has deadline $i$, the optimal mechanism can ship her the item (if at all) exactly on day $i$. Shipping the item earlier doesn’t make her any happier, but might make the buyer interested in misreporting and claiming a deadline of $i$ if her deadline is in fact earlier. Next note that, subject to this, the buyer never has an incentive to overreport her deadline, but she still might have incentive to underreport her deadline (or misreport her value).

We will be interested in understanding the menu complexity of auctions, which is the number of different outcomes that, depending on the buyer’s type, are ever selected. If $\pi(v,i)$ denotes the probability that a buyer with value $v$ and deadline $i$ receives the item, then we define the $i$-deadline menu complexity to be the number of distinct options on deadline $i$ ($|\{p \mid \exists v,\ \pi(v,i)=p\}|$). The menu complexity then just sums the $i$-deadline menu complexities, and we will sometimes also refer to the “deadline menu complexity” as the maximum of the $i$-deadline menu complexities.
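As a toy illustration of this bookkeeping (the menu and its numbers are made up by us, not taken from the paper), counting distinct options per deadline can be sketched as:

```python
# Hypothetical toy example: count the i-deadline menu complexities as the
# number of distinct options offered on each deadline, then sum for the
# overall menu complexity; the max gives the "deadline menu complexity".

def menu_complexities(menu):
    """menu: dict deadline -> list of (allocation prob, price) options."""
    per_deadline = {i: len(set(opts)) for i, opts in menu.items()}
    total = sum(per_deadline.values())
    worst = max(per_deadline.values())
    return per_deadline, total, worst

# Deadline 2 repeats an option, so it contributes only 2 distinct options.
menu = {1: [(1.0, 5.0)], 2: [(0.5, 2.0), (1.0, 5.0), (0.5, 2.0)]}
per, total, worst = menu_complexities(menu)
```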

2.1 Optimal auctions for the FedEx problem

Here, we recall some tools from [FGKK16] regarding optimal mechanisms for the FedEx problem. The first tool they use is the notion of a revenue curve. (For those familiar with revenue curves, note that this revenue curve is intentionally drawn in value space, and not quantile space.)

Definition 2.1 (Revenue curves).

For a given deadline $i$, define the $i^{th}$ revenue curve $R_i$ so that

$$R_i(v)=q_i\cdot v\cdot\Pr_{x\leftarrow\mathcal{F}_i}[x\geq v].$$

Intuitively, $R_i(v)$ captures the achievable revenue by setting price $v$ exclusively for consumers on deadline $i$. It is also necessary to consider the ironed revenue curve, defined below.
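For concreteness, here is a minimal sketch of Definition 2.1 for a discrete marginal; the distribution and $q_i$ below are our own made-up numbers:

```python
# Minimal sketch of Definition 2.1 for a discrete marginal F_i
# (q_i and the value distribution are invented for illustration).

def revenue_curve(q_i, support, probs):
    """R_i(v) = q_i * v * Pr_{x ~ F_i}[x >= v], for each v in `support`."""
    return {v: q_i * v * sum(p for x, p in zip(support, probs) if x >= v)
            for v in support}

# Deadline i occurs with probability q_i = 0.5; value uniform on {1, 2, 3}.
R = revenue_curve(0.5, [1, 2, 3], [1/3, 1/3, 1/3])
# R peaks at the monopoly price for this marginal (here v = 2).
```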

Definition 2.2 (Ironed revenue curves).

For any revenue curve $R_i$, define $\tilde{R}_i$ to be its upper concave envelope (that is, $\tilde{R}_i$ is the smallest concave function such that $\tilde{R}_i(x)\geq R_i(x)$ for all $x$). We say $\tilde{R}_i$ is ironed at $v$ if $\tilde{R}_i(v)\neq R_i(v)$, and we call $[x,y]$ an ironed interval of $\tilde{R}_i$ if $\tilde{R}_i$ is not ironed at $x$ or $y$, but is ironed at $v$ for all $v\in(x,y)$.
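A small self-contained sketch of how one might compute this envelope for a discrete revenue curve (a standard upper-hull computation; the code and the example points are ours, not from the paper):

```python
def upper_concave_envelope(points):
    """Vertices of the upper concave envelope of `points` (sorted by x);
    the envelope itself is the linear interpolation between vertices."""
    hull = []
    for x3, y3 in points:
        # Pop the last vertex while it lies on or below the chord from the
        # vertex before it to the new point (this keeps the chain concave).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    return hull

# The dip at v = 2 is "ironed": [1, 3] becomes an ironed interval.
pts = [(0, 0.0), (1, 1.0), (2, 0.5), (3, 1.2), (4, 0.0)]
env = upper_concave_envelope(pts)   # the vertex (2, 0.5) is dropped
```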

Of course, it is not sufficient to consider each possible deadline of the buyer in isolation. In particular, offering certain options on day $i$ constrains what can be offered on days $\geq i$ subject to incentive compatibility. For instance, if some $(v,i)$ pair receives the item with probability 1 on day 1 for price $p$, no bidder with a deadline $\geq 1$ will ever choose to pay $>p$. So we would also like a revenue curve that captures the optimal revenue we can make from days $\geq i$ conditioned on selling the item deterministically at price $p$ on day $i$. It’s not obvious how to construct such a curve, but this is one of the main contributions of [FGKK16], stated below.

Definition 2.3.

Let $R_{\geq n}(v):=R_n(v)$, and $r_{\geq n}:=\operatorname*{arg\,max}_v R_{\geq n}(v)$. Define, for $i=n-1$ down to $1$:

$$R_{\geq i}(v)=\begin{cases}R_i(v)+\tilde{R}_{\geq i+1}(v)&v<r_{\geq i+1}\\ R_i(v)+\tilde{R}_{\geq i+1}(r_{\geq i+1})&v\geq r_{\geq i+1}.\end{cases}$$
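Read as code, this recursion is a one-liner; below is a sketch on a discrete value grid with our own names (`R_tilde_next` stands for $\tilde{R}_{\geq i+1}$, `r_next` for $r_{\geq i+1}$, and the numbers are invented):

```python
# Sketch of Definition 2.3: for v < r_next add R~_{>=i+1}(v); past r_next
# the added term is capped at its maximum value R~_{>=i+1}(r_next).

def R_geq(R_i, R_tilde_next, r_next):
    """R_i, R_tilde_next: dicts value -> revenue; r_next maximizes R_tilde_next."""
    return {v: R_i[v] + R_tilde_next[min(v, r_next)] for v in R_i}

R_i = {0: 0.0, 1: 0.5, 2: 0.6, 3: 0.3}
R_tilde_next = {0: 0.0, 1: 0.4, 2: 0.5, 3: 0.2}
out = R_geq(R_i, R_tilde_next, r_next=2)
# out[3] = R_i[3] + R_tilde_next[2] = 0.3 + 0.5 = 0.8 (capped at r_next)
```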
Lemma 2.1 ([FGKK16]).

$R_{\geq i}(v)$ is the optimal revenue of any mechanism that satisfies the following:

  • The buyer can either receive the item on day $i$ and pay $v$, or receive nothing/pay nothing.

  • The buyer cannot receive the item on any day $<i$.

Moreover, for any $v_1<\ldots<v_k$ and $a_i(1),\ldots,a_i(k)\geq 0$ such that $\sum_j a_i(j)\leq 1$, $\sum_j a_i(j)R_{\geq i}(v_j)$ is the optimal revenue of any mechanism that satisfies the following:

  • The buyer can receive the item on day $i$ with probability $\sum_{j\leq\ell}a_i(j)$ and pay $\sum_{j\leq\ell}a_i(j)v_j$, for any $\ell\in[k]$ (or not receive the item on day $i$ and pay nothing).

  • The buyer cannot receive the item on any day $<i$.

Finally, we describe the optimal mechanism provided by [FGKK16], which essentially places mass optimally upon each day’s revenue curve, subject to constraints imposed by the decisions of previous days (a more detailed description appears in Appendix A, but the description below will suffice for our paper). First, simply set any price $p$ maximizing $R_{\geq 1}(p)$ to receive the item on day 1 (as day 1 is unconstrained by previous days). Now inductively, assume that the options for day $i$ have been set and we’re deciding what to do for day $i+1$. If the menu options offered on day $i$ are $(\pi_0,p_0),\ldots,(\pi_k,p_k)$ (interpret the option $(\pi_j,p_j)$ as “charge $p_j$ to ship the item on day $i$ with probability $\pi_j$”), think of this instead as a distribution over prices, where price $\frac{p_j-p_{j-1}}{\pi_j-\pi_{j-1}}$ has mass $\pi_j-\pi_{j-1}$ (this is the standard transformation between “lotteries” and “distributions over prices,” e.g. [RZ83]). For each such price $p$, it will undergo one of the following three operations to become an option for day $i+1$.

  • If $p\geq r_{\geq i+1}$, move all mass from $p$ to $r_{\geq i+1}$.

  • If $\tilde{R}_{\geq i+1}$ is not ironed at $p$, and $p\leq r_{\geq i+1}$, keep all mass at $p$.

  • If $\tilde{R}_{\geq i+1}$ is ironed at $p$, and $p\leq r_{\geq i+1}$, let $[x,y]$ denote the ironed interval containing $p$, and let $qx+(1-q)y=p$. Move a $q$ fraction of the mass at $p$ to $x$, and a $(1-q)$ fraction of the mass at $p$ to $y$.

Once the mass is resettled, if there is mass $a_i(j)$ on price $p_j$ for $p_1<\ldots<p_k$, the buyer will have the option to receive the item on day $i$ with probability $\sum_{j\leq\ell}a_i(j)$ for price $\sum_{j\leq\ell}a_i(j)p_j$ for any $\ell\in[k]$ (or not at all). Note that due to case three in the transformation above, there could be up to twice as many menu options on day $i$ as on day $i-1$.
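The three mass-resettling operations above can be sketched as follows (our own rendering, with a hypothetical helper `ironed_interval` that returns the ironed interval of the next day's curve containing $p$, or `None` if $p$ is not ironed):

```python
def resettle(mass_at_price, r_next, ironed_interval):
    """Resettle one day's price masses into the next day's, per the 3 cases."""
    out = {}
    def add(p, m):
        out[p] = out.get(p, 0.0) + m
    for p, m in mass_at_price.items():
        if p >= r_next:                    # case 1: shift all mass to r_{>=i+1}
            add(r_next, m)
        elif ironed_interval(p) is None:   # case 2: not ironed, keep mass at p
            add(p, m)
        else:                              # case 3: split to the interval ends
            x, y = ironed_interval(p)
            q = (y - p) / (y - x)          # chosen so that q*x + (1-q)*y = p
            add(x, q * m)
            add(y, (1 - q) * m)
    return out

# Toy run: [1, 3] is the only ironed interval, and r_{>=i+1} = 5.
iv = lambda p: (1.0, 3.0) if 1.0 < p < 3.0 else None
new = resettle({2.0: 1.0, 6.0: 0.4}, r_next=5.0, ironed_interval=iv)
# mass at price 2 splits evenly between 1 and 3; mass at 6 shifts to 5
```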

Theorem 2.2 ([FGKK16]).

The allocation rule described above is the revenue-optimal auction.

3 Optimal Mechanisms Require Exponential Menu Complexity

In this section we overview our construction for an instance of the FedEx problem with $v_{\max}$ integral values for each day and $n\leq\log(v_{\max})$ days where the $i$-deadline menu complexity of the optimal mechanism is $2^{i-1}$ for all $i$ (and this is the maximum possible [FGKK16]), implying that the menu complexity is $2^n-1$. Note that the deadline menu complexity is always upper bounded by $v_{\max}$, so $v_{\max}$ must be at least $2^n$.

At a high level, constructing the example appears straightforward, once one understands Fiat et al.’s algorithm (end of Section 2). Every menu option from day $i$ is either “shifted” to $r_{\geq i+1}$, “copied,” or “split.” If the option is shifted or copied, it spawns only a single menu option on day $i+1$, while if split it spawns two (hence the upper bound of $2^n-1$). So the goal is just to construct an instance where every option is split on every day.

Unfortunately, this is not quite so straightforward: whether or not an option is split depends on whether it lies inside an ironed interval of the $R_{\geq i}$ curve, which is itself the sum of revenue curves (some ironed and some not), and going back and forth between distributions and sums of revenue curves is somewhat of a mess. So really what we’d like to do is construct the $R_{\geq i}$ curves directly, and be able to claim that there exists a FedEx input inducing them. While not every profile $(R_{\geq 1},\ldots,R_{\geq n})$ of curves is valid, we do provide a broad class of curves for which it is somewhat clean to show that there exists a FedEx input inducing them.

From here, it is then a matter of ensuring that we can find the revenue curve profiles we want (where for every day $i$, every menu option is split, because it is inside an ironed interval of $R_{\geq i}$) within our class. We’ll highlight parts of our construction below, but most details are in Appendix B.

Lemma 3.1.

For any $v_{\max}$ and $n\leq\log(v_{\max})$, there exists an input to the FedEx problem such that:

  • $R_1$ is maximized at $v_{\max}/2$ (that is, $R_1(v_{\max}/2)\geq R_1(x)\ \forall x$) and has no ironed intervals.

  • For all $i>1$, $\tilde{R}_i$ has a maximizer at price $v\geq 2^i(2^{n-i}-1)$ and has ironed intervals $[2^{n-i}+k2^{n-i+2},\ 2^{n-i}+k2^{n-i+2}+2^{n-i+1}]$ for $k\in\{0,\ldots,2^{i-2}-1\}$.

  • $\tilde{R}_i$ (the ironed revenue curve) is a constant function for all $i\geq 2$. (Note that it is possible for two disjoint ironed intervals to have the same slope.)

  • $R_{\geq i}$ has the same ironed intervals as $R_i$. In fact, $\forall x$, $R_{\geq i}(x)=R_i(x)+c$ for some constant $c$.

We include in Figure 2 a picture of the generated revenue curves for $n=4$. As a result of this construction, we see that $R_{\geq i}$ has $2^{i-2}$ ironed intervals, whose endpoints themselves lie in ironed intervals of $R_{\geq i+1}$. This guarantees that all menu options from day $i$ (which are guaranteed to be endpoints of ironed intervals) are split into two options on day $i+1$. The proof of Theorem 3.2 (which implies Theorem 1.1) formalizes this.

Theorem 3.2.

The optimal mechanism for any instance satisfying the conditions of Lemma 3.1 has $i$-deadline complexity $2^{i-1}$ for all $i$, and menu complexity $2^n-1$.

The proof of Theorem 3.2 is presented at the end of Section B of the Appendix.

4 Approximately Optimal Mechanisms with Small Menus

In this section, we describe a mechanism that attains at least a $(1-\varepsilon)$ fraction of the optimal revenue for any FedEx instance with menu complexity $O\left(n\sqrt{\frac{n}{\varepsilon}\min\left(\frac{n}{\varepsilon},\log v_{\max}\right)}\right)$, which proves Theorem 1.2. Most proofs appear in Appendix C, but we overview our approach here.

Our main approach is to use the polygon approximation of concave functions, applied to revenue curves. For a sequence of points $X$ in the domain of a function $f$, the polygon approximation $\tilde{f}_X$ of the function with respect to $X$ is the piecewise linear function formed by connecting the points $(x,f(x))$ for $x\in X$ by line segments. Thus, if the sequence $X$ has $n$ points, the function $\tilde{f}_X$ will have $n-1$ segments. For a concave function $f$, the line joining $(x_1,f(x_1))$ and $(x_2,f(x_2))$, for any two points $x_1$ and $x_2$, lies entirely below the function $f$. Thus, for concave functions $f$ and any sequence $X$, we have $f(x)-\tilde{f}_X(x)\geq 0$. Typically, for a ‘good’ polygon approximation, one requires for $\varepsilon>0$ that $f(x)-\varepsilon\leq\tilde{f}_X(x)\leq f(x)$.
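As a concrete (made-up) instance of these definitions, with our own example function rather than one from the paper:

```python
# Polygon approximation f~_X of a concave f through anchor points X.
import bisect

def polygon_approx(f, X):
    X = sorted(X)
    Y = [f(x) for x in X]
    def g(x):
        j = bisect.bisect_right(X, x) - 1      # segment [X[j], X[j+1]]
        j = min(max(j, 0), len(X) - 2)
        t = (x - X[j]) / (X[j + 1] - X[j])
        return (1 - t) * Y[j] + t * Y[j + 1]   # linear interpolation
    return g

f = lambda x: x * (4 - x)         # concave on [0, 4], maximized at x = 2
g = polygon_approx(f, [0, 2, 4])  # 3 anchor points -> 2 segments
# concavity gives f(x) >= g(x); e.g. f(1) = 3 while g(1) = 2
```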

It turns out that the question of approximating revenue with low menu complexity boils down to a question of approximating revenue curves with piecewise-linear functions of few segments. The connection isn’t quite obvious, but isn’t overly complicated. Without getting into formal details, here is a rough sketch of what’s going on:

  • Recall the Fiat et al. procedure to build the optimal mechanism: menu options from deadline $i-1$ might be “split” into two options for deadline $i$ if they lie inside an ironed interval of $\tilde{R}_{\geq i}$. This might cause the menu complexity to double from one deadline to the next.

  • Instead, we want to create at most $k$ “anchoring points” on each revenue curve. For a menu option from deadline $i-1$, instead of distributing it to the endpoints of its ironed interval, we distribute it to the two nearest anchor points.

  • By the tools of Section 2.1, we know exactly how to evaluate the revenue lost by this change, and it turns out this is captured by the maximum gap between $\tilde{R}_{\geq i}(\cdot)$ and the polygon approximation of $\tilde{R}_{\geq i}(\cdot)$ (this isn’t obvious, but not hard; see Appendix C).

  • Finally, it turns out that the $i$-deadline menu complexity with at most $k$ anchoring points is at most $2k$ (also not quite obvious, but also not hard). So the game is to find few anchoring points that obtain a good polygon approximation to each revenue curve. Corollary 4.1 states the reduction formally, and all related proofs are in Appendix C.

Corollary 4.1.

Consider a FedEx instance with $n$ deadlines. For all $i\in\{1,2,\ldots,n\}$, let $g_i$ be the function $\tilde{R}_{\geq i}$ from Definition 2.3, and let $X_i$ be a sequence of $k_i$ points in $[0,r_{\geq i}]$ such that for all $x\leq r_{\geq i}$, we have $g_i(x)-\varepsilon_i\leq(\tilde{g_i})_{X_i}(x)\leq g_i(x)$. Then there exists a mechanism with $i$-deadline menu complexity $2k_i$ (and menu complexity $2\sum_i k_i$) whose revenue is at least $\mathsf{OPT}-\sum_{i=1}^n\varepsilon_i$.

Here, 𝖮𝖯𝖳\mathsf{OPT} denotes the optimal revenue of the FedEx instance.

At this point, it seems like the right approach is to just set each εi=ε𝖮𝖯𝖳/n\varepsilon_{i}=\varepsilon\cdot\mathsf{OPT}/n and plug into the best existing bounds on polygon approximation. In some sense this is correct, but the menu complexity bounds one would obtain are far from optimal. The main insight is that we know something about the curves we wish to approximate: R~(x)𝖮𝖯𝖳\tilde{R}(x)\leq\mathsf{OPT} for all xx, and we want to leverage this fact if it can give us better guarantees. Additionally, if all values are integral in the range {1,,vmax}\{1,\ldots,v_{\max}\}, we wish to leverage this fact as well, as it implies that an additive ε\varepsilon loss is also OK, as 𝖮𝖯𝖳1\mathsf{OPT}\geq 1. It turns out that both facts can indeed be leveraged to obtain much stronger approximation guarantees than those already known (essentially replacing vmaxv_{\max} with ln(vmax)\ln(v_{\max}) in previous bounds), stated in Theorem 4.2 below.

Theorem 4.2.

For any ε>0\varepsilon>0 and concave function f:[0,vmax][0,)f:[0,v_{\max}]\to[0,\infty) such that f(0)=0f(0)=0, f+(0)1f^{+}(0)\leq 1, and f(vmax)0f^{-}(v_{\max})\geq 0 (we use f+f^{+} to denote the right-hand derivative and ff^{-} to denote the left-hand derivative), there exists a sequence XX of at most O(min{1/ε,logvmaxε})O\left(\min\left\{1/\varepsilon,\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right\}\right) points such that for all x[0,vmax],x\in[0,v_{\max}],

f(x)ε(1+f(vmax))f~X(x)f(x).f(x)-\varepsilon\left(1+f(v_{\max})\right)\leq\tilde{f}_{X}(x)\leq f(x).

The proof of Theorem 1.2 follows from Corollary 4.1 and Theorem 4.2 together with a little bit of algebra, and is deferred until Appendix C.

Finally, we remark on some alternative terms that can be taken to replace vmaxv_{\max} in Theorem 1.2. It will become clear why these replacements are valid after reading the proof of Theorem 1.2, but we will not further justify the validity of these replacements here.

  • First, for instances with integral valuations, we may replace vmaxv_{\max} everywhere with maxiri\max_{i}r_{\geq i}. This is essentially because we don’t actually need to approximate R~i\tilde{R}_{\geq i} on the entire interval [0,vmax][0,v_{\max}], but only the interval [0,ri][0,r_{\geq i}].

  • We may further define q=maxiri/𝖮𝖯𝖳q=\max_{i}r_{\geq i}/\mathsf{OPT} for any (not necessarily integral, possibly continuous) instance, and replace vmaxv_{\max} everywhere with qq, even for non-integral instances. This is essentially because we only used the integrality assumption to guarantee that 𝖮𝖯𝖳1\mathsf{OPT}\geq 1.

  • Finally, if pip_{\geq i} denotes the probability that the buyer has value at least rir_{\geq i} and deadline at least ii, observe that 𝖮𝖯𝖳ripi\mathsf{OPT}\geq r_{\geq i}\cdot p_{\geq i}. So if the probability of sale at each rir_{\geq i} is at least pp, we may observe that q1/pq\leq 1/p (where qq is defined as in the previous bullet) and replace vmaxv_{\max} with 1/p1/p everywhere.

The bullets above suggest that the “hard” instances (where some instance-specific parameter shows up in order to maintain optimal dependence on ε\varepsilon) are those where most of the revenue comes from very infrequent events where the buyer has an unusually high value. Due to the intricate interaction between different deadlines, these parameters can’t be circumvented with simple discretization arguments, or by improved polygon approximations (provably, see Section 4.1), but it is certainly interesting to see if other arguments might allow one to replace logvmax\log v_{\max} with (for example) something like log(n/ε)\log(n/\varepsilon).

4.1 A tight example for polygon approximation

It turns out that the guarantees provided by Theorem 4.2 are tight. Specifically, if no dependence on vmaxv_{\max} is desired, then 1/ε1/\varepsilon is the best bound achievable. Also, if it’s acceptable to depend on both vmaxv_{\max} and ε\varepsilon, then the bound of logvmaxε\sqrt{\frac{\log v_{\max}}{\varepsilon}} in Theorem 4.2 is tight. Taken together, this means that O(min{1/ε,logvmaxε})O\left(\min\left\{1/\varepsilon,\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right\}\right) lies on the Pareto frontier of the dependences achievable as a function of both vmaxv_{\max} and ε\varepsilon. The examples proving tightness of these bounds are actually quite simple, and are provably the worst possible examples (the proof of the claim below appears in Appendix C).

Proposition 4.3.

Let ff be a concave function on [0,vmax][0,v_{\max}], and suppose that no polygon approximation of ff using kk segments achieves additive error ε\varepsilon. Then there exists a concave function gg over [0,vmax][0,v_{\max}] satisfying:

  • There is no polygon approximation of gg using kk segments for additive error ε\varepsilon.

  • f(0)=g(0)f(0)=g(0), f+(0)=g+(0)f^{+}(0)=g^{+}(0), f(vmax)=g(vmax)f(v_{\max})=g(v_{\max}), f(vmax)=g(vmax)f^{-}(v_{\max})=g^{-}(v_{\max}).

  • gg is piecewise-linear with 2k2k segments.

5 Tightness of the approximation scheme

Finally, we construct an instance of the FedEx problem that is hard to approximate with small menu complexity. We reason similarly to the example constructed in Section 3, but things are trickier here. In particular, the challenge in Section 3 was in mapping between distributions and revenue curves. But once we had the revenue curves, it was relatively straightforward to plug through Fiat et al.’s algorithm [FGKK16] and ensure that the optimal auction had high menu complexity.

Already nailing down the behavior of an optimal auction was tricky enough, but we now have to consider every approximately optimal auction (almost all of which don’t necessarily result from Fiat et al.’s algorithm; see, e.g., Section 4). Indeed, one can imagine doing all sorts of strange things on any day ii that are suboptimal, but might somehow avoid the gradual buildup in the ii-deadline menu complexity. (For example, an ε\varepsilon-approximate menu could set price 0 or \infty with probability ε\varepsilon for shipment on any day, or something much more chaotic.)

To cope with this, our approach has two phases: first, we characterize a restricted class of auctions that we call clean. At a very high level, clean auctions never make “bizarre” choices on day ii that both decrease the revenue gained on day ii and strictly increase constraints on choices available for future days. To have an example in mind: if the revenue on day 11 is maximized by setting a price of 11, it might make sense to set price 22 to receive the item on day 11 instead, as this relaxes constraints on future days, and maybe this somehow helps when also constrained by menu complexity. But it makes no sense to instead set price 1/21/2: this only decreases the revenue achieved on day 11, and provides stricter constraints on future days (as now she has the option to get the item on day 11 at a cheaper price).

For our example, we first show that all clean auctions that maintain a good approximation ratio must have high menu complexity. We then follow up by making the claims in the previous paragraph formal: any arbitrary auction of low menu complexity can be derived by “muddling” a clean auction, a process which never increases the revenue. A little more specifically, cleaning the menu for deadline ii can only increase the revenue and allow more options on later deadlines, without increasing the menu complexity. Formal definitions and claims related to this appear in Appendix D. We conclude with a formal statement of our lower bound, which proves Theorem 1.3.

Theorem 5.1.

Any mechanism for the FedEx instance described in subsection D.1 that has at most n/8n/8 menu options on a day i(n/4,n/2]i\in(n/4,n/2] has revenue at most 𝖮𝖯𝖳(11200000n2)\mathsf{OPT}\left(1-\frac{1}{200000n^{2}}\right).

6 Conclusions and Future Work

We provide the first nearly-tight quantitative results on menu complexity in a multi-dimensional setting. Along the way, we design new polygon approximations for a hybrid additive-multiplicative guarantee that turns out to be just right for our application (as evidenced by the nearly-matching lower bounds obtained from the same ideas).

There remains much future work in the direction of menu complexity, most notably the push for tighter quantitative bounds in “truly” multi-dimensional settings, where the gaps between upper (exponential) and lower (polynomial) bounds are vast. We believe that continuing a polygon approximation approach is likely to yield fruitful results. After all, there is a known connection between concave functions and any mechanism design setting via utility curves, and low menu complexity exactly corresponds to piecewise-linear utility curves with few segments. Still, there are two serious barriers to overcome. First, these utility curves are now multi-dimensional instead of single-dimensional revenue curves. Second, the relationship between utility curves and revenue is somewhat odd (expected revenue is equal to an integral over the support of xΔf(x)f(x)\vec{x}\cdot\Delta_{f}(\vec{x})-f(\vec{x})), whereas the relationship between revenue curves and revenue is more direct. There are also intriguing directions for future work along the lines of one-and-a-half-dimensional mechanism design, the most pressing of which is understanding multi-bidder instances (as all existing work, including ours, is still limited to the single-bidder setting).

7 Instances with regular distributions may require randomness

For single-dimensional settings, it’s well understood that “the right” technical condition on value distributions to guarantee a simple optimal mechanism is regularity. This guarantees that “virtual values” are non-decreasing and removes the need for ironing, even in multi-bidder settings. Interestingly, “the right” technical condition on value distributions to guarantee a simple optimal mechanism for one-and-a-half-dimensional settings is no longer regularity, but decreasing marginal revenue. For example, if all marginals satisfy decreasing marginal revenue, the optimal mechanism is deterministic for the FedEx problem [FGKK16], for selling a single item to a budget-constrained buyer [CG00, DW17], and for selling to a capacity-constrained buyer [DHP17].

Still, regularity seems to buy something in these problems. For instance, Fiat et al. show that when there are only two possible deadlines, regularity suffices to guarantee that the optimal mechanism is deterministic. It has also been known since early work of Laffont and Robert that regularity suffices to guarantee that the optimal mechanism is deterministic when selling to a budget-constrained buyer with only one possible budget [LR96]. But the extent to which regularity guarantees simplicity remained open (and was explicitly stated as such in [DW17]). In this section, we show that regularity guarantees nothing beyond what was already known. In particular, there exists an instance of the FedEx problem with three possible deadlines where all marginals are regular but the optimal mechanism is randomized. This immediately implies an example for a budget-constrained buyer and three possible budgets as well (for instance, just set all three budgets larger than vmaxv_{\max} so they will never bind).

We now describe our instance of the FedEx problem where the optimal auction is randomized, despite all marginals being regular and there being only 33 possible deadlines (recall that Fiat et al. show that the optimal auction remains deterministic for regular marginals and 22 deadlines). Throughout this section, instead of using revenue curves R(v)R(v), we use Γ(v)=R(v)\Gamma(v)=-R(v), in accordance with [FGKK16].

7.1 The setting

Consider bidders with types distributed as F1F_{1} on day 11, F2F_{2} on day 22, and F3F_{3} on day 33.

F1(v)\displaystyle F_{1}(v) =1ev,\displaystyle=1-\mathrm{e}^{-v}, f1(v)\displaystyle f_{1}(v) =ev,\displaystyle=\mathrm{e}^{-v},
F2(v)\displaystyle F_{2}(v) =1e5v,\displaystyle=1-\mathrm{e}^{-5v}, f2(v)\displaystyle f_{2}(v) =5e5v,\displaystyle=5\mathrm{e}^{-5v},
F3(v)\displaystyle F_{3}(v) =1ev5,\displaystyle=1-\mathrm{e}^{-\frac{v}{5}}, f3(v)\displaystyle f_{3}(v) =15ev5.\displaystyle=\frac{1}{5}\mathrm{e}^{-\frac{v}{5}}.

The distribution over days qq is

q1=1021q2=1021q3=121.q_{1}=\frac{10}{21}\qquad q_{2}=\frac{10}{21}\qquad q_{3}=\frac{1}{21}.

Note that the three distributions are regular but don’t have decreasing marginal revenues.
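Both halves of this remark can be verified numerically (our own sanity check, using the standard conventions that regularity means a nondecreasing virtual value ϕ(v)=v(1F(v))/f(v)\phi(v)=v-(1-F(v))/f(v), and that decreasing marginal revenue would require concavity of v(1F(v))v(1-F(v))):

```python
import math

rates = [1.0, 5.0, 0.2]            # the rates of F1, F2, F3 above

def phi(lam, v):
    # virtual value v - (1 - F(v)) / f(v); for an exponential this is v - 1/lam
    return v - 1.0 / lam

grid = [i / 100 for i in range(2001)]     # v in [0, 20]
for lam in rates:
    phis = [phi(lam, v) for v in grid]
    # regularity: virtual value is nondecreasing (here it is v shifted by 1/lam)
    assert all(a <= b for a, b in zip(phis, phis[1:]))

# marginal revenue is NOT decreasing: R(v) = v * (1 - F(v)) = v * e^{-v} for F1
# fails concavity, e.g. R''(3) = e^{-3} > 0 (estimated by a central difference)
R = lambda v: v * math.exp(-v)
h = 1e-3
second_diff = (R(3 + h) - 2 * R(3) + R(3 - h)) / h ** 2
assert second_diff > 0
```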

Figure 1: (Left) The curve Γ^2\hat{\Gamma}_{\geq 2} in our example. (Right) The curve Γ1\Gamma_{\geq 1}. Note that this curve is minimized at a point in the ironed interval of Γ^2\hat{\Gamma}_{\geq 2}.

7.2 Analysis

We use the iterative procedure described in [FGKK16] to find the optimal auction.

Γ3=v21(1F3(v))\displaystyle\Gamma_{\geq 3}=-\frac{v}{21}(1-F_{3}(v)) =v21ev5,\displaystyle=-\frac{v}{21}\mathrm{e}^{-\frac{v}{5}},
r3\displaystyle r_{\geq 3} =5.\displaystyle=5.
Γ2={Γ2(v)+Γ^3(v),0v5Γ2(v)521e,5v\displaystyle\Gamma_{\geq 2}=\begin{cases}\Gamma_{2}(v)+\hat{\Gamma}_{\geq 3}(v),&0\leq v\leq 5\\ \Gamma_{2}(v)-\frac{5}{21\mathrm{e}},&5\leq v\\ \end{cases} ={10v21e5vv21ev5,0v510v21e5v521e,5v\displaystyle=\begin{cases}-\frac{10v}{21}\mathrm{e}^{-5v}-\frac{v}{21}\mathrm{e}^{-\frac{v}{5}},&0\leq v\leq 5\\ -\frac{10v}{21}\mathrm{e}^{-5v}-\frac{5}{21\mathrm{e}},&5\leq v\\ \end{cases}
r2\displaystyle r_{\geq 2} 5.0\displaystyle\approx 5.0\cdots
Γ1={Γ1(v)+Γ^2(v),0v5.0Γ1(v)521e,5.0v\displaystyle\Gamma_{\geq 1}=\begin{cases}\Gamma_{1}(v)+\hat{\Gamma}_{\geq 2}(v),&0\leq v\leq 5.0\cdots\\ \Gamma_{1}(v)-\frac{5}{21\mathrm{e}},&5.0\cdots\leq v\\ \end{cases} ={10v21ev10v21e5vv21ev5,0v0.24510v21evA line. See Figure 1,0.245𝒗2.7910v21ev10v21e5vv21ev5,2.79v5.010v21ev0.0876,5.0v\displaystyle=\begin{cases}-\frac{10v}{21}\mathrm{e}^{-v}-\frac{10v}{21}\mathrm{e}^{-5v}-\frac{v}{21}\mathrm{e}^{-\frac{v}{5}},&0\leq v\leq 0.245\cdots\\ -\frac{10v}{21}\mathrm{e}^{-v}-\text{\bf A line. See \autoref{fig:revcurves}},&\bm{0.245\cdots\leq v\leq 2.79\cdots}\\ -\frac{10v}{21}\mathrm{e}^{-v}-\frac{10v}{21}\mathrm{e}^{-5v}-\frac{v}{21}\mathrm{e}^{-\frac{v}{5}},&2.79\cdots\leq v\leq 5.0\cdots\\ -\frac{10v}{21}\mathrm{e}^{-v}-0.0876\cdots,&5.0\cdots\leq v\\ \end{cases}
r1\displaystyle r_{\geq 1} 1.07.\displaystyle\approx 1.07.

All real values written are approximate and end with “\cdots”. The case in boldface denotes the ironed interval. The optimal price for day 11 is 1.071.07, which lies in that interval. Hence the optimal auction must be randomized.
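The computation above can be reproduced numerically (our own sketch; the grid resolution and tolerances are arbitrary choices). We sample the Γ\Gamma curves, iron Γ2\Gamma_{\geq 2} by taking its lower convex envelope, and check that the minimizer of Γ1\Gamma_{\geq 1} lands strictly inside the ironed interval:

```python
import bisect
import math

def gamma1(v): return -(10 * v / 21) * math.exp(-v)
def gamma2(v): return -(10 * v / 21) * math.exp(-5 * v)
def gamma3(v): return -(v / 21) * math.exp(-v / 5)        # = Gamma_{>=3}

grid = [i / 1000 for i in range(8001)]                    # v in [0, 8]
r3 = min(grid, key=gamma3)                                # r_{>=3} = 5

def g2(v):                                                # Gamma_{>=2}
    return gamma2(v) + (gamma3(v) if v <= r3 else gamma3(r3))
r2 = min(grid, key=g2)                                    # r_{>=2} ~ 5.0...

# iron Gamma_{>=2} on [0, r2]: lower convex envelope via a monotone chain
pts = [(v, g2(v)) for v in grid if v <= r2]
hull = []
for pt in pts:
    while len(hull) >= 2:
        o, a = hull[-2], hull[-1]
        if (a[0] - o[0]) * (pt[1] - o[1]) - (pt[0] - o[0]) * (a[1] - o[1]) <= 0:
            hull.pop()
        else:
            break
    hull.append(pt)
hx = [x for x, _ in hull]

def envelope(v):                                          # hat{Gamma}_{>=2}(v)
    k = bisect.bisect_right(hx, v) - 1
    if k >= len(hull) - 1:
        return hull[-1][1]
    (x0, y0), (x1, y1) = hull[k], hull[k + 1]
    t = (v - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

def g1(v):                                                # Gamma_{>=1}
    return gamma1(v) + (envelope(v) if v <= r2 else g2(r2))
r1 = min(grid, key=g1)                                    # r_{>=1} ~ 1.07

assert abs(r3 - 5) < 1e-2 and abs(r2 - 5) < 0.05
assert abs(r1 - 1.07) < 0.03
# the optimal day-1 price sits strictly inside an ironed interval of Gamma_{>=2}:
assert envelope(r1) < g2(r1) - 1e-4
```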

Appendix A Additional preliminaries

In this appendix, we summarize the approach in [FGKK16] to obtain the optimal mechanism for an arbitrary “FedEx” instance. We begin with the linear program that encodes this optimization problem.

maximize ivqifi(v)p(v,i)\displaystyle\sum_{i}\sum_{v}q_{i}f_{i}(v)p(v,i) (1)
subject to π(v,i)vp(v,i)\displaystyle\pi(v,i)v-p(v,i) π(v1,i)vp(v1,i)\displaystyle\geq\pi(v-1,i)v-p(v-1,i) v1,i\displaystyle\forall v\geq 1,i (Leftwards IC)
π(v,i)vp(v,i)\displaystyle\pi(v,i)v-p(v,i) π(v+1,i)vp(v+1,i)\displaystyle\geq\pi(v+1,i)v-p(v+1,i) v<vmax,i\displaystyle\forall v<v_{max},i (Rightwards IC)
π(v,i)vp(v,i)\displaystyle\pi(v,i)v-p(v,i) π(v,i1)vp(v,i1)\displaystyle\geq\pi(v,i-1)v-p(v,i-1) i>1,v\displaystyle\forall i>1,v (Downwards IC)
π(v,i)\displaystyle\pi(v,i) 1\displaystyle\leq 1 i,v\displaystyle\forall i,v (Feasibility)
π(0,1)\displaystyle\pi(0,1) =p(0,1)=0.\displaystyle=p(0,1)=0. (Individual Rationality)

Note that we have not included the constraints where the bidder misreports a higher deadline. No rational bidder would consider these deviations, since they always yield non-positive utility. We now formally present the allocation curves described in Section 2. This, combined with the definitions of optimal revenue curves, provides a clean characterization of optimal auctions for any instance of the FedEx problem.
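For concreteness, the LP can be instantiated with an off-the-shelf solver. The sketch below is our own (the tiny two-day instance, the variable ordering, and the extra p0p\geq 0 bounds are our additions): on each day the value is 11 or 22 with probability 1/21/2 and deadlines are uniform, so the optimal revenue is 11.

```python
import numpy as np
from scipy.optimize import linprog

V, D = 3, 2                                    # values 0..V-1, days 1..D
q = [0.5, 0.5]
f = [[0.0, 0.5, 0.5], [0.0, 0.5, 0.5]]         # f[i][v]
N = 2 * D * V                                  # all pi(v, i) first, then all p(v, i)

def pi(v, i): return i * V + v
def p(v, i):  return D * V + i * V + v

c = np.zeros(N)
for i in range(D):
    for v in range(V):
        c[p(v, i)] = -q[i] * f[i][v]           # linprog minimizes, so negate revenue

rows = []
for i in range(D):
    for v in range(1, V):                      # Leftwards IC
        row = np.zeros(N)
        row[pi(v - 1, i)] += v; row[p(v - 1, i)] -= 1
        row[pi(v, i)] -= v;     row[p(v, i)] += 1
        rows.append(row)
    for v in range(V - 1):                     # Rightwards IC
        row = np.zeros(N)
        row[pi(v + 1, i)] += v; row[p(v + 1, i)] -= 1
        row[pi(v, i)] -= v;     row[p(v, i)] += 1
        rows.append(row)
for v in range(V):                             # Downwards IC (day 2 vs. day 1)
    row = np.zeros(N)
    row[pi(v, 0)] += v; row[p(v, 0)] -= 1
    row[pi(v, 1)] -= v; row[p(v, 1)] += 1
    rows.append(row)

bounds = [(0, 1)] * (D * V) + [(0, None)] * (D * V)   # Feasibility; p >= 0 is ours
bounds[pi(0, 0)] = (0, 0)                             # Individual Rationality
bounds[p(0, 0)] = (0, 0)

res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)), bounds=bounds)
opt_revenue = -res.fun
assert res.status == 0 and abs(opt_revenue - 1.0) < 1e-6
```

Here posting price 11 on both days already extracts the full optimal revenue, and the solver confirms that no (possibly randomized) menu does better.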

Definition A.1 (Optimal allocation curves [FGKK16]).

Let jj^{*} be the largest jj such that vjriv_{j}\leq r_{\geq i}. For any jjj\leq j^{*}, consider two cases:

  • R^i\hat{R}_{\geq i} is not ironed at vjv_{j}. Then

    ai,j(v)={0v<vj1else.a_{i,j}(v)=\left\{\begin{array}[]{ll}0&v<v_{j}\\ 1&\text{else}.\\ \end{array}\right.
  • R^i\hat{R}_{\geq i} is ironed at vjv_{j}. Let v¯j\underline{v}_{j} be the largest v<vjv<v_{j} such that RiR_{\geq i} is not ironed at vv, and v¯j\overline{v}_{j} be the smallest v>vjv>v_{j} such that RiR_{\geq i} is not ironed at vv. Let 0<δ<10<\delta<1 be such that

    vj=δv¯j+(1δ)v¯j.v_{j}=\delta\underline{v}_{j}+(1-\delta)\overline{v}_{j}.

    Define

    ai,j(v)={0v<vj¯δvj¯vvj¯1v>vj¯.a_{i,j}(v)=\left\{\begin{array}[]{ll}0&v<\underline{v_{j}}\\ \delta&\underline{v_{j}}\leq v\leq\overline{v_{j}}\\ 1&v>\overline{v_{j}}.\\ \end{array}\right.

Then set ai(v)a_{i}(v) as follows

ai(v)={j=1j(βjβj1)ai,j(v)v<ri1vri,a_{i}(v)=\left\{\begin{array}[]{ll}\sum_{j=1}^{j*}(\beta_{j}-\beta_{j-1})a_{i,j}(v)&v<r_{\geq i}\\ 1&v\geq r_{\geq i},\\ \end{array}\right.

where βj\beta_{j} is the probability of allocating the item at value vjv_{j} on day i1i-1.

Lemma A.1.

[FGKK16] The allocation curves aia_{i} are monotone increasing from 0 to 11 and satisfy the Downwards IC constraints of LP (1). Moreover, each aia_{i} changes value only at points where R^i\hat{R}_{\geq i} is not ironed.

Remark A.1.

Every optimal solution to LP (1) corresponds to an optimal mechanism for the FedEx problem.

In addition, we present the following simple claim, which is unrelated to the FedEx problem itself but will be useful in future sections.

Claim A.1.

Let ff be a distribution over [vmax][v_{\max}], and let ϕf,Rf\phi_{f},R_{f} be the corresponding virtual value and revenue curve, respectively. Then, for all v[vmax1]v\in[v_{max}-1]

Rf(v)Rf(v1)=ϕf(v)f(v).R_{f}(v)-R_{f}(v-1)=\phi_{f}(v)f(v).
Proof.

This follows from some simple calculations:

Rf(v)Rf(v1)\displaystyle R_{f}(v)-R_{f}(v-1) =v(1F(v1))(v+1)(1F(v))\displaystyle=v(1-F(v-1))-(v+1)(1-F(v))
=v(F(v)F(v1))(1F(v))\displaystyle=v(F(v)-F(v-1))-(1-F(v))
=vf(v)(1F(v))=f(v)ϕf(v).\displaystyle=vf(v)-(1-F(v))=f(v)\phi_{f}(v).
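The computation can also be checked mechanically (our own sanity check; it verifies the displayed identity v(1F(v1))(v+1)(1F(v))=f(v)ϕf(v)v(1-F(v-1))-(v+1)(1-F(v))=f(v)\phi_{f}(v) for a random discrete distribution, independent of any indexing convention for RfR_{f}):

```python
import random

random.seed(0)
vmax = 10
w = [random.random() + 0.01 for _ in range(vmax)]
f = [x / sum(w) for x in w]        # f[v-1] = Pr[value = v], support {1, ..., vmax}

def F(v):                          # cdf, with F(0) = 0
    return sum(f[:v])

errs = []
for v in range(1, vmax):           # v in [vmax - 1]
    phi = v - (1 - F(v)) / f[v - 1]                      # virtual value of v
    lhs = v * (1 - F(v - 1)) - (v + 1) * (1 - F(v))      # first line of the proof
    errs.append(abs(lhs - f[v - 1] * phi))               # last line of the proof
assert max(errs) < 1e-12
```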

Appendix B Omitted proofs and perturbed example of Section 3

B.1 Omitted proofs

Lemma B.1.

Fix v>0v>0. Let (2v)1>ε>0(2v)^{-1}>\varepsilon>0 and let R={ri}i=1vR=\{r_{i}\}_{i=1}^{v} be a sequence of real numbers such that |ri+1ri|ε,i[v1]\lvert{r_{i+1}-r_{i}}\rvert\leq\varepsilon,\forall i\in[v-1]. If r1=1r_{1}=1, there exists a distribution f:[v][0,1]f:[v]\to[0,1] such that Rf(i)=ri,i[v]R_{f}(i)=r_{i},\forall i\in[v].

Proof.

By our choice of ε\varepsilon, all elements in the sequence are greater than 1/21/2. Let rv+1=0r_{v+1}=0 and define

F(i)=1ri+1i+1,i{0,1,2,,v}.F(i)=1-\frac{r_{i+1}}{i+1},\forall i\in\{0,1,2,\cdots,v\}.

We will show that FF indeed corresponds to a valid distribution function by showing that it is non-negative and non-decreasing. Once we have shown this, it becomes clear that Rf(i)=riR_{f}(i)=r_{i}, proving the Lemma.

Claim B.1.

F(i)F(i) is non-negative, non-decreasing.

Proof.

For i=0i=0, F(i)=11=0F(i)=1-1=0. Suppose i1i\geq 1. Since 0ri<2,i0\leq r_{i}<2,\forall i, it follows that 1F(i)1ri+12>01\geq F(i)\geq 1-\frac{r_{i+1}}{2}>0. To show it is non-decreasing, consider the difference between two consecutive terms. For all v>i1v>i\geq 1,

F(i)F(i1)\displaystyle F(i)-F(i-1) =riiri+1i+1=rii(i+1)+riri+1i+1\displaystyle=\frac{r_{i}}{i}-\frac{r_{i+1}}{i+1}=\frac{r_{i}}{i(i+1)}+\frac{r_{i}-r_{i+1}}{i+1}
1i+1(riiε)\displaystyle\geq\frac{1}{i+1}\left(\frac{r_{i}}{i}-\varepsilon\right)
>1i+1(riv12v)0.\displaystyle>\frac{1}{i+1}\left(\frac{r_{i}}{v}-\frac{1}{2v}\right)\geq 0.

For i=vi=v, F(i)F(i1)=rii0.F(i)-F(i-1)=\frac{r_{i}}{i}\geq 0.
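The proof is constructive, and the construction is easy to run (our own sketch, using the convention that the revenue at integer price ii is iPr[valuei]=i(1F(i1))i\cdot\Pr[\text{value}\geq i]=i(1-F(i-1))):

```python
import random

random.seed(1)
v = 20
eps = 0.9 / (2 * v)                     # any eps < (2v)^{-1} works
r = [1.0]                               # r_1 = 1
for _ in range(v - 1):                  # adjacent entries differ by at most eps
    r.append(r[-1] + random.uniform(-eps, eps))

r_ext = r + [0.0]                       # set r_{v+1} = 0
F = [0.0]                               # F(0) = 0
for i in range(1, v + 1):
    F.append(1 - r_ext[i] / (i + 1))    # F(i) = 1 - r_{i+1} / (i + 1)

# F is a valid cdf: non-negative, non-decreasing, and F(v) = 1
assert all(0 <= F[i] <= F[i + 1] <= 1 for i in range(v))
# the induced revenue at integer price i, i * Pr[value >= i], recovers r_i
assert all(abs(i * (1 - F[i - 1]) - r[i - 1]) < 1e-9 for i in range(1, v + 1))
```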

We are now ready to explicitly construct an example that achieves nn-deadline menu complexity 2n12^{n-1}, where vmax=2n1v_{\max}=2^{n}-1. Fix nn and construct n1n-1 sequences Si={sij}j=12n1S_{i}=\{s_{ij}\}_{j=1}^{2^{n}-1} of length 2n12^{n}-1, via

sij=ε(12bi+1(j))bi(j)(k=1i1(1bk(j))),s_{ij}=\varepsilon\cdot(1-2b_{i+1}(j))b_{i}(j)\left(\prod_{k=1}^{i-1}(1-b_{k}(j))\right),

where bi(j)b_{i}(j) denotes the ii-th least significant bit in the binary expansion of jj, and ε=4n\varepsilon=4^{-n}. Define Sn={snj}j=12n1S_{n}=\{s_{nj}\}_{j=1}^{2^{n}-1} by snj=ε(2bn(j)1)s_{nj}=\varepsilon(2b_{n}(j)-1). Claim A.1 and Lemma B.1 together imply that there exist distributions fjf_{j} for all j[n]j\in[n] whose revenue-curve differences are dictated by the sequences SjS_{j}. In our construction, the type distribution of day ii corresponds to the distribution fn+1if_{n+1-i}. Let the distribution over days be uniform (i.e., qi=1nq_{i}=\frac{1}{n} for all i[n]i\in[n]). In order to show that this construction achieves a menu complexity of 2n12^{n}-1, we first need to characterize the revenue curves for all days, and then show that an optimal auction exists where prices can be set so as to create a large menu. The intuition for these revenue curves is that their ironed intervals are nested: prices at the endpoints of ironed intervals on day ii are the midpoints of new ironed intervals on day i+1i+1. In addition, after ironing, the curves look like constants. Therefore, the optimal revenue curves will have the same ironed intervals as their original counterparts. We design the revenue of the first day to be maximized at the median value. On the next day this price will belong to an ironed interval, meaning that any optimal auction must offer a lottery over two prices that are not ironed for day 22. The size of the lottery offered directly translates into the minimum number of options a menu for that day must have. By the nesting construction, the number of options offered on day i+1i+1 doubles with respect to day ii.
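To digest the definition, one can evaluate the sequences directly (our own sketch, with n=4n=4): sijs_{ij} is non-zero exactly when the lowest set bit of jj is bit ii, and the non-zero entries alternate in sign, which is what creates the nested ironed intervals:

```python
def bit(j, i):                      # i-th least significant bit of j (i >= 1)
    return (j >> (i - 1)) & 1

def s(i, j, eps):                   # s_{ij} for the sequences S_1, ..., S_{n-1}
    prod = 1
    for k in range(1, i):
        prod *= 1 - bit(j, k)
    return eps * (1 - 2 * bit(j, i + 1)) * bit(j, i) * prod

n, eps = 4, 4.0 ** (-4)
counts = {}
for m in range(1, n):               # recall day i uses sequence S_{n+1-i}
    nz = [j for j in range(1, 2 ** n) if s(m, j, eps) != 0]
    # non-zero exactly when j = 2^{m-1} mod 2^m (lowest set bit of j is bit m)
    assert nz == [j for j in range(1, 2 ** n) if j % 2 ** m == 2 ** (m - 1)]
    vals = [s(m, j, eps) for j in nz]
    assert all(abs(x) == eps for x in vals)                 # magnitude eps
    assert all(x == -y for x, y in zip(vals, vals[1:]))     # alternating signs
    counts[m] = len(nz)

assert counts == {1: 8, 2: 4, 3: 2}  # day i = n+1-m sees 2^{i-1} non-zero entries
```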

We can now use the above construction, combined with Claim A.1 and Lemma B.1, to prove the more general result stated in Section 3.

Proof of the lemma from Section 3.

By simple examination, the revenue curve for day 11 is just a line that increases until it reaches vmax/2v_{\max}/2 and then decreases, so it is maximized at vmax/2v_{\max}/2. Consider day i2i\geq 2. The sequence that generates its revenue curve, sn+1i,js_{n+1-i,j}, is non-zero iff jj has remainder 2ni2^{n-i} modulo 2ni+12^{n-i+1}. Since there are 2n12^{n}-1 values in the sequence, there will be 2i12^{i-1} non-zero values. These values alternate between ε,ε\varepsilon,-\varepsilon, and each alternation creates one ironed interval: the revenue decreases by ε\varepsilon, stays at that value for a while, and increases by ε\varepsilon again. This gives 2i22^{i-2} ironed intervals for the revenue curve of day ii. Moreover, the last price where the revenue increases is 2n2ni2^{n}-2^{n-i}, making it a valid maximizer. The revenue remains constant from there on, so any higher value is also a maximizer.

For the third point, note that RiR_{i} takes values in {1,1ε}\{1,1-\varepsilon\}, alternating between them for intervals whose lengths depend on ii. R^i\hat{R}_{i}, the upper concave hull of RiR_{i}, is then a constant function with value 11 everywhere.

We prove the last point by induction, starting from RnR_{n}. This holds because Rn=RnR_{n}=R_{\geq n}. Note that, from our previous claim, R^n=R^n\hat{R}_{\geq n}=\hat{R}_{n} is a constant function. Suppose that Ri+1(v)=Ri+1(v)+cR_{\geq i+1}(v)=R_{i+1}(v)+c for some cc. For vri+1v\leq r_{\geq i+1}, Ri(v)=Ri(v)+R^i+1(v)R_{\geq i}(v)=R_{i}(v)+\hat{R}_{\geq i+1}(v). For v>ri+1v>r_{\geq i+1}, Ri(v)=Ri(v)+R^i+1(ri+1)R_{\geq i}(v)=R_{i}(v)+\hat{R}_{\geq i+1}(r_{\geq i+1}). In either case, the term added to Ri(v)R_{i}(v) is a constant (and it is the same constant) by the inductive hypothesis, so the claim follows. ∎

Lemma B.2.

The series of optimal revenue curves RiR_{\geq i} induced by the distributions fn+1if_{n+1-i} is such that on any day ii, an optimal allocation curve as constructed by [FGKK16] is a step function with 2i12^{i-1} jumps and takes the following form:

ai(v)={0v<2ni,k+12i12ni+k2ni+1v<2ni+(k+1)2ni+1,1v2n2ni,a_{i}(v)=\left\{\begin{array}[]{ll}0&v<2^{n-i},\\ \frac{k+1}{2^{i-1}}&2^{n-i}+k2^{n-i+1}\leq v<2^{n-i}+(k+1)2^{n-i+1},\\ 1&v\geq 2^{n}-2^{n-i},\\ \end{array}\right.

where 0k2i120\leq k\leq 2^{i-1}-2. Moreover, the prices at which the function jumps on day ii belong to ironed intervals of the optimal revenue curve of the following day, Ri+1R_{\geq i+1}, for all i<ni<n.

Proof.

We prove this by induction, going from the first day to the last. As stated before, R1R_{\geq 1} is maximized at vmax/2v_{\max}/2 and has no ironed intervals; therefore r1=vmax/2r_{\geq 1}=v_{\max}/2, and the optimal allocation curve a1a_{1} is a step function at vmax/2v_{\max}/2. By construction, this price belongs to an ironed interval of R2R_{\geq 2}. Thus the base case holds. Suppose that the statement is true for day ii. We want to understand the optimal allocation curve for day i+1i+1.

First note that the places where the function jumps on day ii, Pi={2ni+k2ni+1}k=02i11P_{i}=\{2^{n-i}+k2^{n-i+1}\}_{k=0}^{2^{i-1}-1}, belong to ironed intervals of Ri+1R_{\geq i+1} (which are the same intervals as those of Ri+1R_{i+1}). This is because at these prices jPij\in P_{i}, the function sni,js_{n-i,j} takes a value of 0. The nearest places where it is non-zero are exactly j2ni1j-2^{n-i-1} and j+2ni1j+2^{n-i-1}, where the function takes values ε\varepsilon and ε-\varepsilon, meaning we are inside an ironed interval (the revenue decreases and then increases by ε\varepsilon). Thus, by the optimal allocation curves suggested in [FGKK16], we observe that if on day ii we offered a price of 2ni+k2ni+12^{n-i}+k2^{n-i+1} with positive probability k+12i1\frac{k+1}{2^{i-1}}, we must also allocate at prices 2ni1+(2k)2ni2^{n-i-1}+(2k)2^{n-i} and 2ni1+(2k+1)2ni2^{n-i-1}+(2k+1)2^{n-i} on day i+1i+1 with positive probability.

The probability of allocating the item at a price 2ni1+k2niv<2ni1+(k+1)2ni2^{n-i-1}+k2^{n-i}\leq v<2^{n-i-1}+(k+1)2^{n-i} depends on the parity of kk. If kk is odd, then the values in this range correspond to a non-ironed interval on day i+1i+1, meaning they preserve the probability of allocation from day ii. The probability of allocating on the interval with endpoints 2ni+k122ni+12^{n-i}+\frac{k-1}{2}2^{n-i+1} and 2ni+k+122ni+12^{n-i}+\frac{k+1}{2}2^{n-i+1}, which contains our new interval of interest, is k+12i\frac{k+1}{2^{i}}. If kk is even, then we belong to an ironed interval on day i+1i+1, meaning that the probability of allocation is the average of the allocation probabilities on the two intervals on day ii that intersect this one. These intervals are [2ni+k222ni+1,2ni+k22ni+1][2^{n-i}+\frac{k-2}{2}2^{n-i+1},2^{n-i}+\frac{k}{2}2^{n-i+1}] and [2ni+k22ni+1,2ni+(k+2)22ni+1][2^{n-i}+\frac{k}{2}2^{n-i+1},2^{n-i}+\frac{(k+2)}{2}2^{n-i+1}]. Thus the probability of allocating at vv on day i+1i+1 is just k+12i\frac{k+1}{2^{i}}. ∎

Figure 2: Revenue curves corresponding to an instantiation of our construction with n=4n=4 and ε=.25\varepsilon=.25.

With this understanding of the revenue curves of the instance we construct, we are ready to prove Theorem 3.2, the main result of Section 3.

Proof of Theorem 3.2.

By Lemma B.2, the optimal allocation curves for day ii are stepwise increasing functions with 2i12^{i-1} steps. This means that for any deadline ii, the ii-deadline menu complexity of this instance is 2i12^{i-1}. Combining the number of options offered over all nn days gives a menu complexity of 2n12^{n}-1. ∎

B.2 Perturbed case

In this appendix, we show how to tweak the example in Section 3 so that no optimal auction has menu complexity less than 2n2^{n}. The problem with the example in Section 3 is that, to achieve an optimal auction, we don’t have to follow the allocation suggestion of [FGKK16]: we could simply choose larger ironed intervals on every day that span the whole spectrum of prices and (because of the simplicity of the construction) still recover all the revenue. We add a small non-linear term to the revenue curve of each day to rule this out, while still preserving the “nested” structure of ironed intervals. Consider now the n1n-1 sequences

sij=ε2(12bi+1(j))bi(j)(k=1i1(1bk(j)))+δj(2vmaxj),s_{ij}=\frac{\varepsilon}{2}\cdot(1-2b_{i+1}(j))b_{i}(j)\left(\prod_{k=1}^{i-1}(1-b_{k}(j))\right)+\delta j\cdot(2v_{\max}-j),

where vmax=2nv_{\max}=2^{n}, ϵ=14vmax\epsilon=\frac{1}{4v_{\max}}, and δ\delta, the weight of the non-linear term, is vmax10v_{\max}^{-10}. The distribution for the first day is the same as the one we used in the previous case. Note that the non-linear term added is maximized at vmaxv_{\max}. Again, Lemma B.1 and Claim A.1 allow us to conclude that there are revenue curves RiR_{i} and valid distributions fn+1if_{n+1-i} whose changes are dictated by the sequences sn+1is_{n+1-i}. We restate the results from Section 3 with the appropriate adjustments.

Lemma B.3.

The series of revenue curves RiR_{i} induced by the distributions fn+1if_{n+1-i} satisfy the following properties:

  • R1R_{1} is maximized at vmax/2v_{\max}/2 and has no ironed intervals.

  • RiR_{i} for 1<in1<i\leq n has a unique maximizer at price vmaxv_{\max} and has 2i22^{i-2} ironed intervals.

  • RiR_{\geq i} has the same ironed intervals as RiR_{i}.

Proof.

The first point remains true since we haven’t changed R1R_{1}. Since vmaxv_{\max} was a maximizer of the function before adding the non-linear term, and the non-linear term is itself maximized there, it remains a maximizer; in fact, it is now the unique maximizer. This implies that rn=vmaxr_{\geq n}=v_{\max}. The last point follows from an inductive proof similar to the one in the original case. It is easy to see that now RiR_{\geq i} will be maximized at vmaxv_{\max}, for all ii. Then by the characterization from [FGKK16] we have that for any v<vmaxv<v_{\max}, Ri(v)=Ri(v)R_{\geq i}(v)=R_{i}(v). ∎

The key property of Lemma B.2 is that the ironed intervals for day ii are nested in those of day i+1i+1. This property is preserved despite the small adjustment of the function, so the statement of the lemma still holds and the proof of the main theorem of Section 3 goes through. After the adjustment, the ironed intervals become unique, and no algorithm can choose larger ironed intervals to avoid a large menu complexity.

Appendix C Omitted proofs of Section 4

C.1 Polygon approximation of concave functions

Polygon approximation of functions is a classic problem in approximation theory. The central problem in approximation theory is to understand how well a class of ‘simple’ functions can approximate an arbitrary function. Usually, the ‘simple’ functions have a property that makes them easy to study, and the approximation scheme ensures that results carry over to arbitrary functions. In the subfield of polygon approximation, the class of ‘simple’ functions is the set of all continuous piecewise-linear functions. Different error metrics (an error metric defines the properties of a good approximation) are studied, the most common being an additive error defined by a parameter ε\varepsilon. We define polygon approximation formally and state a celebrated result for additive error ε\varepsilon.

Definition C.1 (Polygon approximation).

Given a function f:[a,b]f:[a,b]\to\mathbb{R} and a sequence XX of points a=x0,x1,,xn1,xn=ba=x_{0},x_{1},\cdots,x_{n-1},x_{n}=b, define f~X\tilde{f}_{X}, the XX-approximation of ff, to be the continuous piecewise linear function that passes through (x0,f(x0)),(x1,f(x1)),,(xn,f(xn))(x_{0},f(x_{0})),(x_{1},f(x_{1})),\cdots,(x_{n},f(x_{n})).

Before continuing, we briefly state the notation that we are going to use throughout this section. For a concave function ff, we use f+f^{+} to denote the right-derivative and ff^{-} to denote the left derivative. We remind the reader that for a concave function over an interval, both ff^{-} and f+f^{+} are well defined at every point (and there are only countably many points for which ff is not differentiable). The following theorem has been taken from [Rot92] and has been reworded to fit our notation.

Theorem C.1 ([Rot92, BHR91]).

For any ε>0\varepsilon>0 and concave f:[a,b]f:[a,b]\to\mathbb{R} such that ba=Lb-a=L and f+(a)f(b)Δf^{+}(a)-f^{-}(b)\leq\Delta, there exists a sequence XX of at most 3+98LΔε3+\sqrt{\frac{9}{8}\frac{L\Delta}{\varepsilon}} points such that x[a,b],0f(x)f~X(x)ε\forall x\in[a,b],0\leq f(x)-\tilde{f}_{X}(x)\leq\varepsilon.
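Theorem C.1 is an existence statement; as a rough illustration of how such a sequence can be found in practice, the greedy sketch below (our own baseline, not the construction of [Rot92], and with no claim of matching its exact bound) pushes each node as far right as possible while the chord stays within additive error ε\varepsilon at sampled grid points:

```python
def greedy_polygon(f, a, b, eps, grid=2000):
    """Greedily choose nodes so that every chord stays within additive
    error eps of f at the sampled grid points. For concave f the chord
    error is monotone in the segment length, so each node can be pushed
    as far right as possible via binary search."""
    xs = [a + (b - a) * i / grid for i in range(grid + 1)]
    ys = [f(x) for x in xs]

    def chord_err(i, j):
        # max sampled deviation of f above the chord from xs[i] to xs[j]
        return max(ys[k] - (ys[i] + (ys[j] - ys[i]) * (xs[k] - xs[i]) / (xs[j] - xs[i]))
                   for k in range(i, j + 1))

    nodes, i = [0], 0
    while i < grid:
        lo, hi = i + 1, grid          # farthest j with chord_err(i, j) <= eps
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if chord_err(i, mid) <= eps:
                lo = mid
            else:
                hi = mid - 1
        nodes.append(lo)
        i = lo
    return [xs[k] for k in nodes]
```

On a concave function with rapidly changing slope near the left endpoint (such as the square root), the greedy scheme concentrates nodes where the curvature is largest, exactly the behavior that the LΔ/ε\sqrt{L\Delta/\varepsilon} bound quantifies.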

Theorem C.1, as stated above, deals with arbitrary concave functions and is known to be tight [Rot92]. In our setting of the FedEx problem, the concave functions that we wish to approximate are monotone revenue curves defined over [0,ri][0,r_{\geq i}] (for the rest of this section, we will use vmaxv_{\max} instead of rir_{\geq i} to refer to the right end-point of the interval to keep the notation non-specific to the FedEx problem). Being monotone revenue curves, they satisfy several additional properties, e.g., f(0)=0f(0)=0, f+(x)1f^{+}(x)\leq 1, and f(x)0f^{-}(x)\geq 0. Moreover, the error that we are interested in is not the additive ε\varepsilon error handled by the theorem. Lemma C.3 formalizes the exact guarantees we desire; note that the error metric we use is an additive-multiplicative hybrid that grows with f(vmax)f(v_{\max}). To prove this lemma, we apply Theorem C.1 separately on each of the intervals [0,1],[1,2],,[vmax/4,vmax/2],[vmax/2,vmax][0,1],[1,2],\cdots,[v_{\max}/4,v_{\max}/2],[v_{\max}/2,v_{\max}]. We show that when the size of the approximating sequence XX is large in these intervals, then f(vmax)f(v_{\max}) must be large. In turn, this implies that we have more wiggle room in our error. We exploit this room to improve the dependence on vmaxv_{\max} from the polynomial dependence on LL in Theorem C.1 to logarithmic.

C.2 Proof of Theorem 4.2

We begin with an easy lemma: Lemma C.2 shows how to lower bound f(vmax)f(v_{\max}) using the derivatives at various points along the way.

Lemma C.2.

For any concave f:[0,vmax][0,)f:[0,v_{\max}]\to[0,\infty) such that f(vmax)0f^{-}(v_{\max})\geq 0, we have

f(vmax)i=0logvmaxf+(vmax2i)vmax2i+1.f(v_{\max})\geq\sum_{i=0}^{\lceil\log v_{\max}\rceil}f^{+}\left(\frac{v_{\max}}{2^{i}}\right)\frac{v_{\max}}{2^{i+1}}.
Proof.

Since the function ff is concave, we get that for all i>0i>0,

f(vmax2i1)f(vmax2i)f+(vmax2i1)vmax2i.f\left(\frac{v_{\max}}{2^{i-1}}\right)-f\left(\frac{v_{\max}}{2^{i}}\right)\geq f^{+}\left(\frac{v_{\max}}{2^{i-1}}\right)\frac{v_{\max}}{2^{i}}.

Adding these inequalities for i=1,2,,logvmax+1i=1,2,\cdots,\lceil{\log v_{\max}}\rceil+1, and using that ff is non-negative to drop the leftover telescoping term, we get

f(vmax)i=1logvmax+1f+(vmax2i1)vmax2ii=0logvmaxf+(vmax2i)vmax2i+1.f(v_{\max})\geq\sum_{i=1}^{\lceil\log v_{\max}\rceil+1}f^{+}\left(\frac{v_{\max}}{2^{i-1}}\right)\frac{v_{\max}}{2^{i}}\geq\sum_{i=0}^{\lceil\log v_{\max}\rceil}f^{+}\left(\frac{v_{\max}}{2^{i}}\right)\frac{v_{\max}}{2^{i+1}}.
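As a quick numeric sanity check of Lemma C.2 (with the hypothetical choice f(x)=xf(x)=\sqrt{x}, whose right-derivative is 1/(2x)1/(2\sqrt{x})):

```python
import math

def dyadic_lower_bound(fprime, vmax):
    """RHS of Lemma C.2: sum of f^+(vmax / 2^i) * vmax / 2^(i+1)
    for i = 0, ..., ceil(log2(vmax))."""
    return sum(fprime(vmax / 2 ** i) * vmax / 2 ** (i + 1)
               for i in range(math.ceil(math.log2(vmax)) + 1))
```

For the square root on [0,1024][0,1024], the sum evaluates to roughly 0.83f(vmax)0.83\cdot f(v_{\max}), so the lemma's bound is both valid and not far from tight in this example.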

And now, we show how to improve the dependence on vmaxv_{\max} from linear to logarithmic for our relaxed hybrid guarantee.

Lemma C.3.

For any ε>0\varepsilon>0 and concave function f:[0,vmax][0,)f:[0,v_{\max}]\to[0,\infty) such that f(0)=0f(0)=0, f+(0)1f^{+}(0)\leq 1, f(vmax)0f^{-}(v_{\max})\geq 0, there exists a sequence XX of at most O(logvmax+logvmaxε)O\left(\log v_{\max}+\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right) points such that for all x[0,vmax],x\in[0,v_{\max}],

f(x)ε(1+f(vmax))f~X(x)f(x).f(x)-\varepsilon\left(1+f(v_{\max})\right)\leq\tilde{f}_{X}(x)\leq f(x).
Proof.

We divide the domain of the function ff into the intervals Ii=[vmax2i,vmax2i1]I_{i}=\Big{[}\frac{v_{\max}}{2^{i}},\frac{v_{\max}}{2^{i-1}}\Big{]} for i{1,2,logvmax}i\in\{1,2,\cdots\lceil\log v_{\max}\rceil\} and the interval I0=[0,vmax2logvmax]I_{0}=\Big{[}0,\frac{v_{\max}}{2^{\lceil\log v_{\max}\rceil}}\Big{]}. For each interval IiI_{i}, we define a sequence XiX_{i} that approximates ff well over this interval. The final sequence XX is obtained by combining all the sequences XiX_{i}.

We start with the interval I0I_{0}. The length of I0I_{0} is vmax2logvmax1\frac{v_{\max}}{2^{\lceil\log v_{\max}\rceil}}\leq 1. Since f+(0)1f^{+}(0)\leq 1, we can apply Theorem C.1 with Δ=1\Delta=1 and conclude that to approximate ff up to an additive εε(1+f(vmax))\varepsilon\leq\varepsilon\left(1+f(v_{\max})\right) error on this interval, we need a sequence X0X_{0} of at most 3+98ε3+\sqrt{\frac{9}{8\varepsilon}} points.

We now argue for the rest of the domain of ff. The length LiL_{i} of the interval IiI_{i} is vmax2i\frac{v_{\max}}{2^{i}}. Let Δi=f+(vmax2i)f(vmax2i1)\Delta_{i}=f^{+}\left(\frac{v_{\max}}{2^{i}}\right)-f^{-}\left(\frac{v_{\max}}{2^{i-1}}\right). Applying Theorem C.1 on IiI_{i} with the error parameter εf(vmax)ε(1+f(vmax))\varepsilon f(v_{\max})\leq\varepsilon\left(1+f(v_{\max})\right) gives a sequence XiX_{i} of at most 3+98vmaxΔi2iεf(vmax)3+\sqrt{\frac{9}{8}\frac{v_{\max}\Delta_{i}}{2^{i}\varepsilon f(v_{\max})}} points such that f~Xi\tilde{f}_{X_{i}} approximates ff on IiI_{i} up to an additive εf(vmax)\varepsilon f(v_{\max}) error.

Combining all the sequences XiX_{i} for i=0,1,,logvmaxi=0,1,\cdots,\lceil\log v_{\max}\rceil, we get a single sequence XX such that f~X\tilde{f}_{X} approximates ff up to an additive error ε(1+f(vmax))\varepsilon(1+f(v_{\max})) throughout its domain [0,vmax][0,v_{\max}]. The number of points in XX is at most

3+98ε+\displaystyle 3+\sqrt{\frac{9}{8\varepsilon}}+ i=1logvmax3+98vmaxΔi2iεf(vmax)\displaystyle\sum_{i=1}^{\lceil\log v_{\max}\rceil}3+\sqrt{\frac{9}{8}\frac{v_{\max}\Delta_{i}}{2^{i}\varepsilon f(v_{\max})}}
3(1+logvmax)+98ε+1εf(vmax)i=1logvmax98vmaxΔi2i\displaystyle\leq 3(1+\lceil\log v_{\max}\rceil)+\sqrt{\frac{9}{8\varepsilon}}+\frac{1}{\sqrt{\varepsilon f(v_{\max})}}\sum_{i=1}^{\lceil\log v_{\max}\rceil}\sqrt{\frac{9}{8}\frac{v_{\max}\Delta_{i}}{2^{i}}}
3(1+logvmax)+98ε+1εf(vmax)logvmaxi=1logvmax98vmaxΔi2i\displaystyle\leq 3(1+\lceil\log v_{\max}\rceil)+\sqrt{\frac{9}{8\varepsilon}}+\frac{1}{\sqrt{\varepsilon f(v_{\max})}}\sqrt{\lceil\log v_{\max}\rceil\sum_{i=1}^{\lceil\log v_{\max}\rceil}\frac{9}{8}\frac{v_{\max}\Delta_{i}}{2^{i}}} (Cauchy-Schwarz)
3(1+logvmax)+98ε+1εf(vmax)logvmaxi=1logvmax98vmaxf+(vmax2i)2i\displaystyle\leq 3(1+\lceil\log v_{\max}\rceil)+\sqrt{\frac{9}{8\varepsilon}}+\frac{1}{\sqrt{\varepsilon f(v_{\max})}}\sqrt{\lceil\log v_{\max}\rceil\sum_{i=1}^{\lceil\log v_{\max}\rceil}\frac{9}{8}\frac{v_{\max}f^{+}\left(\frac{v_{\max}}{2^{i}}\right)}{2^{i}}} (Definition of Δi\Delta_{i})
3(1+logvmax)+98ε+188logvmaxε\displaystyle\leq 3(1+\lceil\log v_{\max}\rceil)+\sqrt{\frac{9}{8\varepsilon}}+\sqrt{\frac{18}{8}\frac{\lceil\log v_{\max}\rceil}{\varepsilon}} (Lemma C.2)
=O(logvmax+logvmaxε).\displaystyle=O\left(\log v_{\max}+\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right).

It is noteworthy that Lemma C.3 preserves the O(1/ε)O(\sqrt{1/\varepsilon}) dependence on ε\varepsilon from Theorem C.1, and simply replaces vmaxv_{\max} with ln(vmax)\ln(v_{\max}). The main idea is to target additive error εf(vmax)\varepsilon\cdot f(v_{\max}) in the domain of “large xx” (targeting an additive ε\varepsilon in this range would require polynomial dependence on vmaxv_{\max}, which can be avoided when f(vmax)f(v_{\max}) is big), but additive error ε\varepsilon in the domain of “small xx” (where the interval has length at most 11, so the dependence on the length LL in Theorem C.1 is harmless). So in order to get the improved guarantee, our bound needs the flexibility to take either the additive or the multiplicative approximation in the different ranges.

To complete the proof of Theorem 4.2, we additionally provide a much simpler argument showing that the dependence on vmaxv_{\max} can be removed at the cost of suboptimal dependence on ε\varepsilon.

Lemma C.4.

For any ε>0\varepsilon>0 and any increasing concave function f:[0,vmax][0,)f:[0,v_{\max}]\to[0,\infty) such that f(0)=0f(0)=0, there exists a sequence XX of at most O(1/ε)O(1/\varepsilon) points such that for all x[0,vmax]x\in[0,v_{\max}],

f(x)εf(vmax)f~X(x)f(x).f(x)-\varepsilon f(v_{\max})\leq\tilde{f}_{X}(x)\leq f(x).
Proof.

We divide the range [0,f(vmax)][0,f(v_{\max})] of the function ff into 1/ε1/\varepsilon intervals and take one point from each interval. Formally, let xi=minx{xf(x)iεf(vmax)}x_{i}=\min_{x}\{x\mid f(x)\geq i\varepsilon f(v_{\max})\}, for all i{0,,1/ε}i\in\{0,\cdots,\lfloor{1/\varepsilon}\rfloor\}. Note that since ff is continuous (it is concave), f(xi)=iεf(vmax)f(x_{i})=i\varepsilon f(v_{\max}).

We define the sequence XX to be the points {xi}i=01/ε\{x_{i}\}_{i=0}^{\lfloor{1/\varepsilon}\rfloor} followed by x1/ε+1=vmaxx_{\lfloor{1/\varepsilon}\rfloor+1}=v_{\max}. We prove that f~X\tilde{f}_{X} is the required polygon approximation. For any point x[0,vmax]x\in[0,v_{\max}], let i{0,1,,1/ε}i\in\{0,1,\cdots,\lfloor{1/\varepsilon}\rfloor\} be such that xix<xi+1x_{i}\leq x<x_{i+1}. Since ff is increasing, we have f(xi)f(x)f(xi+1)f(x_{i})\leq f(x)\leq f(x_{i+1}). This gives iεf(vmax)f(x)(i+1)εf(vmax)i\varepsilon f(v_{\max})\leq f(x)\leq(i+1)\varepsilon f(v_{\max}).

Since f~X(x)\tilde{f}_{X}(x) is a piecewise linear function, we have f(xi)=f~X(xi)f~X(x)f~X(xi+1)=f(xi+1)f(x_{i})=\tilde{f}_{X}(x_{i})\leq\tilde{f}_{X}(x)\leq\tilde{f}_{X}(x_{i+1})=f(x_{i+1}). Again, we get iεf(vmax)f~X(x)(i+1)εf(vmax)i\varepsilon f(v_{\max})\leq\tilde{f}_{X}(x)\leq(i+1)\varepsilon f(v_{\max}).

Combining the two inequalities, we get f(x)f~X(x)εf(vmax)f(x)-\tilde{f}_{X}(x)\leq\varepsilon f(v_{\max}). Since ff is concave, any piecewise linear segment joining two points on the function lies below the function and we have:

0f(x)f~X(x)εf(vmax).0\leq f(x)-\tilde{f}_{X}(x)\leq\varepsilon f(v_{\max}).
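The level-set construction in the proof of Lemma C.4 is easy to implement; the sketch below (with bisection standing in for the exact minimum in the definition of xix_{i}) assumes ff is increasing, concave, and f(0)=0f(0)=0:

```python
def level_set_nodes(f, vmax, eps, tol=1e-9):
    """Nodes x_i = min{x : f(x) >= i * eps * f(vmax)} from the proof of
    Lemma C.4, located by bisection. Assumes f is increasing, concave,
    and f(0) = 0."""
    fmax = f(vmax)
    xs = []
    for i in range(int(1 / eps) + 1):
        target = i * eps * fmax
        lo, hi = 0.0, vmax
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) >= target:
                hi = mid
            else:
                lo = mid
        xs.append(hi)
    if vmax - xs[-1] > tol:
        xs.append(vmax)            # the extra final node x_{floor(1/eps)+1}
    return xs
```

Each consecutive pair of nodes sandwiches ff between two adjacent level sets, which is exactly why the chord through them is within εf(vmax)\varepsilon f(v_{\max}) of ff.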

Given any function ff, we can use either of the two polygon approximation schemes given by Lemma C.3 and Lemma C.4. Taking the better of these two schemes, we get a proof of Theorem 4.2.

Proof of Theorem 4.2.

Let X1X_{1} and X2X_{2} be the sequences given by Lemma C.4 and Lemma C.3 respectively. Suppose 1/εlogvmax1/\varepsilon\leq\log v_{\max}. In this case, the size of X1X_{1} is at most O(1/ε)=O(min{1/ε,logvmaxε})O(1/\varepsilon)=O\left(\min\left\{1/\varepsilon,\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right\}\right). Now suppose 1/εlogvmax1/\varepsilon\geq\log v_{\max}. In this case, the size of X2X_{2} is at most O(logvmax+logvmaxε)=O(logvmaxε)=O(min{1/ε,logvmaxε})O\left(\log v_{\max}+\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right)=O\left(\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right)=O\left(\min\left\{1/\varepsilon,\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right\}\right). Thus, the smaller of the two sequences is of size at most O(min{1/ε,logvmaxε})O\left(\min\left\{1/\varepsilon,\sqrt{\frac{\log v_{\max}}{\varepsilon}}\right\}\right). ∎

C.3 Proof of Proposition 4.1

Proof of Proposition 4.1.

Consider any increasing concave function ff and a polygon approximation f~X\tilde{f}_{X} of ff. Let X={0=x0,,xk=vmax}X=\{0=x_{0},\cdots,x_{k}=v_{\max}\}. Since ff is concave, the function f~X\tilde{f}_{X} lies ‘below’ the function ff. Consider now the tangents to the function ff drawn at points in XX. Since ff is concave, these tangents are always ‘above’ the function ff. The tangents can be stitched together to define a function gg that is always above the function ff. Formally, define g(x)=mini{0,,k}{f(xi)+min{f(xi)(xxi),f+(xi)(xxi)}}g(x)=\min_{i\in\{0,\ldots,k\}}\{f(x_{i})+\min\{f^{-}(x_{i})\cdot(x-x_{i}),f^{+}(x_{i})\cdot(x-x_{i})\}\}. Observe that f(xi)=g(xi)f(x_{i})=g(x_{i}), f(xi)=g(xi)f^{-}(x_{i})=g^{-}(x_{i}), and f+(xi)=g+(xi)f^{+}(x_{i})=g^{+}(x_{i}) for all ii, giving us the second bullet. Observe also that gg has at most 2k2k different slopes (because x0x_{0} and xkx_{k} each only contribute one possibility, while the rest contribute two possibilities), so it is piecewise-linear with at most 2k2k segments.
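For a differentiable concave ff (so that f=f+f^{-}=f^{+} and the inner minimum over the two one-sided tangents collapses), the envelope gg from this construction can be sketched as:

```python
def tangent_envelope(f, fprime, nodes):
    """The envelope g from the proof of Proposition 4.1, specialized to a
    differentiable concave f: the pointwise minimum of the tangents to f
    at the given nodes. By concavity g >= f everywhere, with g = f (in
    value and slope) at each node."""
    def g(x):
        return min(f(t) + fprime(t) * (x - t) for t in nodes)
    return g
```

With k+1k+1 interior nodes this gg has at most 2k2k distinct slopes, matching the segment count in the proof.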

It remains to show that there exists a polygon approximation f~X\tilde{f}_{X} we can start with so that there is no polygon approximation of gg using kk segments for additive error ε\varepsilon. Let XX be k+1k+1 points so as to minimize maxx{f(x)f~X(x)}\max_{x}\{f(x)-\tilde{f}_{X}(x)\}, and call this value δ\delta (by hypothesis, δ>ε\delta>\varepsilon). This guarantees us the following property on XX:

maxx[xi,xi+1]{f(x)f~X(x)}=δ for all i.\max_{x\in[x_{i},x_{i+1}]}\{f(x)-\tilde{f}_{X}(x)\}=\delta\text{ for all }i.

If not, then there is some ii where maxx[xi,xi+1]{f(x)f~X(x)}<δ\max_{x\in[x_{i},x_{i+1}]}\{f(x)-\tilde{f}_{X}(x)\}<\delta. xix_{i} could then be moved a teeny bit to the left (making the approximation within the interval [xi,xi+1][x_{i},x_{i+1}] a teeny bit worse) so that maxx[xi,xi+1]{f(x)f~X(x)}\max_{x\in[x_{i},x_{i+1}]}\{f(x)-\tilde{f}_{X}(x)\} remains <δ<\delta, but maxx[xi1,xi]{f(x)f~X(x)}\max_{x\in[x_{i-1},x_{i}]}\{f(x)-\tilde{f}_{X}(x)\} becomes <δ<\delta as well (as it also makes the approximation within the interval [xi1,xi][x_{i-1},x_{i}] a teeny bit better, and it was δ\leq\delta to begin with). Iterating this procedure would then let us make maxx[xj,xj+1]{f(x)f~X(x)}<δ\max_{x\in[x_{j},x_{j+1}]}\{f(x)-\tilde{f}_{X}(x)\}<\delta for all jij\leq i. And a similar argument for moving xi+1x_{i+1} a teeny bit to the right allows us to claim the same for jij\geq i. At the end, we have now made maxx{f(x)f~X(x)}\max_{x}\{f(x)-\tilde{f}_{X}(x)\} strictly less than δ\delta, a contradiction.

Now that we have maxx[xi,xi+1]{f(x)f~X(x)}=δ>ε\max_{x\in[x_{i},x_{i+1}]}\{f(x)-\tilde{f}_{X}(x)\}=\delta>\varepsilon for all ii, observe that for any function g()g(\cdot) with g(xi)=f(xi)g(x_{i})=f(x_{i}) for all ii, and g(x)f(x)g(x)\geq f(x) for all xx, any YY such that g~Y()\tilde{g}_{Y}(\cdot) is a polygon approximation with additive error ε\varepsilon requires at least one point yY[xi,xi+1]y\in Y\cap[x_{i},x_{i+1}]. Hence gg also has no polygon approximation using kk segments with additive error ε\varepsilon. ∎

With this in mind, we’ll now restrict attention to piecewise-linear functions with not too many segments. To this end, first consider a piecewise-linear function with just two segments. The first segment is of length l1l_{1} with slope m1m_{1}. Similarly, the second segment is of length l2l_{2} and slope m2<m1m_{2}<m_{1}. If we wish to approximate this function using just one segment, the segment should have length l1+l2l_{1}+l_{2} and slope m1l1+m2l2l1+l2\frac{m_{1}l_{1}+m_{2}l_{2}}{l_{1}+l_{2}}. The error in this approximation will be (m1m2)l1l2l1+l2\frac{(m_{1}-m_{2})l_{1}l_{2}}{l_{1}+l_{2}}. This calculation says that the error roughly depends on the product of the slope difference and the lengths. For our lower bound, we would wish to have many such segments combined together so that the error in all of the segments is the same. For a concave function, the slope keeps decreasing as xx grows. Thus, if the error has to be the same in every segment, the lengths of each segment have to keep increasing so that the product is the same. Observe that the function log(x)\log(x) satisfies this property.
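The two-segment calculation above is mechanical to verify; a small sketch (with illustrative lengths and slopes):

```python
def one_segment_error(l1, m1, l2, m2):
    """Error of replacing the two-segment function (slope m1 for length l1,
    then slope m2 < m1 for length l2) by the single chord between its
    endpoints; the maximum is attained at the breakpoint x = l1."""
    chord_slope = (m1 * l1 + m2 * l2) / (l1 + l2)
    return (m1 - chord_slope) * l1     # f(l1) - chord(l1)
```

A short algebraic simplification shows this equals (m1m2)l1l2l1+l2\frac{(m_{1}-m_{2})l_{1}l_{2}}{l_{1}+l_{2}}, the expression in the text.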

With this direction in mind, we now demonstrate a function that we call LPLkLPL_{k} (for logarithmically-piecewise-linear) that can’t be approximated using fewer than kk segments. We chose this name because the function has derivative 1/2i1/2^{i} in the range [3(2i+12),3(2i+22)][3(2^{i+1}-2),3(2^{i+2}-2)], which is (roughly) a constant factor away from the derivative 1/(x+6)1/(x+6) of the logarithmic function ln(x+6)\ln(x+6). In fact, what we do can be seen as stitching together appropriate tangents to the logarithmic curve. For a value of ε<1/100\varepsilon<1/100, let k=1100εk=\frac{1}{100\varepsilon}. Consider the following function defined on [0,3(2k+12)=vmax][0,3(2^{k+1}-2)=v_{\max}]:

LPLk(x)={x,0x3(222)12(x3(222))+6,3(222)<x3(232)14(x3(232))+12,3(232)<x3(242)12k1(x3(2k2))+6(k1),3(2k2)<x3(2k+12).LPL_{k}(x)=\begin{cases}x,&0\leq x\leq 3(2^{2}-2)\\ \frac{1}{2}(x-3(2^{2}-2))+6,&3(2^{2}-2)<x\leq 3(2^{3}-2)\\ \frac{1}{4}(x-3(2^{3}-2))+12,&3(2^{3}-2)<x\leq 3(2^{4}-2)\\ \ \ \ \ \vdots&\\ \frac{1}{2^{k-1}}(x-3(2^{k}-2))+6(k-1),&3(2^{k}-2)<x\leq 3(2^{k+1}-2).\\ \end{cases}

The function LPLkLPL_{k} satisfies ε(1+LPLk(vmax))=ε+6kε12\varepsilon(1+LPL_{k}(v_{\max}))=\varepsilon+6k\varepsilon\leq\frac{1}{2}, as well as LPLk+(0)1LPL_{k}^{+}(0)\leq 1 and LPLk(vmax)0LPL_{k}^{-}(v_{\max})\geq 0. We prove that any good polygon approximation for LPLk(x)LPL_{k}(x) must have Ω(k)\Omega(k) segments. Since k=1100εk=\frac{1}{100\varepsilon}, this proves that a dependence of 1/ε1/\varepsilon is unavoidable if we want to be independent of vmaxv_{\max}. Since in this example logvmaxε=Θ(k)\sqrt{\frac{\log v_{\max}}{\varepsilon}}=\Theta(k), this also shows that the bound in Theorem 4.2 is tight. The idea of our proof is to identify disjoint intervals such that any sequence XX that defines a good polygon approximation must have at least one point in every such interval. We will identify Θ(k)\Theta(k) such intervals for LPLkLPL_{k}.

Lemma C.5.

Suppose there exists a sequence X={xi}i1X=\{x_{i}\}_{i\geq 1} of points such that LPLk(x)1/2LPLk~X(x)LPLk(x)LPL_{k}(x)-1/2\leq\widetilde{LPL_{k}}_{X}(x)\leq LPL_{k}(x) for all x[0,vmax]x\in[0,v_{\max}]. The sequence XX has at least one point in the range Ii=[3(2i2)2i,3(2i2)+2i]I_{i}=[3(2^{i}-2)-2^{i},3(2^{i}-2)+2^{i}] for all 2ik2\leq i\leq k.

Proof of Lemma C.5.

Suppose for the sake of contradiction that there is an ii for which XIi=X\cap I_{i}=\emptyset. We prove that the approximation fails at the midpoint of IiI_{i} (call this point mm). Since no point of XX lies in IiI_{i} and LPLkLPL_{k} is concave, LPLk~X(m)\tilde{LPL_{k}}_{X}(m) is at most 12(LPLk(3(2i2)2i)+LPLk(3(2i2)+2i))\frac{1}{2}(LPL_{k}(3(2^{i}-2)-2^{i})+LPL_{k}(3(2^{i}-2)+2^{i})). Thus, LPLk(m)LPLk~X(m)6(i1)12(6(i2)+2+6(i1)+2)=1.LPL_{k}(m)-\tilde{LPL_{k}}_{X}(m)\geq 6(i-1)-\frac{1}{2}\left(6(i-2)+2+6(i-1)+2\right)=1. ∎
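The function LPLkLPL_{k} and the midpoint gap used in this proof can be checked numerically; the sketch below encodes segment jj as slope 1/2j1/2^{j} on (3(2j+12),3(2j+22)](3(2^{j+1}-2),3(2^{j+2}-2)]:

```python
def lpl(k, x):
    """LPL_k from the text: segment j (j = 0, ..., k-1) has slope 1/2^j on
    (3*(2^(j+1) - 2), 3*(2^(j+2) - 2)] and left-endpoint value 6j."""
    for j in range(k):
        left = 3 * (2 ** (j + 1) - 2)
        if x <= 3 * (2 ** (j + 2) - 2):
            return (x - left) / 2 ** j + 6 * j
    raise ValueError("x outside [0, vmax]")
```

For every 2ik2\leq i\leq k, the value at the midpoint 3(2i2)3(2^{i}-2) of IiI_{i} exceeds the average of the values at the interval's endpoints by exactly 11, which is the gap used in the proof.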

The following corollary follows immediately from Lemma C.5.

Corollary C.6.

An ε(1+LPLk(vmax))\varepsilon(1+LPL_{k}(v_{\max})) additive polygon approximation of LPLkLPL_{k} above requires at least k1k-1 segments.

Corollary C.6 shows that the bounds of Theorem 4.2 are tight, both in the case that there is no dependence on vmaxv_{\max} and in the case that we allow dependence on vmaxv_{\max}. Moreover, Proposition 4.1 shows that “LPL-like” functions exhibit the worst-case gaps for whatever kinds of guarantees are desired.

C.4 From Polygon Approximation to Approximate Auctions

In this subsection we prove Proposition C.7 and Corollary 4, establishing the connection between “small” polygon approximations and approximate auctions with low menu complexity. The high-level idea of the proof of Proposition C.7 is that the revenue generated by a mechanism on days ii through nn is equal to a weighted sum of the function Ri()R_{\geq i}(\cdot) evaluated at the points where the allocation curve changes (see subsection 2.1). If a good polygon approximation of RiR_{\geq i} exists, then every point where the allocation curve changes can be substituted by the endpoints of the segment of the polygon approximation that contains it. The error in the polygon approximation becomes lost revenue after this transformation, and the number of segments (or endpoints) in the polygon approximation bounds the menu complexity of the transformed mechanism. Thus, a good and small polygon approximation generates an almost-optimal mechanism with low menu complexity. The proof of Corollary 4 relies on iteratively invoking Proposition C.7 from the first day to the last while keeping a check on the total revenue lost, compared to the optimal, over all days.

Proposition C.7.

Consider a FedEx instance with nn deadlines. For i{1,2,,n}i\in\{1,2,\cdots,n\}, let gg be the function R~i\tilde{R}_{\geq i} defined in Definition 2.3, and let XX be a sequence of kk points in [0,ri][0,r_{\geq i}] such that for all xrix\leq r_{\geq i}, we have g(x)εg~X(x)g(x)g(x)-\varepsilon\leq\tilde{g}_{X}(x)\leq g(x). Then for any mechanism MM, there exists a mechanism with the following properties:

  • The menu offered on days 1,2,,i11,2,\cdots,i-1 is the same as MM.

  • The ii-deadline menu complexity is at most 2k2k.

  • The revenue is at least 𝖮𝖯𝖳M,iε\mathsf{OPT}_{M,i}-\varepsilon.

Here, 𝖮𝖯𝖳M,i\mathsf{OPT}_{M,i} denotes the optimal revenue for any mechanism that offers the same menu as MM on deadlines 1,2,,i11,2,\cdots,i-1.

Intuitively, every segment of g~X\tilde{g}_{X} corresponds to a different menu option in the approximate mechanism, and the fact that g~X(x)\tilde{g}_{X}(x) and g(x)=R~i(x)g(x)=\tilde{R}_{\geq i}(x) are close for all xx implies that the revenue of the mechanism based on g~X\tilde{g}_{X} isn’t far from optimal (by subsection 2.1). The additional factor of two comes from the fact that R~i\tilde{R}_{\geq i} is ironed, so to “set price pp” in R~i\tilde{R}_{\geq i}, we might need to randomize over two prices in RiR_{\geq i}.
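The randomization step can be sketched as follows, with λ\lambda chosen exactly so that the expected price is preserved (as in the proof of Proposition C.7):

```python
def split_price(p, x_lo, x_hi):
    """Replace a day-i price p lying in an ironed interval [x_lo, x_hi]
    by a lottery over the interval's endpoints, preserving the expected
    price. Returns a list of (price, probability) pairs."""
    if x_lo == x_hi:
        return [(p, 1.0)]
    lam = (x_hi - p) / (x_hi - x_lo)   # p = lam * x_lo + (1 - lam) * x_hi
    return [(x_lo, lam), (x_hi, 1.0 - lam)]
```

Because R~i\tilde{R}_{\geq i} is linear on the ironed interval, the expected revenue λg(xl)+(1λ)g(xr)\lambda g(x_{l})+(1-\lambda)g(x_{r}) equals g(p)g(p), so the conversion loses nothing beyond the polygon approximation error.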

Proof of Proposition C.7.

We assume without loss of generality that the mechanism MM does the optimal pricing on days ii through nn (given the prices it sets on days 11 through i1i-1). Thus, the revenue generated by MM is 𝖮𝖯𝖳M,i\mathsf{OPT}_{M,i}. The menu offered by any such mechanism on day ii can be seen as a distribution ff over prices {pj}\{p_{j}\}. The prices pjp_{j} are all at most rir_{\geq i}, as MM behaves optimally on day ii (see A.1). Since the mechanism behaves optimally from day i+1i+1 onwards, the revenue generated by MM on days ii through nn is jf(pj)Ri(pj)\sum_{j}f(p_{j})R_{\geq i}(p_{j}) by subsection 2.1.

Let XX be the sequence of kk points such that g(x)εg~X(x)g(x)g(x)-\varepsilon\leq\tilde{g}_{X}(x)\leq g(x) for all xrix\leq r_{\geq i}. Since g=R~ig=\tilde{R}_{\geq i} is the upper concave envelope of RiR_{\geq i}, for every point xrix\leq r_{\geq i}, there exist points xlxx_{l}\leq x and xrxx_{r}\geq x in [0,ri][0,r_{\geq i}] such that g(xl)=Ri(xl)g(x_{l})=R_{\geq i}(x_{l}), g(xr)=Ri(xr)g(x_{r})=R_{\geq i}(x_{r}), and for any λ[0,1]\lambda\in[0,1], we have g(λxl+(1λ)xr)=λg(xl)+(1λ)g(xr)g\left(\lambda x_{l}+(1-\lambda)x_{r}\right)=\lambda g(x_{l})+(1-\lambda)g(x_{r}). For every point xXx\in X, we add the points xlx_{l} and xrx_{r} to XX and remove the point xx. Adding the points xlx_{l} and xrx_{r} can only improve the quality of the polygon approximation. Further, since gg is linear on the interval [xl,xr][x_{l},x_{r}] that contains xx, removing xx doesn’t affect the quality of the polygon approximation. Thus, the new set XX satisfies

g(x)εg~X(x)g(x).g(x)-\varepsilon\leq\tilde{g}_{X}(x)\leq g(x). (2)

Further, by construction all elements vXv\in X satisfy

g(v)=Ri(v).g(v)=R_{\geq i}(v). (3)

The set XX now has at most 2k2k points. For any xrix\leq r_{\geq i}, we let x¯\underline{x} (resp. x¯\overline{x}) denote the largest (resp. smallest) value in XX that is at most (at least) xx. With this notation, if x=λxx¯+(1λx)x¯x=\lambda_{x}\underline{x}+(1-\lambda_{x})\overline{x}, then we have that

g~X(x)=λxg~X(x¯)+(1λx)g~X(x¯)=λxg(x¯)+(1λx)g(x¯).\tilde{g}_{X}(x)=\lambda_{x}\tilde{g}_{X}(\underline{x})+(1-\lambda_{x})\tilde{g}_{X}(\overline{x})=\lambda_{x}g(\underline{x})+(1-\lambda_{x})g(\overline{x}). (4)

We are now ready to define a mechanism MM^{\prime} with high revenue and low ii-deadline menu complexity. The mechanism MM^{\prime} is defined as follows:

  • Mimic MM on days 1,2,,i11,2,\cdots,i-1.

  • On day ii, if pjp_{j} has mass f(pj)f(p_{j}), add mass f(pj)λpjf(p_{j})\lambda_{p_{j}} to pj¯\underline{p_{j}} and mass f(pj)(1λpj)f(p_{j})(1-\lambda_{p_{j}}) to pj¯\overline{p_{j}}. This generates a new distribution ff^{\prime} on XX which defines our menu for day ii.

  • Do the optimal pricing days i+1i+1 onwards given the prices on day 11 through ii.

Since the menu offered on day ii is defined by a distribution over XX, its size is at most 2k2k, the size of XX. We finish the proof by showing that MM^{\prime} is feasible and has high revenue.

Claim C.1.

The mechanism MM^{\prime} defined above is feasible.

Proof.

We only need to check the intra-day incentive compatibility constraints. For days 11 through i1i-1, MM^{\prime} mimics MM and thus these constraints are satisfied. From day i+1i+1 onwards, MM^{\prime} prices optimally and thus satisfies the constraints. It remains to argue about the constraints for day ii. We wish to prove that the utility of any bidder with type (x,i)(x,i) in mechanism MM^{\prime} is at least that of the bidder with type (x,i1)(x,i-1). We know that the utility of any bidder with type (x,i)(x,i) in mechanism MM is at least that of the bidder with type (x,i1)(x,i-1). Since the mechanisms MM and MM^{\prime} are the same on days 11 through i1i-1, it is sufficient to prove that the utility of any bidder with type (x,i)(x,i) in mechanism MM^{\prime} is at least that of the bidder with type (x,i)(x,i) in mechanism MM. This follows directly from the expression of utility pjxf(pj)(xpj)\sum_{p_{j}\leq x}f(p_{j})(x-p_{j}). We have

Xvxf(v)(xv)\displaystyle\sum_{X\ni v\leq x}f^{\prime}(v)(x-v)
=pjx¯f(pj)λpj(xpj¯)+f(pj)(1λpj)(xpj¯)+pj(x¯,x]f(pj)λpj(xpj¯)\displaystyle=\sum_{p_{j}\leq\underline{x}}f(p_{j})\lambda_{p_{j}}(x-\underline{p_{j}})+f(p_{j})(1-\lambda_{p_{j}})(x-\overline{p_{j}})+\sum_{p_{j}\in(\underline{x},x]}f(p_{j})\lambda_{p_{j}}(x-\underline{p_{j}}) (Definition of ff^{\prime})
=pjx¯f(pj)(xpj)+pj(x¯,x]f(pj)(1pjpj¯pj¯pj¯)(xpj¯)\displaystyle=\sum_{p_{j}\leq\underline{x}}f(p_{j})(x-p_{j})+\sum_{p_{j}\in(\underline{x},x]}f(p_{j})\bigg{(}1-\frac{p_{j}-\underline{p_{j}}}{\overline{p_{j}}-\underline{p_{j}}}\bigg{)}(x-\underline{p_{j}}) (Definition of λpj\lambda_{p_{j}})
pjx¯f(pj)(xpj)+pj(x¯,x]f(pj)(1pjpj¯xpj¯)(xpj¯)\displaystyle\geq\sum_{p_{j}\leq\underline{x}}f(p_{j})(x-p_{j})+\sum_{p_{j}\in(\underline{x},x]}f(p_{j})\bigg{(}1-\frac{p_{j}-\underline{p_{j}}}{x-\underline{p_{j}}}\bigg{)}(x-\underline{p_{j}}) (Since pj¯>x\overline{p_{j}}>x)
=pjx¯f(pj)(xpj)+pj(x¯,x]f(pj)(xpj)=pjxf(pj)(xpj).\displaystyle=\sum_{p_{j}\leq\underline{x}}f(p_{j})(x-p_{j})+\sum_{p_{j}\in(\underline{x},x]}f(p_{j})(x-p_{j})=\sum_{p_{j}\leq x}f(p_{j})(x-p_{j}).

Claim C.2.

The revenue generated by MM^{\prime} is at least 𝖮𝖯𝖳M,iε\mathsf{OPT}_{M,i}-\varepsilon.

Proof.

First, we note MM and MM^{\prime} are identical for the first i1i-1 days. Thus, the revenue generated on the first i1i-1 days is the same for both MM and MM^{\prime}. Since MM^{\prime} prices optimally day i+1i+1 onwards, the revenue generated on day ii through nn is vXf(v)Ri(v)\sum_{v\in X}f^{\prime}(v)R_{\geq i}(v). We have

vXf(v)Ri(v)\displaystyle\sum_{v\in X}f^{\prime}(v)R_{\geq i}(v) =vXf(v)g(v)\displaystyle=\sum_{v\in X}f^{\prime}(v)g(v) (Equation 3)
=vjf(vj)λvjg(vj¯)+f(vj)(1λvj)g(vj¯)\displaystyle=\sum_{v_{j}}f(v_{j})\lambda_{v_{j}}g(\underline{v_{j}})+f(v_{j})(1-\lambda_{v_{j}})g(\overline{v_{j}}) (Definition of ff^{\prime})
=vjf(vj)g~X(vj)\displaystyle=\sum_{v_{j}}f(v_{j})\tilde{g}_{X}(v_{j}) (Equation 4)
vjf(vj)(g(vj)ε)\displaystyle\geq\sum_{v_{j}}f(v_{j})\left(g(v_{j})-\varepsilon\right) (Equation 2)
vjf(vj)Ri(vj)ε.\displaystyle\geq\sum_{v_{j}}f(v_{j})R_{\geq i}(v_{j})-\varepsilon.

This analysis proves that the total revenue generated by MM^{\prime} is at most ε\varepsilon less than that generated by MM. In other words, it is at least 𝖮𝖯𝖳M,iε\mathsf{OPT}_{M,i}-\varepsilon.

These two claims combined suffice to prove the proposition. ∎

Having described the construction of low ii-deadline menu complexity mechanisms, we repeat this construction nn times (once for each deadline) to get our approximately optimal mechanism: iteratively applying Proposition C.7 takes us from the optimal mechanism to an entire menu of low menu complexity, losing at most εi\varepsilon_{i} revenue in step ii. Using the above result, we prove Corollary 4.

Proof of Corollary 4.

We prove that for all i{0,1,,n}i\in\{0,1,\cdots,n\}, there exists a mechanism with jj-deadline menu complexity of kjk_{j} for all 1ji1\leq j\leq i whose revenue is at least 𝖮𝖯𝖳j=1iεj\mathsf{OPT}-\sum_{j=1}^{i}\varepsilon_{j}. For i=ni=n, this is the same as the statement of the corollary. This proof will proceed via induction on ii. For i=0i=0, the statement is trivial. We assume the statement for i1i-1 and prove it for ii.

Let Mi1M_{i-1} be the mechanism that is promised by the induction hypothesis and let 𝖱𝖾𝗏i1𝖮𝖯𝖳j=1i1εj\mathsf{Rev}_{i-1}\geq\mathsf{OPT}-\sum_{j=1}^{i-1}\varepsilon_{j} be its revenue.

We invoke Proposition C.7 on XiX_{i} and Mi1M_{i-1} to get a mechanism MiM_{i} with the following properties:

  • The menu offered by MiM_{i} on days 1,2,,i11,2,\cdots,i-1 is the same as Mi1M_{i-1}.

  • The ii-deadline menu complexity is at most 2ki2k_{i}.

  • The revenue is at least 𝖮𝖯𝖳Mi1,iεi\mathsf{OPT}_{M_{i-1},i}-\varepsilon_{i}.

Properties 11 and 22 imply that the jj-deadline menu complexity of MiM_{i} is at most kjk_{j} for all 1ji1\leq j\leq i. The revenue of MiM_{i} is at least 𝖮𝖯𝖳Mi1,iεi𝖱𝖾𝗏i1εi𝖮𝖯𝖳j=1i1εjεi=𝖮𝖯𝖳j=1iεj\mathsf{OPT}_{M_{i-1},i}-\varepsilon_{i}\geq\mathsf{Rev}_{i-1}-\varepsilon_{i}\geq\mathsf{OPT}-\sum_{j=1}^{i-1}\varepsilon_{j}-\varepsilon_{i}=\mathsf{OPT}-\sum_{j=1}^{i}\varepsilon_{j}. This completes the proof.

Finally, we may complete the proof of Theorem 1.2.

Proof of Theorem 1.2.

For any FedEx instance where the bidder’s types have integral support 1,2,,vmax1,2,\cdots,v_{\max}, observe that the optimal revenue is at least 11, as 11 is the revenue generated by the auction that offers the price 11 on all deadlines. Further, since R~i(vmax)𝖮𝖯𝖳\tilde{R}_{\geq i}(v_{\max})\leq\mathsf{OPT} for all deadlines ii, we have 1+R~i(vmax)𝖮𝖯𝖳+𝖮𝖯𝖳=2𝖮𝖯𝖳1+\tilde{R}_{\geq i}(v_{\max})\leq\mathsf{OPT}+\mathsf{OPT}=2\mathsf{OPT}.

Note that all revenue curves R~i=gi\tilde{R}_{\geq i}=g_{i} when restricted to [0,ri][0,r_{\geq i}] satisfy the requirements of Theorem 4.2. This implies that for all ii, there exists sequences XiX_{i} of at most O(min{n/ε,nεlogvmax})O\left(\min\left\{n/\varepsilon,\sqrt{\frac{n}{\varepsilon}\log v_{\max}}\right\}\right) points such that for all x[0,ri],x\in[0,r_{\geq i}],

gi(x)ε2n(1+gi(vmax))gi~Xi(x)gi(x).g_{i}(x)-\frac{\varepsilon}{2n}\left(1+g_{i}(v_{\max})\right)\leq\tilde{g_{i}}_{X_{i}}(x)\leq g_{i}(x).

We set εi=ε2n(1+gi(vmax))\varepsilon_{i}=\frac{\varepsilon}{2n}\left(1+g_{i}(v_{\max})\right) and use Corollary 4 to get that there exists a mechanism with menu complexity at most O(nmin{n/ε,nεlogvmax})O\left(n\min\left\{n/\varepsilon,\sqrt{\frac{n}{\varepsilon}\log v_{\max}}\right\}\right) whose revenue is at least 𝖮𝖯𝖳i=1nεi=𝖮𝖯𝖳i=1nε2n(1+gi(vmax))𝖮𝖯𝖳i=1nε𝖮𝖯𝖳n=(1ε)𝖮𝖯𝖳\mathsf{OPT}-\sum_{i=1}^{n}\varepsilon_{i}=\mathsf{OPT}-\sum_{i=1}^{n}\frac{\varepsilon}{2n}\left(1+g_{i}(v_{\max})\right)\geq\mathsf{OPT}-\sum_{i=1}^{n}\varepsilon\frac{\mathsf{OPT}}{n}=(1-\varepsilon)\mathsf{OPT}.

Appendix D Analysis omitted in Section 5

D.1 The instance

In this section, we describe our instance of the FedEx problem that is hard to approximate using small menus. Before delving into the details of our parameters, we give a brief high level intuition.

In Section 4.1, we described a function LPLkLPL_{k} that is hard to polygon approximate. The function LPLkLPL_{k} provided a lower bound for Theorem 4.2 that was tight (modulo constant factors). The idea behind this example was to have a piecewise linear function such that the slope of the constituent segments decreases gradually in a controlled way. Because of the intimate connection between polygon approximation and menu complexity (Proposition C.7), similar ideas should also help in constructing a FedEx instance that is hard to approximate using small menus.

The construction of the hard FedEx instance can be expected to have many additional complications. The complications arise from the fact that each revenue curve RiR_{\geq i} in a FedEx instance is formed by appropriately adding revenue curves of multiple distributions (see Definition 2.3) and R~i\tilde{R}_{\geq i} is obtained by ‘ironing’ RiR_{\geq i}. Since the size of the optimal menu only increases when prices ‘split’ across ironed intervals, it is necessary to have multiple (polynomial, in our case) ironed intervals in the function R~i\tilde{R}_{\geq i}.

The presence of multiple ironed intervals in R~i\tilde{R}_{\geq i} is, however, not sufficient. It is easy to see why. Imagine an example where all the revenue curves RiR_{i} are generated from the same underlying distribution. If this is the case, all the curves RiR_{\geq i} might have the same ironed intervals. (This is not always true, as whether a particular interval is ironed or not for a given deadline depends on many other factors, e.g., the distribution qiq_{i} across the various days.) If the ironed intervals in the curves R~i\tilde{R}_{\geq i} for all ii coincide, then even the optimal revenue auction would require only 11 price on each day. This is because only those prices ‘split’ that lie inside an ironed interval, and if all the ironed intervals coincide, then no price can ever lie strictly inside an interval. The solution is to have the ironed intervals ‘nested’ so that the endpoints of an interval on day ii lie inside the intervals on day i+1i+1.

The two points raised above concern splitting in general and were also true of our worst-case example described in Section 3. Since we want a lower bound against all approximately-optimal auctions, we additionally want the slopes of consecutive segments to be very different (as in LPL_{k}). This ensures that a good approximation needs at least one menu option for each segment.

With these ideas in mind, we describe our hard instance. We assume that the bidder’s type is drawn from a distribution supported on \{1,2,\cdots,v_{\max}\}\times\{1,2,\cdots,n\}. The value of v_{\max} in our example is 5n and the distribution q over days is uniform. We note explicitly that we do not include the factors q_{i} in our calculations: since they only scale all revenue curves by 1/n, they do not affect the quality of a (1-\epsilon)-approximation. The marginal distribution F_{i} on day i is designed so that the revenue curve \tilde{R}_{\geq i} has i+1 segments. The slopes of the first i segments decrease geometrically with common ratio 1/\lambda, where \lambda=1+\frac{1}{4n}. This value of \lambda is large enough that a 1-O(1/n^{2}) approximation requires one menu option for each segment. It also satisfies:

Claim D.1.

For ni0n\geq i\geq 0, 1λi>34\frac{1}{\lambda^{i}}>\frac{3}{4}.

Proof.
\frac{3}{4}\lambda^{i}\leq\frac{3}{4}\Big{(}1+\frac{1}{4n}\Big{)}^{n}\leq\frac{3}{4}\mathrm{e}^{\frac{1}{4}}<1.
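For readers who want a quick numerical sanity check of Claim D.1 (in addition to the proof above), the inequality is easy to test directly; a minimal sketch in Python:

```python
# Claim D.1: 1/lambda^i > 3/4 for all 0 <= i <= n, where lambda = 1 + 1/(4n).
for n in range(1, 201):
    lam = 1 + 1 / (4 * n)
    assert all(lam ** -i > 0.75 for i in range(n + 1))
```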

As we define the slopes to be in geometric progression, the geometric sum 1+\frac{1}{\lambda}+\cdots will appear multiple times in our description and analysis. To avoid writing all the terms everywhere, we define S_{0}=n and S_{i}=n+\sum_{j=0}^{i-1}\lambda^{-j} for all i\geq 1.

The (i+1)^{\text{th}} segment of \tilde{R}_{\geq i} will have slope (n+1-i)C, where C=1/2. This last segment will split into two segments on day i+1: the first will have slope 1/\lambda^{i}, continuing the geometric progression, while the second will have slope (n-i)C, allowing it to be split similarly in the future. In order to ensure this behavior of \tilde{R}_{\geq i}, we need the slope of R_{i} on this segment to be \beta_{i}, where

βi=13n+i(3ni2niλi+Si).\beta_{i}=\frac{1}{3n+i}\left(\frac{3n-i}{2}-\frac{n-i}{\lambda^{i}}+S_{i}\right). (5)

This value of βi\beta_{i} satisfies the following two properties:

Claim D.2.

βi\beta_{i} is an increasing sequence for ni0n\geq i\geq 0.

Proof.

Fix n>i0n>i\geq 0.

βi+1βi\displaystyle\beta_{i+1}-\beta_{i}
=13n+i+1(3ni12ni1λi+1+Si+1)13n+i(3ni2niλi+Si)\displaystyle=\frac{1}{3n+i+1}\left(\frac{3n-i-1}{2}-\frac{n-i-1}{\lambda^{i+1}}+S_{i+1}\right)-\frac{1}{3n+i}\left(\frac{3n-i}{2}-\frac{n-i}{\lambda^{i}}+S_{i}\right) (Equation 5)
=1(3n+i)(3n+i+1)(3n+(λ1)(3n+i+1)(ni)λi+1+4nλi+1+3n+iλiSi)\displaystyle=\frac{1}{(3n+i)(3n+i+1)}\left(-3n+\frac{(\lambda-1)(3n+i+1)(n-i)}{\lambda^{i+1}}+\frac{4n}{\lambda^{i+1}}+\frac{3n+i}{\lambda^{i}}-S_{i}\right)
1(3n+i)(3n+i+1)(3(3n+i+1)(ni)16n+3(3n+i)4Si)\displaystyle\geq\frac{1}{(3n+i)(3n+i+1)}\left(\frac{3(3n+i+1)(n-i)}{16n}+\frac{3(3n+i)}{4}-S_{i}\right) (Claim D.1)
1(3n+i)(3n+i+1)(9n42n)>0.\displaystyle\geq\frac{1}{(3n+i)(3n+i+1)}\left(\frac{9n}{4}-2n\right)>0.
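Claim D.2 can also be confirmed numerically. The sketch below transcribes Equation 5 directly (the helper names are ours):

```python
# Claim D.2: beta_i (Equation 5) is increasing for 0 <= i <= n.
def betas(n):
    lam = 1 + 1 / (4 * n)
    S = lambda k: n + sum(lam ** -j for j in range(k))  # S_0 = n
    return [((3*n - i) / 2 - (n - i) / lam**i + S(i)) / (3*n + i)
            for i in range(n + 1)]

for n in range(1, 101):
    b = betas(n)
    assert all(b[i] < b[i + 1] for i in range(n))
```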

Claim D.3.

For ni1n\geq i\geq 1, 12<β1=12+54λ(3n+1)βi34\frac{1}{2}<\beta_{1}=\frac{1}{2}+\frac{5}{4\lambda(3n+1)}\leq\beta_{i}\leq\frac{3}{4}.

Proof.

The closed form for \beta_{1} can be verified directly from Equation 5. Given Claim D.2, it suffices to establish \beta_{n}\leq\frac{3}{4}.

\beta_{n}=\frac{1}{4n}(n+S_{n})\leq\frac{1}{4n}(n+2n)=\frac{3}{4}, where the inequality uses S_{n}\leq 2n.
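The bounds of Claim D.3, including the closed form for \beta_{1}, can likewise be spot-checked numerically (a sanity check, not part of the argument):

```python
# Claim D.3: 1/2 < beta_1 = 1/2 + 5/(4*lam*(3n+1)) <= beta_i <= 3/4.
for n in range(1, 101):
    lam = 1 + 1 / (4 * n)
    S = lambda k: n + sum(lam ** -j for j in range(k))
    beta = lambda i: ((3*n - i) / 2 - (n - i) / lam**i + S(i)) / (3*n + i)
    # closed form for beta_1
    assert abs(beta(1) - (0.5 + 5 / (4 * lam * (3*n + 1)))) < 1e-12
    # bounds for all 1 <= i <= n
    assert all(0.5 < beta(1) <= beta(i) <= 0.75 for i in range(1, n + 1))
```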

We now define the marginal distribution F_{i} on the i^{\text{th}} deadline. We write an expression for the distribution and then calculate the revenue curve R_{i} of the distribution F_{i}. The part of the analysis where we prove that F_{i} is a valid distribution is omitted. It uses \frac{S_{i}}{n+i}\geq\frac{S_{i+1}}{n+i+1} and \beta_{i}\leq\frac{S_{i}}{n+i} (where Claim D.1 and Claim D.3 help prove the latter) and can easily be carried out by the interested reader. The distribution over types on the i^{\text{th}} day is given by:

Fi(x)={0,0xn(1S1n+1),nxn+1(1S2n+2),n+1xn+2(1S3n+3),n+2xn+3(1Sin+i),n+i1xn+i(1βi),n+ix3n+i1,3n+ix5n.F_{i}(x)=\begin{cases}0,&0\leq x\leq n\\ \Big{(}1-\frac{S_{1}}{n+1}\Big{)},&n\leq x\leq n+1\\ \Big{(}1-\frac{S_{2}}{n+2}\Big{)},&n+1\leq x\leq n+2\\ \Big{(}1-\frac{S_{3}}{n+3}\Big{)},&n+2\leq x\leq n+3\\ \ \ \ \ \ \ \ \vdots&\\ \Big{(}1-\frac{S_{i}}{n+i}\Big{)},&n+i-1\leq x\leq n+i\\ (1-\beta_{i}),&n+i\leq x\leq 3n+i\\ 1,&3n+i\leq x\leq 5n.\\ \end{cases} (6)
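The omitted validity check is mechanical; here is a numerical version of it, reading Equation 6 as the right-continuous CDF of a discrete distribution on the integers (a sketch under that convention):

```python
n = 8
lam = 1 + 1 / (4 * n)
S = lambda k: n + sum(lam ** -j for j in range(k))
beta = lambda i: ((3*n - i) / 2 - (n - i) / lam**i + S(i)) / (3*n + i)

def F(i, x):
    # Equation 6 at integer x, evaluated right-continuously
    if x < n: return 0.0
    if x < n + i: return 1 - S(x - n + 1) / (x + 1)
    if x < 3*n + i: return 1 - beta(i)
    return 1.0

for i in range(1, n + 1):
    vals = [F(i, x) for x in range(5*n + 1)]
    assert vals[0] == 0.0 and vals[-1] == 1.0                   # runs from 0 to 1
    assert all(u <= v + 1e-12 for u, v in zip(vals, vals[1:]))  # nondecreasing
```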

The revenue curve for this distribution is given by Ri(v)=v(1Fi(v))R_{i}(v)=v\left(1-F_{i}(v)\right):

Ri(x)={x,0xn(S1n+1)x,n<xn+1(S2n+2)x,n+1<xn+2(S3n+3)x,n+2<xn+3(Sin+i)x,n+i1<xn+iβix,n+i<x3n+i0,3n+i<x5n.R_{i}(x)=\begin{cases}x,&0\leq x\leq n\\ \left(\frac{S_{1}}{n+1}\right)x,&n<x\leq n+1\\ \left(\frac{S_{2}}{n+2}\right)x,&n+1<x\leq n+2\\ \left(\frac{S_{3}}{n+3}\right)x,&n+2<x\leq n+3\\ \ \ \ \ \ \ \ \vdots&\\ \left(\frac{S_{i}}{n+i}\right)x,&n+i-1<x\leq n+i\\ \beta_{i}x,&n+i<x\leq 3n+i\\ 0,&3n+i<x\leq 5n.\\ \end{cases} (7)
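Equation 7 can be cross-checked against Equation 6 numerically, using the discrete convention that the revenue at an integer price v is v(1-F_{i}(v-1)), i.e., v times the probability of a value of at least v (the same convention used in the revenue computation of Appendix D.5):

```python
n = 8
lam = 1 + 1 / (4 * n)
S = lambda k: n + sum(lam ** -j for j in range(k))
beta = lambda i: ((3*n - i) / 2 - (n - i) / lam**i + S(i)) / (3*n + i)

def F(i, x):  # Equation 6 at integer x, right-continuous
    if x < n: return 0.0
    if x < n + i: return 1 - S(x - n + 1) / (x + 1)
    if x < 3*n + i: return 1 - beta(i)
    return 1.0

def R(i, x):  # Equation 7 at integer x (each piece on its (left, right] interval)
    if x <= n: return float(x)
    if x <= n + i: return S(x - n)        # (S_k/(n+k)) * x  at  x = n+k
    if x <= 3*n + i: return beta(i) * x
    return 0.0

for i in range(1, n + 1):
    for x in range(5*n + 1):
        assert abs(R(i, x) - x * (1 - F(i, x - 1))) < 1e-9
```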

Using Definition 2.3, we now calculate the combined revenue curve RiR_{\geq i} for days ii through nn. We prove that RiR_{\geq i} has the form we desire using induction.

Theorem D.1.
Ri(x)={x+(ni)x,0xn(S1n+1)x+(ni)(xn+S0),n<xn+1(S2n+2)x+(ni)(xn1λ+S1),n+1<xn+2(S3n+3)x+(ni)(xn2λ2+S2),n+2<xn+3(Sin+i)x+(ni)(x+1niλi1+Si1),n+i1<xn+iβix+(ni)(xniλi+Si),n+i<xn+i+1βix+(ni)(C(xni1)+Si+1),n+i+1<x3n+i(ni)(C(xni1)+Si+1),3n+i<x3n+i+1(ni)(2nC+Si+1),3n+i+1<x5n.R_{\geq i}(x)=\begin{cases}x+(n-i)x,&0\leq x\leq n\\ \left(\frac{S_{1}}{n+1}\right)x+(n-i)\left(x-n+S_{0}\right),&n<x\leq n+1\\ \left(\frac{S_{2}}{n+2}\right)x+(n-i)\left(\frac{x-n-1}{\lambda}+S_{1}\right),&n+1<x\leq n+2\\ \left(\frac{S_{3}}{n+3}\right)x+(n-i)\left(\frac{x-n-2}{\lambda^{2}}+S_{2}\right),&n+2<x\leq n+3\\ \ \ \ \ \ \ \ \vdots&\\ \left(\frac{S_{i}}{n+i}\right)x+(n-i)\left(\frac{x+1-n-i}{\lambda^{i-1}}+S_{i-1}\right),&n+i-1<x\leq n+i\\ \beta_{i}x+(n-i)\left(\frac{x-n-i}{\lambda^{i}}+S_{i}\right),&n+i<x\leq n+i+1\\ \beta_{i}x+(n-i)\left(C(x-n-i-1)+S_{i+1}\right),&n+i+1<x\leq 3n+i\\ (n-i)\left(C(x-n-i-1)+S_{i+1}\right),&3n+i\ <x\leq 3n+i+1\\ (n-i)\left(2nC+S_{i+1}\right),&3n+i+1<x\leq 5n.\\ \end{cases} (8)
Proof.

Proof by backwards induction. The base case i=ni=n is easily verified. For the inductive step, we first calculate R~i\tilde{R}_{\geq i}. This is done by ironing the curve RiR_{\geq i} defined in Equation 8. We provide an expression for R~i\tilde{R}_{\geq i} and prove that it is correct in Lemma D.2 in Subsection D.2 of this Appendix. The expression is:

R~i(x)={(n+1i)x,0xn(n+1i)(xn+S0),n<xn+1(n+1i)(xn1λ+S1),n+1<xn+2(n+1i)(xn2λ2+S2),n+2<xn+3(n+1i)(x+1niλi1+Si1),n+i1<xn+i(n+1i)(C(xni)+Si),n+i<x3n+i(ni)(2nC+Si+1)+(ni)Cβi(3n+i)2ni(x5n),3n+i<x5n.\tilde{R}_{\geq i}(x)=\begin{cases}(n+1-i)x,&0\leq x\leq n\\ (n+1-i)(x-n+S_{0}),&n<x\leq n+1\\ (n+1-i)\left(\frac{x-n-1}{\lambda}+S_{1}\right),&n+1<x\leq n+2\\ (n+1-i)\left(\frac{x-n-2}{\lambda^{2}}+S_{2}\right),&n+2<x\leq n+3\\ \ \ \ \ \ \ \ \vdots&\\ (n+1-i)\left(\frac{x+1-n-i}{\lambda^{i-1}}+S_{i-1}\right),&n+i-1<x\leq n+i\\ (n+1-i)\left(C(x-n-i)+S_{i}\right),&n+i<x\leq 3n+i\\ (n-i)\left(2nC+S_{i+1}\right)+\frac{(n-i)C-\beta_{i}(3n+i)}{2n-i}(x-5n),&3n+i<x\leq 5n.\\ \end{cases} (9)

Note that this function is maximized at ri=3n+ir_{\geq i}=3n+i. We now use Definition 2.3 which says:

Ri1(v)={Ri1(v)+R~i(v),v<riRi1(v)+R~i(ri),vri.R_{\geq i-1}(v)=\begin{cases}R_{i-1}(v)+\tilde{R}_{\geq i}(v),&v<r_{\geq i}\\ R_{i-1}(v)+\tilde{R}_{\geq i}(r_{\geq i}),&v\geq r_{\geq i}.\\ \end{cases}

Substituting Equation 7 and Equation 9 into this gives:

Ri1(x)={x+(n+1i)x,0xn(S1n+1)x+(n+1i)(xn+S0),n<xn+1(S2n+2)x+(n+1i)(xn1λ+S1),n+1<xn+2(S3n+3)x+(n+1i)(xn2λ2+S2),n+2<xn+3(Si1n+i1)x+(n+1i)(x+2niλi2+Si2),n+i2<xn+i1βi1x+(n+1i)(x+1niλi1+Si1),n+i1<xn+iβi1x+(n+1i)(C(xni)+Si),n+i<x3n+i1(n+1i)(C(xni)+Si),3n+i1<x3n+i(n+1i)(2nC+Si),3n+i<x5n.R_{\geq i-1}(x)=\begin{cases}x+(n+1-i)x,&0\leq x\leq n\\ \left(\frac{S_{1}}{n+1}\right)x+(n+1-i)(x-n+S_{0}),&n<x\leq n+1\\ \left(\frac{S_{2}}{n+2}\right)x+(n+1-i)\left(\frac{x-n-1}{\lambda}+S_{1}\right),&n+1<x\leq n+2\\ \left(\frac{S_{3}}{n+3}\right)x+(n+1-i)\left(\frac{x-n-2}{\lambda^{2}}+S_{2}\right),&n+2<x\leq n+3\\ \ \ \ \ \ \ \ \vdots&\\ \left(\frac{S_{i-1}}{n+i-1}\right)x+(n+1-i)\left(\frac{x+2-n-i}{\lambda^{i-2}}+S_{i-2}\right),&n+i-2<x\leq n+i-1\\ \beta_{i-1}x+(n+1-i)\left(\frac{x+1-n-i}{\lambda^{i-1}}+S_{i-1}\right),&n+i-1<x\leq n+i\\ \beta_{i-1}x+(n+1-i)\left(C(x-n-i)+S_{i}\right),&n+i<x\leq 3n+i-1\\ (n+1-i)\left(C(x-n-i)+S_{i}\right),&3n+i-1<x\leq 3n+i\\ (n+1-i)\left(2nC+S_{i}\right),&3n+i<x\leq 5n.\\ \end{cases}

which is of the form required by the induction hypothesis. ∎
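The closed forms in Theorem D.1 can be stress-tested numerically on a small instance: start from the day-wise curves of Equation 7, run the recursion of Definition 2.3 with the ironed curve computed as an upper concave envelope on the integer grid, and check the touch points of Lemma D.2, the domination of Lemma D.3, and the maximizer r_{\geq i}=3n+i. A sketch (the upper-hull routine is a standard computation, not taken from the paper):

```python
n = 4
lam = 1 + 1 / (4 * n)
S = lambda k: n + sum(lam ** -j for j in range(k))
beta = lambda i: ((3*n - i) / 2 - (n - i) / lam**i + S(i)) / (3*n + i)

def R_day(i, x):
    # Equation 7 at integer x
    if x <= n: return float(x)
    if x <= n + i: return S(x - n)
    if x <= 3*n + i: return beta(i) * x
    return 0.0

def upper_envelope(ys):
    # upper concave envelope of the points (x, ys[x]) on the integer grid
    hull = []
    for x, y in enumerate(ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the segment hull[-2] -> (x, y)
            if (y2 - y1) * (x - x1) <= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    env = [0.0] * len(ys)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        for x in range(x1, x2 + 1):
            env[x] = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    return env

grid = range(5*n + 1)
Rge = [R_day(n, x) for x in grid]            # R_{>=n} = R_n
for i in range(n, 0, -1):
    env = upper_envelope(Rge)                # ironed curve, \tilde R_{>=i}
    r = max(grid, key=lambda x: env[x])
    assert r == 3*n + i                      # maximizer r_{>=i}
    touch = {0, 5*n, 3*n + i} | set(range(n, n + i + 1))
    for x in grid:
        assert env[x] >= Rge[x] - 1e-9       # Lemma D.3
        if x in touch:
            assert abs(env[x] - Rge[x]) < 1e-9   # Lemma D.2
    if i > 1:                                # Definition 2.3
        Rge = [R_day(i - 1, x) + env[min(x, r)] for x in grid]

assert abs(max(env) - n * (2*n + 1)) < 1e-9  # Remark D.1: OPT = n(2n+1)
```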

It is relatively easy to find the optimal auction in our instance using Fiat et al.’s algorithm [FGKK16]. The optimal auction closely follows the behavior of the curves \tilde{R}_{\geq i}. It has a single menu option, a price of 3n+1, on day 1. On each subsequent day, the last menu option splits into two options while all the other options are carried over as is. For example, the price 3n+1 splits into two prices, n+2 and 3n+2, on day 2. The price n+2 is carried forward to all subsequent days while the price 3n+2 splits into n+3 and 3n+3. The price n+3 is carried forward while the price 3n+3 is again split into two on day 4. This process continues until the n^{\text{th}} day. The menu offered on day n has options n+2,n+3,\cdots,n+n and 4n.

Remark D.1.

The optimal revenue in the setting described is at most \max R_{\geq 1}=n(2n+1), which lies between 2n^{2} and 3n^{2}. This quantity is denoted by \mathsf{OPT} throughout.

D.2 Omitted details from the proof of Theorem D.1

In this subsection we show that \tilde{R}_{\geq i}(x) is the upper concave envelope of R_{\geq i}(x), where both functions are as defined in Appendix D. The proof consists of showing that the two functions agree at the points where \tilde{R}_{\geq i}(x) changes form, and that \tilde{R}_{\geq i}(x) is at least R_{\geq i}(x) at all other points. This, combined with the fact that \tilde{R}_{\geq i}(x) is linear between the points where it changes form, suffices to prove the claim.

Lemma D.2.

For all x{0,n,n+1,n+2,,n+i1,n+i,3n+i,5n}x\in\{0,n,n+1,n+2,\cdots,n+i-1,n+i,3n+i,5n\}, we have R~i(x)=Ri(x)\tilde{R}_{\geq i}(x)=R_{\geq i}(x).

Proof.

We prove this lemma using four claims, all of which establish R~i(x)=Ri(x)\tilde{R}_{\geq i}(x)=R_{\geq i}(x) for different domains of xx.

Claim D.4.

For all x{0,n}x\in\{0,n\}, we have R~i(x)=Ri(x)\tilde{R}_{\geq i}(x)=R_{\geq i}(x).

Proof.

By verification: x+(ni)x=(n+1i)xx+(n-i)x=(n+1-i)x. ∎

Claim D.5.

For all x{n+1,n+2,,n+i1,n+i}x\in\{n+1,n+2,\cdots,n+i-1,n+i\}, we have R~i(x)=Ri(x)\tilde{R}_{\geq i}(x)=R_{\geq i}(x).

Proof.

By verification:

R~i(x)Ri(x)=(1λxn1+Sxn1)Sxn=0.\tilde{R}_{\geq i}(x)-R_{\geq i}(x)=\left(\frac{1}{\lambda^{x-n-1}}+S_{x-n-1}\right)-S_{x-n}=0.

Claim D.6.

R~i(3n+i)=Ri(3n+i)\tilde{R}_{\geq i}(3n+i)=R_{\geq i}(3n+i).

Proof.
Ri(3n+i)\displaystyle R_{\geq i}(3n+i) =βi(3n+i)+(ni)(C(2n1)+Si+1)\displaystyle=\beta_{i}(3n+i)+(n-i)\left(C(2n-1)+S_{i+1}\right)
=(3ni2niλi+Si)+(ni)(C(2n1)+Si+1)\displaystyle=\left(\frac{3n-i}{2}-\frac{n-i}{\lambda^{i}}+S_{i}\right)+(n-i)\left(C(2n-1)+S_{i+1}\right) (Equation 5)
=n+((ni)Cniλi+Si)+(ni)(C(2n1)+Si+1)\displaystyle=n+\left((n-i)C-\frac{n-i}{\lambda^{i}}+S_{i}\right)+(n-i)\left(C(2n-1)+S_{i+1}\right)
=n+Si+(ni)(2nC+Si)\displaystyle=n+S_{i}+(n-i)\left(2nC+S_{i}\right)
=(n+1i)(2nC+Si)=R~i(3n+i).\displaystyle=(n+1-i)\left(2nC+S_{i}\right)=\tilde{R}_{\geq i}(3n+i).

Claim D.7.

R~i(5n)=Ri(5n)\tilde{R}_{\geq i}(5n)=R_{\geq i}(5n).

Proof.

This is easily verifiable from the description of the functions. ∎

Lemma D.3.

For all 0x5n0\leq x\leq 5n, we have R~i(x)Ri(x)\tilde{R}_{\geq i}(x)\geq R_{\geq i}(x).

Proof.

Note that both functions are piecewise linear. The function \tilde{R}_{\geq i} changes form at the points S_{1}=\{0,n,n+1,n+2,\cdots,n+i-1,n+i,3n+i,5n\}. The function R_{\geq i} changes form at the points in S_{2}=S_{1}\cup\{n+i+1,3n+i+1\}. To prove this lemma, it is sufficient to show that \tilde{R}_{\geq i}(x)\geq R_{\geq i}(x) for all x\in S_{2}. If x\in S_{1}, this follows from Lemma D.2. We prove the claim only for x\in\{n+i+1,3n+i+1\}.

If x=n+i+1x=n+i+1, we get

R~i(n+i+1)\displaystyle\tilde{R}_{\geq i}(n+i+1) Ri(n+i+1)\displaystyle-R_{\geq i}(n+i+1)
=(n+1i)(C+Si)βi(n+i+1)(ni)(1λi+Si)\displaystyle=(n+1-i)\left(C+S_{i}\right)-\beta_{i}(n+i+1)-(n-i)\left(\frac{1}{\lambda^{i}}+S_{i}\right)
=C+Si+(ni)(C1λi)βi(n+i+1)\displaystyle=C+S_{i}+(n-i)\left(C-\frac{1}{\lambda^{i}}\right)-\beta_{i}(n+i+1)
=C+βi(3n+i)2nCβi(n+i+1)\displaystyle=C+\beta_{i}(3n+i)-2nC-\beta_{i}(n+i+1) (Equation 5)
=(βiC)(2n1)\displaystyle=(\beta_{i}-C)(2n-1)
>0.\displaystyle>0. (Claim D.3)

If x=3n+i+1x=3n+i+1, we get

R~i(3n+i+1)(ni)(2nC+Si+1)=Ri(3n+i+1),\tilde{R}_{\geq i}(3n+i+1)\geq(n-i)\left(2nC+S_{i+1}\right)=R_{\geq i}(3n+i+1),

as the remaining term in the last piece of \tilde{R}_{\geq i} is positive: it is the product of the two negative factors \frac{(n-i)C-\beta_{i}(3n+i)}{2n-i} and (x-5n).

Lemma D.4.

The upper concave envelope of the function RiR_{\geq i} defined in Equation 8 is the function R~i\tilde{R}_{\geq i} defined in Equation 9.

Proof.

By definition, \tilde{R}_{\geq i} is a continuous piecewise linear function whose successive segments have decreasing slope. Thus, it is concave. Lemma D.3 shows that the function \tilde{R}_{\geq i} is everywhere at least R_{\geq i}. Note that the function \tilde{R}_{\geq i} is piecewise linear and changes form at the points 0,n,n+1,n+2,\cdots,n+i-1,n+i,3n+i, and 5n. Lemma D.2 shows that at all these points, the value of \tilde{R}_{\geq i} is the same as that of R_{\geq i}. This finishes the proof. ∎

D.3 Cleaning

We start by fixing the notation that we employ in this subsection. The letters R, \tilde{R}, q, and F are reserved for the revenue curves and the type distribution of our FedEx instance; R, \tilde{R}, and F are described in Equation 8, Equation 9, and Equation 6 respectively, and q is uniform and omitted throughout. The letters A and B will denote the allocation curves of mechanisms, i.e., the allocation curve on day i will be denoted by A_{i} or B_{i}. Thus, for x\in[0,v_{\max}], a bidder that reports type (x,i) gets the item with probability A_{i}(x). We assume without loss of generality that A_{i}(0)=0 and A_{i}(v_{\max})=1. In line with the notation used for probability distributions, we reserve the lower case a_{i} for the ‘allocation density’, i.e., a_{i}(x)=A_{i}(x)-A_{i}(x-1). With this notation, the menu complexity of a mechanism M_{A} is the number of pairs (x,i) such that a_{i}(x)\neq 0. Unless specified otherwise, all statements about clean mechanisms pertain to our particular instance of the FedEx problem defined in subsection D.1.

In the FedEx problem, the menu on day i is constrained by the menu offered on day i-1. These constraints are represented by the downwards IC constraints in Equation 1. In this sense, some menus on day i-1 are strictly more constraining than others. For example, a price of p_{1} offered on day i-1 is strictly more constraining than a price p_{2}>p_{1}, because the utility \pi(v,i-1)v-p(v,i-1) it provides is strictly higher. Similarly, a distribution over two prices p_{1} and p_{2} with probabilities q and 1-q respectively is strictly more constraining than offering the single price p=qp_{1}+(1-q)p_{2}. This reasoning also applies when the price p has a small mass in a larger distribution. If a feasible, less constraining menu generates a higher revenue on a particular day than a more constraining one, the latter can be changed to the former without violating any of the feasibility constraints, and this change can only increase the revenue generated.

In the problem instance we consider, the revenue curve R_{i} for a given day i is increasing and touches its upper concave envelope at all points in \mathbb{Z}\cap[0,n+i]. Thus, if a price p in this range is offered with probability f(p) on day i-1, the least constraining, revenue-maximizing option on day i is to offer the same price with the same probability. We define ‘clean’ mechanisms to be mechanisms that satisfy this property. Formally,

Definition D.1 (Clean mechanisms).

Let a_{i} be the allocation density for day i of a mechanism M. The mechanism M is said to be clean if for all days 1\leq i<n and all points 0\leq x\leq n+i, we have

ai+1(x)ai(x).a_{i+1}(x)\geq a_{i}(x).
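Read as code, cleanliness is a pointwise comparison of consecutive allocation densities. A hypothetical helper (the list-of-lists representation of the densities is ours, not the paper’s):

```python
def is_clean(a, n):
    """a[i-1] is the allocation density of day i over values 0..v_max.

    Clean: for every day 1 <= i < len(a) and every 0 <= x <= n+i,
    a_{i+1}(x) >= a_i(x)."""
    for i in range(1, len(a)):                        # day i vs day i+1
        for x in range(min(n + i, len(a[0]) - 1) + 1):
            if a[i][x] < a[i - 1][x]:
                return False
    return True

# keeping all low-value mass in place is clean...
assert is_clean([[0.0, 0.5, 0.5], [0.0, 0.5, 0.5]], n=1)
# ...but dropping mass at a low value is not
assert not is_clean([[0.0, 0.5, 0.5], [0.0, 0.4, 0.6]], n=1)
```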

The reason we do not require equality in the expression above is that the (allocation) mass from points higher than n+in+i may be moved to lower points causing an increase in the value of ai()a_{i}(\cdot) there. We prove in Theorem D.8 that an arbitrary mechanism can be converted to a clean mechanism that is ‘intimately’ connected to the original mechanism. We state a direct corollary here that is obtained by setting k=nk=n and xi=n+ix_{i}=n+i for all ii in the statement of the theorem.

Corollary D.5 (Cleaning).

Consider any mechanism MAM_{A} for the FedEx instance described in subsection D.1. There exists a mechanism MBM_{B} such that:

  • MBM_{B} mimics MAM_{A} on day 11.

  • MBM_{B} is clean.

  • For all days ii, we have BiAiB_{i}\succeq A_{i}.

  • For all days ii, j=0vmaxRi(j)bi(j)j=0vmaxRi(j)ai(j)\sum_{j=0}^{v_{\max}}R_{i}(j)b_{i}(j)\geq\sum_{j=0}^{v_{\max}}R_{i}(j)a_{i}(j).

Here, \succeq denotes second-order stochastic dominance.

D.4 The statement and proof of Theorem D.8 (auxiliary results that can be skipped without loss of comprehension)

Consider any mechanism M. Let A_{i}(v):\{0,1,\cdots,V\}\to[0,1] denote the allocation function of mechanism M on day i. We know from the IC constraints that A_{i}(v) is monotone non-decreasing for all i. Without loss of generality, assume that A_{i}(V)=1, so that we may interpret A_{i} as the cumulative distribution function of some random variable. The leftwards IC constraints say that for all i>1 and v\leq V, we have

j=0vAi(j)j=0vAi1(j).\sum_{j=0}^{v}A_{i}(j)\geq\sum_{j=0}^{v}A_{i-1}(j). (10)

We now define stochastic dominance and state a well-known result.

Definition D.2.

Consider two discrete random variables X and Y supported on the non-negative integers, with CDFs F and G respectively. We say X has (second-order) stochastic dominance over Y if and only if for all x\in\mathbb{Z},

j=0xF(j)j=0xG(j).\sum_{j=0}^{x}F(j)\leq\sum_{j=0}^{x}G(j).

We denote this by XYX\succeq Y.

We have the following well-known result on necessary and sufficient conditions for second-order stochastic dominance:

Theorem D.6.

Let XAX_{A} and XBX_{B} be two random variables with distributions AA and BB respectively. The following statements are equivalent:

  • XAXBX_{A}\succeq X_{B}.

  • There exist random variables YY and ZZ such that XB=𝑑XA+Y+ZX_{B}\overset{d}{=}X_{A}+Y+Z, with YY always at most 0 and ZZ such that 𝔼[ZXA+Y]=0\mathbb{E}[Z\mid X_{A}+Y]=0.

  • For any concave and increasing function ff, it holds that j=0Vf(j)a(j)j=0Vf(j)b(j)\sum_{j=0}^{V}f(j)a(j)\geq\sum_{j=0}^{V}f(j)b(j).

Here, =𝑑\overset{d}{=} denotes equivalence in distribution.
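A small numerical illustration of the equivalence between the first and third conditions (the pmfs below are ours; the family f_{t}(x)=\min(x,t) of concave increasing functions generates the relevant inequalities on \{0,\cdots,V\}):

```python
V = 4
a = [0.0, 0.1, 0.4, 0.4, 0.1]   # pmf of X_A on {0,...,V}
b = [0.2, 0.2, 0.2, 0.2, 0.2]   # pmf of X_B

def cdf(p):
    out, s = [], 0.0
    for q in p:
        s += q
        out.append(s)
    return out

FA, FB = cdf(a), cdf(b)
# Definition D.2: X_A >= X_B iff the prefix sums of the CDFs compare pointwise
dominates = all(sum(FA[:x + 1]) <= sum(FB[:x + 1]) + 1e-12 for x in range(V + 1))
assert dominates
# third condition of Theorem D.6, tested on the generating family min(., t)
for t in range(V + 1):
    EfA = sum(min(j, t) * a[j] for j in range(V + 1))
    EfB = sum(min(j, t) * b[j] for j in range(V + 1))
    assert EfA >= EfB - 1e-12
```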

We proceed to prove our main technical lemma.

Lemma D.7.

If any two random variables AA and CC defined on [0,V][0,V]\cap\mathbb{Z} satisfy ACA\succeq C, then for any xx, there exists a BB such that ABCA\succeq B\succeq C and for all vxv\leq x, we have Pr(B=v)Pr(A=v)\Pr(B=v)\geq\Pr(A=v).

Furthermore, if f is a function that is increasing on [0,x] and agrees there with its upper concave envelope \tilde{f} (i.e., f(j)=\tilde{f}(j) for all 0\leq j\leq x), then \sum_{j=0}^{V}f(j)\Pr(B=j)\geq\sum_{j=0}^{V}f(j)\Pr(C=j).

Proof.

Theorem D.6 promises random variables YY and ZZ such that C=𝑑A+Y+ZC\overset{d}{=}A+Y+Z, where Y0Y\leq 0 and 𝔼[ZA+Y]=0\mathbb{E}[Z\mid A+Y]=0. Our goal is to construct a random variable BB such that ABCA\succeq B\succeq C. We wish to establish both the stochastic dominance results via Theorem D.6. To this end, we first construct random variables Y1Y_{1} and Z1Z_{1} and use these to define BB. Let

Y1={0,if A<xmax(Y,xA),if Ax.Y_{1}=\begin{cases}0,&\text{if }A<x\\ \max(Y,x-A),&\text{if }A\geq x.\\ \end{cases} (11)

Note that Y10Y_{1}\leq 0. Next, define Z1Z_{1}

Z1={Z,if xA<Y10,if xAY1.Z_{1}=\begin{cases}Z,&\text{if }x-A<Y_{1}\\ 0,&\text{if }x-A\geq Y_{1}.\\ \end{cases} (12)

We prove that \mathbb{E}[Z_{1}\mid A+Y_{1}]=0 in Claim D.9, paving the way for an application of Theorem D.6. For that, we first need another result.

Claim D.8.

For any k>xk>x, the event A+Y1=kA+Y_{1}=k happens if and only if A+Y=kA+Y=k.

Proof.

First, assume A+Y_{1}=k>x. Since Y_{1}\leq 0, this means that A\geq A+Y_{1}=k>x and hence, by Equation 11, Y_{1}=\max(Y,x-A). Moreover, Y_{1}=k-A>x-A, implying that Y_{1}=Y and thus A+Y=k.

Second, assume that A+Y=k. Again, we have A\geq A+Y=k>x and hence, by Equation 11, Y_{1}=\max(Y,x-A)=Y, as Y=k-A>x-A. Thus A+Y_{1}=k. ∎

Claim D.9.

𝔼[Z1A+Y1]=0.\mathbb{E}[Z_{1}\mid A+Y_{1}]=0.

Proof.

Fix A+Y_{1}=k where k is arbitrary. If k\leq x, then x-A\geq Y_{1} and hence Z_{1}=0 by Equation 12, and the claim follows.

If, on the other hand, k>x, then A+Y_{1}=k>x gives Y_{1}>x-A, so Z_{1}=Z by Equation 12, and Claim D.8 lets us condition on A+Y=k instead of A+Y_{1}=k. Thus,

𝔼[Z1A+Y1=k]=𝔼[ZA+Y=k]=0.\mathbb{E}[Z_{1}\mid A+Y_{1}=k]=\mathbb{E}[Z\mid A+Y=k]=0.

Define B=A+Y_{1}+Z_{1}. Then, Y_{1}\leq 0 together with Claim D.9 and Theorem D.6 gives A\succeq B. Now, we wish to prove that B\succeq C. Our strategy is to again employ Theorem D.6. For this, we define Y_{2} and Z_{2}.

Y2=YY1.Y_{2}=Y-Y_{1}. (13)

Note that

Claim D.10.

Y20Y_{2}\leq 0.

Proof.

We prove that Y\leq Y_{1}. If A<x, then Y_{1}=0 and this is trivial. Otherwise, Y_{1}=\max(Y,x-A)\geq Y. ∎

We next define Z2Z_{2} as

Z2=ZZ1.Z_{2}=Z-Z_{1}. (14)
Claim D.11.

𝔼[Z2B+Y2]=0\mathbb{E}[Z_{2}\mid B+Y_{2}]=0.

Proof.

Fix B+Y2=kB+Y_{2}=k where kk is arbitrary. Since B=A+Y1+Z1B=A+Y_{1}+Z_{1}, this is equivalent to saying that A+Y+Z1=kA+Y+Z_{1}=k.

We prove that 𝔼[Z2B+Y2]=0\mathbb{E}[Z_{2}\mid B+Y_{2}]=0 by arguing 𝔼[Z2B+Y2,xA<Y1]=0\mathbb{E}[Z_{2}\mid B+Y_{2},x-A<Y_{1}]=0 and 𝔼[Z2B+Y2,xAY1]=0\mathbb{E}[Z_{2}\mid B+Y_{2},x-A\geq Y_{1}]=0. We first deal with the former. If xA<Y1x-A<Y_{1}, then Z1=ZZ_{1}=Z (Equation 12) and hence Z2=0Z_{2}=0 and we are done.

Now, suppose xAY1x-A\geq Y_{1}. In this case, Z1=0Z_{1}=0 (Equation 12) and hence B+Y2=kB+Y_{2}=k if and only if A+Y=kA+Y=k.

𝔼[Z2B+Y2=k,xAY1]=𝔼[ZA+Y=k,xAY1].\mathbb{E}[Z_{2}\mid B+Y_{2}=k,x-A\geq Y_{1}]=\mathbb{E}[Z\mid A+Y=k,x-A\geq Y_{1}].

Note that x-A\geq Y_{1} is the same as x\geq A+Y_{1}, which happens if and only if x\geq A+Y=k (Claim D.8). Thus,

𝔼[ZA+Y=k,xAY1]=𝔼[ZA+Y=k,xk]=0.\mathbb{E}[Z\mid A+Y=k,x-A\geq Y_{1}]=\mathbb{E}[Z\mid A+Y=k,x\geq k]=0.

Since B+Y_{2}+Z_{2}=A+Y_{1}+Z_{1}+Y_{2}+Z_{2}=A+Y+Z=C by Equation 13 and Equation 14, Claims D.10 and D.11, via Theorem D.6, give B\succeq C.

Next, consider the event A=v for some v\leq x. On this event, both Y_{1} and Z_{1} are 0, so B=v. Hence, \Pr(B=v)\geq\Pr(A=v).

Finally, consider a function f such that f(j)=\tilde{f}(j) for all 0\leq j\leq x. We wish to prove that \sum_{j=0}^{V}f(j)\Pr(C=j)\leq\sum_{j=0}^{V}f(j)\Pr(B=j). We break the proof into two cases; each of the following claims handles one of them.

Claim D.12.

Let E1E_{1} be the event A+Y>xA+Y>x. We have j=0Vf(j)Pr(C=jE1)j=0Vf(j)Pr(B=jE1)\sum_{j=0}^{V}f(j)\Pr(C=j\mid E_{1})\leq\sum_{j=0}^{V}f(j)\Pr(B=j\mid E_{1})

Proof.

By Claim D.8, E_{1} can equivalently be described as A+Y_{1}>x; we use the two forms interchangeably in this proof. If E_{1} occurs, Y_{1}=Y and Z_{1}=Z (Equation 11 and Equation 12). Thus, B=C and the claim follows. ∎

Claim D.13.

Let E2E_{2} be the event A+YxA+Y\leq x. We have j=0Vf(j)Pr(C=jE2)j=0Vf(j)Pr(B=jE2)\sum_{j=0}^{V}f(j)\Pr(C=j\mid E_{2})\leq\sum_{j=0}^{V}f(j)\Pr(B=j\mid E_{2})

Proof.

By Claim D.8, E_{2} can equivalently be described as A+Y_{1}\leq x; we use the two forms interchangeably in this proof. If E_{2} occurs, Z_{1}=0 (Equation 12). Thus, B=A+Y_{1}+Z_{1}=A+Y_{1}, implying x\geq B\geq A+Y. Thus, for all j_{1}\leq x, we have

j=0Vf(j)Pr(B=jA+Y=j1)=j=j1xf(j)Pr(B=jA+Y=j1)f(j1).\sum_{j=0}^{V}f(j)\Pr(B=j\mid A+Y=j_{1})=\sum_{j=j_{1}}^{x}f(j)\Pr(B=j\mid A+Y=j_{1})\geq f(j_{1}).

We have

j=0Vf(j)Pr(C=jE2)\displaystyle\sum_{j=0}^{V}f(j)\Pr(C=j\mid E_{2}) j=0Vf~(j)Pr(C=jE2)\displaystyle\leq\sum_{j=0}^{V}\tilde{f}(j)\Pr(C=j\mid E_{2})
j1=0xPr(A+Y=j1E2)j=0Vf~(j)Pr(C=jA+Y=j1)\displaystyle\leq\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\sum_{j=0}^{V}\tilde{f}(j)\Pr(C=j\mid A+Y=j_{1})
j1=0xPr(A+Y=j1E2)f~(j=0VjPr(C=jA+Y=j1))\displaystyle\leq\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\tilde{f}\left(\sum_{j=0}^{V}j\Pr(C=j\mid A+Y=j_{1})\right)
=j1=0xPr(A+Y=j1E2)f~(j1)\displaystyle=\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\tilde{f}\left(j_{1}\right)
j1=0xPr(A+Y=j1E2)j=0Vf~(j)Pr(B=jA+Y=j1)\displaystyle\leq\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\sum_{j=0}^{V}\tilde{f}\left(j\right)\Pr(B=j\mid A+Y=j_{1})

If A+Y=j1xA+Y=j_{1}\leq x, we have B[j1,x]B\in[j_{1},x] and hence,

j=0Vf(j)Pr(C=jE2)\displaystyle\sum_{j=0}^{V}f(j)\Pr(C=j\mid E_{2}) j1=0xPr(A+Y=j1E2)j=0Vf~(j)Pr(B=jA+Y=j1)\displaystyle\leq\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\sum_{j=0}^{V}\tilde{f}\left(j\right)\Pr(B=j\mid A+Y=j_{1})
=j1=0xPr(A+Y=j1E2)j=0Vf(j)Pr(B=jA+Y=j1)\displaystyle=\sum_{j_{1}=0}^{x}\Pr(A+Y=j_{1}\mid E_{2})\sum_{j=0}^{V}f\left(j\right)\Pr(B=j\mid A+Y=j_{1})
=j=0Vf(j)Pr(B=jE2)\displaystyle=\sum_{j=0}^{V}f\left(j\right)\Pr(B=j\mid E_{2})

We are now ready to prove our main “cleaning” result:

Theorem D.8 (Cleaning).

Consider any mechanism M_{A} for any FedEx instance with type space [0,V]\times\{1,2,\cdots,n\}. Let \{x_{i}\}_{i=1}^{n-1} be a sequence of numbers in [0,V]. For any k\geq 1, there exists a mechanism M_{B} such that:

  • MBM_{B} mimics MAM_{A} on day 11.

  • For all 1\leq i\leq k-1 and v\leq x_{i}, we have b_{i+1}(v)\geq b_{i}(v).

  • For all 1ik1\leq i\leq k, we have BiAiB_{i}\succeq A_{i} and for all i>ki>k we have Bi=AiB_{i}=A_{i}.

  • For all 1<ik1<i\leq k, if the revenue curve RiR_{i} on day ii of the bidder is increasing on [0,xi1][0,x_{i-1}] such that Ri(j)=Ri~(j)R_{i}(j)=\tilde{R_{i}}(j) for all 0jxi10\leq j\leq x_{i-1}, then j=0VRi(j)bi(j)j=0VRi(j)ai(j)\sum_{j=0}^{V}R_{i}(j)b_{i}(j)\geq\sum_{j=0}^{V}R_{i}(j)a_{i}(j).

If it were not for the last two bullets, Theorem D.8 would be trivial: we could set M_{B} to be the optimal mechanism given the allocation of M_{A} on day 1. That the optimal mechanism is clean is an easy consequence of Fiat et al.’s algorithm [FGKK16]. Intuitively, these bullets say that the curves A_{i} can be obtained from the B_{i} by ‘splitting’ and ‘lowering’ prices appropriately, and that these actions do not decrease the revenue generated. This structure ensures that we can convert results for the mechanism M_{B} into results for M_{A}.

This is exactly how we proceed. We take an arbitrary auction and clean it. We prove that a clean auction that generates high revenue must have a high menu complexity. We then translate this into a result for the original auction.

Proof of Theorem D.8.

Proof by induction on kk. If k=1k=1, we set MB=MAM_{B}=M_{A} and observe that all the conditions are satisfied. We assume the result holds for k1k-1 and prove it for k>1k>1. Let MCM_{C} be the mechanism promised by the induction hypothesis. Since MCM_{C} is feasible, we have

Ck1Ck.C_{k-1}\succeq C_{k}.

Applying Lemma D.7 to C_{k-1}, C_{k}=A_{k}, and x_{k-1}, we get B_{k} such that

Ck1BkAk,C_{k-1}\succeq B_{k}\succeq A_{k}, (15)

and for all vxk1v\leq x_{k-1}, we have bk(v)ck1(v)b_{k}(v)\geq c_{k-1}(v). Furthermore, if ff is a function that is increasing on [0,xk1][0,x_{k-1}] such that f(j)=f~(j)f(j)=\tilde{f}(j) for all 0jxk10\leq j\leq x_{k-1}, then j=0Vf(j)bk(j)j=0Vf(j)ak(j)\sum_{j=0}^{V}f(j)b_{k}(j)\geq\sum_{j=0}^{V}f(j)a_{k}(j). Define the mechanism MBM_{B} to be MCM_{C} with the allocation CkC_{k} on day kk replaced by BkB_{k}. Equation 15 implies that MBM_{B} is feasible.

Observe that MBM_{B} satisfies all requirements.

D.5 Analysis of clean auctions

If fif_{i} is the marginal density of bidder types on day ii, then the revenue generated by the allocation density aia_{i} is equal to x=0vmaxfi(x)pi(x)\sum_{x=0}^{v_{\max}}f_{i}(x)p_{i}(x) where pi(x)p_{i}(x) is the payment made by the bidder that reports type (x,i)(x,i). Due to Myerson’s payment identity [Mye81], we know that pi(x)=j=0xjai(j)p_{i}(x)=\sum_{j=0}^{x}ja_{i}(j). Using this expression, the revenue generated by the allocation density aia_{i} on day ii is equal to

x=0vmaxfi(x)pi(x)\displaystyle\sum_{x=0}^{v_{\max}}f_{i}(x)p_{i}(x) =x=0vmaxj=0xjai(j)fi(x)\displaystyle=\sum_{x=0}^{v_{\max}}\sum_{j=0}^{x}ja_{i}(j)f_{i}(x)
=\sum_{j=0}^{v_{\max}}ja_{i}(j)(1-F_{i}(j-1))=\sum_{j=0}^{v_{\max}}a_{i}(j)R_{i}(j).

We will use this expression frequently. Section 2.1 shows that, given the allocation density a_{i}, the revenue generated by the optimal mechanism on days i through n is \sum_{j=0}^{v_{\max}}a_{i}(j)R_{\geq i}(j). Of this revenue, an amount equal to \sum_{j=0}^{v_{\max}}a_{i}(j)R_{i}(j) is generated on day i, and the remainder \sum_{j=0}^{v_{\max}}a_{i}(j)R_{\geq i}(j)-\sum_{j=0}^{v_{\max}}a_{i}(j)R_{i}(j) is generated on days i+1 and onwards. By Definition 2.3, this is equal to \sum_{j=0}^{v_{\max}}a_{i}(j)\tilde{R}_{\geq i+1}(\min(j,3n+i+1)).
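The double-sum rearrangement above can be spot-checked on a random instance (the pmf f and allocation density a below are arbitrary placeholders):

```python
import random

random.seed(0)
V = 20
f = [random.random() for _ in range(V + 1)]   # pmf of values on {0,...,V}
tot = sum(f)
f = [x / tot for x in f]
a = [random.random() for _ in range(V + 1)]   # allocation density, sums to 1
tot = sum(a)
a = [x / tot for x in a]

F = []                                        # CDF of f
s = 0.0
for x in f:
    s += x
    F.append(s)

p = []                                        # payments p(x) = sum_{j<=x} j a(j)
s = 0.0
for j in range(V + 1):
    s += j * a[j]
    p.append(s)

lhs = sum(f[x] * p[x] for x in range(V + 1))  # expected revenue
R = [j * (1 - (F[j - 1] if j > 0 else 0.0)) for j in range(V + 1)]  # j(1-F(j-1))
rhs = sum(a[j] * R[j] for j in range(V + 1))
assert abs(lhs - rhs) < 1e-9
```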

D.5.1 Analyzing clean auctions

We have described the optimal auction for our instance: it has the i-1 menu options n+2,n+3,\cdots,n+i-1 and 3n+i-1 on day i-1, and on day i the menu option 3n+i-1 is split into n+i and 3n+i while all the others are carried over. The first lemma we prove formalizes the sense in which this split is necessary; we will use it in our analysis of clean auctions. By definition, a clean auction carries all prices in the range [0,n+i-1] on day i-1 over to day i. We prove that if a price between n+i and 3n+i-1/2 is split without creating a new menu option at n+i, a significant revenue loss is incurred. This loss is the difference between what is actually obtained and what could have been obtained.

If a price of x<3n+i is set on day i-1, then the maximum revenue that can be generated from day i onwards is \tilde{R}_{\geq i}(x). The revenue generated by an allocation a_{i} is at most \sum_{j=0}^{v_{\max}}a_{i}(j)R_{\geq i}(j). We prove that if an allocation is feasible (this translates to \sum_{j=0}^{v_{\max}}ja_{i}(j)\leq x) but a_{i}(n+i)=0, then the difference between these two quantities is large.

Lemma D.9.

Fix any 1in1\leq i\leq n and n+ix3n+i1/2n+i\leq x\leq 3n+i-1/2. For any allocation density aia_{i}, if ai(n+i)=0a_{i}(n+i)=0 and j=0vmaxjai(j)x\sum_{j=0}^{v_{\max}}ja_{i}(j)\leq x, it holds that

R~i(x)j=0vmaxai(j)Ri(j)n+1i10(2n+1).\tilde{R}_{\geq i}(x)-\sum_{j=0}^{v_{\max}}a_{i}(j)R_{\geq i}(j)\geq\frac{n+1-i}{10(2n+1)}.
Proof.

Consider the line joining the points (n+i-1,\tilde{R}_{\geq i}(n+i-1)) and (3n+i,\tilde{R}_{\geq i}(3n+i)). Any point on this line satisfies y=\ell(x) for a linear function \ell. Define the function S(x)=\min(\ell(x),\tilde{R}_{\geq i}(x)). The function S is the minimum of two concave functions and hence is concave. Further, S(x)\geq R_{\geq i}(x) at all points x\in\{0,1,2,\cdots,5n\}\setminus\{n+i\} (this can be verified by the interested reader; we omit the calculations). Using these facts and Jensen’s inequality,

\sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) \leq \sum_{j=0}^{v_{\max}} a_i(j) S(j) \leq S\left(\sum_{j=0}^{v_{\max}} j a_i(j)\right).

We observe that the function S is increasing below 3n+i. Thus,

\sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) \leq S\left(\sum_{j=0}^{v_{\max}} j a_i(j)\right) \leq S(x).

By direct calculation, we have that

\sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) \leq S(x) \leq \ell(x)
= \frac{3n+i-x}{2n+1} \tilde{R}_{\geq i}(n+i-1) + \frac{x+1-n-i}{2n+1} \tilde{R}_{\geq i}(3n+i)
= \frac{3n+i-x}{2n+1}(n+1-i) S_{i-1} + \frac{x+1-n-i}{2n+1}(n+1-i)(n+S_i) (Equation 9)
= (n+1-i)\left(C(x-n-i) + S_i + \frac{3n+i-x}{2n+1}\left(\frac{1}{2} - \frac{1}{\lambda^{i-1}}\right)\right).

We also have that

\tilde{R}_{\geq i}(x) = (n+1-i)\left(C(x-n-i) + S_i\right). (Equation 9)

Combining,

\sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) \leq \tilde{R}_{\geq i}(x) - \frac{n+1-i}{2n+1}(3n+i-x)\left(\frac{1}{\lambda^{i-1}} - \frac{1}{2}\right)
\leq \tilde{R}_{\geq i}(x) - \frac{n+1-i}{2(2n+1)}\left(\frac{1}{\lambda^{i-1}} - \frac{1}{2}\right)
< \tilde{R}_{\geq i}(x) - \frac{n+1-i}{10(2n+1)}. (Claim D.1)
∎
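The key step in the proof above, upper-bounding a density-weighted sum of R_{\geq i} by the value of the concave envelope S at the mean, is a generic fact about concave functions. Below is a minimal numerical sketch of that step, with a hypothetical toy concave function standing in for \tilde{R}_{\geq i} (all names here are illustrative, not from the paper):

```python
import random

# Toy concave function standing in for R-tilde; any concave f works here.
def f(x):
    return -(x - 10.0) ** 2

# Chord (linear function ell) through (2, f(2)) and (18, f(18)).
def chord(x):
    return f(2) + (f(18) - f(2)) * (x - 2) / (18 - 2)

# S = min(chord, f) is concave, being a minimum of two concave functions.
def S(x):
    return min(chord(x), f(x))

random.seed(0)
points = list(range(21))
weights = [random.random() for _ in points]
total = sum(weights)
density = [w / total for w in weights]  # a probability density a(j)

mean = sum(j * a for j, a in zip(points, density))
weighted_sum = sum(a * S(j) for j, a in zip(points, density))

# Jensen's inequality for the concave S: E[S(X)] <= S(E[X]).
assert weighted_sum <= S(mean) + 1e-9
```

On the interior of the chord's interval, S coincides with the chord (which lies below the concave f), so the weighted sum is further bounded by the linear function, exactly as in the displayed derivation.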

A corollary of the above result handles the case where a_i(n+i) \neq 0 but is small.

Corollary D.10.

Fix any 1 \leq i \leq n and n+i \leq x \leq 3n+i-3/4. For any allocation density a_i, if a_i(n+i) < \frac{1}{100n} and \sum_{j=0}^{v_{\max}} j a_i(j) \leq x, it holds that

\tilde{R}_{\geq i}(x) - \sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) \geq \frac{n+1-i}{20(2n+1)}.
Proof.

Construct the allocation

a^{\prime}_i(x) = \begin{cases} 0, & \text{if } x = n+i \\ \frac{a_i(x)}{1 - a_i(n+i)}, & \text{if } x \neq n+i. \end{cases}

Note that:

(1 - a_i(n+i))\left(\sum_{j=0}^{v_{\max}} j a^{\prime}_i(j) - (n+i)\right) = \sum_{j=0}^{v_{\max}} j a_i(j) - (n+i). (16)

Set y = \max\left(\sum_{j=0}^{v_{\max}} j a^{\prime}_i(j),\, n+i\right) \geq n+i. Note that y \leq 3n+i-1/2 because:

\sum_{j=0}^{v_{\max}} j a^{\prime}_i(j) = \frac{1}{1 - a_i(n+i)}\left(\sum_{j=0}^{v_{\max}} j a_i(j) - (n+i)\right) + (n+i)
\leq \frac{100n}{100n-1}\left(2n - 3/4\right) + (n+i) \leq 3n+i-1/2.

Thus, Lemma D.9 holds for y and a^{\prime}_i, and we have

\sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) - R_{\geq i}(n+i)
= \left(1 - a_i(n+i)\right)\left(\sum_{j=0}^{v_{\max}} a^{\prime}_i(j) R_{\geq i}(j) - R_{\geq i}(n+i)\right)
\leq \left(1 - a_i(n+i)\right)\left(\tilde{R}_{\geq i}(y) - R_{\geq i}(n+i) - \frac{n+1-i}{10(2n+1)}\right) (Lemma D.9)
= \left(1 - a_i(n+i)\right)\left((n+1-i) C\left(y - (n+i)\right) - \frac{n+1-i}{10(2n+1)}\right) (Equation 9)
\leq (n+1-i) C\left(x - (n+i)\right) - \left(1 - a_i(n+i)\right)\frac{n+1-i}{10(2n+1)} (Equation 16)
= \tilde{R}_{\geq i}(x) - R_{\geq i}(n+i) - \left(1 - a_i(n+i)\right)\frac{n+1-i}{10(2n+1)}. (Equation 9)

Adding R_{\geq i}(n+i) to both sides, we get the result. ∎
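The renormalization step behind Equation 16 is a generic identity: removing the mass at a single point and rescaling preserves, up to the factor (1 - a(k)), the distance of the mean from that point. A small sanity-check sketch on an arbitrary toy density (the point k and the support size are hypothetical stand-ins for n+i and the value range):

```python
import random

random.seed(1)
k = 7  # the special point, standing in for n+i
weights = [random.random() for _ in range(20)]
total = sum(weights)
a = [w / total for w in weights]  # density a(j)

# a' removes the mass at k and renormalizes, as in the corollary.
a_prime = [0.0 if j == k else a[j] / (1 - a[k]) for j in range(20)]

mean_a = sum(j * a[j] for j in range(20))
mean_a_prime = sum(j * a_prime[j] for j in range(20))

# Equation 16: (1 - a(k)) * (mean(a') - k) == mean(a) - k.
assert abs((1 - a[k]) * (mean_a_prime - k) - (mean_a - k)) < 1e-9
```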

Corollary D.10 is our final result about the prices that split; in our instance, these are the prices in the range [n+i, 3n+i] on day i. We next state results concerning prices outside this range. First, we deal with points x > 3n+i such that a_i(x) > 0. The next pair of lemmas upper bounds the 'allocation mass' on this tail.

Lemma D.11.

For any 1 \leq i \leq n and x > 3n+i, we have R_{\geq i}(3n+i) - R_{\geq i}(x) \geq n.

Proof.

Proof by calculation.

R_{\geq i}(3n+i) - R_{\geq i}(x) \geq \beta_i(3n+i) + (n-i)(C(2n-1) + S_i) - (n-i)(2nC + S_i) (Equation 8)
= \beta_i(3n+i) - (n-i)C
> \frac{3n+i}{2} - \frac{n-i}{2} (Claim D.3)
= n+i > n. ∎

The next lemma shows that any approximately optimal mechanism for our instance must allocate very little mass to the tail end of the distribution.

Lemma D.12.

Consider any mechanism M for the FedEx instance described in Section D.1. If M generates at least a 1 - \frac{1}{1000n^2} fraction of the optimal revenue \mathsf{OPT}, then on all days 1 \leq i \leq n, the allocation curve A_i of the mechanism M satisfies

A_i(3n+i) \geq 1 - \frac{1}{200n}.
Proof.

The statement can be verified directly for i = 1, so we assume i > 1. Consider the allocation curve A_{i-1} on the (i-1)^{\text{th}} deadline. Given A_{i-1}, the optimal revenue that can be generated on days i through n is \sum_{j=0}^{v_{\max}} a_{i-1}(j) \tilde{R}_{\geq i}(\min(j, 3n+i)). Since A_{i-1} \succeq A_i, we know by the third bullet in Theorem D.6 that:

\sum_{j=0}^{v_{\max}} a_{i-1}(j) \tilde{R}_{\geq i}(\min(j, 3n+i)) \geq \sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i}(\min(j, 3n+i)).

The actual revenue generated on days i through n is at most \sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j). Thus, allocating according to a_i on day i incurs a revenue loss of at least \sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i}(\min(j, 3n+i)) - \sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j) on days i through n. This is at least:

\sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i}(\min(j, 3n+i)) - \sum_{j=0}^{v_{\max}} a_i(j) R_{\geq i}(j)
\geq \sum_{j=3n+i+1}^{v_{\max}} a_i(j)\left(\tilde{R}_{\geq i}(3n+i) - R_{\geq i}(j)\right) (as \tilde{R}_{\geq i}(j) \geq R_{\geq i}(j))
\geq n \sum_{j=3n+i+1}^{v_{\max}} a_i(j) = n\left(1 - A_i(3n+i)\right). (Lemma D.11)

For a 1 - \frac{1}{1000n^2}-approximate auction, the loss in revenue is upper bounded by \frac{1}{200}. Thus, n\left(1 - A_i(3n+i)\right) \leq \frac{1}{200}, implying A_i(3n+i) \geq 1 - \frac{1}{200n}. ∎

Finally, we prove some technical results about allocations in the range [0, n+i].

Lemma D.13.

For any clean mechanism M that generates revenue at least \frac{99}{100} times \mathsf{OPT},

A_{n/3}(4n/3) \leq \frac{1}{2}.
Proof.

The Myerson optimal revenue for day i is \max_x R_i(x) = \beta_i(3n+i) by Equation 7. We upper bound the revenue generated by M on days 1 through n/3 - 1 by \sum_{i=1}^{n/3-1} \beta_i(3n+i). We have

\sum_{i=1}^{n/3-1} \beta_i(3n+i) = \sum_{i=1}^{n/3-1} \frac{3n-i}{2} - \frac{n-i}{\lambda^i} + S_i (Equation 5)
\leq \sum_{i=1}^{n/3-1} \frac{3n-i}{2} - \frac{3(n-i)}{4} + n + i (Claim D.1)
\leq \sum_{i=1}^{n/3-1} \frac{7n+5i}{4} < \frac{3n^2}{4} - \frac{9n}{4}.

The revenue generated from day n/3 onwards is upper bounded by \sum_{j=0}^{v_{\max}} a_{n/3}(j) R_{\geq n/3}(j). Suppose for the sake of contradiction that A_{n/3}(4n/3) > \frac{1}{2}. In this case, \sum_{j=0}^{v_{\max}} a_{n/3}(j) R_{\geq n/3}(j) is upper bounded by

\sum_{j=0}^{v_{\max}} a_{n/3}(j) R_{\geq n/3}(j) \leq A_{n/3}(4n/3) R_{\geq n/3}(4n/3) + (1 - A_{n/3}(4n/3)) R_{\geq n/3}(10n/3)
\leq R_{\geq n/3}(10n/3) - A_{n/3}(4n/3)\left(R_{\geq n/3}(10n/3) - R_{\geq n/3}(4n/3)\right)
\leq \frac{11n^2}{9} + \frac{11n}{6}.

The sum of these two contributions is at most \frac{3n^2}{4} + \frac{11n^2}{9} = \frac{71n^2}{36} \leq \frac{71}{72}\mathsf{OPT}, a contradiction. ∎
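The arithmetic combined in this proof can be spot-checked numerically for moderate n divisible by 3. The snippet below is a sanity check only, using exactly the constants from the derivation above:

```python
# Check sum_{i=1}^{n/3-1} (7n+5i)/4 < 3n^2/4 - 9n/4 for n divisible by 3,
# and the final combination 3/4 + 11/9 = 71/36.
for n in range(6, 300, 3):
    lhs = sum((7 * n + 5 * i) / 4 for i in range(1, n // 3))
    assert lhs < 3 * n**2 / 4 - 9 * n / 4

assert abs((3 / 4 + 11 / 9) - 71 / 36) < 1e-12
```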

Lemma D.14.

Let a_i and a_{i+1} be allocations that satisfy \sum_{j=0}^{x} A_{i+1}(j) \geq \sum_{j=0}^{x} A_i(j) for all x. Further, let v_i, v_{i+1} be such that A_i(v_i) = A_{i+1}(v_{i+1}) = \alpha. Then,

\sum_{j=0}^{v_{i+1}} j a_{i+1}(j) \leq \sum_{j=0}^{v_i} j a_i(j).
Proof.

First suppose that v_i < v_{i+1}. Using the summation-by-parts identity \sum_{j=0}^{v} j a(j) = v A(v) - \sum_{j=0}^{v-1} A(j) together with A_i(v_i) = A_{i+1}(v_{i+1}) = \alpha, we have:

\sum_{j=0}^{v_{i+1}} j a_{i+1}(j) - \sum_{j=0}^{v_i} j a_i(j) = \alpha(v_{i+1} - v_i) - \sum_{j=0}^{v_{i+1}} A_{i+1}(j) + \sum_{j=0}^{v_i} A_i(j)
\leq \alpha(v_{i+1} - v_i) - \sum_{j=v_i+1}^{v_{i+1}} A_i(j)
\leq \alpha(v_{i+1} - v_i) - \sum_{j=v_i+1}^{v_{i+1}} \alpha = 0.

Now suppose that v_i \geq v_{i+1}. We have:

\sum_{j=0}^{v_{i+1}} j a_{i+1}(j) - \sum_{j=0}^{v_i} j a_i(j) = \alpha(v_{i+1} - v_i) - \sum_{j=0}^{v_{i+1}} A_{i+1}(j) + \sum_{j=0}^{v_i} A_i(j)
\leq \alpha(v_{i+1} - v_i) + \sum_{j=v_{i+1}+1}^{v_i} A_i(j)
\leq \alpha(v_{i+1} - v_i) + \sum_{j=v_{i+1}+1}^{v_i} \alpha = 0.
∎
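The first equality in each case rewrites \sum_{j=0}^{v} j a(j) via summation by parts as v A(v) - \sum_{j=0}^{v-1} A(j), where A is the running sum of a. This identity is easy to confirm numerically on a random toy density (the support size is an arbitrary illustrative choice):

```python
import random

random.seed(2)
a = [random.random() for _ in range(30)]

# A(j) = sum of a(0..j), the discrete "allocation curve".
A = []
running = 0.0
for x in a:
    running += x
    A.append(running)

# Summation by parts: sum_{j<=v} j*a(j) = v*A(v) - sum_{j<v} A(j).
for v in range(len(a)):
    lhs = sum(j * a[j] for j in range(v + 1))
    rhs = v * A[v] - sum(A[j] for j in range(v))
    assert abs(lhs - rhs) < 1e-9
```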

We are now ready to prove a menu complexity lower bound for clean auctions.

Theorem D.15.

Consider any clean mechanism M for the FedEx instance described in subsection D.1 that is 1 - \frac{1}{1000n^2} approximate. For any day i such that n/4 \leq i < n/2, the mechanism M satisfies a_i(j) \geq \frac{1}{300n} for all j \in [n+2, n+n/4].

Proof.

Consider any day i < n/4 and the pair a_i and a_{i+1} of allocation densities of the mechanism M on days i and i+1 respectively. Given a_i, the optimal revenue that can be generated on days i+1 through n is \sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)). The actual revenue generated on days i+1 through n is at most \sum_{j=0}^{v_{\max}} a_{i+1}(j) R_{\geq i+1}(j). Thus, allocating according to a_{i+1} on day i+1 incurs a revenue loss of at least \sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) - \sum_{j=0}^{v_{\max}} a_{i+1}(j) R_{\geq i+1}(j) on days i+1 through n.

Let v^* be such that A_i(3n+i) = A_{i+1}(v^*). We define two allocations b_i and b_{i+1}. (In this definition we implicitly assume that n+i \leq v^*. This holds because A_{i+1}(n+i) \leq \frac{1}{2} and A_{i+1}(v^*) = A_i(3n+i) \geq 1 - \frac{1}{200n} by Lemma D.12. Since the mechanism is clean and i < n/4 < n/3, any 'allocation mass' on day i+1 on points [0, n+i+1] is pushed forward to all days. Thus, A_{i+1}(n+i) \leq A_{i+1}(n+i+1) \leq A_{n/3}(4n/3) \leq \frac{1}{2} by Lemma D.13.)

b_i(v) = \begin{cases} 0, & v \leq n+i \\ Z_1 a_i(v), & n+i < v \leq 3n+i \\ 0, & 3n+i < v, \end{cases} \qquad b_{i+1}(v) = \begin{cases} Z_2(a_{i+1}(v) - a_i(v)), & v \leq n+i \\ Z_2 a_{i+1}(v), & n+i < v \leq v^* \\ 0, & v^* < v, \end{cases}

where Z_1, Z_2 are normalization constants. Since the mechanism M is clean, we have a_{i+1}(v) \geq a_i(v) for all v \leq n+i by D.1, and therefore b_i and b_{i+1} are valid allocation densities.

Claim D.14.

Z_1 = Z_2 < 3.

Proof.

We first prove that Z_1 = Z_2. This is because

\sum_{j=0}^{v_{\max}} \frac{b_i(j)}{Z_1} = A_i(3n+i) - A_i(n+i) = A_{i+1}(v^*) - A_i(n+i) = \sum_{j=0}^{v_{\max}} \frac{b_{i+1}(j)}{Z_2}.

We now prove that Z_1 < 3. Since the mechanism is clean and i < n/4 < n/3, any 'allocation mass' on points [0, n+i] is pushed forward to all days. Thus, A_i(n+i) \leq A_{n/3}(4n/3) \leq \frac{1}{2} by Lemma D.13. Also, by Lemma D.12, A_i(3n+i) \geq 1 - \frac{1}{200n}. Thus,

Z_1 = \frac{1}{A_i(3n+i) - A_i(n+i)} \leq \frac{1}{1/2 - \frac{1}{200n}} < 3. ∎
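The final bound is simple arithmetic and can be verified directly for every n \geq 1 (a sanity check only):

```python
# 1 / (1/2 - 1/(200n)) < 3 for all n >= 1; the denominator is at least 99/200.
for n in range(1, 1000):
    assert 1 / (1 / 2 - 1 / (200 * n)) < 3
```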

For the remainder of this proof, we let Z denote the common value of Z_1 and Z_2. We are now ready to lower bound the loss in revenue caused by allocating according to a_{i+1}. Since \tilde{R}_{\geq i+1}(j) = R_{\geq i+1}(j) for all j \leq n+i, we get

\sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) - \sum_{j=0}^{v_{\max}} a_{i+1}(j) R_{\geq i+1}(j) (17)
\geq \frac{1}{Z}\left(\sum_{j=n+i+1}^{3n+i} b_i(j) \tilde{R}_{\geq i+1}(j) - \sum_{j=0}^{v^*} b_{i+1}(j) R_{\geq i+1}(j)\right)
= \frac{1}{Z}\left(\tilde{R}_{\geq i+1}\left(\sum_{j=n+i+1}^{3n+i} j b_i(j)\right) - \sum_{j=0}^{v^*} b_{i+1}(j) R_{\geq i+1}(j)\right) (\tilde{R}_{\geq i+1} is linear in [n+i+1, 3n+i+1])

We next plan to upper bound \sum_{j=0}^{v^*} b_{i+1}(j) R_{\geq i+1}(j) using Corollary D.10. To this end, fix y = \max\left(n+i+1, \sum_{j=n+i+1}^{3n+i} j b_i(j)\right). Note that n+i+1 \leq y \leq 3n+i. We first show that y \geq \sum_{j=0}^{3n+i} j b_i(j) \geq \sum_{j=0}^{v^*} j b_{i+1}(j). Adding \sum_{j=0}^{n+i} Z j a_i(j) to both sides, the second inequality is equivalent to showing that \sum_{j=0}^{3n+i} Z j a_i(j) \geq \sum_{j=0}^{v^*} Z j a_{i+1}(j), which holds due to Lemma D.14. Now suppose b_{i+1}(n+i+1) < \frac{1}{100n}. In this case, applying Corollary D.10 to the last term in Equation 17 gives

\sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) - \sum_{j=0}^{v_{\max}} a_{i+1}(j) R_{\geq i+1}(j)
\geq \frac{1}{Z}\left(\tilde{R}_{\geq i+1}\left(\sum_{j=n+i+1}^{3n+i} j b_i(j)\right) - R_{\geq i+1}(y) + \frac{n-i}{20(2n+1)}\right)
\geq \frac{1}{3} \cdot \frac{n-i}{20(2n+1)} > \frac{1}{240},

where the second inequality uses Z < 3, that y = \sum_{j=n+i+1}^{3n+i} j b_i(j) (as b_i is supported on [n+i+1, 3n+i]), and that \tilde{R}_{\geq i+1}(y) \geq R_{\geq i+1}(y); the last inequality holds since i < n/4.

Since \frac{1}{240} > \frac{\mathsf{OPT}}{1000n^2}, this loss is unaffordable. Thus, b_{i+1}(n+i+1) \geq \frac{1}{100n}, i.e., a_{i+1}(n+i+1) \geq \frac{1}{100Zn} > \frac{1}{300n}. Since the mechanism is clean, this menu option is pushed forward to all subsequent days, implying the result. ∎

D.5.2 Analyzing general auctions

Theorem D.15 proves an \Omega(n^2) menu-complexity lower bound for clean auctions. In this section, we extend this lower bound to general auctions. Our proof relies heavily on subsection D.3, which says that any auction can be viewed as the 'muddled' version of a clean auction. More precisely, the allocation curves A_i of any mechanism are stochastically dominated by clean curves B_i. We prove that removing many menu options from B_i to get A_i, while maintaining B_i \succeq A_i, results in a huge revenue loss from day i+1 onwards. Since the revenue generated by B_j for j \leq i is at least that generated by A_j (4^{\text{th}} bullet in subsection D.3), the huge loss of revenue on days i+1 onwards is tantamount to a huge loss overall.

Proof of Theorem 5.1.

Consider an arbitrary mechanism M_A and let A_i be the allocation curve of M_A on day i. Subsection D.3 says that there exist allocation curves B_i of a clean mechanism M_B such that B_i \succeq A_i for all i. Suppose there is a day i \in (n/4, n/2] on which the number of menu options is at most n/8. Let V = \{v_l\}_{l=1}^{k} be the vector of menu options on day i. Let S be the polygon approximation of \tilde{R}_{\geq i+1} defined by the points in V \cup [n+i, v_{\max}]. The revenue generated by M_A from day i+1 onwards is at most

\sum_{j=0}^{v_{\max}} a_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) = \sum_{j=0}^{v_{\max}} a_i(j) S(\min(j, 3n+i+1)).

Since B_i \succeq A_i for all i, and S(\min(3n+i+1, x)) is increasing and concave in x, it holds by the 3^{\text{rd}} bullet in Theorem D.6 that

\sum_{j=0}^{v_{\max}} a_i(j) S(\min(j, 3n+i+1)) \leq \sum_{j=0}^{v_{\max}} b_i(j) S(\min(j, 3n+i+1)).

Consider the mechanism that mimics M_B on days 1 through i and plays optimally thereafter. Since it mimics M_B on days 1 through i, the revenue generated on these days is at least that of M_A (4^{\text{th}} bullet in subsection D.3). From day i+1 onwards, this mechanism generates revenue \sum_{j=0}^{v_{\max}} b_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)). To prove our result, it suffices to show that \sum_{j=0}^{v_{\max}} b_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) - \sum_{j=0}^{v_{\max}} b_i(j) S(\min(j, 3n+i+1)) \geq \frac{1}{60000} \geq \frac{\mathsf{OPT}}{200000n^2}. Observe that since the functions S and \tilde{R}_{\geq i+1} agree on all values at least n+i, we have

\sum_{j=0}^{v_{\max}} b_i(j) \tilde{R}_{\geq i+1}(\min(j, 3n+i+1)) - \sum_{j=0}^{v_{\max}} b_i(j) S(\min(j, 3n+i+1)) = \sum_{j=0}^{n+i} b_i(j)\left(\tilde{R}_{\geq i+1}(j) - S(j)\right).

We know that M_B has revenue at least that of M_A and is clean. Thus, applying Theorem D.15 to M_B, we get that b_i has mass at least 1/300n on at least n/4 points in the range [n, n+i]. The function S in this range is defined by the points of V \cap [0, n+i], of which there are at most n/8. Thus, there are at least n/8 points in the support of b_i that are not in V \cap [0, n+i]. For any such point x,

S(x) \leq \frac{1}{2}\left(R_{\geq i+1}(x-1) + R_{\geq i+1}(x+1)\right) (18)
= \frac{1}{2}\left((n-i) S_{x-n-1} + (n-i) S_{x-n+1}\right) (Equation 9)
= \frac{n-i}{2}\left(2 S_{x-n} + \frac{1}{\lambda^{x-n}} - \frac{1}{\lambda^{x-n-1}}\right)
= (n-i) S_{x-n} - \frac{n-i}{2} \cdot \frac{\lambda-1}{\lambda^{x-n}}
= R_{\geq i+1}(x) - \frac{n-i}{8n} \cdot \frac{1}{\lambda^{x-n}} (Equation 9)
\leq R_{\geq i+1}(x) - \frac{3(n-i)}{32n} < R_{\geq i+1}(x) - \frac{3}{64}. (Claim D.1)

Thus, the expression \sum_{j=0}^{n+i} b_i(j)\left(\tilde{R}_{\geq i+1}(j) - S(j)\right) is at least the same sum restricted to the n/8 points where Equation 18 holds. This sum is thus at least \frac{n}{8} \cdot \frac{3}{64} \cdot \frac{1}{300n} > \frac{1}{60000}. ∎
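The polygon-approximation machinery underlying S (in the style of the sandwich algorithm of [Rot92]) rests on one property worth isolating: the piecewise-linear interpolation of a concave function through any subset of breakpoints lies below the function everywhere, with equality at the breakpoints. A minimal numerical sketch (the toy function and breakpoints are hypothetical, standing in for \tilde{R}_{\geq i+1} and the menu options V):

```python
import bisect

# A toy concave function on {0, ..., 40}, standing in for R-tilde.
def f(x):
    return -(x - 20.0) ** 2 / 10.0

breakpoints = [0, 5, 12, 26, 40]  # the "menu options" defining the polygon

# S interpolates f linearly between consecutive breakpoints.
def S(x):
    r = bisect.bisect_right(breakpoints, x)
    lo = breakpoints[r - 1]
    hi = breakpoints[min(r, len(breakpoints) - 1)]
    if hi == lo:
        return f(lo)
    t = (x - lo) / (hi - lo)
    return (1 - t) * f(lo) + t * f(hi)

# For concave f, each chord lies below f, so S <= f everywhere,
# with equality at the breakpoints.
for x in range(41):
    assert S(x) <= f(x) + 1e-9
for x in breakpoints:
    assert abs(S(x) - f(x)) < 1e-9
```

The gap f(x) - S(x) at points missing from the breakpoint set is exactly the per-point revenue loss that the proof above accumulates over the n/8 uncovered support points of b_i.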

References

  • [BCKW10] Patrick Briest, Shuchi Chawla, Robert Kleinberg, and S. Matthew Weinberg. Pricing Randomized Allocations. In the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010.
  • [BGN17] Moshe Babaioff, Yannai A. Gonczarowski, and Noam Nisan. The menu-size complexity of revenue approximation. abs/1604.06580, 2017.
  • [BGO07] Jean-Daniel Boissonnat, Leonidas J. Guibas, and Steve Oudot. Learning smooth shapes by probing. Comput. Geom. Theory Appl., 37(1):38–58, May 2007.
  • [BHR91] Rainer E. Burkard, Horst W. Hamacher, and Günter Rote. Sandwich approximation of univariate convex functions with an application to separable convex programming. Naval Research Logistics (NRL), 38(6):911–924, 1991.
  • [BILW14] Moshe Babaioff, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg. A simple and approximately optimal mechanism for an additive buyer. In the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2014.
  • [Bro08] E. M. Bronstein. Approximation of convex sets by polytopes. Journal of Mathematical Sciences, 153(6):727–762, 2008.
  • [CDP+14] Xi Chen, Ilias Diakonikolas, Dimitris Paparas, Xiaorui Sun, and Mihalis Yannakakis. The complexity of optimal multidimensional pricing. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1319–1328, 2014.
  • [CDW13] Yang Cai, Constantinos Daskalakis, and S. Matthew Weinberg. Understanding Incentives: Mechanism Design becomes Algorithm Design. In the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2013.
  • [CDW16] Yang Cai, Nikhil Devanur, and S. Matthew Weinberg. A duality based unified approach to bayesian mechanism design. SIGecom Exchanges, 2016.
  • [CG00] Yeon-Koo Che and Ian Gale. The optimal mechanism for selling to a budget-constrained buyer. Journal of Economic theory, 92(2):198–233, 2000.
  • [CHK07] Shuchi Chawla, Jason D. Hartline, and Robert D. Kleinberg. Algorithmic Pricing via Virtual Valuations. In the 8th ACM Conference on Electronic Commerce (EC), 2007.
  • [CHMS10] Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. Multi-Parameter Mechanism Design and Sequential Posted Pricing. In the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
  • [CMM11] Shuchi Chawla, David L. Malec, and Azarakhsh Malekian. Bayesian mechanism design for budget-constrained agents. In Proceedings 12th ACM Conference on Electronic Commerce (EC-2011), San Jose, CA, USA, June 5-9, 2011, pages 253–262, 2011.
  • [CR14] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC ’14, pages 243–252, New York, NY, USA, 2014. ACM.
  • [DDT13] Constantinos Daskalakis, Alan Deckelbaum, and Christos Tzamos. Mechanism Design via Optimal Transport. In The 14th ACM Conference on Electronic Commerce (EC), 2013.
  • [DDT14] Constantinos Daskalakis, Alan Deckelbaum, and Christos Tzamos. The Complexity of Optimal Mechanism Design. In the 25th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
  • [DDY16] Constantinos Daskalakis, Ilias Diakonikolas, and Mihalis Yannakakis. How good is the chord algorithm? SIAM J. Comput., 45(3):811–858, 2016.
  • [DHN14] Shaddin Dughmi, Li Han, and Noam Nisan. Sampling and representation complexity of revenue maximization. In Web and Internet Economics - 10th International Conference, WINE 2014, Beijing, China, December 14-17, 2014. Proceedings, pages 277–291, 2014.
  • [DHP16] Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 426–439, 2016.
  • [DHP17] Nikhil R. Devanur, Nima Haghpanah, and Christos-Alexandros Psomas. Optimal multi-unit mechanisms with private demands. Working paper, 2017.
  • [DJF16] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, and Éva Tardos. Learning in games: Robustness of fast convergence. In the 30th Annual Conference on Neural Information Processing Systems (NIPS), 2016.
  • [DW17] Nikhil R. Devanur and S. Matthew Weinberg. The optimal mechanism for selling to a budget-constrained buyer: the general case. Working paper, 2017.
  • [FGKK16] Amos Fiat, Kira Goldner, Anna R. Karlin, and Elias Koutsoupias. The fedex problem. In Proceedings of the 2016 ACM Conference on Economics and Computation, EC ’16, pages 21–22, New York, NY, USA, 2016. ACM.
  • [GG09] S. Glasauer and P. Gruber. Asymptotic estimates for best and stepwise approximation of convex bodies III. Forum Mathematicum, 9(9):383–404, 2009.
  • [GN17] Yannai A. Gonczarowski and Noam Nisan. Efficient empirical revenue maximization in single-parameter auction environments. abs/1604.06580, 2017.
  • [HMR15] Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC ’15, pages 45–60, New York, NY, USA, 2015. ACM.
  • [HN12] Sergiu Hart and Noam Nisan. Approximate Revenue Maximization with Multiple Items. In the 13th ACM Conference on Electronic Commerce (EC), 2012.
  • [HN13] Sergiu Hart and Noam Nisan. The menu-size complexity of auctions. In the 14th ACM Conference on Electronic Commerce (EC), 2013.
  • [HR15] Sergiu Hart and Philip J. Reny. Maximizing Revenue with Multiple Goods: Nonmonotonicity and Other Observations. Theoretical Economics, 10(3):893–922, 2015.
  • [LR96] Jean-Jacques Laffont and Jacques Robert. Optimal auction with financially constrained buyers. Economics Letters, 52(2):181–186, 1996.
  • [MR15] Jamie Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 136–144, 2015.
  • [MR16] Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, pages 1298–1318, 2016.
  • [Mye81] Roger B. Myerson. Optimal Auction Design. Mathematics of Operations Research, 6(1):58–73, 1981.
  • [Rot92] G. Rote. The convergence rate of the sandwich algorithm for approximating convex functions. Computing, 48(3-4):337–361, March 1992.
  • [Rub16] Aviad Rubinstein. Settling the complexity of computing approximate two-player Nash equilibria. In FOCS, 2016.
  • [RW15] Aviad Rubinstein and S. Matthew Weinberg. Simple mechanisms for a subadditive buyer and applications to revenue monotonicity. In Proceedings of the 16th ACM Conference on Electronic Commerce, 2015.
  • [RZ83] John Riley and Richard Zeckhauser. Optimal Selling Strategies: When to Haggle, When to Hold Firm. Quarterly J. Economics, 98(2):267–289, 1983.
  • [SBN17] Sébastien Bubeck, Nikhil R. Devanur, Zhiyi Huang, and Rad Niazadeh. Online auctions and multi-scale online learning. In Proceedings of the Eighteenth ACM Conference on Economics and Computation (EC), 2017.
  • [Tha04] John Thanassoulis. Haggling over substitutes. Journal of Economic Theory, 117:217–245, 2004.
  • [WT14] Zihe Wang and Pingzhong Tang. Optimal mechanisms with simple menus. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC ’14, pages 227–240, New York, NY, USA, 2014. ACM.
  • [YG97] XQ Yang and CJ Goh. A method for convex curve approximation. European Journal of Operational Research, 97(1):205–212, 1997.