Mature internet advertising platforms offer high-level campaign management tools to help advertisers run their campaigns, often abstracting away the intricacies of how each ad is placed and focusing on aggregate metrics of interest to advertisers. On such platforms, advertisers often participate in auctions through a proxy bidder, so the standard incentive analyses that are common in the literature do not apply directly. In this paper, we take the perspective of a budget management system that surfaces aggregated incentives -- instead of individual auctions -- and compare first and second price auctions. We show that theory offers surprising endorsement for using a first price auction to sell individual impressions. In particular, first price auctions guarantee uniqueness of the steady-state equilibrium of the budget management system, monotonicity, and other desirable properties, as well as efficient computation through the solution to the well-studied Eisenberg-Gale convex program. Contrary to what one might expect from first price auctions, we show that incentive issues are not a barrier that undermines the system. Using realistic instances generated from data collected at real-world auction platforms, we show that bidders have small regret with respect to their optimal ex-post strategy, and that they have little incentive to misreport when they can influence equilibria directly by providing inputs strategically. Finally, budget-constrained bidders, who are prevalent in real-world platforms, tend to have smaller regrets. Our computations indicate that bidder budgets, pacing multipliers, and regrets are all positively associated in statistical terms.
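For intuition, here is a minimal sketch of the Eisenberg-Gale convex program mentioned above, solved with cvxpy for a Fisher market with linear utilities. The budgets and valuations are made-up toy data, and this only illustrates the structure of the program, not the paper's exact budget-management formulation.

```python
# A minimal sketch of the Eisenberg-Gale convex program for a Fisher
# market with linear utilities, solved with cvxpy.  Budgets B and
# valuations V are made-up toy data for illustration.
import cvxpy as cp
import numpy as np

B = np.array([2.0, 1.0])            # bidder budgets
V = np.array([[1.0, 2.0, 0.5],      # V[i, j] = value of bidder i
              [1.5, 0.5, 1.0]])     # for impression j
n, m = V.shape

X = cp.Variable((n, m), nonneg=True)        # fractional allocation
u = cp.sum(cp.multiply(V, X), axis=1)       # utility of each bidder

prob = cp.Problem(
    cp.Maximize(B @ cp.log(u)),             # budget-weighted log utilities
    [cp.sum(X, axis=0) <= 1])               # each impression sold at most once
prob.solve()

prices = prob.constraints[0].dual_value     # equilibrium item prices (duals)
print("allocation:\n", X.value)
print("prices:", prices)
print("budget-to-utility ratios B_i/u_i:", B / u.value)
```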
We present the first non-trivial online algorithms for the non-uniform, multicommodity buy-at-bulk (MC-BB) network design problem. Our competitive ratios qualitatively match the best known approximation factors for the corresponding offline problems. In particular, we show:

1. A polynomial time online algorithm with a poly-logarithmic competitive ratio for the MC-BB problem in undirected edge-weighted graphs.
2. A quasi-polynomial time online algorithm with a poly-logarithmic competitive ratio for the MC-BB problem in undirected node-weighted graphs.
3. For any fixed $\varepsilon > 0$, a polynomial time online algorithm with a competitive ratio of $\tilde{O}(k^{1/2+\varepsilon} \operatorname{polylog}(n))$ (where $k$ is the number of demands) for MC-BB in directed graphs.
4. Algorithms with matching competitive ratios for the prize-collecting variants of all the above problems.

Prior to our work, a logarithmic competitive ratio was known for undirected, edge-weighted graphs only for the special case of uniform costs (Awerbuch and Azar, FOCS 1997), and a polylogarithmic competitive ratio was known for the edge-weighted single-sink problem (Meyerson, SPAA 2004). To the best of our knowledge, no previous online algorithm was known, even for uniform costs, in the node-weighted and directed settings. Our main engine for the results above is an online reduction theorem from MC-BB problems to their single-sink (SS-BB) counterparts. We use the concept of junction-tree solutions (Chekuri et al., FOCS 2006), which play an important role in solving the offline versions of the problem via a greedy subroutine -- an inherently offline procedure. Our main technical contribution is in designing an online algorithm that uses only the existence of good junction trees to reduce an MC-BB instance to multiple SS-BB sub-instances. Along the way, we also give the first non-trivial online node-weighted/directed single-sink buy-at-bulk algorithms. In addition to the new results, our generic reduction also yields new proofs of recent results for the online node-weighted Steiner forest and online group Steiner forest problems.
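To make the cost model concrete, here is a tiny illustration (made-up toy data, not from the paper) of the buy-at-bulk cost structure: each edge pays its own subadditive cost of the load routed through it, so demands that share edges enjoy economies of scale.

```python
# Illustration of the (non-uniform) buy-at-bulk cost structure: each
# edge e has its own subadditive cost function f_e of the load routed
# through it, and a routing is charged sum_e f_e(load_e).  The graph,
# demands, and cost functions are made-up toy data.
from collections import defaultdict

# f_e(x) = sigma_e + delta_e * x for x > 0 (fixed cost + per-unit cost),
# a standard subadditive form in buy-at-bulk network design.
edge_cost = {
    ("s1", "v"): lambda x: 3 + 1 * x if x > 0 else 0,
    ("s2", "v"): lambda x: 2 + 2 * x if x > 0 else 0,
    ("v", "t"):  lambda x: 5 + 1 * x if x > 0 else 0,
}

# Each demand (path, amount) is routed on a chosen path.
routing = [
    (["s1", "v", "t"], 4.0),   # demand s1 -> t, 4 units
    (["s2", "v", "t"], 1.0),   # demand s2 -> t, 1 unit
]

load = defaultdict(float)
for path, amount in routing:
    for e in zip(path, path[1:]):
        load[e] += amount

total = sum(edge_cost[e](x) for e, x in load.items())
# The shared edge (v, t) pays its fixed cost 5 only once: economy of scale.
print("total buy-at-bulk cost:", total)
```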
This article presents universal algorithms for clustering problems, including the widely studied $k$-median, $k$-means, and $k$-center objectives. The input is a metric space containing all potential client locations. The algorithm must select $k$ cluster centers such that they are a good solution for any subset of clients that actually realizes. Specifically, we aim for low regret, defined as the maximum over all subsets of the difference between the cost of the algorithm's solution and that of an optimal solution. A universal algorithm's solution $\mathrm{sol}$ for a clustering problem is said to be an $(\alpha, \beta)$-approximation if for all subsets of clients $C'$, it satisfies $\mathrm{sol}(C') \leq \alpha \cdot \mathrm{opt}(C') + \beta \cdot \mathrm{mr}$, where $\mathrm{opt}(C')$ is the cost of the optimal solution for clients $C'$ and $\mathrm{mr}$ is the minimum regret achievable by any solution. Our main results are universal algorithms for the standard clustering objectives of $k$-median, $k$-means, and $k$-center that achieve $(O(1), O(1))$-approximations. These results are obtained via a novel framework for universal algorithms using linear programming (LP) relaxations. These results generalize to other $\ell_p$-objectives and the setting where some subset of the clients is fixed. We also give hardness results showing that $(\alpha, \beta)$-approximation is NP-hard if $\alpha$ or $\beta$ is at most a certain constant, even for the widely studied special case of Euclidean metric spaces. This shows that, in some sense, $(O(1), O(1))$-approximation is the strongest type of guarantee obtainable for universal clustering.
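The regret objective can be stated directly in code. Below is a brute-force sketch for a tiny one-dimensional $k$-median instance (made-up data, exponential time, for illustration only) that computes the minimum-regret set of centers.

```python
# A brute-force illustration of the universal k-median objective:
# the regret of a solution SOL is max over client subsets C' of
# sol(C') - opt(C'); the minimum-regret solution minimizes this.
# Points and k are made-up toy data; this enumeration is exponential
# and only feasible for tiny instances.
from itertools import combinations

points = [0.0, 1.0, 3.0, 7.0]          # clients = potential centers, on a line
k = 2
dist = lambda a, b: abs(a - b)

def cost(centers, clients):             # k-median cost of serving `clients`
    return sum(min(dist(c, f) for f in centers) for c in clients)

subsets = [s for r in range(1, len(points) + 1)
           for s in combinations(points, r)]

def regret(centers):                    # max over subsets of sol - opt
    return max(cost(centers, s) -
               min(cost(c, s) for c in combinations(points, k))
               for s in subsets)

best = min(combinations(points, k), key=regret)
print("minimum-regret centers:", best, "with regret", regret(best))
```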
Online allocation is a broad class of problems in which items arriving online must be allocated to agents, each of whom has a fixed utility/cost for every assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing the geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of $\ell_p$-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model. In this paper, we study online allocations in the {\em learning-augmented} setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a {\em general} algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly optimal algorithms for the other objectives such as Santa Claus, Nash welfare, $\ell_p$-minimization, etc. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
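As a concrete illustration of "a single learned parameter for every agent", here is a minimal sketch in the spirit of learned-weight makespan algorithms: each machine carries one learned weight, and each arriving job is split across its allowed machines in proportion to those weights. This is a toy sketch, not the paper's algorithm; the jobs, weights, and restricted-assignment sets are made up.

```python
# A minimal sketch (not the paper's algorithm) of the learning-augmented
# idea for online makespan minimization: each machine i carries a single
# learned parameter w[i], and each arriving job is split fractionally
# across its allowed machines in proportion to those weights.
import numpy as np

w = np.array([4.0, 2.0, 1.0])           # learned per-machine weights (toy)
load = np.zeros(3)                       # fractional machine loads

# Each online job: (size, list of machines it may run on).
jobs = [(6.0, [0, 1]), (3.0, [1, 2]), (4.0, [0, 2]), (2.0, [2])]

for size, allowed in jobs:
    ws = w[allowed]
    load[allowed] += size * ws / ws.sum()   # proportional split

print("loads:", load, "fractional makespan:", load.max())
```

Good weights yield a nearly balanced split; the resilience question is how the makespan degrades as the weights drift from their ideal values.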
Let $R$ and $B$ be two point sets in $\mathbb{R}^d$ with $|R| + |B| = n$, where $d$ is a constant. Let $\lambda: R \cup B \to \mathbb{N}$ be a demand function such that $\sum_{r \in R} \lambda(r) = \sum_{b \in B} \lambda(b)$, and let $\|\cdot\|$ be a suitable distance function such as the $L_p$ distance. The transportation problem asks for a map $\tau: R \times B \to \mathbb{N}$ such that $\sum_{b \in B} \tau(r,b) = \lambda(r)$ for every $r \in R$, $\sum_{r \in R} \tau(r,b) = \lambda(b)$ for every $b \in B$, and $\sum_{r \in R, b \in B} \tau(r,b) \|r-b\|$ is minimized. We present three new results for the transportation problem when $\|r-b\|$ is any $L_p$ metric:

- For any constant $\varepsilon > 0$, an $O(n^{1+\varepsilon})$ expected time randomized algorithm that returns a transportation map with expected cost $O(\log^2(1/\varepsilon))$ times the optimal cost.
- For any $\varepsilon > 0$, a $(1+\varepsilon)$-approximation in $O(n^{3/2}\varepsilon^{-d} \operatorname{polylog}(U) \operatorname{polylog}(n))$ time, where $U = \max_{p\in R\cup B} \lambda(p)$.
- An exact, strongly polynomial $O(n^2 \operatorname{polylog} n)$ time algorithm for $d = 2$.
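For a concrete instance, the transportation problem can be written directly as a linear program. The sketch below solves a made-up four-point instance exactly with scipy.optimize.linprog; it only illustrates the objective and the demand constraints, not the paper's faster algorithms.

```python
# A small worked instance of the transportation problem, solved as an
# LP with scipy.optimize.linprog.  Points and demands are made-up toy
# data; variables tau(r_i, b_j) are stored row-major in a flat vector.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.0, 0.0], [2.0, 1.0]])       # red points
B = np.array([[1.0, 0.0], [3.0, 2.0]])       # blue points
lam_R = np.array([3, 2])                     # demands; totals match
lam_B = np.array([1, 4])

cost = np.linalg.norm(R[:, None, :] - B[None, :, :], axis=2)  # L2 distances

nR, nB = len(R), len(B)
A_eq, b_eq = [], []
for i in range(nR):                          # sum_b tau(r_i, b) = lam_R[i]
    row = np.zeros(nR * nB); row[i * nB:(i + 1) * nB] = 1
    A_eq.append(row); b_eq.append(lam_R[i])
for j in range(nB):                          # sum_r tau(r, b_j) = lam_B[j]
    row = np.zeros(nR * nB); row[j::nB] = 1
    A_eq.append(row); b_eq.append(lam_B[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None))
print("optimal transport cost:", res.fun)
print("map tau:\n", res.x.reshape(nR, nB))
```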
Long-distance multi-hop wireless networks have been used in recent years to provide connectivity to rural areas. The salient features of such networks include TDMA channel access, nodes with multiple radios, and point-to-point long-distance wireless links established using high-gain directional antennas mounted on high towers. It has been demonstrated previously that in such network architectures, nodes can transmit concurrently on multiple radios, as well as receive concurrently on multiple radios. However, concurrent transmission on one radio and reception on another radio causes interference. Under this scheduling constraint, given a set of source-destination demand rates, we consider the problem of maximizing the common fraction of every demand that is satisfied (also called the maximum concurrent flow problem). We give a novel joint routing and scheduling scheme for this problem, based on linear programming and graph coloring. We analyze our algorithm theoretically and prove that it satisfies at least 50% of any satisfiable set of demands on most practical networks (those with maximum node degree at most 5).
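The scheduling constraint can be made concrete with a small sketch: in each TDMA slot every node acts either as a transmitter or as a receiver (never both), and a greedy pass assigns each directed link to the first compatible slot. The link set is made-up toy data, and this greedy sketch carries none of the guarantees of the LP-and-coloring algorithm described above.

```python
# A minimal sketch of the mix-tx-rx scheduling constraint: in any time
# slot a node may transmit on several radios or receive on several
# radios, but not both.  Greedily assign each directed link to the
# first slot where its endpoints' roles are consistent.  Toy data only.
links = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "a")]

slots = []          # each slot: dict node -> "TX" or "RX"
schedule = {}       # link -> slot index

for (u, v) in links:
    for t, roles in enumerate(slots):
        # u must transmit and v must receive; unassigned nodes are free.
        if roles.get(u, "TX") == "TX" and roles.get(v, "RX") == "RX":
            roles[u], roles[v] = "TX", "RX"
            schedule[(u, v)] = t
            break
    else:                                   # no compatible slot: open one
        slots.append({u: "TX", v: "RX"})
        schedule[(u, v)] = len(slots) - 1

print("slots used:", len(slots))
print("schedule:", schedule)
```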
Given a weighted graph $G$ and an error parameter $\epsilon > 0$, the {\em graph sparsification} problem requires sampling edges in $G$ and giving the sampled edges appropriate weights to obtain a sparse graph $G_{\epsilon}$ (containing $O(n\log n)$ edges in expectation) with the following property: the weight of every cut in $G_{\epsilon}$ is within a factor of $(1\pm \epsilon)$ of the weight of the corresponding cut in $G$. We provide a generic framework that sets out sufficient conditions for any particular sampling scheme to result in good sparsifiers, and obtain a set of results by simple instantiations of this framework. The results we obtain include the following:

(1) We improve the time complexity of graph sparsification from $O(m\log^3 n)$ to $O(m + n\log^4 n)$ for graphs with polynomial edge weights.
(2) We improve the time complexity of graph sparsification from $O(m\log^3 n)$ to $O(m\log^2 n)$ for graphs with arbitrary edge weights.
(3) If the size of the sparsifier is allowed to be $O(n\log^2 n/\epsilon^2)$ instead of $O(n\log n/\epsilon^2)$, we improve the time complexity of sparsification to $O(m)$ for graphs with polynomial edge weights.
(4) We show that sampling using standard connectivities results in good sparsifiers, thus resolving an open question of Benczur and Karger. As a corollary, we give a simple proof of (a slightly weaker version of) a result due to Spielman and Srivastava showing that sampling using effective resistances produces good sparsifiers.
(5) We give a simple proof showing that sampling using strong connectivities results in good sparsifiers, a result obtained previously using a more involved proof by Benczur and Karger.

A key ingredient of our proofs is a generalization of bounds on the number of small cuts in an undirected graph due to Karger; this generalization might be of independent interest.
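The basic sampling template behind these results is easy to state in code: keep each edge with some probability and scale kept edges by the inverse probability, so every cut keeps its weight in expectation. The sketch below uses a uniform probability on a made-up graph; the schemes above choose per-edge probabilities based on connectivities.

```python
# A minimal illustration of cut-preserving sampling: keep each edge
# independently with probability p and weight kept edges by 1/p, so
# every cut's weight is preserved in expectation.  Uniform p on a
# made-up complete graph, for illustration only.
import itertools, random

random.seed(0)
edges = {(u, v): 1.0 for u, v in itertools.combinations(range(8), 2)}  # K8
p = 0.5
sparse = {e: w / p for e, w in edges.items() if random.random() < p}

def cut_weight(g, S):
    return sum(w for (u, v), w in g.items() if (u in S) != (v in S))

S = {0, 1, 2, 3}
print("original cut:", cut_weight(edges, S))
print("sparsifier cut:", cut_weight(sparse, S))
```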
We give an algorithm to find a mincut in an $n$-vertex, $m$-edge weighted directed graph using $\tilde O(\sqrt{n})$ calls to any maxflow subroutine. Using state-of-the-art maxflow algorithms, this yields a directed mincut algorithm that runs in $\tilde O(m\sqrt{n} + n^2)$ time. This improves on the 30-year-old bound of $\tilde O(mn)$ obtained by Hao and Orlin for this problem.
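For contrast, here is the classical reduction from global directed mincut to maxflow, which uses $2(n-1)$ maxflow calls rather than $\tilde O(\sqrt{n})$: fix any vertex $s$; the global mincut must separate $s$ from some vertex $v$ in one direction or the other. The weighted digraph below is made-up toy data, solved via networkx.

```python
# The classical 2(n-1)-call reduction from global directed mincut to
# maxflow: fix a vertex s and take the minimum of maxflow(s, v) and
# maxflow(v, s) over all other vertices v.  Toy digraph via networkx.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from(
    [(0, 1, 3), (1, 2, 2), (2, 0, 4), (1, 3, 1), (3, 0, 2), (2, 3, 5)],
    weight="capacity")

s = 0
mincut = min(
    min(nx.maximum_flow_value(G, s, v), nx.maximum_flow_value(G, v, s))
    for v in G.nodes if v != s)
print("global directed mincut:", mincut)
```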