TechTalks from event: FOCS 2011

We will be uploading the videos for FOCS 2011 during the week of Nov 28th 2011. If you find any discrepancy, please let us know by clicking the "report error" link on the talk page. If you did not give permission for your talk to be published and we have published it by mistake, please notify us immediately at support AT weyond.com

6A

  • Streaming Algorithms via Precision Sampling. Authors: Alexandr Andoni, Robert Krauthgamer, and Krzysztof Onak
    A technique introduced by Indyk and Woodruff [STOC 2005] has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,\ldots,x_n)$, which is useful for the following applications:
      - Estimating the $F_k$-moment of $x$, for $k>2$.
      - Estimating the $\ell_p$-norm of $x$, for $p\in[1,2]$, with small update time.
      - Estimating cascaded norms $\ell_p(\ell_q)$ for all $p,q>0$.
      - $\ell_1$ sampling, where the goal is to produce an element $i$ with probability (approximately) $|x_i|/\|x\|_1$. It extends to similarly defined $\ell_p$-sampling, for $p\in[1,2]$.
    For all these applications the algorithm is essentially the same: pre-multiply the vector $x$ entry-wise by a well-chosen random vector, and run a heavy-hitter estimation algorithm on the resulting vector. Our sketch is a linear function of $x$, thereby allowing general updates to the vector $x$.
    Precision Sampling itself addresses the problem of estimating a sum $\sum_{i=1}^n a_i$ from weak estimates of each real $a_i\in[0,1]$. More precisely, the estimator first chooses a desired precision $u_i\in(0,1]$ for each $i\in[n]$, and then it receives an estimate of every $a_i$ within additive $u_i$. Its goal is to provide a good approximation to $\sum a_i$ while keeping a tab on the cost $\sum_i (1/u_i)$. Here we refine previous work [Andoni, Krauthgamer, and Onak, FOCS 2010], which shows that as long as $\sum a_i=\Omega(1)$, a good multiplicative approximation can be achieved using total precision of only $O(n\log n)$.
    (An illustrative sketch of the $\ell_1$-sampling idea appears after this session's listing.)
  • Steiner Shallow-Light Trees are Exponentially Lighter than Spanning Ones. Authors: Michael Elkin and Shay Solomon
    For a pair of parameters alpha >= 1, beta >= 1, a spanning tree T of a weighted undirected n-vertex graph G = (V,E,w) is called an (alpha,beta)-shallow-light tree ((alpha,beta)-SLT for short) of G with respect to a designated vertex rt in V if (1) it approximates all distances from rt to other vertices up to a factor of alpha, and (2) its weight is at most beta times the weight of the minimum spanning tree MST(G) of G. The parameter alpha (respectively, beta) is called the root-distortion (resp., lightness) of the tree T.
    Shallow-light trees (SLTs) constitute a fundamental graph structure, with numerous theoretical and practical applications. In particular, they were used for constructing spanners, in network design, for VLSI-circuit design, for various data gathering and dissemination tasks in wireless and sensor networks, in overlay networks, and in the message-passing model of distributed computing. Tight tradeoffs between the parameters of SLTs were established by Awerbuch, Baratz, and Peleg [PODC'90] and by Khuller, Raghavachari, and Young [SODA'93]. They showed that for any eps > 0 there always exist (1+eps, O(1/eps))-SLTs, and that the upper bound beta = O(1/eps) on the lightness of SLTs cannot be improved.
    In this paper we show that using Steiner points one can build SLTs with logarithmic lightness, i.e., beta = O(log 1/eps). This establishes an exponential separation between spanning SLTs and Steiner ones. One particularly remarkable point on our tradeoff curve is eps = 0. In this regime our construction provides a shortest-path tree with weight at most O(log n) * w(MST(G)). Moreover, we prove matching lower bounds showing that all our results are tight up to constant factors. Finally, on our way to these results we settle (up to constant factors) a number of open questions that were raised by Khuller et al. in SODA'93.
    (A short sketch that computes the root-distortion and lightness of a given tree appears after this session's listing.)
  • Fully dynamic maximal matching in O(log n) update time. Authors: Surender Baswana, Manoj Gupta, and Sandeep Sen
    We present an algorithm for maintaining a maximal matching in a graph under addition and deletion of edges. Our data structure is randomized and takes $O(\log n)$ expected amortized time per edge update, where $n$ is the number of vertices in the graph. While there is a trivial $O(n)$ algorithm for edge update, the previous best known result for this problem for a graph with $n$ vertices and $m$ edges is $O({(n+m)}^{0.7072})$, which is sub-linear only for sparse graphs. To the best of our knowledge, this is the first polylogarithmic update time for maximal matching, an exponential improvement over the previous results. For the related problem of maximum matching, Onak and Rubinfeld \cite{onak} designed a randomized data structure that achieves $O(\log^2 n)$ amortized time per update for maintaining a $c$-approximate maximum matching for some large constant $c$. In contrast, we can maintain a factor-two approximate maximum matching in $O(\log n)$ expected time per update as a direct corollary of the maximal matching scheme. This in turn also yields a 2-approximate vertex cover maintenance scheme that takes $O(\log n)$ expected time per update.
    (A sketch of the trivial $O(n)$-per-update baseline mentioned above appears after this session's listing.)
  • Which Networks Are Least Susceptible to Cascading Failures? Authors: Lawrence Blume, David Easley, Jon Kleinberg, Robert Kleinberg, and Eva Tardos
    The resilience of networks to various types of failures is an undercurrent in many parts of graph theory and network algorithms. In this paper we study the resilience of networks in the presence of cascading failures: failures that spread from one node to another across the network structure. One finds such cascading processes at work in the kind of contagious failures that spread among financial institutions during a financial crisis, through nodes of a power grid or communication network during a widespread outage, or through a human population during the outbreak of an epidemic disease.
    A widely studied model of cascades in networks assumes that each node $v$ of the network has a threshold $\ell(v)$, and fails if it has at least $\ell(v)$ failed neighbors. We assume that each node selects a threshold $\ell(v)$ independently using a probability distribution $\mu$. Our work centers on a parameter that we call the $\mu$-risk of a graph: the maximum failure probability of any node in the graph, in this threshold cascade model parameterized by threshold distribution $\mu$. This defines a very broad class of models; for example, the large literature on edge percolation, in which propagation happens along edges that are included independently at random with some probability $p$, takes place in a small part of the parameter space of threshold cascade models, and one where the distribution $\mu$ is monotonically decreasing with the threshold. In contrast we want to study the whole space, including threshold distributions with qualitatively different behavior, such as those that are sharply increasing.
    We develop techniques for relating differences in $\mu$-risk to the structures of the underlying graphs. This is challenging in large part because, despite the simplicity of its formulation, the threshold cascade model has been very hard to analyze for arbitrary graphs $G$ and arbitrary threshold distributions $\mu$. It turns out that when selecting among a set of graphs to minimize the $\mu$-risk, the result depends quite intricately on $\mu$.
    We develop several techniques for evaluating the $\mu$-risk of $d$-regular graphs. For $d=2$ we are able to solve the problem completely: the optimal graph is always a clique (i.e., a triangle) or a tree (i.e., an infinite path), although which graph is better exhibits a surprising non-monotonicity as the threshold parameters vary. When $d>2$ we present a technique based on power-series expansions of the failure probability that allows us to compare graphs in certain parts of the parameter space, deriving conclusions including the fact that as $\mu$ varies, at least three different graphs are optimal among $3$-regular graphs. In particular, the set of optimal 3-regular graphs includes one which is neither a clique nor a tree.
    (A Monte Carlo sketch that estimates the $\mu$-risk of a small graph appears after this session's listing.)
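
The $\ell_1$-sampling application in the precision-sampling talk above admits a small offline illustration. The abstract describes the algorithm as an entry-wise multiplication of $x$ by a well-chosen random vector followed by heavy-hitter estimation; the sketch below uses exponential scaling factors, an assumed standard choice rather than necessarily the paper's exact distribution, and, since it works offline rather than over a stream, it simply takes the argmax of the scaled vector instead of running a heavy-hitter sketch. The name l1_sample is illustrative.

```python
import random
from collections import Counter

def l1_sample(x):
    """Return an index i with probability (approximately) |x[i]| / ||x||_1.

    Offline illustration of the scaling idea in the abstract: multiply each
    entry by a random factor and report the dominant ("heavy hitter")
    coordinate of the scaled vector.  The factor 1/Exp(1) is an assumed,
    standard choice; in the streaming setting the argmax would be recovered
    by a heavy-hitter sketch rather than by inspecting the vector exactly.
    """
    scaled = [abs(xi) / random.expovariate(1.0) for xi in x]
    return max(range(len(x)), key=lambda i: scaled[i])

if __name__ == "__main__":
    x = [3.0, -1.0, 0.5, 0.0, 2.5]
    trials = 100_000
    counts = Counter(l1_sample(x) for _ in range(trials))
    total = sum(abs(xi) for xi in x)
    for i, xi in enumerate(x):
        print(f"i={i}: empirical {counts[i] / trials:.3f}   target {abs(xi) / total:.3f}")
```

The reason the scaling works is an exponential-race argument: if the $u_i$ are i.i.d. Exp(1), then $u_i/|x_i|$ is exponential with rate $|x_i|$, so the smallest of these (equivalently, the largest $|x_i|/u_i$) falls on index $i$ with probability exactly $|x_i|/\|x\|_1$.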
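
For the Steiner shallow-light tree talk, the two parameters in the definition can be measured directly on any concrete instance. The sketch below is a minimal illustration, assuming small dictionary-based graph representations: it computes the root-distortion alpha (the worst stretch of a root distance in T relative to G) and the lightness beta (the weight of T divided by the weight of a minimum spanning tree of G). The helper names dijkstra, mst_weight, and slt_parameters are ours, not from the paper.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances in a weighted graph.
    adj maps each vertex to a list of (neighbor, weight) pairs and is
    assumed to be symmetric (undirected graph)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def mst_weight(adj):
    """Weight of a minimum spanning tree of a connected graph (Prim)."""
    start = next(iter(adj))
    visited = {start}
    pq = [(w, v) for v, w in adj[start]]
    heapq.heapify(pq)
    total = 0.0
    while pq and len(visited) < len(adj):
        w, v = heapq.heappop(pq)
        if v in visited:
            continue
        visited.add(v)
        total += w
        for x, wx in adj[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, x))
    return total

def slt_parameters(adj_g, adj_t, rt):
    """Root-distortion alpha and lightness beta of a spanning tree T of G,
    following the definition in the abstract."""
    dg = dijkstra(adj_g, rt)
    dt = dijkstra(adj_t, rt)
    alpha = max(dt[v] / dg[v] for v in adj_g if v != rt)
    tree_weight = sum(w for u in adj_t for _, w in adj_t[u]) / 2.0  # each edge counted twice
    return alpha, tree_weight / mst_weight(adj_g)

if __name__ == "__main__":
    # Unit-weight 4-cycle rt-a-b-c-rt and the spanning path rt-a-b-c inside it.
    G = {"rt": [("a", 1.0), ("c", 1.0)],
         "a": [("rt", 1.0), ("b", 1.0)],
         "b": [("a", 1.0), ("c", 1.0)],
         "c": [("b", 1.0), ("rt", 1.0)]}
    T = {"rt": [("a", 1.0)],
         "a": [("rt", 1.0), ("b", 1.0)],
         "b": [("a", 1.0), ("c", 1.0)],
         "c": [("b", 1.0)]}
    alpha, beta = slt_parameters(G, T, "rt")
    print(f"alpha = {alpha}, beta = {beta}")  # alpha = 3.0, beta = 1.0
```

On the 4-cycle in this example, the spanning path rooted at rt is a (3,1)-SLT: it is exactly as light as the MST but stretches the distance to the far vertex by a factor of 3, the kind of tension between root-distortion and lightness that the tradeoffs in the abstract quantify.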
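
The maximal-matching talk contrasts its $O(\log n)$ expected amortized update time with a trivial $O(n)$ baseline. The sketch below implements that baseline, not the paper's randomized data structure: on insertion, a new edge between two free vertices is matched immediately, and on deletion of a matched edge each freed endpoint rescans its neighborhood for a free partner, restoring maximality in $O(\deg) = O(n)$ time. The class name NaiveDynamicMatching is illustrative.

```python
class NaiveDynamicMatching:
    """Maximal matching under edge insertions and deletions: the trivial
    O(n)-per-update baseline mentioned in the abstract, not the paper's
    O(log n) randomized data structure."""

    def __init__(self):
        self.adj = {}    # vertex -> set of current neighbors
        self.mate = {}   # vertex -> matched partner, or None if free

    def _free(self, v):
        return self.mate.get(v) is None

    def _try_match(self, v):
        # Scan v's neighborhood for a free partner: O(deg(v)) = O(n) work.
        for u in self.adj.get(v, ()):
            if self._free(u):
                self.mate[v] = u
                self.mate[u] = v
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        self.mate.setdefault(u, None)
        self.mate.setdefault(v, None)
        if self._free(u) and self._free(v):
            self.mate[u], self.mate[v] = v, u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:       # a matched edge vanished:
            self.mate[u] = self.mate[v] = None
            self._try_match(u)          # each freed endpoint rescans its
            self._try_match(v)          # neighbors, restoring maximality.

    def matching(self):
        return {frozenset((u, v)) for u, v in self.mate.items() if v is not None}

if __name__ == "__main__":
    m = NaiveDynamicMatching()
    for e in [(1, 2), (2, 3), (3, 4)]:
        m.insert(*e)
    print(m.matching())   # two matched edges: {1, 2} and {3, 4}
    m.delete(1, 2)
    print(m.matching())   # {3, 4} remains; edge (2, 3) is still covered by vertex 3
```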
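
For the cascading-failures talk, the threshold model and the $\mu$-risk can be estimated by direct simulation on a finite graph. The sketch below assumes the standard reading that a node drawing threshold 0 fails spontaneously and seeds the cascade, and it uses a small cycle as a finite stand-in for the 2-regular graphs discussed in the abstract; the threshold distribution mu in the example is hypothetical, and the function names are ours.

```python
import random
from collections import Counter

def cascade_failures(adj, thresholds):
    """Fixed point of the threshold cascade: node v fails once it has at
    least thresholds[v] failed neighbors; threshold 0 means the node fails
    spontaneously and seeds the cascade (an assumed reading of the model)."""
    failed = {v for v, t in thresholds.items() if t == 0}
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in failed and sum(u in failed for u in adj[v]) >= thresholds[v]:
                failed.add(v)
                changed = True
    return failed

def mu_risk(adj, mu, trials=20000, rng=random):
    """Monte Carlo estimate of the mu-risk: the largest per-node failure
    probability when thresholds are drawn i.i.d. from mu ({threshold: prob})."""
    values, probs = zip(*mu.items())
    fail_counts = Counter()
    for _ in range(trials):
        thresholds = {v: rng.choices(values, probs)[0] for v in adj}
        for v in cascade_failures(adj, thresholds):
            fail_counts[v] += 1
    return max(fail_counts[v] for v in adj) / trials

if __name__ == "__main__":
    # A 2-regular example: the cycle on 12 vertices, a finite stand-in for the
    # infinite path discussed in the abstract.
    n = 12
    cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    # Hypothetical threshold distribution: spontaneous failure w.p. 0.05,
    # threshold 1 w.p. 0.20, otherwise a threshold too high to ever be met.
    mu = {0: 0.05, 1: 0.20, 3: 0.75}
    print(f"estimated mu-risk of the 12-cycle: {mu_risk(cycle, mu):.3f}")
```

Running the same estimate for different graphs under the same $\mu$ (for instance, a cycle versus a disjoint union of triangles in the 2-regular case) reproduces, in miniature, the kind of $\mu$-dependent ranking of graphs that the paper analyzes.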