TechTalks from event: IEEE FOCS 2010

51st Annual IEEE Symposium on Foundations of Computer Science

  • On the Queue Number of Planar Graphs Authors: Giuseppe Di Battista and Fabrizio Frati and János Pach
    We prove that planar graphs have $O(\log^4 n)$ queue number, thus improving upon the previous $O(\sqrt{n})$ upper bound. Consequently, planar graphs admit 3D straight-line crossing-free grid drawings in $O(n \log^c n)$ volume, for some constant $c$, thus improving upon the previous $O(n^{3/2})$ upper bound. This allows us to settle a number of both old and recent open problems.
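
    For readers who want the parameter pinned down, the standard definition (not restated in the abstract) is: a queue layout of a graph $G$ is a linear order $\sigma$ of $V(G)$ together with a partition of $E(G)$ into queues such that no two edges in the same queue nest; the queue number is the minimum number of queues over all layouts:

        \[ \mathrm{qn}(G) = \min \{\, q \;:\; \exists\, \sigma,\ E(G) = E_1 \cup \dots \cup E_q, \text{ with no } uv, xy \in E_i \text{ such that } u <_\sigma x <_\sigma y <_\sigma v \,\}. \]
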
  • Holographic Algorithms with Matchgates Capture Precisely Tractable Planar #CSP Authors: Jin-Yi Cai and Pinyan Lu and Mingji Xia
    Valiant introduced matchgate computation and holographic algorithms. A number of seemingly exponential-time problems can be solved by this novel algorithmic paradigm in polynomial time. We show that, in a very strong sense, matchgate computations and holographic algorithms based on them provide a universal methodology for a broad class of counting problems that have been studied in the statistical physics community for decades. They capture precisely those problems which are #P-hard on general graphs but computable in polynomial time on planar graphs.

    More precisely, we prove complexity dichotomy theorems in the framework of counting CSP problems. The local constraint functions take Boolean inputs, and can be arbitrary real-valued symmetric functions. We prove that every problem in this class belongs to precisely one of three categories: (1) those which are tractable (i.e., polynomial-time computable) on general graphs, (2) those which are #P-hard on general graphs but tractable on planar graphs, or (3) those which are #P-hard even on planar graphs. The classification criteria are explicit. Moreover, problems in category (2) are tractable on planar graphs precisely by holographic algorithms with matchgates.
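
    As a concrete instance of the framework (our illustration, not from the abstract), the partition function of such a counting CSP sums, over all Boolean assignments, the product of the local constraint values; taking the symmetric binary function $f$ with $f(1,1) = 0$ and $f = 1$ otherwise yields the problem of counting independent sets:

        \[ Z(G) = \sum_{\sigma : V \to \{0,1\}} \prod_{(u,v) \in E} f(\sigma(u), \sigma(v)), \qquad f(x,y) = \begin{cases} 0 & \text{if } x = y = 1, \\ 1 & \text{otherwise.} \end{cases} \]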

  • A Decidable Dichotomy Theorem on Directed Graph Homomorphisms with Non-negative Weights Authors: Jin-Yi Cai and Xi Chen
    The complexity of graph homomorphism problems has been the subject of intense study. It is a long-standing open problem to give a (decidable) complexity dichotomy theorem for the partition function of directed graph homomorphisms. In this paper, we prove a decidable complexity dichotomy theorem for this problem, and our theorem applies to all non-negative weighted forms of the problem: given any fixed matrix $A$ with non-negative entries, the partition function $Z_A(G)$ of directed graph homomorphisms from any directed graph $G$ is either computable in polynomial time or #P-hard, depending on the matrix $A$. The proof of the dichotomy theorem is combinatorial, but involves the definition of an infinite family of graph homomorphism problems. The proof of its decidability is algebraic, using properties of polynomials.
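
    To make the classified object concrete, here is a minimal brute-force evaluation of the partition function $Z_A(G)$ (our sketch; it enumerates all $m^{|V|}$ maps, so it is only usable on tiny instances):

        from itertools import product

        def Z(A, vertices, edges):
            """Z_A(G): sum over all maps sigma: V -> {0,...,m-1} of the
            product over directed edges (u, v) of A[sigma(u)][sigma(v)]."""
            m = len(A)
            total = 0.0
            for sigma in product(range(m), repeat=len(vertices)):
                assign = dict(zip(vertices, sigma))
                weight = 1.0
                for u, v in edges:
                    weight *= A[assign[u]][assign[v]]
                total += weight
            return total

        # Example: A encodes the directed 2-cycle; a directed triangle has
        # no homomorphism into it, so Z = 0.
        print(Z([[0, 1], [1, 0]], ["a", "b", "c"],
                [("a", "b"), ("b", "c"), ("c", "a")]))
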
  • Overcoming the Hole In The Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage Authors: Zvika Brakerski and Yael Tauman Kalai and Jonathan Katz and Vinod Vaikuntanathan
    In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side-channel attacks which, through various physical means, can recover part of the secret key. Micali and Reyzin (2004) suggested the intriguing possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used. To achieve this impossible-sounding goal, their work and several subsequent works all required the so-called ``only computation leaks information'' assumption, and it was unclear whether this strong assumption could be removed. We succeed in removing this assumption.

    We show how to securely update a secret key while information is leaked, allowing some leakage \emph{even during the updates} themselves. Namely, we construct schemes that remain secure even if an attacker, \emph{at each time period}, can probe the \emph{entire} memory (containing a secret key) and ``leak out'' a $1/4 - \epsilon$ fraction of the secret key. The attacker may also probe the memory \emph{during the updates} and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update).

    Specifically, under the decisional linear assumption on bilinear groups, we achieve the above for public key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct encryption schemes even under the (stronger) assumption of Micali and Reyzin. One of our technical tools is a new linear algebraic theorem (which is unconditional, and requires no computational assumptions), stating that ``random subspaces are leakage-resilient''.
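
    The flavor of that tool can be conveyed by a toy sketch (entirely our illustration, not the paper's actual construction): think of the functional key as a hidden subspace, store a random vector in it, and refresh by sampling a fresh random vector of the same subspace, so leakage on one stored representation says little about the next.

        import random

        P = 101  # toy prime modulus; a real scheme works over a cryptographic group

        def refresh(basis, p=P):
            """Sample a fresh random vector in the span of `basis` over F_p.
            Each refresh gives a new representation of the same subspace,
            so the 'functional' key never changes."""
            coeffs = [random.randrange(p) for _ in basis]
            n = len(basis[0])
            return [sum(c * b[i] for c, b in zip(coeffs, basis)) % p
                    for i in range(n)]

        # A 2-dimensional subspace of F_101^4; successive keys are
        # independent random points of that subspace.
        basis = [[1, 0, 3, 7], [0, 1, 4, 9]]
        key_t0 = refresh(basis)
        key_t1 = refresh(basis)  # refreshed key, same subspace
        print(key_t0, key_t1)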

    The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (without the aforementioned strong assumption) and (2) giving a public key \emph{encryption} scheme …

  • Cryptography Against Continuous Memory Attacks Authors: Yevgeniy Dodis and Kristiyan Haralambiev and Adriana Lopez-Alt and Daniel Wichs
    We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR) if it allows users to refresh their secret keys, using only fresh local randomness, such that:

    1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the ``outside world'' is neither affected by these key refreshes nor needs to know about their frequency.

    2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.
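
    The syntax this definition asks for can be summarized by a hypothetical interface (our sketch; all names and signatures are illustrative, not taken from the paper):

        class CLRSignatureScheme:
            """Illustrative interface for a continuous-leakage-resilient
            signature scheme, mirroring properties 1 and 2 above."""

            def key_gen(self):
                """Return (pk, sk); pk is fixed for the scheme's lifetime."""
                raise NotImplementedError

            def refresh(self, sk):
                """Re-randomize sk using only fresh local randomness;
                the public key is unchanged, so verifiers never notice."""
                raise NotImplementedError

            def sign(self, sk, message):
                raise NotImplementedError

            def verify(self, pk, message, signature):
                raise NotImplementedError

        # Security, informally: the adversary may leak a bounded number of
        # bits of the current sk between consecutive refresh() calls, with
        # no bound on the total leakage over the scheme's lifetime.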

    In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well-established assumption on bilinear groups, the $K$-Linear assumption (for any constant $K \geq 1$).

    Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.

    Prior to our work, no ``truly CLR'' schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that ``only computation leaks information''; (b) the overall amount of key leakage is bounded a priori for the lifetime of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free ``master secret key'' to be stored securely; (e) …

  • On the Insecurity of Parallel Repetition for Leakage Resilience Authors: Allison Lewko and Brent Waters
    A fundamental question in leakage-resilient cryptography is: can leakage resilience always be amplified by parallel repetition? It is natural to expect that if we have a leakage-resilient primitive tolerating $\ell$ bits of leakage, we can take $n$ copies of it to form a system tolerating $n\ell$ bits of leakage. In this paper, we show that this is not always true. We construct a public key encryption system which is secure when at most $\ell$ bits are leaked, but $n$ copies of the system are insecure when $n\ell$ bits are leaked. Our results hold either in composite order bilinear groups under a variant of the subgroup decision assumption \emph{or} in prime order bilinear groups under the decisional linear assumption where the public key systems share a common reference parameter.
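
    In symbols, the natural amplification claim that the paper refutes (our rendering of the abstract's statement) is

        \[ \Pi \text{ is } \ell\text{-leakage-resilient} \;\overset{?}{\Longrightarrow}\; \Pi^n \ (n \text{ independent copies}) \text{ is } n\ell\text{-leakage-resilient}, \]

    and the constructed encryption system is a counterexample to this implication.
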
  • Black-Box, Round-Efficient Secure Computation via Non-Malleability Amplification Authors: Hoeteck Wee
    We present new and improved protocols for secure multi-party computation. Our main contributions are as follows:

    -- an $O(\log^* n)$-round protocol for secure multi-party computation with a dishonest majority that relies on black-box access to dense cryptosystems, homomorphic encryption schemes, or lossy encryption schemes. This improves upon the recent $O(1)^{\log^* n}$-round protocol of Lin, Pass and Venkitasubramaniam (STOC 2009), which relies on non-black-box access to a smaller class of primitives.

    -- an $O(1)$-round protocol requiring, in addition, black-box access to a one-way function with sub-exponential hardness, improving upon the recent work of Pass and Wee (Eurocrypt 2010).

    These are the first black-box constructions for secure computation with sublinear round complexity. Our constructions build on and improve upon the work of Lin and Pass (STOC 2009) on non-malleability amplification, as well as that of Ishai et al. (STOC 2006) on black-box secure computation.

    In addition to the results on secure computation, we also obtain a simple, self-contained construction of an $O(\log^* n)$-round non-malleable commitment scheme based on one-way functions, improving upon the recent $O(1)^{\log^* n}$-round protocol of Lin and Pass (STOC 2009). Our construction uses a novel transformation for handling arbitrary man-in-the-middle scheduling strategies which improves upon a previous construction of Barak (FOCS 2002).
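
    To see why $O(\log^* n)$ rounds is essentially constant in practice, recall that $\log^* n$ is the iterated logarithm: the number of times $\log_2$ must be applied before the value drops to at most 1. A quick sketch (our illustration):

        import math

        def log_star(n: float) -> int:
            """Iterated logarithm: how many applications of log2 are
            needed to bring n down to at most 1."""
            count = 0
            while n > 1:
                n = math.log2(n)
                count += 1
            return count

        # 2^100 -> 100 -> ~6.64 -> ~2.73 -> ~1.45 -> ~0.53, so log*(2^100) = 5;
        # even log*(2^65536) is only 5.
        print(log_star(2.0 ** 100))  # 5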

  • From Standard to Adaptive Hardness And Composable Security With No Set-Up Authors: Ran Canetti and Huijia Lin and Rafael Pass
    We construct the first general secure computation protocols that require no trusted infrastructure other than authenticated communication, and that satisfy a meaningful notion of security that is preserved under universal composition---\emph{assuming only the existence of enhanced trapdoor permutations}. The notion of security is a generalization of the ``angel-based'' notion of Prabhakaran and Sahai (STOC'04) and implies super-polynomial time simulation security.

    A key element in our construction is a new notion of security for commitment schemes. The new notion, security against chosen-commitment attacks (CCA security), means that security holds even if the attacker has access to a \emph{decommitment oracle}. This notion is stronger than concurrent non-malleability and is of independent interest. Our main technical contribution is constructing CCA-secure commitments based on standard one-way functions, and with no trusted set-up. This provides a construction of a primitive whose \emph{adaptive hardness} can be based on standard assumptions without set-up.

  • On the Computational Complexity of Coin Flipping Authors: Hemanta K. Maji and Manoj Prabhakaran and Amit Sahai and Dan Schreiber
    Coin flipping is one of the most fundamental tasks in cryptographic protocol design. Informally, a coin flipping protocol should guarantee both (1) Completeness: an honest execution of the protocol by both parties results in a fair coin toss, and (2) Security: a cheating party cannot increase the probability of its desired outcome by any significant amount. Since its introduction by Blum~\cite{Blum82}, coin flipping has occupied a central place in the theory of cryptographic protocols. In this paper, we explore the implications of the existence of secure coin flipping protocols for complexity theory. As exposited recently by Impagliazzo~\cite{Impagliazzo09talk}, surprisingly little is known about this question.

    Previous work has shown that if we interpret the Security property of coin flipping protocols very strongly, namely that nothing beyond a negligible bias by cheating parties is allowed, then one-way functions must exist~\cite{ImpagliazzoLu89}. However, for even a slight weakening of this security property (for example, that cheating parties cannot bias the outcome by any additive constant $\epsilon > 0$), the only complexity-theoretic implication that was known was that $PSPACE \not\subseteq BPP$.

    We put forward a new attack to establish our main result, which shows that, informally speaking, the existence of any (weak) coin flipping protocol that prevents a cheating adversary from biasing the output by more than $\frac{1}{4} - \epsilon$ implies that $NP \not\subseteq BPP$. Furthermore, for constant-round protocols, we show that the existence of any (weak) coin flipping protocol that allows an honest party to maintain any noticeable chance of prevailing against a cheating party implies the existence of (infinitely often) one-way functions.
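
    Stated in symbols (our paraphrase of the main result): writing $c$ for the protocol's output coin and measuring a cheating strategy $A^*$'s bias as $|\Pr[c = 1] - \frac{1}{2}|$, the theorem reads

        \[ \exists\, \text{protocol s.t. } \max_{\text{PPT } A^*} \left| \Pr[c = 1] - \tfrac{1}{2} \right| \le \tfrac{1}{4} - \epsilon \;\Longrightarrow\; NP \not\subseteq BPP. \]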

  • Sequential Rationality in Cryptographic Protocols Authors: Ronen Gradwohl and Noam Livne and Alon Rosen
    Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers.

    We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable ``empty threats'' of non-sequential solution concepts.

    To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibrium (Dodis-Halevi-Rabin, Crypto'00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.