%0 Book Section %B Theory of Cryptography %D 2014 %T Can Optimally-Fair Coin Tossing Be Based on One-Way Functions? %A Dana Dachman-Soled %A Mahmoody, Mohammad %A Malkin, Tal %E Lindell, Yehuda %K Algorithm Analysis and Problem Complexity %K black-box separations %K Coin-Tossing %K Computation by Abstract Devices %K Data Encryption %K Discrete Mathematics in Computer Science %K One-Way Functions %K Systems and Data Security %X Coin tossing is a basic cryptographic task that allows two distrustful parties to obtain an unbiased random bit in a way that neither party can bias the output by deviating from the protocol or halting the execution. Cleve [STOC’86] showed that in any r-round coin tossing protocol one of the parties can bias the output by Ω(1/r) through a “fail-stop” attack; namely, they simply execute the protocol honestly and halt at some chosen point. In addition, relying on an earlier work of Blum [COMPCON’82], Cleve presented an r-round protocol based on one-way functions that is resilient to bias at most O(1/√r). Cleve’s work left open whether “optimally-fair” coin tossing (i.e., achieving bias O(1/r) in r rounds) is possible. Recently, Moran, Naor, and Segev [TCC’09] showed how to construct optimally-fair coin tossing based on oblivious transfer; however, it was left open to find the minimal assumptions necessary for optimally-fair coin tossing. The work of Dachman-Soled et al. [TCC’11] took a step toward answering this question by showing that any black-box construction of optimally-fair coin tossing based on one-way functions with n-bit input and output needs Ω(n/log n) rounds. In this work we take another step towards understanding the complexity of optimally-fair coin tossing by showing that this task (with an arbitrary number of rounds) cannot be based on one-way functions in a black-box way, as long as the protocol is “oblivious” to the implementation of the one-way function. Namely, we consider a natural class of black-box constructions based on one-way functions, called function oblivious, in which the output of the protocol does not depend on the specific implementation of the one-way function and only depends on the randomness of the parties. Other than being a natural notion on its own, the known coin tossing protocols of Blum and Cleve (both based on one-way functions) are indeed function oblivious. Thus, we believe our lower bound for function-oblivious constructions is a meaningful step towards resolving the fundamental open question of the complexity of optimally-fair coin tossing. %B Theory of Cryptography %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 217 - 239 %8 2014/01/01/ %@ 978-3-642-54241-1, 978-3-642-54242-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-54242-8_10 %0 Book Section %B Advances in Cryptology - ASIACRYPT 2013 %D 2013 %T Adaptive and Concurrent Secure Computation from New Adaptive, Non-malleable Commitments %A Dana Dachman-Soled %A Malkin, Tal %A Raykova, Mariana %A Venkitasubramaniam, Muthuramakrishnan %E Sako, Kazue %E Sarkar, Palash %K Algorithm Analysis and Problem Complexity %K Applications of Mathematics %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We present a unified approach for obtaining general secure computation that achieves adaptive Universally Composable (UC) security.
Using our approach, we essentially obtain all previous results on adaptive concurrent secure computation, both in relaxed models (e.g., quasi-polynomial time simulation) and in trusted setup models (e.g., the CRS model, the imperfect CRS model). This provides conceptual simplicity and insight into what is required for adaptive and concurrent security, as well as yielding improvements to set-up assumptions and/or computational assumptions in known models. Additionally, we provide the first constructions of concurrent secure computation protocols that are adaptively secure in the timing model and in the non-uniform simulation model. As a corollary, we also obtain the first adaptively secure multiparty computation protocol in the plain model that is secure under bounded concurrency. Conceptually, our approach can be viewed as an adaptive analogue to the recent work of Lin, Pass and Venkitasubramaniam [STOC ’09], who considered only non-adaptive adversaries. Their main insight was that the non-malleability requirement could be decoupled from the simulation requirement to achieve UC-security. A main conceptual contribution of this work is, quite surprisingly, that this is still the case even when considering adaptive security. A key element in our construction is a commitment scheme that satisfies a strong definition of non-malleability. Our new primitive of concurrent equivocal non-malleable commitments, intuitively, guarantees that even when a man-in-the-middle adversary observes concurrent equivocal commitments and decommitments, the binding property of the commitments continues to hold for commitments made by the adversary. This definition is stronger than previous ones, and may be of independent interest. Previous constructions that satisfy our definition have been constructed in setup models, but either require the existence of stronger encryption schemes such as CCA-secure encryption or require independent “trapdoors” provided by the setup for every pair of parties to ensure non-malleability. A main technical contribution of this work is to provide a construction that eliminates these requirements and requires only a single trapdoor. %B Advances in Cryptology - ASIACRYPT 2013 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 316 - 336 %8 2013/01/01/ %@ 978-3-642-42032-0, 978-3-642-42033-7 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-42033-7_17 %0 Book Section %B Advances in Cryptology – EUROCRYPT 2013 %D 2013 %T Streaming Authenticated Data Structures %A Charalampos Papamanthou %A Shi, Elaine %A Tamassia, Roberto %A Yi, Ke %E Johansson, Thomas %E Nguyen, Phong Q. %K Algorithm Analysis and Problem Complexity %K Data Encryption %K Discrete Mathematics in Computer Science %K Systems and Data Security %X We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x_1, x_2, …, x_n and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a cryptographic proof to be used for verifying the correctness of the output. Due to the nature of the streaming setting, the verifier can only keep small local state (e.g., logarithmic), which must be updatable in a streaming manner and with no interaction with the prover. Such constraints make the problem particularly challenging and rule out applying existing verifiable computation schemes.
We propose streaming authenticated data structures, a model that enables efficient verification of data structure queries on a stream. Compared to previous work, we achieve an exponential improvement in the prover’s running time: While previous solutions have linear prover complexity (in the size of the stream), even for queries executing in sublinear time (e.g., set membership), we propose a scheme with O(log M log n) prover complexity, where n is the size of the stream and M is the size of the universe of elements. Our schemes support a series of expressive queries, such as (non-)membership, successor, range search and frequency queries, over an ordered universe and even in higher dimensions. The central idea of our construction is a new authentication tree, called the generalized hash tree. We instantiate our generalized hash tree with a hash function based on lattice assumptions, showing that it enjoys suitable algebraic properties that traditional Merkle trees lack. We exploit such properties to achieve our results. %B Advances in Cryptology – EUROCRYPT 2013 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 353 - 370 %8 2013/01/01/ %@ 978-3-642-38347-2, 978-3-642-38348-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-38348-9_22 %0 Book Section %B Public Key Cryptography – PKC 2012 %D 2012 %T Efficient Password Authenticated Key Exchange via Oblivious Transfer %A Canetti, Ran %A Dana Dachman-Soled %A Vaikuntanathan, Vinod %A Wee, Hoeteck %E Fischlin, Marc %E Buchmann, Johannes %E Manulis, Mark %K adaptive security %K Algorithm Analysis and Problem Complexity %K Computer Communication Networks %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K oblivious transfer %K Password Authenticated Key Exchange %K search assumptions %K Systems and Data Security %K UC security %X We present a new framework for constructing efficient password authenticated key exchange (PAKE) protocols based on oblivious transfer (OT). Using this framework, we obtain: (i) an efficient and simple UC-secure PAKE protocol that is secure against adaptive corruptions without erasures, and (ii) efficient and simple PAKE protocols under the Computational Diffie-Hellman (CDH) assumption and the hardness of factoring. (Previous efficient constructions rely on hash proof systems, which appear to be inherently limited to decisional assumptions.) All of our constructions assume a common reference string (CRS) but do not rely on random oracles. %B Public Key Cryptography – PKC 2012 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 449 - 466 %8 2012/01/01/ %@ 978-3-642-30056-1, 978-3-642-30057-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-30057-8_27 %0 Book Section %B Advances in Cryptology – CRYPTO 2012 %D 2012 %T Securing Circuits against Constant-Rate Tampering %A Dana Dachman-Soled %A Kalai, Yael Tauman %E Safavi-Naini, Reihaneh %E Canetti, Ran %K circuit compiler %K Computer Communication Networks %K computers and society %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K PCP of proximity %K side-channel attacks %K Systems and Data Security %K tampering %X We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al.
(Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or to 1, or may toggle the wire. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information theoretic. %B Advances in Cryptology – CRYPTO 2012 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 533 - 551 %8 2012/01/01/ %@ 978-3-642-32008-8, 978-3-642-32009-5 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-32009-5_31 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %D 2011 %T A Canonical Form for Testing Boolean Function Properties %A Dana Dachman-Soled %A Servedio, Rocco A. %E Goldberg, Leslie Ann %E Jansen, Klaus %E Ravi, R. %E Rolim, José D. P. %K Algorithm Analysis and Problem Complexity %K Boolean functions %K Computation by Abstract Devices %K Computer Communication Networks %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %K property testing %X In a well-known result, Goldreich and Trevisan (2003) showed that every testable graph property has a “canonical” tester in which a set of vertices is selected at random and the edges queried are the complete graph over the selected vertices. We define a similar-in-spirit canonical form for Boolean function testing algorithms, and show that under some mild conditions property testers for Boolean functions can be transformed into this canonical form. Our first main result shows, roughly speaking, that every “nice” family of Boolean functions that has low noise sensitivity and is testable by an “independent tester” has a canonical testing algorithm. Our second main result is similar but holds instead for families of Boolean functions that are closed under ID-negative minors. Taken together, these two results cover almost all of the constant-query Boolean function testing algorithms that we know of in the literature, and show that all of these testing algorithms can be automatically converted into a canonical form. %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 460 - 471 %8 2011/01/01/ %@ 978-3-642-22934-3, 978-3-642-22935-0 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-22935-0_39 %0 Book Section %B Advances in Cryptology – CRYPTO 2011 %D 2011 %T Optimal Verification of Operations on Dynamic Sets %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %E Rogaway, Phillip %K Computer Communication Networks %K computers and society %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We study the design of protocols for set-operation verification, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present new authenticated data structures that allow any entity to publicly verify a proof attesting to the correctness of primitive set operations such as intersection, union, subset and set difference.
Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called accumulation tree, our protocols achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), while incurring no extra asymptotic space overhead. The proof construction is also efficient, adding a logarithmic overhead to the computation of the answer of a set-operation query. In contrast, existing schemes entail high communication and verification costs or high storage costs. Applications of interest include efficient verification of keyword search and database queries. The security of our protocols is based on the bilinear q-strong Diffie-Hellman assumption. %B Advances in Cryptology – CRYPTO 2011 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 91 - 110 %8 2011/01/01/ %@ 978-3-642-22791-2, 978-3-642-22792-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-22792-9_6 %0 Book Section %B Applied Cryptography and Network Security %D 2011 %T Secure Efficient Multiparty Computing of Multivariate Polynomials and Applications %A Dana Dachman-Soled %A Malkin, Tal %A Raykova, Mariana %A Yung, Moti %E Lopez, Javier %E Tsudik, Gene %K additive homomorphic encryption %K Algorithm Analysis and Problem Complexity %K Computer Communication Networks %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K multiparty set intersection %K multivariate polynomial evaluation %K secret sharing %K secure multiparty computation %K Systems and Data Security %K threshold cryptosystems %X We present a robust secure methodology for computing functions that are represented as multivariate polynomials where parties hold different variables as private inputs. Our generic efficient protocols are fully black-box and employ threshold additive homomorphic encryption; they do not assume honest majority, yet are robust in detecting any misbehavior. We achieve solutions that take advantage of the algebraic structure of the polynomials, and are polynomial-time in all parameters (security parameter, polynomial size, polynomial degree, number of parties). We further exploit a “round table” communication paradigm to reduce the complexity in the number of parties. A large collection of problems are naturally and efficiently represented as multivariate polynomials over a field or a ring: problems from linear algebra, statistics, logic, as well as operations on sets represented as polynomials. In particular, we present a new efficient solution to the multi-party set intersection problem, and a solution to a multi-party variant of the polynomial reconstruction problem. 
%B Applied Cryptography and Network Security %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 130 - 146 %8 2011/01/01/ %@ 978-3-642-21553-7, 978-3-642-21554-4 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-21554-4_8 %0 Book Section %B Pairing-Based Cryptography - Pairing 2010 %D 2010 %T Optimal Authenticated Data Structures with Multilinear Forms %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %E Joye, Marc %E Miyaji, Atsuko %E Otsuka, Akira %K Algorithm Analysis and Problem Complexity %K authenticated dictionary %K Coding and Information Theory %K Computer Communication Networks %K Data Encryption %K Discrete Mathematics in Computer Science %K multilinear forms %K Systems and Data Security %X Cloud computing and cloud storage are becoming increasingly prevalent. In this paradigm, clients outsource their data and computations to third-party service providers. Data integrity in the cloud therefore becomes an important factor for the functionality of these web services. Authenticated data structures, implemented with various cryptographic primitives, have been widely studied as a means of providing efficient solutions to data integrity problems (e.g., Merkle trees). In this paper, we introduce a new authenticated dictionary data structure that employs multilinear forms, a cryptographic primitive proposed by Silverberg and Boneh in 2003 [10], the construction of which, however, remains an open problem to date. Our authenticated dictionary is optimal, that is, it does not add any extra asymptotic cost to the plain dictionary data structure, yielding proofs of constant size, i.e., asymptotically equal to the size of the answer, while maintaining other relevant complexities logarithmic. Instead, solutions based on cryptographic hashing (e.g., Merkle trees) require proofs of logarithmic size [40]. Because multilinear forms are not known to exist yet, our result can be viewed from a different angle: if one could prove that optimal authenticated dictionaries cannot exist in the computational model, irrespective of cryptographic primitives, then our solution would imply that cryptographically interesting multilinear form generators cannot exist as well (i.e., it can be viewed as a reduction). Thus, we provide an alternative avenue towards proving the nonexistence of multilinear form generators in the context of general lower bounds for authenticated data structures [40] and for memory checking [18], a model similar to the authenticated data structures model. %B Pairing-Based Cryptography - Pairing 2010 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 246 - 264 %8 2010/01/01/ %@ 978-3-642-17454-4, 978-3-642-17455-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-17455-1_16 %0 Book Section %B Graph Drawing %D 2009 %T Graph Drawing for Security Visualization %A Tamassia, Roberto %A Palazzi, Bernardo %A Charalampos Papamanthou %E Tollis, Ioannis G. %E Patrignani, Maurizio %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X With the number of devices connected to the internet growing rapidly and software systems being increasingly deployed on the web, security and privacy have become crucial properties for networks and applications. Due to the complexity and subtlety of cryptographic methods and protocols, software architects and developers often fail to incorporate security principles in their designs and implementations.
Also, most users have minimal understanding of security threats. While several tools for developers, system administrators and security analysts are available, these tools typically provide information in the form of textual logs or tables, which are cumbersome to analyze. Thus, in recent years, the field of security visualization has emerged to provide novel ways to display security-related information so that it is easier to understand. In this work, we give a preliminary survey of approaches to the visualization of computer security concepts that use graph drawing techniques. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 2 - 13 %8 2009/01/01/ %@ 978-3-642-00218-2, 978-3-642-00219-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-00219-9_2 %0 Book Section %B Advances in Cryptology – ASIACRYPT 2009 %D 2009 %T Improved Non-committing Encryption with Applications to Adaptively Secure Protocols %A Choi, Seung Geol %A Dana Dachman-Soled %A Malkin, Tal %A Wee, Hoeteck %E Matsui, Mitsuru %K adaptive corruption %K Algorithm Analysis and Problem Complexity %K Applications of Mathematics %K Data Encryption %K Data Structures, Cryptology and Information Theory %K Discrete Mathematics in Computer Science %K non-committing encryption %K public-key encryption %K secure multi-party computation %K Systems and Data Security %X We present a new construction of non-committing encryption schemes. Unlike the previous constructions of Canetti et al. (STOC ’96) and of Damgård and Nielsen (Crypto ’00), our construction achieves all of the following properties: Optimal round complexity. Our encryption scheme is a 2-round protocol, matching the round complexity of Canetti et al. and improving upon that in Damgård and Nielsen. Weaker assumptions. Our construction is based on trapdoor simulatable cryptosystems, a new primitive that we introduce as a relaxation of those used in previous works. We also show how to realize this primitive based on hardness of factoring. Improved efficiency. The amortized complexity of encrypting a single bit is O(1) public key operations on a constant-sized plaintext in the underlying cryptosystem. As a result, we obtain the first non-committing public-key encryption schemes under hardness of factoring and worst-case lattice assumptions; previously, such schemes were only known under the CDH and RSA assumptions. Combined with existing work on secure multi-party computation, we obtain protocols for multi-party computation secure against a malicious adversary that may adaptively corrupt an arbitrary number of parties under weaker assumptions than were previously known. Specifically, we obtain the first adaptively secure multi-party protocols based on hardness of factoring in both the stand-alone setting and the UC setting with a common reference string. 
%B Advances in Cryptology – ASIACRYPT 2009 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 287 - 302 %8 2009/01/01/ %@ 978-3-642-10365-0, 978-3-642-10366-7 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-10366-7_17 %0 Book Section %B Theory of Cryptography %D 2009 %T Simple, Black-Box Constructions of Adaptively Secure Protocols %A Choi, Seung Geol %A Dana Dachman-Soled %A Malkin, Tal %A Wee, Hoeteck %E Reingold, Omer %K Algorithm Analysis and Problem Complexity %K computers and society %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We present a compiler for transforming an oblivious transfer (OT) protocol secure against an adaptive semi-honest adversary into one that is secure against an adaptive malicious adversary. Our compiler achieves security in the universal composability framework, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity. As a corollary, we obtain the first constructions of adaptively secure protocols in the stand-alone model using black-box access to a low-level primitive. %B Theory of Cryptography %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 387 - 402 %8 2009/01/01/ %@ 978-3-642-00456-8, 978-3-642-00457-5 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-00457-5_23 %0 Book Section %B Algorithmic Aspects of Wireless Sensor Networks %D 2008 %T Algorithms for Location Estimation Based on RSSI Sampling %A Charalampos Papamanthou %A Preparata, Franco P. %A Tamassia, Roberto %E Fekete, Sándor P. %K Algorithm Analysis and Problem Complexity %K Computer Communication Networks %K Data structures %K Discrete Mathematics in Computer Science %K Information Systems and Communication Service %X In this paper, we re-examine the RSSI measurement model for location estimation and provide the first detailed formulation of the probability distribution of the position of a sensor node. We also show how to use this probabilistic model to efficiently compute a good estimation of the position of the sensor node by sampling multiple readings from the beacons (where we do not merely use the mean of the samples) and then minimizing a function with an acceptable computational effort. The results of the simulation of our method in TOSSIM indicate that the location of the sensor node can be computed in a small amount of time and that the quality of the solution is competitive with previous approaches. 
%B Algorithmic Aspects of Wireless Sensor Networks %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 72 - 86 %8 2008/01/01/ %@ 978-3-540-92861-4, 978-3-540-92862-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-92862-1_7 %0 Book Section %B Theory of Cryptography %D 2008 %T Black-Box Construction of a Non-malleable Encryption Scheme from Any Semantically Secure One %A Choi, Seung Geol %A Dana Dachman-Soled %A Malkin, Tal %A Wee, Hoeteck %E Canetti, Ran %K Algorithm Analysis and Problem Complexity %K black-box constructions %K computers and society %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K non-malleability %K public-key encryption %K semantic security %K Systems and Data Security %X We show how to transform any semantically secure encryption scheme into a non-malleable one, with a black-box construction that achieves a quasi-linear blow-up in the size of the ciphertext. This improves upon the previous non-black-box construction of Pass, Shelat and Vaikuntanathan (Crypto ’06). Our construction also extends readily to guarantee non-malleability under a bounded-CCA2 attack, thereby simultaneously improving on both results in the work of Cramer et al. (Asiacrypt ’07). Our construction departs from the oft-used paradigm of re-encrypting the same message with different keys and then proving consistency of encryptions; instead, we encrypt an encoding of the message with certain locally testable and self-correcting properties. We exploit the fact that low-degree polynomials are simultaneously good error-correcting codes and a secret-sharing scheme. %B Theory of Cryptography %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 427 - 444 %8 2008/01/01/ %@ 978-3-540-78523-1, 978-3-540-78524-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-78524-8_24 %0 Book Section %B Automata, Languages and Programming %D 2008 %T Optimal Cryptographic Hardness of Learning Monotone Functions %A Dana Dachman-Soled %A Lee, Homin K. %A Malkin, Tal %A Servedio, Rocco A. %A Wan, Andrew %A Wee, Hoeteck %E Aceto, Luca %E Damgård, Ivan %E Goldberg, Leslie Ann %E Halldórsson, Magnús M. %E Ingólfsdóttir, Anna %E Walukiewicz, Igor %K Data structures %K Data Structures, Cryptology and Information Theory %K Discrete Mathematics in Computer Science %K Numeric Computing %K Software Engineering/Programming and Operating Systems %K Theory of computation %X A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong computational limitations on the efficient learnability of various classes of monotone functions. We give several such hardness results which are provably almost optimal since they nearly match known positive results. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than 1/2 + 1/√n; this accuracy bound is close to optimal by known positive results (Blum et al., FOCS ’98).
Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation ’04). Our main tool is a complexity-theoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O’Donnell (JCSS ’04). %B Automata, Languages and Programming %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 36 - 47 %8 2008/01/01/ %@ 978-3-540-70574-1, 978-3-540-70575-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-70575-8_4 %0 Book Section %B Experimental Algorithms %D 2007 %T On the Cost of Persistence and Authentication in Skip Lists %A Goodrich, Michael T. %A Charalampos Papamanthou %A Tamassia, Roberto %E Demetrescu, Camil %K Algorithm Analysis and Problem Complexity %K algorithms %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %K Numeric Computing %X We present an extensive experimental study of authenticated data structures for dictionaries and maps implemented with skip lists. We consider realizations of these data structures that allow us to study the performance overhead of authentication and persistence. We explore various design decisions and analyze the impact of garbage collection and virtual memory paging, as well. Our empirical study confirms the efficiency of authenticated skip lists and offers guidelines for incorporating them in various applications. %B Experimental Algorithms %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 94 - 107 %8 2007/01/01/ %@ 978-3-540-72844-3, 978-3-540-72845-0 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-72845-0_8 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %D 2007 %T Distribution-Free Testing Lower Bounds for Basic Boolean Functions %A Dana Dachman-Soled %A Servedio, Rocco A. %E Charikar, Moses %E Jansen, Klaus %E Reingold, Omer %E Rolim, José D. P. %K Algorithm Analysis and Problem Complexity %K Discrete Mathematics in Computer Science %K Numeric Computing %X In the distribution-free property testing model, the distance between functions is measured with respect to an arbitrary and unknown probability distribution D over the input domain. We consider distribution-free testing of several basic Boolean function classes over {0,1}^n, namely monotone conjunctions, general conjunctions, decision lists, and linear threshold functions. We prove that for each of these function classes, Ω((n/log n)^{1/5}) oracle calls are required for any distribution-free testing algorithm. Since each of these function classes is known to be distribution-free properly learnable (and hence testable) using Θ(n) oracle calls, our lower bounds are within a polynomial factor of the best possible. %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 494 - 508 %8 2007/01/01/ %@ 978-3-540-74207-4, 978-3-540-74208-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-74208-1_36 %0 Book Section %B Graph Drawing %D 2007 %T Parameterized st-Orientations of Graphs: Algorithms and Experiments %A Charalampos Papamanthou %A Tollis, Ioannis G.
%E Kaufmann, Michael %E Wagner, Dorothea %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X st-orientations (st-numberings) or bipolar orientations of undirected graphs are central to many graph algorithms and applications. Several algorithms have been proposed in the past to compute an st-orientation of a biconnected graph. However, as indicated in [1], the computation of more than one st-orientation is very important for many applications in multiple research areas, such as that of Graph Drawing. In this paper, we show how to compute such orientations with certain (parameterized) characteristics in the final st-oriented graph, such as the length of the longest path. Apart from Graph Drawing, this work applies in other areas such as Network Routing and in tackling difficult problems such as Graph Coloring and Longest Path. We present primary approaches to the problem of computing longest path parameterized st-orientations of graphs, an analytical presentation (together with proof of correctness) of a new O(m log^5 n) (O(m log n) for planar graphs) time algorithm that computes such orientations (and which was used in [1]) and extensive computational results that reveal the robustness of the algorithm. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 220 - 233 %8 2007/01/01/ %@ 978-3-540-70903-9, 978-3-540-70904-6 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-70904-6_22 %0 Book Section %B Graph Drawing %D 2006 %T Applications of Parameterized st-Orientations in Graph Drawing Algorithms %A Charalampos Papamanthou %A Tollis, Ioannis G. %E Healy, Patrick %E Nikolov, Nikola S. %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X Many graph drawing algorithms use st-numberings (st-orientations or bipolar orientations) as a first step. An st-numbering of a biconnected undirected graph defines a directed acyclic graph with a single source s and a single sink t. As there exist exponentially many st-numberings that correspond to a certain undirected graph G, using different st-numberings in various graph drawing algorithms can result in aesthetically different drawings with different area bounds. In this paper, we present results concerning new algorithms for parameterized st-orientations, their impact on graph drawing algorithms and especially on visibility representations. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 355 - 367 %8 2006/01/01/ %@ 978-3-540-31425-7, 978-3-540-31667-1 %G eng %U http://link.springer.com/chapter/10.1007/11618058_32 %0 Book Section %B Graph Drawing %D 2005 %T 3D Visualization of Semantic Metadata Models and Ontologies %A Charalampos Papamanthou %A Tollis, Ioannis G. %A Doerr, Martin %E Pach, János %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X We propose an algorithm for the 3D visualization of general ontology models used in many applications, such as the semantic web, entity-relationship diagrams and other database models. The visualization places entities in 3D space. Previous techniques produce drawings that are 2-dimensional, which are often complicated and hard to comprehend.
Our technique uses the third dimension almost exclusively for the display of the isa relationships (links), while the property relationships (links) are placed on some layer (plane). Thus the semantic difference between isa links (which should be as vertical as possible) and property links (which should be as horizontal as possible) is emphasized. Special reference is made to a particular model, the CIDOC Conceptual Reference Model. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 377 - 388 %8 2005/01/01/ %@ 978-3-540-24528-5, 978-3-540-31843-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-31843-9_38