Mike Rosulek
I'm an assistant professor of Computer Science at Oregon State University, studying cryptography.
Many people understand that encryption can be used to protect data at rest (e.g., stored on a hard drive) or data in transit (e.g., between a website and browser, using HTTPS). It is less well-known that cryptography can protect data in use. Using cryptographic tools known as secure computation protocols, it is possible to perform arbitrary computations on sensitive data without actually seeing the data. My research is on practical and theoretical aspects of secure computation protocols.
Contact
 rosulekm@eecs.oregonstate.edu
 KEC 3063
 PGP key, fingerprint 4E37 C45E F109 1308 0D5F 817F 7181 A58F DE12 5D6B
Students
Current students:
 Brent Carmer (PhD)
 Peter Rindal (PhD)
 Ni Trieu (PhD)
 Morgan Shirley (MS)
Teaching
In Fall 2016 I will be teaching CS 321 (Introduction to Theory of Computation).
I'm writing a free undergraduate textbook called The Joy of Cryptography.
Publications
click title for more info
Also: DBLP, Google scholar
 Improved Private Set Intersection against Malicious Adversaries

Manuscript 2016.
Links: preprint
Abstract: Private set intersection (PSI) refers to a special case of secure two-party computation in which the parties each have a set of items and compute the intersection of these sets without revealing any additional information. In this paper we present improvements to practical PSI providing security in the presence of malicious adversaries.
Our starting point is the protocol of Dong, Chen & Wen (CCS 2013) that is based on Bloom filters. We identify a bug in their malicious-secure variant and show how to fix it using a cut-and-choose approach that has low overhead while simultaneously avoiding one of the main computational bottlenecks in their original protocol. We also point out some subtleties that arise when using Bloom filters in malicious-secure cryptographic protocols.
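As background for readers unfamiliar with the data structure this line of work builds on, here is a minimal Bloom filter sketch (illustration only, not the paper's protocol; the sizes and hash count are arbitrary choices):

```python
import hashlib

class BloomFilter:
    """Basic Bloom filter: set membership with false positives but no false negatives."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions by hashing (index, item).
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}|{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def contains(self, item):
        # True for every inserted item; occasionally true for others (false positive).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for word in ["alice", "bob"]:
    bf.insert(word)
assert bf.contains("alice") and bf.contains("bob")
```

The "no false negatives, rare false positives" behavior is exactly what makes Bloom filters attractive for set-intersection protocols, and also the source of the subtleties the paper discusses when the filter is built by an adversary.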
We have implemented our PSI protocols and report on their performance. Our improvements reduce the cost of Dong et al.'s protocol by a factor of 8-75x on a single thread. For instance, our protocol has an online time of 14 seconds and an overall time of 3.3 minutes to securely compute the intersection of two sets of 1 million items each.

 Efficient Batched Oblivious PRF with Applications to Private Set Intersection

Links: author version
Abstract: We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semi-honest adversaries. In an OPRF protocol a receiver has an input $r$; the sender gets output $s$ and the receiver gets output $F(s,r)$, where $F$ is a pseudorandom function and $s$ is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize $m$ OPRF instances is roughly the cost to realize $3.5m$ instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension).
We explore in detail our protocol's application to semi-honest secure private set intersection (PSI). The fastest state-of-the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.0-3.2x faster than Pinkas et al. for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 4.6 seconds to securely compute the intersection of $2^{20}$-size sets, regardless of the bit-length of the items. For very large sets, our protocol is only 5.2x slower than the insecure naive hashing approach for PSI.

 Garbling Gadgets for Boolean and Arithmetic Circuits

Links: coming soon
Abstract: We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art, for several natural kinds of computations.
For arithmetic circuits over the integers, our construction results in garbled circuits with free addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an exponential improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in.
Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under similar assumptions to that of the free-XOR garbling scheme (Kolesnikov & Schneider, ICALP 2008). We give an extensive comparison between our scheme and state-of-the-art garbling schemes applied to boolean circuits.

 Secure Data Exchange: A Marketplace in the Cloud

Manuscript 2016.
Links: preprint
Abstract: A vast amount of data belonging to companies and individuals is currently stored in the cloud in encrypted form by trustworthy service providers such as Microsoft, Amazon, and Google. Unfortunately, the only way for the cloud to use the data in computations is to first decrypt it, then compute on it, and finally re-encrypt it, resulting in a problematic tradeoff between value/utility and security. At a high level, our goal in this paper is to present a general and practical cryptographic solution to this dilemma.
More precisely, we describe a scenario that we call Secure Data Exchange (SDE), where several data owners are storing private encrypted data in a semi-honest, non-colluding cloud, and an evaluator (a third party) wishes to engage in a secure function evaluation on the data belonging to some subset of the data owners. We require that none of the parties involved learns anything beyond what they already know and what is revealed by the function, even when the parties (except the cloud) are actively malicious. We also recognize the ubiquity of scenarios where the lack of an efficient SDE protocol prevents, for example, business transactions, research collaborations, or mutually beneficial computations on aggregated private data from taking place, and we discuss several such scenarios in detail.
Our main result is an efficient and practical protocol for enabling SDE using Secure Multi-Party Computation (MPC) in a novel adaptation of the server-aided setting. We also present the details of an implementation along with performance numbers.

 Linicrypt: A Model for Practical Cryptography

Links: full version • code
Abstract: A wide variety of objectively practical cryptographic schemes can be constructed using only symmetric-key operations and linear operations. To formally study this restricted class of cryptographic algorithms, we present a new model called Linicrypt. A Linicrypt program has access to a random oracle whose inputs and outputs are field elements, and otherwise manipulates data only via fixed linear combinations.
Our main technical result is that it is possible to decide in polynomial time whether two given Linicrypt programs induce computationally indistinguishable distributions (against arbitrary PPT adversaries, in the random oracle model).
We show also that indistinguishability of Linicrypt programs can be expressed as an existential formula, making the model amenable to automated program synthesis. In other words, it is possible to use a SAT/SMT solver to automatically generate Linicrypt programs satisfying a given security constraint. Interestingly, the properties of Linicrypt imply that this synthesis approach is both sound and complete. We demonstrate this approach by synthesizing Linicrypt constructions of garbled circuits.

 Faster Malicious 2-Party Secure Computation with Online/Offline Dual Execution

Links: preprint • seminar slides
Abstract: We describe a highly optimized protocol for general-purpose secure two-party computation (2PC) in the presence of malicious adversaries. Our starting point is a protocol of Kolesnikov et al. (TCC 2015). We adapt that protocol to the online/offline setting, where two parties repeatedly evaluate the same function (on possibly different inputs each time) and perform as much of the computation as possible in an offline preprocessing phase before their inputs are known. Along the way we develop several significant simplifications and optimizations to the protocol.
We have implemented a prototype of our protocol and report on its performance. When two parties on Amazon servers in the same region use our implementation to securely evaluate the AES circuit 1024 times, the amortized cost per evaluation is 5.1ms offline + 1.3ms online. The total offline+online cost of our protocol is in fact less than the online cost of any reported protocol with malicious security. For comparison, our protocol's closest competitor (Lindell & Riva, CCS 2015) uses 74ms offline + 7ms online in an identical setup.
Our protocol can be further tuned to trade performance for leakage. As an example, the performance in the above scenario improves to 2.4ms offline + 1.0ms online if we allow an adversary to learn a single bit about the honest party's input with probability $2^{-20}$ (but not violate any other security property, e.g., correctness).

 Reconciling Non-malleability with Homomorphic Encryption

Journal of Cryptology 2016.
Links: article
Note: This journal paper combines results from the previous conference papers (listed below):
 Rerandomizable RCCA Encryption, CRYPTO 2007
 Homomorphic encryption with CCA security, ICALP 2008
 Towards robust computation on encrypted data, Asiacrypt 2008
Abstract: Homomorphic encryption schemes are useful in designing conceptually simple protocols that operate on encrypted inputs. On the other hand, non-malleable encryption schemes are vital for designing protocols with robust security against malicious parties, in a composable setting. In this paper, we address the problem of constructing public-key encryption schemes that meaningfully combine these two opposing demands. The intuitive tradeoff we desire in an encryption scheme is that anyone should be able to change encryptions of unknown messages m1, ..., mk into a (fresh) encryption of T(m1, ..., mk) for a specific set of allowed functions T, but the scheme should be otherwise "non-malleable." That is, no adversary should be able to construct a ciphertext whose value is related to that of other ciphertexts in any other way.
For the case where the allowed functions T are all unary, we formulate precise definitions that capture our intuitive requirements and show relationships among these new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify these new definitions by showing their equivalence to a natural formulation of security in the framework of Universally Composable security. Next, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations T and prove their security under the Decisional Diffie-Hellman (DDH) assumption in two groups with related sizes. Finally, we demonstrate how encryption schemes that satisfy our definitions can be used to implement conceptually simple protocols for non-trivial computation on encrypted data, which are secure against malicious adversaries in the UC framework without resorting to general-purpose multi-party computation or zero-knowledge proofs.
For the case where the allowed functions T are binary, we show that a natural generalization of our definitions is unattainable if some T is a group operation.
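The kind of controlled malleability at issue here can be seen in plain ElGamal, where anyone can rerandomize a ciphertext (a unary transformation) without the secret key. A toy sketch, using deliberately tiny, insecure parameters for readability (the paper's constructions work in large groups and add non-malleability guarantees on top):

```python
import random

# Toy ElGamal in the order-233 subgroup of Z_467^* (insecure parameters, demo only).
p = 467                      # prime with q = 233 dividing p - 1
q = 233                      # prime subgroup order
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

sk = random.randrange(1, q)  # secret key
pk = pow(g, sk, p)           # public key

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct):
    """Produce a fresh-looking encryption of the same message, using only pk."""
    c1, c2 = ct
    r = random.randrange(1, q)
    return (c1 * pow(g, r, p) % p, c2 * pow(pk, r, p) % p)

def decrypt(ct):
    c1, c2 = ct
    c1_inv_sk = pow(c1, p - 1 - sk, p)   # c1^{-sk} via Fermat's little theorem
    return c2 * c1_inv_sk % p

ct = encrypt(123)
ct2 = rerandomize(ct)
assert ct2 != ct                 # the ciphertext changed...
assert decrypt(ct2) == 123       # ...but the plaintext did not
```

Plain ElGamal is maximally malleable; the definitional challenge in the paper is permitting exactly such allowed transformations T while ruling out every other mauling.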
On the positive side, we show that if one of our security requirements is relaxed in a natural way, we can in fact obtain a scheme that is homomorphic with respect to (binary) group operations, and non-malleable otherwise.

 Fast and Secure Three-party Computation: The Garbled Circuit Approach

Links: preprint
Abstract: Many deployments of secure multi-party computation (MPC) in practice have used information-theoretic three-party protocols that tolerate a single, semi-honest corrupt party, since these protocols enjoy very high efficiency.
We propose a new approach for secure three-party computation (3PC) that improves security while maintaining practical efficiency that is competitive with traditional information-theoretic protocols. Our protocol is based on garbled circuits and provides security against a single, malicious corrupt party. Unlike information-theoretic 3PC protocols, ours uses a constant number of rounds. Our protocol uses only inexpensive symmetric-key cryptography: hash functions, block ciphers, and pseudorandom generators (in particular, no oblivious transfers), and it has performance comparable to that of Yao's (semi-honest) 2PC protocol.
We demonstrate the practicality of our protocol with an implementation based on the JustGarble framework of Bellare et al. (S&P 2013). The implementation incorporates various optimizations including the most recent techniques for efficient circuit garbling. We perform experiments on several benchmarking circuits, in different setups. Our experiments confirm that, despite providing a more demanding security guarantee, our protocol has performance comparable to existing information-theoretic 3PC.

 Efficient Zero-Knowledge Proofs of Non-Algebraic Statements with Sublinear Amortized Cost

Links: preprint
Abstract: We describe a zero-knowledge proof system in which a prover holds a large dataset $M$ and can repeatedly prove NP relations about that dataset. That is, for any (public) relation $R$ and $x$, the prover can prove that $\exists w: R(M,x,w)=1$. After an initial setup phase (which depends only on $M$), each proof requires only a constant number of rounds and has communication/computation cost proportional to that of a random-access machine (RAM) implementation of $R$, up to polylogarithmic factors. In particular, the cost per proof in many applications is sublinear in $M$. Additionally, the storage requirement between proofs for the verifier is constant.
 Two Halves Make a Whole: Reducing Data Transfer in Garbled Circuits using Half Gates

Links: preprint • proceedings ($)
Note: See below for slides from invited talk at Allerton conference.
Abstract: The well-known classical constructions of garbled circuits use four ciphertexts per gate, although various methods have been proposed to reduce this cost. The best previously known methods for optimizing AND gates (two ciphertexts; Pinkas et al., ASIACRYPT 2009) and XOR gates (zero ciphertexts; Kolesnikov & Schneider, ICALP 2008) were incompatible, so most implementations used the best known method compatible with free-XOR gates (three ciphertexts; Kolesnikov & Schneider, ICALP 2008).
In this work we show how to simultaneously garble AND gates using two ciphertexts and XOR gates using zero ciphertexts, resulting in smaller garbled circuits than any prior scheme. The main idea behind our construction is to break an AND gate into two half-gates: AND gates for which one party knows one input. Each half-gate can be garbled with a single ciphertext, so our construction uses two ciphertexts for each AND gate while being compatible with free-XOR gates. The price for the reduction in size is that the evaluator must perform two cryptographic operations per AND gate, rather than one as in previous schemes.
We experimentally demonstrate that our garbling scheme leads to an overall decrease in time (up to 25%), bandwidth (up to 33%), and energy use (up to 20%) over several benchmark applications. We also initiate a study of lower bounds for garbled gate size, and show that our construction is optimal for a large class of garbling schemes encompassing all known practical garbling techniques.

 How to Efficiently Evaluate RAM Programs with Malicious Security

Abstract: Secure 2-party computation (2PC) is becoming practical for some applications. However, most approaches are limited by the fact that the desired functionality must be represented as a boolean circuit. In response, random-access machines (RAM programs) have recently been investigated as a promising alternative representation.
In this work, we present the first practical protocols for evaluating RAM programs with security against malicious adversaries. A useful efficiency measure is to divide the cost of malicious-secure evaluation of $f$ by the cost of semi-honest-secure evaluation of $f$. Our RAM protocols achieve ratios matching the state of the art for circuit-based 2PC. For statistical security $2^{-s}$, our protocol without preprocessing achieves a ratio of $s$; our online/offline protocol has a preprocessing phase and achieves an online ratio of $\sim 2s/\log T$, where $T$ is the total execution time of the RAM program.
To summarize, our solutions show that the "extra overhead" of obtaining malicious security for RAM programs (beyond what is needed for circuits) is minimal and does not grow with the running time of the program.

 Richer Efficiency/Security Trade-offs in 2PC

Abstract: The dual-execution protocol of Mohassel & Franklin (PKC 2006) is a highly efficient 2PC protocol (each party garbles only one circuit) that achieves malicious security apart from leaking an arbitrary, adversarially chosen predicate about the honest party's input. We present two practical and orthogonal approaches to improve the security of the dual-execution technique.
First, we show how to greatly restrict the predicate that an adversary can learn in the protocol, to a natural notion of "only computation leaks"-style leakage. Along the way, we identify a natural security property of garbled circuits called property-enforcing that may be of independent interest.
Second, we address a complementary direction of reducing the probability that the leakage occurs. We propose a new dual-execution protocol, with a very light cheating-detection phase and each party garbling $s+1$ circuits, in which a cheating party learns a bit with probability only $2^{-s}$. Our concrete measurements show approximately 35% reduction in communication for the AES circuit, compared to the best combination of state-of-the-art techniques for achieving the same security notion.
Combining the two results, we achieve a rich continuum of practical tradeoffs between efficiency and security, connecting the covert, dual-execution, and full-malicious guarantees.

 FleXOR: Flexible garbling for XOR gates that beats free-XOR

Abstract: Most implementations of Yao's garbled circuit approach for 2-party secure computation use the free-XOR optimization of Kolesnikov & Schneider (ICALP 2008). We introduce an alternative technique called flexible-XOR (fleXOR) that generalizes free-XOR and offers several advantages. First, fleXOR can be instantiated under a weaker hardness assumption on the underlying cipher/hash function (related-key security only, compared to related-key and circular security required for free-XOR) while maintaining most of the performance improvements that free-XOR offers. Alternatively, even though XOR gates are not always "free" in our approach, we show that the other (non-XOR) gates can be optimized more heavily than what is possible when using free-XOR. For many circuits of cryptographic interest, this can yield a significantly (over 30%) smaller garbled circuit than any other known techniques (including free-XOR) or their combinations.
 Multi-Party Computation for Polynomials and Branching Programs without Simultaneous Interaction

Links: full version
Abstract: Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.
In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks:
 Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus $N$), where the parties each hold an input $x_i$.
 Evaluating any read-once branching program over the parties' inputs.
 Characterizing the Cryptographic Properties of Reactive 2-Party Functionalities

Links: proceedings (author copy)
Abstract: In secure multi-party computation, a reactive functionality is one which maintains persistent state, takes inputs, and gives outputs over many rounds of interaction with its parties. Reactive functionalities are fundamental and model many interesting and natural cryptographic tasks; yet their security properties are not nearly as well understood as in the non-reactive case (known as secure function evaluation).
We present new combinatorial characterizations for 2-party reactive functionalities, which we model as finite automata. We characterize the functionalities that have passive-secure protocols, and those which are complete with respect to passive adversaries. Both characterizations are in the information-theoretic setting.

 A Unified Characterization of Completeness and Triviality for Secure Function Evaluation

Links: proceedings ($)
Abstract: We present unified combinatorial characterizations of completeness for 2-party secure function evaluation (SFE) against passive and active corruptions, so that all known characterizations appear as special cases.
In doing so we develop new technical concepts. We define several notions of isomorphism of SFE functionalities and define the "kernel" of an SFE functionality. An SFE functionality is then said to be "simple" if and only if it is strongly isomorphic to its kernel. An SFE functionality $F$ is a core of an SFE functionality $\hat{F}$ if it is "redundancy free" and is weakly isomorphic to $\hat{F}$. Then:
 An SFE functionality is complete for security against passive corruptions if and only if it is not simple.
 A deterministic SFE functionality is complete for security against active corruptions if and only if it has a core that is not simple. We conjecture that this characterization extends to randomized SFE as well.
Finally, we apply our new notions of isomorphism to reduce the problem of characterizing the trivial functionalities (i.e., those securely realizable without setups) for the case of general SFE to the same problem for the case of simple symmetric SFE.

 Must you know the code of f to securely compute f?

Abstract: When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit, and then they securely evaluate that circuit gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a non-black-box way from $f$ itself. As a consequence, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.
In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, must one know the code of $f$ to securely evaluate $f$?
In this work we initiate the theoretical study of this question. We show the following:
 A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without "knowing" the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without "knowing" the code of the function.
 Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without "knowing" the code of the one-way function.
 Universal Composability from Essentially Any Trusted Setup

Abstract: It is impossible to securely carry out general multi-party computation in arbitrary network contexts like the Internet, unless protocols have access to some trusted setup. In this work we classify the power of such trusted (2-party) setup functionalities. We show that nearly every setup is either useless (ideal access to the setup is equivalent to having no setup at all) or else complete (composably secure protocols for all tasks exist in the presence of the setup). We further argue that those setups which are neither complete nor useless are highly unnatural.
The main technical contribution in this work is an almost-total characterization of completeness for 2-party setups. Our characterization treats setup functionalities as black boxes, and is therefore the first work to classify completeness of arbitrary setup functionalities (i.e., randomized, reactive, and having behavior that depends on the global security parameter).

 Complexity of Multi-Party Computation Functionalities

Book chapter.
Links: publisher page • author version
Abstract: The central objects of secure multi-party computation are the "multi-party functions" (or functionalities) that it seeks to securely realize. In this article we survey a set of results that constitute a Cryptographic Complexity Theory. This theory classifies and compares multi-party functions according to their secure computability and reducibility to each other. The basic questions studied, under various notions of security and reducibility, include:
 Which functionalities are securely realizable (or are "trivial", i.e., can be reduced to any functionality)?
 Which functionalities are "complete", i.e., those to which any functionality can be reduced?
 More generally, which functionalities are reducible to which? Outside of triviality and completeness, this question is relatively less explored.
We treat separately the results on two-party functionalities, for which the cryptographic complexity is much better understood. In particular, we present unified combinatorial characterizations of completeness and triviality for secure function evaluation using notions of isomorphism and the common information functionality (called the kernel) of a given functionality. Beyond completeness and triviality, we also discuss results on general reducibility, and, in the computationally bounded setting, the connection between these reductions and computational hardness assumptions.
We briefly discuss results on reactive functionalities, which are much less studied than non-reactive (secure function evaluation) functionalities. Finally, we conclude with a selection of open problems.

 Exploring the Limits of Common Coins Using Frontier Analysis of Protocols

Links: full version • proceedings
Abstract: In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information-theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required.
To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call frontier analysis. This involves analyzing carefully defined "frontiers" in a weighted tree induced by the protocol's execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions.
Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the "cryptomania assumption" (that there is a semi-honest or standalone-secure oblivious transfer protocol) in terms of UC-secure reductions among randomized functionalities. Also, we provide an unconditional result on the uselessness of common randomness, even in the computationally bounded setting.
Our results make significant progress towards understanding the exact power of shared randomness in cryptography. To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.

 Attribute-Based Signatures

Links: full version • proceedings ($)
Abstract: We introduce Attribute-Based Signatures (ABS), a versatile primitive that allows a party to sign a message with fine-grained control over identifying information. In ABS, a signer, who possesses a set of attributes from the authority, can sign a message with a predicate that is satisfied by his attributes. The signature reveals no more than the fact that a single user with some set of attributes satisfying the predicate has attested to the message. In particular, the signature hides the attributes used to satisfy the predicate and any identifying information about the signer (that could link multiple signatures as being from the same signer). Furthermore, users cannot collude to pool their attributes together.
We give a general framework for constructing ABS schemes, and then show several practical instantiations based on groups with bilinear pairing operations, under standard assumptions. Further, we give a construction which is secure even against a malicious attribute authority, but the security for this scheme is proven in the generic group model. We describe several practical problems that motivated this work, and how ABS can be used to solve them. Also, we show how our techniques allow us to extend Groth-Sahai NIZK proofs to be simulation-extractable and identity-based with low overhead.

 A Zero-One Law for Cryptographic Complexity with Respect to Computational UC Security

Links: full version • proceedings
Abstract: We use security in the Universal Composition framework as a means to study the "cryptographic complexity" of 2-party secure computation tasks (functionalities). We say that a functionality $F$ reduces to another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural "extremes" of complexity under the reduction: the trivial functionalities, which can be reduced to any other functionality; and the complete functionalities, to which any other functionality can be reduced.
In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a zero-one law for the cryptographic complexity of 2-party deterministic functionalities. Namely, every such functionality is either trivial or complete. No other qualitative distinctions exist among functionalities, under this computational assumption.
While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary reactive functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al. (TCC 2010)).

 Cryptographic Complexity Classes and Computational Intractability Assumptions

Links: full version • proceedings
Abstract: Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question.
We first aim to understand the “cryptographic complexity” of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study.
In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework.
If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.  Complexity of Multi-Party Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation

Links: full version • proceedings
Abstract: In symmetric secure function evaluation (SSFE), Alice has an input $x$, Bob has an input $y$, and both parties wish to securely compute $f(x,y)$. We classify these functions $f$ according to their ``cryptographic complexities,'' and show that the landscape of complexity among these functions is surprisingly rich.
We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and those that have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.
Our main technical tool for deriving complexity separations is a powerful protocol simulation theorem which states that, even in the strict setting of UC security, the canonical protocol for $f$ is as secure as any other protocol for $f$, as long as $f$ satisfies a certain combinatorial characterization. We can then show intuitively clear impossibility results by establishing the combinatorial properties of $f$ and then describing attacks against the very simple canonical protocols, which by extension are also feasible attacks against {\em any} protocol for the same functionality.  Towards Robust Computation on Encrypted Data

Links: proceedings • slides
Note: Subsumed by journal article Reconciling Non-malleability with Homomorphic Encryption, listed above.
Abstract: Encryption schemes that support computation on encrypted data are useful in constructing efficient and intuitively simple cryptographic protocols. However, the approach was previously limited to standalone and/or honest-but-curious security. In this work, we apply recent results on ``non-malleable homomorphic encryption'' to construct new protocols with Universally Composable security against active corruption, for certain interesting tasks. Also, we use our techniques to develop non-malleable homomorphic encryption that can handle homomorphic operations involving more than one ciphertext.  Cryptographic Complexity of Multi-Party Computation Problems: Classifications and Separations

Abstract: We develop new tools to study the relative complexities of secure multi-party computation tasks (functionalities) in the Universal Composition framework. When one task can be securely realized using another task as a black box, we interpret this as a qualitative, complexity-theoretic reduction between the two tasks. Virtually all previous characterizations of MPC functionalities, in the UC model or otherwise, focus exclusively on secure function evaluation. In comparison, the tools we develop do not rely on any special internal structure of the functionality, thus applying to functionalities with arbitrary behavior. Our tools additionally apply uniformly to both the PPT and unbounded computation models.
Our first main tool is the notion of {\em splittability}, which is an exact characterization of realizability in the UC framework with respect to a large class of communication channel functionalities. Using this characterization, we can rederive all previously known impossibility results as immediate and simple corollaries. We also complete the combinatorial characterization of 2-party secure function evaluation initiated by \cite{CanettiKuLi03} and partially extend the combinatorial conditions to the multi-party setting.
Our second main tool is the notion of {\em deviation-revealing} functionalities, which allows us to translate complexity separations in simpler MPC settings (such as the honest-but-curious corruption model) to the standard (malicious) setting. Applying this tool, we demonstrate the existence of functionalities which are neither realizable nor complete, in the unbounded computation model.  Homomorphic Encryption with CCA Security

Note: Subsumed by journal article Reconciling Non-malleability with Homomorphic Encryption, listed above.
Abstract: We address the problem of constructing public-key encryption schemes that meaningfully combine useful {\em computability features} with {\em non-malleability}. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a {\em feature}), for a specific set of allowed functions $T$, but the scheme is ``non-malleable'' with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine {\em multiple} ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.  Harvesting Credentials in Trust Negotiation as an Honest-But-Curious Adversary

Links: tech report • proceedings ($)
Abstract: Need-to-know is a fundamental security concept: a party should not learn information that is irrelevant to its mission. In this paper we show that during a trust negotiation in which parties show their credentials to one another, an adversary Alice can systematically harvest information about all of a victim Bob's credentials that Alice is entitled to see, regardless of their relevance to the negotiation. We prove that it is not possible to enforce need-to-know conditions with the trust negotiation model and protocol developed by Yu, Winslett, and Seamons. We also present examples of similar need-to-know attacks with the trust negotiation approaches proposed by Bonatti and Samarati, and by Winsborough and Li. Finally, we propose possible countermeasures against need-to-know attacks, and discuss their advantages and disadvantages.
 Rerandomizable RCCA Encryption

Note: Subsumed by journal article Reconciling Nonmalleability with Homomorphic Encryption, listed above.
Abstract: We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. [CRYPTO 2003]. Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a nontrivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful "Replayable Message Posting" functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme that achieves this enhancement.
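To make the rerandomizability property concrete, here is a toy sketch using plain (non-RCCA) textbook ElGamal over a tiny group: multiplying a ciphertext componentwise by a fresh encryption of 1 yields a new, independent-looking ciphertext for the same message. All parameters below are illustrative; plain ElGamal achieves rerandomizability but not RCCA security, and combining the two is precisely the paper's contribution.

```python
# Toy sketch of rerandomization with plain ElGamal (illustration only -- plain
# ElGamal is rerandomizable but NOT RCCA secure; achieving both at once is the
# point of the Double-strand Cramer-Shoup scheme).
P, G = 23, 2                     # toy group: order-11 subgroup of Z_23^*

def encrypt(h, m, r):            # ElGamal under public key h = g^x
    return (pow(G, r, P), m * pow(h, r, P) % P)

def decrypt(x, ct):
    a, b = ct
    return b * pow(pow(a, x, P), P - 2, P) % P   # b / a^x

def rerandomize(h, ct, s):       # multiply componentwise by Enc(1; s)
    a, b = ct
    return (a * pow(G, s, P) % P, b * pow(h, s, P) % P)

x = 7
h = pow(G, x, P)                 # public key
ct = encrypt(h, 4, r=5)
ct2 = rerandomize(h, ct, s=3)    # looks like an independent fresh encryption
assert ct2 != ct
assert decrypt(x, ct2) == 4      # ...but decrypts to the same message
```

In a real instantiation the randomness $s$ would be sampled uniformly, and the group would be a cryptographically large prime-order group rather than this 11-element toy.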
Other Projects
 Vamonos
 Browser-based platform for algorithm visualization [website]
 An Annotated Bibliography of Practical Secure Computation
 Reference resource for researchers in secure computation [website]
Professional Activities
Program committees:
Other service: IACR communications secretary
Other Writings
 Secure Your Data and Compute on It, Too

XRDS: Crossroads, the ACM Magazine for Students.
Links: article
Abstract: Modern cryptography provides techniques to perform useful computations on sensitive data.
 Correspondences regarding cryptography between John Nash and the NSA

Transcribed & typeset by Mike Rosulek.
Abstract: In 1955, the well-known mathematician John Nash was in correspondence with the United States National Security Agency. In these letters, Nash proposes a novel enciphering scheme. He also sets forth an important cryptographic principle that now underpins modern computational complexity theory and cryptography. In particular, he proposes a natural definition for "[security] in a practical sense": that exponential computational effort is required for an enemy to recover a secret key. Nash further conjectures that this property holds for any suitable enciphering mechanism.
These correspondences, recently declassified by the NSA, have been transcribed and typeset in this document.  The Structure of Secure Multi-Party Computation

University of Illinois PhD dissertation, 2009.
Links: single-spaced, hyperref • official version
Abstract: Secure multi-party computation is a conceptual framework in which distrusting parties engage in a protocol to securely perform a computational task. Depending on the precise model of security, different sets of tasks admit secure protocols. We take a complexity-theoretic approach to studying the inherent difficulty of securely realizing tasks in various standard security models.
 We give the first alternate characterization of secure realizability in the framework of universally composable (UC) security. This is the first characterization in any model to consider completely arbitrary computational tasks and completely arbitrary communication channels.
 The most long-standing class of computational tasks studied is that in which two parties evaluate a deterministic function. For these tasks, we give the first complete, combinatorial characterizations of secure realizability in the passive, standalone, and universally composable security models, against computationally unbounded adversaries.
 Say that a task \G has ``as much cryptographic complexity'' as another task \F if there is a secure protocol for \F that uses access to a secure implementation of \G. We show that there is an infinite hierarchy of tasks with {\em strictly increasing} cryptographic complexities, with respect to computationally unbounded security. We also show that there exist tasks whose cryptographic complexities are incomparable.
 In contrast, we show that under a standard cryptographic assumption, there exist only {\em two} distinct levels of cryptographic complexity with respect to polynomial-time security. Every task either has a trivial protocol using plain communication channels, or is complete (i.e., given access to a secure implementation of this task, there are secure protocols for all other tasks). This is the first result to derive a characterization of completeness for a class of arbitrary {\em interactive} tasks.
 We give a construction of a homomorphic encryption scheme in which the allowed homomorphic operation is as full-featured as possible: one can derive a {\em correctly-distributed} encryption of $f(m)$ given an encryption of an unknown message $m$, for some functions $f$, yet it is computationally infeasible to generate a ciphertext that is related to other ciphertexts in any other way. Our contributions involve developing new appropriate security definitions as well as new constructions.
 We show that schemes with such powerful security guarantees can be used to build conceptually simple, efficient, UCsecure protocols for verifiable computations on encrypted data. We show protocols for two tasks related to aggregating encrypted data.
Erratum: The proof of Theorem 4.17 is buggy. Please see the full version of our TCC 2009 paper for a correct proof (in Appendix C).
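The controlled-malleability feature described in the dissertation summary can be illustrated with a toy sketch in textbook ElGamal, where anyone can map an encryption of an unknown $m$ to an encryption of $f(m) = c \cdot m$. This is an illustration of the general idea only, not the dissertation's construction, which must additionally rule out every malleation outside the allowed family of functions $f$ and produce correctly-distributed ciphertexts (via rerandomization). All parameters are illustrative.

```python
# Toy sketch of a homomorphic "feature": plain ElGamal lets anyone map an
# encryption of unknown m to an encryption of f(m) = c*m, without keys.
# (Illustration only -- the dissertation's scheme must ALSO prevent all other
# malleations and rerandomize so the result is correctly distributed.)
P, G = 23, 2                     # toy group: order-11 subgroup of Z_23^*

def keygen(x):                   # secret key x in 1..10
    return pow(G, x, P)          # public key h = g^x

def encrypt(h, m, r):            # ElGamal ciphertext (g^r, m * h^r)
    return (pow(G, r, P), m * pow(h, r, P) % P)

def decrypt(x, ct):
    a, b = ct
    return b * pow(pow(a, x, P), P - 2, P) % P   # b / a^x

def apply_feature(ct, c):        # Enc(m) -> Enc(c*m); needs no keys
    a, b = ct
    return (a, c * b % P)

x = 7
h = keygen(x)
ct = encrypt(h, 3, r=5)          # encrypt m = 3
ct2 = apply_feature(ct, 4)       # anyone can multiply the plaintext by 4
assert decrypt(x, ct2) == 12     # 3 * 4 = 12
```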
Other Talks
 Towards Optimal Garbled Circuit Constructions
 Allerton Conference on Communication, Control and Computing, October 2015. [slides]
 A Brief History of Practical Garbled Circuit Optimizations
 Securing Computation workshop, Simons Institute, June 2015. [slides] [video]
 Reconciling Non-malleability and Homomorphic Encryption

Crypto in the Clouds workshop, August 2009.
[slides]
Updated talk at University of Maryland cryptography seminar, January 2012. [slides]  Zero-Knowledge Proofs, with Applications to Sudoku and Where's Waldo
 Educational talk, University of Montana, December 2008. [slides]
Personal
 My personal website, which contains some of my photography and surprisingly little else.
 My Flickr page, in case you want to see even more of my photography.