Mike Rosulek
I'm an assistant professor of Computer Science at Oregon State University, studying cryptography.

Many problems that we'd like to be able to solve with computers seem to be intrinsically hard; there seems to be no way of solving them efficiently. Cryptography is the study of how to leverage these computationally hard problems to control access to information. We design systems for which violating our desired security properties is equivalent to solving one of these intractable problems.

Most of my research has focused on the problem of secure computation: how can we perform arbitrary computations on data without actually seeing the data? What if the data is distributed among many mutually distrusting parties, some of whom might want to undermine the computation?

Contact

Students

Current students:

Publications

Also: DBLP, Google Scholar

Two Halves Make a Whole: Reducing Data Transfer in Garbled Circuits using Half Gates
Samee Zahur, Mike Rosulek & David Evans. Manuscript
The well-known classical constructions of garbled circuits use four ciphertexts per gate, although various methods have been proposed to reduce this cost. The best previously known methods for optimizing AND gates (two ciphertexts; Pinkas et al., ASIACRYPT 2009) and XOR gates (zero ciphertexts; Kolesnikov & Schneider, ICALP 2008) were incompatible, so most implementations used the best known method compatible with free-XOR gates (three ciphertexts; Kolesnikov & Schneider, ICALP 2008). In this work we show how to simultaneously garble AND gates using two ciphertexts and XOR gates using zero ciphertexts, resulting in smaller garbled circuits than any prior scheme. The main idea behind our construction is to break an AND gate into two half-gates: AND gates for which one party knows one input. Each half-gate can be garbled with a single ciphertext, so our construction uses two ciphertexts for each AND gate while being compatible with free-XOR gates. The price for the reduction in size is that the evaluator must perform two cryptographic operations per AND gate, rather than one as in previous schemes. We experimentally demonstrate that our garbling scheme leads to an overall decrease in time (up to 25%), bandwidth (up to 33%), and energy use (up to 20%) over several benchmark applications. We also initiate a study of lower bounds for garbled gate size, and show that our construction is optimal for a large class of garbling schemes encompassing all known practical garbling techniques.
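As a point of reference for the numbers above, here is a minimal Python sketch of the classical four-ciphertexts-per-gate garbled AND gate that this abstract takes as its baseline. It is purely illustrative: it omits point-and-permute and row reduction, it is not the half-gates construction from the paper, and all function names and parameters are made up for the example.

# Toy sketch of the classical four-ciphertext garbled AND gate (not the
# half-gates scheme). Wire labels are random 128-bit strings; the hash H is
# modeled by SHA-256. Point-and-permute and row reduction are omitted.
import os, hashlib

KAPPA = 16  # label length in bytes

def H(a, b):
    return hashlib.sha256(a + b).digest()[:KAPPA]

def xor(x, y):
    return bytes(u ^ v for u, v in zip(x, y))

def garble_and_gate():
    """Return the wire labels and the four ciphertexts for the gate c = a AND b."""
    labels = {w: (os.urandom(KAPPA), os.urandom(KAPPA)) for w in "abc"}
    table = [xor(H(labels["a"][va], labels["b"][vb]), labels["c"][va & vb])
             for va in (0, 1) for vb in (0, 1)]
    return labels, table

def evaluate_and_gate(table, label_a, label_b, va, vb):
    """In a real scheme the evaluator selects the row via permute bits,
    not via the (secret) plaintext values va, vb used here for simplicity."""
    return xor(table[2 * va + vb], H(label_a, label_b))

labels, table = garble_and_gate()
out = evaluate_and_gate(table, labels["a"][1], labels["b"][0], 1, 0)
assert out == labels["c"][0]   # 1 AND 0 = 0
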
How to Efficiently Evaluate RAM Programs with Malicious Security
Arash Afshar, Zhangxiang Hu, Payman Mohassel & Mike Rosulek. Manuscript
Secure 2-party computation (2PC) is becoming practical for some applications. However, most approaches are limited by the fact that the desired functionality must be represented as a boolean circuit. In response, random-access machines (RAM programs) have recently been investigated as a promising alternative representation.

In this work, we present the first practical protocols for evaluating RAM programs with security against malicious adversaries. A useful efficiency measure is to divide the cost of malicious-secure evaluation of $f$ by the cost of semi-honest-secure evaluation of $f$. Our RAM protocols achieve ratios matching the state of the art for circuit-based 2PC. For statistical security $2^{-s}$, our protocol without preprocessing achieves a ratio of $s$; our online-offline protocol has a pre-processing phase and achieves online ratio $\sim 2 s / \log T$, where $T$ is the total execution time of the RAM program.

To summarize, our solutions show that the "extra overhead" of obtaining malicious security for RAM programs (beyond what is needed for circuits) is minimal and does not grow with the running time of the program.
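(As a rough illustration of these ratios, using typical parameters rather than figures from the paper: with statistical security $s = 40$ and a RAM program running for $T = 2^{20}$ steps, the online-offline protocol's online ratio is roughly $2 \cdot 40 / \log 2^{20} = 4$, compared to a ratio of $40$ for the protocol without preprocessing.)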
FleXOR: Flexible garbling for XOR gates that beats free-XOR
Vladimir Kolesnikov, Payman Mohassel & Mike Rosulek. CRYPTO 2014
Most implementations of Yao's garbled circuit approach for 2-party secure computation use the free-XOR optimization of Kolesnikov & Schneider (ICALP 2008). We introduce an alternative technique called flexible-XOR (fleXOR) that generalizes free-XOR and offers several advantages. First, fleXOR can be instantiated under a weaker hardness assumption on the underlying cipher/hash function (related-key security only, compared to related-key and circular security required for free-XOR) while maintaining most of the performance improvements that free-XOR offers. Alternatively, even though XOR gates are not always "free" in our approach, we show that the other (non-XOR) gates can be optimized more heavily than what is possible when using free-XOR. For many circuits of cryptographic interest, this can yield a significantly (over 30%) smaller garbled circuit than any other known techniques (including free-XOR) or their combinations.
Multi-Party Computation for Polynomials and Branching Programs without Simultaneous Interaction
Dov Gordon, Tal Malkin, Mike Rosulek & Hoeteck Wee. Eurocrypt 2013
Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.

In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks:
  • Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus $N$), where the parties each hold an input $x_i$.
  • Evaluating any read-once branching program over the parties' inputs.
As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, second-price auctions, classification algorithms and some classes of finite automata and decision trees.
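Shown below is a concrete, plaintext illustration of the second function class above: a toy read-once branching program for bit equality. It demonstrates only what such a program computes; it is not the one-pass secure protocol, and every name in it is hypothetical.

# A toy read-once branching program: each node reads one party's input bit
# and branches, and leaves carry the output. This is the plaintext
# computation only, not the secure protocol.
def eval_branching_program(program, inputs, start="start"):
    """program maps node -> ("leaf", value) or (variable, next_if_0, next_if_1)."""
    node = start
    while True:
        entry = program[node]
        if entry[0] == "leaf":
            return entry[1]
        var, if0, if1 = entry
        node = if1 if inputs[var] else if0

# Example: equality of two bits x1 and x2, written as a read-once program.
EQ = {
    "start": ("x1", "x1_is_0", "x1_is_1"),
    "x1_is_0": ("x2", "acc", "rej"),
    "x1_is_1": ("x2", "rej", "acc"),
    "acc": ("leaf", 1),
    "rej": ("leaf", 0),
}
print(eval_branching_program(EQ, {"x1": 1, "x2": 1}))  # -> 1
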
Characterizing the Cryptographic Properties of Reactive 2-Party Functionalities
R. Amzi Jeffs & Mike Rosulek. TCC 2013
In secure multi-party computation, a reactive functionality is one which maintains persistent state, takes inputs, and gives outputs over many rounds of interaction with its parties. Reactive functionalities are fundamental and model many interesting and natural cryptographic tasks; yet their security properties are not nearly as well-understood as in the non-reactive case (known as secure function evaluation).

We present new combinatorial characterizations for 2-party reactive functionalities, which we model as finite automata. We characterize the functionalities that have passive-secure protocols, and those which are complete with respect to passive adversaries. Both characterizations are in the information-theoretic setting.
A Unified Characterization of Completeness and Triviality for Secure Function Evaluation
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. Indocrypt 2012
We present unified combinatorial characterizations of completeness for 2-party secure function evaluation (SFE) against passive and active corruptions, so that all known characterizations appear as special cases.

In doing so we develop new technical concepts. We define several notions of isomorphism of SFE functionalities and define the "kernel" of an SFE functionality. An SFE functionality is then said to be "simple" if and only if it is strongly isomorphic to its kernel. An SFE functionality $F'$ is a core of an SFE functionality $F$ if it is "redundancy free" and is weakly isomorphic to $F$. Then:
  • An SFE functionality is complete for security against passive corruptions if and only if it is not simple.
  • A deterministic SFE functionality is complete for security against active corruptions if and only if it has a core that is not simple. We conjecture that this characterization extends to randomized SFE as well.
We further give explicit combinatorial characterizations of simple SFE functionalities.

Finally, we apply our new notions of isomorphism to reduce the problem of characterization of trivial functionalities (i.e., those securely realizable without setups) for the case of general SFE to the same problem for the case of simple symmetric SFE.
Must you know the code of f to securely compute f?
Mike Rosulek. CRYPTO 2012
When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit, and then they securely evaluate that circuit gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a non-black-box-way from $f$ itself. As a consequence, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.

In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, must one know the code of $f$ to securely evaluate $f$?

In this work we initiate the theoretical study of this question. We show the following:
  • A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without "knowing" the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without "knowing" the code of the function.
  • Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without "knowing" the code of the one-way function.
Universal Composability from Essentially Any Trusted Setup
Mike Rosulek. CRYPTO 2012
It is impossible to securely carry out general multi-party computation in arbitrary network contexts like the Internet, unless protocols have access to some trusted setup. In this work we classify the power of such trusted (2-party) setup functionalities. We show that nearly every setup is either useless (ideal access to the setup is equivalent to having no setup at all) or else complete (composably secure protocols for all tasks exist in the presence of the setup). We further argue that those setups which are neither complete nor useless are highly unnatural.

The main technical contribution in this work is an almost-total characterization of completeness for 2-party setups. Our characterization treats setup functionalities as black-boxes, and therefore is the first work to classify completeness of arbitrary setup functionalities (i.e., randomized, reactive, and having behavior that depends on the global security parameter).
Complexity of Multi-Party Computation Functionalities
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. Book chapter
The central objects of secure multiparty computation are the "multiparty functions" (or functionalities) that it seeks to securely realize. In this article we survey a set of results that constitute a Cryptographic Complexity Theory. This theory classifies and compares multiparty functions according to their secure computability and reducibility to each other. The basic questions studied, under various notions of security and reducibility, include:
  • Which functionalities are securely realizable (or are "trivial" -- i.e., can be reduced to any functionality)?
  • Which functionalities are "complete" -- i.e., those to which any functionality can be reduced?
  • More generally, which functionalities are reducible to which? Outside of triviality and completeness, this question is relatively less explored.
Reductions yield relative measures of complexity among various functionalities. In the information-theoretic setting, absolute complexity measures have also been considered. In particular, we discuss results regarding which functions have $t$-private protocols (in which security is required against a passive adversary corrupting $t$ out of $n$ players) and how this set changes as $t$ increases from 1 to $n$.

We treat separately the results on two-party functionalities, for which the cryptographic complexity is much better understood. In particular, we present unified combinatorial characterizations of completeness and triviality for secure function evaluation using notions of isomorphism and the common information functionality (called the kernel) of a given functionality. Beyond completeness and triviality, we also discuss results on general reducibility, and, in the computationally bounded setting, the connection between these reductions and computational hardness assumptions.

We briefly discuss results on reactive functionalities, which are much less studied than non-reactive (secure function evaluation) functionalities. Finally, we conclude with a selection of open problems.
Exploring the Limits of Common Coins Using Frontier Analysis of Protocols
Hemanta Maji, Pichayoot Ouppaphan, Manoj Prabhakaran & Mike Rosulek. TCC 2011
In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required.

To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call the frontier analysis. This involves analyzing carefully defined "frontiers" in a weighted tree induced by the protocol's execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions.

Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the "cryptomania assumption" (that there is a semi-honest or standalone secure oblivious transfer protocol) in terms of UC-secure reduction among randomized functionalities. Also, we provide an unconditional result on the uselessness of common randomness, even in the computationally bounded setting.

Our results make significant progress towards understanding the exact power of shared randomness in cryptography. To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.
Attribute-Based Signatures
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. CT-RSA 2011
We introduce Attribute-Based Signatures (ABS), a versatile primitive that allows a party to sign a message with fine-grained control over identifying information. In ABS, a signer, who possesses a set of attributes from the authority, can sign a message with a predicate that is satisfied by his attributes. The signature reveals no more than the fact that a single user with some set of attributes satisfying the predicate has attested to the message. In particular, the signature hides the attributes used to satisfy the predicate and any identifying information about the signer (that could link multiple signatures as being from the same signer). Furthermore, users cannot collude to pool their attributes together.

We give a general framework for constructing ABS schemes, and then show several practical instantiations based on groups with bilinear pairing operations, under standard assumptions. Further, we give a construction which is secure even against a malicious attribute authority, but the security for this scheme is proven in the generic group model. We describe several practical problems that motivated this work, and how ABS can be used to solve them. Also, we show how our techniques allow us to extend Groth-Sahai NIZK proofs to be simulation-extractable and identity-based with low overhead.
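To make the signing policy concrete, the following toy Python snippet illustrates the claim-predicate aspect of ABS. It contains no cryptography and is not taken from the paper's constructions; it only shows which attribute sets satisfy a predicate, which is the condition under which a signer could produce an ABS signature revealing nothing beyond predicate satisfaction. All names and the example predicate are hypothetical.

# Toy illustration of ABS claim-predicates (no cryptography; not from the paper).
def satisfies(predicate, attributes):
    """predicate is a nested tuple: ("attr", name), ("and", p, q), or ("or", p, q)."""
    kind = predicate[0]
    if kind == "attr":
        return predicate[1] in attributes
    if kind == "and":
        return satisfies(predicate[1], attributes) and satisfies(predicate[2], attributes)
    return satisfies(predicate[1], attributes) or satisfies(predicate[2], attributes)

# Example claim: "(doctor AND hospital-A) OR administrator"
claim = ("or", ("and", ("attr", "doctor"), ("attr", "hospital-A")),
               ("attr", "administrator"))

print(satisfies(claim, {"doctor", "hospital-A"}))   # True: this signer may sign under the claim
print(satisfies(claim, {"doctor", "hospital-B"}))   # False: this signer may not
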
A Zero-One Law for Cryptographic Complexity with Respect to Computational UC Security
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. CRYPTO 2010
We use security in the Universal Composition framework as a means to study the "cryptographic complexity" of 2-party secure computation tasks (functionalities). We say that a functionality $F$ reduces to another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural "extremes" of complexity under the reduction: the trivial functionalities, which can be reduced to any other functionality; and the complete functionalities, to which any other functionality can be reduced.

In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a zero-one law for the cryptographic complexity of 2-party deterministic functionalities. Namely, every such functionality is either trivial or complete. No other qualitative distinctions exist among functionalities, under this computational assumption.

While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary reactive functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al. (TCC 2010)).
Cryptographic Complexity Classes and Computational Intractability Assumptions
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. ICS 2010
Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question.

We first aim to understand the "cryptographic complexity" of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study.

In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework.

If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.
Complexity of Multiparty Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation
Hemanta Maji, Manoj Prabhakaran & Mike Rosulek. TCC 2009
In symmetric secure function evaluation (SSFE), Alice has an input $x$, Bob has an input $y$, and both parties wish to securely compute $f(x,y)$. We classify these functions $f$ according to their "cryptographic complexities," and show that the landscape of complexity among these functions is surprisingly rich.

We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and of those that have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.

Our main technical tool for deriving complexity separations is a powerful protocol simulation theorem which states that, even in the strict setting of UC security, the canonical protocol for $f$ is as secure as any other protocol for $f$, as long as $f$ satisfies a certain combinatorial characterization. We can then show intuitively clear impossibility results by establishing the combinatorial properties of $f$ and then describing attacks against the very simple canonical protocols, which by extension are also feasible attacks against any protocol for the same functionality.
Towards Robust Computation on Encrypted Data
Manoj Prabhakaran & Mike Rosulek. Asiacrypt 2008
Encryption schemes that support computation on encrypted data are useful in constructing efficient and intuitively simple cryptographic protocols. However, the approach was previously limited to stand-alone and/or honest-but-curious security. In this work, we apply recent results on "non-malleable homomorphic encryption" to construct new protocols with Universally Composable security against active corruption, for certain interesting tasks. Also, we use our techniques to develop non-malleable homomorphic encryption that can handle homomorphic operations involving more than one ciphertext.
Cryptographic Complexity of Multi-party Computation Problems: Classifications and Separations
Manoj Prabhakaran & Mike Rosulek. CRYPTO 2008
We develop new tools to study the relative complexities of secure multi-party computation tasks (functionalities) in the Universal Composition framework. When one task can be securely realized using another task as a black-box, we interpret this as a qualitative, complexity-theoretic reduction between the two tasks. Virtually all previous characterizations of MPC functionalities, in the UC model or otherwise, focus exclusively on secure function evaluation. In comparison, the tools we develop do not rely on any special internal structure of the functionality, thus applying to functionalities with arbitrary behavior. Our tools additionally apply uniformly to both the PPT and unbounded computation models.

Our first main tool is the notion of splittability, which is an exact characterization of realizability in the UC framework with respect to a large class of communication channel functionalities. Using this characterization, we can rederive all previously-known impossibility results as immediate and simple corollaries. We also complete the combinatorial characterization of 2-party secure function evaluation initiated by Canetti, Kushilevitz & Lindell (2003) and partially extend the combinatorial conditions to the multi-party setting.

Our second main tool is the notion of deviation-revealing functionalities, which allows us to translate complexity separations in simpler MPC settings (such as the honest-but-curious corruption model) to the standard (malicious) setting. Applying this tool, we demonstrate the existence of functionalities which are neither realizable nor complete, in the unbounded computation model.
Homomorphic Encryption with CCA Security
Manoj Prabhakaran & Mike Rosulek. ICALP 2008
We address the problem of constructing public-key encryption schemes that meaningfully combine useful computability features with non-malleability. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a feature), for a specific set of allowed functions $T$, but the scheme is "non-malleable" with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine multiple ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.
Harvesting Credentials in Trust Negotiation as an Honest-But-Curious Adversary
Lars Olson, Mike Rosulek & Marianne Winslett. WPES 2007
Need-to-know is a fundamental security concept: a party should not learn information that is irrelevant to its mission. In this paper we show that during a trust negotiation in which parties show their credentials to one another, an adversary Alice can systematically harvest information about all of a victim Bob's credentials that Alice is entitled to see, regardless of their relevance to a negotiation. We prove that it is not possible to enforce need-to-know conditions with the trust negotiation model and protocol developed by Yu, Winslett, and Seamons. We also present examples of similar need-to-know attacks with the trust negotiation approaches proposed by Bonatti and Samarati, and by Winsborough and Li. Finally, we propose possible countermeasures against need-to-know attacks, and discuss their advantages and disadvantages.
Rerandomizable RCCA Encryption
Manoj Prabhakaran & Mike Rosulek. CRYPTO 2007
We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. [CRYPTO 2003]. Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a non-trivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful "Replayable Message Posting" functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme that achieves this enhancement.
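For readers unfamiliar with rerandomization, here is a toy Python sketch of the property on plain ElGamal. This is not the Double-strand Cramer-Shoup scheme from the paper (plain ElGamal is not RCCA secure), and the tiny group parameters are insecure and for illustration only.

# Toy illustration of ciphertext rerandomization on plain ElGamal.
import secrets

# Tiny insecure demo group: the order-q subgroup of Z_p^* with p = 2q + 1.
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1           # secret key
    return x, pow(g, x, p)                     # (sk, pk = g^x)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), (m * pow(pk, r, p)) % p)

def rerandomize(pk, ct):
    """Anyone can refresh a ciphertext so it is distributed like a fresh encryption."""
    c1, c2 = ct
    s = secrets.randbelow(q - 1) + 1
    return ((c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p)

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 / c1^sk

sk, pk = keygen()
ct = encrypt(pk, 9)        # message must lie in the subgroup for a real scheme
ct2 = rerandomize(pk, ct)
assert decrypt(sk, ct2) == decrypt(sk, ct) == 9
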

Other Projects

Vamonos
Browser-based platform for algorithm visualization [website]
An Annotated Bibliography of Practical Secure Computation
Reference resource for researchers in secure computation [website]
Research Experiences for Undergraduates
Let's work on crypto together this summer! [website]

Professional Activities

Program committees:
Other service: IACR communications secretary

Other Writings

Correspondences regarding cryptography between John Nash and the NSA
John Nash. Transcribed & typeset by Mike Rosulek
In 1955, well-known mathematician John Nash was in correspondence with the United States National Security Agency. In these letters, Nash proposes a novel enciphering scheme. He also sets forth an important cryptographic principle that now underpins modern computational complexity theory and cryptography. In particular, he proposes a natural definition for "[security] in a practical sense": that exponential computational effort is required for an enemy to recover a secret key. Nash further conjectures that this property holds for any suitable enciphering mechanism.

These correspondences, recently declassified by the NSA, have been transcribed and typeset in this document.
The Structure of Secure Multi-Party Computation
Mike Rosulek. University of Illinois PhD dissertation, 2009
Secure multi-party computation is a conceptual framework in which distrusting parties engage in a protocol to securely perform a computational task. Depending on the precise model of security, different sets of tasks admit secure protocols. We take a complexity-theoretic approach to studying the inherent difficulty of securely realizing tasks in various standard security models.
  • We give the first alternate characterization of secure realizability in the framework of universally composable (UC) security. This is the first characterization in any model to consider completely arbitrary computational tasks and completely arbitrary communication channels.
  • The most long-standing class of computational tasks studied are those in which two parties evaluate a deterministic function. For these tasks, we give the first complete, combinatorial characterizations of secure realizability in the passive, standalone, and universally composable security models, against computationally unbounded adversaries.
  • Say that a task $G$ has "as much cryptographic complexity" as another task $F$ if there is a secure protocol for $F$ that uses access to a secure implementation of $G$. We show that there is an infinite hierarchy of tasks with strictly increasing cryptographic complexities, with respect to computationally unbounded security. We also show that there exist tasks whose cryptographic complexities are incomparable.
  • In contrast, we show that under a standard cryptographic assumption, there exist only two distinct levels of cryptographic complexity with respect to polynomial-time security. Every task either has a trivial protocol using plain communication channels, or is complete (i.e., given access to a secure implementation of this task, there are secure protocols for all other tasks). This is the first result to derive a characterization of completeness for a class of arbitrary interactive tasks.
In light of these characterizations, the only tasks which are securely realizable in the demanding framework of universal composition are those related to secure communication. Indeed, the framework has been used to define the security of encryption schemes, which has allowed for modular design and analysis of protocols. We consider a similar approach for homomorphic encryption schemes. A homomorphic scheme is one in which anyone can obtain an encryption of $f(m_1, \ldots, m_n)$, given only the encryptions of unknown messages $m_1, \ldots, m_n$, for a specific set of functions $f$.
  • We give a construction of a homomorphic encryption scheme in which the allowed homomorphic operation is as full-featured as possible (namely, one can derive a correctly-distributed encryption of $f(m)$ given an encryption of unknown message $m$, for some functions $f$), yet it is computationally infeasible to generate a ciphertext that is related to other ciphertexts in any other way. Our contributions involve developing new appropriate security definitions as well as new constructions.
  • We show that schemes with such powerful security guarantees can be used to build conceptually simple, efficient, UC-secure protocols for verifiable computations on encrypted data. We show protocols for two tasks related to aggregating encrypted data.

Other Talks

Reconciling Non-malleability and Homomorphic Encryption
Crypto in the Clouds workshop, August 2009. [slides]
Updated talk at University of Maryland cryptography seminar, January 2012. [slides]
Zero-Knowledge Proofs, with Applications to Sudoku and Where's Waldo
Educational talk, University of Montana, December 2008. [slides]
The State of the Art in Program Obfuscation
UIUC CS theory seminar, September 2006. [slides]

Personal

Links