Mike Rosulek
I'm an assistant professor of Computer Science at Oregon State University, studying cryptography.
Research into the limits of computation has identified many problems that cannot be solved with any reasonable amount of computational resources. Cryptography is all about leveraging these computationally hard problems to prevent adversarial behavior. In cryptographic systems, mounting a successful attack is not impossible in principle, but equivalent to solving one of these intractable problems.
Most of my research has focused on the problem of secure computation: how can we perform arbitrary computations on data without actually seeing the data? What if the data is distributed among many mutually distrusting parties, some of whom might want to undermine the computation?
Contact
 rosulekm@eecs.oregonstate.edu
 KEC 3063
 541.737.1174 (rarely answered)
Students
Current students:
- Zhangxiang Hu (PhD)
- Peter Rindal (PhD)
- Brent Carmer (MS)
Teaching
Spring 2015
CS 517: Computational Complexity
Publications
click title for more info
Also: DBLP, Google Scholar
 Two Halves Make a Whole: Reducing Data Transfer in Garbled Circuits using Half Gates

Links: preprint • proceedings ($)
Abstract: The well-known classical constructions of garbled circuits use four ciphertexts per gate, although various methods have been proposed to reduce this cost. The best previously known methods for optimizing AND gates (two ciphertexts; Pinkas et al., ASIACRYPT 2009) and XOR gates (zero ciphertexts; Kolesnikov & Schneider, ICALP 2008) were incompatible, so most implementations used the best known method compatible with free-XOR gates (three ciphertexts; Kolesnikov & Schneider, ICALP 2008). In this work we show how to simultaneously garble AND gates using two ciphertexts and XOR gates using zero ciphertexts, resulting in smaller garbled circuits than any prior scheme. The main idea behind our construction is to break an AND gate into two "half-gates": AND gates for which one party knows one input. Each half-gate can be garbled with a single ciphertext, so our construction uses two ciphertexts for each AND gate while being compatible with free-XOR gates. The price for the reduction in size is that the evaluator must perform two cryptographic operations per AND gate, rather than one as in previous schemes. We experimentally demonstrate that our garbling scheme leads to an overall decrease in time (up to 25%), bandwidth (up to 33%), and energy use (up to 20%) over several benchmark applications. We also initiate a study of lower bounds for garbled gate size, and show that our construction is optimal for a large class of garbling schemes encompassing all known practical garbling techniques.
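As background for the free-XOR compatibility discussed above, here is a minimal Python sketch of the free-XOR trick itself (Kolesnikov & Schneider): with a global offset relating each wire's two labels, XOR gates need zero ciphertexts. The 16-byte label length and all function names are illustrative, and the AND-gate (half-gate) garbling that is this paper's actual contribution is omitted entirely.

```python
import os

KAPPA = 16  # wire-label length in bytes (toy parameter)

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Global offset R: on every wire, the label for 1 is (label for 0) XOR R.
R = os.urandom(KAPPA)

def fresh_wire():
    w0 = os.urandom(KAPPA)
    return (w0, xor_bytes(w0, R))  # (label encoding 0, label encoding 1)

def garble_xor(wire_a, wire_b):
    # The output-0 label is just a0 XOR b0 -- the garbler sends NO ciphertexts.
    c0 = xor_bytes(wire_a[0], wire_b[0])
    return (c0, xor_bytes(c0, R))

def eval_xor(label_a, label_b):
    # The evaluator simply XORs whichever two labels it holds.
    return xor_bytes(label_a, label_b)

# Correctness: for all four inputs, the evaluator derives the right output label.
a, b = fresh_wire(), fresh_wire()
c = garble_xor(a, b)
for i in (0, 1):
    for j in (0, 1):
        assert eval_xor(a[i], b[j]) == c[i ^ j]
```

The check works because a[i] XOR b[j] = (a0 XOR b0) XOR (i XOR j)·R, which is exactly the output label for bit i XOR j.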
 How to Efficiently Evaluate RAM Programs with Malicious Security

Abstract: Secure 2-party computation (2PC) is becoming practical for some applications. However, most approaches are limited by the fact that the desired functionality must be represented as a boolean circuit. In response, random-access machines (RAM programs) have recently been investigated as a promising alternative representation.
In this work, we present the first practical protocols for evaluating RAM programs with security against malicious adversaries. A useful efficiency measure is to divide the cost of malicious-secure evaluation of $f$ by the cost of semi-honest-secure evaluation of $f$. Our RAM protocols achieve ratios matching the state of the art for circuit-based 2PC. For statistical security $2^{-s}$, our protocol without preprocessing achieves a ratio of $s$; our online-offline protocol has a preprocessing phase and achieves online ratio $\sim 2s/\log T$, where $T$ is the total execution time of the RAM program.
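As a rough, illustrative calculation of these overhead ratios (the parameter values below are chosen arbitrarily, not taken from the paper's benchmarks):

```python
import math

def ratio_no_preprocessing(s):
    # Without preprocessing: malicious-to-semi-honest cost ratio is ~s,
    # where 2^-s is the statistical security level.
    return s

def online_ratio_with_preprocessing(s, T):
    # With an offline phase: online ratio is ~2s / log2(T),
    # where T is the RAM program's total running time.
    return 2 * s / math.log2(T)

# Illustrative numbers: statistical security 2^-40, a RAM program of T = 2^20 steps.
print(ratio_no_preprocessing(40))                  # 40
print(online_ratio_with_preprocessing(40, 2**20))  # 4.0
```

So for long-running programs the online overhead of malicious security shrinks well below the cut-and-choose factor $s$.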
To summarize, our solutions show that the "extra overhead" of obtaining malicious security for RAM programs (beyond what is needed for circuits) is minimal and does not grow with the running time of the program.
 Richer Efficiency/Security Tradeoffs in 2PC

Abstract: The dual-execution protocol of Mohassel & Franklin (PKC 2006) is a highly efficient (each party garbling only one circuit) 2PC protocol that achieves malicious security apart from leaking an arbitrary, adversarially chosen predicate about the honest party's input. We present two practical and orthogonal approaches to improve the security of the dual-execution technique.
First, we show how to greatly restrict the predicate that an adversary can learn in the protocol, to a natural notion of "only computation leaks"-style leakage. Along the way, we identify a natural security property of garbled circuits called property-enforcing that may be of independent interest.
Second, we address a complementary direction of reducing the probability that the leakage occurs. We propose a new dual-execution protocol, with a very light cheating-detection phase and each party garbling $s+1$ circuits, in which a cheating party learns a bit with probability only $2^{-s}$. Our concrete measurements show approximately 35% reduction in communication for the AES circuit, compared to the best combination of state-of-the-art techniques for achieving the same security notion.
Combining the two results, we achieve a rich continuum of practical tradeoffs between efficiency & security, connecting the covert, dual-execution and full-malicious guarantees.
 FleXOR: Flexible garbling for XOR gates that beats free-XOR

Abstract: Most implementations of Yao's garbled circuit approach for 2-party secure computation use the free-XOR optimization of Kolesnikov & Schneider (ICALP 2008). We introduce an alternative technique called flexible-XOR (fleXOR) that generalizes free-XOR and offers several advantages. First, fleXOR can be instantiated under a weaker hardness assumption on the underlying cipher/hash function (related-key security only, compared to related-key and circular security required for free-XOR) while maintaining most of the performance improvements that free-XOR offers. Alternatively, even though XOR gates are not always "free" in our approach, we show that the other (non-XOR) gates can be optimized more heavily than what is possible when using free-XOR. For many circuits of cryptographic interest, this can yield a significantly (over 30%) smaller garbled circuit than any other known techniques (including free-XOR) or their combinations.
 Multi-Party Computation for Polynomials and Branching Programs without Simultaneous Interaction

Links: full version
Abstract: Halevi, Lindell, and Pinkas (CRYPTO 2011) recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.
In this work we present a suite of new, simple and efficient protocols for secure computation in this "one-pass" model. We give protocols that obtain optimal privacy for the following general tasks:
- Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus $N$), where the parties each hold an input $x_i$.
- Evaluating any read-once branching program over the parties' inputs.
 Characterizing the Cryptographic Properties of Reactive 2-Party Functionalities

Links: proceedings (author copy)
Abstract: In secure multi-party computation, a reactive functionality is one which maintains persistent state, takes inputs, and gives outputs over many rounds of interaction with its parties. Reactive functionalities are fundamental and model many interesting and natural cryptographic tasks; yet their security properties are not nearly as well-understood as in the non-reactive case (known as secure function evaluation).
We present new combinatorial characterizations for 2-party reactive functionalities, which we model as finite automata. We characterize the functionalities that have passive-secure protocols, and those which are complete with respect to passive adversaries. Both characterizations are in the information-theoretic setting.
 A Unified Characterization of Completeness and Triviality for Secure Function Evaluation

Links: proceedings ($)
Abstract: We present unified combinatorial characterizations of completeness for 2-party secure function evaluation (SFE) against passive and active corruptions, so that all known characterizations appear as special cases.
In doing so we develop new technical concepts. We define several notions of isomorphism of SFE functionalities and define the "kernel" of an SFE functionality. An SFE functionality is then said to be "simple" if and only if it is strongly isomorphic to its kernel. A core of an SFE functionality F is a functionality that is "redundancy free" and weakly isomorphic to F. Then:
- An SFE functionality is complete for security against passive corruptions if and only if it is not simple.
- A deterministic SFE functionality is complete for security against active corruptions if and only if it has a core that is not simple. We conjecture that this characterization extends to randomized SFE as well.
Finally, we apply our new notions of isomorphism to reduce the problem of characterization of trivial functionalities (i.e., those securely realizable without setups) for the case of general SFE to the same problem for the case of simple symmetric SFE.
 Must you know the code of f to securely compute f?

Abstract: When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit, and then they securely evaluate that circuit gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a non-black-box way from $f$ itself. As a consequence, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.
In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, must one know the code of $f$ to securely evaluate $f$?
In this work we initiate the theoretical study of this question. We show the following:
- A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without "knowing" the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without "knowing" the code of the function.
- Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without "knowing" the code of the one-way function.
 Universal Composability from Essentially Any Trusted Setup

Abstract: It is impossible to securely carry out general multi-party computation in arbitrary network contexts like the Internet, unless protocols have access to some trusted setup. In this work we classify the power of such trusted (2-party) setup functionalities. We show that nearly every setup is either useless (ideal access to the setup is equivalent to having no setup at all) or else complete (composably secure protocols for all tasks exist in the presence of the setup). We further argue that those setups which are neither complete nor useless are highly unnatural.
The main technical contribution in this work is an almost-total characterization of completeness for 2-party setups. Our characterization treats setup functionalities as black boxes, and therefore is the first work to classify completeness of arbitrary setup functionalities (i.e., randomized, reactive, and having behavior that depends on the global security parameter).
 Complexity of Multi-Party Computation Functionalities

Book chapter.
Links: publisher page • author version
Abstract: The central objects of secure multi-party computation are the "multi-party functions" (or functionalities) that it seeks to securely realize. In this article we survey a set of results that constitute a Cryptographic Complexity Theory. This theory classifies and compares multi-party functions according to their secure computability and reducibility to each other. The basic questions studied, under various notions of security and reducibility, include:
- Which functionalities are securely realizable (or are "trivial", i.e., can be reduced to any functionality)?
- Which functionalities are "complete", i.e., those to which any functionality can be reduced?
- More generally, which functionalities are reducible to which? Outside of triviality and completeness, this question is relatively less explored.
We treat separately the results on two-party functionalities, for which the cryptographic complexity is much better understood. In particular, we present unified combinatorial characterizations of completeness and triviality for secure function evaluation using notions of isomorphism and the common information functionality (called the kernel) of a given functionality. Beyond completeness and triviality, we also discuss results on general reducibility, and, in the computationally bounded setting, the connection between these reductions and computational hardness assumptions.
We briefly discuss results on reactive functionalities, which are much less studied than non-reactive (secure function evaluation) functionalities. Finally, we conclude with a selection of open problems.
 Exploring the Limits of Common Coins Using Frontier Analysis of Protocols

Links: full version • proceedings
Abstract: In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information-theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required.
To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call the frontier analysis. This involves analyzing carefully defined "frontiers" in a weighted tree induced by the protocol's execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions.
Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the "cryptomania assumption" (that there is a semi-honest or standalone secure oblivious transfer protocol) in terms of UC-secure reduction among randomized functionalities. Also, we provide an unconditional result on the uselessness of common randomness, even in the computationally bounded setting.
Our results make significant progress towards understanding the exact power of shared randomness in cryptography. To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.
 Attribute-Based Signatures

Links: full version • proceedings ($)
Abstract: We introduce Attribute-Based Signatures (ABS), a versatile primitive that allows a party to sign a message with fine-grained control over identifying information. In ABS, a signer, who possesses a set of attributes from the authority, can sign a message with a predicate that is satisfied by his attributes. The signature reveals no more than the fact that a single user with some set of attributes satisfying the predicate has attested to the message. In particular, the signature hides the attributes used to satisfy the predicate and any identifying information about the signer (that could link multiple signatures as being from the same signer). Furthermore, users cannot collude to pool their attributes together.
We give a general framework for constructing ABS schemes, and then show several practical instantiations based on groups with bilinear pairing operations, under standard assumptions. Further, we give a construction which is secure even against a malicious attribute authority, but the security for this scheme is proven in the generic group model. We describe several practical problems that motivated this work, and how ABS can be used to solve them. Also, we show how our techniques allow us to extend Groth-Sahai NIZK proofs to be simulation-extractable and identity-based with low overhead.
 A Zero-One Law for Cryptographic Complexity with Respect to Computational UC Security

Links: full version • proceedings
Abstract: We use security in the Universal Composition framework as a means to study the "cryptographic complexity" of 2-party secure computation tasks (functionalities). We say that a functionality $F$ reduces to another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural "extremes" of complexity under the reduction: the trivial functionalities, which can be reduced to any other functionality; and the complete functionalities, to which any other functionality can be reduced.
In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a zero-one law for the cryptographic complexity of 2-party deterministic functionalities. Namely, every such functionality is either trivial or complete. No other qualitative distinctions exist among functionalities, under this computational assumption.
While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary reactive functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al. (TCC 2010)).
 Cryptographic Complexity Classes and Computational Intractability Assumptions

Links: full version • proceedings
Abstract: Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question.
We first aim to understand the "cryptographic complexity" of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study.
In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework.
If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.
 Complexity of Multi-Party Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation

Links: full version • proceedings
Abstract: In symmetric secure function evaluation (SSFE), Alice has an input $x$, Bob has an input $y$, and both parties wish to securely compute $f(x,y)$. We classify these functions $f$ according to their "cryptographic complexities," and show that the landscape of complexity among these functions is surprisingly rich.
We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and those which have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$.
Our main technical tool for deriving complexity separations is a powerful protocol simulation theorem which states that, even in the strict setting of UC security, the canonical protocol for $f$ is as secure as any other protocol for $f$, as long as $f$ satisfies a certain combinatorial characterization. We can then show intuitively clear impossibility results by establishing the combinatorial properties of $f$ and then describing attacks against the very simple canonical protocols, which by extension are also feasible attacks against any protocol for the same functionality.
 Towards Robust Computation on Encrypted Data

Links: proceedings • slides
Abstract: Encryption schemes that support computation on encrypted data are useful in constructing efficient and intuitively simple cryptographic protocols. However, the approach was previously limited to standalone and/or honest-but-curious security. In this work, we apply recent results on "non-malleable homomorphic encryption" to construct new protocols with Universally Composable security against active corruption, for certain interesting tasks. Also, we use our techniques to develop non-malleable homomorphic encryption that can handle homomorphic operations involving more than one ciphertext.
 Cryptographic Complexity of Multi-Party Computation Problems: Classifications and Separations

Abstract: We develop new tools to study the relative complexities of secure multi-party computation tasks (functionalities) in the Universal Composition framework. When one task can be securely realized using another task as a black box, we interpret this as a qualitative, complexity-theoretic reduction between the two tasks. Virtually all previous characterizations of MPC functionalities, in the UC model or otherwise, focus exclusively on secure function evaluation. In comparison, the tools we develop do not rely on any special internal structure of the functionality, thus applying to functionalities with arbitrary behavior. Our tools additionally apply uniformly to both the PPT and unbounded computation models.
Our first main tool is the notion of splittability, which is an exact characterization of realizability in the UC framework with respect to a large class of communication channel functionalities. Using this characterization, we can re-derive all previously known impossibility results as immediate and simple corollaries. We also complete the combinatorial characterization of 2-party secure function evaluation initiated by Canetti, Kushilevitz & Lindell (2003) and partially extend the combinatorial conditions to the multi-party setting.
Our second main tool is the notion of deviation-revealing functionalities, which allows us to translate complexity separations in simpler MPC settings (such as the honest-but-curious corruption model) to the standard (malicious) setting. Applying this tool, we demonstrate the existence of functionalities which are neither realizable nor complete, in the unbounded computation model.
 Homomorphic Encryption with CCA Security

Abstract: We address the problem of constructing public-key encryption schemes that meaningfully combine useful computability features with non-malleability. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a feature), for a specific set of allowed functions $T$, but the scheme is "non-malleable" with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine multiple ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.
 Harvesting Credentials in Trust Negotiation as an Honest-But-Curious Adversary

Links: tech report • proceedings ($)
Abstract: Need-to-know is a fundamental security concept: a party should not learn information that is irrelevant to its mission. In this paper we show that during a trust negotiation in which parties show their credentials to one another, an adversary Alice can systematically harvest information about all of a victim Bob's credentials that Alice is entitled to see, regardless of their relevance to a negotiation. We prove that it is not possible to enforce need-to-know conditions with the trust negotiation model and protocol developed by Yu, Winslett, and Seamons. We also present examples of similar need-to-know attacks with the trust negotiation approaches proposed by Bonatti and Samarati, and by Winsborough and Li. Finally, we propose possible countermeasures against need-to-know attacks, and discuss their advantages and disadvantages.
 Rerandomizable RCCA Encryption

Abstract: We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. [CRYPTO 2003]. Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a non-trivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful "Replayable Message Posting" functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme that achieves this enhancement.
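The rerandomizability property itself is easiest to see in plain ElGamal, which is rerandomizable but only CPA-secure. The sketch below is NOT the Double-strand Cramer-Shoup scheme from the paper, and its tiny parameters are wildly insecure; it only illustrates how anyone can refresh a ciphertext's randomness without the secret key or the message.

```python
import random

# Toy ElGamal over a tiny prime-order subgroup (illustration only!).
p, q, g = 467, 233, 4  # p = 2q + 1 with q prime; g = 2^2 generates the order-q subgroup

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct):
    # Anyone can multiply in a fresh "encryption of 1" -- no sk, no m needed.
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p  # c1^(p-1-sk) = c1^(-sk) mod p

ct = encrypt(42)
ct2 = rerandomize(ct)
assert decrypt(ct) == 42 and decrypt(ct2) == 42
assert ct2 != ct  # fresh-looking ciphertext, same plaintext
```

The paper's challenge is keeping exactly this malleability (replaying the same plaintext) while ruling out every other way of mauling a ciphertext.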
Other Projects
 Vamonos
 Browser-based platform for algorithm visualization [website]
 An Annotated Bibliography of Practical Secure Computation
 Reference resource for researchers in secure computation [website]
 Research Experiences for Undergraduates
 Let's work on crypto together this summer! [website]
Professional Activities
Program committees:
Other service: IACR communications secretary
Other Writings
 Secure Your Data and Compute on It, Too

XRDS: Crossroads, the ACM Magazine for Students.
Links: article
Abstract: Modern cryptography provides techniques to perform useful computations on sensitive data.
 Correspondences regarding cryptography between John Nash and the NSA

Transcribed & typeset by Mike Rosulek.
Abstract: In 1955, well-known mathematician John Nash was in correspondence with the United States National Security Agency. In these letters, Nash proposes a novel enciphering scheme. He also sets forth an important cryptographic principle that now underpins modern computational complexity theory and cryptography. In particular, he proposes a natural definition for "[security] in a practical sense": that exponential computational effort is required for an enemy to recover a secret key. Nash further conjectures that this property holds for any suitable enciphering mechanism.
These correspondences, recently declassified by the NSA, have been transcribed and typeset in this document.
 The Structure of Secure Multi-Party Computation

University of Illinois PhD dissertation, 2009.
Links: single-spaced, hyperref • official version
Abstract: Secure multi-party computation is a conceptual framework in which distrusting parties engage in a protocol to securely perform a computational task. Depending on the precise model of security, different sets of tasks admit secure protocols. We take a complexity-theoretic approach to studying the inherent difficulty of securely realizing tasks in various standard security models.
- We give the first alternate characterization of secure realizability in the framework of universally composable (UC) security. This is the first characterization in any model to consider completely arbitrary computational tasks and completely arbitrary communication channels.
- The most longstanding class of computational tasks studied are those in which two parties evaluate a deterministic function. For these tasks, we give the first complete, combinatorial characterizations of secure realizability in the passive, standalone, and universally composable security models, against computationally unbounded adversaries.
- Say that a task G has "as much cryptographic complexity" as another task F if there is a secure protocol for F that uses access to a secure implementation of G. We show that there is an infinite hierarchy of tasks with strictly increasing cryptographic complexities, with respect to computationally unbounded security. We also show that there exist tasks whose cryptographic complexities are incomparable.
- In contrast, we show that under a standard cryptographic assumption, there exist only two distinct levels of cryptographic complexity with respect to polynomial-time security. Every task either has a trivial protocol using plain communication channels, or is complete (i.e., given access to a secure implementation of this task, there are secure protocols for all other tasks). This is the first result to derive a characterization of completeness for a class of arbitrary interactive tasks.
- We give a construction of a homomorphic encryption scheme in which the allowed homomorphic operation is as full-featured as possible: namely, one can derive a correctly-distributed encryption of $f(m)$ given an encryption of unknown message $m$, for some functions $f$, yet it is computationally infeasible to generate a ciphertext that is related to other ciphertexts in any other way. Our contributions involve developing new appropriate security definitions as well as new constructions.
- We show that schemes with such powerful security guarantees can be used to build conceptually simple, efficient, UC-secure protocols for verifiable computations on encrypted data. We show protocols for two tasks related to aggregating encrypted data.
Other Talks
 Reconciling Non-malleability and Homomorphic Encryption

Crypto in the Clouds workshop, August 2009.
[slides]
Updated talk at University of Maryland cryptography seminar, January 2012. [slides]
 Zero-Knowledge Proofs, with Applications to Sudoku and Where's Waldo
 Educational talk, University of Montana, December 2008. [slides]
 The State of the Art in Program Obfuscation
 UIUC CS theory seminar, September 2006. [slides]
Personal
 My personal website, which contains some of my photography and surprisingly little else.
 My Flickr page, in case you want to see even more of my photography.