Security Against Covert Adversaries: Efficient Protocols for Realistic Adversaries

Yonatan Aumann, Yehuda Lindell
J Cryptology 2010 [pdf] [bibtex]


A model of covert adversaries is defined, which lies between the standard semi-honest and malicious models. A covert adversary is allowed to deviate from the prescribed protocol but does not wish to be caught. Informally, the "success condition" for a covert adversary is cheating without detection.

A protocol providing covert security is parameterized by a deterrence factor {$\epsilon$}, which guarantees that a cheating adversary will be detected by the honest parties with probability at least {$\epsilon$}. Note that covert security collapses to malicious security when {$\epsilon$} is overwhelming in the security parameter.
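
Restated in symbols (my phrasing of the sentence above, not the paper's formal definition): whenever a corrupted party attempts to cheat,

```latex
\[
  \Pr[\text{the honest parties detect the cheating}] \;\ge\; \epsilon
  \qquad\Longleftrightarrow\qquad
  \Pr[\text{the cheating goes undetected}] \;\le\; 1 - \epsilon .
\]
```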

Three approaches to formally defining covert security are discussed:

  • Failed-simulation: Consider an adversary that makes the real and ideal output distributions (in the standard real/ideal simulation sense) distinguishable with probability {$\Delta$}. Intuitively, higher values of {$\Delta$} mean the adversary is "cheating more." We require that the honest parties detect cheating with probability at least {$\epsilon \cdot \Delta$}.
  • Explicit-cheat: The ideal functionality is augmented in the following way. An ideal adversary can send a special "cheat" command to the ideal functionality. Upon receiving such a command, the ideal functionality hands full control of itself to the adversary, but with probability {$\epsilon$} also informs the honest parties. Note that in either case (cheating detected or undetected), the adversary has full control over the functionality, and in particular sees the honest parties' inputs.
  • Strong explicit-cheat: This formulation is similar to the above, except that the adversary gets control of the functionality only if the cheating goes undetected (i.e., in the probability-{$1-\epsilon$} event that the functionality does not notify the honest parties). Hence, when an adversary is caught cheating, the honest parties are guaranteed that no information was leaked; the sketch below contrasts the two cheat-handling variants.
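
To make the contrast concrete, here is a minimal Python sketch of how an augmented ideal functionality might handle a "cheat" command under the two explicit-cheat formulations. The function and variable names are illustrative only and do not come from the paper.

```python
import random

def handle_cheat(epsilon, honest_inputs, variant="strong"):
    """Illustrative handling of a 'cheat' command by the ideal functionality.

    Returns (detected, what_the_adversary_learns).
    """
    # The functionality notifies the honest parties with probability epsilon.
    detected = random.random() < epsilon

    if variant == "explicit":
        # Explicit-cheat: the adversary takes over the functionality (and thus
        # sees the honest parties' inputs) whether or not it was detected.
        return detected, honest_inputs
    elif variant == "strong":
        # Strong explicit-cheat: the adversary takes over only in the
        # undetected case; if caught, nothing about the honest inputs leaks.
        return detected, (None if detected else honest_inputs)
    raise ValueError("unknown variant")
```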

A simple covert two-party protocol is also described, based on the cut-and-choose approach applied to Yao's garbled circuit protocol. The sender sends {$\ell$} garbled circuits, of which a random {$\ell-1$} are checked; the remaining circuit is used for evaluation. Since only one circuit is evaluated, there is no need for input consistency checks, making this protocol considerably simpler than cut-and-choose approaches for malicious security. Furthermore, instead of computing {$f(x_1, x_2)$}, the parties compute {$f'(x_1, x_{2,1}, \ldots, x_{2,m}) = f(x_1, \bigoplus_i x_{2,i})$}, in which the receiver's true input {$x_2$} is split into {$m$} shares. This protocol achieves a deterrence factor of {$\epsilon = (1 - 1/\ell)(1 - 2^{-m+1})$} under the strong explicit-cheat formulation.
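
As a quick numerical illustration, the snippet below simply evaluates the deterrence expression above for a few choices of {$\ell$} and {$m$}; the helper name is mine and the code is not part of the protocol.

```python
def deterrence(ell: int, m: int) -> float:
    """epsilon = (1 - 1/ell) * (1 - 2^{-(m-1)}).

    ell: number of garbled circuits sent (a random ell - 1 are checked).
    m:   number of XOR shares the receiver's input x_2 is split into.
    """
    return (1 - 1 / ell) * (1 - 2 ** (-(m - 1)))

for ell, m in [(2, 2), (3, 3), (5, 5), (10, 10)]:
    print(f"ell={ell:2d}, m={m:2d}  ->  epsilon ~ {deterrence(ell, m):.3f}")
```

Even small parameters give a constant deterrence factor; for example, {$\ell = m = 3$} already yields {$\epsilon = 1/2$}.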

Categories:

SecurityModels