Computer Science/Discrete Mathematics Seminar I

Shuffling is Universal: Statistical Additive Randomized Encodings for All Functions

The shuffle model is a widely used abstraction for non-interactive anonymous communication. It allows $n$ parties holding private inputs $x_1,\dots,x_n$ to simultaneously send messages to an evaluator, so that the messages are received in a random order. The evaluator can then compute a joint function $f(x_1,\dots,x_n)$, ideally while learning nothing else about the private inputs. The model has become increasingly popular both in cryptography, as an alternative to non-interactive secure computation in trusted setup models, and even more so in differential privacy, as an intermediate between the high-privacy, little-utility {\em local model} and the little-privacy, high-utility {\em central curator model}.
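To make the model concrete, here is a minimal Python sketch of the classic "split-and-mix" protocol for secure summation in the shuffle model (this toy example is for illustration and is not taken from the talk): each party splits its input into random additive shares, the shuffler mixes all shares from all parties, and the evaluator learns only the total.

```python
import random

Q = 2**31 - 1  # toy modulus; messages live in the additive group Z_Q (illustrative choice)

def encode(x, num_shares=3):
    """Split a private input into random additive shares mod Q.

    Any proper subset of a party's shares is uniformly random, so once
    the shuffler mixes everyone's shares together without sender labels,
    individual inputs are hidden; only the overall sum survives."""
    shares = [random.randrange(Q) for _ in range(num_shares - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def shuffle_model_sum(inputs):
    # Each party sends its shares through the anonymous shuffler.
    messages = [s for x in inputs for s in encode(x)]
    random.shuffle(messages)   # the shuffler: random order, no sender identities
    return sum(messages) % Q   # the evaluator recovers only the sum

assert shuffle_model_sum([17, 4, 23]) == 44
```

This protocol handles only the sum function; the question the talk addresses is what happens for general functions $f$.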

The main open question in this context is which functions $f$ can be computed in the shuffle model with {\em statistical security.} While general feasibility results were obtained using public-key cryptography, the question of statistical security has remained elusive. The common conjecture has been that even relatively simple functions cannot be computed with statistical security in the shuffle model.

We refute this conjecture, showing that {\em all} functions can be computed in the shuffle model with statistical security. In particular, any differentially private mechanism in the central curator model can also be realized in the shuffle model with essentially the same utility, while the evaluator learns nothing beyond the central-model result.


This feasibility result is obtained by constructing a statistically secure {\em additive randomized encoding} (ARE) for any function. An ARE randomly maps individual inputs to group elements whose sum only reveals the function output.
Similarly to other types of randomized encodings of functions, our statistical ARE is efficient for functions in $NC^1$ or $NL$. Alternatively, we get a computationally secure ARE for all polynomial-time functions using a one-way function. More generally, we can convert any (information-theoretic or computational) ``garbling scheme'' to an ARE with a constant-factor size overhead.

Joint work with Saroja Erabelli, Rachit Garg, and Yuval Ishai.

Date & Time

May 18, 2026 | 11:00am – 12:00pm
More: https://www.ias.edu/math/events/computer-sciencediscrete-mathematics-seminar-i-626

Location

West Lecture Hall and Remote Access

Speakers

Nir Bitansky, New York University