Calculating encryption schemes’ theoretical security guarantees eases comparison, improvement.

Larry Hardesty | MIT News Office
October 30, 2014

Most modern cryptographic schemes rely on computational complexity for their security. In principle, they can be cracked, but that would take a prohibitively long time, even with enormous computational resources.

There is, however, another notion of security — information-theoretic security — which means that even an adversary with unbounded computational power could extract no useful information from an encrypted message. Cryptographic schemes that promise information-theoretic security have been devised, but they’re far too complicated to be practical.
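The textbook example of information-theoretic security is the one-time pad: XORing a message with a truly random key of the same length leaves even a computationally unbounded eavesdropper with no information about the plaintext, though the requirement of a fresh, message-length key for every transmission is part of what keeps such schemes out of everyday use. A minimal illustrative sketch (not drawn from the Allerton papers):

```python
import secrets

def one_time_pad_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt by XORing with a fresh random key as long as the message.

    With a truly random, never-reused key, the ciphertext is statistically
    independent of the plaintext: every possible plaintext of this length is
    equally consistent with what the eavesdropper sees (perfect secrecy).
    """
    key = secrets.token_bytes(len(message))  # key must match the message length
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def one_time_pad_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = one_time_pad_encrypt(b"attack at dawn")
assert one_time_pad_decrypt(ct, key) == b"attack at dawn"
```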

In a series of papers presented at the Allerton Conference on Communication, Control, and Computing, researchers at MIT and Maynooth University in Ireland have shown that existing, practical cryptographic schemes come with their own information-theoretic guarantees: Some of the data they encode can’t be extracted, even by a computationally unbounded adversary.

The researchers show how to calculate minimum security guarantees for any given encryption scheme, which could enable information managers to make more informed decisions about how to protect data.
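One standard way to express such a guarantee is information-theoretic: measure how much an adversary's uncertainty about the hidden data shrinks after seeing whatever the scheme exposes. The toy calculation below illustrates that general idea — it is not the specific metric from the papers — by computing the mutual information between a secret and an observation; zero bits means the observation reveals nothing, no matter how much computation the adversary throws at it.

```python
import math

def mutual_information(joint: dict[tuple[str, str], float]) -> float:
    """Mutual information I(S; O) in bits, from a joint distribution P(secret, observation)."""
    p_s, p_o = {}, {}
    for (s, o), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p
        p_o[o] = p_o.get(o, 0.0) + p
    return sum(p * math.log2(p / (p_s[s] * p_o[o]))
               for (s, o), p in joint.items() if p > 0)

# A perfectly hiding channel: the observation is independent of the secret.
independent = {("0", "a"): 0.25, ("0", "b"): 0.25, ("1", "a"): 0.25, ("1", "b"): 0.25}
print(mutual_information(independent))  # 0.0 bits -> nothing leaks

# A leaky channel: the observation usually matches the secret.
leaky = {("0", "a"): 0.4, ("0", "b"): 0.1, ("1", "a"): 0.1, ("1", "b"): 0.4}
print(mutual_information(leaky))        # ~0.28 bits of leakage
```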

“By investigating these limits and characterizing them, you can gain quite a bit of insight about the performance of these schemes and how you can leverage tools from other fields, like coding theory and so forth, for designing and understanding security systems,” says Flavio du Pin Calmon, a graduate student in electrical engineering and computer science and first author on all three Allerton papers. His advisor, Muriel Médard, the Cecil E. Green Professor of Electrical Engineering and Computer Science, is also on all three papers; they’re joined by colleagues including Ken Duffy of Maynooth and Mayank Varia of MIT’s Lincoln Laboratory.

The researchers’ mathematical framework also applies to the problem of data privacy, or how much information can be gleaned from aggregated — and supposedly “anonymized” — data about Internet users’ online histories. If, for instance, Netflix releases data about users’ movie preferences, is it also inadvertently releasing data about their political preferences? Calmon and his colleagues’ technique could help data managers either modify aggregated data or structure its presentation in a way that minimizes the risk of privacy compromises.
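The same bookkeeping applies to the privacy question: if a released attribute is statistically correlated with a hidden one, an observer can sharpen its guess about the hidden attribute by ordinary Bayesian inference. The sketch below uses made-up numbers purely for illustration, showing how releasing one attribute shifts the posterior over another.

```python
# Hypothetical joint distribution over a released attribute (favorite genre)
# and a hidden attribute (group "A" or "B"); the numbers are invented.
joint = {
    ("documentary", "A"): 0.30, ("documentary", "B"): 0.10,
    ("action",      "A"): 0.20, ("action",      "B"): 0.40,
}

def posterior(joint: dict, released_value: str) -> dict:
    """P(hidden | released = released_value) by Bayes' rule."""
    total = sum(p for (r, _), p in joint.items() if r == released_value)
    return {h: p / total for (r, h), p in joint.items() if r == released_value}

# Before any release, both groups are equally likely (0.5 / 0.5).
print(posterior(joint, "documentary"))  # {'A': 0.75, 'B': 0.25} -- the release shifts beliefs
```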