Information theory — the discipline that gave us digital communication and data compression — also put cryptography on a secure mathematical foundation. Since 1948, when the paper that created information theory first appeared, most information-theoretic analyses of secure schemes have depended on a common assumption. Unfortunately, as a group of researchers at MIT and the National University of Ireland (NUI) at Maynooth demonstrated in a paper presented at the recent International Symposium on Information Theory, that assumption is false.
“We thought we’d establish that the basic premise that everyone was using was fair and reasonable,” says Ken Duffy, one of the researchers at NUI. “And it turns out that it’s not.” On both that paper and a follow-up, Duffy is joined by his student Mark Christiansen; Muriel Médard, a professor of electrical engineering at MIT; and her student Flávio du Pin Calmon.
The problem, Médard explains, is that information-theoretic analyses of secure systems have generally used the wrong notion of entropy: Shannon entropy, which measures a source’s average unpredictability. But in cryptography, the real concern isn’t with the average case but with the worst case. A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file in order to begin to deduce further correlations. In the years since Shannon’s paper, information theorists have developed other notions of entropy, some of which give greater weight to improbable outcomes. Those, it turns out, offer a more accurate picture of the problem of codebreaking.
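To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not from the papers) contrasting Shannon entropy, the average-case measure, with min-entropy, a worst-case measure determined entirely by the single most likely outcome; the skewed distribution is hypothetical.

```python
# A minimal sketch contrasting Shannon entropy (average-case
# uncertainty) with min-entropy (worst-case guessability, set by the
# single most likely outcome). The skewed distribution is made up.
import math

def shannon_entropy(p):
    """H(P) = -sum(p_i * log2(p_i)): average uncertainty per draw."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    """H_inf(P) = -log2(max(p_i)): uncertainty of the likeliest outcome."""
    return -math.log2(max(p))

# A password-like source: one very common choice, many rare ones.
skewed = [0.5] + [0.5 / 1023] * 1023

print(f"Shannon entropy: {shannon_entropy(skewed):.2f} bits")  # ~6.00
print(f"Min-entropy:     {min_entropy(skewed):.2f} bits")      # 1.00
```

For a uniform distribution the two measures coincide; the more skewed the source, the further min-entropy falls below Shannon entropy, and that gap is exactly what a guessing attacker exploits.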
Read more here…
FOLLOWUP:
QUOTE
So the article is fluff; the details can be found in the linked paper. The gist of it is a refutation of the following assertion: if you have a set of symbols chosen independently from identical distributions (i.i.d.) and subjected to some kind of coding, the result can be approximated as a uniform distribution.
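As a concrete reading of that assertion, here is a minimal sketch (my own, with an arbitrary bias and length, and ignoring the coding step) showing why i.i.d. draws from a biased source are not uniform: an individual string's true probability can differ exponentially from the uniform weight 2^(-nH) the approximation assigns.

```python
# A minimal sketch of the disputed approximation: n i.i.d. draws from
# a biased source are sometimes modeled as uniform over ~2^(n*H)
# typical strings, each with weight 2^(-n*H). The true probability of
# an individual string can differ from that weight exponentially in n.
# The bias p and length n below are arbitrary choices of mine.
import math

p, n = 0.7, 64                      # P(symbol 0) = 0.7, string length 64
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # entropy per symbol

uniform_weight = 2 ** (-n * H)      # weight the approximation assigns
true_top = p ** n                   # true probability of the all-zeros string

print(f"H = {H:.4f} bits/symbol")
print(f"uniform approximation per string: {uniform_weight:.3e}")
print(f"true P(most likely string):       {true_top:.3e}")
print(f"ratio (grows exponentially in n): {true_top / uniform_weight:.3e}")
```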
The paper asserts, with a few citations to some examples, that this is a common cryptographic assumption. It is, as far as I can tell from reading the literature and talking to other practitioners, not a common assumption at all. In fact, in standard encryption systems, we assume that the plaintext is chosen with a known distribution that can be arbitrary (indeed, attacker-chosen), and keys are chosen randomly.
In practice, keys are not chosen truly at random; they are generated by cryptographically secure random number generators. Those can fail, but not in the way the paper is talking about.
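For readers outside the field, a minimal sketch of that distinction using Python's standard library (the example itself is mine, not the commenter's):

```python
# A minimal sketch of the distinction: Python's `random` module is a
# deterministic Mersenne Twister (predictable to an attacker who
# observes enough output), while `secrets` draws from the operating
# system's CSPRNG and is the appropriate source for key material.
import random
import secrets

weak_key = random.getrandbits(128)    # fine for simulations, NOT for keys
strong_key = secrets.token_bytes(16)  # 128 bits from the OS CSPRNG

print(f"weak:   {weak_key:032x}")
print(f"strong: {strong_key.hex()}")
```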
Certain papers, perhaps including the cited ones about biometrics and passwords, might make this erroneous assumption, but it's not common and certainly doesn't relate to what most non-practitioners would consider "encryption."
Moreover, it appears that you still can't make guesses about messages in polynomial time with this technique; you can just make them in faster exponential time. Whether this matters in practice depends highly on how the assumption was used. Not having read the papers which do make this assumption, I can't say for sure, but if doing this breaks those systems in a practical sense, the schemes likely had other issues.
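As a rough, small-scale illustration of "faster exponential time" (my own construction; the bias, length, and target success probability are arbitrary), the following sketch counts the guesses an attacker needs when trying strings in decreasing order of probability and compares that with the uniform approximation's prediction:

```python
# A rough illustration of "faster exponential, not polynomial": count
# the guesses an optimal attacker needs -- trying strings in
# decreasing order of probability -- to reach a target success
# probability, versus the uniform approximation's prediction of about
# target * 2^(n*H) guesses. All parameters are arbitrary choices.
import math
from itertools import product

p, n, target = 0.7, 16, 0.5

# True probability of every n-bit string from the biased i.i.d. source,
# sorted most likely first (feasible here because 2^16 is small).
probs = sorted(
    (p ** s.count(0) * (1 - p) ** s.count(1)
     for s in product((0, 1), repeat=n)),
    reverse=True,
)

cumulative, guesses = 0.0, 0
for q in probs:
    cumulative += q
    guesses += 1
    if cumulative >= target:
        break

H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(f"optimal order: {guesses} guesses for {target:.0%} success")
print(f"uniform approximation: ~{target * 2 ** (n * H):.0f} guesses")
# Ordering guesses by probability beats the uniform prediction, yet
# both counts still grow exponentially with n -- the comment's point.
```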
A better title for the article would be: a few cryptographers made some dumb mistakes. Those mistakes are neither pervasive nor of massive consequence.
UNQUOTE
Read more here: http://crypto.stackexchange.com/questions/9741/mit-says-mathematical-theory-behind-encryption-is-wrong-what-are-the-consequen