Hacker News

This is really damaging.

Not only will this cause other countries to put up barriers against US (and UK) services and products, it's going to affect uptake of standards developed here.

On the lighter side, a treasure hunt was just announced. Can you find one of these vulnerabilities, or evidence of the NSA having attacked a particular system to steal keys?

----

[Edit 1] Some speculation:

By careful hardware design -- and lots of it -- the NSA may be able to recover keys of sizes large enough that we would be mildly surprised, but not shocked. It's not well known that searching for many keys in parallel amortizes well -- it's much cheaper than finding each key individually. DJB has a great paper about this:

http://cr.yp.to/snuffle/bruteforce-20050425.pdf
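To make the amortization concrete, here's a toy Python sketch (a hypothetical 20-bit "cipher" and made-up keys) showing why one sweep of a keyspace can attack many targets at once -- the per-key cost drops as the target set grows:

```python
import hashlib

# Toy multi-target brute force: testing one candidate key against N targets
# costs one set lookup, so a single sweep of the keyspace attacks all N keys.
# The "cipher" is a stand-in keyed function; a real attack uses the real cipher.

KEY_BITS = 20

def encrypt(key: int, plaintext: bytes) -> bytes:
    return hashlib.sha256(key.to_bytes(4, "big") + plaintext).digest()[:8]

plaintext = b"known header"   # assume a known-plaintext setting
secret_keys = [123456, 654321, 777777]
targets = {encrypt(k, plaintext): k for k in secret_keys}

found = {}
for candidate in range(1 << KEY_BITS):   # one sweep of the keyspace
    ct = encrypt(candidate, plaintext)
    if ct in targets:                    # O(1) check against *all* targets
        found[candidate] = ct

print(sorted(found))  # recovers every target key in a single pass
```

The point is that the sweep costs the same whether you're after one key or a million, which is exactly the economy of scale the DJB paper formalizes.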

If I were looking for subverted hardware, I'd be really interested in reverse engineering Ethernet chips and BMCs. The CPU would be an obvious choice as well -- could there be some sequence of instructions that enables privilege escalation?

On protocols, the best sort of vulnerability for the NSA would be the kind that is still somewhat difficult and expensive to exploit. They want the security lowered just far enough that they can get the plaintext, but not so far that our adversaries can.

There is some history with not taking timing attacks seriously enough. Perhaps careful timing observation, which the NSA is well positioned to do, could give more of an edge than we suspect. Or perhaps you could push vendors to make their products susceptible to this kind of attack, secure in the belief that it may be difficult for others to detect.

[Edit 2]

I gave a talk that discussed what I think we as engineers should do here:

https://www.youtube.com/watch?v=c7oK59DZwR4#t=1m46s

And Phil Zimmermann and I discussed a number of these issues in a Q&A session:

https://www.youtube.com/watch?v=W42i8zCEizI#t=49m55s



I would not be at all surprised to learn that the major advance these disclosures refer to is an on-demand RSA-1024 factoring capability. RSA-1024 is already known to be unsafe (Eran Tromer estimates a seven-figure cost for a dedicated hardware cracker, which is approximately the threshold DES was at in the late '90s, when nobody believed DES was secure). On-demand offline RSA-1024 attacks would have major implications and would be a huge advance in the state of the art, but they also seem feasible given an effectively unlimited budget.
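For a rough sense of why RSA-1024 is in reach, here's a back-of-the-envelope sketch of the standard GNFS complexity heuristic expressed in bits of work (a heuristic only, not a hardware cost model like Tromer's):

```python
import math

# Rough GNFS work estimate: L_n[1/3, (64/9)^(1/3)], converted to bits.
# Constant factors and memory costs are ignored, so treat the output as
# an order-of-magnitude indicator, nothing more.
def gnfs_bits(modulus_bits: int) -> float:
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    ln_l = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_l / math.log(2)  # express the work factor in bits

for bits in (768, 1024, 2048):
    print(bits, round(gnfs_bits(bits)))
```

This puts RSA-1024 in the high-80s of bits of work, i.e. within reach of a dedicated hardware effort, while RSA-2048 lands well over 110 bits.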


That makes sense. I think it unlikely they've discovered an actual breakthrough. They do have their own fab; how many chips do you need to build to make that worthwhile? It's the US government, after all -- a machine with 10 million specialized RSA chips doesn't seem impossibly difficult, just expensive.


Governments are big, dumb animals, so make whatever you're trying to protect very expensive ($20-50 Billion range) to brute-force within usability constraints.

Btw... apart from the scrypt paper, has anyone put together a practical guide to crypto-parameter brute-force costs? (Say, volume pricing of gear and ASICs in huge quantities.)
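No pointer to a guide, but the arithmetic itself is simple. A hypothetical sketch (all throughput and dollar figures below are placeholders; swap in real volume pricing to get a meaningful answer):

```python
# Back-of-the-envelope brute-force budget calculator. The ASIC figures in
# the example are made up; only the shape of the calculation matters.

def brute_force_cost(key_bits: int,
                     keys_per_sec_per_unit: float,
                     dollars_per_unit: float,
                     seconds_budget: float) -> float:
    """Hardware dollars needed to sweep a keyspace within a time budget
    (ignores power, cooling, and failure rates)."""
    total_keys = 2 ** key_bits
    units = total_keys / (keys_per_sec_per_unit * seconds_budget)
    return units * dollars_per_unit

# Placeholder ASIC: 10^12 key trials/sec at $100/unit, one-year time budget
year = 365 * 24 * 3600
for bits in (56, 80, 128):
    print(bits, f"${brute_force_cost(bits, 1e12, 100, year):.3g}")
```

The exponential does all the work: with these (made-up) numbers, 56-bit keys cost pocket change, 80-bit keys cost millions, and 128-bit keys are out of reach for any budget.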


I think we know very well which encryption has been foiled by the NSA. This is not speculation, but quasi-certainty: 1024-bit RSA.

- Cryptographers all acknowledge 1024-bit RSA is dead [1].

- Attack cost 10 years ago was estimated to be a few million USD to build a device able to crack a 1024-bit key every 12 months [2].

- Many of the "secure" HTTPS websites use such weak key sizes [3].

- NSA had a budget of 10.8 billion USD in 2013.

Drawing a conclusion is not very hard.

[1] http://arstechnica.com/uncategorized/2007/05/researchers-307...

[2] http://www.cs.tau.ac.il/~tromer/twirl/

[3] https://www.eff.org/pages/howto-using-ssl-observatory-cloud


Some popular browsers still do not support newer versions. We tried turning this on with a newer, more secure key and ended up having downtime for some customers.


Which browsers in particular?


Android Browser on Google TV, and Java libraries hitting our APIs. Google TV and Android browsers are critical to our business.


I have written lots of Java code accessing HTTPS sites with 2048 or 3072-bit RSA. This is perfectly supported. You do not even need the Unlimited Strength Jurisdiction Policy Files to use such RSA key sizes (other algorithms are restricted).

I can't comment on Android Browser on Google TV, but I very highly doubt it fails to support 2048-bit RSA keys. If that was the case, half the HTTPS websites would be unbrowsable(!) [1]

[1] Per the EFF SSL observatory dataset, roughly 1 in 2 websites uses key lengths strictly higher than 1024 bits.


We had downtime for this, so I am 100% sure. We isolated it to the key, and reverting the cert/key back to 1024 fixed it. It was just an option on GoDaddy one of the engineers picked to generate a 2048 cert. They only offer 1024 and 2048. One key worked, the other didn't.


It must have been something else that broke it, not the key size. Android Browser definitely supports 2048-bit RSA certs. Maybe a root cert was absent from the browser (GoDaddy would be using a different root for 2048-bit certs?). Or maybe intermediate certs were missing in the certificate path. It sounds like your engineer did not spend much time trying to figure out what aspect of SSL/X.509 was actually causing the problem.


There were no problems with accessing the site with Chrome or other modern browsers. What you described would have been a problem with all browsers, and anyway GoDaddy supplies all the files you need in a single zip file, including the intermediate certs. We did simply revert the SSL key, once we isolated that to be the problem. There is no pressing business need for a 2048-bit key.


That is incorrect. Mobile browsers, the JVM, etc, notoriously lag behind desktop browsers when it comes to updating the list of root certs (and intermediate certs too, but that seems irrelevant in your case). The consequence is that a site can be accessed from the desktop, but not from a mobile.

It was a recurrent problem at a previous job with a Java app accessing HTTPS sites. We could not always update the JVM (which comes with the most recent list of roots in "cacerts"), so we had to develop a solution to push the latest cacerts truststore to our application. Problem fixed.

Do you know if Android Browser users reported at least an ability to click through an SSL warning to get to the site?


Yes, now I remember, I think you are correct. They were able to click through the SSL warning, but because we use socket.io they had additional problems. Some of our customers do not employ full-time engineers, and whatever scripts they were using with our API relied on libraries that couldn't handle the SSL cert change and could not easily be updated. We couldn't ask our customers, mostly sales- and customer-service-oriented directors, to handle a complicated certificate change either.


Very unlikely. Virtually all browsers support 2048-bit RSA. Keys larger than 2048 bits, however, are not always supported (which is probably what you tried).


I am confused. When I see HN or Facebook certs, they show 128-bit encryption in the browser box. 128 bits seems pretty low.


In that case, you are seeing the size of the symmetric key being used. The bigger numbers mentioned above (1024, 2048, etc) are referring to the size of the public/asymmetric keys. The public keys are only used to set up the initial exchange of symmetric keys, which are then used to secure your browser's encrypted connection.


This is the size of the AES key. AES is a symmetric algorithm, and 128 bits is still considered solid there, although the trend is moving towards 256 bits. What we're talking about here is the key size of RSA, which is an asymmetric algorithm. If you don't know the difference, go find a basic crypto tutorial. As you can read above, 1024-bit RSA is probably broken. I wouldn't trust 2048-bit too much either. Also, progress in breaking RSA is happening a lot faster than with AES.

In the context of SSL, an asymmetric algorithm like RSA is used to exchange symmetric keys, which are used afterwards.
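For anyone following along, here's a toy sketch of that hybrid shape: RSA (with absurdly small demo primes) transports a random symmetric key, and a hash-based XOR stream stands in for AES as the bulk cipher. Purely illustrative -- never use any of this for real cryptography:

```python
import hashlib
import secrets

# Tiny demo RSA keypair (real TLS uses 2048+ bit moduli)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: SHA-256-derived keystream XORed with the data
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# "Client" picks a random symmetric key and sends it under the public key
sym_key = secrets.randbelow(n)
wrapped = pow(sym_key, e, n)        # asymmetric step: encrypt to (n, e)

# "Server" unwraps it with the private exponent; both sides now share it
unwrapped = pow(wrapped, d, n)      # asymmetric step: decrypt with d
assert unwrapped == sym_key

# All further traffic uses the fast symmetric cipher
msg = b"hello over the toy channel"
ct = xor_stream(sym_key.to_bytes(2, "big"), msg)
assert xor_stream(unwrapped.to_bytes(2, "big"), ct) == msg
```

The expensive asymmetric operation happens once per handshake; the cheap symmetric cipher carries everything after that, which is why the browser reports the 128-bit symmetric size.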


That said, 256-bit isn't really that much of an improvement for AES: it's favored since that's the US standard for Top Secret classification, but in practice any attack that brings down AES-128 will almost certainly get AES-256 as well. I've switched most of my SSH servers over to default to 128-bit AES ciphers, since the difference in difficulty seems small enough that it won't matter if someone actually tries targeting it and can succeed.


128 talks about symmetric key encryption, not RSA.

I'm not sure what the crypto best practice is regarding key strength for 128 bit for symmetric crypto, but presumably it would depend on the cipher used.


Probably AES, not RSA. IIRC a 128-bit AES key is about equivalent in security to a 2048-bit RSA key.


2048-bit RSA is usually described as roughly equivalent in security margin to a 112-bit symmetric key, and 3072-bit RSA to a 128-bit symmetric key.
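Those figures match the comparable-strength table in NIST SP 800-57 Part 1. A small lookup sketch:

```python
# NIST SP 800-57 Part 1 comparable-strength table:
# RSA/DH modulus size vs. symmetric-equivalent security bits.
RSA_TO_SYMMETRIC = {
    1024: 80,
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

def symmetric_equivalent(rsa_bits: int) -> int:
    return RSA_TO_SYMMETRIC[rsa_bits]

print(symmetric_equivalent(2048))  # 112
print(symmetric_equivalent(3072))  # 128
```

Note how steeply the RSA sizes grow: matching a 256-bit symmetric key takes a 15360-bit modulus, which is one reason elliptic-curve keys are attractive at high security levels.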


The article you linked to in [1] doesn't explicitly say that generalized 1024-bit RSA is dead. They found a way to exploit a special case key (Mersenne number keys). Searching around the internet, I found a bunch of articles about supposed cracks, but they all involved additional sources of information. I'm not doubting that the NSA has found ways to crack all sorts of crypto, but is there really a known way to break 1024-bit RSA without other special qualifiers?


When asked whether 1024-bit RSA keys are dead, Lenstra said: "The answer to that question is an unqualified yes."

Generalized 1024-bit RSA keys are dead. Lenstra is making a comment on generalized 1024-bit RSA keys in this sentence. Not on Mersenne number factorization (which is, yes, the main topic of this article).

My link [2] tells you concretely how to break 1024-bit RSA and estimates the cost to $10M, well within NSA's capabilities.


Considering http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/ I wouldn't bet on "deep encryption" either ...


There are two very different questions:

1. What are the chances that crypto will keep the NSA out of your communications generally? and

2. What are the chances that crypto will keep the NSA out of your communications if they really, really want to read them?

Those questions are very different. I wonder, for example, what would happen if all internet traffic were encrypted end to end with something as weak as DES. Could the NSA brute-force it? Of course. Could they brute-force all of it? Doubtful.

One of the clear in-between-the-lines things in the article is that crypto is still problematic to the point where the NSA prefers to attack endpoints and get access that way instead of attacking the crypto itself.


One of the vulnerabilities was already discovered by researchers in 2007: http://rump2007.cr.yp.to/15-shumow.pdf

At the time, it wasn't clear whether this was a deliberate backdoor or an accident, but it was proven that there was a possibility of a secret key that would allow someone to predict future values of the pseudorandom number generator based on previous values. Now it looks pretty clear that it was a deliberate backdoor.

This really reduces trust in US based cryptographic standards. And US based cryptographic hardware, as they mention in the article that they convinced hardware manufacturers to insert backdoors for hardware shipped overseas.
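The structure of the suspected backdoor is easy to demo. Here's a toy analogue in a multiplicative group mod p instead of an elliptic curve, with made-up constants and without Dual_EC's output truncation: whoever chose the two public constants so that one is a known power d of the other can turn a single raw output back into the generator's internal state.

```python
# Toy Dual_EC_DRBG-style trapdoor. The generator emits Q^state and
# advances to P^state. If the constants were chosen as P = Q^d, then
# output^d = Q^(state*d) = P^state = the next internal state.

p = 2**61 - 1          # small demo prime (a Mersenne prime, convenient)
Q = 5
d = 123456789          # the secret trapdoor exponent
P = pow(Q, d, p)       # the published "innocent-looking" constant

def step(state):
    output = pow(Q, state, p)      # what the consumer sees
    next_state = pow(P, state, p)  # hidden internal update
    return output, next_state

# Victim runs the generator
state = 42424242
out1, state = step(state)
out2, state = step(state)

# Attacker: knowing d, one observed output reveals the next state...
recovered_state = pow(out1, d, p)
predicted, _ = step(recovered_state)
assert predicted == out2           # ...and every output after it
```

The real attack described by Shumow and Ferguson works the same way on the elliptic-curve points P and Q in the standard, with some extra guessing to undo the output truncation.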


This is almost definitely not "one of the vulnerabilities" implicated in the story today, because nobody uses CSPRNGs based on Elliptic Curve.


so what was the vulnerability found by ms in 2007 that they are referring to? (search for 2007 in single page version at http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet...)

edit: reading in more detail around there, i am pretty sure that section of the article is referring to the CSPRNG vulnerability above. the article covers a lot of ground and not all of it is about problems with ssl. that particular section seems to be arguing that the nsa is trying to put backdoors into standards wherever it can.


I don't know. I'm just saying, weakening a CSPRNG design that nobody uses or is ever likely to use (it's extremely expensive) is not a particularly meaningful action.


It sounds an awful lot like that's the one the New York Times was describing. Can you think of any other standard published in 2006 by NIST in which two Microsoft researchers discovered a flaw in 2007? That sounds exactly like Dual_EC_DRBG.

> Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

> Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology, the United States’ encryption standards body, and later by the International Organization for Standardization, which has 163 countries as members.

> Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

> “Eventually, N.S.A. became the sole editor,” the memo says.

Now, that may not have been an effective technique, as you point out it's so slow that no one is ever going to use it, and this vulnerability was discovered not long after it was published.

So, that's obviously not a vulnerability that they are actively exploiting. If they are actively exploiting a vulnerability that they introduced, it must be something else. It wasn't clear from the article that that's actually the case; it may be that the vulnerabilities they are exploiting are ones they've found, not introduced deliberately.

But it does appear to be an example of a vulnerability that they were able to get standardized, in the hopes of being able to exploit it. Until now, it has been only speculation that it was a deliberate vulnerability, but it now seems clear that it was.


Sure. I think we agree. If "it" is a crypto weakness they are actually exploiting, "it" is not Dual-EC DRBG.


Ah, yes, I wasn't trying to say they were exploiting that particular vulnerability. Just that we now have better evidence that that really was a (rather poor) attempt to subvert standards to make them easier to decrypt.

The NSA seems to be really divided between SIGINT and COMSEC. COMSEC wants to provide good, strong encryption, that can help secure US government and corporate communication. SIGINT wants to be able to read everyone's traffic.

For example, they changed the DES s-boxes in a way that made it more secure against differential cryptanalysis. They've released SELinux. There is a part of the NSA that does actually try to make encryption standards stronger.

But then there's the part that advocates for the Clipper chip, advocates for controls on exporting strong crypto, or strongarms NIST into standardizing Dual-EC DRBG. And that part does real damage, as everyone suffers from the weak export crypto (either people overseas have to work on strong crypto, or products are released with weak or no crypto because regulatory compliance is too complicated), or people stop trusting US software and hardware.

The NSA seems to be doing some real damage to technology companies in the US. I had thought that they had gotten better about it, after they gave up on the clipper chip and lifted most of the export controls, but it looks like I was wrong, they've just decided to take more covert routes to do the same thing, with the hope that none of the tens or hundreds of thousands of people who could find out about it would leak that information.


Nitpick: they changed DES's S-boxes. DSA doesn't have S-boxes. Skepticism about NSA's involvement in any crypto standard (a decade ago!) led NIST to document precisely the mechanism used to generate DSA's parameters.

I think maybe it's the fact that I started in the industry during the era of Clipper that stuff like this doesn't faze me much.


Gah. I even noticed that typo while writing it, then forgot about it after having edited another part of my comment. Yes, I meant DES, not DSA.


As you may well know, the NSA has its own ciphers (Suite A) it uses for top secret classified traffic, which to me is positive proof you can't trust anything they recommend (AES) - when they don't even use it themselves.


Not positive proof -- merely suggestive. Which is true of a lot of things in the secret world of the intelligence services.

The more you use a secret cipher, the easier it is to break. It is simply good operational practice to use a different cipher for a small fraction of communications -- namely, the most secret ones. Just like certain antibiotics are reserved for drug-resistant organisms. You don't want it to lose its effectiveness through overuse.

First, security-by-obscurity does indeed buy you some additional time. Because the cipher is secret, your opponent has to figure out the algorithm as well as the attack.

Second, this reduces the amount of traffic that the opponent can analyze. For example, suppose that only 1% of messages use Suite A, and 99% of messages use Suite B. With fewer messages to analyze, the job of breaking the cipher becomes much harder.

Third, the reduced volume also makes known-plaintext attacks more difficult. Especially if you avoid committing the cardinal sin of repeating the same message using two different ciphers.


I'm not sure that Suite A is actually stronger than Suite B. In fact, it may be weaker, for practical reasons (efficiency of encrypting high-bandwidth streams in resource-constrained devices), and so they are relying on an additional layer of security-through-obscurity to help keep it safe for longer.

There is some information known about some of the algorithms. Wikipedia has pages on BATON https://en.wikipedia.org/wiki/BATON and SAVILLE https://en.wikipedia.org/wiki/SAVILLE. You may notice that these are frequently used for hardware implementation in radios, smart cards, encrypting video streams, etc; devices that are probably fairly resource constrained, and would be hard to replace with new hardware if attacked.

If you look at the description of BATON, it has a 96 bit Electronic Code Book mode. Yes, ECB, the one that is famous for leaking information, as you can tell which blocks are identical and get a good deal of information out of that.
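The leak is easy to see even with a stand-in block cipher: any deterministic per-block encryption preserves the repetition pattern of the plaintext blocks. A toy sketch (the keyed hash below is just a placeholder; real ECB-mode AES leaks in exactly the same way):

```python
import hashlib

# Toy ECB demonstration: equal plaintext blocks map to equal ciphertext
# blocks, so the repetition structure of the message survives encryption.

KEY = b"demo key"

def ecb_encrypt_block(block: bytes) -> bytes:
    # Stand-in keyed per-block function (not invertible; pattern is the point)
    return hashlib.sha256(KEY + block).digest()[:8]

plaintext = b"ATTACK! ATTACK! RETREAT!ATTACK! "   # four 8-byte blocks
blocks = [plaintext[i:i + 8] for i in range(0, len(plaintext), 8)]
ciphertext = [ecb_encrypt_block(b) for b in blocks]

# Repeated plaintext blocks show up as repeated ciphertext blocks
print([ciphertext.index(c) for c in ciphertext])  # → [0, 0, 2, 0]
```

An eavesdropper learns which blocks repeat without decrypting anything, which is the famous "ECB penguin" problem applied to a 96-bit block.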

But even with fairly efficient hardware implementations, adversaries have been able to use off-the-shelf software to intercept Predator drone video feeds because encryption was disabled for performance reasons: http://www.cnn.com/2009/US/12/17/drone.video.hacked/index.ht...

The NSA has approved both Suite A and Suite B for top-secret material. I really don't think that they have any worries about the security of Suite B (though as Schneier points out, you may want to be a bit paranoid about their elliptic curves, as it's possible that they have ways of breaking particular curves that other people don't, like they did with the Dual EC DRBG that they promoted). I suspect that Suite A is around for legacy reasons, as they have been implementing it for longer than Suite B has existed and many of the implementations are in hardware or otherwise difficult to update.


They use AES when dealing with "outsiders", but I have trouble believing that they use it internally (and instead use the Suite A ciphers -- it's impossible for others to break them if they don't know anything about them, right?)


> (and instead use the Suite A ciphers -- it's impossible for others to break them if they don't know anything about them, right?)

It is possible to break an unpublished cipher. Just more difficult, because you've got to figure out the algorithm as well as the key. As long as it is similar to existing algorithms, you can try and look for differences.

For example, American cryptanalysts broke the Japanese Purple cipher during World War II entirely from encrypted messages. It was only at the end of the war that they managed to recover parts of one machine from the Japanese embassy in Berlin. No complete machine was ever found.

(In contrast, Enigma machines were captured, so cryptanalysts could directly examine the mechanism and use this knowledge to look for weaknesses.)

Of course, if the algorithm is completely novel, and bears no resemblance even to any principle used in published cipher, then it's a lot more secure. It would be hard to even begin to analyze it.


That said, it's unlikely the NSA has truly novel algorithms. They recruit from the general public like everyone else. Their principal advantage is that they're big (working for the NSA is appealing) and can classify in-house breakthroughs.


> The NSA seems to be really divided between SIGINT and COMSEC.

Anybody care to guess which group was responsible for the FUBAR that gave Snowden the keys to the kingdom?


Heh. It would be funny if people in COMSEC actually allowed this material to be leaked because they were disgusted about SIGINT putting so many vulnerabilities into publicly available crypto, and wanted to let it be revealed to stop that practice.

Unlikely, though. More likely that Snowden was just acting on his own. And he didn't really have "the keys to the kingdom"; just more access to a fileserver that had lots of PowerPoints on it than he should have had. If you note, almost everything that has leaked so far is PowerPoints where various branches of the NSA describe to each other and other government agencies what capabilities they have, but not the actual details of those capabilities. He probably had access to some fileserver used by the higher level executives at the NSA, but they do compartmentalize information and as they mentioned were very secretive about exactly what those vulnerabilities consist of, so there's a good chance that he didn't have access to systems where that was described.


This may be a case of the Times assuming that since 1.5 rounds to 2, 1.5 + 1.5 = 4. "The NSA breaks crypto" + "The NSA backdoored Dual-EC DRBG" = "The NSA breaks crypto via backdoored Dual-EC DRBG".


Not a crypto expert at all, but did they knew in advance that nobody would use it? Otherwise it could just be a failed attempt.


I don't know what they expected, but Dual-EC is self-evidently noncompetitive.


A RNG that is reducible to a different believed-hard problem has possible features, so it's not like there could never be a reason for someone to choose this generator. What we could be seeing is the discovery of one failed attempt of a shotgun approach to promulgate insecure primitives. It's hard to know what will happen to become commercially successful, so spray and pray.

Something this blatant does seem like a severe misstep, but perhaps what led to discovery of this case is the wide body of public knowledge on number theoretic crypto. The energy of the public sphere seems mostly devoted to studying problems with interesting mathematical structure. Symmetric crypto has been around a lot longer, and is sufficient for state security purposes, so one would expect the NSA to have a deep analytic understanding of it (hence the differential analysis olive branch). It's not hard to imagine that they'd have ways of creating trapdoor functions out of bit primitives, generating favorable numbers with plausibly-impartial explanations, etc.


> A RNG that is reducible to a different believed-hard problem has possible features

I think you got it backwards... shouldn't you reduce hard problems down to the problem whose difficulty you're trying to understand?


Yep, I misspoke. I simply meant 'is based on', and shouldn't have used big words so cavalierly.


Nobody uses them because they came out of the NSA with little precedent in the open literature, and independent analysis quickly uncovered this vulnerability.


Also nobody used it because it's a CSPRNG that requires bignum multiplication.


Imagine this were proved in France; it would add some weight to the investigation and case against the US, especially if they could prove personal encrypted information was stolen!

http://www.reuters.com/article/2013/08/28/us-usa-security-fr...


> but not so far that our adversaries can.

Please clarify what you mean by "our".

Please clarify what you mean by "adversaries".


Come now, we know enough about the NSA at this point to know that

our, adversaries = America, !America

right?


The NSA are "our" adversaries.


I think that "our = {NSA,U.S. government}, adversaries = !our" is more accurate nowadays.

If you are not the U.S. government, you are their adversary (even if you are a U.S. business or citizen).


In this context, an adversary is any encrypted data they want to be able to decrypt. Anything.


Adversary in security means anyone you don't want reading your data.


In this PDF [1], they discuss security issues in Intel chips. They mention strange responses from Intel. Also, it's possible, but very hard, to exploit those issues, which is optimal in this case.

[1] http://pavlinux.ru/jr/Software_Attacks_on_Intel_VT-d.pdf


Forgive me if I don't open a PDF from a .ru domain. (and yes, I know how silly that response is)


If you were actually interested in the content, there are countless ways to open a PDF safely.

* Open the PDF in a non-Adobe reader such as Foxit or Sumatra with JavaScript disabled

* Both FF and Chrome's internal PDF viewers ignore JS

* You can preview a PDF in Google drive

* Open the PDF in a sandboxed VM.

Judging by your history of spamming one-line pointless comments, you probably already know this.


You are right. I didn't know that PDF.js in Firefox was somewhat safer, though.



