This is a good example of what can go wrong when you try to use too much cryptography. When designing a protocol/application, you should first try to do it without crypto, then with hashes/symmetric crypto, and only as a last resort public key crypto (e.g. signatures, RSA), since the more crypto you use the more things can go wrong.
Here, ACME was using signatures for its challenges when it could have gotten by with no crypto at all (just putting the account ID and CA name in the challenge would have been secure, and easy to analyze) and got tripped up by a counter-intuitive property of signatures (signatures do not uniquely identify a person).
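A minimal sketch of that crypto-free design (names and format here are illustrative, not the real ACME wire format): the challenge body names the account and CA directly, so there is no signature for an attacker to re-bind to a different key.

```python
# Hypothetical no-crypto challenge body, per the design suggested above.
# The account ID and CA name are baked into the content the server must publish,
# so the file is only meaningful for this account at this CA.
def challenge_body(token: str, account_id: str, ca_name: str) -> str:
    return f"{token}.{account_id}.{ca_name}"

body = challenge_body("evaGxfADs6pSRb2LAv9IZ", "acct-12345", "letsencrypt.org")
```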
These things are hard to get perfect, and attackers generally have more resources. I have a question though: wouldn't encrypting the data in .well-known/acme-challenge/some_file with an LE public key (this can be loaded out of band to prevent MitM) prevent this signature attack?
Maybe. But why would you try that when you can just not use signatures, which were the wrong tool in the first place? My point is that you usually need less crypto, not more. Trying to fix fundamental problems by tacking on more crypto might make a protocol more secure, but it definitely will make the protocol harder to analyze, which makes it harder to find problems.
I'm implementing the ACME protocol to automatically renew certificates and the claims in the article don't make sense to me.
In the attack by Eve, it is claimed that Eve recovers the account. Unless she knows the private key of the account, this is not possible because there is no function in the ACME protocol to recover an account.
The attack also supposes that the special file stored in `/.well-known/acme-challenge/` is downloaded by Eve. But this file is usually deleted automatically after the certificate renewal is completed. It's not even clear from the explanation what Eve could do with this file.
The whole security of the ACME protocol relies on the assumption that nobody except the owner of the domain name can return specially crafted data when `http://<domain>/.well-known/acme-challenge/<token>` is requested with the GET method. Cryptographic signatures aren't really needed here.
Any key pair can be used to renew any certificate with any associated private key. I thus don't understand the point the author of this article is trying to make.
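For concreteness, the assumption described above can be sketched as a CA-side check (hypothetical code, not any real CA's implementation; `fetch` is a stand-in for an HTTP client so the sketch stays self-contained):

```python
def http01_check(fetch, domain: str, token: str, expected: str) -> bool:
    # fetch(url) -> (status, body). The CA only accepts a 200 response
    # whose body is exactly the expected challenge data.
    status, body = fetch(f"http://{domain}/.well-known/acme-challenge/{token}")
    return status == 200 and body.strip() == expected

# Stub for a web server controlled by the domain owner
def fetch(url):
    if url.endswith("/tok123"):
        return (200, "expected-challenge-data\n")
    return (404, "not found")

ok = http01_check(fetch, "example.com", "tok123", "expected-challenge-data")
bad = http01_check(fetch, "example.com", "tok999", "expected-challenge-data")
```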
The attack described in the blog post dates from 2015. The ACME challenge protocol has been updated a few times since then. You are completely correct that signatures aren't really needed for the challenges, yet this was the initial design and it is a property of the signatures that leads to the described attack.
The trick of Andrew Ayer's attack is that the various challenges were just a particular signature under the account's public key, with the assumption that the signature uniquely identifies the public key (of the Let's Encrypt account) which controls the website. Unfortunately, this isn't true.
If you make a signature for some message under your public key, I can pick my own public key such that your signature also verifies under it. This is maybe a little counter-intuitive.
So when someone uploaded their challenge, an attacker could make a new public key and claim "hey, that challenge works under my public key!" and Let's Encrypt would then issue them with a certificate. Deleting the challenge after use would partially help, but in practice DNS is pretty slow to update and HTTP challenges might also be cached.
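The actual attack constructed DSA/ECDSA keys against ACME's JWS signatures (see Ayer's write-up linked elsewhere in the thread), but the underlying property is easy to demonstrate with toy textbook RSA, assuming a naive verifier that accepts unpadded truncated hashes and a public exponent of 1:

```python
import hashlib

def h(msg: bytes) -> int:
    # Toy 32-bit truncated hash (real schemes use full-width padded digests)
    return int.from_bytes(hashlib.sha256(msg).digest()[:4], "big")

def verify(msg: bytes, sig: int, pub) -> bool:
    n, e = pub
    return pow(sig, e, n) == h(msg) % n

# Alice's textbook-RSA key (tiny primes, demo only)
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# Pick a message whose signature satisfies sig > 2*h(msg) (true for most messages)
for i in range(1000):
    msg = b"acme challenge %d" % i
    hm = h(msg)
    sig = pow(hm, d, n)
    if hm and sig > 2 * hm:
        break

# Eve's duplicate-signature key selection: with e' = 1 and n' = sig - h(msg),
# sig mod n' == h(msg), so the SAME signature verifies under Eve's "key".
eve_pub = (sig - hm, 1)
```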
Disclosure: I'm an author on the paper linked in the latter half of the blog post.
Keep in mind the vulnerability was in a 5 year old version of the protocol and has since been fixed, so the description of the protocol won't match what you're familiar with.
> In the attack by Eve, it is claimed that Eve recovers the account
The article says Eve "recover[s] the example.com domain". It should say that Eve requests a certificate for example.com. I've mentioned this to the author.
> But this file is usually automatically deleted after the certificate renewal is completed.
Usually, but not always, and an attacker could always try to race the legitimate certificate request.
> It's not even clear from the explanation what Eve could do with this file.
Eve takes the signature and constructs her own ACME account key that produces the same signature as Alice's account key. Note though that the HTTP challenge wasn't practically exploitable because Eve would get a different token which wouldn't exist on Alice's server. The attack is better explained in terms of the DNS challenge, which was practically exploitable. You can find such explanations in my blog post (https://www.agwa.name/blog/post/duplicate_signature_key_sele...) or IETF post (https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJC...).
> The whole security of the ACME protocol relies on the assumption that nobody except the owner of the domain name can return specially crafted data when `http://<domain>/.well-known/acme-challenge/<token>` is requested with the GET method. Cryptographic signatures aren't really needed here.
Cryptographically it seems as though all we need in HTTP is that we get back a special token when we ask for it. But it's essential in designing real-world security systems to understand real-world practice. Historically (prior to the Ten Blessed Methods explicitly forbidding this), it was not uncommon for HTTP-based DV to work like this: the CA asks for a token at some URL and checks whether the token appears in the response.
Cryptographically it's fine, who else but the owner could make this test pass? But in the real world lots of web servers when you ask them for average-12345678 will go "Sorry, average-12345678 isn't available. Maybe you'd like to visit our home page?" and that text matches the token and passes the test.
That really happened, to real commercial CAs which are still trusted today, because they hadn't thought about real-world problems (and in one case because they goofed an HTTP response-code check: if you gave that reply as a 404, their code wouldn't notice it was a 404 before passing the test).
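The goofed check is easy to sketch (hypothetical validation logic, not any particular CA's code):

```python
TOKEN = "average-12345678"
# A misconfigured server's error page for a missing path — note it echoes the token
ERROR_PAGE = f"Sorry, {TOKEN} isn't available. Maybe you'd like to visit our home page?"

def validate_buggy(status: int, body: str, token: str) -> bool:
    # Substring match with no status-code check: the 404 error page passes
    return token in body

def validate_fixed(status: int, body: str, token: str) -> bool:
    # Require a 200 and an exact match on the challenge body
    return status == 200 and body.strip() == token
```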
Let's Encrypt was designed to prevent this mistake (even though embarrassingly it was still happening at other CAs years later until the Ten Blessed Methods were de facto imposed by Mozilla policy) but managed to make a very similar mistake in tls-sni-01 which was dangerous because Apache httpd (and maybe nginx?) has crazy default behaviour for virtual hosts on HTTPS. Again, in principle tls-sni-01 looks safe, who else but the real owner could answer TLS setups with bogus SNI information? But the real world gives us an answer: Anybody sharing a cheap bulk hosting site with you if the host used one of the world's most popular web servers.
You can see analogues in the real world too. We had a sub-thread on HN recently about RFID entry badges. Most use a very passive design which is easily cloned. But you can buy hard-to-clone secure systems for this role. Having done so you might assume only employees and legitimate visitors can get into your facility. And then you see that in the real world your employees are still letting people tailgate and leaving fire doors open to take a smoke break and you realise that cryptographic security of the RFID entry cards was not in fact your big problem in controlling which people are in the building.
It’s not assuming MITM or that the attacker can upload the signature to the site.
The attack is that the attacker can reuse the already uploaded signature in a way that allows them to get certificates issued under their account instead of the initial owner.
This blog is a little confusing about that since it does read like they are supposed to upload their own sig with the graphic used.
> The attack was found merely 6 weeks before major browsers were supposed to ship with Let's Encrypt's public keys in their trust store.
This is both wrong in a small way and misleading in a larger way.
The first big browser to add keys from "Let's Encrypt" (actually from ISRG, the Internet Security Research Group, a 501(c)(3) entity which exists to run Let's Encrypt) to their trust store in a shipping browser was Firefox, in November, rather more than six weeks away.
But that's misleading because even in, say, Internet Explorer on Windows, which uses Microsoft's trust store (of course) and didn't trust ISRG until several years later, a certificate from Let's Encrypt of course worked fine on the first day of production.
What makes your leaf certificate trusted is that it's signed by an intermediate CA which is trustworthy. While ISRG's intermediates (at that time Let's Encrypt Authority X1 and Let's Encrypt Authority X2) were signed by ISRG, they also had copies of more or less the same certificates (same public keys) signed by an existing trusted CA: DST Root CA X3, which had been in most public trust stores for years and currently belongs to IdenTrust. Those copies are used by default (you can swap in the ISRG versions if you want) today to give you a trusted path back to even quite old web browsers.
This is a pretty pedantic point and the inaccuracy doesn't detract from the rest of the post, though I will mention this to the author as accuracy is important.