I find it surprising that so many CDNs were vulnerable. They are literally services that operate open front ends that accept unauthenticated requests and issue (expensive) backend requests based on the front-end requests. Wouldn’t counting the number of requests issued and the number in flight per outside IP, network, etc., be a basic part of the process?
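Something like this minimal sketch is what I have in mind. It is only an illustration, not any particular CDN’s code: the type names, the per-IP key, and the cap of 100 are all assumptions made up for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// accountant tracks backend requests per client IP.
type accountant struct {
	mu       sync.Mutex
	inFlight map[string]int // client IP -> backend requests currently in flight
	issued   map[string]int // client IP -> total backend requests ever issued
	maxLive  int            // hypothetical per-IP in-flight cap
}

func newAccountant(maxLive int) *accountant {
	return &accountant{
		inFlight: make(map[string]int),
		issued:   make(map[string]int),
		maxLive:  maxLive,
	}
}

// admit is called before issuing a backend request on behalf of ip;
// it refuses new work once the client is over its in-flight cap.
func (a *accountant) admit(ip string) bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.inFlight[ip] >= a.maxLive {
		return false // over cap: reject or queue instead of fanning out
	}
	a.inFlight[ip]++
	a.issued[ip]++
	return true
}

// done is called when the backend request for ip completes (or is abandoned).
func (a *accountant) done(ip string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.inFlight[ip]--
}

func main() {
	acct := newAccountant(100)
	fmt.Println(acct.admit("203.0.113.7")) // true: under the cap
	acct.done("203.0.113.7")
}
```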
Sure, HTTP/1.1 makes it extremely awkward to have lots of backend requests in flight per front-end connection. But HTTP/2 makes a factor-of-100 variation in the backend/front-end ratio trivial, and that’s already a big number.
Calling this a critical vulnerability in the protocol seems like an odd way to describe what I see as an accounting failure in the overall system, one that apparently affected everyone. If I were implementing such a system, I wouldn’t want to ignore the difference between an HTTP/1.0 or 1.1 connection with a single backend request and an HTTP/2 connection with 100. And catching the case where an HTTP/2 connection has 100 live backend requests plus a couple thousand more orphaned ones seems like a natural consequence of accounting for the requests correctly in the first place.
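Concretely, the per-connection bookkeeping I mean could look like the sketch below. It assumes the proxy can observe both client stream cancellations (RST_STREAM) and backend completions; the field names and thresholds are made up for illustration.

```go
package main

import "fmt"

// connStats tracks one HTTP/2 connection's streams as seen by the proxy.
type connStats struct {
	live     int // client-open streams with a backend request in flight
	orphaned int // streams the client reset while the backend request still runs
}

// onNewStream: the client opened a stream and the proxy issued a backend request.
func (c *connStats) onNewStream() { c.live++ }

// onClientReset: the client cancelled a stream whose backend request hasn't
// finished; it moves from live to orphaned instead of disappearing.
func (c *connStats) onClientReset() {
	c.live--
	c.orphaned++
}

// suspicious flags the rapid-reset pattern: a modest number of live streams
// alongside a large pile of cancelled-but-still-running work.
// The thresholds here are invented for the example.
func (c *connStats) suspicious() bool {
	return c.orphaned > 1000 && c.orphaned > 10*c.live
}

func main() {
	var c connStats
	for i := 0; i < 2000; i++ { // attacker opens and immediately resets streams
		c.onNewStream()
		c.onClientReset()
	}
	for i := 0; i < 100; i++ { // plus 100 streams left live
		c.onNewStream()
	}
	fmt.Println(c.live, c.orphaned, c.suspicious()) // 100 2000 true
}
```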
Even if it was, I think it’s likely better to have the attackers show their hand by attacking (comparatively) irrelevant targets.
I would assume there are insights to be gained from this that make it possible to mitigate potential future attacks even more effectively.
So, how much does this slow overall internet traffic, if at all? I’m talking purely in terms of bits getting from one place to another over the tubes, not “I can’t get to Google because it’s dealing with 400M rps.”