Yeah, the full HTTP/2 support was a pleasant surprise! I wonder whether Google's (and other prominent companies') web properties have switched over from SPDY to HTTP/2 yet.
Yes. According to Patrick McManus (owner of Gecko's networking stack), 9% of Firefox requests are already using HTTP/2 with the draft implementation enabled for Google (and now Twitter). With Google alone, HTTP/2 usage is already higher than SPDY.
Sprites and concatenated js/css are still useful. HTTP/2 lets you make 100 requests over a single connection, and that's a lot better than making 100 requests over 100 connections, but it's still not going to beat one request over one connection.
That would be an interesting thing to benchmark, particularly since separate resources can be cached independently.
If you're using every single image, line of JavaScript, etc. as soon as the page loads, a single large file will probably still be faster. If, however, you have a mix of things which are only used for optional features, or not on every page, there's room for some nice improvements: independent resources can be executed as soon as each one arrives, whereas a huge concatenated JavaScript file has to finish transferring in its entirety before it can be safely executed. If your site really needs jQuery and 90 plugins to work at all, that doesn't help, but if a fair amount of your code can be loaded independently, the overall load time can be shorter because some of the code executes while other resources are still fetching, rather than everything waiting until the last byte has arrived.
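For a concrete (if contrived) picture of that split, where the file names and the editor hook are made up: the small core script runs as soon as it arrives, while a rarely-used feature is only fetched on demand instead of riding along in one big bundle.

```html
<!-- small core file: parses and executes as soon as it finishes downloading -->
<script defer src="/js/core.js"></script>
<script type="module">
  // Hypothetical optional feature: fetched only when someone actually opens
  // the editor, so it never delays the initial load for everyone else.
  document.querySelector('#editor')?.addEventListener('focus', async () => {
    const { initEditor } = await import('/js/editor-widget.js');
    initEditor();
  });
</script>
```

With HTTP/2 the extra requests ride over the same connection anyway, so the penalty for splitting is much smaller than it used to be.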
The independent fetch + caching is also more valuable for repeat visitors who have cached copies of everything which didn't change – i.e. if you only touched an icon in the footer of your page, the experience is better if your header logo displays instantly out of the cache rather than having to wait for an entire sprite to reload.
Performance on the Web isn't RAM-bound at this time, unless you're running on a Raspberry Pi or something. Network performance is a much stronger limiting factor, which is why protocols like HTTP/2 are being developed in the first place.
I'm also skeptical that sprites are necessarily much more demanding memory-wise than smaller images. Implementations may vary, but at least in current browsers, when you create an Image tag/object with the same source as one you've already loaded, the second one loads instantly. That at least makes it appear that both image tags get some kind of lightweight view into the same image data (maybe just a pointer to the same bucket of bits), as opposed to re-downloading and/or re-copying the image data for each instance.
The memory usage of CSS sprites (which are implemented as CSS background images rather than Image objects) was definitely an issue a couple of years back. Inlining the images as base64 data URIs seems much better.
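For what it's worth, the data-URI trick is just base64-encoding the image bytes and pasting them into the stylesheet. A rough Node sketch (the four bytes below are a stand-in for a real PNG file):

```javascript
// Stand-in for reading a real icon, e.g. fs.readFileSync('icon.png')
const iconBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // first bytes of the PNG magic number
const dataUri = `data:image/png;base64,${iconBytes.toString('base64')}`;

// Usable directly in CSS:  background-image: url("data:image/png;base64,...");
console.log(dataUri); // → data:image/png;base64,iVBORw==
```

The catch is that base64 inflates the payload by roughly a third, so this works best for small icons (gzip claws some of that back).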
I think he means the empty space that sprites demand: once decoded, the image occupies memory as an uncompressed bitmap, padding included. If you use a smart sprite builder that optimizes packing, the overhead will probably be under 10%, but it's still relevant.
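That overhead is easy to ballpark, since a decoded bitmap costs width × height × 4 bytes (RGBA). A sketch with made-up icon sizes and a hypothetical packed layout:

```javascript
// Decoded bitmaps cost width * height * 4 bytes (RGBA); compressed file size is irrelevant here.
const icons = [{ w: 32, h: 32 }, { w: 24, h: 24 }, { w: 64, h: 16 }];
const separateBytes = icons.reduce((sum, i) => sum + i.w * i.h * 4, 0); // 10496

// Hypothetical packing: 32x32 and 24x24 side by side, 64x16 underneath,
// so the sheet has to be a 64x48 rectangle, empty corners included.
const sprite = { w: 64, h: 48 };
const spriteBytes = sprite.w * sprite.h * 4; // 12288

console.log(`overhead: ${Math.round((spriteBytes / separateBytes - 1) * 100)}%`); // ~17% here
```

A tighter packer shrinks that number, but the sheet is always a rectangle, so some padding is unavoidable.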
There are installations which don't support ECC, usually due to hardware limitations, but do support RSA-4k. Often that cert is the trust anchor for uses well beyond just SSL, so the result is that you end up using RSA-4k instead of ECC.
There are also folks who don't entirely trust ECC because it's "too new". To be fair, RSA isn't broken, and 4k is big enough to stay safe even in the face of a partial break.
Actually, not just embedded systems: there are HSMs (hardware security modules) which also can't be updated to support new functions. Often that's because the underlying primitives are implemented in fixed-function hardware to resist timing, power, and even RF analysis.
BTW, is there already support for 4096 certs? Or is that coming later?