Hacker News

Wow, this is a big release! HTTP/2, phasing out 1024-bit certs, sync, will-change.

BTW, is there already support for 4096-bit certs? Or is that coming later?



Yeah, the full HTTP/2 support was a pleasant surprise! I wonder whether Google's web properties (and other prominent ones) have switched over from SPDY to HTTP/2 yet.


Yes. According to Patrick McManus (owner of Gecko's networking stack), 9% of Firefox requests are already using HTTP/2 with the draft implementation enabled for Google (and now Twitter). With Google alone, HTTP/2 usage is already higher than SPDY.

http://bitsup.blogspot.com/2015/02/http2-is-live-in-firefox....


According to http://spdycheck.org/#google.com google.com supports "h2-14" and "h2-15" which I'm guessing are drafts of HTTP/2?


Right. "h2" is the identifier the final spec will use.


Firefox 36 will negotiate any of {h2, h2-14, h2-15}.
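As a rough sketch of what that negotiation looks like from the client side, Python's stdlib `ssl` module can offer the same ALPN identifiers. `alpn_context` and `negotiated_protocol` are hypothetical helper names, not a standard API:

```python
import socket
import ssl

# The protocol IDs Firefox 36 reportedly offers, in preference order.
DRAFTS = ["h2", "h2-15", "h2-14"]

def alpn_context(protocols=DRAFTS):
    """Build a TLS client context offering the given ALPN protocol IDs."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(protocols)
    return ctx

def negotiated_protocol(host, port=443):
    """Connect and report which protocol the server picked (or None)."""
    ctx = alpn_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```

A server that knows none of the offered IDs simply negotiates nothing, and the client falls back to HTTP/1.1 over the same TLS connection.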


Using Firefox Developer Edition with the SPDY indicator add-on shows me Google, YouTube and Twitter over HTTP/2.


Anyone know of a brief list of workarounds/schemes/optimizations that probably won't be needed (or will be less needed) with HTTP/2?

(or just add to this list)

- CSS sprites

- monolithic JS/CSS files

- rotating through 'alias' subdomains for resources (domain sharding)

- trimming cookies / cookie-less domains (to shrink request headers)


Sprites and concatenated js/css are still useful. HTTP/2 lets you make 100 requests over a single connection, and that's a lot better than making 100 requests over 100 connections, but it's still not going to beat one request over one connection.


That would be an interesting thing to benchmark, particularly since separate resources can be cached independently.

If you're using every single image, line of JavaScript, etc. as soon as the page loads, a single large file will probably still be faster. If, however, you have a mix of things which are only used for optional features, not on every page, etc. there's room for some nice improvements because independent resources can be loaded immediately whereas a huge concatenated JavaScript file has to wait for the entire transfer before it can be safely executed. If your site really needs jQuery and 90 plugins to work at all, that doesn't help, but if a fair amount of your code can be loaded independently the overall load time can be shorter if some of the code executes while other resources are still fetching rather than waiting until it has everything.

The independent fetch + caching is also more valuable for repeat visitors who have cached copies of everything which didn't change – i.e. if you only touched an icon in the footer of your page, the experience is better if your header logo displays instantly out of the cache rather than having to wait for an entire sprite to reload.
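The trade-off in the first paragraph can be put into a back-of-envelope model. All numbers here are invented for illustration, and the model ignores latency, parsing, and caching entirely:

```python
# Hypothetical page: ten scripts sharing one connection's bandwidth.
BANDWIDTH = 1_000_000   # bytes/sec, assumed
SIZES = [50_000] * 10   # ten equal scripts, assumed

# Concatenated bundle: nothing can execute until the whole file arrives.
bundle_first_exec = sum(SIZES) / BANDWIDTH

# HTTP/2 with prioritised streams delivered in sequence on one
# connection: the first script can execute as soon as it lands,
# while the rest are still transferring.
prioritised_first_exec = SIZES[0] / BANDWIDTH

print(bundle_first_exec)       # 0.5 s before any code runs
print(prioritised_first_exec)  # 0.05 s before the first script runs
```

Total transfer time is the same either way; what changes is how early the first useful work can start.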


One image with 100 icons also still compresses better than 100 files with 1 icon each.
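That shared-context effect is easy to demonstrate with zlib over synthetic data. Real PNG icons are already DEFLATE-compressed internally, so the exact numbers for sprites differ, but the principle carries over: one compression context can exploit redundancy across similar items, while 100 independent streams each start cold.

```python
import zlib

# 100 synthetic "icon-like" blobs that share common structure.
icons = [b"ICON-HEADER" + bytes([i]) * 200 for i in range(100)]

# Compress each blob on its own vs. all of them in one stream.
separately = sum(len(zlib.compress(icon)) for icon in icons)
together = len(zlib.compress(b"".join(icons)))

print(separately, together)  # the single combined stream is far smaller
```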


Sprites are way more demanding memory-wise than smaller images.


Performance on the Web isn't RAM-bound at this time, unless you're running on a Raspberry Pi or something. Network performance is a much stronger limiting factor, which is why protocols like HTTP/2 are being developed in the first place.

I'm also skeptical that sprites must necessarily be significantly more demanding memory-wise than smaller images. Implementations may vary, but at least in current browsers, when you create an Image tag/object with the same source as one you've already loaded, the second one loads instantly. This at least makes it appear that both image tags get some kind of lightweight view into the same image data (maybe just a pointer to the same bucket of bits) rather than having to re-download and/or re-copy the image data for each instance.


You probably have fewer tabs open than I do...

The memory usage of CSS sprites (which are implemented as CSS background images rather than Image objects) was definitely an issue a couple of years back. Inlining the images as base64 data URIs seems much better.
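For reference, a minimal sketch of the data-URI inlining mentioned here, built with the stdlib. `make_data_uri` is a hypothetical helper, not a standard API:

```python
import base64

def make_data_uri(data: bytes, mime: str = "image/png") -> str:
    """Embed raw image bytes as a data: URI usable in CSS or HTML."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Example CSS rule using a (fake) PNG payload for illustration.
css_rule = ".logo { background: url(%s); }" % make_data_uri(b"\x89PNG...")
```

The cost is the same as with sprites in reverse: inlined data can't be cached independently of the stylesheet that contains it, and base64 inflates the payload by about a third before transfer-level compression.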


I think he means the empty space demanded by sprites: in memory, the sprite occupies space as an uncompressed bitmap, padding included. If you use a smart sprite builder that optimizes stacking, the overhead will probably be under 10%, but it's still relevant.


Why 4096-bit RSA certs when you can have ECC certs?


There are installations which don't support ECC, usually due to hardware limitations, but do support RSA-4k. Often this cert is the trust anchor for uses other than just SSL, so the result is that you end up using RSA-4k instead of ECC.

There are also folks who don't entirely trust ECC because it's "too new". To be fair, RSA isn't broken, and 4k is sufficiently big to be safe even against a partial break.


> There are installations which don't support ECC, usually due to hardware limitations

Isn't RSA much more computationally expensive than ECC? What hardware can do RSA but not ECC?

/me sits down to be schooled


Since I can't edit:

EDIT: Unless you're referring to embedded systems that can't be updated?


Bingo :)

Actually, not just embedded systems. There are HSMs (hardware security modules) which also can't be updated to support new functions. Often this is because the underlying primitives were implemented in fixed-function hardware to resist timing, power, and even RF analysis.


I for one welcome our EdDSA-ECDHE-AES-GCM overlords.

(I know we're not there yet :P)



