Ex-USDS here. Did 6 months at USDS, still federally employed as a USDS reserve.
Just wanted to chime in on "Who in their right mind would go to work for the government, for minimum wage, unless they were already independently wealthy?" GS-15 pay is nowhere near minimum wage. If you actually think so, I'm sorry to say that you are out of touch with society. And while some USDS employees may be independently wealthy, the majority certainly aren't.
There are also the nice perks of USDS, like working on things that critically matter to the lives of millions of people in the neediest demographics (both domestically and internationally), and collaborating with a bunch of A+ human beings who, despite the personal sacrifices, passionately want to do what they can to improve the welfare of others.
To answer your specific question about why not in TCP, it's not deployable. Here's why:
* In order to get multiplexing and other features introduced with HTTP/2, you need to change protocol framing. However, this means that the protocol is no longer backwards compatible. There are many ways to roll out non-backwards-compatible changes, but for TCP, they were deemed unacceptable. For example, you could negotiate the protocol change via a TCP extension. However, TCP extensions are known not to be widely deployable (https://www.imperialviolet.org/binary/ecntest.pdf) over the public internet. You could use a specific port, but that doesn't traverse enough middleboxes on the public internet (http://www.ietf.org/mail-archive/web/tls/current/msg05593.ht...). Yada yada.
* More importantly, TCP is generally implemented in the OS. That means updating the protocol requires OS updates, on both the server and the client, before the two can speak the new protocol. If you look at the very sad uptake numbers for Windows versions, Android versions, etc., you'll understand why many people don't want to wait for all OSes to update in order to take advantage of new protocol features.
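For contrast, here's a sketch of how HTTP/2 sidesteps both problems above in practice: rather than a TCP extension or a dedicated port, the new protocol is negotiated inside the TLS handshake via ALPN, entirely in user space. (This uses Python's standard `ssl` module; the negotiation mechanism is real, but this snippet is illustrative, not any browser's actual code.)

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS client context that offers HTTP/2 via ALPN."""
    ctx = ssl.create_default_context()
    # Offer h2 first; fall back to HTTP/1.1 if the server doesn't speak it.
    # No kernel or middlebox needs to understand the new protocol -- the
    # preference list rides inside the TLS handshake.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    return ctx

# After wrap_socket(...).do_handshake(), calling
# ssl_sock.selected_alpn_protocol() returns "h2" if both sides agreed,
# otherwise "http/1.1" (or None if the server did no ALPN at all).
```

Because the negotiation is invisible to everything between the endpoints, it traverses middleboxes that would have choked on a new TCP option or an unfamiliar port.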
"""
I understand the points you make, and am sympathetic. I feel the same way when I see people abusing HTTP to provide async notifications, etc.
The fact is, however, if it isn't deployed, no matter how nice it would theoretically be when it is, it isn't useful and people WILL work around the problem.
That is true regardless of whether the problem is at the application layer (e.g. HTTP), or at the transport layer (e.g. TCP), or elsewhere.
The primary motivation is to get things working; making things elegant or free of redundancy comes in at a distant second.
Deployed is the most important feature, thus things which quickly move protocols and protocol changes from theoretical to deployed are by far the most important things.
The longer this takes, the more likely that the work-around becomes standard practice, at which point we've all "lost" the game.
-=R
"""
Basically, this is another instance of Linus's quote (https://lkml.org/lkml/2009/3/25/632): "Theory and practice sometimes clash. And when that happens, theory loses. Every single time." Theoretically, it'd be far more elegant to fix this in the transport layer. But in practice, it doesn't work. Except maybe if you implement it on top of UDP (e.g. QUIC), since that lives in user space and firewalls don't filter out UDP as much.
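The user-space point can be made concrete with a toy sketch: a protocol built over plain UDP sockets can version its own wire format freely, with no OS or middlebox support required. The one-byte-version framing below is made up purely for illustration; it is not QUIC's actual format.

```python
import socket
import struct

# Hypothetical wire format: 1-byte version prefix, then the payload.
# Bumping VERSION is a pure user-space change -- no kernel update needed.
VERSION = 7

def encode(payload: bytes) -> bytes:
    return struct.pack("!B", VERSION) + payload

def decode(datagram: bytes) -> tuple[int, bytes]:
    (version,) = struct.unpack("!B", datagram[:1])
    return version, datagram[1:]

# Loopback round trip: the kernel only sees opaque UDP datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode(b"hello"), rx.getsockname())
version, payload = decode(rx.recvfrom(2048)[0])
```

The OSes on both ends need to know nothing about version 7, or version 8 tomorrow; that's the deployability win over changing TCP itself.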
Speaking as a Chromium SPDY & HTTP/2 developer, we are very much focused on standardization. SPDY is an experimental protocol meant to drive the standards process, not become a de facto standard itself. Therefore, it's critical for us to kill off old SPDY versions.
And note that your reference is slightly misleading (I don't think you intended this). It's true, the vast majority of hosts supporting SPDY are running nginx. In practice though, the vast majority of these hosts are Cloudflare or WordPress.com hosted sites. Both run newer versions of nginx with SPDY/3.1 support.
Right, so my point stands that it is interesting that current stable browsers don't support current stable web servers. So unless someone has the resources to test and deploy unstable web servers, it effectively means you shouldn't bother with spdy at this point.
I think it's a fair assessment that supporting experimental technologies requires more engineering resources. Everyone has to do the cost/benefit analysis themselves.
You seem intent on arguing about something I have never contested.
I just made the (implied) observation that anyone deploying nginx shouldn't even bother enabling spdy in 1.4 (stable) because effectively nothing will use it anymore. It was something that people who use "stable" software were able to benefit from starting in May 2013, but it is now no longer and won't be again until nginx reaches 1.6. That is, unless spdy/3 is dropped in favor of spdy/4 by then.
Er, I thought I was agreeing with you. What do you think I'm arguing? To be clear, I view these two statements as grounded in the same logic, although perhaps one is more strongly worded than the other:
Yours - "So unless someone has the resources to test and deploy unstable web servers, it effectively means you shouldn't bother with spdy at this point."
Mine - "I think it's a fair assessment that supporting experimental technologies requires more engineering resources. Everyone has to do the cost/benefit analysis themselves."
Don't even bother building nginx 1.4 with spdy support or configuring it since no one can use it. From May 2013 until now there was a benefit to end-users. There no longer is.
If you want to provide the speed benefits of spdy to users, you now need to run unstable nginx or mod_spdy.
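For reference, the configuration side is small either way; what differs is which SPDY version the binary speaks. A hedged sketch of the relevant nginx directive (paths and domain are placeholders, and this assumes a build with the SPDY module compiled in):

```nginx
server {
    # On 1.5.x mainline builds of the era this negotiates spdy/3.1;
    # the 1.4.x stable branch only offers spdy/2, which current
    # browsers no longer accept.
    listen              443 ssl spdy;
    server_name         example.com;
    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
}
```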
Right. I think the point is that you shouldn't be relying on experimental standards for one's web architecture unless you recognize its experimental status.
As an outside observer, it seems like this kind of pattern happens pretty regularly on HN. It's always kind of funny. There seem to be two distinct flavors:
Pattern 1:
> Statement
>> Counterstatement!
>>> You're saying the same thing.
>>>> I am? Oh. I *am*. Whoops!
Pattern 2:
> Statement
>> Agree with tone of disagreement.
>>> Disagree on disagreement, while offering up facts to somehow agree *harder*
>>>> Rebuttal, optional restatement of disagreement, optional statement(s) further
>>>> reinforcing the common point on which we just can't agree to agree
I think people who actually want spdy3 support will evaluate whether nginx 1.5 is stable enough for them. Stable is just a word, and unless it's a straitjacket "stable or nothing" culture you're referring to, yes, people will have to make their own decisions about whether 1.5 is stable enough for their needs.
As the posts below note, "mainline" is considered stable enough for production.
Well, the difference is that breaking changes are made to 1.5 so when a security update comes out (like today) you have to do regression testing instead of just grabbing and building the latest version.
I don't see why you think this is a semantic requirement of HTTP. Perhaps there's some confusion over what HTTP semantics are. Let me refer you to http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-2.... It doesn't discuss exposing all HTTP traffic to network intermediaries. Perhaps you're thinking of the HTTP messaging layer http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-2.... Also, I think your statement about allowing an in-path intermediary to act as a CDN is weird, since a CDN is defined as "a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end-users with high availability and high performance." [1].
It's true, HTTPS is full of tradeoffs. You've identified some of them.
What do you see in HTTP/2 that "codifies this workaround"? That wasn't immediately obvious to me. Recall that HTTP/2 is basically just multiplexing with prioritized streams. There's no requirement on TLS in the spec, although all current browser deployments (of SPDY) require TLS.
The specification indeed is about proxying http resources, not https ones. So it's not initially as alarming as some other proposals discussing trusting proxies to intercept SSL connections. For more details, you can refer to https://insouciant.org/tech/http-slash-2-considerations-and-....
This specific proposal is interesting because it specifically is related to opportunistic encryption proposals, in particular, the one that allows sending http:// URIs over an unauthenticated TLS connection: http://tools.ietf.org/html/draft-nottingham-httpbis-alt-svc-.... The problem here for proxies is, if you mix http and https (authenticated) traffic on the same TLS connection, the proxy cannot tell if it can safely MITM the connection. The proxy vendor would like to know if it can do so, probably for network management / caching / content modification reasons. Of course, the point of the opportunistic encryption proposal is to increase security (although its actual effective impact is controversial: https://insouciant.org/tech/http-slash-2-considerations-and-...). But if you believe in opportunistic encryption's security purposes, then it doesn't seem to really make sense to make the MITM'able traffic identifiable so proxies on the network path can successfully MITM them without detection.
If someone can install a root cert onto your computer then you are already owned - there is no end to the other things they can do too. Call it a virus, call it an enterprise, but call it a day - you're owned, and there is no in-charter policy this working group can enact to change the security level of that user for good or for bad.
The good news is not everyone is already owned and SSL helps those people today.
Thanks for this correction; I was under the impression that opportunistic encryption had already been chosen based on HTTP/2 descending from SPDY, but I clearly am not following the WG all that closely.
Is a fair reading of your blog post that it has a high likelihood of succeeding?
Only time will tell. It's all still in progress. Of all the major browser vendors (Firefox, Chromium, IE) present at the Zurich HTTP/2 interim meeting, only Patrick McManus (Firefox) has expressed interest. Notably, he's a co-editor of that Alternate-Services internet-draft.
I think the author of this blogpost has a few things off:
- HTTP2 != httpbis. Both pieces of work are being done by the same working group, "httpbis". http://datatracker.ietf.org/wg/httpbis/charter/ covers this. httpbis (http://stackoverflow.com/questions/9105639/httpbis-what-does...) was originally chartered to revise HTTP/1.1 (RFC2616):
The working group will refine RFC2616 to:
* Incorporate errata and updates (e.g., references, IANA registries, ABNF)
* Fix editorial problems which have led to misunderstandings of the specification
* Clarify conformance requirements
* Remove known ambiguities where they affect interoperability
* Clarify existing methods of extensibility
* Remove or deprecate those features that are not widely implemented and also unduly affect interoperability
* Where necessary, add implementation advice
* Document the security properties of HTTP and its associated mechanisms (e.g., Basic and Digest authentication, cookies, TLS) for common applications
As for the HTTP/2 work, here's a snippet from the charter on that:
The Working Group will produce a specification of a new expression of HTTP's current semantics in ordered, bi-directional streams. As with HTTP/1.x, the primary target transport is TCP, but it should be possible to use other transports.
- He seems to think the httpbis folks gratuitously redefined 301. It should be noted that RFC2616 (which, by definition, predates the httpbis work since httpbis is defined to revise RFC2616) had already noted the issue with 301 (http://tools.ietf.org/html/rfc2616#section-10.3.2):
Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.
- It's unclear to me whether or not the author acknowledges the existence of buggy implementations as noted in section 10.3.2. It's an open question as to what to do in the presence of buggy implementations. From a server standpoint, if the client is buggy, and you don't want to break the client (willingness to break clients probably depends on how many of the server's users use that client), then you will attempt to work around it, irrespective of what the standard says. Therefore, it's simply pragmatic to ignore the spec if it doesn't mirror reality, and pragmatic spec editors may update the spec to acknowledge this difference.
- As far as current status of the various 308 usages, Julian (author of the 308 draft) is lobbying major user agents to adopt this, and has written up a status update on the Chromium bug tracker: https://code.google.com/p/chromium/issues/detail?id=109012#c....
It sounds like you're well-marinated in standards bodies, and that's a good thing -- it's tough, often thankless work that someone needs to do.
For the rest of us, the language from OP sure sounds like it's saying, "yeah do whatevs with 301, we give up".
People often read RFCs in a hurry. Wouldn't this be a great place to use a "SHOULD NOT" (change the request method)?
If you're saying "MUST NOT" would be bad because the horse is out of the barn, I understand. But the draft language now sure sounds like "MAY", and the OP has a good point that it's likely to encourage more wrong behavior, not less.
At least IMHO. Again, I am not a standards lawyer, so please take this feedback accordingly.
I guess I should out myself as a Chromium HTTP stack maintainer (since 2009, so this behavior predates me). One might consider me a domain expert here. I participate in IETF HTTPbis for the HTTP/2 work as the primary Chromium representative. I am not involved with the RFC 2616 revision work, as that's tough, thankless work that, thank god, we have Julian Reschke and Roy Fielding doing. As far as I'm concerned, I owe them a drink every time I see them. They do an awful lot of legwork talking to various implementations and trying to build consensus on actually conforming with the standard and all its edge cases. It's really quite unfortunate to see this blog post's author treat them so unfairly, although I can see how one might easily jump to his conclusion.
Now, as far as "SHOULD NOT", that's a reasonable thought for people not aware of what popular user agents currently do. The thing is, the majority of major browsers rewrite POST to GET on a 301. Here's my browser's code for it: https://code.google.com/p/chromium/codesearch#chromium/src/n.... Here's Firefox's code for it: http://mxr.mozilla.org/mozilla-central/source/netwerk/protoc.... To my knowledge, all browsers implement this behavior. We basically copied IE's behavior, because, IE did it and websites expected all user agents to do what IE did. Story of the web, sound familiar? :P
"SHOULD NOT" implies that our implementations are behaving badly. Now, it's true, our implementations may not be behaving ideally from a spec cleanliness point of view, but interop trumps spec cleanliness, at least from the perspective of anyone who actually deploys real software on the internet. So it's probably best for the spec to acknowledge this and officially allow this. Specs that don't mirror reality are...probably not just useless, but actively harmful.
From the rest of the article, I can only assume it means 'correctly' in the sense of the reality arrived at by both user agents and servers after years of standards-flouting, which is now 'correct' if unspecified behavior.
Makes sense. Clears up the table at the end too. In reality, as with HTML5, such a change matches historical and current browser behavior while trying to offer a future behavior that differs. The pragmatic approach has been and will likely continue to be the use of ?method=post at least until we get better browsers adopted across the board...
The fact that browsers are buggy does not justify listing that as expected behavior. It's like saying "some cars are made of paper and everyone dies in any collision above 20 km/h, so let's make that expected behavior," or "some chefs use cyanide instead of salt, therefore force salt manufacturers to add cyanide to all salt."
These days it's not like deploying a new version of a browser to users requires a major operating system update, which would take 5 years to cover 90% of users.
Let's be clear that the original CRIME attack was against request header secrets. Therefore, disabling response header compression (as nginx defaults to) does not prevent that. SPDY/3.1 request header compression is a client-side choice, not server-side.
That's true, which is why I was careful to say "in its original form" :) The original attack was on cookies (request headers). To my knowledge, no other SPDY server defaults response header compression to off. But yeah, if your application does pass secrets in response headers, you should be careful.
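The underlying leak is easy to demonstrate with plain zlib (which is what SPDY header compression used): when attacker-controlled data is compressed in the same context as a secret, a guess that matches the secret compresses better than one that doesn't. The header names and secret below are made up for illustration.

```python
import zlib

# Hypothetical secret request header the attacker wants to recover.
SECRET = b"Cookie: session=secret1234567890"

def compressed_len(guess: bytes) -> int:
    """Compressed size of the secret plus an attacker-injected guess."""
    return len(zlib.compress(SECRET + b"\r\n" + guess))

# A correct guess back-references the whole secret; a wrong one only
# matches the shared "Cookie: session=" prefix and pays for the rest
# in literals -- so its compressed output is measurably larger.
right = compressed_len(b"Cookie: session=secret1234567890")
wrong = compressed_len(b"Cookie: session=0a1b2c3d4e5f6g7h")
```

CRIME turns this size difference into a byte-at-a-time oracle, which is why request header compression had to be disabled or redesigned (eventually as HPACK's match-don't-concatenate scheme in HTTP/2).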