apparentlymart's comments | Hacker News

> better standards once a decade

Funnily enough, the first drafts of this protocol (back then, called PubSubHubbub) were written circa 2008, so this specification is about a decade in the making.

At the time it was distributing content between a number of the bigger blogging/publishing platforms of the day, and also notifying search engines so they could update their indexes more quickly.

If anything it seems like the standardization process was too long and missed the boat here (this particular problem is now most often solved by proprietary protocols), rather than being "rushed through".

Can't deny that the world has changed a lot during the lifespan of this idea, though. Cellular-connected computers in our pockets were barely on the radar when this spec was first written. I'm sure some would argue that the burdens of publishing have now shifted on to the reader (probably battery powered, spotty connectivity) whereas in this spec's original universe the burdens were on the publisher (CDNs not yet as widespread, more independent publishing from web hotels, etc).


In this context, I think it usually means that the same rendering paths are available on both the client _and_ the server, whereas formerly it was often the case that the server was responsible for the bulk of the rendering and the client just made minor adjustments to the DOM in response to events.

I agree that it's a bit of a misnomer, but I can see why people would call it that given the path we took to get here: first we rendered almost everything on the server, and then it became popular to render everything on the client, but now we're finding that a compromise is best, and that compromise is easiest to achieve when you have similar technology on both sides.

("We" in the above is intended to represent some vague idea of the web development community at large, not any particular individuals or groups.)


For many purposes passing in secrets by any of the above mechanisms can be fine, though as you noted there are caveats. Whether they matter depends strongly on your situation.

There are some things you can do outside of the physical means of passing the secrets that can help reduce risk:

- Give applications very constrained credentials that only give them access to what they actually need. This won't stop the credentials from getting compromised, but it at least reduces the risk of them doing so and -- assuming the credentials also serve as identification for the application -- allows you to trace back a compromise to the application it came from, which may make recovery easier.

- Use time-limited credentials that need to be renewed periodically. This is harder to implement without logic in the application, but it can potentially be done by having a file on disk (possibly ramdisk) that the application re-reads somewhat often and that gets updated by some other process.
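A minimal sketch of that file-re-reading idea in Python (path, format, and refresh interval are all invented for illustration; a real rotation process would write the file atomically):

```python
import json
import time
from pathlib import Path


class FileCredentials:
    """Caches credentials read from a file, re-reading them once the cache
    is older than max_age seconds. An outside process can then rotate the
    file on disk without the application needing a restart."""

    def __init__(self, path, max_age=60.0):
        self._path = Path(path)
        self._max_age = max_age
        self._cached = None
        self._read_at = 0.0

    def get(self) -> dict:
        now = time.monotonic()
        if self._cached is None or now - self._read_at >= self._max_age:
            # Cache expired (or never populated): pick up whatever the
            # rotation process most recently wrote.
            self._cached = json.loads(self._path.read_text())
            self._read_at = now
        return self._cached
```

The application just calls `get()` whenever it needs credentials and always sees a reasonably fresh value.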

- Pass an application a one-time-usable token via any of your given mechanisms and then have it reach out to some other service with that token to get the real secrets it needs. Since the token is invalidated on first use, the window of compromise is small and in particular this can help mitigate the child-process-related vulnerabilities as long as the app is careful to read its credentials before doing any other work.
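The single-use-token handoff can be sketched like this (all class and field names are invented; a real deployment would use something like Vault's response wrapping over an authenticated channel rather than an in-process object):

```python
import secrets


class TokenService:
    """Stands in for the external secrets service: issues single-use
    tokens that can each be redeemed exactly once for the real secrets."""

    def __init__(self, real_secrets):
        self._real_secrets = real_secrets
        self._live_tokens = set()

    def issue_token(self) -> str:
        token = secrets.token_urlsafe(32)
        self._live_tokens.add(token)
        return token

    def redeem(self, token: str) -> dict:
        # Invalidate on first use: a leaked token that has already been
        # redeemed is worthless, and a legitimate app whose redeem fails
        # gets an early signal that something else used its token.
        if token not in self._live_tokens:
            raise PermissionError("token already used or unknown")
        self._live_tokens.remove(token)
        return dict(self._real_secrets)
```

The launcher passes the token via environment variable or argv, and the application redeems it as its very first action, before spawning any child processes.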

Hashicorp Vault combined with the utility "consul-template" (which is now a bit of a misnomer since it talks to vault too) can be helpful building-blocks for implementing these ideas.
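For example, a consul-template template along these lines (secret path and field names invented) can render Vault-held credentials to a file that an application re-reads:

```
{{ with secret "secret/myapp/db" }}
DB_PASSWORD={{ .Data.password }}
{{ end }}
```

consul-template keeps the rendered file up to date as the secret changes, which pairs naturally with the periodic-re-read approach above.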

As noted above, whether this is warranted or suitable depends on your context and risk-tolerance. All security stuff requires cost/benefit analysis, since nothing is a magic bullet.


At work we use Packer, Terraform and Consul across all of our apps, and little smatterings of other stuff in some places. A little on each:

- Packer: Not my favorite, honestly. I can't argue that it's doing the job, but it feels inflexible and hard to integrate into a coherent workflow. It seems that Hashicorp Atlas can smooth this over in principle; we don't use it, because it didn't seem to fit with our use of Terraform at the time we got started, and we have a semi-home-grown alternative now.

- Terraform: We're using Terraform not only for low-level infrastructure stuff (VPCs, subnets, etc) but also for application deployment. I'd say our success with Terraform was due to a couple things. First: we picked up Terraform at a time when we were in the process of a total infrastructure rework in our org anyway, so we were effectively starting from scratch. Second: I spent a few months using Terraform for toy things and learning what it was good at, what it was less good at, and building a "pattern library" of techniques that had worked out. Once we started applying it to real problems, we just cherry-picked suitable patterns from that library and used them. I expect that Terraform is tougher for someone who already has significant infrastructure deployed and is trying to manage it with Terraform with few changes, since there are definitely approaches that are harder to model in Terraform than others.

- Consul: I really enjoy the simplicity of Consul. Getting a cluster up and running is pretty straightforward. Once you have it running, you get a highly-available, datacenter-aware key/value store and a service registry. We honestly don't use the service registry very much, but we have made extensive use of the consul-template utility in conjunction with Terraform's consul_key_prefix resource to have applications/services announce where their endpoints are for consumption by their clients.
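A minimal sketch of that announce pattern on the Terraform side (prefix, key, and address are invented): Terraform writes the endpoint into Consul's key/value store, and consul-template on the consuming side re-renders client configuration whenever the keys change.

```hcl
resource "consul_key_prefix" "myapp_endpoints" {
  path_prefix = "services/myapp/"

  subkeys = {
    "endpoint" = "https://myapp.internal.example.com"
  }
}
```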

We actually decided against using Vagrant because it was "more bulky" than our app developers were willing to tolerate. Instead we continued with our previous solution (running the apps directly on the users' laptops, with a README in each app describing how to get it running), optimistic that the new Docker for Mac and Docker for Windows would be awesome enough to give us the good parts of Vagrant in a lighter package.

Vault showed up a bit late for our "architecture remix" so we solved our Vault-ish problems in other ways. I like its design in theory, and would probably give it a try if the opportunity arose.

Similar story with Nomad: too late for us, and we'd gone down an alternative path before it showed up. Can't really speak to it, since I only dabbled with it very briefly.

I'm sad but honestly not surprised to see Otto phased out. I was initially excited when it was announced last year but I could never really figure out how to get it to behave in the way I expected... I always felt like I was fighting it, and doing things in a way it didn't expect. I think there's room for the Hashicorp family of tools to "tessellate better", but Otto seemed like a very coarse, heavy solution -- essentially wrapping and templating the complex tools underneath -- where I was more hoping for the tools themselves to grow features to close the gaps.

This turned into a bit of a rant, so I'll stop. :D


Realized I missed a key point on Terraform:

I advise anyone using Terraform in production to wrap it up in some sort of automation. Hashicorp would of course like you to use Atlas :D but you can get a long way with CI/automation tools like Jenkins, Rundeck, ...

We have a wrapper script which:

- configures the remote state in a predictable way (setting up remote state properly is one of the more fiddly parts of Terraform usage)

- takes a snapshot of the current state

- runs "terraform plan" to produce a plan file

- takes a snapshot of the current state, which has now been refreshed by Terraform

- pauses here and waits for human approval of the plan

- takes a snapshot of the current state one more time, even though it's usually just another copy of the last state we snapshotted

- runs "terraform apply" to apply the plan created earlier

- takes a snapshot of the final state
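The steps above might look roughly like this as a Python sketch (all paths, labels, and the approval step are invented; a real wrapper would also handle remote-state setup and locking):

```python
import shutil
import subprocess
import time
from pathlib import Path


def snapshot_state(state_file: Path, snapshot_dir: Path, label: str) -> Path:
    """Copy the current state file aside so a confused run can be
    investigated or rolled back later."""
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    dest = snapshot_dir / f"{int(time.time())}-{label}-{state_file.name}"
    shutil.copy2(state_file, dest)
    return dest


def run(cmd):
    subprocess.run(cmd, check=True)


def deploy(workdir: Path, terraform="terraform"):
    state = workdir / "terraform.tfstate"
    snaps = workdir / "state-snapshots"

    snapshot_state(state, snaps, "pre-plan")
    run([terraform, "plan", "-out=tfplan"])       # produce a plan file
    snapshot_state(state, snaps, "post-refresh")  # plan refreshed the state
    input("Review the plan above, then press Enter to apply...")
    snapshot_state(state, snaps, "pre-apply")
    run([terraform, "apply", "tfplan"])           # apply exactly what was reviewed
    snapshot_state(state, snaps, "final")
```

Applying the saved plan file (rather than re-planning) guarantees that what runs is exactly what the human approved.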

All that state-snapshotting is an insurance policy against Terraform getting itself confused. There are definitely some gotchas in this area[1] but honestly we've only actually made use of these zealous state snapshots on two separate occasions, and they were both on our pre-production staging environment (which we deploy to more carelessly, as a dry run for production) rather than our production environment.

I have thought about open sourcing that wrapper script but sadly it has some assumptions about our environment built into it (e.g. locking using a specific service in our world, so that two deploys can't run concurrently) and I've not had the time to scrub them out and generalize it.

[1] https://gist.github.com/apparentlymart/657885e730d1e5abc6ea


Just set up remote state with a versioned S3 bucket; that's usually enough insurance.
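With modern Terraform that's a backend block along these lines (bucket name and key invented; versioning must be enabled on the bucket itself so every state write is retained):

```hcl
terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"      # bucket has versioning enabled
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
```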


I'd rather use BOSH, which has an explicit compare-and-repair model.

On the other hand, Terraform is much easier to get started with and much less opinionated.

Disclosure: I work for Pivotal; we contribute the majority of the engineering effort on BOSH.


git rebase -i master


That would seem to suggest that you commit either after every keypress or at least before using the backspace or delete keys, right?

It feels to me that "what really happened" only really applies to "what did we release?" rather than "how did we get to what we released?". Completely agreed that every release should be tracked in version control without modifications, but I'm skeptical that auditors care that you forgot to run the linter before committing and then you did a follow-up commit to add a semicolon where one was missed, but all of that happened between releases.


It really doesn't matter what you (or I) think. It matters what the auditors will accept.


From my (admittedly probably limited) experience with LLVM IR from writing my own compiler, it seems like if you were writing a program entirely in LLVM IR you could use a subset that could be compiled and run on any fully-supported LLVM target.

Of course that's a different proposition than e.g. compiling a C program to LLVM IR using Clang and then trying to compile that IR on a different target, or trying to interact with non-LLVM-IR functions that conform to the platform ABI.

Of course, the resulting native code may not be the best for the target, since e.g. a native integer on one platform might become a pair of smaller integers on another platform. But it could work.

With all of that said, I expect the proposal was to use the Rust compiler to compile the Rust compiler to IR, and I imagine Rust is complex enough that it must generate at least some target-specific IR. Perhaps one could take the generated IR and "normalize" it, but it's questionable whether it would be worth it.


rustc needs to interact with C ABI functions. To start with, rustc needs to be able to call LLVM. As LLVM is not written in Rust, this is done through the LLVM C API.


I think the spec you are looking for is JavaScript.

