Spikability - An Application's Ability to Handle Load (iron.io)
11 points by edsrzf on June 6, 2012 | hide | past | favorite | 5 comments


I'm confused: how do you use message queues to handle traffic spikes? If a user request is sitting in a queue, aren't they staring at a white screen until I get back to them?

edit: "And you can launch more servers to eat away at the queues if they keep growing." sounds a lot like autoscaling to me. The graph is misleading.


The pattern is to put the work in a queue, respond to the user immediately, then process in the background, outside the request cycle.
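A minimal sketch of that pattern, using Python's stdlib queue and a background thread (a real deployment would use a broker like IronMQ, RabbitMQ, or Redis rather than an in-process queue; all names here are illustrative):

```python
import queue
import threading

# Hypothetical in-process job queue standing in for a real message broker.
jobs = queue.Queue()
results = []

def worker():
    """Background worker: drains the queue outside the request cycle."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut down
            break
        results.append(f"processed {job}")  # e.g. send email, resize image
        jobs.task_done()

def handle_request(payload):
    """Web handler: enqueue the slow work and respond immediately."""
    jobs.put(payload)
    return "202 Accepted"  # user gets a response right away, no white screen

# Start one worker; under load you would launch more to eat the backlog.
t = threading.Thread(target=worker, daemon=True)
t.start()

status = handle_request("welcome-email-for-user-42")
jobs.join()     # wait for the backlog to drain (demo only)
jobs.put(None)  # stop the worker
t.join()
```

The user-facing response returns as soon as the job is enqueued; the spike lands on the queue, not on the request handler.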

Regarding autoscaling: yes, you are scaling your worker servers to work down the queues, but that's far less urgent/critical than autoscaling your app servers would be if they had to handle the load directly.
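One way to picture "launch more servers to eat away at the queues" is scaling worker count off queue depth. A hedged sketch (function name, thresholds, and thread-based workers are all illustrative; real systems would launch machines or containers):

```python
import queue
import threading

def scale_workers(jobs: queue.Queue, workers: list,
                  target, backlog_per_worker=100, max_workers=10):
    """Add workers while the backlog exceeds what the current pool covers.

    Because the queue absorbs the spike, this can lag behind demand
    without users noticing -- unlike autoscaling app servers.
    """
    while (jobs.qsize() > len(workers) * backlog_per_worker
           and len(workers) < max_workers):
        t = threading.Thread(target=target, daemon=True)
        t.start()
        workers.append(t)
    return len(workers)

# Example: a 250-job backlog at 100 jobs per worker calls for 3 workers.
q = queue.Queue()
for i in range(250):
    q.put(i)
pool = []
count = scale_workers(q, pool, target=lambda: None)
```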


So, I understand they're trying to sell their product here, but please...

In most web apps there is exactly one user-initiated task that can be deferred: sending e-mail. Last time I checked, my "spikability" was not bounded by sending e-mails.


I was going to use iron.io for a Hackathon, but didn't end up finding a way to use it in my project. I looked through the docs though and it's really interesting.


You might consider looking again. Check out this post (http://blog.iron.io/2012/05/new-ironworker-command-line-inte...) about new .worker files and ng gem (https://github.com/iron-io/iron_worker_ruby_ng). Docs are really improving rapidly too. Finally, don't hesitate to stop by the support channel (http://get.iron.io/chat) with any questions.



