"some book some java dudes wrote 20 years ago" — If you're referring to "Design Patterns", the Gang of Four were writing about C++, it was 1994, and Java wasn't out yet.
No one denies that capacity planning is hard. There are books written on the subject. The points you make are exactly why you need to do capacity planning and plan for mitigating failures. If you aren't planning for 2x (in fact, more) growth, then I'm confused about what kind of growth you really expect in your service.
If you aren't giving yourself room for expected and unexpected loads, you're doing it wrong. Add capacity and load testing to your process.
If you work on systems where the occasional 2x spike in traffic is manageable, or where planning for 2x capacity requirements in the future is easy, then you don't have the same problems suhail has.
I work in advertising, for example. We could have 10 partners at 1x. Add 10 more and be at 1.1x or 2x; then add a large partner and be at 7x. There isn't a pattern to when we get partners from any of these groups, but when we do, they need to go live as quickly as possible, and sourcing and prepping hardware in situations like that isn't feasible. Nor is it feasible to have hardware on standby for the occasional 7x partner, since you don't know when they're coming along, and they could end up being a 10x partner.
If you aren't giving yourself room for expected and *unexpected* loads, you're doing it wrong.
You keep using that word; I'm not sure it means what you think it means.
Over here in the real world, many applications (and notably web-applications) have one thing in common: They change all the time.
Your capacity plan from October might have been amazingly accurate for the software that was deployed and the load signature that was observed then.
Sadly now, in November, we have these two new features that hit the database quite hard. Plus, to add insult to injury, there's another old feature (that we had basically written off already) that is suddenly gaining immense popularity - and nobody can really tell how far that will go.
Capacity planning isn't just hard, it is costly. You have to profile every new version of your app, and every new version of the software you depend on. You have to update your planning models with that data, and then you have to provision extra hardware to handle whatever traffic spikes you think you'll be faced with within your planning window. Most of the time, those resources will be idle, but you will still be paying for them. Plus in the face of an extraordinary event, you'll be giving users a degraded experience.
Using "the cloud" doesn't solve all those problems, but your costs can track your needs more closely, and with less up-front investment. Rather than carefully planning revisions to your infrastructure, you can build a new one, test it, cut over to it, and then ditch the old one.
You should still profile your app under load so you can be confident that you can indeed scale up easily, but even that is easier. You can bring up a full-scale version to test for a day and then take it down again.
I'm not against capacity planning, but it has its time and place.
The fundamentals of capacity planning do not change based on the magnitude of your data growth. Why would they?
We're mostly talking about looking at your data growth curve and extrapolating points in the future. Why would that become impossible just because the curve is steep?
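To make "extrapolating points in the future" concrete, here's a minimal sketch of fitting an exponential trend to a growth curve and projecting it forward. The numbers are invented for illustration; a real plan would use your own traffic data and a proper forecasting tool.

```python
import math

# Hypothetical monthly request volumes (in millions) -- a steep curve,
# invented for illustration.
observed = [10, 14, 20, 28, 40]  # months 0..4

# Fit exponential growth: least-squares line through log(volume).
n = len(observed)
xs = range(n)
logs = [math.log(v) for v in observed]
x_mean = sum(xs) / n
y_mean = sum(logs) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, logs)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def projected(month):
    """Extrapolated volume for a future month under the fitted trend."""
    return math.exp(intercept + slope * month)

# Even when the curve is steep, the extrapolation itself is mechanical:
print(round(projected(6)))  # two months past the last observation
```

The steepness of the curve changes the answer, not the method; what actually breaks the method is the step-function growth described elsewhere in this thread.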
If you weren't paying such an enormous premium for your hardware, you'd have a lot more cash. On a per-dollar basis, you're paying anywhere from 2-10x the price for computing power on the cloud, depending on which resource you look at (CPU, Memory, Disk IOPS, etc).
suhail, like myself, works in an industry where one partner turning on can 2-4x your capacity requirements, while 10 other partners won't change your traffic profile much at all. It's easy to say "plan in advance for growth," but when there is a lot of variance in your growth, this becomes a problem: you will often find yourself overspending for unused capacity or struggling to meet new capacity. If your growth is a smooth line then yes, it's easy to figure out. But not everyone's growth follows such a simple line.
The problem is not that it's hard; the math can be done, the necessary capacity can be calculated, servers can be ordered.
The problem is that buying 3x the number of servers (or number of data centers) that you need "baseline", to handle the spikes, is a staggering expense.
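To put rough numbers on that expense (all figures invented for illustration, including the per-unit cloud premium mentioned upthread):

```python
# Illustrative numbers only -- real prices vary wildly by provider and workload.
baseline_servers = 50
spike_servers = 150           # 3x baseline, needed only during spikes
owned_cost_per_server = 300   # hypothetical monthly amortized cost, owned
cloud_cost_per_server = 900   # hypothetical cloud price at a ~3x premium
spike_fraction = 0.05         # spikes cover roughly 5% of the month

# Option A: own enough hardware for the spike, and pay for it all month.
own_everything = spike_servers * owned_cost_per_server

# Option B: own the baseline, rent the spike capacity only while it's needed.
burst = (baseline_servers * owned_cost_per_server
         + (spike_servers - baseline_servers) * cloud_cost_per_server * spike_fraction)

print(own_everything, burst)
```

Under these made-up numbers, the mostly-idle 2x of owned capacity dominates the bill, which is the "staggering expense" in question; shift the spike duration or the cloud premium far enough and the comparison flips.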
The comparison with Torvalds is bizarre. Linus has done an enormous amount of self-promotion (warranted as it is), and even admits to having an ego the size of Joe's biceps. So I don't know what point you're trying to make there.
Next, and most importantly: while you (a bastard love-child of Kernighan, perhaps) may find this feature banal, a decent number of C programmers aren't familiar with this method; and if they are, they haven't done shit with it. So hating on someone for actually writing free code that makes a bunch of devs' lives easier is, well, just kinda jealous.
Lastly: if a comment like yours does, in fact, generate a large number of 'U Mad' replies -- well, that is strong evidence that you are, in fact, mad.
(btw, Joe is not 20 years old. "20" actually refers to the number of women Joe sleeps with in a typical night.)