Right now, data is cleaning bandwidth's clock. My internet connection is only marginally faster than it was five years ago, but now I want to ship around pictures, music, and movies wholesale, whereas back then I really didn't care. I'd estimate two orders of magnitude of increase in my data desires... and I was a power user five years ago (as I am today, though epic torrenters have me beat). My wife has gone from email and web browsing to shipping around video, too; for her the increase could be even larger.
Is it feasible to expect data growth to level off long term? Tough to say. Audio probably won't grow much, and while pictures and movies can get a lot bigger, eventually you have enough resolution. In the fuzzy long term, though, there are other possibilities for large-scale data movement that are harder to predict.
Processing power, meanwhile, is no contest. Local processing power is growing radically faster than anything else, and this will continue. It is growing so radically faster that the gap has long since penetrated the architecture itself: we now have deep, deep memory cache hierarchies desperately trying to keep up with local processing power... even reaching out five inches as opposed to five nanometers has a huge performance impact.
If you want to take advantage of that, you're going to move data closer to the processor. Processors are quite likely to live in the home, for a variety of legal and technical reasons. (The updated version of rms' Right to Read story is the Right to Compute Locally... long term, you are a fool of epic proportions to give up all computational power to the owners of the cloud!) Due to some fundamental latency issues, and the fact that the cloud will always be partially underprovisioned due to simple economic reality, there may still be a desire to move large amounts of data around in the future. Bandwidth is hard, and easily consumed.
(And while I'm here, since I've built the foundation: I don't think the cloud is the total future for these reasons. The cloud will never be able to be as reliable as local resources, because the cloud will always be distant, latent (as in latency), and busy. Local resources backing to the cloud is a far, far better proposition than a pure-cloud-with-dumb-terminals-everywhere play.)
I'm not so sure about the cloud. That huge amount of data comes with a lot of complexity as well, and much of it is only really valuable if you can share it (video) or combine it with data from others (data mining). If data needs to be close to the CPU, it could mean that both live locally or that both live in the cloud.
Local resources are idle most of the time. I wonder if that's economically sustainable in the long run. On the other hand you're always going to need idle resources to cope with peak demand. It's never going to balance out completely, even in the cloud.
I don't think data growth is going to level off. There are so many untapped sources: sensor data, 3D models, biometric data, etc. But I'm not sure people will process all that data locally. I'm not even sure where "locally" is. People move around, so the latency issue doesn't just arise when they're at home, next to their game console super computer thingy.
I worked on location based systems in the past. Latency was a big problem then (and probably still is), but having a PC at home with 16GB of RAM and a TB hard disk doesn't help.
The scarcest resource in mobile devices is not memory or bandwidth or the CPU, it's the battery. So for the time being, serious number crunching is not going to happen locally on mobile devices.
There are so many factors, and I know so little of potential technological breakthroughs or even plain physics. Technology doesn't advance linearly. Batteries could suddenly sustain 24 hours of biometric number crunching. There could be Gigabit WiSuperMax in the air in 5 years' time. I don't know.
But I feel people don't want super computers at home. I even block Flash simply because it keeps my laptop fan busy. I tried that World Community Grid software a while back, but I just couldn't stand all the noise and heat.
I doubt that people will keep buying high-powered general-purpose computers. My guess is that we'll have our game consoles, our mobiles, and various sensors, and the data is going to be streamed straight into Google's or Microsoft's data centers. It makes me feel a little uneasy, though.
Benefits are value received. The value that I get from a home computer does not depend on how little time it spends in its idle loop (or even turned on). The value also doesn't necessarily depend on how often I use it. (A home defibrillator is extremely valuable even if it's only used once, and fairly valuable even if it's never used.)
There are opportunity costs. If I buy a hard drive that I never use, I could have bought shares of Amazon instead. By your definition, money could never be wasted or used inefficiently. If I waste money and my competitor does not, then it's not economically sustainable for me.
Opportunity costs have nothing to do with utilization. Moreover, they're covered in my initial comment "Economic sustainability is a function of cost and benefits, not utilization."
Low utilization may imply "waste" but cost and benefit is what matters. Benefits that I get for the costs I incur determine whether buying a new PC is a good idea, not how idle it is.
In fact, trying to drive to maximum utilization frequently results in higher costs and lower benefits.
If you deny any link between benefits and utilization, then I understand your argument. But I think your argument is wrong because it contradicts the very purpose of buying capital goods.
You buy capital goods because utilizing them makes you a profit. Just owning them does not. But I agree that this is just a general rule which is not true in all situations, particularly not in economic downturns.
If I buy 3 TB of hard disk space expecting to serve 300,000 customers and then only 100 customers turn up, I have wasted much of my expenditure because it's useless. If I buy a dynamic amount of hard disk space from a cloud service on demand, then I have not wasted that money.
So, clearly, opportunity cost has a lot to do with utilization if you assume a positive connection between utilization and profit, which you apparently don't. Go ask any airline about it :)
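The trade-off in the 3 TB example can be sketched with a toy calculation. All prices and per-customer numbers below are invented for illustration; the real comparison depends entirely on actual rates:

```python
# Fixed provisioning: buy capacity up front for the demand you expect.
# On-demand: pay (at a premium rate) only for capacity actually used.
# All numbers are hypothetical.

FIXED_COST_PER_GB = 0.05   # assumed one-time cost per GB owned
ON_DEMAND_PER_GB = 0.15    # assumed cost per GB actually used (premium)
GB_PER_CUSTOMER = 0.01     # assumed 10 MB of storage per customer

def fixed_cost(expected_customers):
    # Sunk cost: paid regardless of how many customers turn up.
    return expected_customers * GB_PER_CUSTOMER * FIXED_COST_PER_GB

def on_demand_cost(actual_customers):
    # Scales with actual demand, not expectations.
    return actual_customers * GB_PER_CUSTOMER * ON_DEMAND_PER_GB

# Expect 300,000 customers (3 TB at 10 MB each); only 100 show up.
print(fixed_cost(300_000))   # paid in full even though demand flopped
print(on_demand_cost(100))   # tiny bill, despite the higher unit rate
```

With these made-up numbers, the on-demand bill is a small fraction of the sunk fixed cost when demand falls short, even though the per-GB rate is three times higher.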
> If you deny any link between benefits and utilization, then I understand your argument.
I deny it because there isn't any link - "excess" resources don't reduce benefits.
> But I think your argument is wrong because it contradicts the very purpose of buying capital goods.
The purpose of buying capital goods is to receive more benefits than the costs incurred, the metric being (benefits - costs) / costs. (Note - benefits are not profit.)
Benefits don't depend on utilization. (While the amount of money that 100 people will pay me to fly them from NYC to LA might depend on how full their plane is, it isn't affected by me having 1 vs 1e6 planes parked in AZ.)
Utilization can affect costs, but the connection isn't always strong.
For example, my PC could usually get by with 400MB of RAM. However, several times a year, I get significant benefit from having 1GB. That benefit easily exceeds the cost of the extra 500MB of RAM, even though the "utilization" of said extra RAM is low. (And no, the cost of renting RAM when I need it isn't less.)
Note that the "utilization is key" argument would suggest that I'd be better off with 400MB of RAM than with 500MB. That's clearly absurd, since 400MB would actually cost more than 500MB.
And then there's the defibrillator example. If I only use it once, is it useless?
You're misunderstanding me. The link between benefits and utilization is that in order to gain future benefits you spend a certain amount on capital goods, based on your expectation of utilization. You must have an idea of utilization because that's what determines cost and hence profit. That's the link I was talking about and it doesn't contradict your parked airplanes example at all.
The original question we were discussing was whether buying excess resources was economically sustainable. I think we can agree that economic sustainability depends on profits. Higher costs reduce profits and therefore threaten economic sustainability.
Now, you say the connection between utilization and costs is not always strong. No, it's not always strong, but it is strong enough for entire industries to focus on exactly that issue.
People are being fired right now because they cannot be "utilized". Planes are leased out so they do not rot in AZ hangars. It's because machines lose value over time. Owning them without being able to make money out of them means you lose money every month. So spare capacity becomes a huge problem if growth doesn't meet expectations.
Paying only for the resources you actually use increases flexibility and reduces costs. If your expectations turn out to be wrong, at least you haven't sunk the money into hardware. Many dot-coms would have been very happy to accept that logic in 2000.
But of course you pay a premium for that flexibility and if it turns out your growth expectations were right, you may have been better off not paying that flexibility premium. It's an insurance contract against the risk of wrong expectations.
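The insurance framing above can be put in expected-value terms. The probability and the prices below are made up purely to illustrate the break-even logic:

```python
# Flexibility premium as insurance: compare the sunk cost of buying
# hardware against the expected on-demand bill under uncertain demand.
# All numbers are hypothetical.

p_right = 0.3           # assumed chance the growth forecast is right
buy_up_front = 100.0    # cost of owning hardware, paid either way
rent_if_right = 150.0   # on-demand bill if demand shows up (premium rate)
rent_if_wrong = 5.0     # on-demand bill if it doesn't

expected_rent = p_right * rent_if_right + (1 - p_right) * rent_if_wrong
# Renting is the better bet whenever expected_rent < buy_up_front.
print(expected_rent < buy_up_front)  # True with these numbers
```

With these assumptions the expected on-demand bill (48.5) is well under the sunk cost of buying, so the premium is worth paying; flip the probability toward confident growth and the comparison reverses.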
> You're misunderstanding me. The link between benefits and utilization is that in order to gain future benefits you spend a certain amount on capital goods
I'm not misunderstanding you at all. I'm pointing out that your terminology is non-standard and inconsistent.
There is no link between benefits and utilization. Utilization only affects costs.
> Now, you say the connection between utilization and costs is not always strong. No, it's not always strong,
The original claim was that low utilization was always unsustainable. Conceding that the relationship between utilization and costs is not always strong is an admission that said claim was wrong. It is strong in some circumstances but not others.
The specific example was mostly idle computers, the claim being that having an idle computer is unsustainable.
> Paying only for the resources you actually use increases flexibility and reduces costs.
Not always. Consider my home PC. I could have gotten one less powerful for somewhat less money. The less powerful one would suit my needs much of the time. Moreover, said computer is idle much of the time. However, the costs of trying to optimize for utilization are significantly higher than the costs of just having a reasonable computer, even though the utilization is low.
The problem with the "actually use" argument is that there are costs associated with flexibility. That's why I mentioned 400MB of RAM. (Which reminds me - that use of "flexibility" is somewhat idiosyncratic. Having 1GB of RAM is more flexible than having 200MB and going out and renting 800MB on demand.)
However, that's just me. So, let's find out if you're any different. Do you own any devices that are significantly underutilized? Why?
I forgot about your defibrillator example. If you actually use it, it's not useless. If you don't use it, you can say with hindsight that it was indeed useless (apart from giving you peace of mind, maybe).
But it's a bad example because you can't own more or less of a defibrillator. Either you own one or you don't. And then it's not an economic question, because the answer to the question of what is the right price for saving my life is always the same: all I have, all I can borrow, and all I can steal.