I'll buy the crazy part. The "100% dead" bit loses me right off the bat. Surely there will be space for hardware abstraction, process models, memory protection mechanisms, networking, etc... in your futurist OS. And, amusingly, that code is already there in the OSes of today! Storage management is merely a big subsystem.
And as for dropping "files", I think that's missing the point. Files are just pickled streams, and there will always be stream metaphors in computing. How else do you represent "inputs" to your program (which, remember, still has to run "from scratch" most of the time, if only on test sets)?
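To make the "files are just pickled streams" point concrete, here's a minimal Python sketch (names and the `;`-delimited record format are my own invention for illustration): a consumer written against a readable stream neither knows nor cares whether the bytes came from a disk file or from memory.

```python
import io

def first_record(readable):
    """Read up to the first ';' from any stream-like input.

    Works identically whether `readable` is an open file on disk
    or an in-memory buffer -- the stream metaphor hides the difference.
    """
    chunk = readable.read()
    return chunk.split(b";")[0]

# In-memory bytes exposed through the same interface a file would use.
payload = b"frame1;frame2;frame3"
stream = io.BytesIO(payload)
print(first_record(stream))  # b'frame1'
```

Even on a machine with no disk at all, "inputs" would still likely look like this.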
I guess I see this as a much narrower optimization. A computer built out of perfectly transparent NVRAM storage is merely one with zero-time suspend/restore and zero suspended power draw. That doesn't sound so paradigm breaking to me, though it sure would be nice to have.
I think you're assuming a level of opacity to the OS that is true theoretically, but not realistically. Conceptually we can treat the computer as a black box which does the same thing, eventually, whether it's using cache, RAM, HDD, or the network, but realistically the limitations leak out all over the place and are embedded all over user-facing workflows in the form of opening, saving, uploading, and such things. They may always be happening in some sense, but there is no intrinsic need to involve the user in them.
Assuming that NVRAM becomes dense enough to replace storage in practice -- which is a big assumption, but it's happened to tape and is happening to hard drives right now -- concepts like launching a program, opening and closing a file, even booting will become mostly academic. Certainly they'll be of no crucial interest to users, to whom the distinction between what something is and what it does has never made that much sense.
Sure you could apply all the same abstractions over the top, but if you were designing your OS from a blank slate, why on Earth would you? And it will only be a matter of time before one of those blank-slate OSes is compellingly superior enough to the old-school paradigm, and users will start switching en masse.
Fine fine. Let's just say that the last "blank-slate" OS to achieve commercial success did so, what, 35 years ago? If I'm assuming too much transparency in the NVRAM technology (and honestly, I don't think I am -- DRAM is hardly transparent already, cf. three levels of cache on the die of modern CPUs), then you're assuming far more agility in the OS choice than is realistic in the market.
Well, there hasn't been a major upset in the PC paradigm in 35 years :) Really, I agree with that bit -- marketing the thing would be a nightmare. But whether it's the first company to try it or the tenth, at some point the utility benefits will become too great for users to ignore.
> DRAM is hardly transparent already, c.f. three levels of cache on the die of modern CPUs
Keep in mind I'm talking about interface, not implementation. My code might care about cache misses, but my users have no reason to (except in the very, very aggregate). We leak a user-facing distinction between storage and memory because the difference is too significant to pretend it doesn't exist.
I think that OS choice agility is increasing rapidly. Consider the number of people whose primary computers are mobile phones (replaced every 1 to 3 years) and whose secondary computers are glorified web browsers. This is rapidly becoming true for businesses as well, as they adopt more web-based tools.
All mobile phone OSes are still based on a filesystem. If you want to claim that the user's perspective of the computer is going to move away from a "file", then I agree. If you think the underlying software is going to do so simply because it got no-power-to-maintain memory, I think you're crazy.
Straw man. I said nothing of my opinion on non-volatile memory. I was only pointing out that more and more users are less and less tied to any particular operating system.
Uh... the whole subthread was about NVRAM and the likelihood of it replacing the filesystem with different storage models. You'll have to forgive me for inferring an opinion about the subject we were discussing; I just don't see how that can be a straw man. It's just what happens when you inject a non sequitur into an existing discussion.
Not a non sequitur at all. You wrote "you're assuming far more agility in the OS choice than is realistic in the market" which was a point to support your case about NVRAM. I was merely stating that point was weak because, realistically, the agility in OS choice is increasing within the market.
I actually wrote in defense of the traditional filesystem model in another post on this thread. Just because I don't agree with your reasoning doesn't mean I don't agree with your conclusion.
The stream metaphor isn't appropriate for all types of data. Memory allows the storage paradigm to be picked to suit the task at hand.
Consider Redis, which is a great example of this. How do you store Redis's data efficiently? Well, it turns out that AOF files are slow to replay at startup, and disk-backed virtual memory is slow. The problem goes away instantly with NVM -- the job is done with no filesystem API involved.
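For anyone unfamiliar with why AOF startup is slow: an append-only file is a log of every write, and recovery means re-executing the whole log. A toy sketch (not Redis's actual code; the command format here is invented for illustration):

```python
def replay_aof(log_lines):
    """Rebuild in-memory state by re-executing each logged command.

    This is O(total writes ever made), which is why AOF recovery is
    slow at scale. With data resident in NVM, the structure would
    simply still be there -- no replay step at all.
    """
    state = {}
    for line in log_lines:
        parts = line.split(" ")
        if parts[0] == "SET":
            state[parts[1]] = parts[2]
        elif parts[0] == "DEL":
            state.pop(parts[1], None)
    return state

log = ["SET user:1 alice", "SET user:2 bob", "DEL user:1"]
print(replay_aof(log))  # {'user:2': 'bob'}
```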
I dare say that the stream metaphor is a better fit for more types of data than Redis is. To first approximation, all data in the modern world is video files. You really want to store those in a raw memory space?
That's OK until you need to read hundreds of streams at once from a stream device. You end up with random access, at which point the stream paradigm breaks down and you have to use memory.
Filmmakers don't work with compressed video files (which I picked precisely because they're inherently a stream, and because I'm not kidding when I say that they're basically all the data there is in this world). And they seem to be doing quite well with DRAM anyway.
Fair point. Although, I'd wonder whether video is inherently a stream, or whether that's a limitation imposed by storage speeds. I might only want to watch video front-to-back, but (without knowing much about encoding) I could easily imagine that my experience might be improved by my computer having random access.
Modern video encoding is based on predicting from the last frame -- that is, only storing a full frame occasionally, and most of the time just storing diffs from the previous frame. ^+ This means that the data is inherently a stream: a decoder needs to have completed decoding the previous frame before it can start on the next.
(^+ and B-frames, but let's not overcomplicate things)
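The sequential dependency is easy to see in a toy version of delta coding (this is my own simplified sketch, not any real codec -- real codecs predict blocks with motion compensation, not raw per-pixel diffs):

```python
def encode(frames):
    """Store the first frame whole (the 'keyframe'), then only diffs."""
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([c - p for p, c in zip(prev, cur)])
    return deltas

def decode(deltas):
    """Each frame is reconstructed by applying its diff to the previous
    decoded frame -- so frame N cannot be decoded before frame N-1."""
    frames = [deltas[0]]
    for d in deltas[1:]:
        frames.append([p + x for p, x in zip(frames[-1], d)])
    return frames

frames = [[10, 20], [11, 20], [11, 22]]
assert decode(encode(frames)) == frames
```

Random access into such data means seeking back to the nearest keyframe and decoding forward from there, which is exactly a streaming read.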
It's still effectively a stream even if you skip around in time. Each frame is so large that it is a stream itself. So even if you watch a few seconds here and a few there, your use case is still pretty much optimized for streaming data.