The desktop is broken not because of the file/folder paradigm but because we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop, instead of through their Web interface. You should be able to save a Facebook post or Tweet and their replies to disk.
Why can't you? Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content. Furthermore, browser APIs have never embraced the file system because there is still a large group of techies who think the browser should be for browsing documents and not virtualizing apps (spoiler: this argument is dead and nobody will ever go back to native apps again). Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store.
I'm hoping the new Chrome file system API will bring a new era of Web apps that respect the file system and allow you to e.g. load and save documents off your disk. However, this still won't be good enough for multiplayer apps, where many devices need to access the same content at the same time. I don't know if there is any real way we can go back to the P2P paradigm without destroying NAT - WebRTC tries but WebRTC itself resorts to server-based communication (TURN) when STUN fails.
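For anyone who hasn't seen it, here's roughly what load/save looks like with that API - a minimal sketch (Chromium-only for now, error handling omitted):

```typescript
// Minimal sketch of loading and saving a document with the File System
// Access API (Chromium-only at the time of writing).
async function loadDocument(): Promise<string> {
  // Ask the user to pick a file; the page only gets access to that one file.
  const [handle] = await (window as any).showOpenFilePicker({
    types: [{ description: "Text documents", accept: { "text/plain": [".txt"] } }],
  });
  const file = await handle.getFile();
  return file.text();
}

async function saveDocument(contents: string): Promise<void> {
  const handle = await (window as any).showSaveFilePicker({
    suggestedName: "document.txt",
  });
  const writable = await handle.createWritable();
  await writable.write(contents);
  await writable.close();
}
```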
> this argument is dead and nobody will ever go back to native apps again
I agree with your post in general but not this. I see a lot more interest in self-hosting stuff lately, precisely because of the concern you mention that online services do ads and tracking. More and more people seem to be doing this.
And I personally really prefer native apps over web apps or electron stuff.
When self hosting is as easy (for your grandma who likes cats) as downloading a container from the Windows Store and double-clicking it to open, install, and start the server, then we'll truly have made leaps and strides in fixing centralization.
Self hosting is hard because ops is hard [1]. People aren't yet in the mentality that you should provide an official modular deployment every time you provide a server binary. Everything-as-a-container wouldn't be the worst way to take the desktop [2]. Isn't that kind of how Apps work on OS X?
[1]: The ironic part is that most people don't keep anything long enough for a hard drive or any other component to fail anymore. So the argument about the cloud abstracting away physical hardware maintenance for everyday consumers like you or me is ... dubious.
[2]: Yes.. I just spent the weekend learning nix-build. Goodness, it's super easy to containerize all the things! :) I feel like a zealot when I start using nix to do what I could have done with a zip file. But there's something magic about having a zip file of assets with its own shell.
Self hosting is hard because we are limited by our ISPs and what they allow us to do with our connections. Why don't they allow people to self-host, and when traffic starts to get big, offer an option to migrate that data to the cloud instead of clogging the municipal pipes, if that's really the problem? The way companies like Comcast and the DSL providers can succeed in this arena is to make self hosting easier. Why do I as an end user need to understand the infrastructure when all I am trying to do is share data or information?! I am playing the role of an uneducated user here, not a software engineer by career. It's disheartening that basic things like this aren't solved for people at the ISP level. I shouldn't have to find a service like Weebly, DO, AWS, etc. I just want to be able to share my info, and my ISP at the very least should provide the basic framework for me to do it. When my content becomes popular, then adjust accordingly.
I don't think that anyone self-hosting is able to make a single dent in the consumer internet infrastructure. Usually when you're self-hosting I guess you're handling either just your friends and family or perhaps at most up to 1000 strangers that follow your hobby.
Hypothetically, in a future where self hosting is non-hostile, will you see people self-hosting startups and the like? Yeah, maybe feasible. I think at that scale you start to care about things like uptime and maintenance. But I think the biggest winners of self hosting are the photographer who saves $5/mo on the blog that people rarely read, or the kid who doesn't have to pay $10 to play Minecraft with their friends, or the family friend with 2TB of family data who doesn't want to pay $20/mo for Dropbox or the like when they already have the hard drive to store it. Yes, it needs to be simpler for these people! There's a whole economy in making it difficult for them!
> Hypothetically in a future where self hosting is non hostile, will you see people self-hosting startups and the like?
Huh, is that not a thing anymore for startups today? My experience, starting in 2003, across two companies, was that if you needed any service fast and on the cheap, you were forced to host it yourself. Run your own mail server, web server, DNS, server housing, all the web apps, etc., and of course spend most of the day developing your actual product. Have the hosted options actually gotten so cheap and reliable these days? Is it a mindset thing?
My gripe with the cloud or SaaS solutions I've used over the years is always that managing and backing them up is difficult tending towards impossible, and without those you can't rely on these services; it's just voluntary vendor lock-in. When self-hosting, you're actually (forced to) learn how they work and be able to fix them. Self-hosting, to me, is the simpler, more reliable option if you depend on a service for your business, because it enables you to fix it yourself when it is not working. It doesn't prevent you from outsourcing the work if you have the money, but worst case, you still have direct access to your property.
With the hosted options, in my experience, you get all these fancy promises on availability and it being the latest, smartest tech, and then the service is down for half a day, your data gets restored from days-old backups (if at all), and there is nothing you can do except tell your customers "we're sorry and working on it" while you wait for it to come back. :-(
>and then the service is down for half a day, your data got restored from days old backups (if at all)
It's not unheard of for the same thing to happen with in-house systems.
The difference is that you may have a wider range of options to avoid and respond to any outage if you run things yourself. That seems to be a rather theoretical advantage though. Every time there's a new ransomware attack, it's always those in-house deployments that are hit hardest and take the longest to recover.
I think under optimal conditions self-hosting is superior. But conditions are rarely optimal. As soon as you have to convince non-technical management to invest in non-productive necessities or in contingency planning you're already in a sub-optimal position.
The biggest win we could achieve for self-hosting doesn't involve people actually self-hosting. The idea of personally maintaining your own infrastructure is unlikely to scale to the general population - but what we're really after is ownership of data and the ability to own infrastructure.
So, I think, the ideal situation would be to have a combination of big and small companies offering storage and compute as a commodity. You'd pay a fee to keep your stuff hosted somewhere; any time you see a better offer, you can migrate to a different provider without much hassle, and with near-zero downtime. Cloud services would work by shipping their code to your data, not the other way around[0]. And if you were so inclined, you could just build your own infra, or even buy a turn-key "self-hosting in a box" kit.
Pieces of that vision are already here. Compute providers are plenty. You can order "self-hosting in a box" kits. Internet architecture makes everyone's computer equal (at least in theory, ISPs mess it up with NAT, and their T&Cs). The only thing missing is the part where you own your data, and SaaS vendors serve you - the bit that makes SaaS truly be Software as a Service, instead of Serfdom as a Service.
--
[0] - Preferably with homomorphic encryption preventing SaaS vendors from putting their hands in the cookie jar, if we can get that to work without creating another blockchain-level environmental disaster.
You mean good old webhosting?
What I don't get about the HN crowd is making every simple, already existing, already proven, and already solved problem so damn hard.
I'm in the EU; I host my websites at multiple local webhosting companies. They're small enough to care about support and big enough to guarantee speedy and reliable service. By law they're not allowed to go through my data (of course they can and I have no way of proving that they did), so the legal deal between me and them is crystal clear. Who do you call when your Amazon Web Shit serverless thingy doesn't work anymore?
I can and did move several webapps and websites from one webhosting company to another. It works flawlessly. And besides waiting a couple of hours for a DNS change, it's almost instant. I get it, you can't do that easily with a system with a million users.
This so called problem was already solved decades ago. It was called personal computers and the internet. When people started calling the internet 'the cloud' then things went downhill.
Excuse me for the slight rant. I should go outside and see some more sun. ;)
Yes, this is solved. I'm saying that there's another problem that needs to be solved too: control over data.
Regardless of what you use for hosting your own stuff, if you want to use a third-party SaaS as a user, they own the data. Want to make a document on Google Docs? That document lives on Google's servers, it's forever tied to their service and mined by them. There's no artifact you can hold on to, other than your user account.
What we need is a system where the data for that Google Docs document lives in a place you control - be it your own hardware, or some hosting you rent somewhere. It's the SaaS that should come to the data, and operate on it there. That way, if you lose your Google Docs account, or decide to edit the document with something else, you actually have that document, in its canonical form. Same for all other SaaS.
In an ideal world, yes. In practice, that's impossible to do up front, so the next best thing would be open formats - i.e. openly and fully documented ones. The goal is to break the leverage a vendor has over the users when using a closed, proprietary format.
> There's a whole economy in making it difficult for these!
And there is your niche right there. Individual companies or persons do not need to make 'dents in markets' all by themselves. A host of people doing the same might. But for the individual - especially one with sustainable income objectives, not hockey-stick growth - there's a good place in the market, I think.
" I think at that scale you start to care about things like uptime and maintenance."
I guess none of the members of the groups you mention will ever grow to a scale they care about. And that's a good thing, because they are distributed. And I think it creates a better internet, which is more of a wide network instead of a shallow graph of a couple of mega-nodes.
Self-hosting isn't hard, it's currently impossible.
The real problem is who owns your data. Because if you use Word or Photoshop, your files are locked inside Word and Photoshop. And this is still true if you use FOSS alternatives, because there's only very limited support for metadata-aware sharing between applications of all kinds.
It would be super-useful to have (for example...) seamless links between text editors, web design applications, web hosting systems, and even video editors and ebook publishing tools. But that's not where we are now. There's some limited interchange, but most cross-domain transfers are difficult and fragile, and some are impossible.
Cloud is just the online version of the same model. When you have proprietary control of user data through proprietary file formats which actively frustrate open sharing of data between applications, it doesn't matter if the data is stored locally or in the cloud. It also doesn't matter if you're using a mobile or desktop UI.
The FOSS people have always been looking through the wrong end of the telescope. The real revolution would be open data which is wholly and exclusively owned by users (or user groups for collaboration) and loaned to proprietary software for specific limited tasks.
Which is the opposite of how things work now. Applications and products own your data and they let you access it - but only if you ask them nicely. And - increasingly - only if you pay annually for the privilege.
So containerisation or self-hosting or whatever is a non-solution unless it also gives data back to users.
Which is also why the fragments idea won't work. There's limited use in trying to automate or manage or otherwise AI-ify access to data that you don't truly own anyway.
In fact a new kind of shared Internet would be a very useful thing. But it would need a ground-up redesign of everything, including browsers, mobile apps, desktop applications, operating systems, search, and the financial and legal frameworks surrounding them.
I'd love to see that happen. But right now in 2021 it just doesn't seem likely.
> The real revolution would be open data which is wholly and exclusively owned by users (or user groups for collaboration) and loaned to proprietary software for specific limited tasks.
How would that work in practice? By its very nature open data would also be accessible to proprietary programs although the reverse need not be the case.
Seems like we will need open data and code. Either one open will not do.
> How would that work in practice? By its very nature open data would also be accessible to proprietary programs
That is a good thing. If program X works best on my data I want to use it. There are a few examples where people do mix programs from different companies. Musicians use MIDI to connect their favorite keyboard to a synthesizer from a different company all the time - sure it is tied to hardware, but it need not be and is a perfect example of what should be possible for any user data: mix and match.
> although the reverse need not be the case.
It doesn't have to be, but if users demand it, it will be.
> Seems like we will need open data and code. Either one open will not do.
Open data means we can create the code. Closed data is a lot harder to deal with than closed code.
The reason MIDI is a success is the actual data is quite simple. It's just a stream of keypress and numeric control updates. And there's no incentive for a synthesizer manufacturer to block access to incoming or outgoing data.
When your data is complex, the processing done to it will be complex, especially if you need to guarantee invariants (eg. referential integrity or database constraints).
One example of this that I have been thinking about is chat. Instead of having 15 different apps, we should be able to choose which UI we want to use, and that UI then opens the chat streams. Kind of like email, where you can use any client.
However, most big companies would fight this every step of the way, as they lose control.
Edit: I envision it a bit like streams of data that you can subscribe / push to. Think RSS mixed with a pub/sub type of model. You would subscribe to the hackernews datastream, and submitting articles and comments are done using push. The push message would have some predefined metadata fields that are obligatory (article url, title, summary or comment text).
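Concretely, I picture a push message looking something like this (just a sketch - the field names are invented, not any existing spec):

```typescript
// Hypothetical shape of a message pushed into a public data stream.
// Field names are illustrative only, not from any existing protocol.
interface StreamMessage {
  stream: string;          // e.g. "hackernews/frontpage"
  kind: "article" | "comment";
  articleUrl?: string;     // obligatory for articles
  title?: string;
  summary?: string;
  commentText?: string;    // obligatory for comments
  inReplyTo?: string;      // id of the parent message, if any
  author: string;
  postedAt: string;        // ISO 8601 timestamp
}
```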
This is centralisation from the bottom up! You don’t want this. If everything talks the same protocol, some giant will eventually own the protocol in the same way Google stole the web with Chrome.
Can you imagine if Android licensed iMessage instead of building Hangouts? Yes, we’d all be texting on the same protocol, and yes we’d have a choice of clients, but at what cost?
True, I was mainly thinking about the HN community (and the more technical people around me) instead. This is not for my grandma who likes cats (PS: So do I!)
Yes this is kinda how apps work on macOS, but not all apps yet. The new sandboxed container model isn't mandatory yet. Some apps are just a folder of files with no kind of containerisation at all (other than the .app itself being a folder rather than a file). More modern apps store all their data in a containerised filesystem though.
It leads to some cool things you can do like easily capturing the icon of an app or even changing it without having to change the app itself.
But on the self-hosting side, docker is making big strides there. Almost everyone deploying something like Home Assistant for example will do it through a tree of managed docker containers, in many cases without even knowing it :) It comes with a supervisor (which is also dockerised) which manages that part very well.
I think most consumers I know keep their computers for long enough by the way.. Phones come and go, but desktops and even laptops tend to stick around until they do break and I can't fix them anymore or just heave a sigh and go like "NO this time you really have to get a new one, Windows Vista hasn't been supported for years"
> But on the self-hosting side, docker is making big strides there.
Docker is only part of the puzzle. Even being proficient with Docker (and running services in a traditional way), there was always a big barrier to self-hosting for me in networking. If you lose all of your cloud-equivalent functionality once you leave your home network, it isn't a realistic alternative. I'm not a network whiz and the builtin VPN functionality of my NAS is sadly not up to par either.
In comes Tailscale[0], which made it dirt easy to self-host my stuff and have it available anywhere (given there is a Tailscale client for it which has been the case for all my devices). Since I started using it, I've completely migrated my contacts, calendar, zettelkasten (Trilium) to self-hosting and started some home automation projects. For someone who tried and was stuck at the networking part in the past, it truly is a game changer (and I'd like to think I'm not the only one that is held back by that).
Selfhosting these days is actually a breeze with docker.
I don't like using docker for development or for deployment of SaaS, but as a delivery mechanism, it's really great
While self-hosting as commonly understood is not for everyone, I really hope and think that small-community hosting as a service will become a thing. Basically there's a number of places like https://syntaxserver.io/ which will host (for example) nextcloud for you. You should be able to get that as a supported service from a local company/organisation/group for a price comparable to what huge SaaS businesses would charge. With the difference that you can move the backup anywhere you want, and your data does not touch other people's data.
On Reddit, among colleagues around me, in Hacker News posts/initiatives, etc. 2-3 years ago everyone was all "Hey look at my new Office 365 setup". Now the cool new thing is self hosting, often driven by a desire for more privacy. There are also new businesses around this use case. Look at https://www.beeper.com/ for example. It's a hosted Matrix service, but all the bridges containing your private data are self-hosted.
> My money's on your anecdata being 100% from a techie bubble.
Absolutely. But this is where things start before they get mainstream. Things pick up traction here. Then they mature and commoditise and make their way to the mainstream.
> Absolutely. But this is where things start before they get mainstream. Things pick up traction here. Then they mature and commoditise and make their way to the mainstream.
1. Where is the money in that aka is there more money in that than in services?
2. Will it be braindead easy to use?
My money is on no and no so I don't see how the chasm crossing will happen.
> The desktop is broken [...] because we stopped using files to represent information.
This is it right here. Our entire world and notion of the internet is based on serving data stored in a file from one person to another. Once developers started chasing too many conveniences and started to "move fast and break things", we decided it was good enough to just store everything in a database, or serve it as JavaScript. These technologies are great, but they go completely against everything our computing paradigm stands for.
The file system is nice because it's the same database for everything. I don't want all my programs to maintain their own databases, since then how do I search and move stuff in bulk?
No, but all your programs can/could share the same database, and you could cross-reference anything whenever needed. At least that's how I manage it. I have a single postgresql database with per-program/per-instance schemas.
I just did my taxes and figuring out what to pay was just a single select over some tables in the paypal and various per-bank schemas + a schema that had a table of currency conversion rates from our central bank applicable for each month of the year for tax purposes, expected by the tax man (or woman). Quick and easy. I don't even bother with UI, for these once or twice a year needs, just like I wouldn't bother with writing UI for some mp3 conversion task. Just a simple script will do.
Filesystem is great for arbitrary data/files with no schema. Random pdf files, code, programs, etc. But anything that has some obvious schema and comes in large quantities and perhaps needs to be modified/synced with third party data source, I like having such things in the database. It's so much more useful that way, because it's much easier to do something with the data.
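For illustration, the whole thing is not much more complicated than this kind of cross-schema query (a sketch only - the schema and column names here are invented):

```typescript
import { Client } from "pg";

// Rough sketch of the kind of cross-schema query I mean; the schemas,
// tables, and columns are invented for illustration.
async function incomeInLocalCurrency(): Promise<void> {
  const client = new Client({ database: "personal" });
  await client.connect();
  const { rows } = await client.query(`
    SELECT date_trunc('month', p.completed_at) AS month,
           sum(p.amount * r.rate_to_local)     AS income_local
    FROM   paypal.payments p
    JOIN   taxes.monthly_rates r
      ON   r.currency = p.currency
     AND   r.month    = date_trunc('month', p.completed_at)
    GROUP  BY 1
    ORDER  BY 1;
  `);
  console.table(rows); // good enough; no UI needed for a twice-a-year task
  await client.end();
}
```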
A local database will store data in a file, but I get what you're saying. Files are a common interface that allow you to pipe data around, keep it portable and malleable.
Yes. One with a generic schema, and to which the user has full access rights.
Fundamentally, it's not the files that make the filesystem great. You could devise different models, perhaps a relational one, and paper over the complexity with well-designed UIs. They'd probably still be more complex than the filesystem - files are about the simplest data storage abstraction you can invent[0] - but they'd be serviceable, and users would learn.
What makes the filesystem great is that it's an old abstraction, designed in the ancient days back when computing was still about enabling users. Bicycles for the mind and all that. People cared about making things useful to users, instead of just shamelessly exploiting them. So, designed back then, the filesystem grants users the vocabulary to manage their data and freedom to do so, and it's so ingrained that - despite their best efforts - companies weren't able to completely take it away.
Filesystem persists for the same reason e-mail persists. Despite its warts, it's one of those technologies made before the computing industry became exploitative.
--
[0] - Despite the frequent claims to the contrary, coming from the web and mobile world. But guess what, data magically held in an app and "shared" by magic isn't easier to understand; it's just not understood at all - users have no mental model for this. The mobile app approach works only because it removed all data management features except the share button.
You are missing the point here by seeing only the technical aspect. You are confusing the technical description with the abstraction.
A database is hardly, if at all, accessible to the user. Files are an abstraction that enables users to own their data. Once you have a file, you can do more than open it in your app. You can store it wherever you want, edit it with any software you wish, arrange it the way you are comfortable with.
As a user, you can't do that with your SaaS database; you must rely on the "export / share" function of your SaaS provider, hoping it will export all data in a readable format and that the import function exists and is reliable. You don't own anything, and as soon as you stop paying your subscription, you are stuck with nothing.
File formats like DOCX or PSD (fitting extension) are almost impossible to parse 100% correctly and render without the (hired) software used to create them. While you may be able to copy your files, without the software they are quite useless.
You are right and it's a real issue, but you can share them with whoever you want, however you want. You can just keep them and be pretty confident that they will still be readable years from now: even if it could become difficult as time passes, if the format is nothing too exotic, chances are you will at least be able to read them.
And I prefer a "95%" correct DOCX or PSD that I can recover and rework if needed than a "0% this App is not available in the WhateverStore anymore".
For popular formats that would be possible, but there are a lot of binary-only formats that are completely impossible to parse these days. You're free to copy them, but the bits are essentially useless.
Try to load a current ML model 20 years from now. Probably tied to proprietary software and if you're unlucky also hardware (like CUDA)
Perhaps the concept of a file needs to evolve from a locally stored collection of bytes to a more generalized notion of a locally identified collection of bytes, data links, and functional relations. All of that should also be able to become fully localisable, akin to a 'clone'.
By now the spectrum of interactions using computers is visible enough to be able to generalize such meta-file formats. Whether it should be some form of database or a kind of system-level support for defining and assembling such meta-files is a question for experimentation.
It's more like an active-book paradigm vs a file, where one could tie together multiple contexts yet still present it to the user in some human-perceivable form. Some analogies could be a project, or activity-based collections of files, links, collaborations, etc.
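To make that slightly more concrete, here's a rough sketch of what such a "meta-file" might look like - everything here is invented, just to give the idea a shape:

```typescript
// Invented sketch of a "meta-file": local bytes plus links and functional
// relations, the whole thing cloneable into a fully local copy.
interface MetaFile {
  id: string;                                   // stable identifier, not a path
  localBytes?: Uint8Array;                      // materialised content, if any
  links: { rel: string; target: string }[];     // references to other data
  derivations: { from: string[]; how: string }[]; // functional relations
}

// A "clone" operation would recursively materialise every link so the
// collection becomes fully local, akin to cloning a repository.
```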
I keep repeating this argument in different forums, but: if you ever lived in China you would realize that the native app is the past and the future. Since governments can block websites, that means they can block your web apps too. When I came to China I completely lost access to Google Docs, Gmail, Facebook, etc. Relying on web apps is exactly like giving governments the right to uninstall applications on your computer.
Right now this is not a big deal in most countries. Right now. But as the web becomes increasingly balkanized (and I believe it will) and as countries become less democratic (always a possibility) the native app with local data will reassert its prominence in people's lives.
This is an important point, but then "native" iOS has the same vulnerability. Apple arguably has more power to block iOS apps than China has to block the Web, so China just tells Apple to block what they want blocked, and Apple does it.
If it makes you feel better, efforts are being made to bring non-tyrannical operating systems to mobile in a braindead-easy, consumer-friendly fashion. You can search up loads of articles about the amazing stuff that the peeps running organizations like Pine64 and LineageOS have been up to.
> Relying on web apps is exactly like giving governments the right to uninstall applications on your computer.
To some extent, governments, across the globe, are already exercising this right in some form. Quite often we hear that some government has banned some app and it becomes illegal to use that app in that country.
> (spoiler: this argument is dead and nobody will ever go back to native apps again)
Ahaha, I don't touch web apps unless I really really have to. And even then it's lighter stuff, like chats. Can you imagine using an actual productivity application in the browser?? No thanks. The thought that Slack/Ms Teams/Discord all run slower than MSN Messenger did in 2005 despite my having a computer ~50x more powerful is depressing enough (and each of them consuming as much RAM as I had on my desktop back in the day!).
> Can you imagine using an actual productivity application in the browser?
I'm not sure what type of applications you're referring to, but if you're including word processing in that, then I can't imagine not using either Google Docs, or whatever might one day come along and be a reasonable alternative. If I'm using a word processor, it's because I want someone else to be able to read and, quite often, contribute to what I'm creating. That is far, far less painful using Google Docs than Pages or whatever the alternative might be.
Photopea is a browser based photoshop; it is very good. VS Code is an electron based IDE, also very good (and there are various web variants as well). There was some kind of high quality online video editor my old landlord (who has a small video production company) used to edit 4k video with his macbook air. I don't know about audio, though I suspect there are more than a few options - yet probably less than for everything else, I'm not sure how good those physical device interfaces are now.
As for 3D stuff: it looks like SolidWorks has some kind of cloud platform. How much of the workstation load lives there, I don't know.
Those are not productivity applications, they are all tailored to specific purposes. The only one that I can say might be considered a productivity application is text editors/IDEs, but even then it's tenuous. No one is going to call Blender or Ableton a productivity application
Most likely. It's not the best IDE but it is likely the best free one. Out of the box VS Code is pretty much good to go while others are either paid or take a lot of config.
I have attempted to use vim, but I just can't be bothered working out how to turn it into VS Code. If I have to decide between multiple ways to install plugins, each with their own tradeoffs, it's already too much work when VS Code just works.
You can collaborate on office docs in Office365, Teams and Sharepoint. Also, Google will convert Word. Quite a few people and organizations still prefer the richer feature set and combined suite in Office, or even iWork. Excel is still king of the spreadsheet and doesn't look to be going anywhere.
Electron apps definitely have performance issues and I do not defend them. I just also want to make sure we're recognizing that apps like MS Teams are doing a lot more than MSN Messenger did back then. Whether you use/need those features or not, the apps are much more capable. Our computers have gotten an order of magnitude faster for sure, but the workloads they're tackling have not stayed flat.
> I just also want to make sure we're recognizing that apps like MS Teams are doing a lot more than MSN Messenger did back then.
Can you elaborate? MSN Messenger did text chat, voice chat, video chat. I don't see what more features MS teams has that explain the order-of-magnitude increase in footprint.
It's more than one OOM, closer to two. From what I remember, MSMSGS used a few dozen MB at most, since typical machines of the time had between 64 and 256MB of RAM. Meanwhile, I've seen Teams go over 4GB simply sitting idle, and others have reported much worse.
Teams does have quite a few extra features: multiplatform, easy meeting recording, media embedded in text chat (quite useful for my current work), shared editing of Word or Excel documents within calls, sharing of Powerpoint presentation controls within calls, per-channel wiki (a bit anemic, but it's there), pretty extensive Sharepoint integration, crazy extensibility...
The downside is of course that you pay for all of that even if you don't use it.
> Teams does have quite a few extra features: multiplatform, easy meeting recording, media embedded in text chat (quite useful for my current work), shared [...]
I am curious if people would generally consider an editor / IDE a productivity app. If so, VS Code's popularity seems to be more than "many people do" - in certain segments, it's the leading development environment [1].
And oddly, if we consider it an IDE (I would, if the relevant extensions are installed for whatever you are doing), it seems to use fewer resources than most of the others I use on a regular basis, while seeming more performant. It's really an odd app.
Personally I don't view browser based and Electron apps as equivalent. I don't really use any Electron apps, but to me they aren't as annoying as software which run within my browser window.
Sometimes I just close my browser, because too many tabs have built up. I also want a Dock icon; PWAs and Electron apps will provide that and allow normal cmd+tab to work.
PWAs are worse than Electron apps, because they die when you exit your browser. Google Chat is the worst, they had a standalone app, but now it's a PWA. So I either have to have Chrome running (a browser I don't actually use) or a window or tab open with the web-version.
Electron apps are generally fine from my perspective; they use more RAM and don't feel completely native, but in the absence of better options they're completely fine.
Yes, I can get behind this argument, it's significantly different from the standard opinion that Electron apps are trash quality in terms of performance. VS Code is generally either seen as a counter to that argument, or as the exception that proves the rule; nevertheless I am frequently left unsatisfied with such arguments. My core point is: the norm seems to be lazily written native apps, to the point that well written Electron apps can compete, assuming that the app is non-trivial. Clearly having an Electron app for a task list is unlikely to be a good idea, but as a core productivity tool that is open for months at a time, it's fine.
Coincidentally, I stumbled upon StackBlitz [1] a long time ago, and was generally very impressed with it. It is essentially a very slick, online version of VS Code optimized for rapid prototyping, or small team projects. I could easily see people working with it as a main IDE. The argument then would be whether your browser of choice provides a nice enough environment for it to rival your OS when it comes to pinning tabs, navigating between them, etc. I would agree that it's unlikely to be an awesome experience, but with some work, it could be good enough. I mean, I guess a large percentage of people use Gmail via the web interface, and what is email if not the quintessential productivity app.
Under the hood, Electron is powered by the Chromium rendering engine and NodeJS. So why do you view browser-based and Electron apps as different? Because you can't see the browser menu bar?
Because they run as separate processes. I frequently just close all my browser windows, which would exit any browser-based apps. Electron apps are their own thing and live outside my browser window. Electron apps also have their own dock icon and exist independently when I tab through my open applications.
That's also why I strongly dislike PWAs: they DO NOT exist independently of my browser. They are very much tied to the browser (well, Chrome), which seems illogical, given that they have their own icons/launch thingy and pretend to exist as their own process.
I get that Electron and browser-based apps work more or less the same, but I interact with the two types of applications in very different ways. That's what I care about - the interaction; the underlying technology is irrelevant.
I work at Google and I do all my work in chrome, email, chat, IDE, ssh, docs/slides/sheets, etc. I used to use iterm and vim but during wfh I converged all my work into the browser and it's been pretty convenient accessing everything from the same interface.
I know Google is not like most places and that a lot of web apps suck, but when the web apps work well, it's pretty nice, at least for my workflow
> spoiler: this argument is dead and nobody will ever go back to native apps again
Not with that attitude it won't. Yeah. 90% of tech offerings only work based on monetization and hostage taking of consumer data. Normal people are starting to pick up on this. All the techbros looking to score that hot, sweet -aaS money are blind to it and desperately hoping their market dominance and spend can keep people from digging through the native computing stack.
Need end-to-end encrypted file transfer? VPN and NFS are your friends. Need chat? IRC, at your service - also exposable through VPN, so you can limit the audience. Want doc updates where your secrets and in-progress stuff are guaranteed not to get pored over by some intern or admin somewhere? See above. Want no bloody ads and not to be snooped on? Set that stuff up, homey. I've got a young'un whose mind is blown by the fact that games used to exist that didn't need an internet connection.
The desktop metaphor is fine. What isn't fine is the normal person's technical education/on ramping. The old approach was teach programs first, then the protocols and problem classes they solve. Now it seems to stop at just teaching programs, because there is so much out there that is doing the same exact thing, but different branding, there's never enough time to dig into what is going on under the covers.
Just.. no. Please stop pushing IRC. It's had decades to evolve and still today lacks really basic QoL things like a good mobile story (always-on connections won't fly, bouncers are limited hacks), permissions, or user registration that doesn't look like an 80s xterm.
Matrix is carrying that torch now. IRC is an evolutionary dead-end that will only ever be used by techies.
I use IRC; it is good. I also use NNTP, which is also good. There are other protocols that can be good too. They can be good for different purposes.
There is also the option of storing all of the files on my own computer. I don't need to store them elsewhere, except to make backups. I use DVDs for local private backups. For public files, I also store them on DVDs, but also on other internet services (such as chiselapp), too.
There are many computer games that don't need internet connection. Some of them, but not all of them, are designed for older computer systems such as NES/Famicom, which can be emulated on many computers, so it doesn't matter what operating system you are on, it will likely work.
Better documentation is helpful. Describe the program, the protocols, file formats, etc. This is how to learn to work with the computer. Other programs doing the same kind of things can differ in many more ways than only the branding though; some have different features, source code availability, etc.
I for one want nothing to do with this. File system access is for good actors only, and the advertising assholes have poisoned the well for a web browser being anything other than a dumb document browser with the privacy settings turned up to the max for me.
A neutral file system and standardized file formats are a huge part of what has made computing able to do interesting things in the past 40 years. The fact that one application can output a file, and another can open it and operate on it is basically at the core of the unix philosophy, and the reason we can have things like developer workflows.
If I, as an application author, can only work on data in ways that are intended and officially blessed by another application, we basically have the situation we have on mobile where everything is siloed, and the state of the art is limited by the imagination of individual application developers.
How then are web apps supposed to become part of a more durable workflow (i.e. opening/saving/moving/backing up files) if we do not permit them the same privileges as native apps? I don't think web apps should have total control over your filesystem, but why not at least allow them to operate within a folder?
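For what it's worth, Chrome's directory picker already works roughly that way: the page asks for one folder and sees nothing outside it. A minimal sketch (Chromium-only):

```typescript
// Sketch: ask the user for one directory and list its entries.
// The page never sees anything outside the chosen folder.
async function listProjectFolder(): Promise<string[]> {
  const dir = await (window as any).showDirectoryPicker();
  const names: string[] = [];
  for await (const [name, handle] of dir.entries()) {
    names.push(handle.kind === "directory" ? name + "/" : name);
  }
  return names;
}
```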
They aren't. Web apps can barely be trusted with access to anything on your system, lest they copy it to sell for advertising. As long as the internet is fueled by advertising, essentially all web apps are adware.
BTW both Safari and Firefox are not going to implement the File Access non-standards that Chrome pushes. They expose too much, there are no good ways to limit the exposure etc.
When you think about it, the modern internet resembles a welfare state. Everyone's day-to-day sustenance is sponsored by a few wealthy benefactors, who meanwhile essentially hoover up whatever remaining potential there is. There could be so much more than what we think is possible now.
That can be a better idea. (Unfortunately the HTML file input in any web browser that I have tried does not allow the user to change the file name to a different name than the local file name. This ought to be fixed.)
When it asks the user for a file, it can also specify the wanted access: read, write, read+seek, or read+write+seek. A requested format can also be specified, but the user should be allowed to ignore the requested format if wanted and instead specify an arbitrary file. For writes, an estimated file size can also be specified as a hint, which can also be ignored. Then the user can type in a file name, or, for the non-seeking modes, a pipe is also possible. For non-seeking writes, the user can specify append or overwrite. For seekable files, a pipe is not valid. For writing to files, the user can also optionally specify the maximum size that the file is allowed to have.
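In other words, something like this - a sketch of the request the page would hand to the browser, with all names invented:

```typescript
// Hypothetical request a page would hand to the browser; all names invented.
type AccessMode = "read" | "write" | "read+seek" | "read+write+seek";

interface FileRequest {
  access: AccessMode;
  requestedFormat?: string;           // a hint only; the user may ignore it
  estimatedSizeBytes?: number;        // for writes; also just a hint
  writeMode?: "append" | "overwrite"; // only for non-seeking writes
  maxSizeBytes?: number;              // optional cap the user can impose on writes
}

// Non-seeking modes could be satisfied by a pipe instead of a real file;
// seekable modes require a real file.
```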
That's basically what file system access is like in Android, unless you give an app Storage permission. Seems sandboxed enough if no other web apps can view it.
> this argument is dead and nobody will ever go back to native apps again
I don't see this. The native program ecosystem is alive and well as far as I'm concerned. All the programs on my computers are native, despite no one "going back".
> Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store.
Now go and check out syncthing, you're in for a really good time.
Yes, I want to save the files locally to the disk. I don't use Figma, Slack, Notion, Facebook, Twitter (except sometimes for reading, using Nitter), or Google Docs. You could save the HTML, but that isn't always ideal. Having defined file formats can help, which is the case when using email, NNTP, ActivityPub, IRC, etc.
FTP is no good. There are better protocols, such as HTTP, Gopher, Gemini, Plan9, etc. I had made up a file format for serving directory listings by HTTP (but I don't know how to configure Apache, or to write an extension for Apache, to be able to use it).
About "who think the browser should be for browsing documents and not virtualizing apps", it is badly designed for virtualizing apps. (I have thought of some better ways.)
Also, the file system paradigm does not fail with shared content; you could have a program mount a remote file system and then access it using local programs, if wanted. (You can then also easily copy files between your computer and the remote system this way, using the standard operating system commands for doing so, and it will work just as well from the command line or a GUI. Similarly, for SQL databases, you can have an extension that exposes remote data as a virtual table, and you can then easily copy data between local and remote.)
> FTP is no good. There are better protocols, such as HTTP, Gopher, Gemini, Plan9, etc. I had made up a file format for serving directory listings by HTTP (but I don't know how to configure Apache, or to write an extension for Apache, to be able to use it).
WebDAV is actually fine for this use case and supported everywhere (e.g. you can enable WebDAV for a certain directory in Apache, map it as a network drive in Windows, and everything will just work).
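For example, listing a WebDAV directory from a script is just a PROPFIND request - a rough sketch (the URL and credentials are placeholders):

```typescript
// Sketch: list a WebDAV directory with a plain PROPFIND request.
// URL and credentials are placeholders, not a real server.
async function listWebdavDir(url: string): Promise<string> {
  const res = await fetch(url, {
    method: "PROPFIND",
    headers: {
      Depth: "1", // this collection plus its immediate children
      Authorization: "Basic " + btoa("user:password"),
    },
  });
  return res.text(); // a 207 multistatus XML body listing the entries
}
```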
There are some things that I don't really like about WebDAV, including the use of XML.
However, the HTTP directory listing specification that I made has been described as being like a simpler and better (in some ways) version of WebDAV by some of the other people who have seen it. (It does do a few more things than only directory listings, but directory listings is its main intention.)
>Sounds good. Where is such program? This is surely not a very novel idea, but where is it?
Every office in the nineties. Windows 95 + Office on the desktop and a Windows NT Server sharing the files. I'm not saying we should try and wind the clock back but it was a solved problem.
It's probably less that they suck, and more that they aren't designed for the workload the developers want. Files are a good idea when you have exclusive write access to them at any given moment; less so when you want to support concurrent access. Real-time collaborative creation happens at a finer level of granularity - people are manipulating individual aspects of documents, objects in the application's internal model. This doesn't work well when your atomic unit of synchronization is an entire file.
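To make "finer granularity" concrete, here's a simplistic sketch where the unit of sync is an individual field edit rather than the whole file. Real collaborative apps use OT or CRDTs instead of this naive last-writer-wins merge, but the granularity is the point:

```typescript
// Simplistic illustration: each edit is a small op, and merging happens
// per field (last-writer-wins), not per file.
interface Op {
  path: string;      // e.g. "title" or "cells/B2"
  value: unknown;
  timestamp: number; // from the editing client
}

function merge(ops: Op[]): Map<string, unknown> {
  const latest = new Map<string, { value: unknown; timestamp: number }>();
  for (const op of ops) {
    const current = latest.get(op.path);
    if (!current || op.timestamp > current.timestamp) {
      latest.set(op.path, { value: op.value, timestamp: op.timestamp });
    }
  }
  return new Map([...latest].map(([path, entry]) => [path, entry.value]));
}
```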
Not that I disagree with your overall point, but there are multiple products in that space. It's clearly not a deal-breaker for most; but if anyone's lamenting the lack of a GoogleDocsFS, they can get one:
I mount a cloud drive containing about 3 TB data on my laptop using rclone and it works great. And the cloud provider does not even have a native Linux client. I am so happy with rclone and will totally recommend it.
Maybe, maybe not, but even then, wouldn’t implementing those be easier (for both you and the rest of the world) than creating a completely new protocol?
> It still leaves the issue of it using multiple ports.
Aren’t multiple ports only an “issue” if you assume NAT (and CGNAT, shudder) as a natural state of things?
> we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop
A good observation, but I think it conflates two different trends:
1. Some software platforms deliberately limit what data is stored on your local machine under your control
2. There's been a shift in UI/UX away from using files as a first-class abstraction
There's a good case to be made that a UI should generally hide the specifics of its data storage. As an example, it's a good thing that most email clients present the user with their emails, rather than with a raw folder of files. (Internally, the email client might make use of a database rather than a directory, so it might make good back-end sense too.) Of course, that's not the same thing as the email client being hostile to data-portability.
iOS strongly commits to this ideal, even at the expense of constraining user actions. The podcast app doesn't let you upload your downloaded podcast episodes to your desktop computer, for instance.
Aside: iOS has very poor support for dealing with files in the usual ways, to the point that you pretty much need to use a third-party app to do so. I've found the freeware Documents app by Readdle to be very good for this.
> this argument is dead and nobody will ever go back to native apps again
I agree that the web as a GUI toolkit is here to stay, but native apps are alive and well too. There's a trend to try to push users off the mobile web and onto native apps (Facebook, Gmail, reddit), rather than the other way round.
I'm not really sure how Slack saving documents to disk would really make sense. IRC was around before SaaS took off and webapps replaced desktop apps, but I can't think of a client that implemented "IRC documents" saved to disk.
Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor. They weren't in a special IRC format, and you didn't open them with your IRC client. Hell, you couldn't open them with your IRC client. There's no reason you can't just ctrl-C a bunch of stuff out of a Slack chat and ctrl-V it into your text editor. Only difference between that and IRC logging is that you have to do it manually.
> Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor. They weren't in a special IRC format, and you didn't open them with your IRC client.
Yes, that's exactly the point. You owned this data, you could do whatever you wanted with it, and it was stored in a format that was both trivial and most fitting for the data stored.
> Hell, you couldn't open them with your IRC client.
I'm pretty sure you could in some clients, and some definitely pulled stored logs to backfill the chat after restart. Though I haven't done that myself (instead I relied on a bouncer to supply the backlog on connection).
> There's no reason you can't just ctrl-C a bunch of stuff out of a Slack chat and ctrl-V it into your text editor. Only difference between that and IRC logging is that you have to do it manually.
That's a world of difference. In Slack, it's painful to do, and if you didn't do it when you first saw a message, it's going to be even more painful to do after the fact.
> they were just text files you opened in any text editor. They weren't in a special IRC format
Of course they are! Using anything other than text files to store chat logs would be idiotic. The main point is that slack is a user-hostile application that does not even allow you to do that. Why people put up with this is beyond me.
> Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor.
Right, my point is why doesn't Slack do that? Then you could use `grep` or `find` to search across all your messages and avoid paying a monthly fee to access your entire message history and...oh, right.
You can only search a log for the time that you were logged in and saving it. You can search the entire history of many Slack channels from the very first message posted in it onwards, even before you'd ever joined it. That's a significant advantage over a local file.
I'm not particularly keen on Slack but suggesting search would be better locally is plain silly. Search is obviously better done at the server.
> You can only search a log for the time that you were logged in and saving it.
That ignores the possibility of the data being synced between local system and the server. Slack is already in the cloud, so cloud options are on the table. And so is syncing data.
> suggesting search would be better locally is plain silly. Search is obviously better done at the server.
The reverse is obvious to me. Slack search, like all web SaaS search tools, is really bad. I could do better with grep - and it would work faster, and I could actually trust that it searches through all the messages, instead of giving me some eventually-consistent view into results of a query that is only tangentially related to what I requested. And they wouldn't be able to tell me which properties I can or can not search by.
And I'm not talking theory - in the past, I did some spelunking in many years' worth of IRC logs in a folder on my drive, and the experience was much better than searching for anything in Slack.
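Plain-text logs also mean any dumb script can search them - e.g. a rough Node sketch (the directory layout is invented):

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

// Sketch: grep-like search over a folder of plain-text chat logs.
async function searchLogs(dir: string, needle: string): Promise<void> {
  for (const file of await readdir(dir)) {
    const lines = (await readFile(join(dir, file), "utf8")).split("\n");
    lines.forEach((line, i) => {
      if (line.toLowerCase().includes(needle.toLowerCase())) {
        console.log(`${file}:${i + 1}: ${line}`);
      }
    });
  }
}
```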
> Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it?
Just curious, does anyone know of any hybrid file formats that store information both locally and online?
It seems like one solution to this problem would be a document that stores an editable copy locally and a revision hash in its metadata, then decides whether to serve up the local or cloud copy depending on whether the user is connected to the internet.
Sure, this could cause conflicts between online / cloud files if someone else edits the file at the same time as you, but that's true of any cloud sync service like iOS Notes.
I guess in retrospect I'm just describing Dropbox which, while it's more a container for standard files than a file format in itself, has largely the same effect.
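Very roughly, I imagine the on-disk artifact looking something like this (all names invented, just a sketch):

```typescript
// Hypothetical "hybrid" document: a local editable copy plus just enough
// metadata to reconcile with a cloud copy. All field names are invented.
interface HybridDocument {
  remoteUrl: string;       // canonical cloud location
  revisionHash: string;    // hash of the last revision we synced
  localContents: string;   // editable offline copy
  modifiedOffline: boolean;
}

// Decide which copy to use; a real implementation would need conflict
// resolution when both sides have changed since revisionHash.
function resolve(doc: HybridDocument, online: boolean, remoteHash?: string): string {
  if (!online) return "local";
  if (remoteHash === doc.revisionHash) return doc.modifiedOffline ? "push-local" : "local";
  return doc.modifiedOffline ? "conflict" : "pull-remote";
}
```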
Yes, filesystem sync protocols like rsync do it at the FS level and if you want to go deeper than the FS level, you get into the realm of operational transform and other rather complex algorithms.
A very insightful comment upthread observes that filesystem-centric computing worked for as long as collaboration was very limited. Once apps needed to move beyond that, to collaboration at a finer-grained level, it fell apart, and apps started needing databases - in particular, databases that could link data from different users together, implying a shared privacy domain.
Was this change inevitable? The long since exiled and forgotten Hans Reiser wrote about this problem a lot back in the day (he murdered his wife and obviously his ideas lost any traction at that point). His thesis predated a lot of the concerns about privacy and central control that we see today, but briefly, he argued a part of why this was happening was that filesystem technology was not good enough because it couldn't handle very small files and because POSIX had some unnecessary limitations. Due to this lack, apps were constantly forced to invent filesystem-within-a-file formats, e.g. OLE2 and OpenDoc were both centred around this concept, SQLite obviously is one too, ZIP yes, but really most file formats can be viewed as a collection of small files within a file.
The idea was, if you upgrade filesystem tech, you can radically change how apps are written.
The problem is that operating system tech on servers and desktops has been stagnant for years. Microsoft and Apple lost interest in their primary operating systems and the open source world has never really been interested in going beyond 1970s design ideas, largely because cloning and adding small elaborations to commercial designs is the way the community stays unified. Look at the mass hysteria that followed systemd, which is one of the only upgrades to the core UNIX OS design patterns in decades. Actually making changes to the core of POSIX isn't something that's going to come out of that community. It'll probably take some company that wants to innovate on the core ideas again.
> > Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it?
> Just curious, does anyone know of any hybrid file formats that store information both locally and online?
I'm sure there are better ways to do it, but MS Office can, AFAICS, at least kind of do that: Documents stored in -- wossname, OneNote? SharePoint? One of those, I think -- can be edited in-place by Office Web apps, or downloaded for editing in the regular desktop apps and then saved back on-line and/or locally. If they can do that, I'm sure other apps can also do it (and probably better).
> this argument is dead and nobody will ever go back to native apps again
This snippet has really lit the touchpaper. A long time ago, I predicted that the world was on course to deliver insta-compiled applications through a browser, as though we'd have a Visual Basic runtime environment plugin. However, we're now there, essentially, with XHR and many-megabyte JS bundles manipulating the DOM through the browser's "widget" engine.
There's really a tipping point for each application, where the application's functionality determines where it is better served. For instance, no one's going to make a web app out of Logic Pro any time soon. However, if someone comes up with a stateful protocol to implement in current browsers, then that tipping point flips to the web for a whole bunch of applications.
I'd argue that in recent years, most people can't tell the difference between a native Swift app on iOS and a Kotlin app running on a midrange Android phone. I agree that Apple's approach is more technically correct here, but Android's approach is also pretty sustainable.
I think those are both the native app targets of their respective platforms though? The Kotlin app targeting Android APIs is not cross platform. I gathered they were more drawing a comparison between native and some HTML/JS/CSS thing.
Ah, so this takes us into the question of what "native" means.
Some people use the word native to mean "the way apps were written in the 90s and on Apple platforms, still are written". It's short hand for manual memory management, full commitment to the operating system vendor's APIs, and so on.
Apps written that way have some big advantages for end users - consistency, low memory usage, and so on. But they suck for developers. Manual memory management sucks, having your app market share be limited to the operating system's market share sucks, often the vendor APIs suck.
Some people use the word "native" just to mean "uses the operating system specific APIs". The other aspects like being written in an AOT compiled manually memory managed language don't count. For those people Android apps written in Kotlin running on a JVM are native, but the other people, not so much.
> spoiler: this argument is dead and nobody will ever go back to native apps again
I think in the world of app stores this is a little odd to argue. Native apps on the desktop do seem to be on the way out, but less so on tablet and mobile phone.
Most of my work is in native apps. It's just a better experience. The browser is great for communication, and it really shines for text based communication, but in my experience that's the only place where it outshines native apps. And remember, it's not like native apps can't back things up to the cloud, so there is a false dichotomy that you have to do everything in a browser that you want backed up to a cloud. I have no problem working with IntelliJ products and then pushing to a remote repo. My Photos, Music, etc are backed up to iCloud but aren't viewed in a browser either, etc. Zoom is launched with a browser link, but it opens a native app. MS Office has a cloud drive and is even sold as a service but I use the native Excel and Word rather than in-browser versions. I just don't see this migration to browser delivery for the apps I've been using.
It's 2021, and people really aren't using the App Store like they used to, IMO mainly due to the insane rise of subscription-based applications for things as small as calculator apps.
There are lots of good thoughts in your argument, but I disagree with the "should save their information to disk" part.
This may make sense for technical people with a specific goal, but for most users, they shouldn't care where it is saved, a la Dropbox. They just want to access their files. Online, offline, everywhere, that's what they want.
> but for most users, they shouldn't care where it is saved, a la Dropbox. They just want to access their files.
Yes, but it does matter where it's saved, because the location and method confers ownership of the data. "Possession is nine-tenths of the law" is the rule of modern Internet. It shouldn't matter whether my photos live on my drive or in a third party's cloud, but it does - because in the cloud setting, the company dictates what I can and cannot do with my data, can pull shenanigans like applying strong lossy compression to uploaded photos, and they will eventually take my access away - either I cross the ever-expanding terms of service somehow, or they'll just go out of business.
In my experience of both managing my own data and helping non-tech people, filesystem vs. cloud data durability is really a wash. People seem just as likely to lose their local data due to drive failure or accidental deletion, as they are to lose access to the cloud storage (or have the company disappear from under them).
It would be nice if they didn’t have to care where the information is stored. And maybe that is the case 90% of the time. But that other 10% matters a lot and I don’t see that changing anytime soon.
It's not only ads. Autodesk I believe does online rendering now, and for most CAD drawings even a cheap APU can handle that level of geometry, but it's harder to justify a recurring revenue model for a fully local application.
I have trouble believing that this is the fault of the techie lobby, considering that said lobby otherwise has no meaningful accomplishments under its belt. My explanation would be that the web is massively successful because it enables users to navigate safely without being in danger of leaking their files. If a user isn't willing to install an app to do a task, it is precisely because they fear that such an app could do unknown damage to their computer. Allowing the same of web apps eliminates their advantage and endangers users.
We are living in a multi-device, instant-access, access-anywhere, cloud-based world, and the desktop file-based paradigm has trouble with this reality. The vast majority of non-technical people would struggle with desktop-based files when they want everything everywhere all the time on every device at any moment.
Is that what they want, though? Most people who work an office job, at least, still deal with Excel, Word, and Outlook. Maybe they'll set up work email on their phone, but that's probably it. I've noticed a pushback (true for myself, too) ever since the push from work-supplied devices to BYOD.
People are realizing mixing private and work communications on the same device is a bad idea. The kicker is, this isn't even some kind of corporate conspiracy. It's just human nature - if you hook up your business e-mail to your private phone, you are going to be checking it after work hours, you will start responding to e-mails, and your work habits will shift to account for that.
That's why I just don't do that first step. The only connection between my current smartphone and my work is some TOTP keys in the authenticator app, to enable more convenient login to some cloud services the employer makes us use. I talked with a co-worker recently, who made the mistake of installing work communications on their personal smartphone, and they very much regret it - not because the company is exploiting it, but because they can't discipline themselves to not check business messages after work.
Yeah, well, it's not 100% his fault. Browsers should default to "Ask every time" for the download file location, instead of just bunging everything into a default "Downloads" folder.
But that's really fucking easy to change, so still 90% his fault. (Or 95, 99...?)
> don't know if there is any real way we can go back to the P2P paradigm without destroying NAT
I think it's time for NAT to go for residential IPv6, and the numbers I've seen show that TURN isn't required for most connections. Unfortunately, universities and businesses will probably never remove NAT, as there is limited incentive to do so.
I think the best we can do is have somewhat decentralized networks with limited yet trusted centralized authorities (the need for discoverability will always remain, even when using otherwise decentralized networks like SSB). This could be IPFS with bootstrap nodes for their DHT, or as I have been using to circumvent NAT when latency is unimportant, Tor directory authority to host ephemeral, local onion services.
Not just that. We have a lot more metadata these days and not everything can be a file. If you keep all the metadata and database-like files accessible to the user, how do you handle store corruption?
E.g., a video recording/playback app that lets the user save bookmarks/timestamps. You'd need some place to store those bookmarks, extract frames, generate multiple resolutions for both the video and the frames (for gallery previews etc.), possibly add some more metadata...
It's much easier to hide the actual files from the user and give them the option to export the data in some user-readable format.
Apple is notorious for this. Everything is a soup of folders and files with hashes and .plist files. Similar story with iOS and Android.
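There's a non-magical middle ground, though. Here's a minimal sketch (Node-flavoured TypeScript; the sidecar naming and format are just assumptions, not any real app's convention) of keeping that kind of metadata in a plain, human-readable file next to the video, so the user's data stays visible and portable while throwaway artifacts like thumbnails can simply be regenerated rather than treated as user data:

```typescript
// Hypothetical sketch: keep per-video bookmarks in a human-readable sidecar
// file next to the video, instead of burying them in an opaque app database.
import { promises as fs } from "fs";

interface Bookmark {
  label: string;
  seconds: number; // position in the video
}

function sidecarPath(videoPath: string): string {
  return videoPath + ".bookmarks.json";
}

async function loadBookmarks(videoPath: string): Promise<Bookmark[]> {
  try {
    return JSON.parse(await fs.readFile(sidecarPath(videoPath), "utf8"));
  } catch {
    return []; // no sidecar yet, or it's corrupt -- start empty
  }
}

async function addBookmark(videoPath: string, bookmark: Bookmark): Promise<void> {
  const bookmarks = [...(await loadBookmarks(videoPath)), bookmark];
  await fs.writeFile(sidecarPath(videoPath), JSON.stringify(bookmarks, null, 2));
}
```

Corruption of the sidecar then only costs you the bookmarks for that one video, and the user can always open, copy or back up the file themselves.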
> allow you to e.g. load and save documents off your disk.
Isn't this trivial? A download button = "save" stuff from the app to disk. An upload button = "load" from disk into the app. AFAICT, web apps can already do this via existing file APIs.
This isn’t the same: download/upload can be used to simulate a file system, but they don’t preserve file identities in exactly the way open/read/write does.
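To make the difference concrete, here's a minimal sketch using the File System Access API mentioned upthread (Chromium-only for now; the picker functions may need extra type definitions in TypeScript). Because the app keeps the file handle it got from the picker, "save" writes back to the same file on disk, which the download/upload model can't do:

```typescript
// Sketch of the (Chromium-only) File System Access API, illustrating how a
// retained handle preserves file identity across saves -- unlike the
// download/upload model, where every "save" produces a new copy.

let handle: FileSystemFileHandle | undefined;

async function openDocument(): Promise<string> {
  // User picks a file; we keep the handle so later saves go back to it.
  [handle] = await window.showOpenFilePicker();
  const file = await handle.getFile();
  return file.text();
}

async function saveDocument(contents: string): Promise<void> {
  if (!handle) {
    // First save: ask where to put it, then remember that location.
    handle = await window.showSaveFilePicker();
  }
  const writable = await handle.createWritable();
  await writable.write(contents);
  await writable.close(); // the write is committed when the stream closes
}
```

With download/upload, every save instead lands as a fresh copy in the Downloads folder, and the app never learns where the "real" file lives.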
I don't think that writing a native app using Electron qualifies it as a web app. VSCode is not normally used from the browser, it is downloaded and used more or less fully disconnected from the MS infrastructure you used to download it (give or take some plugin updates).
The social media content you've listed there is nothing anybody wants to keep. Let's be real here. This is throwaway information. Meanwhile, outside of the edgy cool startup bubble, the rest of the professional information is still being saved somewhere. Sometimes even saved AND printed. No matter if it was on Slack, Teams or wherever.
Here's an interesting example of an app that runs in the browser and opens/saves SQLite GeoPackage files on the computer running the browser instead of on a remote server.
Some other reasons for saving files in the cloud not mentioned:
- lets you access them cross-device more easily (replicating the files on each device could work, but uploading to the cloud seems easier);
- backup, in case my device breaks or is lost
> (spoiler: this argument is dead and nobody will ever go back to native apps again)
Oh god, please, no. I don't really understand the disconnect between your love of filesystems and yet disdain for native apps. The exact same arguments apply: they can't serve ads locally, they don't autoupdate (a feature! they rot slower), they can be accessed any time, and they don't need a network.
Why would you want a webapp? Because it has flashier animations and SVG? There's a long conversation to be had about this, but the summary is no, no, god no, please give me back my native applications with drop-down menus and boring file picker dialogs. As long as they have decent keyboard shortcuts, I'll manage.
Who said they disdain native apps? They simply believe that web apps have taken over and will continue to do so. You aren't going to get native applications back for every product just because you prefer them or because they seem better to you. There are more powerful factors at work determining that web apps are more suitable.
For example: ability to serve ads, autoupdate, AB testing, tracking
Maybe those things are bad for you but are they bad for the people who make the product? No, they're pretty good things for the company, maybe even expected in 2021. Your preference against those things doesn't change that fact
As someone who has been designing stuff in native file-based apps for 30 years, I consider Figma a godsend -- because of their collaboration / multiplayer features. And I don't miss files at all, though of course I understand that I do not control my data stored in Figma.
It points to the need for native software to allow collaboration and this is actually happening slowly.
I still hate Figma though. It's too primitive, and I'm sick of being given Figma links where there's a 50% chance the necessary resources are not isolated, or not vector, or not exposed at all.
That depends on what you mean by local files: on the Mac, at least, it will keep local backups. See the Backups menu item. Admittedly, the backups are not in a convenient format, being JSON files containing encrypted blobs. But there is a utility for decrypting all the files. So if you are merely worried about losing access, you're covered. As long as you don't lose your password, that is.
But for sure, other solutions work better if you have less stringent security requirements.
> Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content.
This I do not understand - mobile and web content has easily been monetized for a long time now, so why would desktop software be any different?
For example, I use software called RaiDrive for mapping network drives on Windows (https://www.raidrive.com/). In their free version, they show ads on the main app window after you open it.
Why isn't this the norm on desktop - ad supported but free software? Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
> Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
Good lord, please, no. Desktop adware should remain a bad dream from the 90s and early 2000s. It ruined software like Opera, and was often bundled with spyware.
I'm glad advertisers mostly embraced the web, where I can run their code relatively sandboxed and easily block it. A desktop app has far fewer restrictions on the resources it can access, so letting software that actively wants to track and manipulate you run in that environment doesn't seem like a good idea. The fact it was acceptable in the 90s, with the complete lack of security of the popular OSs of the era, is a bit nutty, and while modern OSs are much more secure, I still wouldn't run anything ad-supported. F/LOSS or paid apps only for everything under my control. Subscriptions are tolerable in some cases.
> It ruined software like Opera, and was often bundled with spyware.
Wait, I hate ads as much as the next person, but how did they ruin Opera? Opera was originally trialware-only, then for several years replaced the trial with an ad-supported version (with the full version still available for purchase), and then became entirely freeware.
I suppose the nuance of "ruined" is down to personal preference, but I was annoyed by the large banner ad placement and stopped using it shortly after they added it. Purchasing wasn't an option back then for me.
This was also done in other software like Go!Zilla and it made the UI unusable IMO. It was a very disruptive and obnoxious way to monetize a project. Not sure if they improved this later before the move to freeware, since soon after I switched to Phoenix/Firebird and never looked back.
Right, but if you couldn't purchase it, Opera wasn't "usable" prior to their ad-supported version either - it had to be purchased once the trial ran out.
Oh, I'm not necessarily advocating for it but the reasons behind it seem interesting, whatever they might be.
Is it a matter of differences in cultures, that people don't seem to mind ads as much online, or perhaps there'd be backlash from OS app distribution channels, were devs to attempt to monetize software in the Windows Store for example (I don't really use UWP apps so no idea), or perhaps it's something else entirely...
That said, I feel like the option not even being there is limiting in and of itself. Suppose I'm a developer who wants to create software that's free to download and use, but ad-supported. Now I cannot possibly do that. As for those who would prefer no ads, there would always be the possibility of a paid version with no ads, of altering the hosts file to block ads, or of downloading the source code of the app, removing the ad-integration code and compiling the app themselves.
Though there are also interesting technical aspects as well, such as us not being able to sandbox most native apps (short of AppImage and Flatpak as well as Snaps, but even then there are other challenges), which may contribute to spyware in desktop apps. Plus I bet there's a large difference between showing an ad in an app and being able to mess around with the OS default browser settings and so on...
> Why isn't this the norm on desktop - ad supported but free software? Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
My guess is because the desktop model doesn't assume live internet access. My understanding of ad networks is that they involve a live bidding process against interested parties at the moment you load the page. This allows integration of live geo data, what you just searched, what you just looked at previously, etc. And you don't need to reconcile what was served if an app goes offline then rejoins for charging the advertising account.
How would Facebook guarantee you use their app to look at their stuff? If the answer is "some proprietary format that only the app can read" then what's the point?
That dialog also invokes so many issues with computers today.
When I open a program, I want to use that program. I don't want to update it, I don't want to see all the new features, I want to USE it. I opened it because I had a task to complete and all this junk is getting in my way.
And same when I close a program, as the author hits on very well.
Basically, the computer/program/etc always wants me to do something for it, but it never asks for those things at an opportune time.
No, I don't want to update my computer right now, and no, updating overnight tonight isn't good either because I need to keep this program running until tomorrow. I understand that your new UX is better for me, and I'm sure I'll love it, but forcing that on me right now is preventing me from doing what I need to do. I see your error dialog describing some odd issue, but I don't have time to triage that right now and decide to take the time to fix it.
I wish software would respect the human element more. My time and attention is valuable, please don't interrupt it carelessly.
Personally I almost never agree with these "the desktop is dead" kind of articles. If anything, the problem this author has seems to be exacerbated by the move away from the traditional desktop style rather than an embrace of it. That Preview close dialog is not something you'd see 20 years ago, but it is something you see in "apps" all the time.
Perhaps the problem is, as you allude to, that software is hostile to the user now. There's always something more than simply being a tool that you buy, use, and close. Now most programs are merely portals for "services".
> That Preview close dialog is not something you'd see 20 years ago
That’s because the application would straight up lose the files without prompting.
Also pretty sure text editors had something similar 50 years back, that’s why you :q! from vim, :q would tell you that you have unsaved buffers (or whatever the lingo is) and refuse.
TFA is literally abusing a crash-resistance feature out of laziness, and will no doubt complain when it fails them. 20 years back we'd all trained ourselves to save regularly to avoid issues; that feature would have been considered manna from the heavens, and TFA absolutely mental.
Completely stupidly too, I must add:
> This is the dialog that I see every time I want to close the Preview app to clear my desktop of all that clutter in a hurry for a video call.
macOS has a shortcut to minimize all windows! It even has a shortcut to hide all applications but the current one! You don't need to close anything to "clear your desktop in a hurry"!
Yeah but the semantic difference between "hide" and "quit" is a very technical one that doesn't need to exist. On mobile it doesn't happen, and we can thank Android for that, as they insisted from day one that multi-tasking APIs should not require the user to manually manage memory on the device. When Apple added multi-tasking to iOS later they copied the Android design more or less exactly. Browsers do something similar for browser tabs now.
However this was never brought to the desktop, largely because native desktop APIs are dead and none of the companies that fund them are interested in improving them. We get the web and if it sucks, well too bad.
> Yeah but the semantic difference between "hide" and "quit" is a very technical one that doesn't need to exist.
Funny, because I think the exact opposite. "Hide" means, "stop showing it for now, but keep it around". "Quit" means, "I'm done with you and I expect you to not be doing any work until I start you again". It's a very significant usability difference.
> On mobile it doesn't happen, and we can thank Android for that
To this day I consider it the worst, most dumb idea Android ever had. I hate it. It's the reason I hated my first smartphone. It's the reason why, from the second phone on, I only ever buy the top-of-the-line, most overpriced flagships available - I have to overprovision resources to ensure the phone works smoothly, because there are countless background processes doing god knows what that cannot be killed and prevented from restarting.
I know best when I'm done with an app. I want to be able to kill it, and all its background processes, and I want them to stay dead until I change my mind. If I were designing a mobile OS, I'd make the distinction between "hide" and "quit" as clear as day, and make it an app store policy violation for an app to execute anything in the background after being "quitted" by the user (polling for notifications would be handled by the system).
I agree with you about this distinction being important, but I come at it from a different angle - I don't mind if most apps operate in the background as they need, with preemptive scheduling like Android. But I really, Really mind if I don't have a rock solid way to quickly and easily kill an app (and all related tasks/processes/async jobs/etc).
1. There are times I want to make sure an app is not running (think zoom/hangouts with video access), or a work app that records location
2. The single best way to fix a problem, any problem, is still 'turn it off and back on again'.
I don't want to kill the current activity and hope that the background jobs stop at some point in the nearish future with absolutely no feedback or insight. I want "killall <root app process>" then give the app 5ish seconds to cleanup and if it's still around "killall -9 <root app process>".
On mobile there are ways to force quit apps. They just aren't front and centre, nor mandatory for users to understand, like it is on the desktop.
Too many apps abusing resources and affecting UI speed is a separate issue, and mobile OSes have developed a lot of techniques and technologies to allow apps to work in the background without bothering the user and without overloading the device's resources. It's a nice set of systems, really, and I would be happy if desktops worked the same way. Nobody would be taking force quit away from me, even if such a system were implemented.
> On mobile it doesn't happen, and we can thank Android for that, as they insisted from day one that multi-tasking APIs should not require the user to manually manage memory on the device.
Yes, and this turns my phone into an amnesiac the moment I have a hundred tabs and fifteen apps open. Switch to a different app, switch back... Oops, all gone! Enjoy recreating the state from history manually, if that's even possible.
There are no words to express how unhappy this makes me. We've got gigabytes of memory on our handhelds these days, why can't the phone just deal with it?
This is spot on. People want to hand over responsibility for their computations to some big company offering a "free" service so long as you use the official app, and then they wonder why they don't feel like they have any power or freedom.
> When I open a program, I want to use that program. I don't want to update it, I don't want to see all the new features, I want to USE it. I opened it because I had a task to complete and all this junk is getting in my way.
Jonathan Blow did a talk similar to this a few years ago, about how "software is terrible".
In a part of it, he double clicked on a .psd file to open in Photoshop and timed how long it took to open the image. It took around 8 seconds for Photoshop to display that image on screen. But the first thing the app did was show a splash screen image, almost instantly.
His argument was, "Why, when I double click this image file that I want to view, does it instantly load an image which isn't the one I want to view, and then take 8 seconds to load the one I do want to view. If it can instantly load an image when I double click the file, why isn't that image the one I double clicked?"
I’ve seen that talk too, and it struck me as a bad example. Photoshop is not primarily a tool for viewing images, but for editing them. It would be great if it could get me there faster, but as a comparison I just tried timing how long it takes to launch The Witness, and ended up with something like 13 seconds.
If it takes 8s to load the file, the better design is to take 8.05s instead to display the splash screen and a progress bar to indicate to the user that their request is being processed. Good interfaces are never unresponsive. So, bad example. :)
In a parallel world, the image loads first, alone on the screen, then the toolboxes, then the window frame. Then the splash screen for 8s because the marketing guys always have it ;)
It’s a balance though isn’t it. If they hide everything away so as to respect your time, then another user will be frustrated by the constant magic going on in the background. Even you might want more prompts in some situations.
Maybe some applications could have an alert mode, similar to logging levels. But then it will probably get more buggy.
Honestly, I rarely get annoyed by the number of popups in most of my software. I’ll happily take a few extra dialog boxes for extra control.
Because in practice if you don't push people to update, they just don't update. And then complain about bugs, security issues, bad performance, missing features, etc... all available in the updates they have been postponing for years.
I 100% feel this pain. And for security updates and major fixes, I see the need for frequent updates.
Though setting aside practicality for a bit I wish we would design software to be more backwards compatible so old versions of things could continue to work for longer. I shouldn't have to buy a new phone every three years, for example.
Also, for a tool like Audacity, I rarely need to update it. It doesn't depend on a service, so I don't need to worry about an API falling out of support, and the security risks are much lower. I wish we could design more of our software to work this way.
Obviously for internet connected things like browsers this isn't possible as the security impacts are significantly higher, but why should I need to update Word every month if I don't plan to use the online components, for example.
That is a problem, but servers are one place where there should absolutely never be automated updates that cannot be overridden and turned off, as you are apt to break things if updates aren't tested first.
Just spitballing here: what if we selectively notified users of updates when they'll be affected? So, when I'm using my spreadsheet program and I go to sort the rows, a little note pops up informing me that there's a bug in sorting non-ASCII characters that was fixed recently, and would I like to update now or continue anyway?
I'm imagining some automated tooling where changelogs are tied to specific git commits, and when you go down a code path that has a changelog entry tied to it in this way, it pops up the dialog. The program downloads the changelogs in the background (if the user chooses), but doesn't pop up a notification unless it's relevant to the user.
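A rough sketch of what that could look like (TypeScript; the changelog format and all names here are hypothetical, not any existing tool): each changelog entry is keyed to a feature, and the prompt only fires when the user actually exercises that feature.

```typescript
// Hypothetical sketch of the idea above: changelog entries are keyed to
// features, and the "update available" prompt only appears when the user
// actually exercises a feature that has a pending fix.

interface ChangelogEntry {
  feature: string;         // code path this entry is tied to
  summary: string;         // what the pending update fixes
  fixedInVersion: string;
}

// In a real app this would be fetched in the background from the vendor.
const pendingChangelog: ChangelogEntry[] = [
  { feature: "sortRows", summary: "fixes sorting of non-ASCII characters", fixedInVersion: "2.4.1" },
];

function maybeOfferUpdate(feature: string, promptUser: (msg: string) => void): void {
  const entry = pendingChangelog.find((e) => e.feature === feature);
  if (entry) {
    promptUser(`An update (${entry.fixedInVersion}) ${entry.summary}. Update now or continue?`);
  }
}

function sortRows(rows: string[], promptUser: (msg: string) => void): string[] {
  maybeOfferUpdate("sortRows", promptUser); // only nag when it's relevant
  return [...rows].sort((a, b) => a.localeCompare(b));
}
```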
I don't think it would entice more people into upgrading. People don't necessarily want to stay on old versions, they just don't want to be interrupted. To make it more probable that a user updates, you need to make the interruption as small as possible: download stuff in the background, ship patches rather than a full-blown installer, and so on. To follow in your direction: if I get a popup that says "you just sorted rows alphabetically but there's a bug, do you want me to fix it?" and the install takes 5 seconds, then it's good. If it takes 30 minutes, then screw your update.
I think what we need is another way of architecting software. I'm thinking of Erlang's actors and hot reloading: the whole language and its environment were built to allow upgrading parts of the app without turning the whole thing off, thanks to actors that compartmentalize state and minimize the places where it can change. That's extremely useful for servers, but why can't I have the same for desktop apps? For your example again, maybe have some "actor" that can give me a sort, and when updates come, the old instance is killed and the new instance takes its place. No need to shut down the app, to lose my current work, or to waste dozens of minutes.
Unfortunately desktop apps are pretty much dead for the masses, and webapps solve both the problem of not updating (next time you come it'll be updated) and atomic updates (a client side app can load some javascript when needed). If we want desktop apps to come back (I do) we need to find a way to do the same
> I don't think it would entice more people into upgrading. People don't necessarily want to stay on old versions, they just don't want to be interrupted.
It might, if you did it like GP posted:
>> a little note pops up informing me that there's a bug in sorting non-ASCII characters that was fixed recently
>> The program downloads the changelogs in the background (if the user chooses)
Note the stark deviation from standard practice: GP would like the popups to actually tell you what's in the update.
In my experience with both techies and non-techies alike, the main reason everyone hates updates is that it's always a cat in a bag. You'll likely get some bug fixes. You may get some new features. Some of those may even be useful. It's more likely the app will become slower instead of faster. It's quite likely there will be disruptive UI changes. Often completely gratuitous ones.
Random recent example: I noticed my wife had an OS update pending on her phone for the past three months. I asked, she agreed, I did the update round. Told her, "wow, you're getting a version bump, I envy you". I immediately noticed some user-set icons were wrong[0], but quickly corrected it without saying anything. I thought that was the worst of it. A couple of hours later, she came back to me complaining that the update had messed up notification sounds and that the new keyboard had changed scaling, breaking her muscle memory for touch typing. Also the battery life dropped significantly.
Updates today are like loot boxes in games, except most of the "rewards" make your life worse.
One solution to this problem would be to have a separate stream for feature updates, and a separate one for bugfixes and security updates - and only ever bother the user about the latter. But that would increase the costs for the software vendor, and we can't have that. Who are you, the user, to complain about what your lord does? SaaS stands for Serfdom as a Service.
--
[0] - The ones you assign to your SIM cards in dual-SIM use scenario. The update added new icons to the set, and apparently the OS must have been storing user choice as an index into that set, instead of using icon ID.
Can't agree more with everything you're saying, except for the last part: if bugfixes/security updates are in one stream, and new features are in another, the first one should be applied all the time without asking, and the second one should be the one that prompts a pop-up.
> SaaS stands for Serfdom as a Service
It's interesting that the SaaS term was born from the Cloud, but its tenets still apply to desktop applications
> bugfixes/security updates (...) should be applied all the time without asking
I'd agree in most cases, but I'd still leave some grace period for updates that require rebooting the app or the OS. Nag and cajole the user all you want, but don't ever lose their data, or interrupt them during a meeting.
> SaaS term was born from the Cloud, but its tenets still apply to desktop applications
There are two main facets of SaaS - a nice one, and a nasty one. The nice one is that it outsources ops/maintenance. You're just using the software; someone else is keeping it running and up to date - and it's more efficient this way. The nasty one is that the party controlling the software is in a position of power - they can make you pay rent, lock you in and hold your data ransom, and exploit your data in ways you don't want - and you can't do anything to stop it.
Desktop apps relinquish most of the benefits of SaaS, but the drawbacks were backported to local software by means of normalizing subscriptions and automatic updates.
I just want updates to happen in a separate 'thread'. Not (necessarily) in the literal sense, in the sense of my human attention. Update in the background, don't interfere with my use of the application itself — at most, give me an icon that tells me the application will be a newer version the next time I restart it, a bit like how Google Chrome manages its updates.
I generally agree. And Chrome's method is definitely better than Steam's, where you have to wait 30 seconds on boot every few days for it to load.
But that introduces issues where if Google adds some shitty new feature to Chrome I sorta get forced into it instead of having an opportunity to choose, so there's definitely a tradeoff and also a responsibility on the part of the vendor to make sure they aren't abusing their ability to install things on my computer.
I don't like automatic updates either. I can just acquire the new version somehow and update it that way (perhaps handled by the package manager, when I tell it to update the package); it shouldn't be a part of the program itself, I think.
I totally agree that, given enough time and users, changing the user interface will almost always break some users' experience. "Better" is subjective.
The way Linux puts the package manager front-and-center is pretty cool, but in my experience it doesn't result in a very up-to-date system. You might have a version of OpenOffice that is years out of date. Oh, you want the latest version? You need to update your Ubuntu 16.04 first, it'll say!
I'm a programmer and still this makes no sense to me. Windows and Mac somehow let me use the latest OpenOffice even if I haven't updated my OS in forever.
> Oh, you want the latest version? You need to update your Ubuntu 16.04 first, it'll say!
Well, yes, that's the flipside of updating everything at the same time. You have to update everything :)
And if you're using an LTS version of an OS, you'll usually get a related LTS version of the apps from the same era, especially for the older OS versions, because they prioritise production stability. If you just use it as a working desktop, you'd be better off using the regular six-monthly releases.
Sure, but why? I can easily install the latest Firefox on Windows XP most likely (or could until a few years ago), but I can't on Ubuntu 20.04 (or couldn't until Firefox started doing auto updates on Ubuntu as well)?
Because Linux folks like to bash on Windows, while in reality they live as if every application put its stuff into C:\Windows, like Windows used to do in the old pre-XP days.
So naturally, when all applications depend on the same shared libraries and configuration files, they need to be updated in one fell swoop.
Yeah, when one knows their way around UNIX it is possible to actually install new software that works around this, but that is exactly the problem for most desktop users, "one knows their way around UNIX".
On my Linux systems, both server and desktop there are a set of programs that I don’t use the package manager for, because they’re usually out of date.
This includes browsers, where I let the auto updates handle it, and programming environments (Python, R, Node), where I often use their specific updaters.
I don’t think it’s realistic to expect a global curated package manager to stay on top of the bleeding edge all the time.
The extreme alternative to "device, please don't ask me when you want to update something, just do it" is losing control of your computer. If you're comfortable, like the author mentions, having your digital "home" be away from your devices, to "live in the cloud", I guess this doesn't matter (at least, while your favorite cloud providers and services don't shut down taking all your data with them, decide to hike prices, or ban you for breaking some ToS rule).
If you want to fully own your computer, it matters. You don't want to give away all control over it.
I don't think those are the only options. For example, Linux Mint shows an icon in the system tray to indicate when repository updates are available. I fully own the computer, and it is my choice if/when to apply updates. However, that doesn't imply that each individual program needs to yell at me at startup to get updates.
For security updates it still depends on risk though.
A flaw in my browser is extremely high risk and I require it fixed and updated quickly. But a flaw in my SSH server that is only accessible from behind a VPN is lower. And the risk is lower still if it's a flaw in a feature which I do not use.
I don't see an easy solution of course, it's a hard problem. But I wish that I had more granular control over what and when I update, and was given a clearer way to see the impacts of the update so I could weigh risk.
It would be lovely if it became a standard part of everybody's work day to spend 15 minutes doing TLC on their computer.
You'd clear away all the stubs of work, and the computer would say "Hi! Did you have a good work day? Here are a few things that would be great for you to do:
1. Upgrade these two apps
2. Do you want to check out how Dark Mode works? No? Ok
3. You've had these 46 tabs open since that one day six weeks ago you got into the idea of building your own treehouse. Want me to save them all in a notebook entry and remind you about it next summer?"
Really, any machine needs maintenance. Even an electric stove or a microwave needs an occasional clean if it gets dirty. Why should computers be the only special appliance that needs no maintenance whatsoever?
I'm generally in favor of more control too. Though that control extends to wanting to be able to control when my software updates and when I get to have new features, settings,etc.
My comment above is more of a rant than a recommendation.
The notification center or whatever it's called that's available in pretty much every modern OS would be a good location for such stuff. It's available, retrievable and not in the way when you don't want it.
Better software doesn't always exist and when it does exist is not always possible to switch (or has other downsides).
But for example Steam updates on boot, which is a (albeit minor) pain when I want to use it. Similar with Spotify. There aren't really any great alternatives to either of them that have equivalent libraries of content.
Or, going right to the core, Windows and its annoyingly frequent updates. Sure, I could use Linux (and I do on a few machines), but that isn't possible on my work machines, and it introduces incompatibilities on my personal machines.
Unfortunately that's not always feasible. I use Cura, a 3D printing slicer (it takes a 3D model and generates instructions for your printer), and it's one of the best at what it does (slicing). The UX is horrible though, clearly 'designed by programmers' with little thought for how people will actually use it. And every time I upgrade there's a 50/50 chance it'll forget all my settings (there are lots of things you need to tweak and customise specific to your printer) and I'll need to start from scratch. Dealing with these issues is easier than switching to different software and learning it from scratch.
I think that random examples without reference to a particular arena would be...random. If you would like a recommendation for an email client, or a web server, or a programming language, I would be happy to share what works for me. I do everything on Linux, by the way.
I'm really confused how the author generated 88 unsaved documents in Preview. What kind of documents and changes were these? Why were they not saved when the changes were made? Preview is mainly used to view image files and do simple editing. Those edits are auto-saved as you go along.
I’m sure the author has a point but I’m just distracted by this strange use case.
As it stands, the only thing keeping my Pinephone from replacing my BlackBerry is better messaging notifications. Calling, messaging (via Matrix), and Firefox are all mostly good enough for me.
Note I'm not saying it's good, but good enough for me. I've been using Arch, as I find the rate of improvement shows in a rolling release distro, and in my experience, Mobian is no more stable.
The closest thing to a secure, actually usable Linux distro on phones is Android; if you are serious about security and degoogling, GrapheneOS especially. The bad thing is that you have to have a Pixel for that.
Source: I own a Pinephone and have tried both Phosh-based and KDE-based offerings; there is also Sailfish. Out of these, Sailfish is the only one close to usable as of now.
I think some are getting quite good. I’m following Manjaro/Phosh, and it’s showing promise. I’m not using it for much yet, just watching it develop. I believe other distros are further along.
The (graphical) software program with guaranteed internet connectivity has become a control vector for the software author, many times the company employing the software author, over the user. Users are being "used", as suggested by the original author of GCC.
For whatever reasons, non-graphical software seems to suffer less from this problem.
> When I open a program, I want to use that program. I don't want to update it, I don't want to see all the new features, I want to USE it.
Indeed. Microsoft Office I'm looking at you! I don't think I've ever actually looked at one of those "Hey cool new stuff for you" popups more than trying to identify how to get rid of it as quickly as possible.
And even worse is "Hey here's a new feature we like so we turned it on for you, good luck figuring out where to turn it off again lol"
Google Maps on Android is the worst offender I've ever seen with this. Practically every time I open the app I get spammed with some dialog box or extraneous UI elements. The worst part is Google Maps is kind of a "just in time" affair. I only open it when I'm going somewhere and I need it to function right now. Alas.
Exactly. I mean, how hard is it to just give people a proper changelog? Am I the only one who's excited about the new features, when I get to read a proper list of them before installing a new version of the product?
> When I open a program, I want to use that program. I don't want to update it, I don't want to see all the new features, I want to USE it. I opened it because I had a task to complete and all this junk is getting in my way.
> And same when I close a program, as the author hits on very well.
The weird thing is, that is a solved problem. App Store apps (Mac, iOS) will automatically update in the background and, unless the developers build it into the app, the changelog if any will be on the store page. All platforms have something similar. It's just that a lot of developers opt to release and update their thing separately for a variety of reasons.
I agree on the OS updates though, some years ago there were a few... claims? Promises? Something that implied restarting to update would no longer be a thing. Dear reader, they were wrong.
Yeah and Windows 10 designers take note: there’s no situation where it’s OK to randomly decide you’re going to log me off in 5 minutes without a freaking cancel button. I lost 20+ hours of work on a process that can’t resume itself because of this insanity.
It didn’t used to be like that. But then people never updated their os/software and we ended up with lots of vulnerabilities.
I like where Windows is now: I get notified of required updates but I can defer them for a few days or a week. The non-technical people who don't know how to set up deferrals are, I'd imagine, also the crowd who didn't update before. I'm totally fine with my parents occasionally grumbling about a forced reboot overnight.
Likewise all the software I use, besides online games, let me defer updates. Lots of stuff also now has the update next start, which I also like. I don’t even notice my browser updating most of the time until I open dev tools and see “what’s new”
> But then people never updated their os/software and we ended up with lots of vulnerabilities.
Perhaps the issue was that we insisted every program be internet connected. If everything were offline a la Battlestar Galactica, security exposure would be greatly reduced - pretty much only the standard USB stick attack vector would work.
> But then people never updated their os/software and we ended up with lots of vulnerabilities.
Seems like we have a fundamental security issue that results in having to keep fixing vulnerabilities across most software, which sounds like applying bandaids and treating symptoms rather than coming up with a more secure approach.
Seconded. It seems like over the years I spend a greater and greater fraction of my screen time either setting something up or un-fucking it after an update/new version ruined it.
> No, I don't want to update my computer right now, and no, updating overnight tonight isn't good either because I need to keep this program running until tomorrow. I understand that your new UX is better for me, and I'm sure I'll love it, but forcing that on me right now is preventing me from doing what I need to do.
I especially love when Firefox decides that it really needs to update, and I cannot open any new tab until I do so. Who cares if I am on a crappy connection and the download that I need to finish will take 20 minutes more; it's either restart it or wait until I finish. And that other thing in that tab that I need time to finish but cannot really save? Who cares.
That Firefox issue only happens on Linux when you upgrade Firefox on disk using your package manager while Firefox is running. The ABI mismatch between the running version and the new version makes starting new Firefox processes unstable, so it's disabled to avoid crashes.
Simply don't run system updates during a Firefox session and you're golden.
Yeah, it's a Linux distro issue. Alternatively, run NixOS, which instead of swapping the running executable file out from under itself will create a new environment with a new Firefox and start that one next time.
The essay did not convince me of the need to rethink the desktop, nor did it sufficiently explain the idea of fragments versus work products.
Yes, search is great when it works. You know what also works? Organization. I use search within folders of very broad categories to rapidly narrow stuff down.
If the desktop ain't broke, it doesn't need fixing.
Instead, I am thinking about all the dark patterns and anti-patterns, as well as performance hogs, endless constant updates, and naggers trying to upsell you shit.
There's a reason why I returned to linux. Microsoft, please fix your shit.
This works great if you are dealing with files and are disciplined enough to organise them, but in other cases it turns into a big mess. My wife is not disciplined in that way, so her computer has layers upon layers of temporary and non-temporary documents on the Desktop, and Windows is constantly complaining there is no disk space. Her iPhone is the same because of podcasts and WhatsApp groups that auto-download media but never clear out the old stuff.
The idea of fragments as TFA discusses really resonated with me, because most of my life is not in files. Giving a real example, a few months ago I read a blog post about some house planning software (I'm in the process of building a house), and later needed to refer back to it. I didn't save it anywhere (bookmark, save as PDF, Notion/Roam etc) because it didn't seem that important at the time.
It's the same if you watch a YouTube video or read an article here. Sure, you could take notes and save them in a text file that you can later search, but who has the time to do that for every single piece of content they consume?
Something like Spotlight is a good start, but it needs much more metadata. All of your search and watch history needs to be there, but in a world that's becoming ever more siloed (how do I record the photo a friend shared with me a few months ago on Facebook Messenger?) that's tricky.
The problem with magical solutions like the fragment idea is that they require magic to implement.
Most software can't be bothered to integrate rich paste support (e.g. the way you can paste an Excel spreadsheet into a Word document), and we expect them to implement a complex API to define these fragments in a way where the OS/Spotlight could actually build up that history?
This would be a massive undertaking requiring buy-in from every piece of software before it is even marginally usable. It is a pipe dream (and may even be a nightmare if it were realizable).
How would one organize better by flooding the system with even more data? Fragments would be no different from files, just smaller, yet still objects to organize away. What's necessary are tools that make use of them, and there have been many, many attempts at this over the years. Several are still going even now. But all of them fail, because the basic problem - it being the user's responsibility to organize their own stuff - still can't be solved by software.
> It's the same if you watch a YouTube video or read an article here. Sure, you could take notes and save them in a text file that you can later search, but who has the time to do that for every single piece of content they consume?
Those already exist. The browser has them all in its history. Google similarly has many of them stored for their services. To some degree they use them for the user, but there is simply limited use for that information.
The biggest thing I miss in the file folder paradigm is dividers/sections. Like folders, but without nesting.
I used to have a tendency to over-categorize and nest things too deeply, which makes them hard to find again. Concrete example: nesting classes by semester. When looking back for something, I would usually remember what class it was, but I'd often forget the semester. So now I have to search through a bunch of folders (manually or automatically) to find it. Ugh. A flat structure is much easier to follow.
However, I still want all my classes grouped together when I'm listing them. Currently the only way I know to achieve this is by adding a prefix to the file or folder name. This works, but you end up with some grotesque names. The implementation could be regular folders with a flag to indicate their contents should be displayed inline one level above.
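A toy sketch of that last idea (Node-flavoured TypeScript; the ".inline" marker file is purely an assumption): a folder carrying the marker gets its contents listed inline in the parent, under a divider header, instead of showing up as a nested folder.

```typescript
// Hypothetical sketch of the "divider folder" idea: a folder containing a
// marker file (here ".inline") has its contents listed inline in the parent,
// grouped under its name, instead of appearing as a nested folder.
import { promises as fs } from "fs";
import * as path from "path";

async function exists(p: string): Promise<boolean> {
  try { await fs.access(p); return true; } catch { return false; }
}

async function listWithDividers(dir: string): Promise<string[]> {
  const out: string[] = [];
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory() && (await exists(path.join(full, ".inline")))) {
      out.push(`=== ${entry.name} ===`); // divider header, no extra nesting
      const children = (await fs.readdir(full)).filter((f) => f !== ".inline");
      out.push(...children.map((f) => `  ${f}`));
    } else {
      out.push(entry.name);
    }
  }
  return out;
}
```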
I've started putting timestamps at the start of filenames recently and it's been working pretty well: yyyymmddhhmmss, then an optional filename if needed.
it might still be your idea of grotesque, and the timestamp does take up a bit of space unfortunately, but at least it's consistent.
going with your example above, it would allow you to just have folders for classes and then have files from all years in the same class folder. then you can sort of hack together a divider feature by adding some brightly coloured jpgs with a timestamp at the start of the year which would let you see at a glance where each year starts and ends.
it also helps if your file browser will remember to open each folder with the newest files shown first at the top
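For anyone who wants to automate the convention, a minimal sketch (TypeScript; the label format is just an example) of generating the yyyymmddhhmmss prefix:

```typescript
// Minimal sketch of the yyyymmddhhmmss prefix convention: files named this
// way sort chronologically in any file browser, with an optional label after
// the timestamp.

function timestampPrefix(date: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return (
    date.getFullYear().toString() +
    pad(date.getMonth() + 1) +
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds())
  );
}

function timestampedName(label?: string): string {
  return label ? `${timestampPrefix()}-${label}` : timestampPrefix();
}

// e.g. "20210315093042-lecture-notes.md"
console.log(timestampedName("lecture-notes.md"));
```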
How many people are actually effectively organizing their data in files? If a system doesn't work for the vast majority of users, then it does deserve rethinking. It's great that the desktop model works for you, but I strongly suspect you are not representative of the greater user base.
Except it does not work for those users because automatically managing vast unordered, uncategorized data sets is not a problem we know how to solve. If we knew how to solve that problem, we could just implement it in the existing hierarchical manually-organized paradigm by just having one hierarchical level and dumping all your data into it. Given that the new model fits in the old model, the only reason for demolishing the old model is for efficiency reasons and there is very little in the way of efficiency that can be gained by converting the relatively shallow organizational trees that are commonly used for a single-level store.
Why do I believe we do not know how to solve the problem? As the article inadvertently points out, the unorganized data they want is not accessible and Mac Spotlight, a tool that operates on unorganized data, does an "OK Job, but not a miraculous one". If Apple knew how to super search the vast trove of unorganized data and file formats for content-specific data and always get it right would there be any reason they would not do so? It would clearly be strictly superior to their existing offering. For that matter, they bring up browsing history in Safari which is an even easier problem and also claim it hardly works. For evidence outside of Apple, think of all the in-app search systems that can barely even search plaintext for things you know exist. I frequently have problems with Gmail not finding emails despite giving exact string matches. These problems are all subsets of the proposed problem, which must be solved to solve the harder problem, and which would offer immediate material benefits for their solution yet do not or barely exist.
The problem is not a mismatch between solution and user workflow. The problem is that we do not know how to solve the problem for all meaningful workflows. The best we have achieved is creating and supporting a workflow that allows the problem to be solved in many useful cases. Luckily, if the problem can actually be solved in the future, it can easily be bolted onto the existing solution to test viability before going all-in.
> It would clearly be strictly superior to their existing offering. For that matter, they bring up browsing history in Safari which is an even easier problem and also claim it hardly works. For evidence outside of Apple
For evidence outside of Apple, just try to use browser history in Chrome or Firefox. They're completely unreliable, entirely useless. For me, it's a 50/50 chance I'll get a result when searching for a site that I visited a few days earlier.
> think of all the in-app search systems that can barely even search plaintext for things you know exist
Exactly.
Most of the search systems I've worked with have one, big problem: they don't feel reliable. You type in a query, a system starts a search. You get some indicator of progress, which eventually expires. Typically, search systems don't tell you if the search was exhaustive - did it check every possible thing that could match, or did it bail out after hitting some time limit? They also often meddle with the query, modifying it or doing fuzzy matches, in ways not communicated to you in the user interface. With such systems, if the thing I'm looking for isn't in the result set, I don't feel confident it isn't there.
Manual file organization (or any direct data access) has this property of being exhaustive: a file is either there or it isn't. Direct file access means that if you don't trust your file searcher, you can always look for yourself.
There are many other challenges ahead if we want "universal search" to work, but a big step forward would be addressing these trust issues. It's entirely a UI issue. Instead of "0 results found", say "0 results found after searching contents of all text files in Documents folder (symbolic links not followed)". Instead of "about 123 results", say "123 results found; there may be more matches, [click here] to read about limitations of the search method".
(And on a general point: users aren't as dumb as the common claim is. Our industry is treating them as dumb, taking away every opportunity to learn and build mental models - and then complains that users "are dumb".)
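Concretely, the fix could be as small as having search results carry a report of what was and wasn't covered. A minimal sketch (TypeScript; the field names are assumptions, not any existing API):

```typescript
// Hypothetical sketch: a search result that reports *how* the search was done,
// so a "0 results" answer can actually be trusted (or explicitly not).
interface SearchReport<T> {
  results: T[];
  exhaustive: boolean;  // did we check everything we claim to cover?
  scope: string;        // human-readable description of what was searched
  skipped: string[];    // things we could not or chose not to search
}

function describe(report: SearchReport<unknown>): string {
  const head = `${report.results.length} results found in ${report.scope}`;
  return report.exhaustive
    ? `${head} (exhaustive search)`
    : `${head} (incomplete: skipped ${report.skipped.join(", ")})`;
}
```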
I feel like the author’s proposed solution of better searching through tagging and semantic associations over manual organization also requires user care and effort. I don’t think the desktop model is broken so much as the average person isn’t very organized to begin with.
This tagging idea resurfaces every few years. I have to confess I don’t really see where the revolutionary improvement is. Filenames are already namespaced tags and symlinks allow for multiple tags. Not quite the same thing, I know, but close enough for tagging to appear to me to be a convenient enhancement rather than something game changing.
And as you imply, when you have a lot of files that you need to organise, you tend to start compiling them into directories.
So your solution to organizing files... is more files? How would multiple files use the same "tag" under that scheme, without duplicating it everywhere?
Tags are very powerful if used properly. They describe sets by definition and allow quick filtering with intersections and other set operations. If the filtering is a bit smarter it can support boolean operators, or use tag distance and order as a meaningful data point so that e.g. "discussion board" doesn't return the same results as "discussion snow board".
They're a more flexible way to organize data than hierarchical directories. Hierarchy can easily be expressed with tags (use any character as a separator, e.g. "os.linux"), but files and directories are not nearly expressive enough for all the use cases tags can cover.
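A toy sketch of that (TypeScript, with made-up items) showing intersection filtering plus hierarchy encoded in the tag name:

```typescript
// Sketch of tags-as-sets: each item carries a set of tags, and queries are
// plain set operations. Hierarchy can be encoded in the tag itself ("os.linux").

type Tagged = { name: string; tags: Set<string> };

const items: Tagged[] = [
  { name: "notes-kernel.md", tags: new Set(["os.linux", "kernel", "notes"]) },
  { name: "board-minutes.md", tags: new Set(["discussion", "board", "notes"]) },
  { name: "snowboard.jpg", tags: new Set(["snow", "board", "photo"]) },
];

// Intersection: items carrying *all* of the requested tags.
function withAllTags(query: string[]): Tagged[] {
  return items.filter((item) => query.every((t) => item.tags.has(t)));
}

// A prefix match lets "os" stand in for the whole "os.*" subtree.
function withTagPrefix(prefix: string): Tagged[] {
  return items.filter((item) =>
    [...item.tags].some((t) => t === prefix || t.startsWith(prefix + "."))
  );
}

console.log(withAllTags(["discussion", "board"]).map((i) => i.name)); // ["board-minutes.md"]
console.log(withTagPrefix("os").map((i) => i.name));                  // ["notes-kernel.md"]
```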
I've been working on a file organization system that incorporates tags, and I must say I agree with the GP. Tags improve display and search over hierarchies, but they require no less discipline on the part of the user. Without some way to automatically mass-assign meaningful tags to items, which is an insanely difficult problem in the general case, the user is forced to manually tag every single item they add to the system. AI could help do this, but not much. First, consider that mistagging an item can make it practically impossible to find again. Second, consider that semantic tagging can be almost arbitrarily abstract; imagine tagging a text document with "cyberpunk" or an image file with "parody".
The genius of Google search was leveraging hyperlinks to add semantics. The only way I can see something roughly equivalent happening here is by exposing all my interactions to search, including e.g. the juggling and browsing of other documents beyond the one being worked on. I would find this scary.
> So your solution to organizing files... is more files? How would multiple files use the same "tag" under that scheme, without duplicating it everywhere?
You could put the same file in multiple directories. You could have a file in ~/vat and symlink it to ~/urgent and ~/accountant.
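Spelled out as a sketch (the invoice filename is just an example):

    mkdir -p ~/vat ~/urgent ~/accountant
    mv invoice-2021-q1.pdf ~/vat/                  # the canonical copy lives in ~/vat
    ln -s ~/vat/invoice-2021-q1.pdf ~/urgent/      # "tag" it urgent
    ln -s ~/vat/invoice-2021-q1.pdf ~/accountant/  # and accountant
    # removing ~/urgent/invoice-2021-q1.pdf removes only that one "tag"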
You're right, that could work in theory. In practice it would be difficult to manage and keep track of all the links, update them if the file is moved, etc. I suppose hard links would avoid that, so that might be a reasonable way to implement tags, I agree. My main goal would be to manage this via a tag-like UI, so that I don't have to do this manually. And at that point I might as well just store the metadata in a proper DB instead of the filesystem...
I'm not dismissing the utility of tags, and I like your idea. I'd definitely install it as an application and give it a go.
I'm really addressing the argument that tags are a revolutionary change that should replace the directory-based filesystem entirely. The cost seems too high to justify the benefit.
Yeah, it still requires management and discipline from the user for it to be useful. FWIW I use Pinboard daily and finding something among thousands of bookmarks is a breeze. I can usually find what I'm looking for with a single tag or an intersection of just two tags. Finding anything on the filesystem is much more difficult, even with good directory structure discipline.
I'm not actually interested in building such a system, it's been done before[1,2]. Though I haven't actually used any of them extensively, since doing so requires a shift in how file management is done, and I'm too comfortable with plain filesystems to bother changing yet - but it's on my list. :)
You'll get a biased answer because you'll include a group of people who won't succeed no matter what approach you take. People have tried that approach and ended up with things like Microsoft Bob.
The real question is what is effective for individuals and businesses trying to accomplish real work, who are at least somewhat capable of learning or of training staff.
Most users have no idea of what is even possible with a computer, organization-wise.
I don't disagree that interfaces should be designed for users, but they should be designed to EMPOWER and teach users, not to just dumb everything down to the lowest common denominator. Because, let's face it, most people are awful at organizing stuff that materially matters a lot to their lives and well-being.
Rethinking is fine, if it leads to action. If the desktop isn't working for the vast majority of users, then it's certainly the prerogative of anyone in the world to code up something which is guaranteed to make every one of those users happy.
The majority of users don't even realize they're interacting with files. The burden falls entirely on developers to ensure that their programs are simple.
>If a system doesn't work for vast majority of users, then it does deserve rethinking.
No, if a commercial, for-profit system doesn't work for the vast majority of users, then it deserves rethinking. Linux is a great example of an operating system that was not written for the most profitable denominator until recently.
Just a counterpoint: I literally save every document to one giant folder, and give every file a longer descriptive name so I'll be able to search for it easily.
I've been doing this for almost ten years, and it works great.
Dear Ben,
what you wish to achieve is to turn desktops upside down for everyone based on the needs of one disorganized person: you.
Don't do that.
There are lots and lots of people using computers not just for tossing around casual fragments of information but for coherent content creation, for products that need to be reliable and are very often inherently complex, made by organizations with reliable practices.
In short: there are other needs than the ones you are familiar with; try not to act for everyone based on an isolated view.
By the way, the problem with today's desktops stems more from the urge to make them work like handhelds, which are made for a flow of fragmented, casual pieces of information. The UX of desktops is poisoned by the limited concepts of handhelds, by the trends (I hate that word, representing appearances rather than thoughtful behaviour) of UX designers who chase coined concepts addressing some isolated view - if they think it through at all; sometimes they just do things differently for the sake of difference, which is not a very intelligent thing.
So much this.
I spotted the problem as soon as he said he has 75-plus tabs open, justifying it with “humans use tabs as a todo list”. No, that’s how you use them.
Want to group by projects? Use tags or folder
Don’t want to spend hours cleaning your download folder? Move every new file you download into the correct location
And at some point he said computers should automatically delete some files? WHAT
And last but not least, of course mobile systems handle information better - they don’t store nearly as much information as someone who uses their desktop regularly does.
A response article to this would be: “Just learn to use a computer”.
I don't know how many times I've sat in a meeting or worked with somebody where the presenter couldn't find the document they wanted to present because of the endless apps and tabs open at once, or the endless stream of unorganized files on the desktop or in working folders. Same goes for disorganized shared folders (like SharePoint) or Confluence spaces without clear organization. Then, when information is required, the people responsible for the mess start to search in said mess and can't find their stuff. And mostly it's not because there is no decent search; mostly it's because they never wrote down/saved/filed the information in the first place - they lost track of it, or thought they could remember not to close one tab among 100 others, or simply can't figure out how to use the search.
75+ tabs are only feasible if you have an extension like Tree Tabs for Firefox. Of course, no more than ten are ever loaded into memory at any one time.
The true shame is that trees of tabs aren’t natively implemented by the browsers because all the extensions that enable tree functionality have glitches caused by the native tab ordering.
I beg to differ. I have, in a vanilla firefox UI, about 100 tabs, in a single window.
I group them (loosely) by their themes, in the bar. It's... manageable.
> A response article to this would be: “Just learn to use a computer”.
A rephrasing of his problem would be: "I'm too busy to learn how to become more efficient with a computer," and I've heard this excuse for 27 years now.
Classic case: At one company, I rewrote an engineering program that my dad had written years earlier as a command-line program in BASIC. The operator would have to use note cards and go through a whole series of menus to make it do its thing, like "A... 1... C... 3... 5..." I rewrote it in Visual Basic, and made the whole function take no more than 3 clicks, usually just 1. I came back later, and noticed the operator was still using my dad's program, and I asked why. He said my program was "too complicated." I started to protest, but knew it was futile. Other people were loving it, so I just dropped it.
I've never found a good way to redirect people who are defensive about learning new ways to use a computer to think about things differently. If anyone has had luck here, I'd love to hear about it.
I am, admittedly, on the "one app open at a time" end of things, but so much of this article seems like a user complaining about a mess they made for themselves.
>> I have 37 windows open, with some 75+ tabs
>> I have a Desktop scattered with (currently) 132 icons
>> Programs need to cut it out with the dreaded “do you want to review the 88 open documents” crap.
There is no technology that can solve those problems. Just close your stuff, man.
The author seems to wear his extreme disorganization as a badge of honor. Also from the article:
> having 500 tabs open in mobile Safari, which I always do, doesn’t hurt my system performance at all because the tabs are freeze-dried when I’m not using them.
When you have so many open tabs, there's no difference between a "freeze dried" tab, and just an entry in your browser's history. If your system performance on desktop is hurt by having 37 windows open, the solution is to just close all these windows. I guarantee you're not going to miss them.
Well of course there's a difference between a "freeze dried" tab and an entry in the browser's history. The latter is just one line in an undifferentiated m(a|e)ss. The former is right in front of your face, one click away, maybe with an icon in view.
No. If you have 500 tabs open in any program, chances are the one you're interested in is not in view right in front of you just a single click away. There's, at a guess, a 95 percent chance[1] you'll have to scroll around and look for it. And if you're going to have to do that anyway, you can just as well do it in the browser history as among open tabs.
___
1: It's probably not exactly the same in Safari / iOS, but gotta be the same ballpark: In Firefox / Android I can see fewer than ten tabs at a time. (Hm, that number must have shrunk in a recent update.)
I have said this elsewhere, but the fact that browsers have cashed in on tabs, and that tabs are opaque to window managers, means that you literally cannot search through your tabs to find one that is already open with the title or content that you want. This is something that technology could trivially and efficiently solve, yet it does not.
I wrote my own bash summon/banish script to pull up existing windows whose titles match a pattern and to send them away again. It is better at dealing with 20+ browser windows and 500+ tabs than anything the UI teams at billion-dollar companies seem to have come up with in literally my entire lifetime. That tells me that whatever the UI teams are doing is a joke, because I have no idea what I am doing and it works better. I still can't summon windows based on a search of their contents, but I'm sure I can solve that for at least some Emacs buffers.
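(For what it's worth, the "summon" half can be approximated on X11 with wmctrl; a rough sketch along these lines, not the actual script:)

    #!/usr/bin/env bash
    # summon: raise the first window whose title matches a pattern
    pattern="$1"
    # wmctrl -l prints one window per line: <id> <desktop> <host> <title>
    id=$(wmctrl -l | grep -i -- "$pattern" | head -n1 | awk '{print $1}')
    if [ -n "$id" ]; then
        wmctrl -i -a "$id"    # -i: argument is a window id, -a: activate/raise it
    else
        echo "no window matching: $pattern" >&2
    fi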
The thing that is stupid is that opening a new window and typing in a url is usually faster than finding the already open version of that same window (which has the old state). Browsers can sort of do this with tabs, but why the heck isn't there a single box that will take you to the information that you were already viewing locally? The user interaction workflows for managing large amounts of information are fundamentally broken.
The UX for funnelling users into parting with their dollars and data is flawless, but somehow it is impossible to take me to the window I already had open on the subject faster than I can open it again from scratch? That is a sad, sad commentary on the dystopian nightmare we are trapped in.
I say this with all the love and compassion in my heart: have you tried bookmarks?
Not trying to be snide, but having that many tabs open and expecting to ever come back to the same one twice is 100% incomprehensible to me. But, I freely admit, I start trying to pare down when my tabs get narrower than the default width. I've gathered that I'm an anomaly.
I hear you. I do use bookmarks, and sometimes they work better than a Google search, but they are a solution to a different problem (a problem for which I sometimes wonder whether it wouldn't make sense to store the raw HTML of every page I ever visit and stick it in a full-text search index ... org capture maybe?), and they are siloed inside their specific apps or websites. I got burned by using reddit's save functionality without realizing that you can only ever see the last 2k saved pages, and now I bookmark things locally, so at least that part is sort of unified.
The issue that I'm thinking about here is finding active UI elements that already contain what I am looking for. I have been able to do this to a certain extent in Emacs by writing commands that go to a specific buffer that is already open, except that even that doesn't quite work, for a variety of reasons - including the fact that sometimes you want some exact state and sometimes you want a clean slate, and in Emacs in particular doing that can completely mangle buffer ordering. At some point I will get fed up with the current state of things and fix whatever brain-dead heuristic destroys the buffer ordering, but for now I find workarounds.
Essentially the issue is that, yes, it would be great to be able to put a pin in the high-dimensional state space that is a computer + network and just ... go back to that point, but even ignoring the engineering challenges, my experience is that the dominant UI paradigms haven't really even considered the issue (though the research community surely has).
I have the Tree Tabs extension to blame for enabling me to be a member of the 100+ tab club. What it allows that bookmarks do not is a clear way to swap between dynamic groups of related tabs.
A somewhat real-world example: I have 5 tabs open for reference on an Elixir project I plan to return to over the weekend and 12 tabs open for a Python project I’m currently working on. When I’m done with today’s Python work, I expect to have no more than seven of those Python tabs remaining.
> There is no technology that can solve those problems.
Sure there is. It's just not implemented in most software - including, in particular, web browsers.
I currently have 192 buffers opened in the Emacs session on my desktop. 143 on my work machine. That's quite low - both sessions have less than a month of uptime, and I was on ~10 days of leave during that period. I can easily run to 300+ buffers in a few days of work. I've never complained about too many buffers being opened.
The trick that makes it work is the tooling around management of open buffers.
I only ever see one or two, maybe four, at any given time. There's no tab bar. Switching is done with incremental search - I start typing a substring of a buffer's name, and in 3-5 keystrokes narrow the list to the one buffer I want. The list of candidates starts in most-recent-first order. I could enable fuzzy matching, or matching by both name and type, but I personally don't need it, I have a good memory for buffer/tab/file names.
If and when I need to manage buffers in bulk, I fire up the ibuffer interface (which I do by holding CTRL a bit longer, to turn `C-x b` to `C-x C-b`). There I get a table of all buffers, with columns showing the buffer name, size, modification status, mode (i.e. "buffer type" - associated application for handling it), and filename or process bound to the buffer. With a few keypresses, I can sort and filter the list by any of these properties (and a bunch of others). Filters can be incrementally combined. I can mark entries on the list (whether manually, or by a filter), and perform bulk operations on them.
And in case you think this is a hardcore techie interface - it isn't. It's essentially a streamlined, keyboard-oriented version of what Windows Explorer in "Details" view gives you for your files. I think even casual, non-tech-savvy users are pretty familiar with that view.
I'm pretty sure the entire problem of "too many tabs" would go away if the browser did these three things: get rid of the tab bar, provide a decent switcher UI that, beyond the mouse, supports incremental search from keyboard, and provide a table view for managing the tabs.
Text editors like Sublime have been auto-saving unsaved files, along with dirty flags, to temp files for a long time. And Sublime, for example, does not pester you with dialogs on close (or at least it didn’t when I was using it).
Most editors I use at least have recovery temp files. The step from recovery files to not asking annoying questions is easy.
Easier said than done unfortunately. One thing that has helped me with tabs a lot is a browser extension that always opens links in the same tab. This makes it so that I need to be explicit if I want to open something in a new tab, and I end up with way fewer tabs to clean up
Touch UIs like what we find in iOS and Android devices disincentivize clutter pretty well. I mean, yeah - close your stuff - but let's acknowledge that designers have actually solved the described problem on non-desktop platforms.
I thought the same thing. If having all this stuff open is a useful thing, then the desktop paradigm is definitely not a problem. If having all this stuff open is a distraction, the desktop paradigm is still not the problem.
In one way, it sounds like transient information-hoarding, something few would really desire on a real desktop.
The desktop is broken because the big players abandoned it. The idea itself is perfectly serviceable, extendable, and usable. It mates well with a CLI. It was nearly (and neatly) solved back in the late 80s / early 90s with System 6 (with plenty of good work done on Systems 7–9; Windows 3.1 was at least consistent, and the fine work done on NeXTSTEP was simply ignored), and ever since then has been left to rot.
The fact that the Finder actually did what it was supposed to, including remembering folder / file locations and states, but no longer does (this has been conspicuously left out in every OS X+ release), is good evidence that Apple (and MS, given the state of W10), simply doesn't care. People use the web, people use Spotlight, and they go on without too much complaint, mainly because they don't know that it could be any better.
Even simple window management is a disaster — there are W10 systems we use at work for presentations, and it's a crapshoot as to whether a window will expand to fill the screen, allow me to drag it, or do something unexpected when the top bar is used to drag the window to another location. Mac windows no longer expand to fit content. It isn't that the desktop concept has failed — it's been left to die, when it could be the rock-solid basis for any modern interface. This is not progress.
Okay, that was a weird read for me. The dissonance element is that the author seems to have transitioned away from using a computer and forgot to change their equipment. Let me explain what I mean by that.
I would speculate that the majority of "computer users" today are simply "logged in users at terminal equipment"[1], as they say. Their terminal, which is typically a web browser (or a thinly skinned web browser "light" client), is connected to an application on the actual computer somewhere very far away. FWIW this is a very 70's and 80's vibe, where you could access your application from any terminal connected to the computer hosting it, after going through an authentication process. If you had a fancy "multi-page" terminal that could run several virtual terminal sessions, you could have "tabs" open to multiple applications. Today we have really, really fancy terminals; they can have 50, 100, even 200 different sessions going on at once.
And I agree with the author: if your terminal is actually a computer that happens to be running a terminal application so that you can get to the things you really use to do your job or whatnot, then the metaphor it uses for organizing itself as a computer is outdated. For Apple fans, iOS is that terminal of choice; Chrome "OS" is that for Google apps. There isn't a Microsoft version of this yet that I'm aware of.
In my opinion, it is this "terminal-ness" of the usage that is most single-handedly responsible for the absolute stagnation in perceptible performance improvements. No matter how fast your local machine runs, it can't make the network or the remote system any faster, so your user experience is the same. Not true for locally hosted applications like games, of course (not something terminals were able to host locally).
All of that is a long way to say "You want to design a better iOS/ChromeOS then go for it."
[1] This is the origin of the term "lusers" or logged in users.
This used to be the default option! I still remember the WWDC keynote where the feature (called Resume) was introduced. Having it on by default made for a really cool demo, where all the windows were restored after a restart.
We have rethought the computer 'Desktop' as a concept: it is what we have with Android / iOS, and it consists of two separate launchers.
The applications that run on these devices also generally use a paradigm where they don't really store files but rather store objects, which can have hierarchies. They do have search, and some of them support a stylus as input.
Another aspect is the ability to use your voice to interact with computers which is still quite limited.
VR looks like it may offer something different eventually but it will have to be explored more.
The problem with replacing the computer 'Desktop' as a concept is that it generally is extremely hard to replace. It looked like a combination of a stream of information with the ability to search a stream or separate streams could have replaced it but we only see this in web applications (a good example is Facebook or Twitter).
While I kind of like the idea of fragments, and them being universally searchable, IDK: it sounds like a pluggable search engine is what that OS needs, more than rethinking the FS.
Search is great, when it works. When it doesn't, it is frustrating. GitHub's file search regularly fails to find some files that I know exist (one of our YAML files at work, in particular, is invisible; I've learned to just not even try). "Shell" won't find the sea shell emoji in OS X. "sea" will. "Place of Interest" (the command key symbol) is similarly frustrating.
> who cares, disks are big, save it all, forever
Except they're not. Laptops' preference for SSDs means my work laptop has <300 GiB, which was full within a year.
Okay, I'm dating myself, but when I was in grad school, there was a campus computer store, and they gave out a little pamphlet: "Do I need a personal computer?" It listed a number of pro's and con's, but the message that has stuck with me is this one:
Don't expect your computer to organize you. If you have a messy desk, you will have a messy computer.
Decades later, I have both a messy desktop and a messy computer. I think there is something about personal organization that, if it eludes you, it will elude your computer too. I've made peace with the fact that I will never be a hyper organized person. The best thing I can do is to put my stuff somewhere and hope that it's searchable.
I have all the stuff I work on on my desktop. When that gets full, I move everything into a folder "Old Desktop" which I place on the desktop. Of course this isn't the first time I did that. So this folder contains a chain of many such "Old Desktop" folders :')
It leads to interesting discussions sometimes, such as "why do you have a 200GB roaming profile???". Organisationally, though, I can find stuff again really easily. While I traverse the chain of "Old Desktop"s I see other documents from the same timeframe, which brings me back really easily.
In fact, more organised people are often surprised by how quickly I manage to dig up ancient stuff for them. They tend to lose track of things over time, probably when they move to a new PC or file host or whatever. I've really become the "Do you still have that inventory from 2007?" guy at work. I see no reason to change it at all.
I also found that the best system is having the first layer of folder organization be "which period of my time is this from?".
Conceptually, it's easier to think of "music from high school" than about the specific mix of subgenres from my playlist back then. Same for documents that I saved. Those ICQ logs from high school are there. They don't belong in the same folder as the stuff I wrote yesterday, even though they could be of a similar nature.
I have a command line tool `today` that creates and enters a directory based on today's date. Anything I do today goes in `today`, and I don't have to see yesterday, but it is easy to find. I wish my desktop were like that.
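The whole thing is only a few lines as a shell function (this is just a sketch of the idea, not necessarily the exact tool; the ~/days root is arbitrary):

    # `today`: create (if needed) and enter a directory named after today's date.
    # A shell function rather than a script, so that `cd` affects the current shell.
    today() {
        local dir="$HOME/days/$(date +%Y-%m-%d)"
        mkdir -p "$dir" && cd "$dir"
    }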
I'm incredibly annoyed by needing to think about where my files are, at all. It shouldn't matter. All I should need to do is tell how many times I want a file replicated on my personal device network, and optionally tag it.
Well, steps were made in exactly that direction with ReiserFS, and Microsoft was working on the same thing around the same time, in the Longhorn days (it was called WinFS).
For some reason it never took off. ReiserFS was abandoned when Hans Reiser killed his wife, but Microsoft also abandoned WinFS, with no (known) killings involved. Something must not have worked out.
It's a very interesting idea to revisit though. I do still think there's merit in it. Probably part of the issue was how to make it work gracefully with legacy apps.
However, in this kind of scenario you absolutely want to tag each item with multiple tags. Otherwise this becomes a haystack you'll never find anything in again. I bet this was another issue surrounding implementations.
Fun fact: in Windows, the location of the Desktop is dynamic and can be changed via SHSetKnownFolderPath. This allows you to display the contents of any folder in the desktop.
Years ago I made a gadget that sat on the top right corner of the desktop[1]. It contained a stack of buttons, one for each folder that I was working on. You could add more folders by dragging them on the gadget.
It's liberating not having to keep a file explorer window to access the current project, and you can easily access it with the Windows+D hotkey that minimizes/restores all windows. Use it to open files, or drag stuff to/from other open applications.
Reminds me of Deskmate (on the Tandy 1000) which allowed you to choose a current directory. It would display files in the current directory under the icon for the relevant program.
What happens (on the desktop) when you have too many files to show? Many of my 'working' folders have 1,000s of files.
Personally, I have the complete opposite strategy:
Using Windows file/folder security, deny yourself access to your own 'Desktop' folder. Not only does this prevent you from lazily dropping/saving files onto it, it also has the added effect of hiding ALL the desktop icons, so you end up with a 100% clean desktop.
My main uses are writing software and reports, neither of which creates 1,000s of files under the main folder. So I guess my gadget would not be appropriate for this use case.
> Not only does doing this prevent you from lazily dropping/saving files onto it
That's the thing, it's only considered "lazy" because it's faster but the files end up in a disorganized location. By pointing the desktop somewhere more useful, you get all the speed benefits without the disorganization downsides.
You do have to weigh it against your love of your wallpaper, though.
MacOS specifically does a terrible job at visually managing apps.
I frequently end up with a bloated dock with a ton of icons signalling that the app behind them is open (the little dot on top of them).
When I close all the windows of an app, why on earth does the app stay open? Why do I need to have Powerpoint open in the dock with no Powerpoint windows open? Especially today, when we have SSDs and fast processors that can launch an app within a couple of seconds?
If an app needs to run in the background without UI, there is the menu bar for them.
Linux and Windows have much more rational UX regarding desktop usage.
It still might be doing something. Mail, for instance, is sitting there occasionally checking for new mail even if you have zero windows open.
And I dunno about you, but my 2017 Macbook Pro still takes a significant amount of time to launch Big Serious Apps with a ton of plugins. I like being able to tab over to Illustrator and hit command-n and be fucking around in a new canvas with absolutely no waiting. We have virtual memory, open apps in the background doing nothing but waiting around to be used will get frozen to disc, then get restored a lot faster than they boot up. I generally hate it when apps auto-close themselves without asking when I close the last document like Windows tends to.
And that's a terrible abstraction that, for some reason, happens for any application, whether they have a background job or not. System tray + taskbar is miles better.
> When I close all the windows of an app why on earth does the app stay open?
Apple agrees and indeed many of its first-party apps work like that. Close the last window in Numbers, Keynote, Preview, and the app will disappear from the Dock.
This is opt-in by the app for compatibility reasons, and Powerpoint has not opted in, perhaps because there's no obvious benefit to the app for doing so.
Wish the dock could be disabled completely. I have it set to the smallest size and hidden on the left side, but it still gets triggered occasionally. Having it standard size and chewing up space at the bottom of the screen seems insane to me - what a waste of screen real estate.
Like you, I use Cmd-Tab to know what I have open (and you can obviously Cmd-Q while there to close apps) and Cmd-Space to open apps. I never use the Dock or the grid of icons view or any equivalent (not sure what they're even called).
I don't have a mac anymore to check but there was a magic pref that I set from the commandline to adjust the auto-show delay. I set it to something like 10s which made sure that it effectively never appeared by accident. If you really don't want it I assume you could set the delay to a couple of hours.
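If memory serves, the pref in question was the Dock's autohide-delay (the value is in seconds), roughly:

    # delay before the hidden Dock slides out when you hit the screen edge
    defaults write com.apple.dock autohide-delay -float 10
    killall Dock    # restart the Dock so the setting takes effect
    # undo: defaults delete com.apple.dock autohide-delay && killall Dock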
I also rarely use the dock (unless I'm forced to — e.g. to navigate to the freakin' Trash/Bin folder) but that's separate to the point of applications staying open even when all their windows have been closed.
The idea is that windows are documents, not the whole application. You can close all of the documents. Then you can start a new document, open an existing document, or quit the app.
Someone who has worked with Macs for a while will just close the app with Cmd-Q or use the menu. It seems weird to have to close the individual documents in order to close the app. Many of my apps are set up to reload the same documents when the app restarts, so quitting the app doesn't abandon the documents; it just files them away until the next time the app is opened.
There are non-document-oriented apps that only use one window, and when that window is closed, they are set up to quit.
You obviously spent a long time on Windows where the windows were the app and closing the window closed the app. Documents are secondary.
The UI only offers a big red X button. You have no idea what it does, until you click it. Maybe it will close a window. Maybe it will close the last window leaving the app running in the background with no UI. Maybe it will quit the app.
In KDE/Gnome/Windows you know what the X button does. It closes an instance of an app. If it happens to be the only instance of the app, it will close the app as well. You don't have to babysit the open apps.
For the very few exceptions where an app needs to keep running in the background, you will be notified that pressing X sends the app to the taskbar.
MacOS also has the menu bar area for apps that run in the background, but on top of that it keeps backgrounded apps in the dock. That made sense a decade ago with slow hard drives; now it is just a relic of the past. Similar to the C drive in Windows.
This isn't just a difference in UI, but also in the way the OS handles processes. Under macOS, the vast majority of programs host all windows under the same process. In other words, windows do not represent instances of apps, even under the hood. You can spawn additional instances of an app with the terminal, which will give each instance its own dock icon, menubar, and set of windows.
So shifting window closing behavior to function like that of Linux or Windows would actually require a much more fundamental change than it might seem.
The Apple HIG says there's a difference between applications and documents. An application might have a bunch of utility windows but those are hidden unless the application is in the foreground. So an application with no open documents running in the background isn't visually cluttering the screen. Application launch tends to be (and definitely used to be twenty years ago) pretty expensive in terms of resources. So leaving an application open without documents tends to make opening new documents faster.
The red button on windows on macOS is meant to convey the action is potentially destructive to the window's contents. Even if the "destruction" of a utility window just means it goes away. It's not tied to quitting the application, with exception of single-window applications like System Preferences, for the above launch cost reasoning.
This has all gotten muddy over the past twenty years. The App Resume/Restore feature in Lion (peak Forstall) is terrible and I disable it on every Mac I use for more than a minute. I can see it being helpful for some people but it breaks the way I expect the system to behave after using Macs for 30+ years.
As a user of Macs since 1984, I love that most apps now save ongoing changes and document state so that I can close most apps and restart them later with the same document reloaded and any changes automatically saved. Documents are closed when I am done with them.
And it never leaves the app running with no UI either, because the menu is part of the UI, and it's still running.
This is probably not obvious if you're not used to macOS, but that's a bad reason not to do things this way IMHO. It's completely obvious to me.
The red dot closes windows. All three coloured dots are window commands.
Not that I use them much, I use Moom for window sizing and Cmd-W to close windows and tab (Shift-Cmd-W to close a window full of tabs). But that's just me: the dots are there if you want to use them, and what they do is completely consistent.
You're not used to what they do, but that's not the same thing.
Some apps definitely do behave the 'closing the last window closes the app' way, although this is obviously bad from a consistency point of view.
> the menu is part of the UI
I would argue that that global menu is not part of the application's UI. True, it's application-specific, but it's very easy for a novice user to miss the context switch and just be left with the impression that the app is some kind of 'phantom'.
It's not really up for debate, the menu is part of an application's UI in macOS, that is, part of the Interface which the User uses to interact with the application. I assume you're right about a novice user, I haven't been one in many years.
That's why the name of the application is always directly to the right of the apple icon, that tells you which application is in focus, and all the menu items next to it are application-specific. The global part of the menu is just that apple icon, and the menu tray off further to the right.
Can you point me to an example of a macOS app which closes when the last window is closed? I literally can't think of one.
I'll let it go because it's not a very interesting debate and it's not the main point anyway. I haven't been a novice user for ~10 years, but I can still remember what it was like moving from Windows to MacOS — quite confusing!
> Can you point me to an example of a macOS app which closes when the last window is closed?
Most apps that close when their window closes are not document-oriented apps. For them, there is only one window and one function. The calculator app you mention is a good example: once you close that window, the app has no other purpose, so it is set up to auto-quit.
Document-oriented apps expect that, after you close the document, you might want to open another one or start a new document.
What you say makes logical sense, but I share some of the OP's frustration, and it was particularly confusing when I started using a Mac, having come from the Windows world. Heck, if we're being really logical, why isn't opening stuff the exact opposite of closing stuff? If that were true, opening a file would do nothing if its application weren't already running — and I don't think anyone wants that.
The specific use case, which I do fairly frequently, is to Cmd-W the last open window and then Cmd-N for a new document, or Cmd-O to open an existing document.
When I try that on Windows, the program closes, and I have to re-launch it. I daresay that's more annoying than discovering that a program hasn't quit when you think it should have.
Where do you type Cmd-N/Cmd-O "into" if you just closed the last open window? I am not aware of macOS continuing to give kbd focus to an application with no open windows, but maybe I just don't use macOS enough to realize that it works this way.
Well it depends on the program you are using but programs that follow a document-style layout usually allow you to close a document without quitting the program (in the File menu or Ctrl-W).
The big 'X' will always quit the program though as that's what the user would expect. Imagine Chrome/Firefox's big 'X' only closing the current tab (is that how it works on a Mac?).
Yeah, I see your point. It really was about initial expectations — now I'm used to it, it's fine, but it was incredibly confusing coming from the Windows world ("why's this application still open? i can't even 'see' it, what's it doing?")
An interesting sidebar: does the Mac handle swapping out applications better than Windows/Linux do? Both have a history of becoming unusable. I also suspect it's more common for more expensive Mac machines to have more than minimally adequate RAM, whereas many cheap machines are often sold with barely enough, especially historically.
Someone who cut their teeth on other platforms may regard the death of an application with its last window as an essential part of their machine staying usable.
I don't know if you ever owned one of those really old Windows CE devices, but in a certain era they had the delightful workflow that closing applications didn't kill them, so after a while you could have as many applications open as memory could hold, and you would need to go into a different menu entirely to close unused applications before you could open anything else. Like the worst of both worlds.
>MacOS specifically does terrible job at visually managing apps.
I'd say that Windows traditionally did a terrible job of allowing applications to have multiple documents open at once and visible on a large monitor.
The traditional Windows Multiple Document Interface UI was a nightmare if you wanted to have more than one document (especially documents from more than one app) visible at once on screen.
I do not follow. I have two Excel documents open. Each is presented to the user like an independent Excel app. So in my Windows taskbar there will be two Excel icons stacked. If I hover, I will get a quick preview of all of the Excel instances that are open.
If I close one document, the Excel instance that is running that window will close too. The other instance of Excel, with its document, will remain unaffected.
Have you not noticed all the comments complaining that Macs see closing a document and closing a program as two separate things, while Windows still does not?
Edit: It wasn't until Office 2007 that some Microsoft apps (but not others) started breaking the Win32 MDI UI.
I remember feeling fancy as a teenager when I implemented MDI for a Windows application. Likely an HTML editor. Probably in VB6, maybe VC++. I just did it because I could and now I wonder how many big corps also did that because they could.
I’ve been switching away from the web and back to native in a big way and it’s making my life much easier. The web sucks. It’s the worst part of modern computing by far. It’s like wading through treacle. Even this article took an age to load and it’s just text. Frankly, I’m amazed Medium even allowed me to view it. Usually I’ve exceeded my ration card.
The web sucks because why would I possibly want random people to write code to execute on my machine when I just want to view content?
The web is chock full of shit JavaScript; most of which is not adding value for me (ad placement, etc) and some of which is at odds with my interests (trackers, etc).
We have every single web site wanting to give me some sort of unique experience when all I want is to navigate to the information I want and maybe buy something.
I think that the web is insane. I want the web to deliver content and I want my client to present it to me in a way I control. Instead, every server wants to control my browser so they can control my experience and extract data from me.
Twitter, followed closely by YouTube, is the epitome of this disease. Displaying short paragraphs of text with images and video is something web browsers have been perfectly capable of doing for decades (perhaps a little later with the video, but still more than a decade ago), and so is posting such short pieces of text and uploading files containing images and video.
Yet these days, to do that somehow requires a disturbingly slow-loading and resource-consuming JS "webapp" that complains if you're not using one of the very few "supported" browsers or even just a slightly older version thereof, refuses to work with a "sorry, an error occurred" message frequently, and has a UI that looks like it was made for illiterate 5-year-olds? The average web forum works fine with absolutely NO JS, yet has a UI with a superior set of functionality and richness.
Surprisingly, Twitter and YouTube used to be far more user-friendly and less resource-consuming than they are today. Then... I'm not sure what happened. I guess it could be called "the modern web".
The sad part is that Twitter wasn't always like that. I remember old versions of Twitter being pretty snappy.
Facebook too... the latest redesign is an abomination, and as much as I like having a dark mode, the new UI is so slow and so laggy that it makes me not want to use Facebook anymore. But I've been using Facebook since 2005 when it was just a LAMP system that served server-rendered HTML, and it was amazing. I remember my peers talking about how Facebook was so much better than MySpace because it was so clean and snappy.
As much as I love and miss native interfaces -- old Win3.1 games that used actual windows for the UI, or newsgroup readers, or for that matter native email clients -- I gotta say, the web as an app platform means I can migrate off one platform and onto another and know my basic needs are met.
The web as an app-delivery platform is amazing. I don't miss documents that only open in WordPerfect 5.1, or having to UUCP files around, or being forced to write supposedly platform-independent software once for Windows, once for Mac, maybe once for Unix, and to hell with anything else.
To the point of TFA, the desktop as a concept is well past its expiration date. Apple ships amazing phones and tablets that are touch-based, and is migrating their touch-based UX to MacOS, but for some damn reason refuses to ship touchscreens, which is just bonkers. We're halfway between paradigms right now, so it's rough going sometimes, but I'm hopeful we'll get somewhere great.
> migrating their touch-based UX to MacOS, but for some damn reason refuses to ship touchscreens, which is just bonkers
Touch based computing doesn't scale to larger devices very well.
It works well on phones since our fingers are just doing small movements, but on larger devices, users get "gorilla arms" when using large touch screen devices while having to hold their arms in the air.
Gorilla arms are definitely a thing, and purely touch-based UIs on the vertical monitors we have now don't work very well and will never become popular.
However, I had a touchscreen XPS 13 a while back. And being able to tap the occasional button or slider was _super_ helpful. Certainly more helpful and less jarring than the touch bar my Macbook has.
And consider things like the Microsoft Surface Studio, a desktop PC with a touch display designed to angle at a position that would be more usable with a pen or touch controls for longer periods of time.
I think open standards address your issue just as effectively. You mentioned newsgroup readers and mail user agents, for example.
The web apps that I enjoy using are the ones that enable real-time collaboration. I think that's where the web app wins out over native. But the vast majority of things we do online are nothing to do with that.
I reckon I skip 80% of the links I engage with here because I'm immediately presented with a GDPR dialogue, or because the website won't load because it relies on some external domain that I've blocked.
As for iOS and macOS, I really don't think they're serious productivity platforms. I've migrated everything that matters to me to Linux, and when my iPhone finally dies, I'll move over to a Linux phone as well.
>The web sucks because why would I possibly want random people to write code to execute on my machine when I just want to view content?
Your OS sucks, because it doesn't make it safe for you to do so. The freedom to just run something without worry about side-effects is what we used to have back in the days of two floppy disk MS-DOS machines, and it was amazing. Your OS was safe because of the backups, and the write protect tab. Your data disk was all you risked, and you had copies of that as well. Your side effects were well known and contained.
We need to reclaim our general purpose computers. Capability Based Operating systems are how we do it.
>Your OS was safe because of the backups, and the write protect tab. Your data disk was all you risked, and you had copies of that as well. Your side effects were well known and contained.
What about an attacker reading secrets off your system? How does the OS protect that especially with Spectre around.
Native is now broken/deteriorating because of the web/online first obsession.
I moved house last year and was on a very poor internet connection, 0.5Mbps up, 1Mbps down at best, often unreliable.
This made using my computers (Macs) unbearable!
When I turned on the computer and it started up, apps that auto-start in the task bar would phone home to check for updates or whatever, and block until they timed out or got a response. The computer wasn't usable until all of them completed, due to the blocked UI.
Opening an application was the same. When opening an app, it would phone home to do whatever it does while blocking everything else until complete, before continuing with the startup process. The obsession with analytics in apps made in-app latency much higher; I suspect analytics were being sent from the main thread instead of best-effort from a background thread pulling off an event queue.
Then there's the obsession with packaging a SPA webapp in its own browser to run locally as a perceived native app, aka Electron. Take Teams and Slack: both clunky, unresponsive user interfaces and resource hogs.
Most everything works fine offline (think: airplane mode) - where it all goes to crap is when there is a connection but it’s slow, high latency, or losing packets.
In fact, sometimes it’s worth it to hit airplane mode if your connection is bad until after everything is running.
Exactly this. My personal hell recently has been the Audible app on iOS. I walk a lot and listen to audiobooks, and often on my walks my cellular connection is not great - one bar or so. The Audible app had a redesign a year or two ago where, instead of showing your library and letting you play whatever you were listening to, it now loads a home screen that tries to get you to buy more books, and you can't do anything else until that finishes loading. But since it fails to load on a crappy connection, I usually end up killing the app, enabling airplane mode, and then re-opening the app. So stupid.
For reference, my audiobooks are stored locally; a network connection is not required to continue playing. Yet somehow they have hindered the main functionality of the app by what I can only assume is sloppy coding.
I don't think the Web sucks, just what many have been building on it, certainly the most popular options. . .
It is up to us to build something better, and the Web offers a better framework than most other options:
Human-readable code, mostly text
Decentralized and independent
Great tooling for just about any platform
25+ years of public knowledge
I think we're only beginning to see a new epoch of Web tools which:
Can be run locally and remotely
Support just about any browser, certainly no-JS and Lynx
Work hard to help the user without demanding anything
Allow easy migration of content between instances and instances between hosts
Full honesty and transparency of internal works
I might be a bit biased, since I have been writing such a thing, but almost every day I see another project here on HN which has similar values and functionality.
I hope you’re right. Personally I think that epoch peaked with jQuery. I’ve been watching the culture of web development steadily descend from what you describe to whatever it is we have now. But software history is circular, so maybe those good times are coming again.
Yeah, I've been watching the same as you describe, and I think jQuery was the peak, but also the beginning of the descent into abstraction.
A couple of years ago, I realized that the art of web development is about to be lost, and I started to re-learn all those lost skills, such as direct DOM manipulation, feature-checking, backwards compatibility testing, etc.
I now have a reasonably functional website which works back to the earliest HTTP/1.1 browsers, like IE3 and NN2, classics like Windows Safari and Opera 12, and many many other lovable, once again usable browsers.
software history is like the rest of history. it won't repeat, but it will rhyme.
cf. the endless attempts at rediscovering the basic principles behind UI toolkits now taking place in web-dev land. Not identical to 1990-2005, but it rhymes like crazy.
Here are some shortcomings of Gemini which come to mind...
There are few choices in clients...
A lot of older hardware is left behind...
The TLS requirement means no human-readable protocol...
But the biggest one, in my opinion as an experienced IT person, is that it is a "clean start", meaning all the same problems which have already been solved in HTTP and NNTP worlds will present themselves again, one at a time, and require solution.
It is true, although it is not too difficult to write. (A new web browser could include Gemini (in addition to Gopher, HTTP(S), and possibly others).) However, even if you do not have other client software, the "openssl s_client" can be used, too (although having the specialized software works better, it doesn't mean that such specialized software is required). (It also ought to be added into curl I think, but they didn't do that.)
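For example, something along these lines ought to work (gemini.circumlunar.space is just an example host; Gemini listens on port 1965 and expects the full URL followed by CRLF):

    # fetch a Gemini page with nothing but openssl
    # (-servername sends SNI, which some servers require)
    printf 'gemini://gemini.circumlunar.space/\r\n' | \
        openssl s_client -quiet -servername gemini.circumlunar.space \
            -connect gemini.circumlunar.space:1965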
> The TLS requirement means no human-readable protocol...
I agree that this can be a problem. My suggestion is to add a non-TLS version using "insecure-gemini:" as the URI scheme name.
> ...all the same problems which have already been solved in HTTP and NNTP worlds will present themselves again...
Gemini can be a substitute for HTTP and Gopher (in some cases); it is not a substitute for NNTP, I think.
Yes, it is possible to do it well (the things you mention are helpful), even though it is often done badly.
One idea is to improve the design of the web browser (including the engine), since the current design makes many things difficult for the end user to customize.
No-JS is almost always suitable, although not quite always. When scripts are needed, you should include a <noscript> block, including the documentation, raw data, other protocols, etc as appropriate. (There are some web pages that do this, but it seems to be rare.)
If I'm running a hotel, for example, my valet has to know how to accommodate old cars and new cars, manual and automatic, electric and combustion, and possibly motorcycles and scooters and such...
If someone comes to the front desk, I have to be polite to them no matter what, whether they look dirty or clean, normal or weird, I can't tell them, your appearance is not good enough, or you don't speak the right language to be at my hotel.
I think the idea of telling the user their browser isn't good enough is the same level of rudeness, it's an act of laziness and arrogance to not even try...
And yes, there are so many different browsers and levels of convenience, and if you want the GUEST to have a good experience, you have to accommodate the shortcomings of their own vehicle, which they may have no choice over, or may even enjoy using and have an attachment to...
The thing I've been working on is mentioned in my profile; the live site is only a few days behind my local dev. Here's a short video I just made with you in mind :) 5igA8Zz56NU
I did not watch the video. I did look at what is in your profile, though. It is the kind of thing I might use NNTP for instead (although having a web interface can also be helpful for the users who prefer that). (Actually, I have a NNTP server myself, and have partial work on an optional web interface too, for the users who prefer that; it displays a link to the NNTP if scripts are not enabled, so it can work even on Lynx (which supports NNTP), or just in case the user prefers to use NNTP.) I do like that you have accesskey attributes; many web pages don't use them. I did see the git repository about it too, and I have read the documents there as well.
(If you have a computer with internet, and if it will serve a plain ASCII text connection, not even needing HTML or Telnet commands, nor VT-100 or anything like that, nor non-ASCII characters, then it can work as long as you have internet and a computer with a keyboard (using a telnet client, or just nc, or whatever); that is the minimal possibility. So, it is another possibility maybe; of course it is not the only possibility.)
> I’ve been switching away from the web and back to native in a big way and it’s making my life much easier.
if you are on mobile it doesn't matter. mobile native is in at least half the cases just as post-file cloud-ified as the web. this change is coming for native apps too. no one escapes.
blaming the web for this is shallow & sad to me. everyone is clamoring for adoption, for ease of use, for just works, and putting all the data in the cloud (and de-prioritizing user access) has been a hugely hugely successful trend (alas).
I have no idea what web you are on, but perhaps you should be using better sites that aren't waded through like treacle. personally this article loaded in under 3s on my 4 year old phone with no ad blockers (shame on me). load times vary a lot & it's easy to complain about, but there are plenty of bad apps, not efficiently coded, that could better make use of features too. few apps are lively enough, or have access to anything like the vast troves of data most sites contain, so it's also an absurd comparison.
I think you've got the wrong end of the stick. I'm not saying that I'm ditching the Twitter website for the native Twitter app. I'm saying that I'm ditching Twitter.
I'm also not blaming the web as a technology. I'm saying that websites are becoming unusable.
> I have no idea what web you are on, but perhaps you should be using better sites that aren't waded through like treacle.
We're both on Hacker News so I suspect we just have different standards.
> personally this article loaded in under 3s
We have different standards. Let me show you how Medium compares to a website that isn't a pile of crap.
Or if you can't drop Google Drive, use something like Cryptomator. I use it on top of my OneDrive and it works really well, both on all my desktops (Windows, Linux, Mac) and Android.
> The web sucks. It’s the worst part of modern computing by far.
It does and I understand you're speaking in the context of desktop. But what about mobile apps?
I cannot imagine computing without a web equivalent for mobile apps, which often collect unnecessary data that average users are powerless to do anything about.
Worst are the mobile-only applications; hence apps like Clubhouse can get away with using a mobile number as an identifier. Not that I miss using Clubhouse on the web, but it's an example of what no-web computing looks like.
I still belong to a WhatsApp group, but I’m plotting a course out of the social media empire.
We should still have web apps where they add value - I’m writing one at the moment - but I’ve reduced my reliance on Google Suite in particular and I’m more productive as a result.
I can learn things so quickly because of the web. When I was growing up, I couldn’t have dreamed that I could have access to so much without going to multiple libraries, and the thousands of video tutorials are just something new to humanity. The web doesn’t suck imho, although it does introduce some serious privacy issues
I was referring more to the web as a runtime for apps. YouTube is full of great content but the app on my TV is pretty rubbish. The website's not bad, in fairness.
I don’t disagree with you, but I think the web was far better for learning a decade ago than it is now. Now I have to wade through paywalls, clickbait farms, SEO scams, useless auto-generated articles, etc…
Some specific examples: When looking up resources on high school level mathematics, it appears that fully half of the content is paywalled. Often this paywalled content isn’t particularly valuable in the first place (eg, hiding a basic multiplication chart behind a paywall)
And a more technical example: when I was learning the Angular framework, damn near all the results on Google were SEO scams. It was content masquerading as tutorials, but when you reach the end (often after investing significant time), they try to extract money from you to read the rest of the guide.
It’s gotten to the point where I’d be willing to pay money to filter out all the garbage from the web.
> Everything that is outdated in computer desktop usability is captured by that dialog box.
No, that's just bad UX design, not a problem with the desktop metaphor.
> I have 37 windows open, with some 75+ tabs
> I have a Desktop scattered with (currently) 132 icons
> I have a Downloads folder with (currently) several hundred files, most of which I could safely delete, but who has the time to decide?
> Meanwhile, there’s all sorts of stuff that I can’t find
> Newer mobile OSs (e.g. iOS) get this right — having 500 tabs open in mobile Safari, which I always do
This person has the equivalent of an incredibly messy desk piled high with paperwork, but then complains about the desktop. It's not the desktop, it's the lack of any organization
> Maybe Windows is awesome now, I don’t know, my last real exposure was with XP when I worked at Microsoft.
There are awesome tools on Windows, like voidtools Everything to find files by name in a fraction of a second by using the NTFS journal, grepWin for searching inside documents and Link Shell Extension by Hermann Schinagl for managing symbolic links.
It's not that hard to get organized. The author should try it.
The idea was that the main element of a computer shouldn't be an application but a document... and a document is a composite of pieces like spreadsheet, video, etc.
At the time, the idea was to have an open standard to replace the Microsoft apps... and to be able to replace MS apps with any other, piece by piece. It didn't work out then but, in a way, we now have a kind of open standard to build composite document (even including applications inside): HTML (with javascript).
Maybe it could be a good time to give this idea a new look?
I see some analogues in Sandstorm.io, for the web: they’re self-hosted web apps, where instead of the app being the atomic unit, each document (“grain”) is separately managed by sandstorm, and there’s a separate instance of the app running for each document. It pulls things like security and access control up a level, and will have even more benefits as sandstorm grain management matures.
There are multiple desktop search solutions, both supported and discontinued. While nice in theory, the problem is that anything that is not text requires lots of metadata. The same can be said about tags, relational DBs that Microsoft dreamed about long ago, etc.
- if you don't have enough metadata or they are of poor quality, your query has to be more precise, which is not very different from organizing your files manually and remembering the structure.
- if you add/reformat metadata yourself or even just make sure it's consistent, you're probably micromanaging it too much, losing all the convenience.
Now, some sort of ML system that automagically expands your stuff with relevant metadata could possibly work for this. Maybe. Existing AI assistants are still too dumb to be useful in most cases.
Also, the entire UI shouldn't revolve around one genius paradigm that works best for everyone. There's no such paradigm. Manual file/folder organization is perfectly good for most cases. Fuzzy search works best for textual data like notes or documents, or to discover content you forgot you ever had. Tags work well for photos.
This, but in the other direction. The most liberating thing about computing for me was learning how to use a Unix shell effectively. Things that used to take tens of minutes of tedious pointing, clicking, and dragging now take about as long as it takes to type out a one-liner pipeline.
Nowadays, when given the choice, I'll gladly trade desktop pictionary for a direct conversation with the computer. I think we should be optimizing for that experience.
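For what it's worth, here's a minimal sketch of the kind of one-liner I mean; ImageMagick's convert is just one example of a tool you might pipe through, and the filenames are placeholders:

    # resize every .jpg in the current directory to 50% and write prefixed copies
    for f in *.jpg; do convert "$f" -resize 50% "small_$f"; done

Doing the same thing by hand in a file manager and image editor would mean opening and re-saving each picture one at a time.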
It may be worth pointing out that the original desktop metaphor is about extending available space. It's about stacking, overlapping windows as a third way, right through the demarcation line of the dilemma of switching versus tiling, to provide multiple views on a single screen. (The "tiling desktop" is actually something of an oxymoron.)
Further, the desktop as envisioned at PARC is explicitly not skeuomorphic. As Alan Kay put it, "it's magic", meaning it's a magic environment adhering to its own concepts and genuine metaphors. That these may sometimes pick from everyday experiences is rather a matter of learning and intuition, which allows the user to construct a model space of the computer and its internal, hidden concepts. In this sense, folders and documents are really more an allusion than a metaphor, more an invocation of related, familiar concepts hinting at the internal metaphors of the computer than the metaphor itself. To the workings of the desktop metaphor, folders are accidental, not essential. Icons, on the other hand, are in themselves crucial metaphors of the WIMP desktop, regardless of whether they may be hinting at real-life objects or not. Similarity to real-life objects or workflows is not what's meant by the term "desktop metaphor".
Spatial navigation, however, is yet another concept, historically imported from Dataland (MIT, 1979) [1]. E.g., the Apple Lisa was to have an abstract, tagging filing mechanism (the so-called "Twenty Questions Filer") and the spatial Finder was added last-minute [2]. Its limitations were noticed early on, even in development, especially regarding the integration of external file-spaces, but these were mostly ignored at a time when "external" meant floppies or maybe a 10MB HD.
First, the notion that we are still in the age of "search" superseding directories (Google over Yahoo) is wrong; Google, Instagram, Facebook, etc. are now curators. Gone are the days when searching for something brought up whatever else people were searching for. Now it's curated, both for generally good reasons ("Asian" doesn't turn up porn featuring Asian women like it did in 2008) and bad ones (ads suggested up front, and specific services pushed under Google).
Second their suggested paradigm feels like a hotbed for abuse of users' privacy. Given the author also wrote an article titled "Sorry, Big Brother Is Here to Stay" I'm not sure they are sensitive to that concern perhaps. One of the values of the more ephemeral bits of information is that it's ephemeral, and god we know google et al would kill to have a slice of that data. That said, back in 2009 or w/e I literally had a text file full of shit like copypasta you'd see spammed on 4chan, a meme folder, etc. It's not like people aren't capable of saving the ephemeral bits, it's just a conscious effort (which is what I think it should be) over one that is done automatically.
That said, in conclusion, this reads a lot like trying to fix something that isn't broken. If anything, the fact that too much is online and hard to actually save to your computer is a problem, but I don't think the "desktop" metaphor is the problem here... maybe they made the connection in the article, but I didn't come away convinced their suggestion ameliorates anything.
Plain files stored in a tree (hierarchy) is often underrated but is simple, elegant and efficient.
Storing documents in web apps is, in my opinion, inferior in many ways because this is mostly opaque and thus bad at interoperability.
There's not much financial incentive to do so, but everything should be local-first, using open formats and protocols; the cloud should only be used as dumb backup.
This article has made me realize that perhaps trends in computing are not headed in any particular direction. Perhaps they are headed in every direction.
The author's description of his computing life couldn't sound more alien to me. I put thought into my file organization, and prefer files to "fragments." I don't have anything but a couple of my most oft-used shortcuts on my desktop. I only keep files in my downloads folder if I suspect I'll be needing archives or installers again. I don't use webapps. Any of them. I don't allow javascript to run in my browser unless a site I trust needs it for something that makes sense.
The desktop suits me just fine and the browser is becoming less and less relevant to me as it trends toward things I find uninteresting and not useful.
I've thought about this a fair amount and have a working app/OS that shares a lot of underlying ideas with this article. Here are my additive thoughts:
I'm building a UI from first principles. Ostensibly it started in 2012 as an itch to scratch, because I found that no note-taking software met my needs, the way people used email sucked, Jira et al. sucked, and I couldn't wrangle non-nerds into interoperating with me on Emacs.
Instead of the computer as desktop or some other abstraction I started with an interface predicated on the idea that reality itself has 3 first class citizens: Time, Space/Structure & People/Minds.
As an organizing principle applications are just metadata on data structures (_App_tributes on a node if you will) in the same way a function is a file in a directory or hosted on a cloud service. "Data first" happens when you get rid of the "container"/desktop metaphor.
First Class:
Nodes, People, Time
Second Class Enablers:
Namespaces, Fragments, Timestamped Messaging, Specialized sub-interfaces
The reason projects like the Chromebook try to hide or delete structure is that app-based interfaces are more conducive to advertising, and because people use apps as a visual reminder of "functionality". A person or org must have complete "write" control of their data if they are using a first-class data/structure interface (MS Word can't have in-doc advertising); apps are a weird abstraction that makes it easier to sneak "ads" into your workflow.
I am nearing 80% of my time in this interface; the plan is to have a consumer-friendly note-taking/sharing app (the best damn cross-platform note-taking app) that becomes the core UI experience, replacing the existing OS interface in future. As an aside, I muse that the way computing evolved from TTY interfaces created a strong adherence to single-line CLIs, and software engineers never really overcame that; that's one of the core oversights of human interfaces in computing.
I also find this interesting. Should the data structure really be the primary entity, though? One can imagine the same data being organized in different data structures with different performance characteristics, functionality, etc. What if an App node wants to alter the data structure? Any other App node that depends on the data from a different node will then have to be rewritten.
What one wants is the ability to query the data of another program and send external commands to that program to alter the data. The data structure itself is somewhat irrelevant, don't you think?
> [Referring to Yahoo!] Within only a few years, this system was hopeless. The web exploded exponentially, in terms of “sites” but also it evolved ever-shifting ways of presenting and producing bits and pieces of content, such that the directory format of top-level sites was not even a great way of cataloging the world, let alone finding your way in it.
While this is true, personal directories are still tremendously useful - and could definitely threaten search again - now that search has lost its way (SEO, sponsored items, etc.)
Curated personal directories can be really powerful in most niches. Many subreddits have these as well. It’s just the monolithic ones that have lost steam.
This article appears to me to miss a key point: the "Desktop" is not your computer. The "Desktop" is just another app, one that only gets used very rarely.
Technically, my computer has a "Desktop", but I never use it. I launch apps from a "Start" type menu (I run Linux, not Windows, but the Linux environment I use has the same basic setup) or from icons on my task bar at the bottom of the screen. I access documents and files using a file manager app. I surf the web using my browser. I write and test code using a text editor and terminal window. The only time I even look at the "Desktop" on my computer is when I've just plugged in a USB stick and I want to open it up.
In short, the basic premise of the article, that we all still think of our computer like it's the top of an actual desk with papers and files and folders on it, is false for me, and I suspect it's false for most computer users today.
Some good ideas in there, but it suffers a bit from not taking a universal perspective on computation.
The flexibility of computers means you don't have to try to come up with one thing that works for everyone. Something different can always be implemented for those who want that something different.
The article would be stronger if, in addition to prescribing some macro ideas, it grounded them in a first attempt at implementation. For this new type of non-desktop computer focused on tracking fragments in an air traffic controller model ... where is the code?
I think we are past the old desktop metaphor - but for different reasons.
Once upon a time there was a "manager" whose job was primarily to communicate via memo with his peers and superiors, and the cycle time for data to pass from his (yes, his) employees to him, then be processed by him on his desktop and sent out to his peers, etc., was at least a day, usually a week.
So there was plenty of time for him to arrange things in a single "document" called a spreadsheet, and maybe update a memo on Wordstar and send that via the typing pool etc etc
But the cycle time is now down to maybe hours if not immediate - and if the company is doing its job right in automation terms there is no need for manager to send out his documents - the data is in several warehouses already.
The desktop metaphor is as dead as middle management.
Want a new metaphor? Look at Jupyter Notebooks - a layer on top of the existing data - kind of like middle managers were a layer on top of their employees.
> ... just close stuff after awhile if the user doesn’t even look at it. Don’t delete it, file it away with rich metadata so it can be found if necessary.
Is it just me, or does it sound like "Just do the thing I will want later, and no, I don't have time to tell you, you should just figure it out!"?
Imagine a program randomly closing stuff and saying "Oh you weren't using me for five hours so I thought you didn't need it any longer?"
My read of that was "preserve a useful reserve of resources (most especially RAM) without my having to consciously manage it, but guard and respect my state at all costs."
Virtually all current compute contexts violate at least one if not both these principles. Memory management is left to the user (effecting such powerful and expressive mechanisms as the Spinning Beach Ball of Doom), whilst at any moment the operating system and/or hardware may elect to preempt any user-designated tasks or workflows with its own (system updates, virus scans, etc., etc.)
Android, I've recently learned, at least makes a stab at the first. It apparently utilises AI to allocate system resources. In my experience, it does so by shitting out any user state without recourse.
This is not an especially encouraging direction of progress.
I guess this article is only about Macs, or something. I found I didn’t know what he was talking about. Why do you have to “clear your desktop” to make a video call?
No kidding. I don't see why you'd need to clear your desktop unless you're screen sharing and have things open you really don't want the other person seeing. If you're trying to find something on the Windows desktop, Win+D once to minimize everything, then a second time to bring all your windows back.
> 88 untitled preview images, 37 windows open, with some 75+ tabs.
The author of this article doesn't have a problem that "rethinking the computer desktop as a concept" will help with - they have an organisational/concentration problem. With 75 tabs or 88 preview images the user can't possibly see the wood for the trees - or the problem he's working on from other clutter.
I have 20 or so projects on the go but have 6 or so apps and rarely more than 4 or 5 browser tabs open for longer than a few minutes.
A decent note-taking/research app, image management tool (Pinterest?), and todo list app would be a great start. Or, more likely, a self-management technique like GTD (Getting Things Done).
I personally like and appreciate how messy and chaotic the desktop computing experience is. My computer is super disorganized; practically every file is some variation of ooisajfisajfasjfdaosijfdaoisdjfaoisfdjasdfoijsafd.jpg. But when I need to go find something, it's fun, because I get to shuffle through all my files and see everything, and that experience is a perpetual fun reminder of all the stuff I've collected on my computer. If it were easier to find things instantly every time, I'd never go looking at all the stuff scattered around everywhere, like someone who has a bunch of old pictures in their attic and never sees them again.
A well-organised archive with document adjacency also affords this capability.
As examples, you'll find library patrons waxing eloquent over the joys of card-catalogue search and shelf-browsing. (Clifford Stoll being an example of the former.)
Digital browsing is possible, though of course the dynamics differ.
I have been thinking about some user-configurable UI systems like Smalltalk, Plan9, Emacs, Factor language and so on.
I think we need a name for this; it's like an operating system but "backwards". The operating system is a common API/abstraction for programs/applications to access hardware devices and control access to resources. But this is different: these UI systems are a common abstraction for humans interacting with different programs/applications. So it is the human (not a device) who is being abstracted by them, and applications written to that abstraction can be run under the UI.
> user-configurable UI systems like Smalltalk, Plan9, Emacs, Factor language and so on
The way I think about them, these are not "UI systems". These are operating systems.
That's how I see my Emacs. It's an operating system I work in, running as a guest in whatever traditional OS I happen to have on the machine. It provides a common API/abstraction to access resources and interface with the user[0]. It coordinates applications running on top of it.
The magic of Emacs comes from a slightly different approach: it's more focused, less restrictive, and deeply introspectable. By focused I mean, the core paradigm is that applications communicate with users through means of "buffers" containing rich text. Less restrictive - everything can talk to everything else, including poking at its internal state if it so desires. Because Emacs embraces this level of deep interoperability, it provides tools and culture[1] that help manage what would be otherwise ridiculously large interaction surface. And almost everything is thoroughly documented, you can quickly discover what code is being run to make something happen, jump to the definition of anything (including Emacs' C core, if you built from source), and potentially modify it and have the changes applied without restarting.
Being able to easily modify the UI you work with is a nice side effect of these features, but it's nowhere near the whole reason they're there.
--
[0] - More of the latter than of the former by default, but you can expand the API surface to cover more of the responsibilities of a typical OS if you need to.
[1] - For example, defadvice is a tool you can use to extend, modify and suppress any individual function in any Elisp executing in Emacs. It allows users to do things that would otherwise be bug-prone hacks, but it's still too much power for most use cases. This is where the culture comes in - both Emacs and third-party packages typically expose great many hooks around various activities, in anticipation of things people may want to override or extend. In practice, these hooks cover 90% of the things anyone may ever want, so defadvice is a rare sight.
I am making the case that we should not call these environments "operating systems"; they are more like "shells". The integration of applications (a common API) in an OS happens towards the hardware, while here it happens towards the user. So it's different from an OS, and ideally you should be able to mix and match the environment and the OS as you wish (as you do; you said you run on a "guest" OS). Then we wouldn't care so much whether the applications are on the desktop or in the cloud.
> I have a Desktop scattered with (currently) 132 icons.
I don't have anything on my desktop because I use Emacs. Terminals predate the desktop and they behave more like fractals than the happy little two-dimensional workspaces we created for our office workers. If the author is unhappy with the desktop then a good place to start with reinventing it would be to read the vision written by Vannevar Bush that inspired it.
I agree with most of these complaints, but only if you're speaking purely about MacOS. Everything is fine on Windows and Lubuntu (I use all three more or less daily.) Only MacOS treats my every instruction with either helplessness or malicious compliance. I think the author should try a different desktop before they give up on desktop computing entirely.
While I agree that automatically saving open work into temporary storage and restoring it on the next launch would be incredibly convenient, this article is still weird to me; it's like another world. I don't use a computer in that way at all. I have one browser window with 3 tabs. I'm fine with the folder/files metaphor. I don't use iMessage/Slack/Twitter/Facebook/Instagram. I don't paste memes between windows (and I mute people who spam me with memes over Whatsapp). I definitely don't create fragments; I write source code, which could be qualified as documents. My Desktop does not have a single file (because Gnome does not support the notion of a Desktop, lol), and my Downloads folder is almost empty because I clean it periodically, which does not take much time. And so on.
And I definitely don't want to rethink my computer desktop. It's fine. What I want is to polish it. Every desktop I tried is unpolished.
We did rethink the desktop, about twice on a large scale.
MacOS tried a scorched-earth approach to supporting hardware, which allowed a pretty impressive software stack to be maintained with (relative) stability. The issue is that MacOS takes countless shortcuts to reach that final level of presentation. APFS is a mess compared to its contemporary filesystems, and the entire underlying ethos behind getting an app to work on MacOS depends on how well you're willing to work with Apple and integrate into that central stack. Their idea of a desktop is one where the first party is in control, and they provision you permissions where they see fit.
Windows has to accommodate a much larger pool of hardware, but also has the advantage of market dominance. Everyone develops for Windows because it powers more than 70% of consumer PCs. It's a no-brainer if you want the biggest audience possible. Microsoft also breaks from Apple in providing a much more robust compatibility layer for legacy software. Its reliance on antiquated internals also helps keep newer technologies from entering the desktop. The Windows shell is pared-back and lacking, even compared to most Linux desktop environments. Their idea of a desktop is one where the third parties are in control, and you provision them permission where you see fit.
I've seen people be productive with both, and I certainly can't knock them for being the predominant platforms, but I eventually just got fed up with fighting my computer to do basic tasks and switched to Linux. Everything is a file here, that's canon. Microsoft won't try to sell you more OneDrive storage, and Apple won't second guess your authority here either, since the user is sacred. It's that kind of dedication to simplicity that tips the scales in Linux's favor for me, and I can't imagine I'll be going back to Windows or MacOS until either of them show a similar dedication to empowering the user with simple tools.
While desktop might need a rethink, the motivation here is... hard to follow.
Just don't share your desktop? Share a single window?
But I'm even more flabbergasted by the idea that mobile, of all places, is a good place to mimic. Have you tried finding documents on your phone? Go on. Give it a spin. It's a disaster area worse than desktop.
It's a good learning opportunity, though - it makes clear that if you really want to fix things, the very first thing you get to tackle is interoperability - because that central air traffic controller you want? That needs access to all the files. That's already tough - commercial interests counteract it - but it also sits at the heart of privacy. If we have excellent interop, we have excellent interop with any and all actors. That includes ones you might not want to have access, unless we succeed at building a good and understandable security model.
Personally, after all these years, I think the desktop metaphor is still ingenious. When I watch kids using it, and touch screen phones and tablets, it's pretty amazing how fast they get it.
I didn't read the article, but from what others have said here, it didn't offer a better one.
This is like treating symptoms, not the cause of the disease.
The actual issue is that a sheet of paper never changes unless you do something to it. And the way our cognitive system perceives the ever-changing screen is very different from how it sees the paper.
The real screen is the one you see when your eyes are closed (or when you switch your attention to 'there' without closing your eyes).
Plus, you can't use your hands directly to work with things on the screen. With handwriting each letter has individual motion pattern. Now this is lost. Tactile feedback is lost too.
This imbalance affects your ability to learn and evolve very badly. That's the real issue.
Also involution is known to happen very quickly. Losing abilities takes only two (!) generations to persist.
On a side note, I will never understand this trend of having browsers with huge numbers of tabs open. I very rarely go above 10, maybe 15 or 20 when actively learning something new and doing tons of cross-referencing, etc. But this 50+ across multiple windows and crap? Just close it, or use the reading list. You're never coming back to all that crap.
The fact that a common operational mode is unfathomable to you does not make it useless to others.
The topic comes up regularly in HN posts and comments.
I've taken my own stab at it: tabs are a preferable, readily-utilised, if cumbersome, form of state management that's otherwise lacking from web browsers.
>"I have a Downloads folder with (currently) several hundred files, most of which I could safely delete, but who has the time to decide ..."
If you keep your house messy this is what you get. All my stuff is in a single place nicely organized, sorted and backed up. I do not need to "rethink" and I do not depend on cloudy stuff (being connected which I am is a totally different subject).
Sure, I do not mind if someone clever enough comes up with a concept that improves ergonomics, but other than that, please save me from "innovations" that complicate my life and cost me extra dosh.
Search on the desktop has been tried (and is being tried) today: Unity (of Ubuntu) and Gnome Shell (of Gnome 3) are mostly search-focused interfaces (interestingly, Unity worked much better for me at finding the right stuff, and it was based on Xapian vs the Gnome provided tracker based on Lucene IIRC).
On Ubuntu (recent versions have Gnome Shell, older versions had Unity), press the "Windows" key, and type away...
There's certainly a lot to complain about in both of those implementations (esp speed), but to get a feel for what such UX would look like, try them out.
There are numerous filesystem indexing tools. Within the past decade they've become useful on most systems (Linux, Mac, presumably Windows though I've not touched that in well over a decade).
Ripgrep will compile a list of all files in your home directory in seconds. Combine it with dmenu or fzf and you’ve got yourself a pretty handy search tool
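As one possible sketch of that combo (assuming rg, fzf, and xdg-open are installed; on a Mac, open would stand in for xdg-open):

    # list every file under $HOME with ripgrep, pick one interactively,
    # and open the selection with the default handler
    rg --files ~ | fzf | xargs -r xdg-open

Swapping fzf for dmenu gives the same effect as a floating launcher instead of a terminal picker.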
In the 90s we actually thought human multitasking was possible. This is the desktop environment's fatal flaw. There are pop-up dialogs, control bar icons with bits of noisy info, widgets and sidebars with extra info. And many OS X apps don't honor system notification preferences.
Given a choice between writing on an iPad w/ keyboard or my Mac laptop, I always choose the iPad. It’s a UI that forces a notification paradigm on every app (by virtue of an App Store). It has full screen by default. Ironic that the “phone OS” is better built to respect my attention.
Multi-tasking isn't the only argument for a windowed or tiled layout. There are many tasks which involve working with information from (or to) multiple sources.
If I'm writing, I may be referencing material, or excerpting what I'm writing into another document or communication, or reading a source and a manual at the same time.
A long-ago development environment I used had three principal windows. One was the program source code, another was the generated report output, and the third was a set of runtime messages --- notes, warnings, and errors --- from the run. Roughly equivalent to stdin, stdout, and stderr, in a Unix context (and presented as same on Unix).
Mind that this is not the same as having chat, email, stock and weather tickers, etc., all visible at all times, a mode I find exceptionally annoying and utterly hostile to focus and concentration. That's not the only possible usage, though.
(Mobile devices' presumption that owner's attention is interruptable at any time is its own crime against humanity.)
It truly is a much less noisy environment. A lot of people malign full screen apps on the Mac as useless, but I find it to be a fantastic way to get rid of all distractions. I just wish that OS X had the “Slideover” feature from iPadOS, so I could quickly reference another app while in the full screen environment.
As I read this article I’m reminded of the failed WinFS project that was set to launch with Longhorn (which of course became Vista). It was meant to replace the classic notion of working with named files with a seamless relational database of metadata. It was supposed to be a signature feature of Longhorn but it was likely way too ambitious for its time and was eventually cut.
That's not the only metaphor that doesn't work any more. There are others as well:
- files and folders (the article speaks about them). That's so ...50s. We need 'piece of information' to be the basic building block, and abstract away the storage details.
- processes (as in programs that run on a computer). Again, a 50s concept. Today we need ...functions. Running programs should be reusable functions put together by anyone, changeable by anyone (within the security context provided by the origin), debuggable by anyone.
- network. No one cares about the why and how of networks. We just want to get stuff done.
- environment variables, the console, the environment etc. Another 70s technology that has survived and nowadays creates as many issues as it solves.
- program installations. We should simply press a button and applications would appear running on our machines. How could this be achieved? Lazy downloading, local caching, on-the-fly updating when a resource changes, etc. We already do this with web applications, but the next step shall be to do it without referring to the 'web'.
This doesn't mean the technical details of the above shouldn't exist. They obviously should exist. But not for 'mainstream' use of a computer. This kind of detail should only be available to technicians that troubleshoot technical issues.
Well, "we" might have already abandoned the metaphor. On my Chromebook there is a filesystem but I can ignore it, and the filesystem lacks a thing called "desktop". There's no way to litter the root window with files, like there is on macOS or Windows or Ubuntu. Ephemera such as screenshots and downloads go in a little stack in the corner of the screen and eventually disappear, unless I pin them. Access to files is generally by search instead of folder traversal.
I use Linux, and my computer doesn't draw files (or anything else other than the default stipple) on the root window either; I have no desktop environment installed, because I do not use it and do not like it. (I don't use the Desktop, Downloads, Documents, etc directories, so I have deleted them. I can add directories with shorter names with whatever categorizations I need, and those aren't them.)
Files can be accessed by typing in their full path (with tab completion in some cases). (Some programs insist on using a GUI to select/list files rather than letting you just type them in, and I hate this.) For searching files by name, ordering, etc., there are such commands as ls, and other stuff that can be used.
For downloads, I will always type in what I want to save it as and in what directory; usually I will use curl to download and redirect the output to a file, though sometimes piping to another program is useful.
For screenshots, again I can use command-line programs: one program takes the screenshot, its output is piped to another that encodes it as PNG, and that output is redirected to a file to save it to disk.
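A rough sketch of that style of workflow, assuming an X11 session with xwd and ImageMagick available; the URL and paths are only placeholders:

    # download straight to a chosen name and directory
    curl -o ~/docs/report.pdf https://example.com/report.pdf

    # dump the root window, pipe it to an encoder, redirect the PNG to a file
    xwd -root | convert xwd:- png:- > ~/shots/screen.png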
>"You have 88 untitled Preview documents": This is the dialog that I see every time I want to close the Preview app to clear my desktop of all that clutter in a hurry for a video call.
Why do you have "88 untitled Preview documents" open?
What exactly do you use Preview (an existing-image/pdf viewer) for?
>Everything that is outdated in computer desktop usability is captured by that dialog box.
So, for normal users without dozens of untitled new documents open, computer desktop usability is 100% fine?
> Ever since the dawn of the Graphical User Interface operating system on mass-market computers in the mid-1980s, they have been designed primarily around what was then called the “desktop metaphor,” and is more usefully described as a document-centric system
I’m not a big fan of the Desktop as a file storage location. My experience on the Mac is that it just becomes cluttered with screenshots and old documents unless you babysit it.
My solution has been to have screenshots saved in a ~/Screenshots directory. You can change the save location of screenshots with an easily Googlable command. I have a shortcut to this folder in my dock.
I also have a separate folder ~/Temporary, which also has a shortcut in the dock. This is where I put any documents that I need to quickly reference only a handful of times. I have it set up to automatically send any documents that haven’t been opened in 30 days to the Trash, where you’ll have one final opportunity to recover the files before they vanish forever.
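For anyone curious, the screenshot relocation is a single well-known command, and the 30-day sweep can be roughly approximated with find, though the poster may well be using Hazel or Automator instead (note that mv into ~/.Trash skips Finder's "Put Back" metadata):

    # save screenshots to ~/Screenshots instead of the Desktop
    defaults write com.apple.screencapture location ~/Screenshots
    killall SystemUIServer

    # rough 30-day rule: move files not read in 30 days into the Trash
    find ~/Temporary -type f -atime +30 -exec mv {} ~/.Trash/ \;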
This strategy has essentially obviated the Desktop as a file storage location, and just generally allows for more organized computing. No more ugly desktops with 50 icons scattered around.
(I know this isn’t 100% related to the article, but it’s something potentially helpful that came to mind while reading it)
You're taking the desktop metaphor too literally. It wasn't primarily about storing everything on the 'Desktop'; it was that the UI represented the capabilities of the computer in terms of everyday terminology and things people were used to seeing on their desks / in their offices. The 'Desktop' location was just a convenient default location for files, but is only a small piece of what the desktop metaphor was about (i.e. you could have a collection of files and folders that you were actively working with on your desktop; the intent was that at some point they would be put away... well, we all see how that went). The desktop metaphor included things like cut and paste to/from a clipboard (back when people actually used to cut and paste pieces of paper, this made a lot more sense), files being stored in folders (they were previously more commonly referred to as directories in the text UI world), putting files you wanted to get rid of in the trash, etc.
I absolutely agree that we need to move away from the idea of physical folders. The file hierarchy IS a very useful way of categorizing your data, but a file should be able to belong to multiple categories. That's why we have hard links, but I digress.
I wrote an emacs package called SFS (search file system) to solve exactly this set of problems. It is just an interface on top of the excellent Recoll full-text indexer. Among other things, SFS allows you to create hierarchies of queries, the search analog of a folder hierarchy. A parent "directory" is just the logical OR of all of its contained named queries. Everything works great for me, but the installation process is a bit painful (especially cross-platform), and indexing can take a serious amount of time and space initially, so I would say it's still a ways from being really comfy to use. But it is totally possible to have your own little Google for your file system. I have gifs on the GitHub page to at least give you an idea:
Hard links prove quite fragile (many tools will remove the link then replace it with a new unlinked file) and occasionally dangerous.
Tagging may work better, though you'd need a tags-aware toolchain to work with them if the metadata are associated with the filesystem. An external tagstore might be resilient but could find itself out-of-sync with filesystem state.
Hmm, do you have some example of these dangerous scenarios? I suspect a lot of tools have come to think of the "file path" as the identifier for a file. So of course they would break if that classification were to be reorganized, like if two parent categories were to swap. I would call that an abuse of the file system though. But the status quo is what it is. Most file systems already have lots of other metadata built in that can be used to access the inode or whatever you call your data structure. My point is, accessing data in a more general case is a search operation.
As far as an external tag store, that is basically what a search index is. And Recoll is full-text, so each file has a shit-ton of tags associated with it. You then just pass the -m flag to the indexer, and it monitors for file modifications, and updates the index accordingly. I have not noticed a significant performance impact there. Mostly just the initial index operation sucks.
Some I know of, some I'm presuming, and there are all but certainly others.
Hardlinked directories create all kinds of mischief. That's the principal issue. It's often entirely disabled. Recursive directory trees are all kinds of fun. (More so than even the symlinked version.)
Given a hardlink exists, a tool which operates by 1) removing the file (deletes the local directory entry to the hardlink inode), 2) creates a new file (same name, new inode, not hardlinked), and then 3) populates that with new content, creates the issue of a presumed identical hardlink existing where that's not the case.
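A quick shell demonstration of that failure mode (the filenames are made up); the inode numbers diverge after the remove-and-recreate step:

    echo "original" > a.txt
    ln a.txt b.txt             # hard link: both names point at the same inode
    ls -i a.txt b.txt          # same inode number twice

    rm a.txt                   # a "replace on save" tool unlinks the old name...
    echo "new content" > a.txt # ...and creates a fresh file under the same name

    ls -i a.txt b.txt          # inodes now differ; b.txt still holds the old content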
Hardlinks with relative directory references will reference different files, or configurations, or executables, or devices, from different points on the filesystem.
... or within different filesystem chroots.
Hardlinks might be used to break out of a chroot or similar jail. A process which could change the hardlink could affect other processes outside the jail.
As for tags: These are ... generally ... not the same as what most people have in mind as a full-text index, or are at the very least a special class of index. I'm thinking of a controlled vocabulary, generally instantiated as RDF triples, though folksonomies and casual tagging systems are also often used.
The problem occurs when you've got a tagged data store that's being modified by non-tag-aware tools. There are reasons why that might be permitted and/or necessary, though also problematic. My sense is that robust tagging probably needs implementing at the filesystem level.
Yeah, not a fan of recursive directory trees. Sysfs, for example, is pretty wonky, especially when you're searching for some specific attribute of a device. Not hard links or real files, for that matter, but the same idea. Hard-linked directories break the category system.
Now, the same name for a different inode in two directories is a point well taken, but I would argue that the name does not fully describe the inode; that name is just one component of the metadata for that file. People are just so unaware of all that other metadata because the interface rarely shows it to them. So many people have taken to packing all that data into the filename: version numbers, code names - it's one way to achieve portability, I guess, but what an ugly compromise! And with all the virtual environments now for Pythons and such, it's quite easy to find yourself using the wrong version of something if you don't really know what you're doing and just look at the filename.
Hard linking links all that metadata, which of course does include that unique ID that open returns, so I think it's okay. I would just like to see our file interfaces more adapted to showing all that important metadata in a comfier way
The same name / different inodes problem isn't a filesystem issue, it's a tools issue. Specifically, the fact that tools which modify files (editors, shells, archival utilities, scripting languages, any random executable) only see the local filehandle, not the fact that it's "supposed" to be a single chained copy across multiple directories.
There might be some way to muck around with that using attributes (at least in theory, I don't know of any that do this now), but presently, the only way to accomplish this is through workflow and integrity-checking systems (e.g., that "filename" at any of numerous specified points should be identical to and/or a hardlink of a specified canonical source).
Oh, and one more: since hardlinks apply only to a single filesystem, any cross-filesystem references are impossible.
I think you also end up with issues in almost all cases of networked filesystems: NFS, SSHFS, etc.
It is mostly a tools problem. It should be way easier than it is to see the metadata for a given file in your shell command, browser or whatever. Dired for example has a pretty darn good visual model for this, that could really be taken much further I think. The reason we don't see more metadata like extended attributes and such, is that they are still not standardized across different file systems. So we get left with the lowest common denominator. But a reasonably designed system could just show it if it's there.
I've just always thought a tree is a very elegant way to represent categorical data. Now that I think of it, placing files in the tree is a way to preindex a search for all objects in a given category, basically the ls command. It really affects how we reason about our data. Huh.
Soft links honestly seem like a hack to me, to get around our shitty distributed file system model. And then, because oh no, what if my file is on another server, I guess everyone should just use soft links for absolutely everything. Like, why not just concatenate the host string to the file ID, and have the OS figure out how to handle it? Sorta like tramp.
This article reminds me of some of the thoughts from the "As We May Think" paper by Vannevar Bush back in the mid-1940s[0]...not as an insult, just that in our modern times, it seems we have some new but still some old challenges around ready access to actionable/useful info. I am a fan of the fragments idea. I constantly have a "scratch" note file open where I save all manner of text for me to re-use/refer to later on. Sort of something between a clipboard and an actual document...of course this falls far short of something really useful; plus I can't copy/paste an image into the file. (And, no, I'm not going to use Word or some file like that because it doesn't work with the rest of my workflow.) Anyway, I sympathize with this article.
> What does matter is that if you were to fire up an old Macintosh from the 1980s (or even 1990s) today you’d find the interface to be surprisingly familiar but you’d be stuck asking: OK but does it… do anything?
Followed by:
> Twitter, Instagram [...] iMessages and Slack [...]
URLs and meme gifs [...] Photos from the phone [...] Video clips of toddler nieces
Among all those 8 things of one kind, only 2 things of another kind:
> a PDF that we download to print out or fill in and email. Data that we copy from one spreadsheet to another.
Never mind the old 80s/90s UI: Does he ever "… do anything?"
And finally:
> The whole thing with the piles and piles of windows open all the time has never been a frictionless experience. But now sharing your screen for a meeting is commonplace. Everyone gets to look at your messy desktop. Or you have to tidy it up real quick before the meeting.
Which harks back to, from the beginning:
> How many browser windows do you have open right now? (I have 37 windows open, with some 75+ tabs.) Email, calendar, cloud document collaboration, Twitter, Instagram — are mostly or all in the browser. What else? iMessages and Slack are apps that require the internet to do anything at all. None of this stuff existed in 1985.
And almost none of it seems to have anything to do with getting any actual work done… But, anyway, to tie this back to that last paragraph: A measly 75 tabs? YTF do you have them spread over a million windows? That's one, or at most two or three windows' worth of tabs. Just get into the habit of opening something innocuous -- your work calendar, shtuff like that -- pin its tab, and drag it to the left-most position. Then a present-able desktop is just a quick Ctrl-1 away.
Oh yeah, and I almost forgot: That dialog he's whining about is from a stupid application. It has fuck-all to do with the desktop metaphor and local files. Sheesh. Geroffmylawn.
In addition to the aging paradigms of local storage and the perfect/gone dichotomy of in-memory storage of active items, which I definitely agree should go away, we should also prepare for the breakdown of the "local compute" paradigm that underpins it. Discussed in part 2 of http://www.thelowlyprogrammer.com/2021/04/the-rise-of-datace... (note: self link), I'll summarize as _"expect most compute power you use for anything non-trivial to end up being remote"_. The 'fragment' based model discussed in the parent article fits well with this, with the idea that a fragment is not bound to the local computer in either production or consumption, but just a handle to something sufficiently durable for the purpose.
Somehow this kind of writing irritates me a lot. Maybe it is because they present their view as ideal for everyone's use cases? Then again, I am also guilty of the same thing, as I curse modern design when it hides technical information.
<Random rant>
Operating systems and boot messages: why do these need to be hidden? They don't require any user action, but they provide valuable information when things go bad.
And those freaking spinning progress indicators that do not tell you whether progress is actually happening or the working thread has crashed or locked up.
Everything is connected to some company's server and dark patterns are everywhere.
And it feels like all the sites are designed to show no actual information, only full-HD carousels with stock images.
> And instead of creating documents — a sort of heavyweight work product — a lot of what we work with on our computers now are fragments: URLs and meme gifs that we copy paste between windows or chats, a PDF that we download to print out or fill in and email. Data that we copy from one spreadsheet to another. Photos from the phone that we crop so we can upload somewhere else. Video clips of toddler nieces doing something cute.
So this reminds me an awful lot of Apple's OpenDoc initiative from the late '90s. Such a shame Jobs pulled the plug on it when he returned.
It also reminded me a bit of Microsoft and KDE doing stuff with COM/OLE and KParts/KIO, though OpenDoc always came off as way more ambitious than those.
I've used soooo many alternative desktop concepts. They all seem great at first until you realize one thing. They have to be 100% perfect all the time. That one time you need to find something super important and the methods don't work. You are up the creek without a paddle. You don't even know how to begin searching or repairing a lost file situation.
Unfortunately, computers are nowhere near reliable enough for a paradigm shift. I will always have a better mental model of my data than the PC will at this time, especially for important stuff. If I have to keep the old ways for important stuff, then there is little point in learning a new paradigm for my pointless stuff.
Having used the desktop metaphor for 30+ years, I'm still not really sure what I'm supposed to be doing with it. The desktop, that is — i.e. the ~/Desktop folder. What am I supposed to keep there? ~/Downloads, I get. ~/Documents more or less makes sense. But what the hell should I use the magic Desktop for, given that I'm hardly ever looking at it (I usually have at least one window open!)?
The Desktop folder might make some sense if it were the root of the filesystem, but as it is, it's just somewhere that temporarily stores my screenshots, that I have to frequently clean-up by moving its contents into ~/Pictures.
Around 2015-16 I was astounded to observe a then-colleague, who had worked with computers for decades, exhibit this workflow when switching back and forth between different application windows on their desktop computer[1]:
* Move hand from keyboard to mouse / touchpad
* Click "Minimize window" button in upper corner
* Mouse over to task bar / dock[1] and click the other app
* Do whatever it was they were doing (copy/paste?) there, then...
* Repeat the process to switch back to the first app.
Yes, I mentioned the existence and function of Alt-Tab[2] a couple times. Not too many. Apparently not enough. I have later observed the same thing with at least one current colleague[3], also with decades of experience in their line of work.
Eh, sorry, "Why are you recounting all this?", I hear you ask? OK, to get to the point: People who work this way do actually get to see their OS desktop a lot.
I think most people used to do that back when the graphical "desktop" UI was new; I may well have done so myself, 30-35 years ago -- not for very long, though, AFAICR. And I wouldn't have thought that very many do nowadays; I'd have guessed most people who've started to work with computers after ~the turn of the century would have faster, more fluent, ways of working ingrained.
___
[1]: Can't recall if they used Windows or a Mac at the time I noticed this.
In Windows, the Desktop folder _is_ the root folder, conceptually (but not really). If you open the My Documents folder and press Up you will be in the User folder. Press Up again and you will be in a "Desktop" folder that also includes such things as the Recycle Bin and Control Panel. Press Up again and you will be in My Computer with a list of storage devices.
In reality, that Desktop view is not the _actual_ folder, which is located in the User folder. It's just a special view of the desktop that probably causes more confusion than it is worth.
I use it as a place to put files I’m working with in the moment. Once I’m done with them, I either file them away or delete them. Although I don’t use a desktop on Linux anymore, I still use the directory for that purpose.
I'm on macOS, and have a ton of untitled open / edited Preview documents (annotated PDFs, marked up screenshots)
When I hit Cmd+Q Preview closes without any dialogue, opening Preview again restores all the documents in their correct state
I have a suspicion the author has some combination of "Close windows when quitting an app" turned on in System Preferences -> General (not the default choice), or "Ask to keep changes when closing documents"
I think a lot of people enabled these settings when macOS did away with the "Save As..." options and made everything save and restore by default
The article is titled like some Bill Joy-esque paper. Then you actually read it and it's just a blog post about a guy complaining about how disorganized he is and how he keeps 73 tabs open.
Then some naive proposal for a weird uniform interface known as "fragments".
Yes, userspace is becoming increasingly browser-centric. I don't understand what this guy is dreaming of. That future desktop environments should wrap everything in some metadata structure on top of a preexisting file system?
A symbolic link to a url is a file, meme gifs are files.
Since when is a program allowed to decide to delete my information on its own accord? If not me, who authorised this? Not me.
Really that kind of threat of deleting documents is completely unacceptable. It implies that I, the user, don't have any say. That I must make this decision right now. Because I have nothing better to do. That the computer cannot simply store these documents and restore them later when preview reopens. (Which it most definitely can)
It's just lazy developers or, much more likely, just bad UX.
God, I do not use my Mac in any way resembling this hell of URLs and meme gifs that this dude is complaining that it is badly suited for.
And if I wanted to hide the Preview window full of my porn for a video call (I'm assuming that's why he wants to close that Preview full of a bunch of stuff), I'd just hit f3 for Exposé or whatever they're calling it this year and switch to the desktop I keep tedious public-proper work shit on and leave Preview where it is.
Google isn't the top choice because it is the best. It's because it has the most brand awareness. It doesn't always land me on the best result by any means. I'm sorry, but that statement really rubbed me the wrong way. There are many more times when I land on the right site based on a search within DDG where Google failed terribly. Google didn't succeed because AltaVista sucked. What sort of analysis is this?
I have found (and know several who agree with me) that Google search quality seems to have taken a dive in recent times. It's almost like instead of just searching, it makes assumptions about my purpose for searching and funnels results according to that assumption. If one of my search terms aligns with a current film, it assumes I want show times. If I search for a city, it assumes that I want to book travel there. Ditto for essentially every search I do.
If it guesses my purpose correctly, it feels almost like magic. But it's wrong more often than not.
However, let's not be revisionist. Back in the day, Google was light-years ahead of anything else I had used.
I like some of the ideas toward the end for metadata capture, but this guy just has too much shit open. Close things when you're done with them my dude.
Oh look, somebody else bellyaching about the tech duopoly not giving half of a flying f--- about what its customers actually want, so long as said customers continue to pay out.
Look, if your employer mandates Mac OS or Windows, I get it. Go raise hell about it to your supervisor a few times and if they're not interested, then you can piss and moan about MS and Apple foisting outmoded UI paradigms on a non-consenting public. Fine.
Otherwise, if these are systems that you've selected and paid for of your own accord, knowing full well what that entailed, then you're just feeding the monsters and then complaining that they won't change their ways. These are profit-motivated entities, folks. Until their bottom line is impacted, and somebody spells out that it's because of Issue XYZ, they're not going to give a flying f--- about Issue XYZ.
Ben Zotto, there have been options out there for decades (literally decades) that follow more flexible paradigms than the traditional desktop metaphor. Kindly consider putting your money where your mouth is, kicking Apple (and MS) to the curb, and writing a public blog post explaining why so that some product engineer there understands... or otherwise perhaps just admit to yourself that they own you (not the other way around) and move on. But this thing of "I'm going to write an article complaining that a profit-driven company won't solve my problem, even though I continue to pay them not to solve my problem" has got to stop.
It is for this very reason that most of my work happens within Evernote. Each note is a fragment of information, which I can easily find, organize, and link. I can even attach documents to the notes, which I can edit and which are updated within the note. If I could have code notes (Evernote notes that behave like Jupyter notebooks), then I would only need this app (and a browser).
> A “document” is stored in a “file,” and that file lives permanently in a “folder” (think: filing cabinet) or temporarily on your desktop.
Or not so temporarily in my case :')
But yes I agree we need to rethink this. The desktop as a paradigm was good to introduce new users to the concept. We can come up with better stuff now. I see a bigger future for tiling window managers like i3.
Yes, 30 or 40 years ago, on DOS? Macs and Windows were not real back then. But a UNIX system had find(1), and back then everything was text, so finding a file by its text was rather simple (if a bit time-consuming). Now, with all these crazy file formats forced upon us by marketing types, not so much. Unless, like me, you only deal in text.
I would prefer the idea of workspaces, where I use a tag to classify all items related to a project: discussions, documents, meetings, meeting recordings, calendar events, presentations, people, etc. Then I can switch my workspace and everything I need is there. This would make my life easier and prevent context switching.
"Computers will die. They're dying in their present form. They're just about dead as distinct units. A box, a screen, a keyboard. They're melting into the texture of everyday life. This is true or not?"
"Even the word computer."
"Even the word computer sounds backward and dumb."
I was excited and hopeful for Scopeware Vision back in the day... or at least for macOS's "application windows" or Mission Control UIs to be a bit more 3D layered/cascaded, with an easier way to navigate through them, kinda like Cover Flow or Time Machine's view.
this article is mindpoison to make you think individual computers & systems are important. imo they are fading.
all new systems of any interest are systems made up of multiple devices. standalone computing is fading. facing this fact is an existential question that personal computing, that free desktops must face, as must everyone else.
still, some decent exploration of the value of file systems here. I for one think the mind palace of a directory hierarchy is extremely worthwhile, and that search-only, giant nebulous seas of files are a terrible idea that will leave us unmoored & adrift, forever lost & without concrete thought. but I could be wrong!!
If I may summarize, the single most important takeaway seems to be:
The original mental model for computers focused on their function as storage space. We should focus more on their search abilities (and on freezing ongoing tasks).
I've been using i3wm for several years now, and couldn't be happier without the clutter of a desktop. My files are better organized, with Downloads, Documents and Pictures being the sensible defaults.
Having a file-naming convention helps remarkably for that.
I tend to assemble mine from (a rough sketch follows the list):
- Author (if not myself)
- Recipient (if correspondence)
- Project (if myself)
- Title (or keyword)
- Date
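For illustration only, here is how such a convention might be automated; this is just my own sketch in Python, and the field order, separators, and helper name are whatever you settle on, not anything standard:

    from datetime import date

    def build_filename(title, ext, author=None, recipient=None, project=None, when=None):
        # Assemble a document name from optional parts, skipping the ones that don't apply.
        parts = [p for p in (author, recipient, project, title) if p]
        parts.append((when or date.today()).isoformat())
        # Normalise: lowercase, spaces to hyphens, so names stay shell- and search-friendly.
        parts = [p.lower().replace(" ", "-") for p in parts]
        return "--".join(parts) + "." + ext

    # e.g. build_filename("Grant proposal", "odt", author="jdoe", project="acme")
    # -> "jdoe--acme--grant-proposal--2021-05-30.odt" (date will vary)

The point is only that the parts are ordered and predictable, so both a human scanning a directory and a dumb substring search can find things.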
There are non-file (or metafile) systems which populate metadata automatically, with email systems perhaps the most familiar (from, to, date, subject, plus other optional tags or headers).
Versioning should be handled by a version control system.
The "New Document" / "New Folder" convention is ... just stupid.
Mind that the above are for documents in a more traditional sense, rather than, say, files within a programming or scripting project. The directory hierarchy itself can provide much needed context, though filename conventions add to that.
please advocate these (old) ideas (becoming new again) to the creator of SerenityOS; I want to hear his opinion, since he used to work at Apple as a WebKit and Safari engineer
I haven't used the desktop in 5 years. It's made for users who want their main interaction with the computer to be the mouse, but that's getting rarer.
There are many systems working on that; one that springs to mind is perkeep (https://perkeep.org/):
- everything is a blob of bytes. perkeep stores everything based on their hash
- a blob of bytes can be a photo, in which case there's a little metadata blob pointing to it, explaining what it is, where it was taken, the modification time, maybe some additional details, and a title. That blob is also stored and retrievable with its hash
- or it can be a video, or a tweet, or a file... in any case that's just an additional metadata blob that creates structure out of the raw bytes
- those metadata blobs are indexed and searchable
The end result is a massive store (possibly implemented with remote storage, it doesn't matter, it's all the same API) with "fragments" in it and a search bar to access it. Search for "Scotland" and you'll see the poem you wrote while you were there, along with your flight ticket and some pictures of the mountains.
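To make that content-addressed idea concrete, here is a toy sketch; this is my own illustration in Python, not perkeep's actual API or schema, and a real index wouldn't scan every blob on each search:

    import hashlib, json

    store = {}  # digest -> raw bytes; in practice this sits on disk or remote storage

    def put(blob: bytes) -> str:
        # Everything is stored under the hash of its own contents.
        key = hashlib.sha256(blob).hexdigest()
        store[key] = blob
        return key

    def describe(target: str, **attrs) -> str:
        # A metadata blob is just another blob: JSON pointing at the thing it describes.
        return put(json.dumps({"ref": target, **attrs}, sort_keys=True).encode())

    def search(term: str):
        # Naive search over metadata blobs; yields the descriptions that match.
        for blob in store.values():
            try:
                meta = json.loads(blob)
            except ValueError:
                continue
            if isinstance(meta, dict) and "ref" in meta and term.lower() in json.dumps(meta).lower():
                yield meta

    photo = put(b"raw jpeg bytes would go here")
    describe(photo, kind="photo", title="Ben Nevis", place="Scotland", taken="2019-06-02")
    print(list(search("scotland")))  # finds the metadata blob pointing at the photo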
I think we've all been using the Downloads or Desktop folder as a user's temp folder: we knew something we did was recent, so it should still be there. Although that's not how it's supposed to work, it still works very well that way. But computers are made to serve us, so if that is the way we're using the computer, then the software should change to better suit us:
- no software made in 2021 should have a "Save" button: it should be done automatically, such that I can leave the application and reopen it, or open another one that can read the same file, and be back to the state I was in before.
- for some files I just don't care where they are saved; I only want them to be accessible in a "Recent" folder. The only way I've found to make it bearable on Windows is to use Everything (https://www.voidtools.com).
- for even more files I want to access them not based on some name I gave them but based on richer metadata. We have the tools to automatically categorize photos; I want to be able to search for garden photos without going through a third party's bloated website.
- some files have a specific hierarchy because they depend on each other. A path should be just another piece of metadata instead of being so central to the identity of "something"
All of this is doable with something like perkeep, but its API is not really made for being used as such; it's more geared towards backing up one's digital life and digging through it. Maybe we don't have the right software, but I'm sure we have the right bricks already.
> no software made in 2021 should have a "Save" button: it should be done automatically […]
Just because I've opened a document doesn't automatically imply that I want any changes to be saved without asking. It's not that uncommon that I open something for reference purposes only, so any changes are either unintentional (I accidentally hit a button or started typing with the wrong window focused), a side effect of copying data into something else (e.g. unhiding some layers in a graphics or CAD file so I can copy that data), or else just some experimental or otherwise temporary changes that I certainly don't want to keep.
Hmmm. OK. Read the entire article. I don't see a problem at all. Maybe there's something macOS is doing that makes the desktop painful. I don't know. I use Apple/PC and Linux. I just don't have any real issues.
Here's the other data point; actually, two billion data points. It's hard to get an accurate number, but it seems there are about two billion desktop computers worldwide. About 85% of them are running Windows, and most of the rest macOS.
You don't deploy two billion of anything if the user interface is broken and people have issues with it. The fact that everyone, from grandparents to PhDs in CS, is able to successfully use and navigate these abstractions pretty much says they work just fine.
These platforms (all three major ones) work well enough for non-engineering users to use them every day without major concerns. That, to me, is the very definition of something that isn't broken.
That said, is there room for improvement? Sure, absolutely. Always. Nothing earth shattering though.
For me, a more intelligent clipboard and the functionality that Google Desktop Search used to provide. I keep a massive amount of useful data on a "Library" drive. This data spans a range of disciplines (electrical, software, mechanical, optics, robotics, manufacturing, etc.) as well as reference material (books, manuals, courses, etc.). Any search for information in the browser completely ignores the vast amount of information I have collected over the years.
Truly integrated search across platforms would be very useful. I know there are tools out there that might be able to do some of this. I used to use Google Desktop Search. Once that evaporated I didn't look for a replacement.
One way to get around this (to have a single browser-based universal search experience) would be to move all the files to a web server and let Google index them there. There are at least two problems with this. There are files in Library that are private and should not be exposed to the internet (client files being one example). Another category is paid products, the simplest of which are PDF or other-format ebooks and training material. And, finally, as we do work under ITAR, we simply cannot have a bunch of our data out in the open for all to see.
I realize that for a lot of people these days the computer experience is inseparable from the online experience. However, for a lot of us, this is precisely what we need, a clear demarcation between desktop and everything on the other side of the network router.
A simple example of this I can offer is what Altium has done to their Altium Designer electronics CAD software. They are so intent on turning it into a Fusion 360-style recurring-revenue cloud tool that they have, in my opinion, compromised security for every user who cares about (or is required by law to care about) these things. Today (and this is just my opinion), when you are using Altium Designer, you cannot guarantee that the designs you are working on won't be exposed to their cloud solution. I suppose you could if you air-gap your machines, though I'm not sure how it would handle licensing without the ability to call the mothership. This is terrible stuff.
Nearly everything this article takes issue with could be fixed by treating rich metadata at the filesystem level as a first class citizen, and enhancing basic dialog functions to reflect that metadata.
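One existing building block for this, as a sketch: Linux filesystems already allow arbitrary extended attributes on files. The example below assumes Linux with user xattrs enabled, and the "user.tags"/"user.project" keys are just a convention I made up, not anything standard:

    import os

    path = "report.pdf"  # some existing file on a filesystem with xattr support

    # Attach arbitrary metadata directly to the file, independent of its name or location.
    os.setxattr(path, "user.tags", b"finance,2021,draft")
    os.setxattr(path, "user.project", b"acme")

    # Read it back; open/save dialogs and indexers could surface these fields directly.
    tags = os.getxattr(path, "user.tags").decode().split(",")
    print(tags)  # ['finance', '2021', 'draft']

The pieces exist; what's missing is dialogs and search tooling that treat this metadata as the primary handle rather than the path.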
Unrelated, but could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
I'm not a mod, but they almost certainly track the IP addresses of people who create accounts. It's basically impossible to do spam prevention otherwise.
I think that most people don't have static IPs, and then there are proxies and VPNs... I don't think an IP is a meaningful identifier. In my experience IP bans never really work, except maybe range bans, but then you might affect other users too, and of course you could still circumvent it with a VPN.
I once created a throwaway on HN for something sensitive, and I happened to be on a VPN at the time. My comment started out dead and I had to vouch for it on my main account.
So I think that's just the tradeoff they make, in order for HN to be able to exist. It probably gets reversed if you email the support link.
The general problem with search-based approaches to the desktop problem is that people expect mind-reading from what is ultimately a very limited algorithm. Then they get mad because the search algorithm doesn't have the associations that are in their head to categorize and index things and it doesn't turn up the results that they think it should.
If I had a dollar for every time I've explained why a search isn't returning the results that the user thinks it ought to, because they are a human being, and not a dumb pile of boolean logic...