The desktop is broken not because of the file/folder paradigm but because we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop, instead of through their Web interface. You should be able to save a Facebook post or Tweet and their replies to disk.
Why can't you? Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content. Furthermore, browser APIs have never embraced the file system because there is still a large group of techies who think the browser should be for browsing documents and not virtualizing apps (spoiler: this argument is dead and nobody will ever go back to native apps again). Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store.
I'm hoping the new Chrome file system API will bring a new era of Web apps that respect the file system and allow you to e.g. load and save documents off your disk. However, this still won't be good enough for multiplayer apps, where many devices need to access the same content at the same time. I don't know if there is any real way we can go back to the P2P paradigm without destroying NAT - WebRTC tries but WebRTC itself resorts to server-based communication (TURN) when STUN fails.
> this argument is dead and nobody will ever go back to native apps again
I agree with your post in general, but not with this. I see a lot more interest in self-hosting lately, precisely because of the concern you mention: online services are full of ads and tracking. More and more people seem to be doing it.
And I personally really prefer native apps over web apps or electron stuff.
Once self-hosting is as easy (for your grandma who likes cats) as downloading a container from the Windows Store and double-clicking it to open, install, and start the server, then we'll truly have made leaps and strides in fixing centralization.
Self hosting is hard because ops is hard [1]. People aren't yet in the mentality that you should provide an official modular deployment every time you provide a server binary. Everything-as-a-container wouldn't be the worst way to take the desktop [2]. Isn't that kind of how Apps work on OS X?
[1]: The ironic part is that most people don't keep anything long enough for a hard drive or any other component to fail anymore. So the argument about the cloud abstracting away physical hardware maintenance for everyday consumers like you or me is... dubious.
[2]: Yes, I just spent the weekend learning nix-build. Goodness, it's super easy to containerize all the things! :) I feel like a zealot when I start using nix to do what I could have done with a zip file. But there's something magical about having a zip file of assets with its own shell.
Self-hosting is hard because we are limited by what our ISPs allow us to do with our connections. Why don't they let people self-host, with an option to migrate the data to the cloud once traffic gets big, if clogging the municipal pipes is really the problem? The way companies like Comcast and the DSL providers could succeed in this arena is by making self-hosting easier. Why do I, as an end user, need to understand the infrastructure when all I am trying to do is share data or information?! (I am playing the role of an uneducated user here, not a Software Engineer by career.) It's disheartening that basic things like this aren't solved at the ISP level. I shouldn't have to find a service like Weebly, DO, AWS, etc. I just want to be able to share my info, and my ISP should at the very least provide the basic framework for me to do it. When my content becomes popular, then adjust accordingly.
I don't think that anyone self-hosting is able to make a single dent in the consumer internet infrastructure. Usually when you're self-hosting I guess you're handling either just your friends and family or perhaps at most up to 1000 strangers that follow your hobby.
Hypothetically in a future where self hosting is non hostile, will you see people self-hosting startups and the like? Yeah, maybe feasible. I think at that scale you start to care about things like uptime and maintenance. But I think the biggest winners of self-hosting are the photographer who saves $5/mo on the blog that people rarely read, the kid who doesn't have to pay $10 to play Minecraft with their friends, or the family friend with 2TB of family data who doesn't want to pay $20/mo for Dropbox or the like when they already have the hard drive to store it. Yes, it needs to be simpler for these people! There's a whole economy in making it difficult for these!
> Hypothetically in a future where self hosting is non hostile, will you see people self-hosting startups and the like?
Huh, is that not a thing anymore for startups today? My experience, starting in 2003 and across two companies, was that if you needed any service fast and on the cheap, you were forced to host it yourself. Run your own mail server, web server, DNS, server housing, all the web apps, etc., and of course spend most of the day developing your actual product. Have the hosted options actually gotten so cheap and reliable these days? Is it a mindset thing?
My gripe with the cloud or SaaS solutions I've used over the years is always that managing and backing them up is difficult, tending towards impossible, and without those you can't rely on these services; it's just voluntary vendor lock-in. When self-hosting, you're actually (forced to) learn how they work and become able to fix them. Self-hosting, to me, is the simpler, more reliable option if you depend on a service for your business, because it enables you to get it fixed yourself when it is not working. It doesn't prevent you from outsourcing the work if you have the money, but worst case, you still have direct access to your property.
With the hosted options, in my experience, you get all these fancy promises about availability and it being the latest, smartest tech, and then the service is down for half a day, your data got restored from days old backups (if at all), and there is nothing you can do except tell your customers that "we're sorry and working on it" while you wait for it to come back. :-(
>and then the service is down for half a day, your data got restored from days old backups (if at all)
It's not unheard of for the same thing to happen with in-house systems.
The difference is that you may have a wider range of options to avoid and respond to any outage if you run things yourself. That seems to be a rather theoretical advantage though. Every time there's a new ransomware attack, it's always those in-house deployments that are hit hardest and take the longest to recover.
I think under optimal conditions self-hosting is superior. But conditions are rarely optimal. As soon as you have to convince non-technical management to invest in non-productive necessities or in contingency planning you're already in a sub-optimal position.
The biggest win we could achieve for self-hosting doesn't involve people actually self-hosting. The idea of personally maintaining your own infrastructure is unlikely to scale to general population - but the thing we're really after is ownership of data and the ability to own infrastructure.
So, I think, the ideal situation would be to have a combination of big and small companies offering storage and compute as a commodity. You'd pay a fee to keep your stuff hosted somewhere; any time you see a better offer, you can migrate to a different provider without much hassle, and with near-zero downtime. Cloud services would work by shipping their code to your data, not the other way around[0]. And if you were so inclined, you could just build your own infra, or even buy a turn-key "self-hosting in a box" kit.
Pieces of that vision are already here. Compute providers are plenty. You can order "self-hosting in a box" kits. Internet architecture makes everyone's computer equal (at least in theory; in practice, ISPs mess it up with NAT and their T&Cs). The only thing missing is the part where you own your data and SaaS vendors serve you - the bit that makes SaaS truly Software as a Service, instead of Serfdom as a Service.
--
[0] - Preferably with homomorphic encryption preventing SaaS vendors from putting their hands in the cookie jar, if we can get that to work without creating another blockchain-level environmental disaster.
You mean good old webhosting?
What I don't get about the HN crowd is making every simple, already existing, already proven, and already solved problem so damn hard.
I'm in the EU, and I host my websites at multiple local webhosting companies. They're small enough to care about support and big enough to guarantee speedy and reliable service. By law they're not allowed to go through my data (of course they can, and I have no way of proving that they did), so the legal deal between me and them is crystal clear. Who do you call when your Amazon Web Shit serverless thingy doesn't work anymore?
I can and did move several webapps and websites from one webhosting company to another. It works flawlessly, and besides waiting a couple of hours for a DNS change, it's almost instant. I get it, you can't do that easily with a system with a million users.
This so called problem was already solved decades ago. It was called personal computers and the internet. When people started calling the internet 'the cloud' then things went downhill.
Excuse me for the slight rant. I should go outside and see some more sun. ;)
Yes, this is solved. I'm saying that there's another problem that needs to be solved too: control over data.
Regardless of what you use for hosting your own stuff, if you want to use a third-party SaaS as a user, they own the data. Want to make a document on Google Docs? That document lives on Google's servers, it's forever tied to their service and mined by them. There's no artifact you can hold on to, other than your user account.
What we need is a system where the data for that Google Docs document lives in a place you control - be it your own hardware, or some hosting you rent somewhere. It's the SaaS that should come to the data, and operate on it there. That way, if you lose your Google Docs account, or decide to edit the document with something else, you actually have that document, in its canonical form. Same for all other SaaS.
In an ideal world, yes. In practice, that's impossible to do up front, so the next best thing would be open formats - i.e. openly and fully documented ones. The goal is to break the leverage a vendor has over the users when using a closed, proprietary format.
> There's a whole economy in making it difficult for these!
And there is your niche right there. Individual companies or persons do not need to make 'dents in markets' all by themselves. A host of people doing the same might. But for the individual - especially one with sustainable-income objectives rather than hockey-stick growth - there's a good place in the market, I think.
" I think at that scale you start to care about things like uptime and maintenance."
I guess none of the groups you mention will ever grow to a scale where they'd care. And that's a good thing, because they are distributed. I think it creates a better internet: a wide network instead of a shallow graph of a couple of mega-nodes.
Self-hosting isn't hard, it's currently impossible.
The real problem is who owns your data. Because if you use Word or Photoshop, your files are locked inside Word and Photoshop. And this is still true if you use FOSS alternatives, because there's only very limited support for metadata-aware sharing between applications of all kinds.
It would be super-useful to have (for example...) seamless links between text editors, web design applications, web hosting systems, and even video editors and ebook publishing tools. But that's not where we are now. There's some limited interchange, but most cross-domain transfers are difficult and fragile, and some are impossible.
Cloud is just the online version of the same model. When you have proprietary control of user data through proprietary file formats which actively frustrate open sharing of data between applications, it doesn't matter whether the data is stored locally or in the cloud. It also doesn't matter if you're using a mobile or desktop UI.
The FOSS people have always been looking through the wrong end of the telescope. The real revolution would be open data which is wholly and exclusively owned by users (or user groups for collaboration) and loaned to proprietary software for specific limited tasks.
Which is the opposite of how things work now. Applications and products own your data and let you access it - but only if you ask them nicely. And - increasingly - only if you pay annually for the privilege.
So containerisation or self-hosting or whatever is a non-solution unless it also gives data back to users.
Which is also why the fragments idea won't work. There's limited use in trying to automate or manage or otherwise AI-ify access to data that you don't truly own anyway.
In fact a new kind of shared Internet would be a very useful thing. But it would need a ground-up redesign of everything, including browsers, mobile apps, desktop applications, operating systems, search, and the financial and legal frameworks surrounding them.
I'd love to see that happen. But right now in 2021 it just doesn't seem likely.
> The real revolution would be open data which is wholly and exclusively owned by users (or user groups for collaboration) and loaned to proprietary software for specific limited tasks.
How would that work in practice? By its very nature open data would also be accessible to proprietary programs although the reverse need not be the case.
Seems like we will need open data and code. Either one open will not do.
> How would that work in practice? By its very nature open data would also be accessible to proprietary programs
That is a good thing. If program X works best on my data I want to use it. There are a few examples where people do mix programs from different companies. Musicians use MIDI to connect their favorite keyboard to a synthesizer from a different company all the time - sure it is tied to hardware, but it need not be and is a perfect example of what should be possible for any user data: mix and match.
> although the reverse need not be the case.
It doesn't have to be, but if users demand it, it will be.
> Seems like we will need open data and code. Either one open will not do.
Open data means we can create the code. Closed data is a lot harder to deal with than closed code.
The reason MIDI is a success is the actual data is quite simple. It's just a stream of keypress and numeric control updates. And there's no incentive for a synthesizer manufacturer to block access to incoming or outgoing data.
When your data is complex, the processing done to it will be complex, especially if you need to guarantee invariants (eg. referential integrity or database constraints).
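To make the point about MIDI's simplicity concrete, here is a minimal sketch in Python of decoding a raw MIDI "note on" message; the whole message is three bytes, which is a big part of why gear from different vendors interoperates so easily (the returned dict shape is just for illustration):

```python
# Decode a 3-byte MIDI channel voice message (note on/off).
# A channel message is one status byte plus two data bytes.

def parse_midi_message(data: bytes) -> dict:
    status, note, velocity = data[0], data[1], data[2]
    kind = status & 0xF0        # upper nibble: message type
    channel = status & 0x0F     # lower nibble: channel 0-15
    names = {0x90: "note_on", 0x80: "note_off"}
    return {
        "type": names.get(kind, "other"),
        "channel": channel,
        "note": note,           # 60 == middle C
        "velocity": velocity,   # 0-127
    }

msg = parse_midi_message(bytes([0x90, 60, 100]))
print(msg)  # {'type': 'note_on', 'channel': 0, 'note': 60, 'velocity': 100}
```

There is essentially nothing here for a vendor to lock down, which supports the point: simple, open data formats make mix-and-match the default.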
One example for this that i have been thinking of is chat. Instead of having 15 different apps, we should be able to choose which UI we want to use, and then that opens the chat streams. Kind of like email, where you can use any client.
However, most big companies would fight this every step of the way, as they'd lose control.
Edit: I envision it a bit like streams of data that you can subscribe / push to. Think RSS mixed with a pub/sub type of model. You would subscribe to the hackernews datastream, and submitting articles and comments are done using push. The push message would have some predefined metadata fields that are obligatory (article url, title, summary or comment text).
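A rough Python sketch of that subscribe/push model, with every name invented for illustration (the `Stream` class and the specific obligatory metadata fields are assumptions, not any real protocol): any client UI subscribes to a stream, and every pushed message must carry the required fields.

```python
# Minimal pub/sub stream sketch: RSS-like subscription plus push,
# with a few obligatory metadata fields enforced on every message.
from typing import Callable

REQUIRED_FIELDS = {"author", "title", "body"}  # hypothetical obligatory metadata

class Stream:
    def __init__(self, name: str):
        self.name = name
        self.subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self.subscribers.append(callback)

    def push(self, message: dict) -> None:
        missing = REQUIRED_FIELDS - message.keys()
        if missing:
            raise ValueError(f"missing metadata: {missing}")
        for cb in self.subscribers:   # any client UI can listen
            cb(message)

hn = Stream("hackernews")
inbox = []                            # one "client" among many possible
hn.subscribe(inbox.append)
hn.push({"author": "alice", "title": "Show HN", "body": "..."})
print(inbox[0]["title"])  # Show HN
```

The key property is that the stream and its metadata contract are the interface, so competing UIs can all render the same data, just as any mail client can read the same mailbox.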
This is centralisation from the bottom up! You don’t want this. If everything talks the same protocol, some giant will eventually own the protocol in the same way Google stole the web with Chrome.
Can you imagine if Android licensed iMessage instead of building Hangouts? Yes, we’d all be texting on the same protocol, and yes we’d have a choice of clients, but at what cost?
True, I was mainly thinking about the HN community (and the more technical people around me) instead. This is not for my grandma who likes cats (PS: So do I!)
Yes this is kinda how apps work on macOS, but not all apps yet. The new sandboxed container model isn't mandatory yet. Some apps are just a folder of files with no kind of containerisation at all (other than the .app itself being a folder rather than a file). More modern apps store all their data in a containerised filesystem though.
It leads to some cool things you can do like easily capturing the icon of an app or even changing it without having to change the app itself.
But on the self-hosting side, docker is making big strides there. Almost everyone deploying something like Home Assistant for example will do it through a tree of managed docker containers, in many cases without even knowing it :) It comes with a supervisor (which is also dockerised) which manages that part very well.
I think most consumers I know keep their computers for long enough by the way.. Phones come and go, but desktops and even laptops tend to stick around until they do break and I can't fix them anymore or just heave a sigh and go like "NO this time you really have to get a new one, Windows Vista hasn't been supported for years"
> But on the self-hosting side, docker is making big strides there.
Docker is only part of the puzzle. Even being proficient with Docker (and running services in a traditional way), there was always a big barrier to self-hosting for me in networking. If you lose all of your cloud-equivalent functionality once you leave your home network, it isn't a realistic alternative. I'm not a network whiz and the builtin VPN functionality of my NAS is sadly not up to par either.
In comes Tailscale[0], which made it dirt easy to self-host my stuff and have it available anywhere (given there is a Tailscale client for it which has been the case for all my devices). Since I started using it, I've completely migrated my contacts, calendar, zettelkasten (Trilium) to self-hosting and started some home automation projects. For someone who tried and was stuck at the networking part in the past, it truly is a game changer (and I'd like to think I'm not the only one that is held back by that).
Selfhosting these days is actually a breeze with docker.
I don't like using Docker for development or for deploying SaaS, but as a delivery mechanism, it's really great.
While self-hosting as commonly understood is not for everyone, I really hope and think that small-community hosting as a service will become a thing. Basically there's a number of places like https://syntaxserver.io/ which will host (for example) nextcloud for you. You should be able to get that as a supported service from a local company/organisation/group for a price comparable to what huge SaaS businesses would charge. With the difference that you can move the backup anywhere you want, and your data does not touch other people's data.
On Reddit, among colleagues around me, in Hacker News posts and initiatives. 2-3 years ago everyone was all "Hey, look at my new Office 365 setup". Now the cool new thing is self-hosting, often driven by a desire for more privacy. There are also new businesses around this use case. Look at https://www.beeper.com/ for example: it's a hosted Matrix service, but all the bridges containing your private data are self-hosted.
> My money's on your anecdata being 100% from a techie bubble.
Absolutely. But this is where things start before they get mainstream. Things pick up traction here. Then they mature and commoditise and make their way to the mainstream.
> Absolutely. But this is where things start before they get mainstream. Things pick up traction here. Then they mature and commoditise and make their way to the mainstream.
1. Where is the money in that, i.e. is there more money in it than in services?
2. Will it be braindead easy to use?
My money is on no and no so I don't see how the chasm crossing will happen.
> The desktop is broken [...] because we stopped using files to represent information.
This is it right here. Our entire world and notion of the internet is based on serving data stored in a file from one person to another. Once developers started leaning on too many conveniences and "moving fast and breaking things", we decided it was good enough to just store everything in a database, or serve it as JavaScript. These technologies are great, but they go completely against everything our computing paradigm stands for.
The file system is nice because it's one shared database. I don't want all my programs to keep their own databases, since then how do I search and move stuff in bulk?
No, but all your programs can/could share the same database, and you could cross-reference anything whenever needed. At least that's how I manage it. I have a single postgresql database with per-program/per-instance schemas.
I just did my taxes, and figuring out what to pay was a single SELECT over some tables in the paypal and various per-bank schemas, plus a schema with a table of currency conversion rates from our central bank, applicable to each month of the year for tax purposes, as expected by the tax man (or woman). Quick and easy. I don't even bother with a UI for these once-or-twice-a-year needs, just like I wouldn't bother writing a UI for some mp3 conversion task. A simple script will do.
Filesystem is great for arbitrary data/files with no schema. Random pdf files, code, programs, etc. But anything that has some obvious schema and comes in large quantities and perhaps needs to be modified/synced with third party data source, I like having such things in the database. It's so much more useful that way, because it's much easier to do something with the data.
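As a toy version of the tax query described above, here is what that kind of cross-schema join looks like, using Python's sqlite3 as a stand-in for the per-program Postgres schemas (all table names, column names, and figures are invented for illustration):

```python
# One SELECT replaces a whole ad-hoc UI: join income records with the
# central bank's monthly conversion rates and sum the result.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE payments (month INTEGER, amount_usd REAL);
    CREATE TABLE fx_rates (month INTEGER, usd_rate REAL);  -- central-bank rate
    INSERT INTO payments VALUES (1, 100.0), (2, 250.0);
    INSERT INTO fx_rates VALUES (1, 0.9), (2, 0.85);
""")

(total,) = db.execute("""
    SELECT SUM(p.amount_usd * r.usd_rate)
    FROM payments p JOIN fx_rates r ON p.month = r.month
""").fetchone()

print(round(total, 2))  # 302.5
```

The same data flattened into files would need a parsing script per bank; with a shared database the structure is already there to query.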
A local database will store data in a file, but I get what you're saying. Files are a common interface that allow you to pipe data around, keep it portable and malleable.
Yes. One with a generic schema, and to which the user has full access rights.
Fundamentally, it's not the files that make the filesystem great. You could devise different models, perhaps a relational one, and paper over the complexity with well-designed UIs. They'd probably still be more complex than the filesystem - files are about the simplest data storage abstraction you can invent[0] - but they'd be serviceable, and users would learn.
What makes the filesystem great is that it's an old abstraction, designed in the ancient days back when computing was still about enabling users. Bicycles for the mind and all that. People cared about making things useful to users, instead of just shamelessly exploiting them. So, designed back then, the filesystem grants users the vocabulary to manage their data and freedom to do so, and it's so ingrained that - despite their best efforts - companies weren't able to completely take it away.
Filesystem persists for the same reason e-mail persists. Despite its warts, it's one of those technologies made before the computing industry became exploitative.
--
[0] - Despite the frequent claims to the contrary, coming from the web and mobile world. But guess what, data magically held in app and "shared" by magic isn't easier to understand, it's just not understood at all - users have no mental model for this. The mobile app approach works only because it removed all data management features except the share button.
You are missing the point here by seeing only the technical aspect. You are confusing the technical description with the abstraction.
A database is hardly accessible to the user, if at all. Files are an abstraction that enables users to own their data. Once you have a file, you can do more than open it in your app. You can store it wherever you want, edit it with any software you wish, arrange it the way you are comfortable with.
As a user, you can't do that with your SaaS database; you must rely on the "export / share" function of your SaaS provider, hoping it will export all the data in a readable format, and that an import function exists and is reliable. You don't own anything, and as soon as you stop paying your subscription, you are stuck with nothing.
File formats like DOCX or PSD (a fitting extension) are almost impossible to parse 100% correctly and render without the (paid) software used to create them. While you may be able to copy your files, without the software they are quite useless.
You are right, and it's a real issue, but you can share them with whoever you want, however you want. You can just keep them and be pretty confident that they will still be readable in years: even if it becomes difficult as time passes, if it's nothing too exotic, chances are you will at least be able to read them.
And I prefer a "95%" correct DOCX or PSD that I can recover and rework if needed than a "0% this App is not available in the WhateverStore anymore".
For popular formats that would be possible, but there are a lot of binary only formats that are completely impossible to parse these days. You're free to copy it but the bits are essentially useless.
Try to load a current ML model 20 years from now. Probably tied to proprietary software and if you're unlucky also hardware (like CUDA)
Perhaps the concept of a file needs to evolve from a locally stored collection of bytes into a more generalized notion: a locally identified collection of bytes, data links, and functional relations. All of that should also be able to become fully local, akin to a 'clone'.
By now, the spectrum of interactions using computers is visible enough to generalize such meta-file formats. Whether it should be some form of database or a kind of system-level support for defining and assembling such meta-files is a question for experimentation.
It's more like an active-book paradigm versus a file, where one could tie together multiple contexts while still presenting them to the user in some human-perceivable form. Analogies would be project- or activity-based collections of files, links, collaborations, etc.
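One speculative shape for such a meta-file, sketched in Python; every name and field here is invented, and this is a thought experiment under the assumptions above, not an implementation:

```python
# A "meta-file" as local bytes plus links and functional relations,
# with the whole thing cloneable into a fully local copy.
from dataclasses import dataclass, field

@dataclass
class MetaFile:
    name: str
    local_bytes: bytes = b""
    links: list[str] = field(default_factory=list)            # linked remote resources
    relations: dict[str, str] = field(default_factory=dict)   # e.g. "thumbnail": "resize ..."

    def clone(self) -> "MetaFile":
        """Materialize a fully local copy, akin to 'git clone'."""
        # A real implementation would fetch every link; here we just copy.
        return MetaFile(self.name, self.local_bytes,
                        list(self.links), dict(self.relations))

doc = MetaFile("report", b"draft", links=["https://example.com/data.csv"])
dup = doc.clone()
print(dup == doc and dup is not doc)  # True
```

Whether the right substrate for this is a database or system-level support is exactly the experimentation question raised above.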
I keep repeating this argument in different forums, but: if you ever lived in China you would realize that the native app is the past and the future. Since governments can block websites, they can block your web apps too. When I came to China I completely lost access to Google Docs, Gmail, Facebook, etc. Relying on web apps is exactly like giving governments the right to uninstall applications on your computer.
Right now this is not a big deal in most countries. Right now. But as the web becomes increasingly balkanized (and I believe it will) and as countries become less democratic (always a possibility) the native app with local data will reassert its prominence in people's lives.
This is an important point, but then "native" iOS has the same vulnerability. Apple arguably has more power to block iOS apps than China has to block the Web, so China just tells Apple to block what they want blocked, and Apple does it.
If it makes you feel better, efforts are being made to bring non-tyrannical operating systems to mobile in a braindead-easy, consumer-friendly fashion. You can search up loads of articles about the amazing stuff that the peeps running organizations like Pine64 and LineageOS have been up to.
> Relying on web apps is exactly like giving governments the right to uninstall applications on your computer.
To some extent, governments, across the globe, are already exercising this right in some form. Quite often we hear that some government has banned some app and it becomes illegal to use that app in that country.
> (spoiler: this argument is dead and nobody will ever go back to native apps again)
Ahaha, I don't touch web apps unless I really really have to. And even then it's lighter stuff, like chats. Can you imagine using an actual productivity application in the browser?? No thanks. The thought that Slack/Ms Teams/Discord all run slower than MSN Messenger did in 2005 despite my having a computer ~50x more powerful is depressing enough (and each of them consuming as much RAM as I had on my desktop back in the day!).
> Can you imagine using an actual productivity application in the browser?
I'm not sure what type of applications you're referring to, but if you're including word processing in that, then I can't imagine not using either Google Docs, or whatever might one day come along and be a reasonable alternative. If I'm using a word processor, it's because I want someone else to be able to read and, quite often, contribute to what I'm creating. That is far, far less painful using Google Docs than Pages or whatever the alternative might be.
Photopea is a browser based photoshop; it is very good. VS Code is an electron based IDE, also very good (and there are various web variants as well). There was some kind of high quality online video editor my old landlord (who has a small video production company) used to edit 4k video with his macbook air. I don't know about audio, though I suspect there are more than a few options - yet probably less than for everything else, I'm not sure how good those physical device interfaces are now.
As for 3D stuff: it looks like SolidWorks has some kind of cloud platform. How much of the workstation load it handles, I don't know.
Those are not productivity applications; they are all tailored to specific purposes. The only ones I could see being considered productivity applications are text editors/IDEs, and even that's tenuous. No one is going to call Blender or Ableton a productivity application.
Most likely. It's not the best IDE but it is likely the best free one. Out of the box VS Code is pretty much good to go while others are either paid or take a lot of config.
I have attempted to use vim, but I just can't be bothered working out how to turn it into VS Code. If I have to decide between multiple ways to install plugins, each with their own tradeoffs, it's already too much work when VS Code just works.
You can collaborate on office docs in Office365, Teams and Sharepoint. Also, Google will convert Word. Quite a few people and organizations still prefer the richer feature set and combined suite in Office, or even iWork. Excel is still king of the spreadsheet and doesn't look to be going anywhere.
Electron apps definitely have performance issues and I do not defend them. I just also want to make sure we're recognizing that apps like MS Teams are doing a lot more than MSN Messenger did back then. Whether you use/need those features or not, the apps are much more capable. Our computers have gotten an order of magnitude faster for sure, but the workloads they're tackling have not stayed flat.
> I just also want to make sure we're recognizing that apps like MS Teams are doing a lot more than MSN Messenger did back then.
Can you elaborate? MSN Messenger did text chat, voice chat, video chat. I don't see what more features MS teams has that explain the order-of-magnitude increase in footprint.
It's more than one OOM, closer to two. From what I remember, MSMSGS used a few dozen MB at most, since typical machines of the time had between 64 and 256MB of RAM. Meanwhile, I've seen Teams go over 4GB simply sitting idle, and others have reported much worse.
Teams does have quite a few extra features: multiplatform, easy meeting recording, media embedded in text chat (quite useful for my current work), shared editing of Word or Excel documents within calls, sharing of Powerpoint presentation controls within calls, per-channel wiki (a bit anemic, but it's there), pretty extensive Sharepoint integration, crazy extensibility...
The downside is of course that you pay for all of that even if you don't use it.
I am curious if people would generally consider an editor / IDE a productivity app. If so, VS Code's popularity seems to be more than "many people do"; in certain segments, it's the leading development environment [1].
And oddly, if we consider it an IDE (I would, if the relevant extensions are installed for whatever you are doing), it seems to use fewer resources than most of the others I use on a regular basis, while seeming more performant. It's really an odd app.
Personally, I don't view browser-based and Electron apps as equivalent. I don't really use any Electron apps, but to me they aren't as annoying as software that runs within my browser window.
Sometimes I just close my browser, because too many tabs have built up. I also want a Dock icon; PWAs and Electron apps provide that and allow normal cmd+tab to work.
PWAs are worse than Electron apps, because they die when you exit your browser. Google Chat is the worst: they had a standalone app, but now it's a PWA. So I either have to have Chrome running (a browser I don't actually use) or keep a window or tab open with the web version.
Electron apps are generally fine from my perspective; they use more RAM and don't feel completely native, but in the absence of better options they're completely fine.
Yes, I can get behind this argument, it's significantly different from the standard opinion that Electron apps are trash quality in terms of performance. VS Code is generally either seen as a counter to that argument, or as the exception that proves the rule; nevertheless I am frequently left unsatisfied with such arguments. My core point is: the norm seems to be lazily written native apps, to the point that well written Electron apps can compete, assuming that the app is non-trivial. Clearly having an Electron app for a task list is unlikely to be a good idea, but as a core productivity tool that is open for months at a time, it's fine.
Coincidentally, I stumbled upon StackBlitz [1] a long time ago, and was generally very impressed with it. It is essentially a very slick, online version of VS Code optimized for rapid prototyping, or small team projects. I could easily see people working with it as a main IDE. The argument then would be whether your browser of choice provides a nice enough environment for it to rival your OS when it comes to pinning tabs, navigating between them, etc. I would agree that it's unlikely to be an awesome experience, but with some work, it could be good enough. I mean, I guess a large percentage of people use Gmail via the web interface, and what is email if not the quintessential productivity app.
Under the hood, Electron is powered by the Chromium rendering engine and Node.js. So why do you view browser-based and Electron apps as different? Because you can't see the browser menu bar?
Because they run as separate processes. I frequently just close all my browser windows, which would exit any browser-based apps. Electron apps are their own thing and live outside my browser window. They also have their own dock icon and exist independently when I tab through my open applications.
That's also why I strongly dislike PWAs: they DO NOT exist independently of my browser. They are very much tied to the browser (well, Chrome), which seems illogical, given that they have their own icons/launch thingy and pretend to exist as their own process.
I get that Electron and browser-based apps work more or less the same, but I interact with the two types of applications in very different ways. That's what I care about, the interaction; the underlying technology is irrelevant.
I work at Google and I do all my work in chrome, email, chat, IDE, ssh, docs/slides/sheets, etc. I used to use iterm and vim but during wfh I converged all my work into the browser and it's been pretty convenient accessing everything from the same interface.
I know Google is not like most places and that a lot of web apps suck, but when the web apps work well, it's pretty nice, at least for my workflow
> spoiler: this argument is dead and nobody will ever go back to native apps again
Not with that attitude it won't. Yeah. 90% of tech offerings only work based on monetization and hostage taking of consumer data. Normal people are starting to pick up on this. All the techbros looking to score that hot, sweet -aaS money are blind to it and desperately hoping their market dominance and spend can keep people from digging through the native computing stack.
Need end-to-end encrypted file transfer? VPN and NFS is your friend. Need chat? IRC, at your service. Also exposable through VPN, so you can limit the audience. Want Doc updates where your secrets and in-progress stuff are guaranteed not to get pored over by some intern or admin somewhere? See above. Want no bloody ads and not to be snooped on? Set that stuff up, homey. I've got a young'un whose mind is blown away at the fact games used to even exist that didn't need an internet connection.
The desktop metaphor is fine. What isn't fine is the normal person's technical education/on-ramping. The old approach was to teach programs first, then the protocols and problem classes they solve. Now it seems to stop at just teaching programs, because there is so much out there doing the same exact thing under different branding that there's never enough time to dig into what is going on under the covers.
Just.. no. Please stop pushing IRC. It's had decades to evolve and still today lacks really basic QoL things like a good mobile story (always-on connections won't fly, bouncers are limited hacks), permissions, or user registration that doesn't look like an 80s xterm.
Matrix is carrying that torch now. IRC is an evolutionary dead-end that will only ever be used by techies.
I use IRC; it is good. I also use NNTP, which is also good. Other protocols can be good too; they can be good for different purposes.
I can also store all of my files on my own computer. I don't need to store them elsewhere, except to make backups. I use DVDs for local private backups. For public files, I also store them on DVDs, but on other internet services (such as chiselapp) too.
There are many computer games that don't need an internet connection. Some of them, but not all, are designed for older computer systems such as the NES/Famicom, which can be emulated on many computers, so it doesn't matter what operating system you are on; it will likely work.
Better documentation is helpful: describe the program, the protocols, the file formats, etc. This is how to learn to work with a computer. Programs doing the same kind of thing can differ in many more ways than only branding, though; some have different features, source code availability, etc.
I for one want nothing to do with this. File system access is for good actors only, and the advertising assholes have poisoned the well for a web browser being anything other than a dumb document browser with the privacy settings turned up to the max for me.
A neutral file system and standardized file formats are a huge part of what has made computing able to do interesting things in the past 40 years. The fact that one application can output a file, and another can open it and operate on it is basically at the core of the unix philosophy, and the reason we can have things like developer workflows.
If I, as an application author, can only work on data in ways that are intended and officially blessed by another application, we basically have the situation we have on mobile, where everything is siloed and the state of the art is limited by the imagination of individual application developers.
How then are web apps supposed to become part of a more durable workflow (i.e. opening/saving/moving/backing up files) if we do not permit them the same privileges as native apps? I don't think web apps should have total control over your filesystem, but why not at least allow them to operate within a folder?
They aren't. Web apps can't be trusted with access to barely anything on your system, lest they copy it to sell as advertising. As long as the internet is fueled by advertising, essentially all web apps are adware.
BTW, both Safari and Firefox are not going to implement the File Access non-standards that Chrome pushes. They expose too much, and there are no good ways to limit the exposure.
When you think about it, the modern internet resembles a welfare state. Everyone's day-to-day sustenance is sponsored by a few wealthy benefactors, who meanwhile essentially hoover up what remaining potential there is. There could be so much more than what we think is possible now.
That can be a better idea. (Unfortunately, the HTML file input in every web browser that I have tried does not allow the user to change the file name to something different from the local file name. This ought to be fixed.)
When it asks the user for a file, it can also specify the wanted access: read, write, read+seek, or read+write+seek. A requested format can also be specified, but the user should be allowed to ignore it and instead pick an arbitrary file. For writes, an estimated file size can be specified as a hint, which can also be ignored. The user can then type in a file name, or, for the non-seeking modes, a pipe is also possible. For non-seeking writes, the user can specify append or overwrite. For seekable files, a pipe is not valid. For writes, the user can also optionally specify the maximum size the file is allowed to have.
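The access-mode negotiation described above can be sketched in a few lines. This is a toy model only; every name here is hypothetical, not an existing browser API:

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of the proposed per-file access request; all names are
# hypothetical illustrations for the scheme the comment describes.
@dataclass
class FileAccessRequest:
    read: bool = False
    write: bool = False
    seek: bool = False                       # seekable access rules out pipes
    append: bool = False                     # only meaningful for non-seeking writes
    requested_format: Optional[str] = None   # a hint the user may ignore
    estimated_size: Optional[int] = None     # a hint for writes, may be ignored
    max_size: Optional[int] = None           # optional hard cap set by the user

    def pipe_allowed(self) -> bool:
        # Pipes only make sense for purely sequential (non-seeking) access.
        return not self.seek

seq = FileAccessRequest(read=True)
rw = FileAccessRequest(read=True, write=True, seek=True)
print(seq.pipe_allowed(), rw.pipe_allowed())
```

The point of the sketch is that the policy ("which targets are valid for which access modes") is mechanical once the request shape is explicit.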
That's basically what file system access is like in Android, unless you give an app Storage permission. Seems sandboxed enough if no other web apps can view it.
> this argument is dead and nobody will ever go back to native apps again
I don't see this. The native program ecosystem is alive and well as far as I'm concerned. All the programs on my computers are native, despite no one "going back".
> Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store.
Now go and check out syncthing, you're in for a really good time.
Yes, I want to save files locally to disk. I don't use Figma, Slack, Notion, Facebook, Twitter (except sometimes for reading, using Nitter), or Google Docs. You could save the HTML, but that isn't always ideal. Having defined file formats helps, which is the case when using email, NNTP, ActivityPub, IRC, etc.
FTP is no good. There are better protocols, such as HTTP, Gopher, Gemini, Plan9, etc. I had made up a file format for serving directory listings by HTTP (but I don't know how to configure Apache, or to write an extension for Apache, to be able to use it).
About "who think the browser should be for browsing documents and not virtualizing apps": the browser is badly designed for virtualizing apps. (I have thought of some better ways.)
Also, the file system paradigm does not fail with shared content; you could have a program mount a remote file system and then access it using local programs, if wanted. (You can then also easily copy files between your computer and the remote in this way, using the standard operating system commands for doing so, and it will work just as well from the command line or a GUI. Similarly, for SQL databases, you can have an extension expose remote data as a virtual table, and then easily copy data between local and remote.)
> FTP is no good. There are better protocols, such as HTTP, Gopher, Gemini, Plan9, etc. I had made up a file format for serving directory listings by HTTP (but I don't know how to configure Apache, or to write an extension for Apache, to be able to use it).
WebDAV is actually fine for this use case and supported everywhere (e.g. you can enable WebDAV for a certain directory in Apache, map it as a network drive in Windows, and everything will just work).
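For reference, a minimal sketch of the Apache setup the parent describes. The paths, realm name, and password file here are illustrative only; it assumes `mod_dav` and `mod_dav_fs` are enabled (`a2enmod dav dav_fs` on Debian-style systems):

```apache
# Lock database required by mod_dav_fs
DavLockDB /var/lib/apache2/DavLock

# Expose a directory over WebDAV with basic auth
Alias /share /srv/webdav
<Directory /srv/webdav>
    Dav On
    AuthType Basic
    AuthName "WebDAV share"
    AuthUserFile /etc/apache2/webdav.passwd
    Require valid-user
</Directory>
```

Once this is in place, Windows can map `https://host/share` as a network drive, and most file managers on other platforms can mount it too.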
There are some things that I don't really like about WebDAV, including the use of XML.
However, the HTTP directory listing specification that I made has been described as being like a simpler and better (in some ways) version of WebDAV by some of the other people who have seen it. (It does do a few more things than only directory listings, but directory listings is its main intention.)
> Sounds good. Where is such a program? This is surely not a very novel idea, but where is it?
Every office in the nineties: Windows 95 + Office on the desktop and a Windows NT Server sharing the files. I'm not saying we should try to wind the clock back, but it was a solved problem.
It's probably less that they suck, and more that they aren't designed for the workload the developers want. Files are a good idea when you have exclusive write access to them at any given moment; less so when you want to support concurrent access. Real-time collaborative creation happens at a finer level of granularity: people are manipulating individual aspects of documents, objects in the application's internal model. This doesn't work well when your atomic unit of synchronization is the entire file.
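The granularity point can be illustrated with a toy per-field merge. The last-writer-wins scheme and the (timestamp, value) structure below are hypothetical simplifications, not any real product's model:

```python
# Toy illustration: merging two users' edits per field (fine-grained)
# vs. treating the whole document as the atomic unit of sync.
def merge_fields(ours: dict, theirs: dict) -> dict:
    """Last-writer-wins per field; each value is a (timestamp, data) pair."""
    merged = dict(ours)
    for key, (ts, data) in theirs.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, data)
    return merged

alice = {"title": (1, "Groceries"), "item1": (5, "milk")}
bob   = {"title": (1, "Groceries"), "item2": (3, "eggs")}

# Per-field merge keeps both users' concurrent additions; whole-file sync
# would force one copy to clobber the other.
print(merge_fields(alice, bob))
```

Real collaborative editors go much further (CRDTs, operational transforms), but the contrast with "last save wins for the entire file" is the same.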
Not that I disagree with your overall point, but there are multiple products in that space. It's clearly not a deal-breaker for most; but if anyone's lamenting the lack of a GoogleDocsFS, they can get one:
I mount a cloud drive containing about 3 TB data on my laptop using rclone and it works great. And the cloud provider does not even have a native Linux client. I am so happy with rclone and will totally recommend it.
Maybe, maybe not, but even then, wouldn’t implementing those be easier (for both you and the rest of the world) than creating a completely new protocol?
> It still leaves the issue of it using multiple ports.
Aren’t multiple ports only an “issue” if you assume NAT (and CGNAT, shudder) as a natural state of things?
> we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop
A good observation, but I think it conflates two different trends:
1. Some software platforms deliberately limit what data is stored on your local machine under your control
2. There's been a shift in UI/UX away from using files as a first-class abstraction
There's a good case to be made that a UI should generally hide the specifics of its data storage. As an example, it's a good thing that most email clients present the user with their emails, rather than with a raw folder of files. (Internally, the email client might make use of a database rather than a directory, so it might make good back-end sense too.) Of course, that's not the same thing as the email client being hostile to data-portability.
iOS strongly commits to this ideal, even at the expense of constraining user actions. The podcast app doesn't let you upload your downloaded podcast episodes to your desktop computer, for instance.
Aside: iOS has very poor support for dealing with files in the usual ways, to the point that you pretty much need to use a third-party app to do so. I've found the freeware Documents app by Readdle to be very good for this.
> this argument is dead and nobody will ever go back to native apps again
I agree that the web as a GUI toolkit is here to stay, but native apps are alive and well too. There's a trend to try to push users off the mobile web and onto native apps (Facebook, Gmail, reddit), rather than the other way round.
I'm not really sure how Slack saving documents to disk would really make sense. IRC was around before SaaS took off and webapps replaced desktop apps, but I can't think of a client that implemented "IRC documents" saved to disk.
Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor. They weren't in a special IRC format, and you didn't open them with your IRC client. Hell, you couldn't open them with your IRC client. There's no reason you can't just ctrl-C a bunch of stuff out of a Slack chat and ctrl-V it into your text editor. Only difference between that and IRC logging is that you have to do it manually.
> Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor. They weren't in a special IRC format, and you didn't open them with your IRC client.
Yes, that's exactly the point. You owned this data, you could do whatever you wanted with it, and it was stored in a format that was both trivial and most fitting for the data stored.
> Hell, you couldn't open them with your IRC client.
I'm pretty sure you could in some clients, and some definitely pulled stored logs to backfill the chat after restart. Though I haven't done that myself (instead I relied on a bouncer to supply the backlog on connection).
> There's no reason you can't just ctrl-C a bunch of stuff out of a Slack chat and ctrl-V it into your text editor. Only difference between that and IRC logging is that you have to do it manually.
That's a world of difference. In Slack it's painful to do, and if you didn't do it when you first saw a message, it's going to be even more painful to do after the fact.
> they were just text files you opened in any text editor. They weren't in a special IRC format
Of course they were! Using anything other than text files to store chat logs would be idiotic. The main point is that Slack is a user-hostile application that does not even allow you to do that. Why people put up with this is beyond me.
> Sure, most clients let you automatically save logs, but they were just text files you opened in any text editor.
Right, my point is why doesn't Slack do that? Then you could use `grep` or `find` to search across all your messages and avoid paying a monthly fee to access your entire message history and...oh, right.
You can only search a log for the time that you were logged in and saving it. You can search the entire history of many Slack channels from the very first message posted in it onwards, even before you'd ever joined it. That's a significant advantage over a local file.
I'm not particularly keen on Slack but suggesting search would be better locally is plain silly. Search is obviously better done at the server.
> You can only search a log for the time that you were logged in and saving it.
That ignores the possibility of the data being synced between local system and the server. Slack is already in the cloud, so cloud options are on the table. And so is syncing data.
> suggesting search would be better locally is plain silly. Search is obviously better done at the server.
The reverse is obvious to me. Slack search, like all web SaaS search tools, is really bad. I could do better with grep - and it would work faster, and I could actually trust that it searches through all the messages, instead of giving me some eventually-consistent view into results of a query that is only tangentially related to what I requested. And they wouldn't be able to tell me which properties I can or can not search by.
And I'm not talking theory - in the past, I did some spelunking in many years' worth of IRC logs in a folder on my drive, and the experience was much better than searching for anything in Slack.
> Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it?
Just curious, does anyone know of any hybrid file formats that store information both locally and online?
It seems like one solution to this problem would be a document that stores an editable copy locally and a revision hash in its metadata, then decides whether to serve up the local or cloud copy depending on whether the user is connected to the internet.
Sure, this could cause conflicts between the local and cloud copies if someone else edits the file at the same time as you, but that's true of any cloud-sync service like iOS Notes.
I guess in retrospect I'm just describing Dropbox which, while it's more a container for standard files than a file format in itself, has largely the same effect.
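The local-copy-plus-revision-hash idea sketched above can be made concrete in a few lines. All the names here are made up for illustration; a real implementation would also need conflict handling for local edits made while offline:

```python
from dataclasses import dataclass

@dataclass
class HybridDoc:
    local_body: str
    synced_revision: str   # hash of the cloud revision the local copy matches

def open_doc(doc: HybridDoc, online: bool, cloud_revision: str, fetch):
    """Serve the local or cloud copy depending on connectivity and staleness."""
    if not online:
        return doc.local_body          # offline: the local copy is all we have
    if cloud_revision == doc.synced_revision:
        return doc.local_body          # local copy is up to date, skip the fetch
    return fetch()                     # stale: pull the newer cloud copy

doc = HybridDoc(local_body="milk, eggs", synced_revision="abc123")
print(open_doc(doc, online=False, cloud_revision="def456",
               fetch=lambda: "milk, eggs, bread"))
print(open_doc(doc, online=True, cloud_revision="def456",
               fetch=lambda: "milk, eggs, bread"))
```

As the comment notes, this is roughly what Dropbox and similar sync clients do at the whole-file level.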
Yes, filesystem sync protocols like rsync do it at the FS level, and if you want to go deeper than the FS level, you get into the realm of operational transform and other rather complex algorithms.
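The operational-transform idea mentioned above can be shown with the textbook toy case: two users concurrently insert text, and transforming one operation against the other makes both application orders converge. This covers insert-vs-insert only and glosses over tie-breaking for equal positions; it is nowhere near a full OT implementation:

```python
def apply_insert(text: str, pos: int, s: str) -> str:
    return text[:pos] + s + text[pos:]

def transform_insert(pos: int, other_pos: int, other_len: int) -> int:
    """Shift an insert position if a concurrent insert landed at or before it."""
    return pos + other_len if other_pos <= pos else pos

base = "ab"
# Alice inserts "X" at 0; Bob inserts "Y" at 2, concurrently.
a_pos, a_s = 0, "X"
b_pos, b_s = 2, "Y"

# Apply Alice first, then Bob transformed against Alice:
r1 = apply_insert(apply_insert(base, a_pos, a_s),
                  transform_insert(b_pos, a_pos, len(a_s)), b_s)
# Apply Bob first, then Alice transformed against Bob:
r2 = apply_insert(apply_insert(base, b_pos, b_s),
                  transform_insert(a_pos, b_pos, len(b_s)), a_s)
print(r1, r2)  # both orders converge to the same text
```

The complexity the comment alludes to comes from handling deletes, equal-position ties, and long chains of concurrent operations, which is why real systems use libraries (or CRDTs) rather than hand-rolled transforms.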
A very insightful comment upthread observes that filesystem-centric computing worked for as long as collaboration was very limited. Once apps needed to move beyond that to collaboration at a finer-grained level, it fell apart, and apps started needing databases; in particular, databases that could link data from different users together, implying a shared privacy domain.
Was this change inevitable? The long-since-exiled and forgotten Hans Reiser wrote about this problem a lot back in the day (he murdered his wife, and obviously his ideas lost any traction at that point). His thesis predated a lot of the concerns about privacy and central control that we see today, but briefly, he argued that part of why this was happening was that filesystem technology was not good enough, because it couldn't handle very small files and because POSIX had some unnecessary limitations. Due to this lack, apps were constantly forced to invent filesystem-within-a-file formats: OLE2 and OpenDoc were both centred around this concept, SQLite obviously is one too, and ZIP as well; really, most file formats can be viewed as a collection of small files within a file.
The idea was, if you upgrade filesystem tech, you can radically change how apps are written.
The problem is that operating system tech on servers and desktops has been stagnant for years. Microsoft and Apple lost interest in their primary operating systems and the open source world has never really been interested in going beyond 1970s design ideas, largely because cloning and adding small elaborations to commercial designs is the way the community stays unified. Look at the mass hysteria that followed systemd, which is one of the only upgrades to the core UNIX OS design patterns in decades. Actually making changes to the core of POSIX isn't something that's going to come out of that community. It'll probably take some company that wants to innovate on the core ideas again.
> > Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it?
> Just curious, does anyone know of any hybrid file formats that store information both locally and online?
I'm sure there are better ways to do it, but MS Office can, AFAICS, at least kind of do that: Documents stored in -- wossname, OneNote? SharePoint? One of those, I think -- can be edited in-place by Office Web apps, or downloaded for editing in the regular desktop apps and then saved back on-line and/or locally. If they can do that, I'm sure other apps can also do it (and probably better).
> this argument is dead and nobody will ever go back to native apps again
This snippet has really lit the touchpaper. A long time ago, I predicted that the world was on course to deliver insta-compiled applications through a browser, as though we'd have a Visual Basic runtime environment plugin. We're now essentially there, with XHR and many-megabyte JS bundles manipulating the DOM through the browser's "widget" engine.
There's really a tipping point for each application, where the application's functionality determines where it is better served. For instance, no one's going to make a web app out of Logic Pro any time soon. However, if someone comes up with a stateful protocol to implement in current browsers, then that tipping point flips to the web for a whole bunch of applications.
I'd argue that in recent years, most people can't tell the difference between a native Swift app on iOS and a Kotlin app running on a midrange Android phone. I agree that Apple's approach is more technically correct here, but Android's approach is also pretty sustainable.
I think those are both the native app targets of their respective platforms though? The Kotlin app targeting Android APIs is not cross platform. I gathered they were more drawing a comparison between native and some HTML/JS/CSS thing.
Ah, so this takes us into the question of what "native" means.
Some people use the word native to mean "the way apps were written in the 90s and on Apple platforms, still are written". It's short hand for manual memory management, full commitment to the operating system vendor's APIs, and so on.
Apps written that way have some big advantages for end users: consistency, low memory usage, and so on. But they suck for developers. Manual memory management sucks, having your app's market share limited to the operating system's market share sucks, and often the vendor APIs suck.
Some people use the word "native" just to mean "uses the operating system specific APIs". The other aspects like being written in an AOT compiled manually memory managed language don't count. For those people Android apps written in Kotlin running on a JVM are native, but the other people, not so much.
> spoiler: this argument is dead and nobody will ever go back to native apps again
I think in the world of app stores this is a little odd to argue. Native apps on the desktop do seem to be on the way out, but less so on tablets and mobile phones.
Most of my work is in native apps. It's just a better experience. The browser is great for communication, and it really shines for text based communication, but in my experience that's the only place where it outshines native apps. And remember, it's not like native apps can't back things up to the cloud, so there is a false dichotomy that you have to do everything in a browser that you want backed up to a cloud. I have no problem working with IntelliJ products and then pushing to a remote repo. My Photos, Music, etc are backed up to iCloud but aren't viewed in a browser either, etc. Zoom is launched with a browser link, but it opens a native app. MS Office has a cloud drive and is even sold as a service but I use the native Excel and Word rather than in-browser versions. I just don't see this migration to browser delivery for the apps I've been using.
It's 2021, and people really aren't using the App Store like they used to, IMO mainly due to the insane rise of subscription-based applications for things as small as calculator apps.
There are lots of good thoughts in your argument, but I disagree with the "should save their information to disk".
This may make sense for technical people with a specific goal, but most users shouldn't have to care where it is saved, à la Dropbox. They just want to access their files. Online, offline, everywhere; that's what they want.
> but for most users, they shouldn't care where it is saved, ala dropbox. They just want to access their files.
Yes, but it does matter where it's saved, because the location and method confers ownership of the data. "Possession is nine-tenths of the law" is the rule of modern Internet. It shouldn't matter whether my photos live on my drive or in a third party's cloud, but it does - because in the cloud setting, the company dictates what I can and cannot do with my data, can pull shenanigans like applying strong lossy compression to uploaded photos, and they will eventually take my access away - either I cross the ever-expanding terms of service somehow, or they'll just go out of business.
In my experience of both managing my own data and helping non-tech people, filesystem vs. cloud data durability is really a wash. People seem just as likely to lose their local data due to drive failure or accidental deletion, as they are to lose access to the cloud storage (or have the company disappear from under them).
It would be nice if they didn’t have to care where the information is stored. And maybe that is the case 90% of the time. But that other 10% matters a lot and I don’t see that changing anytime soon.
It's not only ads. Autodesk I believe does online rendering now, and for most CAD drawings even a cheap APU can handle that level of geometry, but it's harder to justify a recurring revenue model for a fully local application.
I have trouble believing that this is the fault of the techie lobby, considering that said lobby otherwise has no meaningful accomplishments under its belt. My explanation would be that the web is massively successful because it enables users to navigate safely without being in danger of leaking their files. If a user isn't willing to install an app to do a task, it is precisely because they fear that such an app will be able to do unknown damage to their computer. Allowing the same of web apps eliminates their advantage and endangers users.
We are living in a multi-device, instant-access, access-anywhere, cloud-based world, and the desktop file-based paradigm has trouble with this reality. The vast majority of non-technical people would struggle with desktop-based files when they want everything everywhere all the time on every device at any moment.
Is that what they want, though? Most people that work an office job, at least, still deal with Excel, Word, and Outlook. Maybe they'll set up work email on their phone, but that's probably it. I've noticed a pushback (true for myself, too) ever since the push from work-supplied devices to BYOD.
People are realizing mixing private and work communications on the same device is a bad idea. The kicker is, this isn't even some kind of corporate conspiracy. It's just human nature - if you hook up your business e-mail to your private phone, you are going to be checking it after work hours, you will start responding to e-mails, and your work habits will shift to account for that.
That's why I just don't do that first step. The only connection between my current smartphone and my work is some TOTP keys in the authenticator app, to enable more convenient login to some cloud services the employer makes us use. I talked with a co-worker recently, who made the mistake of installing work communications on their personal smartphone, and they very much regret it - not because the company is exploiting it, but because they can't discipline themselves to not check business messages after work.
Yeah, well, it's not 100% his fault. Browsers should default to "Ask every time" for the download file location, instead of just bunging everything into a default "Downloads" folder.
But that's really fucking easy to change, so still 90% his fault. (Or 95, 99...?)
> don't know if there is any real way we can go back to the P2P paradigm without destroying NAT
I think it's time for NAT to go for residential IPv6, and the numbers I've seen show that TURN isn't required for most connections. Unfortunately, universities and businesses will probably never remove NAT, as there is limited incentive to do so.
I think the best we can do is have somewhat decentralized networks with limited yet trusted centralized authorities (the need for discoverability will always remain, even when using otherwise decentralized networks like SSB). This could be IPFS with bootstrap nodes for their DHT, or as I have been using to circumvent NAT when latency is unimportant, Tor directory authority to host ephemeral, local onion services.
Not just that: we have a lot more metadata these days, and not everything can be a file. If you keep all the metadata and database-like files accessible to the user, how do you handle store corruption?
E.g., a video recording/playback app that allows the user to save bookmarks/timestamps. You'd need some place to store those bookmarks, extract frames, generate multiple resolutions for both the video and the frames (for gallery previews etc.), possibly add some more metadata...
It's much easier to hide the actual files from the user and give them the option to export the data in some user-readable format.
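A minimal Python sketch of that hidden-store-plus-export pattern; all the names here (`BookmarkStore`, the on-disk layout, the TSV export format) are hypothetical, not taken from any real app:

```python
import json
from pathlib import Path

class BookmarkStore:
    """App-private bookmark storage with a user-facing export escape hatch."""

    def __init__(self, store_dir: str):
        # Internal store: an opaque, app-private file the user never touches.
        self.path = Path(store_dir) / "bookmarks.db.json"
        self.bookmarks = []
        if self.path.exists():
            self.bookmarks = json.loads(self.path.read_text())

    def add(self, video: str, seconds: float, label: str) -> None:
        self.bookmarks.append({"video": video, "t": seconds, "label": label})
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.bookmarks))  # app-private format

    def export(self, dest: str) -> None:
        # The user-readable export: one tab-separated line per bookmark.
        lines = [f"{b['t']:.1f}\t{b['label']}\t{b['video']}"
                 for b in self.bookmarks]
        Path(dest).write_text("\n".join(lines) + "\n")
```

The app stays free to change its internal format between versions; only the export format is a promise to the user.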
Apple is notorious for this. Everything is a soup of folders and files with hashes and .plist files. Similar story with iOS and Android.
> allow you to e.g. load and save documents off your disk.
Isn't this trivial? A download button = "save" stuff from the app to disk. An upload button = "load" from disk into the app. AFAICT, web apps can already do this via existing file APIs.
This isn’t the same: download/upload can be used to simulate a file system, but it doesn’t preserve file identity the way open/read/write does.
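To make that distinction concrete, here's a minimal Python sketch (standing in for the browser APIs; the filenames and contents are made up). The download/upload path produces a brand-new file with no link back to the original, while an open/read/write handle edits the same file in place:

```python
import os
import tempfile

# Create an original document on disk.
src = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
src.write("draft v1")
src.close()

# Download/upload simulation: the app reads a copy, edits it, and the user
# "downloads" the result as a new file with no link back to the original.
downloaded = src.name + ".download"
with open(src.name) as f, open(downloaded, "w") as g:
    g.write(f.read().replace("v1", "v2"))
assert not os.path.samefile(src.name, downloaded)  # a different file entirely

# open/read/write: the edit lands in the original file, so its identity
# (path, inode, permissions) survives.
ino_before = os.stat(src.name).st_ino
with open(src.name, "r+") as f:
    text = f.read().replace("v1", "v2")
    f.seek(0)
    f.write(text)
    f.truncate()
assert os.stat(src.name).st_ino == ino_before  # same file, updated in place
```

This is why a download button can't replace a real save: every "save" mints a fresh copy instead of updating the document other tools already point at.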
I don't think that writing a native app using Electron qualifies it as a web app. VSCode is not normally used from the browser, it is downloaded and used more or less fully disconnected from the MS infrastructure you used to download it (give or take some plugin updates).
The social media content you've listed is nothing anybody wants to keep. Let's be real here: this is throwaway information. Meanwhile, outside the edgy cool startup bubble, the rest of professional information is still being saved somewhere. Sometimes even saved AND printed. No matter if it was on Slack, Teams or wherever.
Here’s an interesting example of an app that runs in browser and opens/saves SQLite GeoPackage files on the computer running the browser instead of a remote server.
Some other reasons for saving files in the cloud not mentioned:
- lets you access them cross-device more easily (replicating the files on each device could work, but uploading to the cloud seems easier);
- backup, in case my device breaks or is lost.
> (spoiler: this argument is dead and nobody will ever go back to native apps again)
Oh god, please, no. I don't really understand the disconnect between your love of filesystems and your disdain for native apps. The exact same arguments apply: they can't serve ads locally, they don't autoupdate (a feature! they rot slower), you can access them any time, and they don't need a network.
Why would you want a webapp? Because it has flashier animations and SVG? There's a long conversation to be had about this, but the summary is no, no, god no, please give me back my native applications with drop-down menus and boring file picker dialogs. As long as they have decent keyboard shortcuts, I'll manage.
Who said they disdain native apps? They simply believe that web apps have/will continue to take over. You aren't going to get back native applications for every product just because you prefer it or that to you it is better. There are factors more powerful than that which are determining that web apps are more suitable.
For example: the ability to serve ads, autoupdate, A/B testing, tracking.
Maybe those things are bad for you but are they bad for the people who make the product? No, they're pretty good things for the company, maybe even expected in 2021. Your preference against those things doesn't change that fact
As someone who has been designing stuff in native file-based apps for 30 years, I consider Figma a godsend -- because of their collaboration / multiplayer features. And I don't miss files at all, though of course I understand that I do not control my data stored in Figma.
It points to the need for native software to allow collaboration and this is actually happening slowly.
I still hate Figma though. It's too primitive, and I'm sick of being handed Figma links where half the time the necessary resources are not isolated, or not vector, or not exposed at all.
That depends on what you mean by local files: on the Mac, at least, it will keep local backups. See the Backups menu item. Admittedly, the backups are not in a convenient format, being JSON files containing encrypted blobs. But there is a utility for decrypting all the files. So if you are merely worried about losing access, you're covered. As long as you don't lose your password, that is.
But for sure, other solutions work better if you have less stringent security requirements.
> Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content.
This I do not understand - mobile and web content has easily been monetized for a long time now, so why would desktop software be any different?
For example, i use software called RaiDrive for mapping network drives on Windows (https://www.raidrive.com/). In their free version, they show ads on the main app window after you open it.
Why isn't this the norm on desktop - ad supported but free software? Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
> Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
Good lord, please, no. Desktop adware should remain a bad dream from the 90s and early 2000s. It ruined software like Opera, and was often bundled with spyware.
I'm glad advertisers mostly embraced the web, where I can run their code relatively sandboxed and easily block it. A desktop app has far fewer restrictions on the resources it can access, so allowing software that actively wants to track and manipulate you to run in that environment doesn't seem like a good idea. That it was acceptable in the 90s, given the complete lack of security in the popular OSs of the era, is a bit nutty, and while modern OSs are much more secure, I still wouldn't run anything ad-supported. F/LOSS or paid apps only for everything under my control. Subscriptions are tolerable in some cases.
> It ruined software like Opera, and was often bundled with spyware.
Wait, I hate ads as much as the next person, but how did they ruin Opera? Opera was originally trialware-only, then for several years replaced the trial with an ad-supported version (with the full version still available for purchase), and then became entirely freeware.
I suppose the nuance of "ruined" is down to personal preference, but I was annoyed by the large banner ad placement and stopped using it shortly after they added it. Purchasing wasn't an option back then for me.
This was also done in other software like Go!Zilla and it made the UI unusable IMO. It was a very disruptive and obnoxious way to monetize a project. Not sure if they improved this later before the move to freeware, since soon after I switched to Phoenix/Firebird and never looked back.
Right, but if you couldn't purchase it, Opera wasn't "usable" prior to their ad-supported version either - it had to be purchased once the trial ran out.
Oh, I'm not necessarily advocating for it but the reasons behind it seem interesting, whatever they might be.
Is it a matter of cultural differences - people don't seem to mind ads as much online - or would there be backlash from OS app distribution channels if devs attempted to monetize software in the Windows Store, for example (I don't really use UWP apps, so no idea), or is it something else entirely...
That said, I feel like the option not even being there is limiting in and of itself. Suppose I'm a developer who wants to create software that's free to download and use, but ad-supported. Right now I can't realistically do that. As for those who would prefer no ads, there would always be the possibility of a paid version without ads, of altering the hosts file to block the ads, or of downloading the app's source code, removing the ad-integration code and compiling it themselves.
Though there are also interesting technical aspects, such as our inability to sandbox most native apps (short of AppImage, Flatpak and Snaps, and even then there are other challenges), which may contribute to spyware in desktop apps. Plus I bet there's a large difference between showing an ad in an app and being able to mess with the OS's default browser settings and so on...
> Why isn't this the norm on desktop - ad supported but free software? Why aren't there ad networks for desktop apps like there are for mobile apps and web content?
My guess is that the desktop model doesn't assume live internet access. My understanding of ad networks is that they involve a live bidding process among interested parties at the moment you load the page. This allows integration of live geo data, what you just searched, what you just looked at previously, etc. And you don't have to reconcile what was actually served, for charging the advertiser's account, when an app goes offline and then rejoins.
How would Facebook guarantee you use their app to look at their stuff? If the answer is "some proprietary format that only the app can read" then what's the point?