It's the complete opposite, and it's one of the reasons I prefer a comparable web app over a native desktop app. Installing an app requires far more trust than using a web app that runs inside a heavily restricted sandbox.
As for why I also prefer developing web apps: the available tools make web development more productive. Checking changes to the source is near-instant, just press F5 in the browser, and the dev tools in browsers are some of the best you can get for debugging.
Even on iOS, which keeps apps in a fairly restrictive sandbox and puts them through some kind of review process, I wouldn't trust an app like Facebook's or Reddit's not to be gathering some other kind of information to use somehow.
There is a reason Reddit insists on you downloading their app.
I think there are other possible motivations, too. Not to say that you are wrong (you aren't), and not that this necessarily applies to Reddit specifically, but there is something to be said for having your service's icon implanted on the user's home screen, possibly as one of the first things they see every time they open their phone. That builds muscle memory for opening your app, making it that much easier to hop on and start doomscrolling. Having an app also makes it easier to drive engagement through notifications.
I think it's eye-opening how unsuccessful web notifications have been on desktop. My belief is that most people would not turn on notifications for most apps that send them if Android prompted the way web notifications do (I think iOS also defaults to allowing notifications without prompting for permission, but I'm not sure).
Android doesn't even have an option to block notifications by default, allowing them only when explicitly enabled in settings; each time you install a new app, you have to turn its notifications off.
I just think the entire concept of desktop-targeted notifications is superfluous, so it doesn't surprise me that it's unpopular. If I want a notification about something, I probably want it to go to my phone. Sure, it's maybe nice to also be able to browse my phone's notifications from my desktop, but I don't need two actual "notifications", since it isn't like I don't have my phone near me 24/7. What I'd really want is for the "notification" to happen on my phone and then let me remotely browse a synchronized list of notifications (preferably end-to-end encrypted to the running phone, not through a server) from my desktop... but registering for a notification only on my desktop, or even my laptop, seems like an extremely niche value proposition.
iOS does not allow notifications by default. Apps must display a system pop-up to request permission to display notifications, and the option is there to deny permission.
Whether users have already fallen into the same cycle as cookie consent pop-ups and, more recently, GDPR pop-ups, just clicking "allow" by default because it's the easier option, is a different question.
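For comparison, the web's equivalent gate is the Notification permission API; a minimal sketch (the `enableNotifications` helper name is my own, and the check is guarded so it degrades gracefully outside a browser):

```javascript
// Sketch of the web Notification permission flow. Browsers never allow
// notifications silently: Notification.requestPermission() triggers a
// user-facing prompt, and the user can deny it.
async function enableNotifications() {
  if (typeof Notification === "undefined") {
    return "unsupported"; // e.g. running under Node, or no API available
  }
  // Resolves to "granted", "denied", or "default" (prompt dismissed).
  return Notification.requestPermission();
}

enableNotifications().then((result) => console.log(result));
```

The "default" state is what makes the web model opt-in: until the user explicitly grants permission, no notification is ever shown.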
Necessarily networked applications, on the other hand (where the entire purpose is to interact with people and data that are geographically distributed), are perfect for running in a browser, and yes, I agree that I trust the browser much more than someone's native app. For applications that don't need to be networked, though, give me native.
No it's not. To use a web app I have to have an internet connection to even start the app. Anything I click on that loads a new resource sends data to the server about what I'm doing. I can unplug from the net, install desktop software and run it without it being able to send anything anywhere.
Every single thing you just mentioned both occurs and doesn't occur with both web and native apps depending on what the app was built to do.
- Web apps can require internet
- Web apps can not require internet
- Native apps can require internet
- Native apps can not require internet
- Web apps can inform a server about what you're doing
- Web apps can not inform a server about what you're doing
- Native apps can inform a server about what you're doing
- Native apps can not inform a server about what you're doing
- You can unplug from the net, install desktop software (native app) and run it without it being able to send anything anywhere
- You can not unplug from the net, install desktop software (native app) and run it without it being able to send anything anywhere
- You can unplug from the net, install desktop software (native app) and run it without it being able to send anything anywhere until you plug back into the net
- You can unplug from the net, install desktop software (web app) and run it without it being able to send anything anywhere
- You can not unplug from the net, install desktop software (web app) and run it without it being able to send anything anywhere
- You can unplug from the net, install desktop software (web app) and run it without it being able to send anything anywhere until you plug back into the net
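On the "web apps can not require internet" rows: that's what service workers enable. The core cache-first decision can be sketched as a plain function; the cache and network lookups are injected here (hypothetical names) so the sketch runs anywhere, whereas a real service worker would use `caches.match()` and `fetch()` inside a `fetch` event handler:

```javascript
// Cache-first strategy: answer from the local cache when possible,
// hit the network only as a fallback. In a real service worker this
// logic lives in a `fetch` event handler; here both lookups are
// injected so the sketch runs outside a browser, too.
async function cacheFirst(request, cacheLookup, networkFetch) {
  const cached = await cacheLookup(request);
  if (cached !== undefined) return cached; // works fully offline
  return networkFetch(request); // online-only fallback
}

// Toy demonstration with an in-memory "cache" and a network that is down.
const cache = new Map([["/app.js", "cached bundle"]]);
const fromCache = async (req) => cache.get(req);
const fromNetwork = async (req) => {
  throw new Error("offline: " + req);
};

cacheFirst("/app.js", fromCache, fromNetwork).then((body) => console.log(body));
```

With every asset cached at install time, the app starts with no connection at all; only uncached requests ever touch the network.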
The average user and use case typically has an internet connection, so you're talking about edge-case scenarios. In a regular scenario, a native app can send whatever it wants about your system to a server, including your sensitive photos and private information.
> Anything I click on that loads a new resource sends data to the server about what I'm doing.
Nothing stops native apps from doing that. Plenty require you to have an internet connection and coerce you into telemetry. The difference is that a web app can only send data the browser allows it to have, while a native app can read all the sensitive data on your disk.
Or living in a rural area with crappy Internet. I've installed wireless Internet in places where the people are wealthy but could only get a crappy DSL line. An Internet connected app, web or native, was a pain in the butt to use.
Mobile users can have problems too. I still come across dead zones in network coverage that render some apps useless and I'm not even outside of town.
I am not at all concerned about an application developer having access to data about my usage of his app. (In fact I have a hard time empathizing with people who are so concerned).
I am extremely concerned about an application developer having the run of other data which has nothing to do with his app, which is always the case on a general purpose computer, except insofar as it’s been made into a walled garden.
As a user, I am unsure that is a positive. There are near-daily stories about how feature X of a program was removed/hidden/etc for seemingly no reason. Using an offline-only program ensures some UI developer does not get to swap buttons on me for "engagement".
This is why I am still using an arse-old version of Mailwasher - the newer versions started sharing data with the mothership, including your eMail passwords, just so it could transparently sync that data to their iOS & Android apps.
My passwords are between me and my eMail server. No-one else should get that data, for any reason whatsoever.
I think most folks allow auto-updates because we want new features and security fixes. So, while you're technically correct, I think the practical result is that it's no different on this axis than a web app.
I believe the rationale here is, sure, you need to trust that the vendor isn't giving you malware, but for most software makers that have existed for longer than a few months, you have a reasonable track record to grant that level of trust. On the other hand, trusting that your data won't be exfiltrated somehow is much harder because of how widespread the practice is. But at least desktop software can, in principle, run without network access. A web app cannot.
Beyond that, you can also, in principle, run a desktop app in a sandbox and/or audit it in some way to observe its behavior and assure yourself it isn't malicious, and only then use it on a sensitive host. A web app, in contrast, can't even be guaranteed to serve you the same version of itself from one second to the next, let alone guarantee that any change made to it between requests wasn't malicious. The sandboxing done by the browser is the only protection you have (short of only ever visiting the web at all from a sandboxed host).
Then get into applications like basic photo processing and document editing. Sure, it can be done server-side via the web, but the only way for that to be possible is for you to upload all of your photos across the network to the server, along with all the threat models that entails. If the software is running on your host, your data can stay on your host. I'm assuming here that pure JavaScript is still not really used for compute-intensive applications, and that things like doc-to-pdf conversion and photo smoothing in web apps still mostly rely on server-side processing.
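That assumption is worth hedging: light pixel work can run entirely client-side in plain JavaScript (or faster via WebAssembly). A toy 3x3 box blur over a grayscale image stored as a flat array, just to illustrate that no upload is inherently required:

```javascript
// Toy 3x3 box blur over a grayscale image stored as a flat array --
// the kind of pixel work that can run entirely on the client.
function boxBlur(pixels, width, height) {
  const out = new Array(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      let count = 0;
      // Average the pixel with its in-bounds neighbors.
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const nx = x + dx;
          const ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
            sum += pixels[ny * width + nx];
            count++;
          }
        }
      }
      out[y * width + x] = Math.round(sum / count);
    }
  }
  return out;
}

// A 3x3 image with one bright pixel in the middle spreads out evenly.
console.log(boxBlur([0, 0, 0, 0, 90, 0, 0, 0, 0], 3, 3));
```

In a browser the same loop would read from an `ImageData` buffer off a canvas; nothing about it needs a server.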
You mean like “trusting” that Zoom won’t secretly install a web server on your computer that reinstalls itself when you uninstall it? Or trusting that Chrome won’t corrupt your Mac when you turn off System Integrity Protection?
Browser sandboxes are among the best in the world. Chrome has an entire team focused on sandboxing the browser nonstop, and every single tab is sandboxed from the next, too.
If you’re using a web app, your data lives on someone else’s server. Not on a computer you can physically control or even—perish the thought—disconnect from the internet.
Oh cool, the app that handles all my private financial and/or health records runs on the internet and keeps my information in a datacenter in northern Virginia, but at least it can’t get access to the data on my local drive, which by the way there isn’t any because all of that data is in another web app and stored in quite possibly the exact same datacenter!
But there's really no need for the app to do that: APIs like localStorage and IndexedDB are in place to let you keep all of this client-side. Browsers have come a long way from being just HTML renderers.
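As a sketch of the localStorage side of that (with a tiny in-memory stand-in, so the snippet also runs outside a browser):

```javascript
// Persist app data client-side with the Web Storage API -- no server
// involved. Outside a browser (e.g. under Node) we substitute a small
// in-memory stand-in so the sketch still runs.
const store =
  typeof localStorage !== "undefined"
    ? localStorage
    : (() => {
        const m = new Map();
        return {
          setItem: (k, v) => m.set(k, String(v)),
          getItem: (k) => (m.has(k) ? m.get(k) : null),
        };
      })();

// Save and reload a small document entirely on the client.
store.setItem("draft", JSON.stringify({ title: "notes", body: "offline edit" }));
const draft = JSON.parse(store.getItem("draft"));
console.log(draft.title);
```

For larger or structured data (blobs, indexes, transactions) IndexedDB is the heavier-duty option; the principle is the same: the data stays on the user's machine.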
There's no need for web apps to store all of your data in the cloud, but almost all of them in the real world do it anyway. Users have been conditioned to expect that if they log into the same web app from two different devices that their data will still be there, and this can be a huge convenience, but it's also less private and less secure than what we had with desktop apps.
I think your statement on the security of desktop apps is a tad misinformed.
Desktop apps do have to adhere to the same system security permission that the browser provides. WebApps can be even more intrusive than a desktop app because you're constantly sending signals to a central set of servers with a unique browser fingerprint. You also lose control of updates, and would be completely unaware of new tracking dependencies being injected. The data you create with a web app is and always will be property of the company that manages it.
Desktop apps being packed into Flatpak is a good start to addressing sandboxing desktop apps, imo, and touches on your concerns as well.
> Desktop apps do have to adhere to the same system security permission that the browser provides.
A desktop app on an average user's PC has access to most files on disk, including sensitive data. While a browser has that access too, apps running inside the browser's sandbox do not.
> Desktop apps do have to adhere to the same system security permission that the browser provides. WebApps can be even more intrusive than a desktop app because you're constantly sending signals to a central set of servers with a unique browser fingerprint. You also lose control of updates, and would be completely unaware of new tracking dependencies being injected. The data you create with a web app is and always will be property of the company that manages it.
A browser is just an arbitrary binary; a desktop app is capable of anything a browser is, and much, much more.
Yes, it's possible for web apps to integrate things like FullStory that let devs monitor people like a citizen of Zalem. But local GUIs are doing that too these days. For instance, someone posted a Show HN thread a few months ago of a terminal GUI they built that had FullStory integrated. The author said mea culpa and removed it, since all he probably wanted to do was fix bugs. But my point is that everything creepy browsers are able to do, local apps can now do too, and then some. On the other hand, local apps can be positively the most secure, and they're the foundation on which big companies are built. But what distinguishes the apps that empower you from the ones that disempower you isn't obvious, so I'll explain how I do it.
The question people always ask is how can we build a technology that makes being evil impossible? Like sandboxing. And that's usually the wrong question to be asking, because it's a people problem, not a technology problem. What we need is transparency. The ability to monitor the monitors. If you can empirically see what a local app is actually doing, then you can say for certain that it's more trustworthy than anything else. So how do we do that?
Well, for starters, here's a 1KLOC tutorial (similar to Antirez's Kilo editor or Linenoise) on how to build your own version of the `strace` command in pure C. https://github.com/jart/cosmopolitan/blob/master/tool/build/... If you monitor the system interfaces then you don't need to read the source code. It's analogous to watching someone's behavior rather than reading their DNA. But the nice thing about ptrace() is it gives you the power to control the interfaces in addition to monitoring them. For example, you can disable the socket() system call and see if anything breaks. If it does, and there's no apparent reason for it to need sockets, then maybe you shouldn't be using it. Another good tool that might help you control your digital spaces is Blinkenlights. Right now it only supports static binaries, but if you have one of those and you want to learn more about how it works, then you can watch what it does with memory in real time. https://justine.lol/blinkenlights/
This is the same philosophy behind Chrome's incredible debugger GUI (which is something that sadly local electron-like apps have the power to take away) because transparency is the bedrock of trust for those of us who aren't whole hog on faith in institutions. It's always surprised me that more people haven't been focusing on building tools like this. It also makes me sad when certain open source apps (which shall remain nameless because I don't want to be flamed) go out of their way to make strace output incomprehensible. The libertine use of dependencies is part of the problem. For example, you might not be using sockets, but maybe your programming language or ansi color framework library does, due to some API you didn't even know it had. So if you're a developer, you've really got to monitor this stuff, because if you don't your users will. And if you learn about it the first time from your users, then you're going to lose out on a lot of potential.
It's obviously true. Executing random binaries from the internet directly on your machine is clearly much less secure than executing a js script in an extremely hardened and restricted browser sandbox.
Which isn't true given all the access the latest HTML5 apps want to your HDD, webcam, etc. They can take over your system silently and you don't need to click on anything.
> They can take over your system silently and you don't need to click on anything
No, this is totally wrong. All browsers require explicit permission to grant access to hardware resources, they cannot "take over your system silently"... unlike an arbitrary binary.
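A sketch of what that gate looks like from the page's side (the `requestCamera` helper name is my own; `getUserMedia()` always goes through a user prompt and rejects if refused):

```javascript
// Camera access in a browser is never silent: getUserMedia() triggers a
// permission prompt and rejects if the user refuses. Guarded so the
// sketch degrades gracefully outside a browser.
async function requestCamera() {
  if (typeof navigator === "undefined" || !navigator.mediaDevices) {
    return "unsupported"; // not a browser context
  }
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    stream.getTracks().forEach((t) => t.stop()); // release the camera again
    return "granted";
  } catch {
    return "denied"; // user refused, or no camera available
  }
}

requestCamera().then((result) => console.log(result));
```

The same prompt-first pattern applies to the microphone, geolocation, and filesystem pickers: the page can ask, but it cannot take.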
This is absurd. A binary reviewed and vetted by a Linux distro is really unlikely to contain spyware, unlike 90% of webpages. The web is a well-known security dumpster fire.
Additionally, it's false that desktop applications are not sandboxed. On the contrary, the sandbox implemented around an application can be far more fine-grained than a browser's. Firejail is a good example.
Browsers are behemoths and you can look up for yourself how many vulnerabilities they have and also the SLOC count.
Edit: silent downvotes? Leave it to HN to believe that webshit is more secure than desktop applications. This is material for /r/ShitHNSays
Do you execute shell scripts that are curled from the internet?
Why not? The reason is probably the same why other people argue that they don't want to install desktop apps anymore.
They don't trust those apps, because the security model in place doesn't live up to their expectations. Most users don't use OpenSnitch, SELinux or Firejail because those tools, honestly, suck for normal users.
We need to make app sandboxing easier: GUI-driven and, in terms of approachability, as simple as the Android settings app.
The dumpster of config fatigue that is SELinux is just a bad joke, and nobody will ever be able to use that tool correctly without making thousands of mistakes.
We have to build better profilers that use reasonable sandboxes by default and can generate a config automatically for end users.
The useless tech that is Flatpak/Snap/AppImage is pretty much not what it promised initially: nowadays it bundles a runtime, shared libraries, and everything else the app needs, yet it cannot even protect my user profile folder from the app I'm running.
I can't downvote you, but an arbitrary binary is unequivocally a much bigger security and privacy threat than a js script executed in the browser, this is an indisputable fact. My guess is that you're getting downvoted because you're confidently espousing an opinion that any security expert would easily disabuse you of, if you're willing to listen.
> This is absurd. A binary reviewed and vetted by a Linux distro is really unlikely to contain spyware
What's absurd is your special pleading of a Linux distro review to conclude that arbitrary code execution is more secure than a JS script. This is wrong on so many levels. The comparison is also specious because you're comparing a curated repository to arbitrary JS on the internet. You are also woefully misinformed if you think a "Linux distro review" precludes the existence of your vaguely defined "spyware"; arbitrary binaries (unlike JS scripts) have unrestricted socket access and quite regularly emit all kinds of telemetry over the internet.
> Additionally, it's false that desktop applications are not sandboxed. On the contrary, the sandbox implemented around an application can be far more fine-grained than a browser's. Firejail is a good example.
You have no idea whether or not an arbitrary binary is sandboxed before you execute it, thus it is capable of literally anything. That's not true of an arbitrary JS script, which is always sandboxed.
> Browsers are behemoths and you can look up for yourself how many vulnerabilities they have and also the SLOC count.
The top browsers are literally the most hardened sandboxes in the history of computing, and far more vulnerabilities are exposed through the uncountable ecosystem of arbitrary binaries than through browsers; many of those are never patched, and when they are, the fixes often don't reach users who don't upgrade. Additionally, the vast majority of browser vulnerabilities are of a modest threat level, with the higher-threat ones usually discovered by sophisticated security research firms and safely patched before ever being exploited in the wild.
> This is material for /r/ShitHNSays
Indeed. Try submitting this thread and see how that turns out for you.
How is installing a binary supposed to be more secure than a web page? A binary can do largely whatever it wants, especially for an average person that will grant it all the permissions it requests.
You are making a surprisingly good case for browser apps. With binaries, you're restricted to a tiny fraction of available apps that are carefully vetted to _reduce_ the chance of security issues, whereas with browser apps, you can run any untrusted web page. A malicious binary can spy on pretty much any private and sensitive data on your PC, while a malicious web page can only do some fingerprinting.