Unfortunately, the feature itself is vital for making web apps work in anything like a coherent fashion, so it isn't something that can be disabled (though there may be meat on the bones of permission-gating it).
I think there are some solutions to this problem, akin to "after a back navigation, you cannot add to the history state without a user interaction" -- or better, "the history stack can never grow beyond the number of user interactions". Basically, I should always be able to navigate back to the referrer in a bounded number of actions.
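As a sketch of what that budget rule could look like (this is a hypothetical policy, not an existing browser API; the class and method names are illustrative):

```javascript
// Hypothetical policy sketch: the history stack may never hold more
// entries than the user has performed meaningful interactions (plus
// the page they arrived on). pushState calls beyond that budget are
// rejected, so Back always reaches the referrer in a bounded number
// of presses.
class HistoryBudget {
  constructor() {
    this.interactions = 0; // clicks, key presses, etc.
    this.entries = 1;      // the page the user arrived on
  }
  recordInteraction() {
    this.interactions += 1;
  }
  tryPushState() {
    // Only allow a new entry if the stack is still within budget.
    if (this.entries <= this.interactions) {
      this.entries += 1;
      return true;
    }
    return false; // entry silently dropped
  }
}

const budget = new HistoryBudget();
console.log(budget.tryPushState()); // false: no interaction yet
budget.recordInteraction();
console.log(budget.tryPushState()); // true: one interaction buys one entry
console.log(budget.tryPushState()); // false: budget exhausted
```

The next comment's objection applies here too: a page can manufacture "interactions" by demanding clicks that look meaningful but exist only to grow the budget.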
I think that can still be gamed by forcing nonsense interactions on the user that are made to seem meaningful.
This is a really tricky one to solve because the protection that is intended to guard against it ("The user is aware the current domain they are accessing doesn't match the site they expect it to match") isn't working. I think that aspect is the larger problem... IRL, people know if they're standing in a Target vs. a used car dealership, but they rarely know if they're at target.com instead of target.used-car-dealership.com.
It's possible the browser's framing should be changed to make it harder to be confused about that (color-and-texture-hash the TLD and apply it to the URL bar as a background, so there's a major visual difference if I'm on the wrong site?).
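A toy version of that color-hashing idea (the hash, the palette, and the domain-splitting below are all illustrative; a real implementation would consult the Public Suffix List to find the registrable domain rather than just taking the last two labels):

```javascript
// Sketch: derive a stable background color from the registrable
// domain, so target.com and target.used-car-dealership.com render
// visibly differently in the URL bar.
function domainColor(hostname) {
  // Naive "registrable domain" extraction: last two labels.
  // (A real implementation would use the Public Suffix List.)
  const site = hostname.split('.').slice(-2).join('.');
  let hash = 0;
  for (const ch of site) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  // Map the hash onto an HSL hue; lightness stays high so the URL
  // text remains readable on top of it.
  const hue = hash % 360;
  return `hsl(${hue}, 70%, 85%)`;
}

// Same color across subdomains of one site, different colors for
// different registrable domains (modulo hue collisions).
console.log(domainColor('www.target.com'));
console.log(domainColor('target.used-car-dealership.com'));
```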
> If people have to install an app just to use your site, then many of them won't bother.
If you have some "thing" that is so grotesque that it needs to break a user's web browser in order to work correctly, you no longer have a "site". You have a creature; you have an application. Some people would step back at that point and maybe think about the path they are going down. Others would trundle forward, oblivious to the fact that they are hammering a square-shaped JavaScript peg into a round hole.
In the year 2022, the browser supports full-motion video, 3D rendering, dynamic audio synthesis, local key-value database storage, a USB access API, and an API for interacting with head-mounted displays and hand trackers.
The simple dynamically-linked page-displaying application you are imagining a browser to be is long dead, in much the same way the simple programmable calculating machine that was the personal computer was long dead by the time John Carmack got it to run Doom.
> If you have some "thing", that is so grotesque that it needs to break a users web browser in order to work correctly, you no longer have a "site". You have a creature, you have an application.
There is nothing "grotesque" about writing web applications.
That is the entire point of HTML5 + CSS3 + AJAX.
With modern Chrome-based browsers and Firefox I can write an application that can be used by Windows, Mac, Linux, BSD, iPhone and Android users who need only visit that site.
No installation necessary, and (most importantly) no gatekeeping of apps by Apple, Google or Microsoft.
> No installation necessary, and (most importantly) no gatekeeping of apps by Apple, Google or Microsoft.
This is kind of misleading, as you need to install a browser for it to work. Granted, most computers already have one, but the browser is acting as a "runtime" in this situation. Contrast with other programming languages such as Go or Rust, which produce a single executable that can be dropped in a folder and run.
Because they are major use-cases for the web browser framework. I mean, that's a bit like asking why you should care about Web Audio API, or the accessibility layer... The fact you're not using it doesn't mean it isn't vital for those who do.
> I mean, that's a bit like asking why you should care about Web Audio API, or the accessibility layer
Nope. Those are both useful elements of a browser that don't even require JavaScript to use. What you're talking about is the Frankenstein's monster that is web applications.
Not for users of Sheets, Docs, AutoCAD Web, myriad tools companies have built for their intranets, and hundreds of other apps, no. It will make them more complicated.
I can't see any argument for how removing javascript navigation will make apps more complicated. The desired functionality is literally built into the browser.
Not more complicated in the sense of "more code;" more complicated in the sense of "harder to use." You had said "win/win" and I disagree that making apps bigger, clunkier, and slower by requiring more server-side negotiation would make them better for end-users. The hard-coded behavior of the browser doesn't always match the user's concept of what has happened when performance optimizations are added in.
For example: for performance reasons, many web apps are logically divided into sections. Navigating between sections doesn't unload the current page; it retains it and context-switches to another page. This is done for several reasons (the main ones being performance and flicker-stoppage, so users aren't hit with a screen-blank navigating from one section to another). This trick is often accomplished by being very creative with the routing, so the user's experience is as if they are navigating around to different pages on a site (while in reality, a page navigation never occurs). console.cloud.google.com is an example of a site that works this way. AWS console does too; look closely while navigating around AWS and one will observe that the URL looks like https://us-east-2.console.aws.amazon.com/cloudformation/home..., i.e. the sub-panel is encoded locally in the fragment instead of up in the resource name, and going from subpanel to subpanel changes the fragment and pushes entries into history.
... but to get that seamless behavior, it has to inject history entries so that the back button takes the user to a previous logical page, not a previous URL the browser requested. Writing the app will be more complicated if the back button navigates the user entirely off of console.cloud.google.com instead of taking them to the previous pane they had open.
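For what it's worth, that fragment-routing trick is only a few lines in its essentials. This is a hand-rolled sketch, not how any particular console actually implements it; the panel names and `render` callback are hypothetical, and the pure fragment-parsing part is kept separate from the history wiring so it can be tested on its own:

```javascript
// Minimal sketch of fragment-based SPA routing. The pure part maps a
// URL fragment to a panel path; the wiring uses pushState/popstate so
// Back moves between panels instead of leaving the app.
function panelFromFragment(fragment) {
  // '#/cloudformation/stacks' -> ['cloudformation', 'stacks']
  const parts = fragment.replace(/^#\/?/, '').split('/').filter(Boolean);
  return parts.length ? parts : ['home'];
}

// Browser wiring; `render` is the app-supplied function that swaps
// panels in place without a page load.
function navigateTo(fragment, render) {
  if (typeof window !== 'undefined') {
    // Push a history entry so Back returns to the previous *panel*,
    // not the previous full-page URL.
    window.history.pushState({ fragment }, '', fragment);
  }
  render(panelFromFragment(fragment));
}

function installRouter(render) {
  // Back/Forward fire popstate instead of triggering a page load.
  window.addEventListener('popstate', (e) => {
    const fragment = (e.state && e.state.fragment) || window.location.hash;
    render(panelFromFragment(fragment));
  });
}

console.log(panelFromFragment('#/cloudformation/stacks')); // ['cloudformation', 'stacks']
```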
> Not more complicated in the sense of "more code;" more complicated in the sense of "harder to use." You had said "win/win" and I disagree that making apps bigger, clunkier, and slower by requiring more server-side negotiation would make them better for end-users.
People always say that, yet every time I encounter an app that doesn't use JS navigation, it always feels faster and smoother...
And as for them being bigger and clunkier, in almost all cases I've experienced, the frontend code is by far the clunkier and bigger of the two. But maybe it's because I use Ruby and other backend languages are less generous in their accommodations.
> The hard-coded behavior of the browser doesn't always match the user's concept of what has happened when performance optimizations are added in.
Eliminating JS navigation doesn't mean eliminating JS. You'd still be asynchronously modifying backend state and browser state on a single page, you're just not using it to change pages.
> For example: for performance reasons, many web apps are logically divided into sections. Navigating between sections doesn't unload the current page; it retains it and context-switches to another page. This is done for several reasons (the main ones being performance and flicker-stoppage, so users aren't hit with a screen-blank navigating from one section to another). This trick is often accomplished by being very creative with the routing, so the user's experience is as if they are navigating around to different pages on a site (while in reality, a page navigation never occurs). console.cloud.google.com is an example of a site that works this way. AWS console does too; look closely while navigating around AWS and one will observe that the URL looks like https://us-east-2.console.aws.amazon.com/cloudformation/home..., i.e. the sub-panel is encoded locally in the fragment instead of up in the resource name, and going from subpanel to subpanel changes the fragment and pushes entries into history.
Do you have a study that demonstrates the impact of this website organization with JS navigation and without JS navigation? There's a lot of talk about this being more performant and eliminating flicker, but are those real problems? Does HN flicker for you when you move across pages? I've heard these reasons a hundred times, but rarely see evidence that they hold up under scrutiny. Usually it's an after-the-fact justification rather than an upfront necessity.
> Writing the app will be more complicated if the back button navigates the user entirely off of console.cloud.google.com instead of taking them to the previous pane they had open.
I feel like this answer is almost intentionally forgetting that URI paths exist without JS. Am I missing something in your answer that explains why they are insufficient?
- the new page must be loaded from scratch, a new context set up, JavaScript started from scratch, and the page parsed and executed
Even if the resources between the two pages are shared and those shared resources properly cached, time is lost relative to the more instantaneous process of making partial edits to an already loaded page and only loading the JavaScript necessary to pull in features that weren't on the page navigated away from.
I recommend popping the browser inspector open when using the AWS console or Google Cloud console and looking at what's going over the wire. As the user navigates around, these web apps only load the chunks of the interface necessary, not the chrome or the sidebar or any of those other already-loaded components. Those components are already live in the JavaScript context and don't need to be rebuilt from scratch, because they were never destroyed; no page navigation occurred.
You can make the case that those consoles are overcomplicated and could be replaced with a handful of web forms (though not by comparing them to Hacker News; the relative complexity of the pages is apples to oranges. If I were to make the case, I would do it by saying that the Google App Engine console that predated the Google Cloud console was perfectly serviceable, clunky and slow as it was). But given that they are what they are, the dynamic loading is much faster than tearing down and building new pages as a user navigates around the panels in the console.
And given the way they work, the user expects the back button to go to the previous panel, not to navigate completely out of the web app because they happened to enter the experience from a specific URL.
> Even if the resources between the two pages are shared and those shared resources properly cached, time is lost relative to the more instantaneous process of making partial edits to an already loaded page and only loading the JavaScript necessary to pull in features that weren't on the page navigated away from
Technically, yes. Whether it's a significant difference depends on the engineering and, sometimes, the product design.
And the beauty of not using JS navigation is that you don't need to draw the page using JS templates. The dynamic data, sure, but the rest can load pretty instantaneously.
Out of curiosity, when's the last time you built a web app that doesn't rely on JS navigation, and architected it with that in mind? What was that experience like, and how does it compare to your JS experiences?
It was Ruby on Rails, it was forms-based, and it was hell. These days, to my personal taste, I much prefer writing a heavyweight client and a thin server that emits raw data and does very little HTML rendering or pre-processing, compared to the alternatives I tried before.
But even if the server is doing a lot of heavy lifting, the end user experience can benefit both in performance and bandwidth from minimizing page reloads. There's some pretty spiffy client and server template technologies now that let you get pretty close to write once, render on either the client or the server depending on which is cheaper.
And there are other applications for which manipulating history is a better fit. Otherwise, how would you fit history into a web experience that doesn't map very well to page navigation at all, like a WebVR walk where you want to be able to return to a location via URL, or a music player where you want the history to go back and forth between the songs you played?
And it can be as simple as clicking on an image to expand it, like on twitter.
It's nice to be able to hit back to close it, especially on phone browsers. And it's also nice to stay on the same page context the entire time, so it can respond much faster and doesn't screw up the scrolling.
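Something like that Twitter lightbox behavior can be sketched as a small state machine. The `pushState` call noted in the comments is what makes Back close the viewer instead of leaving the page; the function, event, and field names here are illustrative, not Twitter's actual implementation:

```javascript
// Sketch of the lightbox pattern: expanding an image pushes a history
// entry, so pressing Back closes the viewer instead of navigating
// away. The state logic is pure; browser wiring is noted in comments.
function lightboxReducer(state, event) {
  switch (event.type) {
    case 'open':
      // In the browser, this is where you'd call:
      //   history.pushState({ lightbox: event.src }, '')
      return { ...state, open: true, src: event.src };
    case 'popstate':
      // Fired when the user presses Back; just close the viewer.
      // The page underneath keeps its scroll position and context.
      return { ...state, open: false, src: null };
    default:
      return state;
  }
}

let state = { open: false, src: null };
state = lightboxReducer(state, { type: 'open', src: 'cat.jpg' });
console.log(state.open); // true
state = lightboxReducer(state, { type: 'popstate' }); // user pressed Back
console.log(state.open); // false
```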
It would be pretty simple to open a web page, do a few user interactions that shouldn't trigger this behavior, and then penalize or deindex web sites that hijack the back button.
Add manual penalties for the ones that slip through the automated testing.
Once it becomes known that doing this means saying bye-bye to your traffic, sites will stop doing it.
Users will not understand why there are two back buttons. That'd be like working around clipboard issues by including a different "copy selection" button.
They'll click the browser back button, get thrown back onto the search/new tab page, they'll click forward to get back to the webapp (which will have saved all of their state), and they'll understand just fine going forward.
First of all: user testing I've observed strongly suggests that no, they won't. You would. The average user doesn't have nearly your level of savvy.
Secondly: that's already a bad user experience in contrast to just unifying the behavior behind one button. Why did the user have to discover that a button doesn't work the way they expect? Why are they going to have to remember it doesn't work the right way now? And they'll have to repeat that experience for every web app they use? That's a mess.