> "Users seldom read home page “fluff” and often look for things like testimonials, case studies, pricing levels and staff profiles / company information in search for credibility and trust. One of my upcoming tests will be to combine home page with “about us”, “testimonials”, “case studies” and “packages”. This would give users all they really want on a single page."
Shady tactics aside, this was interesting but could also have been measured by simply tracking his own website.
Thanks for posting this. I am increasingly frustrated with browsers' weak stance on user control. Hijacking back buttons, right clicks, copy functions, and other items has become quite commonplace and even expected. For example, YouTube puts some functions only in the right-click menu and TinkerCad's viewport rotation is primarily right-click and drag.
Presumably, this is in pursuit of making web pages behave more like apps, but it is truly frustrating. If I wanted app behavior, I'd install an app (even something like Chrome's apps). While I'm in a web browser on a web page, I expect to interact with the web browser primarily and the web page through the browser intermediary.
As a counterpoint, I don’t want to download apps when webapps suffice. I appreciate when a right click gives me the options I’m hoping for rather than a set of generic Chrome actions that aren’t what I want. I also appreciate when copying works how I want it to (e.g. copy-paste in Google Docs or Figma works as I expect, including all styles). And hijacking browser history doesn’t seem to me like it adds much exposure, because the attack vector is still there without browser support: when a user enters your site, auto-redirect to google.mydomain.com, which then auto-redirects to your content. The back button will now return them to google.mydomain.com without any custom back-button shenanigans.
Your post seems to suggest that you feel Figma is a prime example of how webapps are sufficient over desktop/mobile apps. Yet Figma actually does offer desktop/mobile apps so I'm a bit confused by how Figma helps make your point haha
If a webpage implements something perfectly, it enhances the experience. E.g. webpages that mess with scrolling - maybe for 20% of sites that do this it makes them 20% better, but 80% of sites that do it become 80% worse.
I'm not sure the benefits of allowing the Figmas of this world to offer a good app experience outweigh the costs of putting the same tools in the hands of every shitty news site.
This is why I mention Chrome's web apps. I see these as a reasonable middle ground. They are a relatively low barrier to entry and could be used as a way to allow the user to opt into app-like behavior. Sites that the user hasn't opted into wouldn't have these abilities.
As another commenter mentions, I'm more likely to disable Javascript by default rather than continue to allow its abuse. I suspect I'm in the minority though.
Turn off JS except on whitelisted sites and you'll experience a saner web. Unfortunately even static text is often hidden behind such "app-site" monstrosities these days.
There’s nothing that infuriates me more than trying to read an article and suddenly being forced to either spam the back button or close the tab entirely.
This depends on your browser, but on desktop firefox I can get out of these by holding down the back button a moment. It pops up a list of my (real) browsing history in that tab. I can then select where I want to go back to.
I’m usually not interested in social engineering which I think is boring stuff, but I think that (1) this is a weakness on my part as a developer with something of a security focus, and (2) this is perhaps the perfect sweet spot of social engineering and programming.
It is an utterly fascinating takedown of the back button hijack. Totally unethical but also very eye-opening for me.
Is this kind of back button hijack and history rewriting still possible in modern browsers? Edit: this link leads me to believe this may still be possible: https://developer.mozilla.org/en-US/docs/Web/API/History - would love a confirmation.
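As far as I can tell, yes: `history.pushState` plus a `popstate` listener is enough. A minimal sketch of the trick, assuming it runs in a browser (the function and argument names here are my own, not from the article):

```javascript
// Hedged sketch of a History API back-button hijack: push a duplicate
// entry so the first Back press fires "popstate" on this page instead of
// leaving it, then let the caller repaint the page as a fake results list.
function hijackBack(renderFakeSerp) {
  // Duplicate the current entry on the history stack.
  history.pushState({ hijacked: true }, "", location.href);
  window.addEventListener("popstate", () => {
    // The user pressed Back but is still on our origin; draw the fake SERP.
    renderFakeSerp();
  });
}
```

Note that the URL bar still shows the attacker's domain throughout, which is why this only fools users who don't check it.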
i believe that is a fair question and i believe that ethics are personal and each individual must find/define their own ethics based on their own values and life experiences.
here is where i am coming from:
1. the end user is on a google search engine results page (SERP) and sees the blog post author's website as one of the search results
2. the end user clicks the link to the blog post author's website
3. the end user is now on the blog post author's website
4. the end user hits the back button
- i believe the end user has a reasonable expectation that they will be back on the google search engine results page... but *they are not*. they are on a mockup that looks like the google SERP but is in fact controlled by the blog post author.
5. the end user clicks on a "link" to a competitor's website - but the "link" is actually yet another mockup created and hosted by the blog post author.
i believe this is highly unethical! they are fooling an unsuspecting end-user into thinking they are visiting a brand new site, but they most definitely are not doing so. ultimately, i think that google and other browser authors should remove the possibility for this sort of trickery. i do admire the blog post author for posting the social engineering/programming trickery while still viewing it as unethical.
The quote from a security researcher at the end treats this like a vulnerability.
If this were early days of the web, I'd agree, but web browsers allow so many other shady tactics, this feels more like the web working as intended.
(Yes, phishing attacks are bad, but the browser back button spec is specifically designed to allow these sorts of shenanigans, with basically zero legitimate use cases -- the only use case I can think of is telling the browser certain actions should not push themselves onto the back button stack).
> with basically zero legitimate use cases -- the only use case I can think of is telling the browser certain actions should not push themselves onto the back button stack
I agree on the "legitimate" part, but I suspect one of the main reasons is that Google and Apple both really want people to be creating SPAs that pretend to be real apps, and that's hard to do without being able to hijack the back button for navigation.
Middle mouse button click for any link. I don't remember the last time I used back. Just open and close tabs based on what I want to do. I learned this during research methods in graduate school as a way to avoid losing valuable studies while working on the various archaic databases, and it stuck. I know every graduate student at my university learned the same thing.
I always configure three-finger tap on the trackpad to act as the middle mouse button. I haven't tried this on a Mac, but it can be done on Windows and Linux with Gnome (and I believe KDE, too).
Yes, and firefox configured to open windows, not tabs (!) Call me a luddite if you like but ctl-w and alt-tab are always to hand and better than any amount of new-fangled tabby nonsense IMNSHO...
Never thought of it as relevant to security before though.
In a similar way that I choose to use backspace vs ctrl+z, I may use the back button or open a new tab (or duplicate tab, then go back), depending on if I want to keep current context or discard my current work.
It's not true that researchers always do a minimal PoC. I've seen so many people release fully weaponized attack toolkits, ostensibly for red teams etc., that then end up being abused by actual attackers. These are not just PoCs, but ready-to-use, universal toolkits.
OTOH, sometimes a harmless PoC isn't enough to induce action, and a proper attack PoC does. I think this may be such a case.
As much fun as it is seeing everybody reiterate the "SPAs are stupid and we should all go back to native apps" argument for the thousandth time with exactly the same arguments again...
It's all a moot point, because you can reproduce this particular attack using nothing but 2001-era DHTML. Start with a page that has a hidden iframe, a link that targets it, and a timer that polls the contents of the iframe. When the page first loads, use JS to click the link to add a new item to the back stack. If clicking the link with JavaScript doesn't add a back-stack item, make the link visible, but also attach an onclick handler to it so that the link can simultaneously do what you want and what the victim wants.
After you've poisoned the back stack, you can detect that the user clicked "back" when the iframe gets reset back to its initial page. Once this is done, use `document.body.innerHTML = whatever` to set up your fake SERP.
The "attack" I'm thinking of is hijacking the back button, but done using iframes instead of history.pushState. It doesn't involve any third-party origins, so x-frame-options doesn't matter, because a domain owner that wants to launch this attack has control of all the HTTP headers.
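The detection step described above can be sketched roughly like this (the helper names are mine, and this assumes the hidden iframe is same-origin, since reading a cross-origin frame's location throws):

```javascript
// True once the Back button has reset the hidden iframe to the page it
// first loaded -- that reset is how the old DHTML trick detects "back".
function backPressed(iframe, initialUrl) {
  return iframe.contentWindow.location.href === initialUrl;
}

// Poll the iframe; when Back is detected, stop polling and let the caller
// repaint the page (e.g. document.body.innerHTML = fakeSerpMarkup).
function watchForBack(iframe, initialUrl, onBack, intervalMs = 250) {
  const timer = setInterval(() => {
    if (backPressed(iframe, initialUrl)) {
      clearInterval(timer);
      onBack();
    }
  }, intervalMs);
  return timer;
}
```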
This attack is similar to linking to g00g1e.com and setting up a mock page there. Impersonating sites is going to be hard to prevent technically at all.
Am I missing something? This “hack” requires you to go to his site first, then use the back button and then click on a (fake) competitor link. How is he ever going to get people to his site in the first place? And if it’s through paid ads, why not create a fake paid ad that directs you straight to his fake site in the first place? All sounds very much like a marketer who uses the veil of “security researcher” to hide a scam.
> later used it to mess with conspiracy theory people
I always find it funny how these hackers grasp for some othered group they can justify mistreating. If you're gonna be a hacker, stop pretending that you're a moral being and accept what you are.
I despise sites that hijack my back button (No, I don't want to check any of these DENTISTS HATE THIS MOM'S NEW TRICK clickbait articles thanks) so I can't say I'm surprised there are malicious uses for it, but wow!
We actually had an accidental back button hijack at a place I used to work. It was an SPA, where if you navigated to / it would check if you were logged in. If so, you would be redirected (client-side) to /home, otherwise you were sent to /login. This was done with pushState() instead of replaceState(), so going back from /home would take you to / which would immediately see that you were logged in and send you back to /home.
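For anyone hitting the same trap, the fix is a one-word change. A sketch of that redirect (the function name and routes here are illustrative, not the actual codebase):

```javascript
// Client-side redirect from "/" after the login check. Using pushState
// here leaves "/" on the history stack, so Back lands on "/", which
// immediately redirects forward again: an accidental back-button trap.
function redirectFromRoot(loggedIn) {
  const target = loggedIn ? "/home" : "/login";
  // history.pushState({}, "", target);  // bug: "/" stays in history
  history.replaceState({}, "", target);  // fix: the "/" entry is overwritten
  return target;
}
```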
Unfortunately, the feature itself is vital for making web apps work in anything like a coherent fashion, so it isn't something that can be disabled (though there may be meat on the bones of permission-gating it).
I think there are some solutions to this problem. Akin to "after a back navigation, you cannot add to the history state without a user interaction"-- or better, "the history stack can never grow beyond the number of user interactions". Basically, I should always be able to navigate back to the referrer in a bounded number of actions.
I think that can still be gamed by forcing nonsense interactions that seem meaningful on the user.
This is a really tricky one to solve because the protection that is intended to guard against it ("The user is aware the current domain they are accessing doesn't match the site they expect it to match") isn't working. I think that aspect is the larger problem... IRL, people know if they're standing in a Target vs. a used car dealership, but they rarely know if they're at target.com instead of target.used-car-dealership.com.
It's possible the browser's framing should be changed to make it harder to be confused about that (color-and-texture-hash the TLD and apply it to the URL bar as a background, so there's a major visual difference if I'm on the wrong site?).
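To make the idea concrete, here's a toy version of such a hash (everything below is my own invention, not an existing browser feature): derive a stable hue from the domain, so target.com and target.used-car-dealership.com would get visibly different URL-bar backgrounds.

```javascript
// Map a domain to a stable hue on the HSL color wheel using a simple
// 32-bit rolling hash. Lookalike domains hash to (usually) different hues.
function domainHue(domain) {
  let hash = 0;
  for (const ch of domain) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep it in 32 bits
  }
  return hash % 360;
}

// Pale, readable background color for the URL bar.
function urlBarStyle(domain) {
  return `hsl(${domainHue(domain)}, 70%, 85%)`;
}
```

A real design would hash only the registrable domain (via the public suffix list) and add texture for color-blind users, but the sketch shows the shape of the idea.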
> If people have to install an app just to use your site, then many of them won't bother.
If you have some "thing" that is so grotesque that it needs to break a user's web browser in order to work correctly, you no longer have a "site". You have a creature; you have an application. Some people would step back at that point and maybe think about the path they are going down. Others would trundle forward, oblivious to the fact that they are hammering a square-shaped JavaScript peg into a round hole.
In the year 2022, the browser supports full-motion video, 3D rendering, dynamic audio synthesis, local key-value database storage, a USB access API, and APIs for interacting with head-mounted displays and hand trackers.
The simple dynamically-linked page displaying application you are imagining a browser to be is long dead, in much the same way the simple programmable calculating machine that was the personal computer was long dead by the time John Carmack got it to run Doom.
> If you have some "thing" that is so grotesque that it needs to break a user's web browser in order to work correctly, you no longer have a "site". You have a creature; you have an application.
There is nothing "grotesque" about writing web applications.
That is the entire point of HTML5 + CSS3 + AJAX.
With modern Chrome-based browsers and Firefox I can write an application that can be used by Windows, Mac, Linux, BSD, iPhone and Android users who need only visit that site.
No installation necessary, and (most importantly) no gatekeeping of apps by Apple, Google or Microsoft.
> No installation necessary, and (most importantly) no gatekeeping of apps by Apple, Google or Microsoft.
This is kind of misleading, as you need to install a browser for it to work. Granted, most computers already have one, but the browser is acting as a "runtime" in this situation. Contrast with other programming languages such as Go or Rust, which produce a single executable that can be dropped in a folder and run.
Because they are major use-cases for the web browser framework. I mean, that's a bit like asking why you should care about Web Audio API, or the accessibility layer... The fact you're not using it doesn't mean it isn't vital for those who do.
> I mean, that's a bit like asking why you should care about Web Audio API, or the accessibility layer
Nope. Those are both useful elements of a browser, that don't even require JavaScript to use. What you're talking about is the monster/frankenstein that is web applications.
Not for users of Sheets, Docs, AutoCAD Web, myriad tools companies have built for their intranets, and hundreds of other apps, no. It will make them more complicated.
I can't see any argument for how removing javascript navigation will make apps more complicated. The desired functionality is literally built into the browser.
Not more complicated in the sense of "more code;" more complicated in the sense of "harder to use." You had said "win/win" and I disagree that making apps bigger, clunkier, and slower by requiring more server-side negotiation would make them better for end-users. The hard-coded behavior of the browser doesn't always match the user's concept of what has happened when performance optimizations are added in.
For example: for performance reasons, many web apps are logically divided into sections. Navigating between sections doesn't unload the current page; it retains it and context-switches to another page. This is done for several reasons (the main ones being performance and flicker-stoppage, so users aren't hit with a screen blank navigating from one section to another). This trick is often accomplished by being very creative with the routing, so the user's experience is as if they are navigating around to different pages on a site (while in reality, a page navigation never occurs). console.cloud.google.com is an example of a site that works this way. The AWS console does too; look closely while navigating around AWS and one will observe that the URL looks like https://us-east-2.console.aws.amazon.com/cloudformation/home..., i.e. the sub-panel is encoded locally in the fragment instead of up in the resource name, and going from subpanel to subpanel changes the fragment and pushes entries into history.
... but to get that seamless behavior, it has to inject history entries so that the back button takes the user to a previous logical page, not a previous URL the browser requested. Writing the app will be more complicated if the back button navigates the user entirely off of console.cloud.google.com instead of taking them to the previous pane they had open.
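A stripped-down sketch of that fragment-routing pattern (panel names and helper functions are mine; the real consoles are far more involved):

```javascript
// Map a location fragment like "#/cloudformation" to a panel name.
// Setting location.hash pushes a history entry, so Back/Forward move
// between panels without any page navigation occurring.
function panelFromHash(hash) {
  return hash.replace(/^#\/?/, "") || "home";
}

// Browser-only wiring (assumes panels are elements with data-panel
// attributes already in the DOM); call once on page load.
function installPanelRouter() {
  const show = (name) => {
    document.querySelectorAll("[data-panel]").forEach((el) => {
      el.hidden = el.dataset.panel !== name;
    });
  };
  window.addEventListener("hashchange", () => show(panelFromHash(location.hash)));
  show(panelFromHash(location.hash)); // render the initial panel
}
```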
> Not more complicated in the sense of "more code;" more complicated in the sense of "harder to use." You had said "win/win" and I disagree that making apps bigger, clunkier, and slower by requiring more server-side negotiation would make them better for end-users.
People always say that, yet every time I encounter an app that doesn't use JS navigation, it always feels faster and smoother...
And as for them being bigger and clunkier, in almost all cases I've experienced, the frontend code is by far the clunkier and bigger of the two. But maybe it's because I use Ruby and other backend languages are less generous in their accommodations.
> The hard-coded behavior of the browser doesn't always match the user's concept of what has happened when performance optimizations are added in.
Eliminating JS navigation doesn't mean eliminating JS. You'd still be asynchronously modifying backend state and browser state on a single page, you're just not using it to change pages.
> For example: for performance reasons, many web apps are logically divided into sections. Navigating between sections doesn't unload the current page; it retains it and context-switches to another page. This is done for several reasons (the main ones being performance and flicker-stoppage, so users aren't hit with a screen blank navigating from one section to another). This trick is often accomplished by being very creative with the routing, so the user's experience is as if they are navigating around to different pages on a site (while in reality, a page navigation never occurs). console.cloud.google.com is an example of a site that works this way. The AWS console does too; look closely while navigating around AWS and one will observe that the URL looks like https://us-east-2.console.aws.amazon.com/cloudformation/home..., i.e. the sub-panel is encoded locally in the fragment instead of up in the resource name, and going from subpanel to subpanel changes the fragment and pushes entries into history.
Do you have a study that demonstrates the impact of this website organization with JS navigation and without JS navigation? There's a lot of talk about this being more performant and eliminating flicker, but are those real problems? Does HN flicker for you when you move across pages? I've heard these reasons a hundred times, but rarely see evidence that they hold up under scrutiny. Usually it's an after-the-fact justification rather than an upfront necessity.
> Writing the app will be more complicated if the back button navigates the user entirely off of console.cloud.google.com instead of taking them to the previous pane they had open.
I feel like this answer is almost intentionally forgetting that URI paths exist without JS. Am I missing something in your answer that explains why they are insufficient?
- the new page must be loaded from scratch, a new context set up, JavaScript started from scratch, and the page parsed and executed
Even if the resources between the two pages are shared and properly cached, time is lost relative to the more instantaneous process of making partial edits to an already-loaded page and only loading the JavaScript needed for features that weren't on the page being navigated away from.
I recommend popping the browser inspector open when using the AWS console or Google Cloud console and looking at what's going over the wire. As the user navigates around, these web apps only load the chunks of the interface necessary, not the chrome or the sidebar or any of those other already-loaded components. Those components are already live in the JavaScript context and don't need to be rebuilt from scratch because they weren't destroyed, since no page navigation occurred.
You can make the case that those consoles are over complicated and could be replaced with a handful of web forms (not by comparing them to hacker news, the relative complexity of the pages are apples to oranges... If I were to make the case, I would make it by saying that the Google app engine console that predated Google Cloud console was perfectly serviceable, clunky and slow as it was). But given that they are what they are, the dynamic loading is much faster than tearing down and building new pages as a user navigates around the panels in the console.
And given the way they work, the user expects the back button to go to the previous panel, not to navigate completely out of the web app because they happened to enter the experience from a specific URL.
> Even if the resources between the two pages are shared and those shared resources properly cached, time is lost relative to the more instantaneous process of making partial edits to an already loaded page and only loading the JavaScript necessary to pull in features that weren't on the page navigated away from
Technically, yes. Whether it's significant difference is reflective of the engineering, and sometimes, on the product design.
And the beauty of not using JS navigation is that you don't need to draw the page using JS templates. The dynamic data, sure, but the rest can load pretty instantaneously.
Out of curiosity, when's the last time you built a web app that doesn't rely on JS navigation, and architected it with that in mind? What was that experience like, and how does it compare to your JS experiences?
It was Ruby on Rails, it was forms-based, and it was hell. To personal taste these days, I much prefer writing a heavyweight client and a thin server that emits raw data and does very little HTML rendering or HTML pre-processing to the alternatives I tried before.
But even if the server is doing a lot of heavy lifting, the end user experience can benefit both in performance and bandwidth from minimizing page reloads. There's some pretty spiffy client and server template technologies now that let you get pretty close to write once, render on either the client or the server depending on which is cheaper.
And there are other applications for which manipulating history is a better fit. Otherwise, how would you map history onto a web experience that doesn't correspond to page navigation at all, like a WebVR walk where you want to be able to return to a location via URL, or a music player where you want the history to go back and forth between the songs you played?
And it can be as simple as clicking on an image to expand it, like on twitter.
It's nice to be able to hit back to close it, especially on phone browsers. And it's also nice to stay on the same page context the entire time, so it can respond much faster and doesn't screw up the scrolling.
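That Twitter-style lightbox is a small, legitimate use of the same API. A sketch, assuming a browser (all names here are mine):

```javascript
// Expanding an image pushes one history entry, so on phones the hardware
// Back button closes the lightbox instead of leaving the page. The page
// context (scroll position, loaded feed) is never torn down.
function openLightbox(img) {
  history.pushState({ lightbox: true }, "", "#photo");
  img.classList.add("expanded");
  window.addEventListener("popstate", function close() {
    // Back was pressed: collapse the image and stop listening.
    img.classList.remove("expanded");
    window.removeEventListener("popstate", close);
  });
}
```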
It would be pretty simple to open a web page, do a few user interactions that shouldn't trigger this behavior, and then penalize or deindex web sites that hijack the back button.
Add manual penalties for the ones that slip through the automated testing.
Once it becomes known that doing this means saying bye-bye to your traffic, sites will stop doing it.
Users will not understand why there are two back buttons. That'd be like working around pastebin issues by including a different "copy selection" button.
They'll click the browser back button, get thrown back onto the search/new tab page, they'll click forward to get back to the webapp (which will have saved all of their state), and they'll understand just fine going forward.
First of all: user testing I've observed strongly suggests that no, they won't. You would. The average user doesn't have nearly your level of savvy.
Secondly: that's already a bad user experience in contrast to just unifying the behavior behind one button. Why did the user have to discover that a button doesn't work the way they expect? Why are they going to have to remember it doesn't work the right way now? And they'll have to repeat that experience for every web app they use? That's a mess.
Author could have learned this from a single user testing session rather than his hijacking technique. Users ignoring fluff is well established in the field of UX.
The moment you are impersonating Google and your competitors, it's very clearly in fraud/criminal behaviour territory. No serious business will even consider doing something like this.
I don't think the site suggests serious businesses would do that.
P.S. Yet it is not like "serious businesses" do not partake in fraud: Enron, cough cough some/most banks (creation of unauthorized accounts, mortgage fraud, investor fraud, etc.).
Examples of press on topic:
https://valleywag.gawker.com/how-a-hacker-intercepted-fbi-an...
https://www.theverge.com/2014/2/28/5458610/fake-google-maps-...