Hacker News

I can assure you that Google takes VRP reports very seriously. Two members of Google security (f- and adobkin) have provided context elsewhere in this thread on the bug described in the post.

As for the Chrome report you mention, if you provide the bug ID I can check. From your description, though, it seems likely the bug was closed because you were observing intended behavior: the renderer process is terminated on an out-of-memory condition. The bug reporting form links to guidelines on reporting security bugs, which explain why this specific case is not a security issue: http://www.chromium.org/Home/chromium-security/reporting-sec...



I shouldn't be able to terminate the rendering process, which contains the contents and information of numerous tabs, just by loading a large image into it: it should realize the image is too large and stop loading it. Otherwise, that's a denial-of-service attack you can use against someone, one that can potentially cause data loss in any of the other tabs in that process. (That Chrome shares processes between lots of tabs, by the way, was a massive disappointment after the original video that made it sound like tabs would all be isolated; frankly, the way Chrome ended up separating things is largely a worthless token show of security theater.)


Attempting to recover from OOM is almost always a bad idea, which is why Chrome terminates processes by default on OOM. To underscore this point, I should note that I've found a number of serious vulnerabilities resulting from applications attempting to recover from OOM. Here's a detailed writeup of one I found in Firefox several years ago: http://blogs.iss.net/archive/cve-2008-0017.html

As for how Chrome's process sharing works, it's opener based for Web content. That is, if one browsing context opens other browsing contexts (either via iframes or popups) where the opened context is retained by the opener, then they will run in the same process, in order to conserve resources and preserve the relationships JavaScript requires. This means the process is shared in cases where the HTML standard explicitly requires that the child browsing context be able to navigate the parent opener or frame. So, at that point, required Web functionality achieves an equivalent to the DoS you mention. Process sharing can also be triggered when resource limits are reached or during page transitions, but that's rarely a significant factor in practice and not controllable by a Web site.

And to be very clear, Chrome's process isolation is far from security theater. The sandbox prevents renderer processes from accessing or manipulating any system state directly. It's a hard security boundary that's been extremely effective in preventing exploits. It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.

That stated, we're still working on improving Web content isolation because our final goal is to entirely isolate different origins in the same class. But doing so is far beyond anything attempted in a production browser before, and entails a massive engineering effort. The team working on that has various bugs you can follow in our tracker, and a public design document: http://www.chromium.org/developers/design-documents/site-iso...


> Attempting to recover from OOM is almost always a bad idea, which is why Chrome terminates processes by default on OOM.

You are thinking inside of "the box" where the only resource manager is the OS and the only resource in question is virtualized memory. Yes: I entirely agree with the security advantages of letting OOM kill a process. My argument is that something is wrong if my web browser allows untrusted code to attempt to load a 16 GB bitmap (65,535 x 65,535 pixels at four bytes each), and that this is simply allowed to crash a large number of unrelated tabs, including my e-mail client (which has now died on me far too many times due to other tabs).

Heap memory is what triggers the kill, but the allocation of heap memory from the OS for this bitmap is not the semantic problem in this situation: other resource limits should apply much earlier and keep a tab from ever loading a 16 GB bitmap. Some of these limits the browser is already in a position to track (specifically, the amount of space available in the disk cache). It isn't about recovering from malloc(16GB): it is about avoiding that malloc in the first place.
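The "avoid the malloc" point can be sketched in a few lines: an image decoder knows the declared dimensions from the file header before it allocates any pixel buffer, so an oversized request can be rejected up front. A minimal illustration (the 256 MB cap is an invented number, not anything Chrome actually uses):

```python
# Hypothetical pre-decode guard: reject an image based on its
# header-declared dimensions, before allocating the pixel buffer.
MAX_DECODED_BYTES = 256 * 1024 * 1024  # 256 MB cap, purely illustrative

def decoded_size(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Bytes needed for the uncompressed pixel buffer."""
    return width * height * bytes_per_pixel

def can_decode(width: int, height: int) -> bool:
    """Decide, with zero allocation, whether decoding is allowed."""
    return decoded_size(width, height) <= MAX_DECODED_BYTES

# A 65,535 x 65,535 image at four bytes per pixel needs ~16 GiB:
print(decoded_size(65535, 65535))  # 17179344900
print(can_decode(65535, 65535))    # False: denied before any malloc
print(can_decode(4096, 4096))      # True: 64 MiB is within budget
```

The check costs one multiplication, which is why refusing the allocation is so much cheaper than recovering from it.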

My argument, then, is that the browser is a virtual machine that allows largely-untrusted, Turing-complete code to run on my computer. These programs have tons of limits associated with them: one of them should be a VM object-heap limit, exactly as Java applets have. Java doesn't die when the system hits OOM; it kills itself when it hits its own heap-space limit. There may still be ways the process can end up exhausting available RAM, and in those cases, yes: the OS should still kill the process.
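The Java analogy can be made concrete with a toy per-page allocation budget: allocations are charged against a quota, and exceeding it raises an ordinary, catchable error long before the OS OOM killer is involved. A sketch, with an invented 64 MB quota (the class and names are hypothetical, just to show the shape of the mechanism):

```python
class HeapQuotaExceeded(Exception):
    """Raised when a page's object heap would exceed its quota."""

class PageHeap:
    """Toy per-page allocation budget, analogous to Java's -Xmx limit:
    the page fails gracefully instead of the whole process being killed."""

    def __init__(self, quota_bytes: int = 64 * 1024 * 1024):  # invented quota
        self.quota = quota_bytes
        self.used = 0

    def allocate(self, nbytes: int) -> bytearray:
        if self.used + nbytes > self.quota:
            raise HeapQuotaExceeded(
                f"page wants {nbytes} bytes, {self.quota - self.used} remain")
        self.used += nbytes
        return bytearray(nbytes)

heap = PageHeap()
buf = heap.allocate(16 * 1024 * 1024)   # fine: 16 MB of a 64 MB quota
try:
    heap.allocate(60 * 1024 * 1024)     # would exceed the quota
except HeapQuotaExceeded as e:
    print("denied:", e)                 # the page, not the process, handles it
```

The point is only that the failure is scoped and recoverable: the offending page gets an error it can handle, while unrelated tabs never notice.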

You might think this is limiting ("but what about my amazingly great and large webapp?"), but it isn't: most web pages people browse to that use a lot of memory are really just using very large images. If they actually have a large number of JavaScript objects in play they should probably have to get a memory limit increase warning, exactly like they do if they want more HTML5 localStorage. Such large applications might also be Chrome Apps anyway, and can have manifest configuration.

But for all those web pages that have a lot of large data assets, like images, this problem should be being solved by the disk cache: resource limits should cause images to be unloaded from RAM and potentially pinned in the disk cache (so that they can't be deleted until that tab is closed). If the disk cache finally can't take the situation and needs to delete resources being used by tabs (that can't just be yanked due to HTML semantics), then the page (not the tab) can be sacrificed.

Even the entire tab should not crash because one page wants to load a very, very large bitmap, much less a bunch of unrelated tabs. This is not a "crash" scenario: this is a virtual machine that had some of its data evicted. The browser already models individual web pages in ways that let it throw them away as a group: if the disk cache refuses to store something, that machine should be killed, much like the OS kills processes that want memory that can't be backed by swap.

Even without such amazing resource limits, the way a massive resource should be handled is that it gets streamed to the disk cache (and if the disk cache refuses to hold it, it should just be denied) and then memory mapped (so the pages for it will page back to the file, and don't cause memory pressure). There is simply no reason why the browser should ever be trying to allocate 16GB of private memory for purposes of loading a bitmap: that's clearly far on the other side of absurd ;P.
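The stream-then-mmap approach is easy to sketch: spool the incoming resource to a cache file, then map it, so the pages are file-backed and can be evicted and re-read under memory pressure rather than counting as private anonymous memory. A minimal POSIX-style sketch in Python (the function and temp-file naming are invented for illustration; a real disk cache would manage its own files):

```python
import mmap
import os
import tempfile

def map_resource(data_chunks):
    """Spool a resource to a temp file, then memory-map it read-only.
    Because the mapping is file-backed, the OS can drop the pages under
    memory pressure and page them back in from disk on demand, instead
    of the resource living in private anonymous memory."""
    fd, path = tempfile.mkstemp(prefix="diskcache-")
    try:
        with os.fdopen(fd, "wb") as f:
            for chunk in data_chunks:      # stream: never hold it all at once
                f.write(chunk)
        with open(path, "rb") as f:
            return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    finally:
        # On POSIX the mapping stays valid after the file is unlinked.
        os.unlink(path)

# Usage: a resource delivered in chunks, read back through the mapping.
m = map_resource([b"chunk-one ", b"chunk-two"])
print(m[:9])  # b'chunk-one'
m.close()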

To be clear: I am not saying "Chrome should already be doing all of this"; I am simply arguing that it is wrong to claim this is impossible, or even impractical, and that the current solution doesn't seem to be helping (despite the widespread belief, and even the occasional claim, that it does).

> As for how Chrome's process sharing works, it's opener based for Web content. ... This means the process is shared in cases where the HTML standard explicitly requires that the child browsing context be able to navigate the parent opener or frame. So, at that point, required Web functionality achieves an equivalent to the DoS you mention.

Which means that if I have an attack against Chrome and I want access to your e-mail data, I simply get you to click a link that opens your e-mail client with target="new". It will almost certainly be running in the same process; I then use my exploit, steal your data, and upload it back to my server. The result of this way of dividing processes is that the attacker has nearly complete control over which websites will be in the same process when it comes time to exploit Chrome.

> Process sharing can also be triggered when resource limits are reached or during page transitions, but that's rarely a significant factor in practice and not controllable by a Web site.

The fact that I can't control this isn't terribly important, because the previous paragraph already gives me complete control over process isolation. For completeness, though, I will point out that this decreases (slightly) the probability of the direct attack "steal data from saurik's e-mail", but does nothing to mitigate the more general attack "steal sensitive information from saurik", because resource limitations mean that sensitive websites end up distributed across every single tab process over time. You can't be guaranteed of hitting my e-mail client (as in the previous paragraph), but you can get something juicy from any of my processes.

> And to be very clear, Chrome's process isolation is far from security theater. ... It also allows Chrome to entirely isolate different classes of renderer processes from each other, such as those used for Web content versus filesystem, extensions, apps, or system settings.

Sure. I am only talking about the separation of tabs into multiple processes. Having actual privilege separation, where certain types of things can't be done by just any process, is certainly advantageous. However, having my tabs in separate processes, when the tabs have control over what web page content is in that process, and can even open new tabs that are associated with the same process, is "security theater": it is billed as a security feature, but it is barely a speed bump.


> That Chrome shares processes between lots of tabs, by the way, was a massive disappointment after the original video that made it sound like tabs would all be isolated

I'm not an expert on sandboxing browser tabs, but so far I haven't had any (memorable) experience where a rogue Chrome tab crashed the whole browser. Just yesterday someone posted a jsfiddle on HN [0] that crashes the tab in Chrome, but the entire browser in Firefox.

[0] https://news.ycombinator.com/item?id=6358727


There is a difference between separating "web page rendering" from "browser UI" or "networking" and the kind of tab separation I am discussing: those are privilege separations, which at their bare minimum mean that when some JavaScript crashes, it doesn't take down the UI. This is both a functionality and a security benefit that I did not and will not argue against.

Chrome, however, also claims to isolate tabs from each other, so that one tab cannot affect the behavior of another tab. In practice, though, I have tons of tabs with ten totally unrelated websites rendering in each process (everything from my e-mail client to 4chan), so that isolation isn't actually offering me any advantage: a rogue website that can exploit only its rendering process can still steal data from any other website that ended up in the same process.


> As for the Chrome report you mention, if you provide the bug ID I can check.

This demonstrates exactly what's wrong with Google's interpretation of "openness". Sure, if one finds some Google insider, one may get information. Normally (read: in almost all other bigger Free Software projects), one could simply look at the bug tracker on one's own, without being at the mercy of Google.


I asked for the bug ID so that I could verify it was triaged properly, not because the report isn't public. Some reports (like security bugs) are initially private, as is typical in any open source project. However, security bug reports are made public as well, at some point after they're resolved.

You can go ahead and see for yourself: https://code.google.com/p/chromium/issues/list


For example, the KDE security mailing list is definitely private, and some KDE bugtracker bugs are private as well.


Having myself reported a bug in Chromium that I thought was security related (but in the end was not: better safe than sorry), I think it's a reasonable decision not to show security-related bug reports from the general public. Otherwise you'd present every security hole in the browser to every evil guy in the world on a silver platter.


Not usually for reported security vulnerabilities.


But I thought the point was that they didn't consider this to be a security vulnerability.


It appears that phishing-related vulnerabilities do not qualify for the reward program.

You may want to reevaluate your policy, because it is incredibly short-sighted.

Phishing is one of the most effective attack vectors (a link to a page with a zero-day browser exploit), and even though it may not endanger data held by Google, it puts users at risk of ending up with malware that may steal much more than Google-held data.



