I haven't been following any back-and-forth, but I'm a fan of exploit mitigation so I'll chime in and say to anyone browsing here that https://www.openbsd.org/papers/ru13-deraadt/mgp00001.html is an excellent explanation of them.
(I was shocked when I just googled "exploit mitigation" and all I got back was infomercial articles by Symantec and Sophos etc. Scary. So for people browsing here and googling terms, dig deeper!)
I think we need exploit mitigation _and_ process isolation _and_ the principle of least privilege.
Fwiw, the twitter article mentions capabilities. Something I can pontificate about! I've been lucky enough to chew a _lot_ of cud with some GNOSIS/KeyKOS/caps luminaries etc. And we were still mega-fans of the Capability Object Model but _not_ believers in Capability-Based Addressing. When I did some design work with Norm Hardy on the Mill CPU, I designed a temporal variant of the former that protected ... memory. OpenBSD is actually damn serious about the former. The twitter link is advocating the latter as a fix for something?
I would love to chat with anyone who wants to convince me that Capability-Based Addressing _works_ or is workable. I know that others from KeyKOS etc moved more towards caps addressing. For anyone who wants to read up on this stuff, I warmly recommend the CHERI papers and talks, even though I'm not a fan of caps addressing.
> I haven't been following any back-and-forth, but I'm a fan of exploit mitigation so I'll chime in and say to anyone browsing here that https://www.openbsd.org/papers/ru13-deraadt/mgp00001.html is an excellent explanation of them.
Is it possible to get this resource on a single page? Having to click through each page just to read a sentence or two is quite cumbersome.
What exactly do you mean by capability-based addressing vs the capability object model? Is it the use of capabilities to grant access to particular chunks of memory? What do you see as the pros and cons?
"Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection."
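To make the distinction concrete, here is a minimal Python sketch of what capability-based addressing means per access: every load/store is checked against an unforgeable (base, length, permissions) tuple. The names here (`Capability`, `PERM_*`) are invented for illustration; real CHERI enforces equivalent checks in hardware on tagged capability registers, not in library code.

```python
# Illustrative software model of a CHERI-style capability. Purely a
# sketch: all names are made up, and real hardware does these checks
# per instruction, with capabilities that software cannot forge.
PERM_READ, PERM_WRITE = 1, 2

class Capability:
    def __init__(self, mem, base, length, perms):
        self.mem, self.base, self.length, self.perms = mem, base, length, perms

    def store(self, offset, value):
        # Bounds and permission check on every store, analogous to
        # what the hardware would do before committing the access.
        if not (0 <= offset < self.length) or not (self.perms & PERM_WRITE):
            raise PermissionError("capability violation")  # hardware would trap
        self.mem[self.base + offset] = value

mem = bytearray(64)
cap = Capability(mem, base=16, length=8, perms=PERM_READ | PERM_WRITE)
cap.store(0, 42)       # in bounds: allowed
try:
    cap.store(8, 42)   # one past the end: denied
except PermissionError:
    print("out-of-bounds store trapped")
```

The point of contention in the thread is exactly this: whether making every pointer such a checked tuple is workable at scale, versus confining capabilities to an object-granularity model.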
The surviving capability architecture is System/38 -> AS/400 -> IBM i. A side effect of the better architecture is that they are also incredibly reliable. One person here described them as the tanks of their company's servers.
As a counterpoint to the counterpoint, see OpenBSD's recent compiler/toolchain work on ROP mitigations for clang, which replaces ROP-friendly instructions with safer alternatives, and RETGUARD, which completely replaced the traditional stack protector on amd64/arm64 in 6.5.
Despite what people might think, OpenBSD only enables mitigations that have an acceptable overhead. And because of this work, for example, the OpenBSD/arm64 kernel has _zero_ ROP gadgets at runtime, and on OpenBSD/amd64, common scripts like ROPGadget.py will often fail to find diverse enough gadgets to even exec a shell.
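For rough intuition about RETGUARD, here is a toy Python model of the idea: the function prologue XORs the return address with a per-function random cookie before saving it, and the epilogue undoes the XOR and aborts on a mismatch. This is only a sketch of the concept; the real mechanism is prologue/epilogue instructions emitted by the compiler, and the function names below are invented.

```python
import secrets

# Toy model of the RETGUARD idea: an attacker who overwrites the
# return address without knowing the per-function cookie is caught
# at function exit. Illustrative only; not the actual implementation.
cookie = secrets.randbits(64)  # per-function random cookie

def prologue(return_addr):
    # The value actually saved on the stack is retaddr ^ cookie,
    # so the raw return address never sits there in the clear.
    return return_addr ^ cookie

def epilogue(saved, return_addr):
    # Recover the address from the saved slot; if it no longer
    # matches the return address in use, the stack was smashed.
    if saved ^ cookie != return_addr:
        raise SystemExit("retguard: stack smashed")
    return return_addr

ra = 0x401234
saved = prologue(ra)
epilogue(saved, ra)            # normal return: check passes
try:
    epilogue(saved, 0xDEADBEEF)  # overwritten return address
except SystemExit as e:
    print(e)
```

A side effect worth noting: because the raw return address is never stored plainly, the saved slot itself is useless as a ROP gadget target without the cookie.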
I really wish people would stop publishing useful information as tweets. It's like publishing a research addendum on fortune-cookie slips. Besides being hard to read, it's easy to lose.
Unfortunately, the majority dictates the medium. As for the second point, if I consider the information worthwhile, I make a personal copy of the volatile medium (and this means not just social media, but any web page: if it's not on your computer, you don't control it, and it can disappear at any moment).
Maybe you can expand on why slowing down exploit development is written off as having zero value? If you're writing a zero-day, I can certainly concede that this week vs next week has no practical effect.
But most users' exposure to vulnerabilities comes in the window between a patch being released and the patch being applied. It takes some amount of time to reverse a patch and develop an exploit, and users are racing against that clock. The difference between just 24h and 48h can matter, no?
But SPARC and Solaris are basically dead, iOS isn't a general-purpose computing platform, and Android doesn't fare much better. Is there any way to actually get this on a useful desktop or workstation?
That's not an accurate statement. Oracle is still actively developing Solaris 11.4 and releasing regular updates, and they plan to support Solaris and SPARC systems until at least 2034. That does not sound very dead to me.
The mitigations probably do increase exploit difficulty. I still think most of the security is much like what we saw with Macs years ago: virtually nobody targets them. The mitigations from CompSci, such as CFI, often get broken when another team spends some time on them. Those talented folks probably only attack a tiny sample of designs, though. Much like black hats, who go wherever the users, data, and money are: the more return on an attack, the more they focus on that platform. That's mostly Windows, Linux, iOS, and Android.
Right now, I assume any "success" of OpenBSD's mitigations is a mix of (the small part) actual increases in difficulty and (the big part) the benefit of being an obscure system that the best breakers aren't interested in. As long as they're uninterested, it should stay safe from most attackers regardless of which mitigations it uses. Much like the Mac was with hardly any security.
We won't know if the mitigations actually work until the kind of people who break mitigations, white or black hat, start focusing hard on beating them one by one or bypassing them altogether (e.g. hardware exploits). Any talk of how strong they are is just speculation. The proof, if it turns up, would be prolonged attempts by highly talented people to break them, with consistent failure and/or less access/damage resulting.
No one likes hearing the truth. However, to bolster your point: ASLR. OpenBSD was the first to implement and ship it. These days it generally seems to be just a bullet point, one extra step taken in a given exploit PoC.
"The biggest success of OpenBSD's mitigations is that all the other operating systems adopt them."
That just means they're popular, not successful against attackers.
"All the attacks that these techniques mitigate on other operating systems are proof "
This would be evidence if it's true. So, what attacks do they mitigate? Were those attacks designed to bypass the mitigation or not? I find most assessments of mitigations focus on success against attacks that weren't designed with the mitigations in mind. Once a platform is important to attackers, they start focusing on the mitigations. Further, how do we assess whether hackers are having a hard time with the mitigations, given that they don't disclose most of their successes?
One guess I have is to look at exploit prices from firms such as Zerodium. I figure the prices will be much higher for targets that are harder to reliably exploit. If they're low, that might indicate they already have plenty of exploits for the target. If a price stays low or high over time, that might show how consistently easy or hard it was to exploit. One variable to eliminate, though, is a vendor raising prices on one or more items just to further incentivize 0-day hunters; that happened at least once with Zerodium.
Let's look at the prices for desktops and servers:
The highest payout is a Windows 0-day for remote code execution. There's no mention of Mac or Linux payouts for the same thing; instead, they offer $50,000 for privilege escalation on both. They also have $100-500k payouts for RCEs on services that often run on Linux hosts. These do get lots of eyeballs from bug hunters, some having been hammered on for a long time, which might drive prices up a bit. The combination suggests attackers mostly hit a service/app on Mac and Linux, followed by privilege escalation. In any case, Linux boxes cost them anywhere from $150k-550k for exploits, with Windows at about a million.
If it counts as field evidence, that would argue strongly against what many might suspect is the safest route to security. Whatever Linux is doing isn't working. Whatever Windows is doing is working really well. On mobile, iOS is the champ of comparable value. Note that they're not soliciting 0-days for OpenBSD on that page. That may or may not mean their clients don't care about it. If it did mean that, it would support my point about the OS being secure due to obscurity first, mitigations second.
So, I checked another article and found they're offering $500k for UNIX exploits, especially BSDs:
They explain that how many systems are impacted, and with what level of interactivity, largely determines the price. That implicitly corroborates what I said about attacker focus, too, if these incentives matter to them. For BSDs, I can't tell if the price is just to get a first-mover advantage against BSDs in their market or because they're harder to hack. We might have to wait a while. They did put a number on what Linux vulnerabilities were worth at the time: "as high as $45,000."
Maybe we should ask Zerodium whether they have an OpenBSD exploit, and with what level of interactivity. Although I doubt they'll say, maybe someone could sell them on the marketing benefit of disclosing it if they do have something. The OpenBSD devs might feel proud if the OS is consistently not on the list while the bounty goes up every year or something.
You didn't account for competition here. Because Windows is more popular, they have to offer more to keep things out of the hands of black-market buyers. Of course some researchers are honest (one would hope most) and will never sell on the black market, but there are always a few who will.
posix_spawn is fine for that point in particular: it encourages the use of exec to regenerate state in servers that fork child processes as workers, so that accidentally leaking information from a child doesn't also leak it from the original parent process.
posix_spawn-style interfaces aren't particularly friendly to spawning processes with reduced privileges, though, because you don't want to drop privileges in the parent.
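As a concrete sketch of the fork+exec-as-one-step idea, using Python's `os.posix_spawn` binding to the same POSIX call: the child below starts from a fresh process image, so nothing from the parent's heap exists in it, but there is also no hook to run code (such as a privilege drop) in the child between the fork and the exec, which is the limitation noted above.

```python
import os
import sys

# posix_spawn combines fork+exec into one operation: the child starts
# from a clean image, so parent state (heap secrets, etc.) is never
# present in it to begin with.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('fresh address space')"],
    dict(os.environ),
)

# Note: unlike a hand-rolled fork/setuid/exec sequence, there is no
# point at which our own code runs inside the child before exec;
# posix_spawn's file-actions mechanism only covers fd setup and the
# like, not arbitrary privilege dropping.
_, status = os.waitpid(pid, 0)
print("child exit code:", os.waitstatus_to_exitcode(status))
```

This is why the two goals pull apart: regenerating state favors spawn-style APIs, while fine-grained privilege reduction favors fork followed by explicit setup code in the child.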
I was at the talk and really enjoyed it. I was honestly surprised at the number of security features implemented in OpenBSD. I really like the idea of pledge.
Tangential, but it's so sad the room is almost empty. Just a sea of empty chairs.
I was toying with making a long trip to a 'con of some kind this year. The last one I went to was a PyCon in, I think, 2016. But are they dying?
Some of us didn't even hear it existed until the latest issue of 2600 dropped. The website normally updates around the new year, letting us plan to go; February rolled past without an update.
If you are asking whether cons in general are dying, I think that's a definite no. Every conference I've been to lately has seen tremendous growth: PyCon, PyCascades, LinuxFest NW, SeaGL, etc. However, that is just my personal experience.
Attendance is uneven by topic, though. I went to two talks at LFNW this year. One was packed, with no chairs left; the other was much lighter, with only 10 or so people in the room. It all depends on how much general interest a topic draws.
https://twitter.com/halvarflake/status/1125169161125158913