Wow, the Linux kernel must be full of dunces who, rather than writing provably correct C, added another language that other maintainers hate. What morons! /sarcasm
Disclaimer: the above is sarcasm. While I don't think Linux kernel maintainers are perfect, I don't consider them dunces either. If even they can't avoid writing UAF (use-after-free) or DF (double-free) bugs, then what hope does the average C programmer have?
I'm not sure what you're even trying to say. Do you think that formally verified C isn't a thing? Because it's not used in Linux?
Linux isn't designed to be formally verifiable. For all of the manpower and resources dedicated to it, it's still beholden to the technical debt of being a 30-year-old hobbyist experiment at its core. If you want a formally verified, secure kernel that wasn't written by a college student to see if he could, seL4 exists.
> Do you think that formally verified C isn't a thing? Because it's not used in Linux?
No. It exists, but is it practical to use in an OS? From what I gathered from HN, seL4 is very limited in scope, and extending that to a semi-usable OS (that uses "proven" C) is expensive in both time and money.
> still beholden to the technical debt of being a 30 year old hobbyist experiment
Citation needed. I admit it has limitations but it's not like the biggest corps in the world aren't working on it.
> seL4 is very limited in scope, and extending that to a semi-usable OS (that uses "proven" C) is expensive in both time and money.
Everything that has to meet a high standard of safety is like this, yes. It's for the exact same reason that unsafe exists in Rust and is completely inescapable: automatic correctness checking beyond the most trivial static analysis doesn't come for free, and it imposes intense design restrictions that make a lot of things very time-consuming to do.
You will never find a point anywhere in Linux's git history where a clean break occurred and the kernel was greenfielded with the express intent of replacing its design abstractions with ones more appropriate and ergonomic for formal analysis. If you'd like to express further skepticism, feel free to try to find that moment; it never happened.
Participation of large, well recognized corporations does not imply the kernel isn't a mess of technical debt. It actually implies worse.
While the memory safety issues are a concern, switching to Rust + unsafe will reduce but not eliminate those issues, and it is unclear whether adding that much complexity is actually worth it compared to other efforts to improve memory safety in C.
> switching to Rust + Unsafe will only reduce but not eliminate those issues
Except in practice the code written in Rust experiences no safety issues[1].
I've seen this argument a million times. It is one part FUD (making actual memory issues out to be bigger than they are) and one part Nirvana fallacy (arguments by which even Java isn't memory safe, because it eventually has to call into C).
> is actually worth it compared to other efforts to improve memory safety in C
As I alluded to before, I am sure the Linux kernel maintainers are aware of seL4. Which raises the question: why didn't they do it? It's in C, it proves everything, and it solves even more than Rust does, so why isn't it in the kernel?
I'm going to hazard a guess: the effort of going full formal proof for something like Linux would be through the roof. You would need a lifetime of LKML dedication.
Software written in Rust also experiences safety issues. If you believe otherwise, you are delusional. In the end, if you want extremely safe software there is no other way than formal verification (and even then there can still be hardware issues or bugs in the specification).
The world is not black-and-white. One can have more or fewer issues.
And the same goes for improving the situation. Stopping the world and rewriting everything is never going to fly. The only viable approach is gradual introduction of safer techniques, be it a language or verification options, that step by step improve things and make the bar of adoption minimal. Nobody is going to get convinced if this is cast as a culture war. The folks annoyed by the status quo need to bring the skeptics along, everything else is doomed to fail (as well-illustrated by all the discussions here).
> Software written in Rust also experiences safety issues.
Yes. And?
Seatbelts and airbags all experience safety issues, and the general public was against them for a very long time. It doesn't mean they don't increase safety. Or that because you could fall into a volcano, you should abolish seatbelts.
How about, for a start, software with no memory errors? How about more XSS and less UAF?
Also, not even formal proofs are enough. A smart enough attacker will use the gap between spec and implementation and target those errors.
And while I agree we need more formal proofs and invariant oriented software design, those options are even less popular than Haskell.
> A smart enough attacker will use the gap between spec and implementation and target those errors.
"Formal verification" literally means that the implementation is proven to satisfy the formal specification, so by definition there won't be any gaps. That's the whole point of formal verification.
But you have a point in that there will still be gaps: between specification and intent. That is, bugs in the specification. (Or maybe you mean "specification" to be some kind of informal description? That's closer to intent than an actual specification, which in the formal verification world is a formal and rigorous description of the required behavior.) Though those bugs would likely be significantly less prevalent... at least that's the expectation, since nobody has really been able to do this in the real world.
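To make the spec-versus-intent gap concrete, here's a toy Lean sketch (hypothetical definitions, not from any real project): the specification only demands that the output be sorted, so a "sort" that throws away all its input satisfies the proof perfectly. The proof is machine-checked and airtight; the specification is what's wrong.

```lean
-- A spec that captures "the output is sorted"
-- but forgets "the output is a permutation of the input".
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- This "sort" satisfies the spec while discarding all the data.
def badSort (_ : List Nat) : List Nat := []

-- Machine-checked, fully rigorous, and completely useless.
theorem badSort_sorted (xs : List Nat) : Sorted (badSort xs) := by
  simp [badSort, Sorted]
```

Real verified-sort specs include the permutation condition for exactly this reason; the point is that nothing in the proof machinery itself forces you to remember it.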
> "Formal verification" literally means that the implementation is proven to satisfy the formal specification, so by definition there won't be any gaps.
To quote Donald Knuth:
"Beware of bugs in the above code; I have only proved it correct, not tried it."
Sure, but the proof itself might have gaps. A famous proven implementation of mergesort had a subtle flaw when the array size approached the limits of usize: they proved it would sort; they didn't prove it wouldn't experience UB. Or maybe you prove it has no UB if there are no hardware errors (there are hardware errors), etc.
The "unbeatable" quantum cryptography was beaten in one instance because the attacker abused the physical properties of the detectors to get at the signal without alerting either side of the conversation.
> Or maybe you mean "specification" to be some kind of informal description?
I read somewhere that the secret to the hacker mindset is that they ignore what the model says and look at what is actually there. If they behaved as the spec writers expected, they would never be able to break in.
Reminds me of a place I worked. The doors were this heavy mass of wood and metal that would unlock with a loud thud (from several huge rods holding them secure). This was done to formally pass some ISO declaration for document security. The thing is, any sane robber would just bypass the door by breaking through the plaster walls on either side.
At least we can stop writing code made of plaster for start.
Then it's an incorrect proof. In the pen-and-paper era, that was a thing, but nowadays proofs are machine-checked and don't have "gaps" anymore.
Or your assumptions have a bug. That's exactly what I mean by specifications being potentially flawed, since all assumptions must be baked into the formal specification. If hardware can fail, then that must be specified; otherwise it's assumed to never fail. This never ends, and that's the whole point here.
I suppose we are trying to say the same thing, just not with the same terminology.