Hacker News | cross's comments

We don't. When I'm there, I see it addressed as a communal responsibility. Not to be facetious, but the way it's done is a bit of a microcosm of Oxide itself: everyone just chips in and does parts of it as needed. I even filed a ticket to get a push-broom last time to make sweeping the floor easier.


Y'all are really scrubbing toilets?


But the real hero was the person who walked 100 yards to home depot and back with the push broom!


Linux won in large part because it was in the right place at the right time: freely available, rapidly improving in functionality and utility, and runnable on hardware people had access to at home.

BSD was mired in legal issues, the commercial Unix vendors were by and large determined to stay proprietary (only Solaris made a go of this and by that time it was years too late), and things like Hurd were bogged down in second-system perfectionism.

Had Linux started, say, nine months later, BSD might have won instead. Had Larry McVoy's "sourceware" proposal for SunOS been adopted by Sun, perhaps it would have won out. Of course, all of this is impossible to predict. But by the time BSD (for example) was out of the lawsuit woods, Linux had gained a foothold and the adoption gap was impossible to overcome.

At the end of the day, I think technical details had very little to do with it.


In early 1992 I emailed BSDI asking about the possibility of buying a copy of BSD/386 as a student; the $1000 they wanted was a little too high for me. I got an email back pointing me at an 'upstart OS' called Linux that would probably suit a CS student more, and was completely free. I think it was 0.13 I downloaded that week; it got renamed 0.95 a few weeks later. There was no X (I think 0.99pl6 was the first time I ran X on it, from a Yggdrasil disc in August 1992), but it was freedom from MSDOS.

Ironically, 386BSD would have been brewing at the same time with a roughly similar status.


I installed 386BSD for my university admin in 1992, I think. They paid me to do it, but otherwise it was free. Linux was not yet version 1.0, if I remember correctly.


Yes, 386BSD was free, and the precursor to FreeBSD, NetBSD and OpenBSD. BSD/386 was a different, commercial product though, available a few months earlier.

All three of them, BSD/386, Linux, and 386BSD, gained recognition over the span of about six months in 1992.


Yes, I know. Interesting that BSD/386 was pointing people at Linux. I guess they knew that 386BSD would eat their lunch. Perhaps they did not see Linux as real competition.


I installed 386BSD for my university admin in 1991, I think. They paid me to do it, but it was otherwise free.


And most commercial UNIXes would still be around, taking as they please out of BSD systems.


They are still around. And it doesn't seem they're taking much from BSD.

Solaris, AIX, HP-UX, and UnixWare could all use a shot of BSD. I was playing with UnixWare earlier today. Time capsule.


They are around, struggling, a shadow of the greatness they once were before everyone went Linux.

Additionally, Apple and Sony have already taken what they needed.


The first attempt appears to try to transfer ownership of the allocated memory from the Vec to C, so my first question is: why not allocate the returned memory using libc::malloc?

But I do recognize that the code in the post was a simplified example, and it's possible that the flexibility of `Vec` is actually used; perhaps elements are pushed into the `Vec` dynamically or something, and it would be inconvenient to simulate that with `libc::malloc` et al. But even then, in an environment that's not inherently memory starved, a viable approach might be to build up the data in a `Vec`, and then allocate a properly-sized region using `libc::malloc` and copy the data into it.

Another option might be to maintain something like a BTreeMap indexed by pointer on the Rust side, keeping track of the capacity there so it can be recovered on free.


> If you've been doing C for five decades, it's a shame not to have noticed that it's totally fine to pass a NULL pointer to free().

Or that calling one's FFI function inside of an `assert` means it will be compiled out if the `NDEBUG` macro is defined.


I'm not Bryan, obviously, but part of the answer here is that 100 servers running at 100% capacity is an absolute upper bound, but most of the time you're nowhere near that. Most of the time few things are at full capacity, which means that you can multiplex your physical hardware resources to increase utilization efficiency.


This is less of an issue for us at Oxide, since we control the hardware (and it is all modern hardware; just a relatively small subset of what exists out there). Part of Sun's issue was that it was tied not just to a software ecosystem, but also to an all-but-proprietary hardware architecture and surrounding platform. Sun eventually tried to move beyond SPARC and SBus/MBus, but they really only succeeded in the latter, not the former.


xv6 was originally written for 32-bit x86; the RISC-V port is a relatively recent development. See e.g. https://github.com/mit-pdos/xv6-public for some of the earlier history.

rxv64 was written for a specific purpose: we had to ramp up professional engineers on both 64-bit x86_64 and kernel development in Rust; we were pointing them to the MIT materials, which at the time still focused on x86, but they were getting tripped up by 32-bit-isms and the original PC peripherals (e.g., accessing the IDE disk via programmed IO). Interestingly, the non sequitur about C++ aside, porting to Rust exposed several bugs or omissions in the C original; fixes were contributed back to MIT and applied (and survived into the RISC-V port).

Oh, by the way, the use of the term "SMP" predates Intel's usage by decades.


(all points taken, all valid in their own right)

yes, the riscv arch is a relatively new thing, which now quietly powers every modern nvidia gpu, led to the final demise of MIPS, and for whatever reason made MIT PDOS abandon x86 as their OS teaching platform back in 2018 (ask them why).

but perhaps i didn’t stress my central point enough, which is “textbook”. Xv6 used to have a make target which produced 99 pages of pdf straight out of C code. i don’t think the latest riscv release (rev3) still has it, probably because it is no longer deemed necessary - the code now documents itself entirely, and the tree is down to just kernel and user(land), both implemented in consistent and uniform style.

rxv6, at least its userland, still seems to be written in C, which (correct me if i’m wrong) must be creating a lot of pressure on the rust kernel along the lines of ‘unsafe’ and ‘extern “C”’.

i only hope the said group of pro engineers who needed to be ramped up on all of that at the same time plus the essentials of how an OS even works got ramped up alright.

again, no offense. and not to start a holiwar “rust will never replace C”. why not, maybe it will, where appropriate. which is why the notion of C++ is a sequitur all the way.


> rxv6, at least its userland, still seems to be written in C, which (correct me if i’m wrong) must be creating a lot of pressure on the rust kernel along the lines of ‘unsafe’ and ‘extern “C”’.

Yes. I didn't feel the need to rewrite most of that. The C library is written in Rust, but as a demonstration, most of the userspace programs are C to show how one can invoke OS services. Are there unmangled `extern "C"` interfaces and some unsafe code? Yes.

Userspace interacts with the kernel via a well-defined interface: the kernel provides system calls, and userspace programs invoke those to request services from the kernel. The kernel doesn't particularly care what language userspace programs are written in; they could be C, Rust, C++, FORTRAN, etc. If they are able to make system calls using the kernel-defined interface, they should work (barring programmer error). Part of the reason rxv64 leaves userspace code in C is to demonstrate this.

The rxv64 kernel, however, is written almost entirely in Rust, with some assembly required.

> i only hope the said group of pro engineers who needed to be ramped up on all of that at the same time plus the essentials of how an OS even works got ramped up alright.

They did just fine.


okay, fair. i only got misled by the title of the post, which claims all-rust xv6 port.

now that we've cleared up the userland part, here's what I'm contemplating on the kernel side. i can't think of anything simpler and more of a staple than this, so:

https://github.com/dancrossnyc/rxv64/blob/main/kernel/src/ua...

https://github.com/mit-pdos/xv6-riscv/blob/riscv/kernel/uart...

honestly - i don’t feel at ease to tell which driver code is more instructional, which is easier to read, which is better documented, which is better covered with tests, which has more unsafety built into it (explicit or otherwise), what size are the object files, and what is easier to cross-compile and run on the designated target from, say, one of now-ubiquitous apple silicon devices.

lest we forget that the whole point of it is "pedagogical", i.e. to learn something about how a modern OS can be organized, and how a computer generally works.

and i’m just not sure.


Well, you're free to study both in detail and draw your own conclusions. But the UART driver in both is pretty uninteresting, and I suspect whatever conclusions one may draw from comparing the two will be generally specious.

Perhaps compare the process code, instead, and look at how the use of RAII around locks compares to explicit lock/unlock pairs in C, or compare how system calls are implemented: in rxv64, most syscalls are actually methods on the Proc type; by taking a reference to a proc, we know, statically, that the system call is always operating on the correct process, versus in C, where the "current" process is taken from the environment via per-CPU storage. Similarly with some of the code in the filesystem and block storage layer, where operations on a block are done by passing a thunk to `with_block`, which wraps a block in a `read`/`relse` pair.

Of course I'm biased here, but one of the nice things about Rust IMO is that it makes entire classes of problems unrepresentable. E.g., forgetting to release a lock in an error path, since the lock guard frees the lock automatically when it goes out of scope, or forgetting to `brelse` a block when you're done with it, if the block is manipulated inside of `bio::with_block`. Indeed, the ease of error handling let me make some semantic changes, where some things that caused the kernel to `panic` in response to a system call in xv6 are bubbled back up to userspace as errors in rxv64. (Generally speaking, a user program should not be able to make the kernel panic.)


thanks, this is useful, let me ponder. obviously i am also a subject of severe cognitive bias, granted.

but ok, if not the uart driver - what other direct comparison in the r/xv6 kernel spaces would you use to show where rust has a real hard edge over C?

not a loaded question, I’m seriously asking for a valid pointer (no pun intended)


Sorry, I thought my last message did give suggestions for things to compare?


Probably true. Porting rxv64 to RISC-V probably wouldn't be that big of a lift, honestly.


In a sense yes, but also no.

Plan 9 was built with the observation that high-resolution, bitmapped graphics displays were ubiquitous, and there was little motivation to keep the dated "tty" model as a basis for interaction. So you're expected to use a graphical interface for working with the system, but it doesn't _need_ to be `rio`.

I wrote a bit about this a few years ago: http://pub.gajendra.net/2016/05/plan9part1 (I guess I should get around to actually supporting HTTPS here....)


> Plan 9 was built with the observation that high-resolution, bitmapped graphics displays were ubiquitous

My impression is that Plan 9 was a child of the technical workstation age, when we thought the endgame was ever more powerful desktops that did most things locally and connected over a network to send and receive files. In the end, our desktops and laptops have been fast enough for the past decade or so, while servers are becoming increasingly like the mainframes whose days we thought were numbered, and a lot of the heavy lifting is done remotely. A lot of what I do involves firing up a surreally large cloud machine, running a couple of data transformations, and then unceremoniously deleting that big workstation (which is actually a small slice of a humongous server). The rest is mostly applications running on what I assume are clusters of cloud machines that expose an HTTP interface to my browser (a lot like the beloved 3278 terminal, but not nearly as clicky and tactile).


Plan 9 definitely wasn't built with the thought that the endgame was ever more powerful desktops doing most things locally. "Connected over a network to send and receive files" is only technically true: yes, it's all built around the 9P protocol, which is a filesystem protocol, but it's a world where everything is a file. A window on your screen is a file. A CPU is a file. An app is a directory with files in it. The Plan 9 vision is all the things you describe, but more seamless. Firing up a surreally large cloud machine isn't starting a VM with its own separate OS and configuration; it's connecting to a cloud CPU with your tasks transparently running on it. Applications running on a cluster of machines that expose an HTTP interface to your browser are instead applications running on a cluster of machines that expose a 9P interface to your OS. Your impression that "the endgame was having ever more powerful desktops that did most things locally" was the Unix trajectory, and steering away from that was the big departure of Plan 9 from Unix.


Yeah, it's really more just a shell at the moment.

