X11's problems were rooted in the abstractions presented by the X11 core protocol and its extension mechanisms. The interface was the problem, not merely the implementation.
Wayland was correct in first focusing on replacing this interface. The problem is the effort stopped there and left the ecosystem to figure out the implementation part.
Don't take it from me, Daniel Stone has a whole talk on the motivation for Wayland: https://youtu.be/RIctzAQOe44
Broadly, the X Server has a bunch of capabilities which are irrelevant. The modern model is really Window <-> Compositor based, and the X Server protocol is just a pointless middle man in that exchange.
Sorry, I know this nonsense comes from Daniel Stone; I am interested in why you believe it.
"compatibilities which are irrelevant" do exist, such as old drawing primitives but those are not really an issue. They can be maintained for backwards compatibility and eventually deprecated and removed, but they are not anything which holds back modern clients. The X server protocol is not anymore a "pointless middle man in that exchange" than the Wayland protocol is a "pointless middle man". A protocol is obviously needed so can not be pointless.
I believe it for the reasons explained by Daniel Stone; there is no difference between my opinion and his on the topic of X's inefficiency as an IPC middleman.
In X11, the problem was the X server. Now, X11's design philosophy was hopelessly broken and needed to be replaced, but the server was never actually replaced with anything concrete. As you correctly point out, there is no "Wayland"; Wayland is a methodology, a description of how one might implement the technologies necessary to replace X11.
This has led to hopeless fracturing and replication of effort. Every WM is forced to become an entire compositor and partial desktop environment, which they inevitably fail at. In turn, application developers cannot rely on the protocol extensions that cover necessary desktop program behavior being available or working consistently.
This manifests in users feeling the ecosystem is forever broken, because for them, on their machine, some part of it is.
There is no longer one central broken component to be fixed. There are hundreds of scattered, slightly broken components.
I maintain Red Hat backed it as part of a play to make it harder to develop competing distros that aren’t basically identical to Red Hat’s product.
Their actions on systemd, Wayland, plus gnome and associated tech, sure look like classic “fire and motion”. Everyone else has to play catch-up, and they steer enough incompatible-with-alternatives default choices that it’s a ton of work and may involve serious compromises to resist just doing whatever they do.
Wayland is far more aligned with the Unix philosophy than Xorg ever was. Xorg was a giant, monolithic, do everything app.
The Unix philosophy is fragmentation into tiny pieces, each doing one thing and hoping everyone else conforms to the same interfaces. Piping commands between processes and hoping for the best. That's exactly how Wayland works, although not in plain text because that would be a step too far even for Wayland.
Some stuff should not follow the Unix philosophy, PID 1 and the compositor are chief examples of things that should not. It is better to have everything centralized for these processes.
In X you have the server, window manager, compositing manager, and clients, all coupled by a very flexible protocol. This seems nicely split and aligned with the Unix philosophy to me. It also works very well, so I do not think this should be monolithic.
If we're going to be pedantic, mmap is a syscall. It happens that the C version is standardized by POSIX.
The underlying syscall doesn't use the C ABI, you need to wrap it to use it from C in the same way you need to wrap it to use it from any language, which is exactly what glibc and friends do.
Moral of the story is mmap belongs to the platform, not the language.
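To make the "wrap it" step concrete, here is a minimal sketch of such a wrapper, assuming x86-64 Linux; the syscall numbers, flag values, and the `syscall6` helper are illustrative and deliberately non-portable:

```c
/* Sketch: invoking the Linux mmap syscall directly, no libc involved.
 * Build: gcc -static -nostdlib -nostartfiles raw_mmap.c
 * Numbers and flags below are the x86-64 Linux values only. */

#define SYS_mmap   9
#define SYS_write  1
#define SYS_exit   60

#define PROT_READ     0x1
#define PROT_WRITE    0x2
#define MAP_PRIVATE   0x02
#define MAP_ANONYMOUS 0x20

/* Generic 6-argument syscall, per the kernel's own convention:
 * number in rax, args in rdi/rsi/rdx/r10/r8/r9. */
static long syscall6(long n, long a1, long a2, long a3,
                     long a4, long a5, long a6) {
    register long r10 __asm__("r10") = a4;
    register long r8  __asm__("r8")  = a5;
    register long r9  __asm__("r9")  = a6;
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a1), "S"(a2), "d"(a3),
                        "r"(r10), "r"(r8), "r"(r9)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void) {
    /* Map one anonymous page, the moral equivalent of
     * mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0). */
    long p = syscall6(SYS_mmap, 0, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p < 0)                       /* kernel returns -errno on failure */
        syscall6(SYS_exit, 1, 0, 0, 0, 0, 0);
    char *buf = (char *)p;
    buf[0] = '!';
    buf[1] = '\n';
    syscall6(SYS_write, 1, p, 2, 0, 0, 0);
    syscall6(SYS_exit, 0, 0, 0, 0, 0, 0);
}
```

A real libc wrapper is essentially this plus errno translation and per-architecture quirks.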
No, that's too far down the pedantry rabbit hole. "mmap()" is quite literally a C function in the 4.2BSD libc. It happens to wrap a system call of the same name, but to claim that they are different when they arrived in the same software and were written by the same author at the same time is straining the argument past the breaking point. You now have a "C Erasure Polemic" and not a clarifying comment.
If you take a kernel written in C and implement a VM system for it in C and expose a new API for it to be used by userspace processes written in C, it doesn't magically become "not C" just because there's a hardware trap in the middle somewhere.
and if i directly do an mmap syscall on linux from a freestanding forth that doesn't go through libc for anything? sure, c unfortunately defines how i have to, say, pass a string, but that's effectively an arbitrary calling convention at that point; there's no c runtime on the calling side, so it's not particularly useful to contend that what i'm using is a c api.
or perhaps mmap is incontrovertibly a c function on platforms where libc wrappers are the sole stable interface to the kernel but something else entirely on linux?
> and if i directly do an mmap syscall on linux from a freestanding forth
... mmap() remains a system call to a C kernel designed for use from the C library in C programs, and you're running what amounts to an emulator.
The fact that you can imagine[1] an environment where that might not be the case doesn't mean that it isn't the case in the real world.
Your argument appears to be one of Personal Liberty: de facto truths don't matter because you can just make your own. This is sort of a software variant of a Sovereign Citizen, I think.
[1] Can you even link a "freestanding forth" with an mmap() binding on any Unix that doesn't live above the libc implementation? I mean, absent everything else it would have to open code all the flag constants, whose values change between systems. This appears to be a completely fictitious runtime you've invented, which if anything sits as evidence in my favor and not yours.
i'm not so much imagining an environment per se¹ as describing one i've already written, so i'm not entirely sure where any of this is coming from. if you care to have some additional assurance this isn't somehow an elaborate rhetorical trap, a previous comment about forth tail call elimination with a bit of demonstrative assembly is presumably only a short scroll down my profile. ctrl-f for cmov if you want to find it quickly. as i recall, it came up for similar reasons then because people often make similar incorrect generalizations about lots of things that implicitly sit atop a c runtime in their minds. that said, you're the first one to call me a sovcit before asking any clarifying questions so at least there's some new pizzazz there.
i was clear that i was talking specifically about linux precisely because this isn't something one can do portably for exactly the reasons you're describing (which, yes, makes porting things built like this off of linux before the point you've built up enough to be able to go through libc annoying and ad hoc at the very least).
the fact remains that i can, right now, non-theoretically, on a well supported common unixlike os, and entirely unrelated to whatever weird crusade you seem to have invented to stand in for my side of this discussion, link a pile of assembly with -static -nolibc, fire up the repl, and mmap files into memory as i please with nary a bit of c on the userspace side.
as i originally said, i'm happy to consider linux a weird exception to the point you're making in a wider context since this isn't something you can do portably, but there still are entirely useful things one can do today with mmap that involve zero userspace c code on a widely supported platform.
edit: lol forgot to even get to this part. i'm also somewhat curious what you mean with this bit: "you're running what amounts to an emulator." perhaps i'm not firing on all cylinders today but i fail to see how it's useful to characterize performing bare syscalls from assembly (or something more high-level built out of assembly legos) as an emulator in any way, but i'm open to having missed some interesting nuance there.
¹ unless you mean trivially (seeing as this is code i imagined and then proceeded to write) in which case i suppose i agree
> If the user applications are going to request huge pages using mmap system call, then it is required that system administrator mount a file system of type hugetlbfs
Note this otherwise has semantics similar to tmpfs; notably, its usage is mutually exclusive with being able to supply a disk file fd to mmap!
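For the curious, a minimal sketch of that flow, assuming the admin has already done `mount -t hugetlbfs none /mnt/huge` (the mount point and file name are placeholders, and 2 MiB is assumed as the default huge page size):

```c
/* Sketch: mmap'ing a file that lives on a hugetlbfs mount. The fd is
 * handed to mmap like any other, but the "file" is backed by huge
 * pages in memory, tmpfs-style, not by a disk file. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LENGTH (2 * 1024 * 1024)  /* one 2 MiB huge page */

int main(void) {
    int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';                /* faults in a huge page */
    munmap(p, LENGTH);
    close(fd);
    unlink("/mnt/huge/example");
    return 0;
}
```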
On BSD, read() was already implemented in the kernel by page-faulting in the desired pages of the file, to then be copied into the user-supplied buffer. So from the first time mmap was ever implemented, it was always the fastest input mechanism. (The first deployed implementation was in SunOS, btw; 4.2BSD specified and documented it but didn't implement it.) Anyway, there's no magic to get data off a device into memory faster; io_uring just lets you hide the delay in some other thread's time.
mmap is slow because stalling on page faults is slow. Your process stalls and sits around doing nothing instead of processing data you've read already. You can google the benchmarks if you like. io_uring wasn't built just for kicks.
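To illustrate the stall being described, a toy sketch of the two input paths (file name from argv; nothing here is tuned, and after one pass the page cache is warm, so don't read too much into naive timings):

```c
/* Toy comparison of the two input mechanisms discussed above.
 * sum_read() waits inside read() but then processes a whole buffer;
 * sum_mmap() instead takes a page fault, and a stall, the first time
 * each new page is touched. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static uint64_t sum_read(int fd) {
    static unsigned char buf[1 << 16];
    uint64_t sum = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* kernel copies pages into buf */
        for (ssize_t i = 0; i < n; i++)
            sum += buf[i];
    return sum;
}

static uint64_t sum_mmap(int fd, size_t len) {
    unsigned char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 0;
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)  /* each untouched page faults here */
        sum += p[i];
    munmap(p, len);
    return sum;
}

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) return 1;
    struct stat st;
    fstat(fd, &st);
    printf("mmap: %llu\n", (unsigned long long)sum_mmap(fd, st.st_size));
    lseek(fd, 0, SEEK_SET);
    printf("read: %llu\n", (unsigned long long)sum_read(fd));
    close(fd);
    return 0;
}
```

madvise(MADV_SEQUENTIAL) and friends can hide some of those faults via readahead, which is the same kind of latency hiding io_uring generalizes.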
Compilers aren't deterministic in small ways: timestamps, paths encoded into debug information, etc. These are trivial, annoyances to reproducible-build people and little else.
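A toy instance of that kind of nondeterminism: build this at two different times and the binaries differ, through no fault of the source.

```c
/* The binary embeds the build timestamp, so two otherwise identical
 * builds differ byte-for-byte. Reproducible-build projects patch this
 * sort of thing out (e.g. via the SOURCE_DATE_EPOCH convention). */
#include <stdio.h>

int main(void) {
    printf("built %s %s\n", __DATE__, __TIME__);
    return 0;
}
```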
You cannot take these trivial reproducibility issues and extrapolate out to "determinism doesn't matter, therefore LLMs are fine". You cannot throw a ball in the air, determine it is trivial to launch an object a few feet, and thus conclude a trip to the moon is similarly easy.
The magnitude matters, not merely the category. Handwaving magnitude is a massive red flag that a speaker has no idea what they're talking about.
And the result of that magnitude is that the paradigm of operation is just completely different. Good programmers create inputs, check outputs, and build up a mental model of the system. When the input -> output mapping is not well defined, you can't use those same skills.
> a paid, invite-only social network where every person is verified human and there's no algorithm
This seems like an incredibly niche product that only a handful of people are interested in to begin with. It isn't a notable or surprising result that building it generated little interest from general audiences.
At the same time, I see the appeal. I feel like 10% of the comments I read lately are "is this an AI response?" - would be nice to be free of that. Probably not possible tho.
Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same-day. The ultimate was 0-day software, software which was not yet available to the general public.
In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.
It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".
Wikipedia: A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it
This seems logical, since by the etymology of zero-day it should apply to the release (= disclosure) of a vuln.
Properly manage PATH for the context you're in and this is a non-issue. This is the approach used by most programming environments these days: you don't carry around the entire npm or PyPI ecosystem all the time, only when you activate it.
Then again, I don't really believe in performing complex operations manually and directly from a shell, so I don't really understand the use-case for having many small utilities in PATH to begin with.
Click any protocol; very few outside the core and the absolutely essential extensions have universal support.