Not OP: how would you handle a second interrupt arriving during the interrupt handler here, then? I can see how you could use two separate ring buffers for different contexts, but I don't see how that handles the nested interrupt. Also, indeed, they just drop the samples that would deadlock.
Actually, as long as you use different ring buffers for interrupt and non-interrupt context, it should be fine to just drop the sample if you hit a deadlock caused by interrupting an already-running interrupt handler.
The code described is not nested interrupt handlers. It is eBPF code executing during a context switch which is interrupted by the sampling NMI which is also configured to execute eBPF code.
NMIs do not nest, so there is no risk of arbitrarily deep nesting. There should be at most three nesting levels: regular (task) context, interrupt context (I suspect they do not log during interrupts, so this level may not even arise in their use case), and non-maskable interrupt context.
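To make the per-context idea concrete, here is a minimal user-space sketch (not the kernel's or the article's actual implementation): one single-producer ring buffer per nesting level, so a writer can never be interrupted by another writer to the same buffer, and "full" is the only reason to drop.

```cpp
#include <array>
#include <atomic>
#include <cstdint>
#include <optional>

// Hypothetical sketch: one single-producer ring buffer per execution
// context (task, irq, nmi). Each context writes only to its own buffer,
// so a writer can never race a nested writer to the same buffer and
// nothing needs to block; a full buffer simply drops the sample.
enum Context { TASK = 0, IRQ = 1, NMI = 2, NUM_CONTEXTS = 3 };

template <std::size_t N>
struct Ring {
    std::array<std::uint64_t, N> slots{};
    std::atomic<std::size_t> head{0};  // advanced by the producer
    std::atomic<std::size_t> tail{0};  // advanced by the consumer

    bool push(std::uint64_t sample) {
        std::size_t h = head.load(std::memory_order_relaxed);
        std::size_t t = tail.load(std::memory_order_acquire);
        if (h - t == N) return false;  // full: drop the sample
        slots[h % N] = sample;
        head.store(h + 1, std::memory_order_release);
        return true;
    }
    std::optional<std::uint64_t> pop() {
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire)) return std::nullopt;
        std::uint64_t v = slots[t % N];
        tail.store(t + 1, std::memory_order_release);
        return v;
    }
};

std::array<Ring<64>, NUM_CONTEXTS> rings;  // one buffer per nesting level

bool log_sample(Context ctx, std::uint64_t sample) {
    return rings[ctx].push(sample);  // no state shared across contexts
}
```

Since the nesting depth is bounded (task, irq, nmi), three buffers cover every case; an NMI handler interrupting task-context logging writes to a different buffer entirely.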
Off the top of my head I can think of at least five distinct ways to avoid dropping the sample, with your idea of separate ring buffers being one of them.
One thing wasn't clear to me from the article: is there only one such ring buffer, defined by the kernel, or can an eBPF program define as many ring buffers as it wants?
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.
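A sketch of the guard the comment above describes, using C++17's `std::uncaught_exceptions()`: capture the count at construction, and only let the destructor throw if no new exception is unwinding (`File` and its "failing close" are hypothetical stand-ins).

```cpp
#include <exception>
#include <stdexcept>

// Sketch: a destructor that reports a failed cleanup by throwing, but
// only when no exception is already propagating. Destructors are
// noexcept by default, so this one must be marked noexcept(false).
class File {
    int exceptions_at_entry_ = std::uncaught_exceptions();
public:
    ~File() noexcept(false) {
        // Pretend the underlying close() failed and we want to report it.
        if (std::uncaught_exceptions() > exceptions_at_entry_) {
            // Already unwinding: throwing here would reach std::terminate,
            // so log-and-swallow is the only reasonable option.
            return;
        }
        throw std::runtime_error("close failed");
    }
};
```

On the happy path the destructor's exception propagates normally; when an exception is already live, the cleanup failure is silently swallowed, which is exactly the "total non-issue" trade-off being argued.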
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
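For what it's worth, the standard library does ship this exact mechanism: `std::throw_with_nested` stores the current exception as a "cause" inside the new one, and `std::rethrow_if_nested` unwraps it. A minimal sketch (the `close_file` scenario and messages are made up for illustration):

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Java-style "caused by" chains in C++: std::throw_with_nested wraps
// the in-flight exception inside the new one; std::rethrow_if_nested
// recovers the stored cause later.
void close_file() {
    try {
        throw std::runtime_error("I/O error on write");  // original failure
    } catch (...) {
        // Throw a new error that carries the current one as its cause.
        std::throw_with_nested(std::runtime_error("close failed"));
    }
}

std::string flatten_causes(const std::exception& e) {
    std::string msg = e.what();
    try {
        std::rethrow_if_nested(e);  // rethrows the stored cause, if any
    } catch (const std::exception& cause) {
        msg += " <- " + flatten_causes(cause);
    }
    return msg;
}
```

Whether anyone downstream actually walks that chain gracefully is, as the comment says, another matter.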
> The entire point of the article is that you cannot throw from a destructor.
You need to read the article again, because your assertion is patently false. You can throw and handle exceptions inside destructors. What you cannot do is let those exceptions escape the destructor: destructors are noexcept by default, so per the standard an exception leaving one immediately terminates the application via std::terminate.
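A minimal illustration of that distinction (the `Guard` type is a made-up example): throwing and catching entirely inside the destructor is fine; only an escaping exception is fatal.

```cpp
#include <stdexcept>

// A destructor may throw and catch internally. Letting the exception
// escape would hit std::terminate, because destructors are implicitly
// noexcept; handling it inside is perfectly legal.
struct Guard {
    bool* handled;
    ~Guard() {  // implicitly noexcept
        try {
            throw std::runtime_error("cleanup failed");
        } catch (const std::runtime_error&) {
            *handled = true;  // handled here; nothing escapes
        }
    }
};
```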
> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors
It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".
Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.
Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
Perhaps, but I fear you’re veering way too much into “clever” territory. Remember, this code has to be understandable to the junior members of the team! If you’re not careful you’ll end up with arcane operators, strange magic numbers, and a general unreadable mess.
The view transform doesn't necessarily have to be known to the fragment shader, though. That's usually the realm of the vertex shader, but even the vertex shader doesn't have to know how things correspond to screen coordinates, for example if your API of choice represents coordinates as floats in [-0.5, 0.5) and all you feed it is vertex positions. (I ran into that with wgpu-rs.) You can rasterize things perfectly fine with just vertex positions; in fact you can even hardcode vertex positions into the vertex shader and not have to input any coordinates at all.
Rasterizing and shading are two separate stages. You don’t need to know the pixel position when shading. You can wire up the pixel coordinates if you want, and they are often available nearby, but it’s not necessary. This gets even clearer when you do deferred shading: storing what you need in a G-buffer and running the shaders later, long after all rasterization is complete.
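To make that separation concrete, here is a toy CPU-side sketch (no real graphics API, everything hypothetical): the "rasterization" pass only records geometric attributes into a G-buffer, and the shading pass runs afterwards from the G-buffer alone, never touching rasterization state.

```cpp
#include <algorithm>
#include <array>

// Toy deferred-shading sketch: pass 1 fills a G-buffer with per-pixel
// surface normals; pass 2 computes lighting purely from the G-buffer,
// long after rasterization is done.
struct Normal { float x, y, z; };
constexpr int W = 4, H = 4;
std::array<Normal, W * H> gbuffer{};

void raster_pass() {
    // "Rasterize": record a geometric attribute per covered pixel.
    for (auto& n : gbuffer) n = {0.f, 0.f, 1.f};  // all facing the camera
}

float shade_pixel(const Normal& n) {
    // Simple Lambertian term against a fixed light direction (0, 0, 1);
    // note it needs no pixel coordinates at all.
    return std::max(0.f, n.z);
}

std::array<float, W * H> shading_pass() {
    std::array<float, W * H> out{};
    for (int i = 0; i < W * H; ++i) out[i] = shade_pixel(gbuffer[i]);
    return out;
}
```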
I'm not sure exactly what you mean, but you can either output line primitives directly from the mesh shader or output mitered/capped extruded lines as triangles.
As for other platforms, there's VK_EXT_line_rasterization, which is a port of OpenGL's line-drawing functionality to Vulkan.