Do you mean why I care that I have to call the free function at every exit point of the scope? That's easy: because it's error prone. Defer is much less error prone.
Of course people do "virtual functions" in C, but I don't think this is an argument against C.
I've noticed that making things virtual in C++ is sooo easy that people start abusing it. This makes reading/understanding/debugging code much harder (especially if they mix it up with templates).
And here C has an advantage: it allows "virtual" but makes it complicated, so you will think twice before using it.
Can someone explain the security considerations of placing scripts in $HOME?
Some time ago I moved all my scripts to /usr/local/bin, because I felt that this was better from a security perspective.
There are no security implications, on the contrary.
It is objectively cleaner to keep your user scripts in your home directory; that way they are only in _your_ PATH, whereas putting them in /usr/[local/]bin implicitly adds them for every [service] user on the machine, which I can see creating obscure undesired effects.
Not even mentioning the potential issues with packages that could override your scripts at install time, unexpected shadowing of service binaries, setuid security implications, etc.
Not every.
I didn't live through the 90s at all; I'm from the current millennium. But I am nostalgic about the 90s. It's strange, but I feel nostalgia for times I never lived through.
Rasterizing triangles is a nightmare, especially if performance is a goal. One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.
I did this stuff for a living 30 years ago. Just this week I had Deep Think create a 3D engine with triangle rasterizer in 16-bit x86 for the original IBM XT.
It does perspective-correct texture mapping, and from a quick count of the instructions in the main loop it's approximately 44 cycles per 8 pixels.
The process of solving the half-line equations used also doesn't suffer from any overlapping pixels or gaps, as long as shared edges use exactly the same endpoints and you use fixed-point arithmetic.
The key trick is to rework each line equation such that it's effectively x.dx+y.dy+C=0. You can then evaluate A=x.dx+y.dy+C at the top left of the square that encloses the triangle. Every pixel to the right, you can just add dx, and every pixel down, you can just add dy. The sign bit indicates whether the pixel is or isn't inside that side of the triangle, and you can AND/OR the 3 sides' sign bits together to determine whether a pixel is inside or outside the triangle. (Whether to use AND or OR depends on how you've decided to interpret the sign bit.)
(Apologies if my memory/terminology is a bit hazy on this - it was a very long time ago now!)
IIRC in terms of performance, this software implementation filling a 720p screen with perspective-correct texture mapped triangles could hit 60Hz using only 1 of the 7 SPUs, although they weren't overlapping so there was no overdraw. The biggest problem was actually saturating the memory bandwidth, because I wasn't caching the texture data, as an unconditional DMA fetch from main memory always completed before the values were needed later in the loop.
It's definitely not "fairly easy" once you get into perspective-correct texture mapping on the triangles, and making sure the pixels along the diagonal of a quad aren't all janky so the texture has an obvious line across it. Then you add on whatever methods you're using to light/shade it. It gets horrible really quickly. To me, at least!
The first link I posted, specifically lines 200-204 ( https://github.com/ralferoo/spugl/blob/master/pixelshaders/t... ), isn't quite what I remembered, as this seems to be doing a perspective-correct visualisation of s,t,k used for calculating mipmap levels and not actually doing the texture fetch; you'll have to forgive me, it's been 17 years since I looked at the code, so I forgot where everything is.
This is doing full perspective-correct texture mapping, including mipmapping, and then effectively doing trilinear filtering (GL_LINEAR_MIPMAP_LINEAR): sampling the 4 nearest pixels from 2 mipmap levels, blending each set of 4 pixels, and then interpolating between the mipmaps.
But anyway, to do any interpolation perspective-correctly, you need to interpolate 1/w linearly, exactly as you would interpolate r,g,b for flat colours or u,v for texture coords (which you interpolate as u/w, v/w). You then have 1 reciprocal per pixel to recover w, and then multiply all the interpolated parameters by that.
In terms of the "obvious line across it", it could be that you're just not clamping u and v between 0 and 1 (or whatever texture coordinates you're using), or clamping them instead of wrapping for a wrapped texture. And if you're not doing mipmapping and just doing nearest-pixel on a high-res texture, then you will get sparklies.
I've got a very old and poor quality video here, and it's kind of hard to see anything because it was filmed using a phone pointing at the screen: https://www.youtube.com/watch?v=U5o-01s5KQw
I don't have anything newer as I haven't turned on my linux PS3 for probably at least 15 years now, but even though it's low quality there's no obvious problem at the edges.
Forgot to add that when you're calculating these fixed values for each triangle, you also get hidden surface removal for free. If you have a consistent CW or CCW winding, the sign of the base value for C tells you whether the triangle is facing towards you or away.
I'm not sure I understand the problem you're having.
Obviously, if you have translucency, then you need to draw those objects last, but if you're using the half-line method, then two triangles that share an edge will follow the same edge exactly if you're using fixed-point math (and doing it properly, I guess!). A pixel will be in either one or the other, not both.
The only issue would be if you wanted to do MSAA; then yes, it gets more complicated, but I'd say it's conceptually simpler to render at 2x resolution and then downsample later. I didn't attempt to tackle MSAA, but one optimisation would be to write a 2x2 block from a single calculated pixel, but evaluate the half-line equation at the finer resolution to determine which of the 2x2 pixels receive the contribution. Then, after you render everything, do a 2x2 downsample on the final image.
As the other posters have shown, it's not that hard.
Most graphics specs will explicitly say how tie break rules work.
The key is to work in fixed point (16.8, or even 16.4 if you're feeling spicy). It's not "trivial", but in general you write it once and it's done. It's not something you have to go back to over and over for weird bugs.
> One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.
> I did this stuff for a living 30 years ago.
So you did CAD or something like that? Since that matters far less in games.
Macro hygiene, static initialization ordering, control over symbol export (no more detail namespaces), slightly higher ceiling for compile-time and optimization performance.
If these aren't compelling, there's no real reason.
Modules are the future, and the rules for them are well thought out. Every compiler has its own version of PCH, and they all work differently in annoying ways.
Not the OP, but note that adding a std::string to a POD type makes it non-POD. If you were doing something like using malloc() to make the struct (not recommended in C++!), then suddenly your std::string is uninitialized, and touching that object will be instant UB. Uninitialized primitives are benign unless read, but uninitialized objects are extremely dangerous.