Well, sometimes your compiler will work out how to compile a thing more efficiently (e.g. vectorize a loop). Other times you'll rework some code into an equivalent formulation and suddenly it won't get vectorized, because you've tripped some invisible flag that prevents an inlining operation that was crucial to enabling that vectorization, and now that hot path runs at a quarter of the speed it did before.
Technically it's (usually) deterministic for a given input, and you can follow various best practices to increase your odds of triggering the right optimizations.
But practically speaking, "will I get the good & fast code for this formulation?" is a crapshoot, and something like 99% (99.99%?) of programmers live with that. (You have guidelines and best practices you can follow, kinda like steering agents, but you rarely get guarantees.)
Maybe in the future the vast majority of programmers put up with a non-deterministic & variable amount of correctness in their programs, similar to how most of us put up with a (in practice) non-deterministic & variable amount of performance in our programs now?
Not strictly true afaik? If you own the copyright to the entire codebase you can relicense at will to a different license. (that's what CLAs enable among other things)
Not sure whether you'd still be entitled to the source code under the previous license then. Can a copyright owner revoke a previously issued license to the code? I haven't heard of it, but it wouldn't surprise me if it's legal.
Sure, you can change the license, but the old license still applies to the code as it was before you changed it. Assuming you were using a legit open source license the first time around, nothing changes regarding how you can make use of the old code. All they can do is make it harder to find (close the repo) or harder to make use of (squash/flatten the commits to make it impossible to get the correct historical version), both of which are trivially bypassed by using a third-party fork or source release.
Presumably others will write the prompts (or equivalent directing mechanism) that will steer the generation, such that you can act out whatever fantasies interest you.
Likely a separate issue, but I also have massive slowdowns whenever the agent manages to read a particularly long line from a grep or similar (as in, multiple seconds before characters I type actually appear, and sometimes it's difficult to get claude code to register any keypresses at all).
I suspect it's because their "60 frames a second" layout logic is trying to render extremely long lines, maybe with some kind of wrapping being unnecessarily applied. They could obviously just trim the rendered output after the first, I dunno, 1000 characters of a line, but apparently nobody has had time to ask claude code to patch itself to do that.
FYI the sandbox feature is not fully baked and does not seem to be high priority.
For example, for the last 3 weeks, using the sandbox on Linux will almost always litter your repo root with a bunch of write-protected trash files[0] - there are 2 PRs open to fix it, but Anthropic employees have so far entirely ignored both the issue and the PRs.
Very frustrating, since models sometimes accidentally commit those files, so you have to add a bunch of junk to your gitignore. And with claude code being closed source and distributed as a bun standalone executable it's difficult to patch the bug yourself.
Hmm, very good point indeed. So far it’s behaved, but I also admit I wasn’t crazy about the outputs it gave me. We’ll see, Anthropic should probably think about their reputation if these issues are common enough.
I think state-of-the-art LLMs would pass the Turing test for 95% of people if those people could (text) chat with them in a time before LLM chatbots became widespread.
That is, the main thing that makes it possible to tell LLM bots apart from humans is that lots of us have over the past 3 years become highly attuned to specific foibles and text patterns which signal LLM generated text - much like how I can tell my close friends' writing apart by their use of vocabulary, punctuation, typical conversation topics, and evidence (or lack) of knowledge in certain domains.
Unity is Unreal Engine's biggest competitor by far. Godot competes with Unity (mostly for 2D games) but is at least a decade off being any threat to Unreal.
So yes, funding Godot is A Nice Thing To Do but it also conveniently puts a bit of pressure on Unity, their biggest competitor, without impacting their own business.
Also, if you believe Matthew Ball's take[0] then Epic is all-in on fostering as many gamedev-ish creators as it can so that it can loop them all into making content for its metaverse later. As you alluded to, in the long term funding a FOSS game engine which is focused on ease of use helps that too.
4 years ago I tackled exactly those courses (raytracer[0] first, then CPU rasterizer[1]) to learn the basics. And then, yes, I picked up a lib that's a thin wrapper around OpenGL (macroquad) and learned the basics of shaders.
So far this has been enough to build my prototype of a multiplayer Noita-like, with radiance-cascades-powered lighting. Still haven't learned Vulkan or WebGPU properly, though am now considering porting my game to the latter to get some modern niceties.