Half-serious reason: because with each C++ version, we seem to get less and less of what we want and more and more inefficiency, in terms of both language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable, bug-free way, not an on-paper "completion".)
Compiler implementation is definitely becoming more complicated, but the language design has become progressively more efficient and nicer to use. I've been using C++20 in production for a long time; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren't usable, but you don't need to wait for that to use it.
Even C++23 is largely usable at this point, though there are still gaps for some features.
Funny how gcc seems to be the top dog now. What happened to clang? I thought its codebase was supposed to be easier and more pleasant to work with. Or maybe more hardcore compiler devs just work on gcc?
If you assume AGI that is better than humans and effectively free, of course it seems better.
But your assumptions are based on an idealized thing unrelated to anything that is shown.
No one is paying your wage for AI, full stop; you transition for cost savings, not "might as well". Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.
Robotics isn't new, but it hasn't destroyed blue-collar work yet (the US mostly lost blue-collar jobs for other reasons, not robotics), especially since robotics is very inflexible, leading to impedance problems when you have to adapt.
Mostly, though, the problem with your argument is that it basically boils down to nihilism. If an inevitability you have no control over has a chance of happening, you should generally not worry about it. It isn't like there are meaningful actions to take in your hypothetical, so it isn't important.
Ah yes people were making emulators because emulators weren't a solved problem...
That isn't why people made emulators. It is because it is an easy-to-solve problem that is tricky to get right, and it provides as much testable space as you are willing to spend working on it.
People rarely post proprietary code to GitHub. Most of it is under open licenses that generally only require attribution; some use a copyleft license.
Software patents are not copyright in any way; they are a completely different thing.
So this isn't AI getting back at the big guys; it is AI using open source code you could have used yourself if you had just followed the simple license.
Copyright, in regards to software, is effectively "if you directly use my code you need a license". This doesn't have any of the downsides of copyright in other fields, where it is mostly problematic for content that is generations old but still protected.
GitHub code tends to be relatively young, since the product has existed for less than twenty years and most things you find will be far younger than that on average.
But there's the rub. If you found the code on Github, you would have seen the "simple licence" which required you to either give an attribution, release your code under a specific licence, seek an alternative licence, or perform some other appropriate action.
But if the LLM generates the code for you, you don't know the conditions of the "simple license" in order to follow them. So you are probably violating the conditions of the original license, but because someone can try to say "I didn't copy that code, I just generated some new code using an LLM", they try to ignore the fact that it's based on some other code in a Github somewhere.
I don't believe any AI model has admitted to having access to private GitHub repos, unless you count instances where a business explicitly grants access to its own users' repositories.
Are people repeatedly handling merge conflicts on multiple machines?
If there were a better way to handle "I needed to merge in the middle of my PR work" without introducing reverse merges permanently into the history, I wouldn't mind merge commits.
But tools will sometimes skip over others' work if you `git pull` a change into your local repo, due to getting confused about which leg of the merge to follow.
One place where it mattered was when I was working on a large PHP web site, where backend devs and frontend devs would work in the same branch — this way you don't have to go back and forth to get the new API, and this workflow was quite unusual and, in my mind, quite efficient. The branches could also live for some time (e.g. in case of large refactorings), and it's a good idea to merge in the master branch frequently, so recursive merge was really nice. Nowadays, of course, you design the API for your frontend, mobile, etc. upfront, so there's little reason to do that anymore.
Honestly, if the tooling were better at keeping upstream on the left I wouldn't mind as much, but IIRC `git pull` puts your branch on the left, which means walking history requires analysing each merge commit to figure out which side is the real history and which is a temporary branch.
That is my main problem with merge, I think the commit ballooning is annoying too but that is easier to ignore.
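For what it's worth, both complaints have partial workarounds in stock git (branch names below are just illustrative): you can walk history following only each merge's first parent so temporary branches stay collapsed, and you can stop `git pull` from creating reverse merges at all by making it rebase:

```shell
# Follow only the first parent of each merge commit, so the mainline
# reads linearly and commits from merged-in side branches are hidden:
git log --first-parent --oneline main

# Avoid reverse merges when updating a local branch: replay your local
# commits on top of upstream instead of merging upstream into them.
git config pull.rebase true    # or per-invocation: git pull --rebase
```

This doesn't fix the "your branch ends up on the left" parent ordering of an existing merge, but first-parent traversal at least makes the mainline recoverable without inspecting each merge by hand.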