When software mangles data this badly, it is considered garbage and scrapped.
But if it is a half decent chatbot and has the label "AI", it gets another iteration with 10x the resources. This has happened a few times already.
It is a neat tool. It is very unreliable. Teams saying "just give us 10x as much resources so we can insist on this approach" is the hateful thing here.
I don't see the appeal of tooling that shields you from learning the (admittedly annoying and largely accidental) complexity in developing software.
It can only make accidental complexity grow and people's understanding diminish.
When the inevitable problems become apparent, you claim people should have understood better. Maybe using the tool that lets you avoid understanding things was a bad idea...
A manager hiring a team of real humans, vs. a manager hiring an AI, either way the manager doesn't know or learn how the system works.
And asking doesn't help, you can ask both humans and AI, and they'll be different in their strengths and weaknesses in those answers, but they'll both have them — the humans' answers come with their own inferential distance and that can be hard to bridge.
That's not the same. In this case, a machine made a decision that went against its instructions. If a machine makes decisions by itself, no one knows anything about the process. A team of humans making decisions benefits from multiple points of view, even if the manager is the one who approves what gets implemented or decides the course of the project.
Humans make mistakes too, and those can be critical (CrowdStrike), but letting machines decide, and build, and do everything just cuts humans out of the process, and with the current state of "AI", that's just dumb.
That's a very different problem than what I was replying to, which was about them being tools that "shield you from learning" and the claim that "using the tool that lets you avoid understanding things was a bad idea".
I agree that AI carries risks specifically because of memetic monoculture: while models can come from many different providers, and each instance even from the same provider can be asked to role-play many different approaches to combine multiple viewpoints, they're all still pretty similar. But the counterpoint is that while multiple humans working together can sometimes avoid this, we absolutely also get groupthink and other political dynamics that make us more alike than we ideally would be.
Also, you're comparing a group of humans vs. one AI. I meant one human vs. one AI.
Inflation is the answer. It never went away; it just got disguised as labor exploitation and quality decline. And now it's back. We will get the worst of both worlds: rising inflation and a low-quality, highly polluting, exploitative industry.
Us FORTH and LISP hackers will be doing free range code forever.
We can use cheap hardware that can be fixed with soldering irons and oscilloscopes.
People said for decades our projects just become weird DSLs. And now whatever little thing I want to do in any mainstream language involves learning some weird library DSL.
And now people need 24-hour GPU farm access just to handle code.
In 50 years, those of my grandkids who wish to will be able to build, repair, and program computers with a garage workbench and old wrinkled books. I know most of the software economy will end up in the hands of major corporations capable of paying through the nose for black-box low-code solutions.
Doesn't matter. Knowledge will set you free if you know where to look.