That is a very insightful point. It highlights the irony of complaining about "loss of control" immediately after voluntarily inviting an autonomous agent into the codebase.
Those specific logs are essentially a prop anyway. Removing them makes it harder to LARP as an active participant; it forces the realization that "we" are now just passive observers.
Because there are plenty of developers who'll say yes, so anyone saying no is putting their ethics ahead of their livelihood. Few people will be willing to put their beliefs ahead of providing for their family.
It's easy to say you will, and very hard to actually do it.
This is easy to say until you're an immigrant worker in a foreign country - a position one has probably worked toward their entire life up to that point - risking it all (and potentially wrecking the life of their entire family) just to stop some random utility from having a Copilot button. It's not "this software will be used to kill people"; it's more like "there's this extra toolbar which nobody uses".
I hadn't made more solid connections between the current state of software and industry, the subjugation of immigrants, and the death of the American neoliberal order until this comment thread, but here it lies bare, naked, and essentially impossible to ignore. With regard to the whole picture, there's no good or moral place to "RETVRN" to in a nostalgic sense. The one question that keeps ringing through my head, as I watch the world in constant upheaval and my one refuge of meaning, technical craftsmanship, tumbling, is: why did I not see this coming?
Because society in the US is arranged as a competition with no safety net, where your employer has a disproportionate amount of influence on your well-being and the happiness of your kids.
I'm not going to give up $1M in total comp and excellent insurance for my family because you and I don't like where AI is going.
Just having the option of giving up $1 million in compensation puts one far, far above any meaningful worries about your well-being and the happiness of your kids.
I'll have to explain it to the wife: "well, you see, we can't live in this house anymore because AI in Notepad was just too much".
I'll dial my ethical and moral stance on software up to 11 when I see a proper social safety net in this country, with free healthcare and free education.
And if we can't all agree on having even those vital things for free, then relying on collective agreement on software issues will never work in practice, so my sacrifice would be for nothing. I would just end up being the dumb idealist.
I don't think you should make any change you don't want to, I'm not arguing for collective agreement on anything, and I'm not convinced there's a big ethical case for or against AI, even in Notepad.exe. If you can make $1M, go nuts, I just think it's not a great example of dealing with ethics & tradeoffs.
I was more just reacting to the contrast between ideas earlier in this thread and your implication of a $1M comp. Earlier in the thread there was an implication that poor/exploited/low-level workers with few other options were either being blamed for AI in Notepad, or should not be blamed. Then you casually drop the $1M comp line. Maybe that's real, maybe it's not, but regardless, it felt silly to compare the earlier population with people who can make or have made $1M. Of course we all face challenges, and the hedonic treadmill calls to us equally at $1K/year and $1M/year; I just think people in the latter group have objectively more options, even if the wife complains, than people in the former, and it's tough to take the latter seriously when they talk about lifestyle adjustments.
Your solution of having us all agree to do the same thing is not realistic, for the same reason that recycling doesn't really work, why we have a myriad of programming languages and similar-but-incompatible hardware, etc.
There is always someone who will take advantage of the prisoner's dilemma.
The silver lining could be that people just spend more time in the real world, discussing important things, which is definitely good for people's autonomy and freedom. That's why I'm not too bothered by AI slop, for instance, making the internet a worse, less rewarding/novel place in general.
It's their way of attempting to fight user churn. Forums need all the help they can get in that regard given the attention economy of today and the giants they're attempting to fight against. Anything novel is a win.
Conspiracy theory: those long-tail videos are made by them, so they can send you to a "preferable content" page with a video (people would rather watch a video than read, etc.), which can serve ads.
I mean, perhaps. I don't know what lm28469 mentions; perhaps I can test it, but I feel like those LLM-generated videos would be some days/months old. If I submit a prompt right now and the video is, say, 1-4 months old, then the conspiracy theory falls short.
Unless... Vsauce music starts playing... someone else had created a similar query some time beforehand, and Google generated the video from a random account after a random delay (100% possible for Google to do) to then serve it to you later.
Like their AI model is just a frontend to get you hooked on a YouTube video which can show ads.
Hm...
Must admit that the chances of it happening are slim, but never quite zero, I guess.