This might be possible outside the US, but in the US the internet has become reality. Trump tweets and it affects financial markets. People post on X, go viral, get hired by OpenAI. Filtering out news about institutional instability doesn't make institutions more stable, it just makes you less informed about it. And maybe one day you'll find yourself actually facing the consequences without knowing how you could have prevented it.
Hell, his tweets affect real world violence in the US. You have to keep an eye on his posts to figure out if there's going to be Nazi marches tomorrow.
I don't know what you're advocating for. Are you saying we shouldn't have any safety restrictions on AI because we're responsible for how we use the tool? The hardcore pornography industry got laws put in place requiring an ID to view its content, and pretty much every major AI company has harm-reduction measures in place to save users from themselves. So to some degree, society kind of agrees with the side you're arguing against.
Great products sell methodology, not just code. Great developers produce methodology. So what OpenAI bought isn't a developer, but a meta-methodology owner. It's a bet on Peter's mind to produce leading methodology for agent applications.
For Hacker News and Twitter. The agents being hooked up are basically clickbait generators, posting whatever content will get engagement from humans. It's good for a couple of screenshots, and then people forget about it. No one actually wants to spend their time reading AI slop comments that all sound the same.