It’s partly that, but it’s also partly that the quality SUCKS. I’m frustrated with AI blogspam because it doesn’t in any way help me figure out whatever I’m researching. It’s such low quality. What I want and need is higher quality primary sources — in-depth research and investigation, presented in an engaging way. Or with movies and shows, I want something genuine. With a genuine story that feels real, characters that feel real and motivated.
AI is fake, it feels fake, and it’s obvious. It’s mind blowing to me that executives think people want fake crap. Sure, people are susceptible to it, and get engaged by it, but it’s not exactly what people want or aspire to.
I want something real, something that makes me feel. AI generated content is by definition fake and not genuine. A human is by definition not putting as much thought and effort into their work when they use AI.
Now someone could put a lot of thought and effort into a project and also use gen AI, but that’s not what’s getting spammed across the internet. AI is low-effort, so of course the pure volume of low effort garbage is going to surpass the volume of high effort quality content.
So it’s basically not possible to like what AI is putting out, generally speaking.
As a productivity enhancer in a small role, sure it’s useful, but that’s not what we’re complaining about.
> Soon ChatGPT will start to weave ads into their output because they'll need to make $.
AI enthusiasts need to anticipate that. We're in the VC subsidy phase, but the hammer will drop sooner or later. If you think ads are bad on Google and Facebook now, just imagine a Google that has to spend 100x more on compute to service your requests.
Plus the incentives are totally fucked. It's bad enough with search, it's far worse with AI.
Nobody (referring to companies) wants the best model. Or the one that gives the right answer the first time 100% of the time. They want the model that's just good enough to keep you prompting, but just bad enough that you use a fuck load of tokens and see a million ads.
Unless they start making these things more expensive, pretty soon developers are going to start seeing ads in the comments of their damn source code. Or worse, suggestions to use paid services to solve all your problems, because companies paid to have the LLM shill their products.
Yes exactly. In a decade or two people will wistfully look back on 2025 era GenAI similar to how people currently remember when Google Search was great 10+ years ago. The AI enshittification will probably be even more dramatic though, in part because of the immense cost. Despite getting more abusive every year, Google ads are at least still identifiable if you know to look for them. Once AI training weights start getting heavily manipulated for corporate and political reasons things could get pretty ugly.
AI posts/comments are meant to disenfranchise. AI posts/comments are made to flood the market of opinion and drown out the voice of the individual, to speed up dead internet theory and to remove the adversarial nature the old, open internet had with entrenched power/control (especially on narratives).
Sincerely, thanks for making this point. It may explain some subtle oddities (which I can't accurately reproduce) I've observed using Copilot in my daily workflow.
I mean, it gives product recommendations when you ask it to, so it's already doing that, I'm sure. It might not be making money by giving specific recommendations; but I bet it's at least getting money off Amazon referral links.
I experimented with some ai generated political spam on YouTube. The reality is a lot of people can’t tell the difference or don’t care. Given the demographic this site selects for, it’s easy to forget how many dumb people there are in the world.
> Given the demographic this site selects for, it’s easy to forget how many dumb people there are in the world.
Let's not go there.
I could see an argument that Hacker News users are a bit more book smart than the average internet user. But this site's user base is just as susceptible to motivated reasoning, myopia, and lack of empathy for those who view the world differently than them.
Those are all their own kind of intelligence. If anything, the book smarts can make those other areas disproportionately worse.
Once someone decides their intelligence guarantees their correctness, they stop questioning themselves. There was a conversation here about Israel starving Gaza, where people said Israel needed to provide for Gaza like the USA did for Germany, and that that was the acceptable/gold-standard treatment of an occupied populace. When I looked up the numbers, Israel was, even during the starvation, actually providing more calories per person than the USA did for Germany. I was instantly downvoted and told people need more calories. Just for providing actual context to claims that were routinely being made.
Edit: I'm rate limited from replying below. HN routinely chose to whitelist flagged Gaza discussions, but didn't restore the comments of people who stated the minority opinion, which were completely flagged into invisibility. If you arrived late and didn't get to read the original non-offensive but viewpoint-challenging comments, you would assume everything from the 'wrong' viewpoint was so unhinged it had to be flagged, but many were just 'wrongthink' and not 'flag to invisibility' worthy. Or you'd assume there was group consensus on the discussion (obviously people just learned to stop posting on those threads if they had wrongthink).
Not sure how moderation can intervene, remove the topic flag and say it's 'a worthwhile discussion for HN' when the same moderation allows views/challenge of the narrative to be flagged to invisibility. It becomes more pontification than discussion at that point.
To someone who used to run a community, it is absolutely insane to me that on HN, users are given the tools to collectively censor people they disagree with, or even those who bring up inconvenient questions or truths.
HN moderators have the ability to take away people's voting privileges. It's either not an effective deterrent, not done at a large enough scale to be effective, or they are knowingly complicit in the manipulation.
Unfortunately, there are whole industries built around fooling people. It seems to me that we select for people who are good at fooling others, and they become wealthy and successful. What would be nice is if we could start selecting for knowledgeable, helpful, and useful people instead, so that insane amounts of money might actually bring some benefit to humanity rather than being spent on plunging us into dystopia.
I know people who get confused and consume AI content but when you point out that it's AI they're embarrassed they were fooled and upset. I've never heard the response "I don't care that it's AI." The tech bros will say that it's a "revealed preference" for AI, but it's really just tricking people into engaging.
I caught my mom watching a bunch of AI impersonations of musicians on YouTube, singing slop that rarely rhymed or had any kind of message in the lyrics, with super formulaic arrangements. I asked her what she liked about them and it was like "they seem well made." So I showed her how easy Suno is to use, pointed out some of the bad missed rhymes, and transcribed the lyrics so she could see there wasn't any there there to any part of it (and how easy it is to get LLMs to generate better). It seemed to have been an antidote.
This is stuff that used to take effort and was worth consuming just for that, and lots of people don't have their filter adjusted (much as the early advent of consumer-facing email spam) to account for how low effort and plentiful these forms of content are.
I can only hope that people raise their filters to a point where scrutinizing everything becomes common place and a message existing doesn't lend it any assumed legitimacy. Maybe AI will be the poison for propaganda (but I'm not holding my breath).
The issue is that one could reasonably argue that about 95% of pop music was already formulaic slop. Not just pejoratively, but much of it was even made by the same people. Everybody from Britney Spears to Taylor Swift and more modern acts has been driven by one guy, Max Martin. [1]
Once you see the songs he's credited with, you instantly start to realize it's painfully formulaic, but most people are happy to just bop their head to his formula of highly repetitive beats paired with simplistic and easy to sing 5-beat choruses.
Adam Conover discussed ad bumpers from the 1990s and 2000s. These were legal requirements for children's programming from the FCC. They're a compliance item, yet they were incredibly well made and creative in many cases:
Because people at the top of their game will do great creative work even when doing commercial art, and in many cases will do way more than is perhaps commercially necessary.
So much of this AI push reminds me of the scene in 1984 where they had pornography generating machines creating completely uninspired formulaic brainrot stories by machine to occupy the proles.
You can take a thousand people and give them baseline technical skills for any medium. If you're lucky, a few people out of your thousand will have a special kind of fluency that makes them stand out from the rest.
Even more rarely you'll get someone who eats the technical skills alive and adds something original and unique which pushes them outside of the usual recycled tropes and cliches.
Martin is somewhere between those two. He's not a genius, but he's a rock solid pop writer, with a unique ear for hooks and drama, and stand-out arrangement skills.
Some of us are just here to make fun of VC folks that deserve to be endlessly mocked. The crossover with those that are also AI hype types just makes it funnier.
> AI is fake, it feels fake, and it’s obvious. It’s mind blowing to me that executives think people want fake crap.
I'm not sure if they actually think that. I think it's more likely it's some combination of 1) saying what they need to say based on their goals (I need to sell this, therefore I will say it's good and it's something you should want) and 2) a contempt for their audience (I'm a clever guy and I can make those suckers do what I want).
My problem with 2) is: it works. If you look at most movies, both the way they are made and the stories they tell are really subpar these days, and this has grown worse in the last decade or so. It's not the fact that CGI is used, it's the really lazy and sloppy way that it's used. Same goes for the stories: there are so few films that have a real story to tell that I have mostly stopped watching movies at all.
I'm totally convinced the industry can sell AI generated media just fine, even with the attitude you described.
EDIT: In a similar vein, the settings of movies/series are equally minimised, particularly in fantasy. Take for example Winterfell in Game of Thrones. This setting could never have worked in reality, and yet people loved it. Bret Devereaux pointed out how silly it was, and still: https://acoup.blog/2019/07/12/collections-the-lonely-city-pa...
CGI and Photoshop filters were 'fake' too. Until they weren't.
Every single time {something more convenient} got invented, the supporters of the {older, less convenient thing} would criticize it to death.
Oil painting is considered serious art now. Probably the most serious medium in traditional art schools. But in Michelangelo's time, he insisted on using fresco because he believed oil was "an art for women and for leisurely and idle people like Fra Sebastiano."[0]
Fast-forward 100 years, and oil had replaced tempera and fresco.
Another example: Frank Frazetta insisted he didn't use references, except he did all the time[1]. Why? We'll never know the exact reason, but it might be that using photos as references was considered 'lesser.' And now it's completely normal, even the norm.
Looking back through art history, gen-AI art seems awfully inevitable.
>CGI and Photoshop filters were 'fake' too. Until they weren't.
IMHO they still are, watch any old movie with practical effects (Aliens, Star Wars, just to name 2) and compare them to any 2025 production, green screen movies might look spectacular but they look fake, flat and boring.
True pretty much across the board for all generative AI, IMO.
I do understand why people get somewhat enamored with it when they first encounter it because there is a superficial magic to it when you first start using it.
But use it for a while (or view the output of other people's uses) and all the limitations and repetitiveness starts to become pretty obvious, and then after a while that's all you see.
AI has a quality problem because of GIGO, and that's accelerating.
It simply vacuums up everything, and over the past decade, everything has gotten more and more shit.
Information entropy crossed with physical entropy. These MBAs will never invest in weeding out the garbage, and the rest of us will never get paid enough to do it ourselves.
I don’t really buy this argument. It assumes companies like OpenAI are incentivized to be unselective about training material. Instead what we see is things like making deals with Reddit for known valuable data. I don’t think any AI operator is training on brand-new unvetted spam websites by default now.
Oh god. Reddit has a select amount of good data compared to other sources. But if you actually read through a thread, you will find absolutely random things upvoted that stray into the zeitgeist.
If you give a valid opinion in the wrong subreddit you get muted. The inverse is also true. You are using a filter these AIs don't.
Ironically, AI blogspam, because it’s disingenuous and because Google’s PageRank has been fully defeated by spam (and Search ruined further by Google’s ads) in general, has ruined the web for research. It means that you are usually better off now asking a flagship model your research questions. Let it search and provide sources. You can always tell it the sorts of sources you consider reliable.