
This criticism, like many others, attacks the mechanics of how LLMs think, apparently dismissing the models for not doing so using the same process, faculties, and background life experience as a human. It does not contain any compelling arguments to refute the notion that LLMs think. We may not have consumer-level AGI just yet, but suggesting an LLM is just a dumb anything (stochastic parrot or otherwise) is a rather extraordinary claim to make about something that has essentially learned the patterns of the whole internet.

We’ve been here before. Our sense of place in the universe was first upset by the heliocentric model (we’re not as special as we thought), then the theory of general relativity (not as correct as we thought), then quantum mechanics (not even living in a deterministic universe, really). With all these fantastic discoveries behind us, now seems like the right time to learn that we’re not as smart as we thought either.



The stochastic parrot claim seems to crash people’s brains because they assume this is all about words. The fact that these models very clearly learn not just patterns of words, but patterns of ideas, and ideas about ideas, seems a very profound result. Personally I do remain unimpressed by how the current models ‘think’ on top of this knowledge, but I’d have to work very hard to be as cynical as to imply this was all a scam, or somehow not progress (whatever you think of the destination).


"these models very clearly learn not just patterns of words, but patterns of ideas, and ideas about ideas"

This is not at all clear to me.


Just as the new AI image generators are not just replicating existing pictures but generating wonderful new ones, language models are not just replicating an average of what they've seen but are capable of generating novel ideas.


They're certainly capable of remixing things they've seen, and adding in randomness will add novelty. Whether that counts as "creativity" is something people can debate :-)

I think that the reason image ones have caught on better in some ways is because they don't need to be accurate. We're not asking them to understand anything, just produce images based on text prompts (which is amazing stuff all by itself).
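To make "adding in randomness" concrete: temperature sampling is one standard way generative models inject it. A minimal sketch in Python (the function name and the toy logits are mine, purely illustrative):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature).

    Higher temperature flattens the distribution, so less-likely
    options get picked more often -- the injected randomness that
    pushes outputs away from a pure most-likely remix.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# As temperature -> 0 this collapses to argmax (no novelty);
# at high temperature it explores the tail of the distribution.
token = sample_with_temperature([2.0, 1.0, 0.1], temperature=0.8)
```

At low temperature the sampler is effectively deterministic; raising it trades fidelity for variety, which is roughly the knob behind "novelty" here.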


Remixing objects and designs consistently like "van gogh" or "super mario" really implies some kind of internal model or "understanding" of the world.

Image generation didn't catch on because of a lack of accuracy, but because of how GOOD the results are. It's made artwork immediately accessible to the masses without the huge learning curve. That's where these AIs are going to shine very, very quickly.


Oh yes - you show them X and they make a model of X.

Show them enough pictures labelled "Van Gogh" and they get an idea of what "Van Gogh" looks like. They do an awesome job of that.

The problem with the text ones is that people think that showing them words means that they make a model of the thing the words are describing, rather than of how those words go together.


> The problem with the text ones is that people think that showing them words means that they make a model of the thing the words are describing, rather than of how those words go together.

I believe the crucial insight is that the most efficient way to learn how the words go together is to build at least some approximate model of what the words are describing. And our optimization algorithms and model architectures are good enough to find these solutions.


I guess you are on a different level of abstraction than I am.

It's clear the art AIs have a model of van Gogh's style, and apply it to create very unique forms of art. The neural model weights aren't storing compressed images of van Gogh, but relationships and mathematical abstractions of a concept.


Yes, like that new game that an AI just generated the other day. I don't have the link but if someone could link it that would be appreciated.


The assertion seems to be that epistemology boils down to data crunching.

But is epistemology purely empirical?

Color me skeptical. One feels that we're nailing the easy 3/4 of the question or so.


I think it's worse than that. We're doing a great job of generating text which looks reasonable X% of the time.

Which doesn't actually mean we've solved X% of the question - because there is no way of building on top of what we have to close the gap.


Are you talking about people or LLMs?


Hmm, as someone who grew up in the era of the linguistic turn, I'm somewhat missing the groundbreaking revolution here. The entire idea of "text" was based on this.


> The fact...

Proof or at least citation somewhere?


They do not think. The criticism is attacking the mechanics of what they do.


We're certainly not as smart, or special, as we think, and I often wonder if other species like dolphins or whales -- or crows! -- see the universe as made especially for them.

But comparing the current AI situation, and its parroting LLMs, to the Copernican revolution is so over the top that it's absurd.

Sam Bankman, meet Sam Altman?


Is it really so over the top, though? We have machines that have basically internalized the Internet and can easily pass a Turing test, and yet we still have people trivializing these marvels as mere “parrots”. I believe this is partly a defence mechanism (we humans are inherently hubristic), and if a stubborn attachment to humanity’s “specialness” is the cause, then that certainly mirrors the psychology of Copernicus’s detractors.

In a way, though, the “parrot” moniker is apt. The (long aspirational) Turing Test was originally called the “imitation game”, and what’s better at imitation than a parrot? Apparently, it’s ChatGPT - I never did see a parrot write code.


There's a contradiction in claiming both that "humans are not as smart as they think" and that "AI exists": it should be one or the other.

In other words, if machines can convincingly imitate us, does it mean machines are intelligent, or that we're stupid?


I don't see a contradiction. Humans can both be intelligent and overestimate that intelligence (by thinking it's special and unattainable) at the same time.


It gets worse, because human intelligence falls on a normal curve.

So is the AI smarter than "most" humans, "some" humans, or "a few" humans?

At any level of that, you have a paradigm shift affecting millions to billions of people.
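To put rough numbers on "most" vs. "some": with a normal curve you can read the share of people below any given level straight off the CDF. A minimal sketch in Python; the IQ-style scale (mean 100, sd 15) and the example score of 115 are hypothetical, just to make the percentile concrete:

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Fraction of a normal distribution falling below x,
    computed from the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# Hypothetical: an AI performing like a human scoring 115 on an
# IQ-style scale (one sd above the mean) would outperform roughly
# 84% of people; at 130 (two sds), roughly 98%.
share_below = normal_cdf(115)
```

Even a shift of one standard deviation on the curve moves "some humans" to "most humans", which is why the framing matters at population scale.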


I think it showed us that the Turing test isn't enough.


I've read that there is evidence that some ocean mammals have stronger emotional intelligence and experiences than humans do, and social relationships that are more complicated.

The experience of losing a mate might very well be a deeper kind of pain for them than humans are capable of feeling.



