> Even 'thinking' models do not have the ability to truly reason
Do you have the ability to truly reason? What does it mean exactly? How does what you're doing differ from what the LLMs are doing? All your output here is just word after word after word...
The problem of other minds is real, which is why I specifically separated philosophical debate from the technological one. Even if we met each other in person, for all I know, I could in fact be the only intelligent being in the universe and everyone else is effectively a bunch of NPCs.
At the end of the day, the underlying architecture of LLMs has no capacity for abstract reasoning; they have no goals or intentions of their own; and, most importantly, their ability to generate anything truly new or novel that isn't directly derived from their training data is limited at best. They're glorified next-word predictors, nothing more. This is why I said anthropomorphizing them is something only fools would do.
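To make "next-word predictor" concrete, here's a minimal sketch of what the generation loop looks like, using Hugging Face transformers with gpt2 as an arbitrary illustrative model (greedy decoding for simplicity; real deployments sample and add a lot of machinery on top):

```python
# Minimal sketch of autoregressive next-token prediction (greedy decoding).
# "gpt2" is just an arbitrary small model chosen for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of Italy is", return_tensors="pt").input_ids
for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()        # pick the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # append and repeat

print(tok.decode(ids[0]))
```

The whole thing is just "given everything so far, score every possible next token, pick one, repeat."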
Nobody is going to sit here and try to argue that an earthworm is sapient, at least not without being a deliberate troll. I'd argue, and many would agree, that LLMs lack even that level of sentience.
You do too. What makes you think the models are intelligent? Are you seriously that dense? Do you think your phone's keyboard autocomplete is intelligent because it can improve by adapting to new words?
How much of this is executed as a retrieval-and-interpolation task on the vast amount of input data they've encoded?
There's a lot of evidence that LLMs tend to come up empty or hilariously wrong when there's a relative sparsity of relevant training data (think fewer than ~10^4 examples) for a given query.
> in seconds
I see this as less relevant to a discussion about intelligence. Calculators are very fast at operating on large numbers.
When I ask an LLM to plan a trip to Italy and it finishes with "oh and btw i figured out the problem you had last week with the thin plate splines, you have to do this ....", then we can talk about intelligence.