> Once conversational intelligence machines reach a sort of godlike generality,
Sums up the fantastical inevitabilism.
Why would they? How could they? The data is telling us that they won't. Anybody who believes otherwise is ignoring the science.
The assumed path to "AGI" was more tokens, but more tokens actually means worse output. Even if LLMs aren't a total technological dead end, the data supports smaller models built for more specific tasks; ergo, LLMs are not delivering AGI at any point, ever.
Pure fantasy. Can't even call it sci-fi when it ignores the science entirely.
Ah, so you misread. I said conversational intelligence machines, but you focused on LLMs.
I don't know where LLMs will lead, but I haven't ruled out the possibility that improvements will continue to surprise us. If anything is unscientific, it's overconfidence in either direction.
Confidence comes from statistics. The statistics say that smaller models are better than bigger ones, ergo I have confidence in saying what I did.
Even if smaller, more specialized models outperform larger, more generalized models on specific tasks with these architectures, that does not logically support your point or serve as meaningful evidence against my post.
These systems can contain many models, and the system itself could eventually recognize topics it underperforms on and automatically train new models for those subjects. The final response wouldn't necessarily come exclusively from the specialized model; it might come through a model specialized at integrating knowledge across multiple models, asking the right questions, verifying, and so on.
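A minimal sketch of that routing-plus-integration idea, under stated assumptions: every class and function name here is hypothetical, the confidence scores are toy stand-ins for real model self-estimates, and the "train a new specialist" step is a stub, since real training is far outside a sketch like this.

```python
# Hypothetical multi-model system: a router picks the specialist that
# reports the highest confidence on a query; if every score is low, the
# system stubs in a new specialist (standing in for "train a new model
# for that subject"); an integrator step produces the final response.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Specialist:
    domain: str
    answer: Callable[[str], str]
    confidence: Callable[[str], float]  # toy self-estimate in 0.0..1.0

@dataclass
class MultiModelSystem:
    specialists: list[Specialist] = field(default_factory=list)
    threshold: float = 0.5  # below this, no specialist is trusted

    def route(self, query: str) -> str:
        scored = [(s.confidence(query), s) for s in self.specialists]
        best_score, best = max(scored, key=lambda pair: pair[0])
        if best_score < self.threshold:
            # In the described design, the system would schedule
            # training of a new specialist here; we only stub it.
            best = self.train_specialist(query)
        return self.integrate(query, best)

    def train_specialist(self, query: str) -> Specialist:
        newbie = Specialist(
            domain="new",
            answer=lambda q: f"(best effort) {q}",
            confidence=lambda q: 0.6,
        )
        self.specialists.append(newbie)
        return newbie

    def integrate(self, query: str, specialist: Specialist) -> str:
        # A real integrator model would verify and combine answers;
        # here we just tag the response with its source domain.
        return f"[{specialist.domain}] {specialist.answer(query)}"

math = Specialist(
    domain="math",
    answer=lambda q: "42",
    confidence=lambda q: 0.9 if "sum" in q else 0.1,
)
system = MultiModelSystem([math])
print(system.route("sum of 40 and 2"))   # handled by the math specialist
print(system.route("history of Rome"))   # low confidence, new specialist
```

The design choice worth noting is that routing, gap detection, and integration are separate responsibilities, so no single model has to be good at everything, which is the crux of the point above.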
We're barely a few years into this, so it's premature to know what 100 years of developments will bring.