How much of this is executed as a retrieval-and-interpolation task on the vast amount of input data they've encoded?

There's a lot of evidence that LLMs tend to come up empty or be hilariously wrong when relevant training data is relatively sparse (think <10e4 examples) for a given query.

> in seconds

I see this as less relevant to a discussion about intelligence. Calculators are also very fast at operating on large numbers.
