
> The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.

> 2026 will likely see the arrival of systems that can figure out novel insights

Interesting to see this level of confidence compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.

[1] https://www.windowscentral.com/software-apps/google-ceo-agi-...

[2] https://www.tomshardware.com/tech-industry/artificial-intell...



> Interesting to see this level of confidence compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.

well yeah, you can't continue the ponzi scheme if you say "aw shit, it's gonna take another 10 years"

Microsoft/Google will continue to exist without it; OpenAI won't.


Gotta put the optimism in context vs. Sam Altman's previous writing.

Here he says:

> Intelligence too cheap to meter is well within grasp.

Six months ago[0] he said:

> We are now confident we know how to build AGI as we have traditionally understood it.

This time:

> we have recently built systems that are smarter than people in many ways

My summary: ChatGPT is already pretty great, and we can make it cheaper, and that will help humanity because... etc.

Which moves the goalposts quite a bit vs. "we'll have AGI pretty soon."

Could be he didn't reiterate we'd have AGI soon because he thought that was obvious/off-topic. Or it could be that he's feeling less bullish, too.

[0]: <https://blog.samaltman.com/reflections>


In a recent interview with The Verge (available on YouTube), the DeepMind CEO said that LLMs based on transformers can't create anything truly novel.



