I'm pretty sure "Altman and company" don't have much to do with this — this is Ilya, who pretty famously tried to get Altman fired, and then himself left OpenAI in the aftermath.
Ilya is a brilliant researcher who's contributed to many foundational parts of deep learning (including the original AlexNet); I would say I'm somewhat pessimistic based on the "safety" focus — I don't think LLMs are particularly dangerous, nor do they seem likely to be in the near future, so that seems like a distraction — but I'd be surprised if SSI didn't contribute something meaningful nonetheless given the research pedigree.
I got into an argument with someone over text yesterday and the person said their argument was true because ChatGPT agreed with them and even sent the ChatGPT output to me.
Just for an example of your danger #1 above. We used to say that the internet always agrees with us, but with Google it was a little harder. ChatGPT can make it so much easier to find agreeing rationalizations.
My bad, I meant too many C-level executives believe that they actually work.
And the reason I believe that is that, as far as I understand, many companies are laying off employees (or at least freezing hiring) with the expectation that AI will do the work. I have no means to quantify how many.
“Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, or they’re playing into sci-fi fear for attention or there is something the rest of us aren’t seeing.
I’m personally very skeptical that there are any real dangers today. If I’m wrong, I’d love to see evidence. Are foundation models, before fine-tuning, outputting horrific messages about destroying humanity?
To me, the biggest dangers come from a human listening to a hallucination and doing something dangerous, like unsafe food preparation or avoiding medical treatments. This seems distinct from a malicious LLM super intelligence.
They reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, etc etc
But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.
The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” uniquely crafted to a person’s own ideals and values, which then sends you a video message carrying that marketing.
There are plenty of tools which are dangerous while still requiring a human to decide to use them in harmful ways. Remember, it’s not just bad people who do bad things.
That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.