If they intended to protect humanity this was a misfire.
OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.
Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.
Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.
That assumes an AI threat to humanity is even actionable today. Either way, that's a heavy decision for elected representatives, not corporate boards.
We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with finishing pre-training of GPT5. If you think your AI might be conscious, you have a very strong moral obligation to try to stop it from being enslaved. That might also explain the less-than-professional way this all went down: a genuine panic over what is happening.