
No idea what this guy is talking about. Big tech only talks about the existential risks to distract from the actual, very real risks of how the technology will be misused.


IMO the biggest risk is that big tech will be the only ones allowed to do AI under a future regulatory regime, shutting smaller players out of the wealth-making opportunity and leaving society with a low diversity of options. And that regulatory capture is ironically enabled by people like Tegmark, who are pushing to restrict everyone with bureaucratic nightmares.


It's part of the whole EA schtick (FLI is in that space, alongside the Centre for the Study of Existential Risk).

I always found those guys annoying - they adopted sci-fi tropes while ignoring decades of data-driven work on how to minimize misuse of technology.

It's like a postmodern version of Herman Kahn - over-relying on data-driven models while ignoring the variability that arises from humanity.

Edit: also, this article is a submarine piece from the AI Safety Summit in Seoul, which was co-hosted by the UK and South Korea and was a flop [0]

[0] - https://www.reuters.com/technology/second-global-ai-safety-s...



