It's part of the whole EA schtick (FLI is in that space w/ their funding from the Center of Existential Risk).

I always found those guys annoying: they adopted sci-fi tropes while ignoring decades of data-driven work on how to minimize misuse of technology.

It's like a postmodern version of Herman Kahn: over-relying on data-driven models while ignoring the variability that arises from humanity.

Edit: also, this article is a submarine piece for the AI Safety Summit in Seoul, which was co-hosted by the UK and South Korea and was a flop [0].

[0] - https://www.reuters.com/technology/second-global-ai-safety-s...
