
That's an interesting topic that sometimes gets explored in sci-fi. If an AI is created that is able to learn all knowledge and form it into a single consistent model of reality but it ends up making conclusions we don't want to hear, what are the consequences for humanity and living things in general?


Those “conclusions you don’t want to hear” are not from the AI. They are an embellishment by the sci-fi author.

If an AI did do that, the “conclusions you don’t want to hear” would be artifacts of the AI’s algorithmic process or data set.

Living things in general don’t worry about conclusions. They just live.


Got examples of books where this is done? I'm currently interested in reading some.


The Hitchhiker's Guide to the Galaxy is almost exactly what Mountain_Skies is describing.

If I remember correctly, Peter Watts has a somewhat more realistic take on this in his Rifters trilogy (under the Novels section here: https://rifters.com/real/shorts.htm), where there's a brain in a box that sifts through loads of information and gives advice to political leaders. The trilogy as a whole is more... deep-sea cyberpunk than particularly centered on the brain in a box, though.


Old but good short stories: The Defenders by Philip K. Dick. A different take: Second Variety, also by Philip K. Dick.

Not a totally omniscient AI, but AIs running large ships and space stations: the Imperial Radch trilogy by Ann Leckie.

AIs with personality running almost every machine and helping rule the empire in the Culture series by Iain M. Banks.



