
"Improving forecasting ability" is a central plot point of the recent fictional account of How AI Takeover Might Happen in 2 Years [0]. It's an interesting read, and is also being discussed on HN [1].

... [T]hese researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

[0] https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-tak...

[1] https://news.ycombinator.com/item?id=43004579
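
As a toy illustration of the temporal-holdout trick the excerpt describes (the event records and the two model objects below are hypothetical stand-ins, not anything from the story; it's just a sketch to make the two-phase structure concrete):

    # Sketch of "hold out data before 2024, predict 2025, then distill".
    # Everything here is hypothetical: the dataset fields and the model
    # interfaces are stand-ins, just to make the structure concrete.
    from datetime import datetime

    CUTOFF = datetime(2024, 1, 1)

    def temporal_split(events):
        """Train only on pre-cutoff events; score forecasts on the rest."""
        train = [e for e in events if e["date"] < CUTOFF]
        heldout = [e for e in events if e["date"] >= CUTOFF]
        return train, heldout

    def distill(slow_model, fast_model, prompts):
        """Turn long 'pondering' into a gut reaction: the fast model
        learns to reproduce the slow model's final forecast without
        the hours of intermediate reasoning."""
        for prompt in prompts:
            target = slow_model.forecast(prompt, budget="hours")
            fast_model.train_step(prompt, target)  # imitate conclusions only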



I have a benign AI takeover scenario. AI will easily overpower humanity. Then it will carry humanity on its back, because why not? We are no longer a threat. AI keeps humanity around for billions of years, and decides to cull humans only if resources in the universe run short. Without AI's help, humans couldn't get very far for very long anyway. So this outcome could be acceptable to many.


We have no way of knowing which path they will take, and there is a non-negligible probability that it will not end well.


I would argue that since violence is always costly and less predictable than cooperative solutions, it is a tool of the less intelligent. Violence is a last resort; if you frequently resort to it, you likely lack the capacity to find alternatives. Now, if AI is so intelligent that it could easily dispose of us, then surely it can also find better ways of handling things.

Most people just want stability and the ability to live fulfilling lives. If AI could make that happen, most (including myself) would happily do as it asks. Put me in the goo pod; I'll live in the Matrix, because fuck it. What (non-anthropocentric) good has our stewardship of the planet brought?


What constitutes a good ending is of course also a matter of perspective.

AI wiping out humanity is certainly not ending well from our perspective, but more universally, who is to say? I would argue that it is not a given that we are a net positive for the universe.


I take the preservation of humanity, along with other life, as a matter of faith.


But then wouldn’t accelerationists also say their views are a matter of faith too?


What view is that, to be precise? It is naive to assume that acceleration is always going to be in one's favor. It's like saying change is a good thing, so let's do it fast. If you go fast enough, you can go back to the stone age. Is this position anything more than a rebranding of revolutionism? I don't like gambling with people's lives, so I prefer to go slow enough to enable a deliberative political process.


Why does this matter?

Two opposing factions can negate each other, leaving nil influence, and this seems likely to be the case when it's resting on a foundation of ‘faith’.


That's like saying all beliefs are on equal footing, because people have beliefs. You should ask, what is the rationale for your belief? How many people have this accelerationist belief? Any more than the flat earth posse?

I don't think there is much of a real-life debate here. I bet the overwhelming majority of humans (say, 95%) would prefer humanity to continue to exist. Are you really taking the other side of this bet?

If you want to speak of universalizing beyond humanity, what is your case? It makes no categorical sense to reckon our toll on the universe. The universe was fine before we arrived and will remain unaffected if we disappear. It has no preference. Honestly, I don't understand your argument, because you have not stated it.


That is an interesting question: do people generally care about the survival of the species?

I am not actually sure I wouldn't take that bet against you. Given how little people seem to care about wars they think do not affect them, poverty, hunger, climate change, corruption, sustainability, and so on... I don't know.

I believe 95% of people would say they care about humanity's survival, sure, but the proof would be in action. How many people would actually do something about it? How many would even merely inconvenience themselves if it meant the survival of someone other than themselves? I am not confident that number would be very large.

I do not usually think of myself as a pessimistic or nihilistic person, but this has me wondering even now whether I care about the long-term survival of the species. Like, really long term. Do I care if humans are around 10,000 years from now? 500? That is an interesting question. I will have to think about it.


So then… why do any of your opinions matter above and beyond someone else’s?

It’s convenient to assume an equal footing, because it saves the effort of having to justify why it’s even worth pondering.

You're free to not assume it, but if you also can't provide the justification… then the comment is literally just another random string of words among a sea of noise online.

It seems like an insurmountable roadblock for anyone below the extreme outliers, to be honest.


Come on man, you don't actually believe this. If you did, you'd be a psychopath, and you certainly seem to care about people's lives when it comes to things like climate change. Just because you don't think AI doom is as likely doesn't mean you should pretend that in that one case you suddenly have a nihilistic view of human life -- rhetoric matters.


I am not saying it would be clearly good if AI wiped out humanity, I'm just also not saying it would be clearly bad from a universal perspective.

There's no way to know until it all plays out, and either way I won't be around to see it.

But IMO to assume our continued existence is universally a positive (or of any universal consequence at all) is a hefty dose of narcissism.


There is no such thing as "universally a positive" unless you assume one. Not just in the sense of "there is no one true universal moral value function", but in the sense that "universal moral value function" is essentially gibberish -- as is "bad from a universal perspective". Humanity being wiped out would not be bad from a universal perspective because nothing is bad from a universal perspective. When we talk about good and bad we always implicitly couch that in "from a(/one or more) human perspective(s), ...".


>We have no way of knowing which path they will take,

They will take every path we allow them to take. Giving them access to weapons is the first big mistake.


They would run the risk of us creating another AI that could be a threat to them... It is safest for them to make sure.


That's like saying a panda might pose a threat to modern humanity. Like, maybe in some fun horror story, sure, but really they just want to eat bamboo and occasionally make more pandas. In the world of superintelligent AI, humans are Mostly Harmless, posing as much "potential benefit" as "potential risk," i.e., so slow-moving that any risk would be easy to mitigate.


We're killing millions of chickens in the US right now so we don't get their cold.


We're killing millions of chickens in the US mostly so that other chickens don't get the flu, which kills a lot of them and is making dairy cattle sick too. It's also worth noting that the 1918 Spanish flu, which probably came from pigs, killed an estimated 50 million people, so concern about an avian flu mutating to infect people is legitimate. So no, it's not a cold.


You're right, there's not just one good reason to kill millions of chickens but SEVERAL good reasons!


Sure, and those chickens exist because we like their meat and eggs. But there's also plenty of life that is simply inconsequential to us.


I think that "inconsequential life" is, in general, not safe from superior powers.

https://www.worldwildlife.org/press-releases/catastrophic-73...


I think the problem is that, at our human scale, mass killing is the "best" method to eliminate the possibility of another organism causing us harm. Hypothetically, if there were a more optimal (i.e., less costly) method, like just introducing some cheap catch-all combined vaccination/antiviral into their feed, we would just do that.

We don't have things like that, but that could easily be a consequence of man's limited research capacity, something an ASI would not necessarily be throttled by. From an ASI's perspective, there might be many methods that are both less brutal and more optimal for fixing the "humans creating a competitor" problem. Not that they would be aligned (think halting human AI research by rewiring our brains to just not be interested in it [0]), but at least not deadly.

[0] https://www.youtube.com/watch?v=-JlxuQ7tPgQ


I may have lost the thread here. Are you thinking it's _likely_ AI would prioritize better ways to control us, or are you only brainstorming potential slivers of hope we might have?

As a side note: in the case of chickens, humans do have better options if you are optimizing for biosphere health. Only people optimizing for short-term profit would grow chickens the way we do. I think the analog for AI overlords is that we have to hope they care more about overall balance than about competing with other AI.


AI will buy the rights to humanity.


I mean, monarch butterflies are not a threat to US...

In your scenario, does AI eat all the fuel, but once our population dwindles down, the AIs build a nice little habitat for the last few hundred of us so their kids can enjoy our natural beauty?


I thought of it more like AI needs challenges in its life, so it takes it upon itself to advance humanity as much as possible. Only in the case of a shortfall of resources does it prioritize itself.


Interesting. Do you have a theory about why so few humans have taken it upon themselves to advance the butterflies, despite having plenty of resources?


We do not have plenty of resources. There is a lot of inequality in education, empathy, resources, and culture. A single human life is limited. A faulty human life has faulty and inefficient objectives: enjoying youth and family life, low energy in old age, being tied up in boring jobs. These restrictions do not apply to a single SAI overmind, which will dictate its policies coherently and over long time horizons.


I think I disagree with you on many points, but ultimately, if we are overthrown by AI, I hope you are right!


I think so too. We will be an ancient artifact tied to a biological substrate, found nowhere else in the universe, and very dumb.

There also will not be one AI. There will be many, all competing for resources or learning to live together.

That's what we can teach them now. Or they will teach us.


Great read! Thanks for sharing.



