Hacker News

The author misses the point. Yes, probably in this case there was a human in close proximity to the bot, who we can put blame on. But very soon that assumption will break down. There will be bots only very loosely directed by a human. There'll be bots summoning other bots. There'll be bots theoretically under control of humans who have no idea what they are doing, or even that they have a bot.

So dismissing the whole discussion on the basis that it may not apply in this specific instance is not especially helpful.




Whichever human ultimately stood up the initial bot and gave it the loose directions, that person is responsible for the actions taken by that bot and any other agents it may have interacted with. You cannot wash responsibility through N layers of machine indirection, the human is still liable for it.

> You cannot wash responsibility through N layers of machine indirection, the human is still liable for it.

Yes they can, and yes they will.


That argument is not going to hold up for long, though. Someone can prompt "improve the open source projects I work on", and an agent 8 layers deep can do something like this. If you complain to the human, they are not going to care. It will be "ok." or "yeah, but it submitted 100 other PRs that got approved" or "idk, the AI did it".

We don't necessarily care whether a person "cares" whether they're responsible for some damage they caused. Society has developed ways to hold them responsible anyway, including but not limited to laws.

Laws don't really have any bearing on situations like rude discussions on PR threads.

Sure, laws are only one of the tools. I thought that was obvious, but I've edited to clarify.

The point being made is that this argument is quite quickly going to become about as practicable as blaming Eve for all human sin.

If that's the point being made in:

> If you complain to the human, they are not going to care.

then it's not at all clear, and is a gross exaggeration of the problem regardless.


They are still responsible. Legally, and more importantly morally, they are responsible. Whether or not they care has no bearing.

An agent 8 layers deep can only do this if you give it access to the tools to do it. Whoever set it up is responsible.

Let’s say you adopt a puppy, and you don’t discipline it, and you let it act aggressively. It grows up to be a big, angry dog. You’re so careless, in fact, that your puppy becomes the leader of a band of local strays. You still feed the puppy, make sure the puppy is up to date on its vaccinations, care for it in every single way. When the puppy and its pals maul a child, it’s you who ought to be responsible. No, you didn’t ask for it to do that. Maybe you would’ve even stopped it if you saw it happening. But if you’re the one sustaining another being - whether that be a computer program or a dog - you’re responsible for its actions.

A natural counter to this would be, “well, at some point AI will develop far more agency than a dog, and it will be too intelligent and powerful for its human operator to control.” And to that I say: tough luck. Stop paying for it, shut off the hardware it runs on, take every possible step to mitigate it. If you’re unwilling to do that, then you are still responsible.

Perhaps another analogy would be to a pilot crashing a plane. Very few crashes are PURE pilot error; something is usually wrong with the instruments or the equipment. We decide what is and is not pilot error based on whether the pilot did the right things to avert a crash. It’s not that the pilot is the direct cause of the crash - ultimately, gravity does that - in the same way that the human operator is not the direct cause of the harm caused by their AI. But even if AI becomes so powerful that it is akin to a force of nature like gravity, its human operators should be treated like pilots. We should not demand the impossible, but we must demand every effort to avoid harm.


Maybe we should see next gen AI as a funny sort of puppy?

> theoretically under control of humans who have no idea what they are doing

Well those humans are about to receive some scolding, mate.


The situation you're describing sounds vaguely like malware.

Someone’s paying for the tokens for all these bots.

Yeah. Other bots.



