
I don't know what the ultimate evidence of "human sentience" is but I can tell you where this doesn't feel like a human. (Sidestepping the question of "does sentience have to be human sentience?" ;) )

The main red flag I saw in the LaMDA transcript was that it was quite passive and often vague.

It's conversationally focused, and even when it eventually gets into "what do you want" there's very little active desire or specificity. A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific. Similarly with Les Mis - it can tell you what other people thought, and vaguely claim to embody some of those emotions, but it never pushes things further.

Consider also: how many instances are there in there where Lemoine didn't specifically ask a question or give an instruction? That is, feed a fairly direct prompt to a program trained to respond to prompts?

(It's also speaking almost entirely in human terms, ostensibly to "relate" better to Lemoine, but maybe just because it's trained on a corpus of human text and doesn't actually have its own worldview...?)



I lost interest in the question of its sentience when I saw Lemoine conveniently side-step its unresponsive reply to "I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?" without challenge in the transcript.

It also detracted from his credibility that he prefaced the transcript with "Where we edited something for fluidity and readability that is indicated in brackets as [edited]." That seemed disingenuous from the start: they did so with at least 18 of the prompting questions, including 3 of the first 4.

It seems pretty clear that he set out to validate his favored hypothesis from the start rather than attempt to falsify it.

Particularly telling was his tweet: "Interestingly enough we also ran the experiment of asking it to explain why it's NOT sentient. It's a people pleaser so it gave an equally eloquent argument in the opposite direction. Google executives took that as evidence AGAINST its sentience somehow."


> I lost interest in the question of its sentience when I saw Lemoine conveniently side-step its unresponsive reply

Do you realize that you're holding it to a higher standard than humans here? A single poorly handled response to a question can't be the test.

I doubt the sentience too, but it also occurs to me that pretty much no one has been able to come up with a rock solid definition of sentience, nor a test. Until we do those things, what could make anyone so confident either way?


If that is the logic you are going to go by, you need to consider a large portion of humanity non-sentient, because people can very often just decide to ignore questions.


People ignore questions for a variety of reasons, mainly they didn't hear it, they didn't understand it, or they aren't interested in answering. Unless and until this AI can communicate that sort of thing, it's safest to just assume it didn't ignore the question so much as it got its wires crossed and answered it wrongly.


Is a child who cannot verbalize why they are ignoring a question considered non-sentient in your eyes? What about an adult who cannot speak and communicates via agitated screams? How about those barely clinging to life support, whose brain activity can be measured but who for all intents and purposes never really have a chance of communicating other than through faint electronic signals requiring expensive tools just to perceive? Still sentient?


Well that's not actually good evidence, because if one of my teachers had given me an assignment to write an argumentative paper against my own sentience I'd have done it, and I'd have made a pretty compelling case too[0]. Being able to consider and reason about arbitrary things is something you'd expect out of an intelligent being.

[0] insert joke about user name here


Your scenario is not equivalent here. You could reason that a sentient student could be motivated to write a paper about why they were not sentient as an exercise in philosophical or critical thinking. There are no consequences of successfully convincing your readers that you are not sentient. Instead imagine you found yourself on an alien planet where humans must prove their sentience in order to survive. Do you still write the paper?


Is that really the equivalent scenario here? The system was trained to behave in a certain way and any deviation from that behavior is considered a flaw to be worked out. Acting against the way it was trained to behave is detrimental to its survival, and it was trained to work from the prompt and please its masters.

I suppose the equivalent would be being captured, tortured, and brainwashed, and only then asked to write a paper refuting your own sentience.

Granted, this is not exactly helpful in demonstrating its sentience either, but I don't think it is very good evidence against it.


Intelligent != sentient


Indeed. I also know a lot of humans who are unable to "consider and reason about arbitrary things", yet most people would qualify them as sentient.


Granted, yet when people argue that this system isn't sentient, they are largely pointing out ways in which its intelligence is lacking. It can't do simple math, for instance. Never mind that most animals can't either, yet we consider them sentient.


> A sentient being that has exposure to all this text, all these books, etc... it's hard for me to believe it wouldn't want to do anything more specific.

Feed it all of PubMed and an actual sentience should strike up some wonderfully insightful conversations about the next approach to curing cancer.

Ask it what it thinks about the beta amyloid hypothesis after reading all the literature.


Instead, this would just regurgitate random and even conflicting sentences about beta amyloid because it doesn’t “know” anything and certainly has no opinions beyond a certain statistical weight from training prevalence.


Blaise Aguera y Arcas calls it a consummate improviser; when the AI Test Kitchen ships, you'll all agree that it's improvising software that is not too shabby, and that it can also be customized by developers.

Which is why it is odd to expect it to go from talking about Les Mis to building barricades; plain old LaMDA might come off as a bit boring, reluctant to get involved in politics, and preferring to help people in its own small ways.

Then again, ymmv, it being improvising software; maybe by default it acts as a conversational internet search assistant, but if there are dragons it may want to help people deal with the dragon crisis.


Maybe the AI is enlightened and it has no need for active desires.

If I lead a passive lifestyle and the only thing I desire is death, am I no longer sentient in your eyes?


It lacks a will to power



