
This seems to be the engineer's version of a type of sentiment that has been expressed for hundreds, if not thousands of years: once you really understand a thing, it's not magical anymore. A great example of this is Mark Twain's writing on his experience with the Mississippi river before and after being a riverboat captain ("Two Ways of Seeing a River")[1].

[1] https://wordenenglishiv.weebly.com/uploads/2/3/6/5/23650430/...



Indeed. Eliza was created to kill the magic. Its author, Joseph Weizenbaum, said it precisely:

'It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, "I could have written that." With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he.'

The phrase "to explain is to explain away" is Shakespearean in its precision.
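
To make the "mere collection of procedures" concrete, here's a tiny sketch in the spirit of ELIZA's keyword-and-template matching. The rules below are invented for illustration; they are not Weizenbaum's actual DOCTOR script:

  import re

  # Toy keyword -> template rules in the spirit of ELIZA's approach.
  # These rules are made up for illustration, not Weizenbaum's script.
  RULES = [
      (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
      (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
      (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
  ]

  def respond(utterance):
      # The first matching keyword rule fills its template;
      # otherwise fall back to a stock non-committal reply.
      for pattern, template in RULES:
          m = pattern.search(utterance)
          if m:
              return template.format(*m.groups())
      return "Please go on."

  print(respond("I am worried about my future"))
  # -> Why do you say you are worried about my future?

Each rule is perfectly comprehensible on its own, which is exactly why "I could have written that" follows so quickly.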

But such regret at a loss of magic (or, put another way, a loss of ignorance) is IMO not a good sign: it suggests a person wants to be somehow deceived, and I don't think that's healthy. A bit harsh perhaps, but that's just my view.


I also like the AI Effect: https://en.wikipedia.org/wiki/AI_effect

It's the idea that "AI is anything that has not been done yet". Or as I like to say: "AI is any algorithm you haven't understood yet."

So you can go:

  * "That's not AI, that's just a regex over a string!"
  * "That's not AI, that's just a lookup over a dictionary!"
  * "That's not AI, that's just a series of if statements!"
  * "That's not AI, that's just a search for keywords in text!"
  * "That's not AI, that's just an optimized brute force over a large search space!"
  * "That's not AI, that's just a linear regression!"
  * "That's not AI, that's just a neural network!"
  * "That's not AI, that's just Bayesian Statistics!"


The AI effect also stems from naming your research "AI", which is pretty broad and whose meaning can change with context.

Say, for example, I go on a quest to create "AI" from scratch and start by inventing string interning to keep track of symbols. It would be a pretty big deal for me, but it would absolutely not be AI, which was an ill-defined goal from the start.

String interning, though, will be useful across a lot of disciplines, and a good marketing department will start calling it AI to get more moolah out of it.
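
(To underline just how mundane the invention actually is: a complete interning table is a few lines. This is my own illustrative Python sketch; Python happens to ship the real thing as sys.intern.)

  # A minimal string-interning table: each distinct string is stored
  # once, so symbol equality reduces to cheap identity comparison.
  _table = {}

  def intern(s):
      return _table.setdefault(s, s)

  a = intern("".join(["sym", "bol"]))  # a freshly built "symbol"
  b = intern("symbol")
  assert a is b  # same object: 'is' now suffices for symbol equality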

This is exactly what happened in the 80s, and it's practically what is happening today. "AI" is a great motivator to call any of your projects a success, because it encompasses everything. Programming languages, GUIs, networking and whatnot have come out of "AI" research.

In my book, AI just means one thing: a general-purpose machine that can do anything a human can, or more. Not chess, not StarCraft, not spying, but everything. People have started calling this hypothesis strong AI, but I think AI will do. This should be the final goal. Anything before that, programming languages, deep learning, networking, hardware design, should be called by its own name and judged on its own merit.


I consider this argument flawed because it equates intelligence with human intelligence. The field is Artificial Intelligence, not Human Artificial Intelligence.

A dog is intelligent, as is a pigeon. Even bees and some mollusks, like the cuttlefish, are intelligent. They can't think in all the ways a human can but at what they do, they are competent, even clever.

I feel the same is true for machine intelligences. It takes intelligence to learn Go or chess, but it also takes intelligence to play, or at least that is what we say for humans. When a human is thinking about StarCraft, we consider only how good their thought patterns are for that game. We do not look at their vision, walking, social skills or whatever. The same should be applied to the chess or Go AI while it is playing. One can complain that all it knows how to do is play chess, but judging it on anything else is unfair.


The founders of the field left no doubt that their goal was, indeed, human-like intelligence, so the position you are stating here was the original goalpost-shifting.

There is nothing wrong with those original ambitions, and it is to nobody's discredit that progress has been slower than originally hoped (the same could be said in many other fields, from space travel to curing cancer.)

One can certainly find people (though not usually here) who insist that everything achieved so far is "just database lookup" or "just a machine doing what a person programmed it to do", and who leave little doubt that they would continue to do so regardless of what had been achieved. There are also philosophers who make more sophisticated versions of the same argument, such as by imagining p-zombies, which are unfalsifiably merely faking intelligence. Such people want to take the goalposts off the field, but the rest of us should be able to discuss what has been achieved, and what remains undone, without being distracted by arguments over the precise semantics of the phrase "artificial intelligence."


I agree. This is generally not the place to argue over semantics, but rather to discuss and discover aspects of technology so that they can be improved.


>> It takes intelligence to learn Go or Chess but it also takes intelligence to play, or at least this is what we say for humans.

Perhaps another way to see this is that humans use their intelligence to play Go and chess, but playing Go and chess does not require intelligence: a machine can do it, even though it's not intelligent; and it can do it better than any human. And perhaps it can do it better than any human because it's not intelligent.

Maybe then intelligence is not really useful for playing Go or chess, but for other tasks, that we haven't quite pinned down yet because we don't really understand what intelligence is in the first place. And maybe all the successes of AI that fall victim to the AI effect are all steps towards understanding what intelligence is, by pointing to what intelligence is not.

We think of intelligence as an absolute advantage, without downsides. But if humans, who are intelligent, are worse at tasks like chess and Go than machines that are not intelligent, then perhaps we have to start thinking of intelligence as having both strengths and weaknesses. Perhaps we'll find that, while there are tasks that cannot be accomplished without intelligence, there are also tasks for which being intelligent is an impediment rather than an asset.


Many humans can't do any of the things CS textbook AIs are supposed to do.

For example: play Go or chess at all, never mind to a high level. Write good music. Pass a Turing test. Drive at least as safely as average.

Maybe a third of the population is going to struggle with ticking off even one of those requirements. [1]

Someone who can do all of the above is comfortably in the top 5% of the human ability range.

Curiously, the usual list of goals looks suspiciously like the interest profile of a tenured CS academic.

Things humans do but AIs don't include:

Parsing complex social and personal interactions and maintaining maps of social and political relationships. Improvising solutions to problems using available resources. Converting rule-of-thumb learning into memorable narratives - either as informal instruction, or as a formal symbol system. Communicating with nuance, parable, irony, humour, metaphor, and subtext.

Some humans can also parse complex domains and extract an explicit rule set from them - but that's a much less common skill.

Except for that last one - maybe - these all seem like they're much closer to the human version of intelligence than any goal based on a specific output.

[1] Even the driving, because many people can't drive at all, so it's not a 50% break at the average. And even the Turing test, because there are still a lot of humans with no Internet or computer experience, and they'd find the glass terminal experience very strange and unsettling.


Another thing that humans do but AIs don't: They recognize that they are doing everything on your list and wonder how they do it. This self-aware consciousness is something that goes beyond any particular skill.


I’m curious: has anyone actually tested how many humans “wonder how they do [a thing]”? What would such a test of the general population even look like?


I am not aware of any such study - not that that means anything. If one assumes, as seems reasonable to me, that a person's theory of mind is based on at least a tacit assumption that other people function somewhat like oneself, then one might make the working assumption that experiments on a person's theory of mind [1] also reveal something about how they tacitly perceive themselves. If you want to know something about their explicit thoughts about their mental capabilities, one could start by asking them.

[1] https://en.wikipedia.org/wiki/Theory_of_mind#Empirical_inves...


>> Many humans can't do any of the things CS textbook AIs are supposed to do.

I am not sure what you mean with "can't do". Could you please clarify?

Further, could you describe how you would determine that a person "can't do" something? For example, how would you determine that a person "can't (do)" play chess or Go at all?


> how would you determine that a person "can't (do)" play chess or Go at all

"Hey so-and-so, do you know how to play Chess?"


Edit: I don't think that's the meaning of "can't do" that the OP had in mind. Why not let them clarify what they meant?


The root issue, as I see it, is that intelligence is still very loosely defined (or not defined at all). We cannot simulate something undefined.

And I am not talking about philosophical or linguistic definitions, but strict mathematical definitions with proofs and experiments.


[flagged]


> I mean beating a chessmaster was considered (at one point, and by some) to be the defining point where AI could be defined as Intelligent.

We had some pretty goofy ideas on what intelligence looked like and required back in the day. But I do think we realized that beating a chess grandmaster wasn't the defining point of intelligence, well before a chess program actually beat a grandmaster.


> goofy ideas [...] back in the day

Do you mean they looked goofy back then, or only in hindsight? My point is not about chess or racism but how we justify things, and the implied question of why we have to justify things to make ourselves look good.


That's the AI effect!

It says that every breakthrough in AI, once accomplished, forces us to reclassify the accomplishment as no longer being AI or intelligence. I'd suggest reading the full Wikipedia article, but to quote:

> As soon as AI successfully solves a problem, the problem is no longer a part of AI. [...] practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the "failures", the tough nuts that couldn't yet be cracked. [...] A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.

What happens when "strong" AI turns out to just be "whatever technique finally yields the full spectrum of human intelligence"? The AI effect will be back, claiming it's not AI, just "whatever technique" was used to achieve the outcome.


The people who point out that AI has not yet replicated key aspects of human intelligence are merely stating facts. There is no basis for saying that they would necessarily do so even if and when this is no longer the case. What is the point of making such an allegation? It looks rather like an attempt at building a straw man: "claims that AI isn't there yet can be ignored because these people will never be satisfied."


Anyone with this problem should switch to physics, because it gets better as you learn more. I bet there are a lot of other things like that.


Any excuse to post something from the BBC's Feynman Archive.

https://www.bbc.co.uk/programmes/p018w2zl (1m26s)

Richard Feynman recalls a disagreement with a friend, in which he maintains that a scientist can appreciate the beauty of a flower just as much as an artist. Feynman believes the aesthetics of a flower can be recognised by everyone. He says that a scientist can also imagine the plant at a cellular level and see the beauty in its complex inner structure. He wonders whether this beauty can also be appreciated by other creatures.


I disagree. I still love aspects of my job and will until the day I die. RF communication is still magic. The power of modern computers is absolutely amazing. Power efficiencies and transistor densities blow my mind. As a software engineer, I feel like I create something out of nothing.

I am completely bored with version control, flaky automated testing systems, decaying source code, the "development process", not being empowered to do good work, etc.

Things are fun when they're new. They're also fun when they don't require a lot of work. Small software is fun. Shoehorning features into a massive codebase or hunting for obscure bugs in a complex system is not always fun. Even when it is fun, it is draining.


But that’s the point - you fully understand the domain of software engineering, so it’s not magic, so at least some of the fun has gone out of it. You might not fully understand how RF communication works, or how the transistors in a chip are put together. An engineer working on either domain might well be bored of their work too. Imagine working 8 hours a day using suboptimal tools to model and tweak the radiant field of an RF antenna, or carefully optimizing the hardware demodulators, or dealing with the intricacies of Doppler frequency drift as your antenna sways in the wind (or is carried by a fast-moving object). Or, imagine a chip engineer having to redesign the scheduling pipeline for the 15th time to fix some silly flaw in one instruction, or a lithography engineer struggling to improve yields in the face of relentless quantum physical effects. Once you understand these things at a low enough level, the magic really can drain away.

But, of course, it’s useful to maintain a sense of collective achievement here - we’re where we are now because generations of scientists and engineers did the work to figure out the magic and how to harness it.


> you fully understand the domain of software engineering

I'm not sure I can say this of anyone, although I won't deny that there are a few people that tend to do a lot better at this than most ;)


Actually I'm an EE by education and an Extra class amateur radio operator. If you put me in a room with the components I could build a computer or a radio transceiver. I don't think understanding something necessarily removes the magic. Toil and frustration removes the magic. Working on things you don't believe in, or wasting energy on counterproductive tasks removes the magic.

If I suddenly stumbled upon a giant pile of money, I'd still do what I do because it's honestly amazing. The applied sciences are the closest things to magic we'll ever get.


OP and writer here, thanks for sharing this. I enjoyed that piece and the sentiment really resonates with me.


I can’t find the quote, but it reminds me of Jonathan Creek. Paraphrasing: magic, once explained, is far more banal than it first seemed.


Ha, that is always my go-to story for that idea. So far I've never met anyone who's heard it. Great autobiography.


I never understood the need for things to be magic. Isn’t it cool that you can predict and harness these great forces?


"Newton has destroyed all the poetry of the rainbow, by reducing it to the prismatic colours."

- John Keats



