This is why people not trained in CS shouldn't speculate on it; when they do, you get garbage like this.
The brain performs computation; it is by definition a computer. Before modern computers, there was a human job called "computer": one who performs computations. The analogy of the brain to a computer is perfect, because the brain is a computer.
Except the author does in fact have a BS in CS. IMO, the brain does far more than perform computation. Sure, humans were once "computers," but our ability to perform computation is not a valid argument that that computation, or the inner workings of our mind, is the same as digital computers performing computation. You failed to look up the author's credentials, and I feel like you may have also failed to read the entire piece.
> the brain does far more than performing computation
In your opinion? Can you be more specific, and give examples of things the brain does that are not computable?
This is trivially true in some senses: neurones have analogue responses that the biology does a good (but not perfect) job of thresholding. But then, the same thing can be said of transistors; it's just that we're able to engineer their analogue responses out much more successfully than evolution has. It is also true that the brain is connected to a much broader system which is undoubtedly analogue (i.e. the body), but then again, it isn't clear the same isn't true of any non-abstracted computer.
Comparing theoretical and idealised computing to embodied brains might feel insightful, but it doesn't actually resolve any of the real issues in the philosophy of AI.
Also, a BS in CS isn't a good minimum qualification for competency in the philosophy of AI. I wouldn't read much into that.
I was mainly commenting on the OP's position that the author was 'not trained in CS'.
Here is a list of things I believe the human brain performs outside of computation:
The irrational motivations that take us over when we feel love. The way that our mood can affect a decision. Taking a walk to enjoy the beauty of the sunset. Writing a satirical short story to express a political fallacy. Painting an image that we saw in a dream. Using metaphors to explain an idea or to validate an argument. Telling a joke and understanding why it is funny. Buying a shirt because it looks cool.
There are many more. But to me, there is definitely more to the mind than simple computation. There is a quality to our experience that is completely lost when our inputs and outputs are equated to the workings of a digital computer.
Ultimately these arguments come down to qualia. But there is no reason to think qualia are a) present in brains other than your own, yet b) not present in any non-brain information-processing system. Philosophers of mind have tried to make that argument, but end up appealing to intuition.
In terms of computability, folks like Boden and Sloman showed in the 90s that emotion is compatible with computability. Even more so, that emotion is implementable in symbolic computation. Of course, one could declare such systems have no qualia of emotion. But can you do more than declare that, without simultaneously making arguments that would apply equally to other brains?
I get that you are working on intuition. But there are fifty years of actual research on this. Waving your hands and appealing to your 'humble opinion' isn't how scholarship works.
Secondly, a good argument is not a proof. We have such a limited understanding of our own emotion and cognition that it is impossible to say that we have proven anything about how compatible emotion is with computation. Hands are continually waved over the existence of qualia, and the argument, as you yourself presented, is that we have no way of proving that qualia won't exist for an artificial intelligence that has been programmed for emotional compatibility.

I believe that is a fallacy, and it side-steps the root of the argument about qualia, and the real root of what self-realization and consciousness are. By saying "look, I can make this machine do everything you do and act as if it feels like you do," you are not proving that you have encapsulated all that is cognition. You are only proving that you can mimic cognition. It could be retorted, "Well, how do you know this machine doesn't really feel like you feel?" I don't know that. But we really aren't learning anything about consciousness by ignoring the question with such a fallacy.
I believe AI has a very important role in rapidly evolving our way of life as we continue in this technological evolutionary cycle. However, I think it does nothing to teach us about ourselves and how our minds actually work. It is nothing more than mimicry. And nothing can be proven based on how an AI bot operates for the materialist or the idealist, so it is aimless to think that AI is how we will understand our own cognition.
> is that we have no way of proving that qualia won't exist for an artificial intelligence
You misunderstood my objection. The issue is not whether we can prove such a thing, but whether we can differentiate between different information-processing systems in arguments that qualia exist at all. You assume qualia are a thing that brains do. On what basis do you assume that anyone other than you has them, in a way that implies no other systems do?
Qualia is one of those topics academics tend to roll their eyes at when it is brought up. Because it appears to do a lot, but in most cases is a variation of 'because I feel like it should be true'.
If you can simulate matter, certainly you can simulate a brain. Qualia appear not to derive from pixie dust but accumulated experience in the world. You might need to train a brain for a long time to develop qualia. You might not be able to copy qualia from one brain to another. I think these are the main objections, but I'm not sure.
Personally, I'm not convinced that simulating matter is feasible, never mind a living organism, never mind an intelligent living organism, which is what the brain is, when you account for all of it.
We know how far away Andromeda is. We know how to build a spaceship. Therefore, it is possible to go to Andromeda. Is it though? What if the Earth doesn't have enough resources? How big is your brain simulator allowed to be?
You're just describing behaviors that seem strange or unusual and then asserting that they are somehow not computation.
It doesn't matter what happens inside the brain. If it's governed by normal physics (no divine magic), there will be some computer program equivalent to it. We know that's true, even though we don't know how the brain works.
> . . . our ability to perform computation is not a valid argument to that computation, or the inner workings of our mind, being the same as digital computers performing computation.
No one [1] thinks that the human brain is likely to be a digital computer performing arithmetic on fixed-width binary integers and IEEE floating-point numbers. The argument for AI is simply that the mind appears to be a material object and the laws of physics appear to be Turing-equivalent. If that holds, the mind is provably equivalent to a Turing machine (or, less likely, some less powerful class of automaton). Arguing about whether in fact that does hold, as Penrose does, can be justified (though I personally think Penrose lost the plot years ago). Arguing about whether your preferred model for categorizing abstract ideas renders the argument inconceivable, with no reference to experimental results or formalisms, cannot.
[1] Feel free to add your preferred qualifiers here. The existence of people who believe tobacco companies are run by lizard-people has no bearing on whether cigarettes cause lung cancer.
> The argument for AI is simply that the mind appears to be a material object and the laws of physics appear to be Turing-equivalent.
This line of argument does achieve the goal of making it trivially true under the definitions given that the brain is a computer, but it seems to me it robs the assertion of insight -- e.g., the brain is a computer because everything is a computer, including the nearest rock, since it's also a physics-governed system.
> This line of argument does achieve the goal of making it trivially true under the definitions given that the brain is a computer, but it seems to me it robs the assertion of insight
I think, rather, it reveals the fundamental lack of clarity of the contrary position.
> e.g., the brain is computer because everything is a computer, including the nearest rock, since it's also a physics-governed system.
Right. But no one is questioning the ability to build a computer that simulates the behavior of a rock, or of most other physics-governed systems. The AI-is-impossible position boils down to the argument that the mind is not like all other physics-governed systems, though it tends to waffle and hedge and bob and weave around that point rather than coming right out with it. Pointing to Turing equivalence and the apparent computability of natural phenomena forces the AI-is-impossible-because-the-mind-is-not-the-kind-of-thing-a-computer-can-simulate argument to come straight out and either (1) reject the universal computability of physical systems, or (2) reject the mind as a physical system.
It still, of course, leaves plenty of room for the proposition that AI is possible but really quite hard.
>But no one is questioning the ability to build a computer that simulates the behavior of a rock; or most other physics-governed systems.
Actually it sounds especially difficult.
A crude approximation (simulation) at best.
Nothing like a full rock, with its full interactions with its full environment (that might need simulating the whole universe), and with oversimplifying most of its properties (molecular interactions, heat dynamics, etc).
Now make it a "wet rock" and we're even further away (and I won't even dare ask for mold on it or anything, much less living micro-organisms)...
> But no one is questioning the ability to build a computer that simulates the behavior of a rock; or most other physics-governed systems.
I'm willing to make that challenge: a complete simulation of a rock sounds like a formidable problem to me, one I'm not at all certain is within the state of the art.
Particularly if (as we're generally imagining when we're talking about simulating brains) we're asking the simulation to be able to stand in for its analog as part of a process-chain connected to a non-simulated situation.
It certainly spectacularly fails to provide any advice on the practical problem of engineering AI [1]. The same is true of many non-constructive mathematical proofs, though - and I can't say I find such proofs any less insightful for that.
[1] Though, at least in my opinion, the cognitive science and AI research people have made great progress here - by studying real brains and the formalisms underlying computation.
I think there are far too many assumptions about how the body works and the laws of physics (both of which we are far away from completely understanding) to say that the mind is equivalent to a Turing machine.
But that simulation still ignores certain qualia that have grown from our existence. There is something else happening that we don't understand, and I think it is a bit short-sighted to assume that we can simulate it: definitely not now, and possibly never.
It seems like you are the one speculating . . . about the author's credentials.
Also, the fact that thinking includes computation does not imply that computation includes thinking. There is a strong case that the brain is a computer, but really no one knows. Even if the brain is 100% material, there are people who argue it cannot be simulated on a computer. In part what we're talking about is the Church-Turing Thesis, which is still just a conjecture.
It's not clear how much the brain is like most of the computers even CS specialists or engineers have encountered. Actually, that's putting it mildly. AFAICT it's pretty accurate to say that the brain is about as far from a Von Neumann computer as it is from older favorite analogies like clockwork.
Also, while it's clear the brain can do computation, it's not clear that everything it does is computation.
If you take models of computation you will learn anything that can do computation is a computer. A bag of rocks is a computer, you can add and remove rocks to perform computation.
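To make the bag-of-rocks claim concrete, here is a toy sketch (names and representation are my own, purely illustrative): if we let a bag's state be its rock count, then combining and removing rocks implements unary arithmetic, which is genuine (if feeble) computation.

```python
# A "bag of rocks" as a unary computer: the bag's state is its rock count.
# Pouring one bag into another computes addition; taking rocks out
# computes subtraction. Illustrative only.

def add(bag_a, bag_b):
    """Pour bag_b into bag_a; the resulting count is the sum."""
    return bag_a + bag_b

def subtract(bag, count):
    """Remove `count` rocks; the remaining count is the difference."""
    return bag[count:]

three = ["rock"] * 3
four = ["rock"] * 4
assert len(add(three, four)) == 7          # 3 + 4
assert len(subtract(add(three, four), 2)) == 5  # 7 - 2
```

Of course, a bag of rocks only becomes a computer relative to someone interpreting its states, which is part of what the broader thread is arguing about.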
Nope. Can I suggest you read Penrose again with a little more critical thought.
There are problems it is possible to prove are not computable. But can you prove that human beings can solve them?
Tessellation of the infinite plane? Please demonstrate a person that can solve this (i.e. not that they have a > 99.9% chance of being able to do it, or that you can show they can start out pretty successfully and you assume they'll always stay ahead of the game).
Bear in mind when working out if a computer can solve a given problem (i.e. is it mathematically computable), we're not trying to work out if it can ever solve it, or even if it can solve it in infinitely many cases, or even in an arbitrarily high proportion of cases. We're working out if it can be proven that it is (not) guaranteed to find a correct solution in all cases. There's just no way to make those judgements of a human being.
So instead, mathematical analysis of computation is compared against intuition arguments from the evidence that human beings have good, reliable strategies for solving some of them. Unsurprisingly, the brain comes off pretty well in that comparison!
Penrose was big on handwaving and appeal to quantum magic, but not very good on the specific arguments to back up his claim.
I found this article to be a huge bag of misconceptions about AI, computation, and the actual claims of AI professionals. As an argument against hyperbolic media mischaracterisation, it might be reasonable. But like Penrose, it manages a long and condescending argument from intuition that fails to take seriously the actual claims being made.
> But can you prove that human beings can solve them?
> Tessellation of the infinite plane? Please demonstrate a person that can solve this
Penrose himself would seem to be a demonstration that there is at least one person, IIRC.
Is your argument that "uncomputable" really just means no guaranteed success, and that the fact that some human can get a solution to an uncomputable problem doesn't mean the human isn't following a set of algorithms?
You misunderstand the result Penrose has shown. There are plenty of aperiodic tessellations of the plane that can be computationally generated. The issue is whether a computer can, in all cases, determine if a set of shapes can tile aperiodically.
I don't understand the second bit. But, no. Computability has very specific definition. Getting a solution to an uncomputable problem isn't generally hard. It is trivial to create a program that will solve the halting problem for an infinite class of cases. The issue is that such solutions can be shown not to be general over all cases.
Such overconfidence. There is no generally accepted evidence that anything physical, whether brain or something else, can compute something a TM cannot compute. Penrose's claims are conjectures at best and pseudoscience at worst.
Since the brain is not just a Turing machine, it would be astonishing if it were limited to the capabilities of a Turing machine. The brain is capable of emulating a Turing machine, but that is just one of its capabilities.
The things a Turing machine lacks are interrupts and I/O. If you hook sensors up to a Turing machine and make the tape change depending on the state of those sensors, you don't have a Turing machine any more, in the sense that it is no longer bound by any of Turing's computability theorems.
This insistence that the limits of the most limited model of computing (Turing machines) must be applicable to any machine that computes anything--including things that are not described by any of the formal mathematical proofs on the limits of computability, because they violate the most basic assumptions upon which those proofs depend--is one of the most curious aspects of this debate.
So not even computers are just Turing machines, because they too have I/O. Turing's theorems are useful and important when considering certain practical questions of computability within the limited circumstances of a computation whose inputs are entirely specified at the start and which can't be interrupted by new information coming from the outside, but they just don't apply to cases where there are cells whose values aren't known until reality provides them via some sensor mechanism.
As such, it would be a little weird if we (and computers) can't compute things a Turing machine can't, given we have capabilities that a Turing machine doesn't have. There are even examples of such things. One due to Church (IIRC) that shows how we can solve certain instances of the halting problem that a Turing machine can't.
Actually, Turing's paper "On Computable Numbers" distinguishes between "automatic machines" (a-machines) and "choice machines" (c-machines), where the latter can pause to ask for input. It seems to me this accounts for your I/O. (I think this is also how you'd add an RNG to a Turing machine.) His paper pretty much only considers a-machines, so I'm curious what is written about c-machines elsewhere. It's unclear to me how much c-machines "change the story." For instance, you can't escape Gödel's Incompleteness Theorem by adding a finite number of axioms to fill in all the unprovable statements, because there are infinitely many of them.
Unless interrupts and the input portion of I/O are generated by an oracle (a machine of a computational class more powerful than UTMs) of some sort, a Turing machine with I/O and interrupts is equivalent to a UTM. I remember the proof being trivial (some students did it during a short, two-week research introduction course in my CS undergrad), but I can't find a proper paper about it at the moment. Granted, if the brain were an oracle, this would be true, but you would have to presume that the brain is an oracle in order to prove that it is stronger than a UTM.
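The intuition behind that equivalence can be sketched in a few lines (a toy illustration of my own, not the proof itself): a machine that reads from a "live" input source computes exactly the same result as the same machine run over a pre-recorded transcript of that source, so as long as the inputs are not themselves uncomputable, I/O adds no power.

```python
# Sketch: interactive input vs. a pre-recorded transcript of the same
# input yield identical runs, so I/O alone does not exceed a UTM
# (unless the input source is itself an oracle).

def machine(read_input):
    """A toy 'interactive' machine: sums inputs until it reads 0."""
    total = 0
    while True:
        x = read_input()
        if x == 0:
            return total
        total += x

live_readings = iter([3, 5, 2, 0])   # stand-in for a sensor feed
transcript = [3, 5, 2, 0]            # same values, written on a tape

interactive_result = machine(lambda: next(live_readings))
replayed = iter(transcript)
batch_result = machine(lambda: next(replayed))

assert interactive_result == batch_result == 10
```

The contested question, as you note, is whether brains receive input from something stronger than a computable source; the sketch assumes they do not.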
In regard to your last statement, it works the other way as well. There are statements that can't be proved by any human brain, but can be proved by non-brain logical systems. For example: "The brain cannot consistently assert this statement." One can then create instances of types of problems that are equivalent (when put through some bijective transformation) to the aforementioned statement.
See also: https://en.wikipedia.org/wiki/Wang_tile
> In 1966, Wang's student Robert Berger solved the domino problem in the negative. He proved that no algorithm for the problem can exist, by showing how to translate any Turing machine into a set of Wang tiles that tiles the plane if and only if the Turing machine does not halt. The undecidability of the halting problem (the problem of testing whether a Turing machine eventually halts) then implies the undecidability of Wang's tiling problem.
I googled for "tessellation infinite plane algorithm" but could not find any pertinent information.
There are problems that the brain can solve that one can prove an algorithm cannot. That to me is a bold claim. Could you point to material I could read on this?
I would not discount Penrose with the ease the GP does. These themes are quite philosophical, therefore all sides are making some big assumptions. I would read Penrose, Dennett, Chalmers, Tononi, Searle, Putnam and many others. In philosophy the trick is to disagree with everyone.
Agreed. I am very intrigued by Penrose but not convinced. In Scott Aaronson's online lectures for Quantum Computing since Democritus he is pretty dismissive of Penrose, but in the book he asks why a renowned Oxford scholar would insist on such a position, and that's a good question! In these arguments people are too willing to clamp shut their opinion and shut down wonder.
>The brain performs computation, it is by definition a computer.
Citation needed.
Actually that's the standard motto of any era -- the brain was like god (soul), then like gears in the early mechanical era, and so on to the computer analogy.
"If we achieve artificial intelligence without really understanding anything about intelligence itself then we will have no idea how to control it."
Exactly. It's fascinating/scary to me how people still talk about A.I. or machine consciousness when we basically have no idea how consciousness works in general.
People who say things like that don't understand how to reason using limiting cases. We have artificial intelligence right now. Maybe not artificial general intelligence, but plenty of stuff traditionally thought of as being solely the domain of minds, is now performed by software. I'd say we understand it pretty well, and we certainly have the ability to control it.
The one iron-clad rule of artificial intelligence: any successful attempt at implementing it makes it not artificial intelligence, just something that computers are able to do.
(Though I guess there is the second rule of over-optimistic timelines, to be fair.)
http://crl.ucsd.edu/~saygin/papers/MMTT.pdf
It touches on the "physics is computable therefore the mind is computable", and similar such arguments, in a fascinating and IMO balanced way.