ImperialViolet

John Searle: I Married a Computer (11 Jul 2002)

Yesterday I made the comment that John Searle's chapter in "Are We Spiritual Machines?" was rubbish. Here's why:

Summary
  • Searle uses his old Chinese Room argument (and the same thing in a number of different guises). It's no better an argument now than it has ever been
  • None of the differences Searle draws between computers and humans holds up as a fundamental point of distinction
  • I need to know if quantum computers can be simulated on a UTM

I'm going to run a commentary on Searle's chapter and, in doing so, I'm of course leaving a lot out. You are welcome to read the chapter in full to get the context for these quotes. I purposefully avoided reading Kurzweil's reply.

Here is the difference: Kasparov was consciously looking at a chessboard, studying the position and trying to figure out his next move. He was also planning his overall strategy and no doubt having peripheral thoughts about earlier matches, the significance of victory and defeat, etc. We can reasonably suppose he had all sorts of unconscious thoughts along the same lines.

From that I would like to continue a little: Kasparov was studying the consequences of a set of possible moves. His subconscious has filtered out most of the illegal moves before his conscious mind even considers them. The impressive pattern-matching ability of his brain is making links with previous games and recalling which moves proved advantageous in those situations.

Does anyone think that is unreasonable? Where is the magical factor in that?
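To make that filtering step concrete, here's a rough sketch in Python, using knight moves as the example. The board encoding and function names are mine, not anything from the chapter, and a real engine's move generator is far more involved:

    # The "generate, then filter" step in miniature: produce every
    # geometric knight move, then discard the illegal ones (off the
    # board, or landing on a friendly piece).
    KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                      (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def knight_moves(square, board, colour):
        x, y = square
        candidates = ((x + dx, y + dy) for dx, dy in KNIGHT_OFFSETS)
        return [sq for sq in candidates
                if 0 <= sq[0] < 8 and 0 <= sq[1] < 8
                and board.get(sq) != colour]

    # board maps (file, rank) squares to the colour occupying them
    board = {(3, 3): 'w', (4, 5): 'w', (2, 5): 'b'}
    print(knight_moves((3, 3), board, 'w'))  # (4, 5) is filtered out; (2, 5) is a capture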

The computer has a bunch of meaningless symbols that the programmers use to represent the positions of the pieces on the board. It has a bunch of equally meaningless symbols that the programmers use to represent options for possible moves. The computer does not know that the symbols represent chess pieces and chess moves, because it does not know anything. As far as the computer is concerned, the symbols could be used to represent baseball plays or dance steps or numbers or nothing at all.

Does Searle think that my visual cortex is aware of what chess is when it highlights edges and surfaces? The pattern matching systems of the brain are very general (try staring at clouds or even a blank wall for a while) and certainly aren't designed for processing chess positions. Why is it that symbol processing deserves such contempt?
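Edge highlighting is a good example of just how mundane this processing is. Here's a minimal sketch: a Sobel-style kernel convolved over a made-up array of brightness values. Nothing in it "knows" it is looking at edges; it's arithmetic on numbers all the way down:

    import numpy as np

    # A 3x3 Sobel kernel that responds to vertical edges.
    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])

    def convolve(image, kernel):
        # Slide the kernel over the image, summing the products.
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for y in range(h - 2):
            for x in range(w - 2):
                out[y, x] = np.sum(image[y:y + 3, x:x + 3] * kernel)
        return out

    # A 5x5 "image": dark on the left, bright on the right.
    image = np.array([[0, 0, 10, 10, 10]] * 5)
    print(convolve(image, SOBEL_X))  # large values mark the vertical edge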

I'm sure Searle would argue that there is a conscious kernel of the brain which is somehow 'above' the mundane tasks of the brain. So is a child, playing for the first time with a rule book, not really playing chess? The child is doing the same as the computer: checking the rules for valid configurations, judging those configurations and then moving. At what point does the child actually start to really play chess?

Let us call it the Chess Room Argument. Imagine that a man who does not know how to play chess is locked inside a room, and there he is given a set of, to him, meaningless symbols. Unknown to him, these represent positions on a chessboard. He looks up in a book what he is supposed to do, and he passes back more meaningless symbols. We can suppose that if the rule book, i.e., the program, is skillfully written, he will win chess games. People outside the room will say, "This man understands chess, and in fact he is a good chess player because he wins." They will be totally mistaken. The man understands nothing of chess; he is just a computer. And the point of the parable is this: If the man does not understand chess on the basis of running the chess-playing program, neither does any other computer solely on that basis.

This is the Chinese Room argument (for which Searle is best known). Usually the man in the room is processing Chinese symbols.

Again, the fallacy in this argument occurs in two places. Firstly, no neuron in my brain understands anything. If I filled a Chess Room with 1,000 people handling the task as a team, why would you expect any of them to understand it? The understanding is an emergent property of the actions of all the people and the records they are making.

The second problem is very similar to the one I outlined above. I'm sure this poor man will quickly become quite fast at processing the rules if he spends any reasonable length of time doing it. At some point he may recognise that the symbols he is handing out look like 2D coordinates, and he could draw a grid inside his room, with symbols for the pieces, to help him keep track. With no previous knowledge of chess he could become quite adept as his brain starts recognising patterns and caching the results so that he doesn't repeat the same calculations. At what point is he really playing chess then?
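That caching is nothing more mysterious than memoisation. A sketch in Python, with evaluate() as a made-up stand-in for the laborious rule-book lookup:

    from functools import lru_cache

    # Pretend the body of evaluate() is the slow, mechanical process
    # of applying the rule book to a position. The cache means a
    # position seen before is answered without redoing the work,
    # exactly as the man's brain would start to do.
    @lru_cache(maxsize=None)
    def evaluate(position):
        return sum(ord(c) for c in position) % 3  # a meaningless score

    print(evaluate("Ke1 Qd1 ke8"))  # computed the long way
    print(evaluate("Ke1 Qd1 ke8"))  # served straight from the cache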

Imagine that I, who do not know Chinese, am locked in a room with a computer program for answering written questions, put to me in Chinese, by providing Chinese symbols as answers. If properly programmed I will provide answers indistinguishable from those of native Chinese speakers, but I still do not understand Chinese. And if I don't, neither does any other computer solely on the basis of carrying out the program.

Reread the reply to the last quote. The faults are the same.

Kurzweil assures us that Deep Blue was actually thinking. Indeed he suggests that it was doing more thinking than Kasparov. But what was it thinking about? Certainly not about chess, because it had no way of knowing that these symbols represent chess positions. Was it perhaps thinking about numbers? Even that is not true, because it had no way of knowing that the symbols assigned represented numerical values. The symbols in the computer mean nothing at all to the computer. They mean something to us because we have built and programmed the computer so that it can manipulate symbols in a way that is meaningful to us.

I'm left wondering what, exactly, chess pieces mean to us that makes Deep Blue so fundamentally different. If I were to rename and redesign the pieces (without changing the rules) I'm still playing chess. Now if I number them all and replace them with bits of paper, I have to remember what the numbers mean, but I'm still playing chess. Now if I get rid of the board and imagine all the numbers on a grid in my head, I'm still playing chess. If I don't imagine a grid, but instead remember the positions as numbers too, I'm still playing chess (though I don't get to use the pattern-matching systems of my brain). What's so different between me and the computer now?
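In code the point is even plainer. The toy legality rule below (a grossly simplified rook rule of my own invention) never looks at what the symbols mean, so swapping named pieces for arbitrary numbers changes nothing about the game being played:

    # The rule only compares coordinates; it is indifferent to whether
    # the piece is called "white rook" or 17.
    def rook_move_legal(src, dst):
        return src != dst and (src[0] == dst[0] or src[1] == dst[1])

    # The same position and move under two encodings.
    named    = ("white rook", (0, 0), (0, 5))
    numbered = (17,           (0, 0), (0, 5))

    for piece, src, dst in (named, numbered):
        print(piece, rook_move_legal(src, dst))  # the same answer either way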

He confuses the computer simulation of a phenomenon with a duplication or re-creation of that phenomenon. This comes out most obviously in the case of consciousness. Anybody who is seriously considering having his "program and database" downloaded onto some hardware ought to wonder whether or not the resulting hardware is going to be conscious.

(In this, Searle is talking about downloading yourself into a computer by mapping and simulating your brain, Diaspora-style.)

Exactly what is so magical about neurons? Most animals have less advanced myelin sheaths than humans do, and so have slower moving nerve impulses. This small change doesn't disqualify them from Searle's elite club.

It's unfortunate that all higher animals share the same basic nerve structure, however, as it means there is no other example to point to. But we have a pretty good understanding of how nerves work, so I can pick a single neuron and replace it with a device that records incoming impulses and can trigger outgoing impulses. This device communicates via radio with a computer, which controls it and simulates the neuron I replaced. (This is just in theory; I'm not saying I could do it today.)

If I continued to replace neurons with perfect simulations of them, I doubt even Searle would suggest that I'm functionally altering anything about my brain. So either I can replace my whole brain that way and simulate it all in the computer (at which point the actual neuron-replacing devices can be discarded), or he's suggesting that there is something so special about neurons that they cannot even theoretically be simulated.
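For what it's worth, the standard toy demonstration that a neuron's input/output behaviour can be simulated numerically is the leaky integrate-and-fire model. The constants below are illustrative, not physiological:

    # A leaky integrate-and-fire neuron: membrane potential leaks away
    # over time, incoming current accumulates, and crossing the
    # threshold produces a spike and a reset.
    def simulate(inputs, threshold=1.0, leak=0.9):
        potential = 0.0
        spikes = []
        for t, current in enumerate(inputs):
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(t)
                potential = 0.0
        return spikes

    # A steady trickle of input eventually fires it; silence lets the
    # potential leak away again.
    print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.0, 0.5, 0.6]))  # [3, 7]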

Actual human brains cause consciousness by a series of specific neurobiological processes in the brain. What the computer does is a simulation of these processes, a symbolic model of the processes. But the computer simulation of brain processes that produce consciousness stands to real consciousness as the computer simulation of the stomach processes that produce digestion stands to real digestion. You do not cause digestion by doing a computer simulation of digestion. Nobody thinks that if we had the perfect computer simulation running on the computer, we could stuff a pizza into the computer and it would thereby digest it. It is the same mistake to suppose that when a computer simulates the processes of a conscious brain it is thereby conscious.

I'm afraid I'm not even going to dignify that analogy with a reply. It's just absurd. Unfortunately for Searle, the absurdity is less hidden in this case than in his arguments above.

"This thesis says that all problems that a human being can solve can be reduced to a set of algorithms, supporting the idea that machine intelligence and human intelligence are essentially equivalent."
That definition is simply wrong. The actual thesis comes in different formulations (Church's is different from Turing's, for example), but the basic idea is that any problem that has an algorithmic solution can be solved on a Turing machine, a machine that manipulates only two kinds of symbols, the famous zeroes and ones.

(The part in quotation marks is Searle quoting Kurzweil.)

In what way is my neural net fundamentally different from a computer? It may well have some impressive emergent features, but is Searle suggesting (again) that neurons are fundamentally different from a UTM? At this point people usually start muttering the word "quantum" as an answer. Firstly, we have quantum computers anyway and, secondly, I'm pretty sure that a UTM can simulate a quantum computer and that the quantum aspect is just a matter of speed. (Can anyone confirm or deny this?)
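The usual way to see this is that a classical machine can always track the quantum state vector directly and apply gates as matrix multiplications. For n qubits the vector has 2^n entries, which is exactly why it's a matter of speed (and memory) rather than of possibility. A one-qubit sketch:

    import numpy as np

    # Simulating a quantum computer classically: keep the state vector
    # and apply each gate as a matrix.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)  # the Hadamard gate

    state = np.array([1.0, 0.0])  # a qubit in the |0> state
    state = H @ state             # now in an equal superposition

    probabilities = np.abs(state) ** 2  # Born rule: measurement odds
    print(probabilities)                # [0.5 0.5]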

The rest of the chapter is just Searle sniping at Kurzweil and he doesn't put forward anything new.

The conclusion is really in the Summary box at the top; I won't repeat it here.