Artificial Intelligence. We love the idea of this. Even when it's travelling through time to kill people, we still think it's cool.
And today we are going to be looking at one of the most famous philosophical arguments in recent times, John Searle's Chinese Room. And then we're going to be moving past it to look at the wider field of the philosophy of AI. First of all, the Chinese Room Argument. Imagine a sealed room into which you can pass questions written in Chinese, and from which you can receive answers, also in Chinese.
And the natural assumption is that somebody inside the room understands Chinese. But when you look inside, what you find is a guy and loads of baskets containing all of the different Chinese characters. The guy has a very precise rulebook that tells him: when you get this symbol, pass out this symbol; when you get this sequence, pass out this sequence; and so on. The guy himself does not understand Chinese, and has no idea what the symbols mean.
And, obviously, it's supposed to be an analogy for a digital computer. The guy is a CPU, the rulebook is a computer program, and the Chinese characters are the ones and zeros of binary. The conclusion that John Searle draws from this is that there is no way the guy inside the room could ever learn the Chinese language.
No matter how accurate or intelligent-looking the responses from the Chinese Room get, no amount of simulation could ever equal genuine understanding. And he thinks that the same is true of computers, or at least computers as we understand them, with ones and zeros. Even if they became scarily good at answering the questions, even if they modified their responses, like so-called learning computers that change the rulebook when they get certain inputs, no amount of syntax could ever equal semantics.
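If it helps to see just how mechanical the room is, here's a minimal sketch in Python of the whole setup as nothing but symbol lookup. The rulebook entries and the translations in the comments are my own invented examples rather than anything from Searle, but the shape of the point is his: every step works purely on the shapes of the symbols.

```python
# A toy sketch of the Chinese Room as pure symbol shuffling.
# The rulebook entries are invented for illustration; the point is only
# that no step in the procedure involves knowing what the symbols mean.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def chinese_room(symbols_in: str) -> str:
    """Look the incoming symbols up in the rulebook and pass out whatever
    the rule says. Pure syntax: nothing here requires understanding Chinese."""
    return RULEBOOK.get(symbols_in, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # the "room" answers without understanding the question
```

Make the rulebook as big and as clever as you like; on Searle's view you've only changed how convincing the shuffling is, not whether anything is understood.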
No computer could ever become intelligent just by running programs, could ever understand anything. The reason this argument is significant is because it flies in the face of Alan Turing's Turing Test, according to which a computer would be intelligent if its responses cannot reliably be distinguished from a human's. Turing is widely regarded as a hero of early computer science, and there are lots of different versions of the Turing Test, but Searle says that even if a computer passed one of them, as some now apparently can, that would not be enough. And usually, on the internet, that is where the discussion stops.
And indeed that's where I stopped last time I talked about this, in what was in my defence only my second ever video. But we can now go beyond. The best known response to John Searle is to say that whilst the guy inside the room doesn't understand Chinese, the system as a whole understands. In a way, the room understands. What a story.
To which Searle's reply is... If the guy memorized the entire rulebook and then went home and did the task on his own without the rulebook or the room actually being there, he still couldn't learn Chinese. He's now the only thing in the system and he still doesn't get it.
To which the objector replies, Well, if the guy went out and started interacting with stuff in the world, say he got a job in a Chinese restaurant, then he would, by associating the symbols with certain inputs, begin to learn what they mean. Like he'd learn, oh, these symbols must mean fish, these symbols must mean vegetables, and so on. To which Searle can quite cleverly reply that in that case, the guy is no longer just processing symbolic inputs and outputs. He's no longer just running the program. So it's still the case that just running programs won't get you anywhere.
There's also some kind of subjective sensing and perception going on here. And this is where things get a little tricky. The question of whether computers could think kind of orbits around another question. Namely, is the mind like a computer? If the mind is like a computer, or is a type of computer, then it's reasonable to think about building an artificial one.
But if it's not, then we're barking up the wrong tree. We do think of minds as being like computers, and they do a lot of the same jobs: information retrieval, association, calculation, and so on. But we should be wary of that intuition. We humans do tend to say that minds are like whatever the most advanced technology of the time is.
Nowadays we say they're like computers; people used to say that minds were like telephone networks. I find it quite funny to imagine a caveman inventing the wheel and then going, no, you see guys, the mind is like a giant wheel, yeah? So what is a computer exactly? Well, in his book The Mechanical Mind, Tim Crane says that a computer is anything which processes representations systematically.
That's it. Doesn't matter what it's made of or whether it uses electricity, that's the essence of a computer. And the two things to focus on there are representations and systematically.
A computer represents things using ones and zeros. But do minds work like that? Are thoughts just representations, or is there something else going on here as well? If thoughts contain qualia, non-representational, subjective elements, that is, if it necessarily feels like something to have a thought or a sensory input, then the mind is not a type of computer.
And no computer could ever be a mind, because there's more going on here than just representation. Qualia are capital-C Controversial. Philosophers are divided over whether or not they exist, and over whether they are necessary for thought if they do.
Because they can pose a problem for AI, philosophers who like to defend the possibility of AI will often try to eliminate qualia for that reason. That's why Searle's reply to the Chinese restaurant idea could get a little bit tricky.
If thoughts are just representations, then maybe the guy could learn Chinese by associating the symbols with the right representations; but if there's more to it than that, then the analogy with computers has broken down. Of course, Crane's definition of computer isn't the only one, and it's by no means uncontroversial what representation means either, so a lot of this is still being discussed. What about the other word, systematically? The philosopher Hubert Dreyfus argued that human thought isn't systematic in the way a computer program is. Imagine you're at the supermarket trying to decide which biscuits to buy. If you're after the best value, you might take into account the packaging or how high it is on the shelf, but equally you might just go, ehhh, well, I'll just have this one.
Your decision making isn't specifiable according to precise rules, because you could always have some condition that breaks the rules, and yet you'd still be able to function. You don't crash. For instance, you could have a rule that says, choose the biscuits that are the best value for money. But what if those biscuits are on fire? Well, okay, you change the rule and you say, choose the biscuits that are the best value for money, unless they're on fire, in which case choose the next best value biscuits.
But what if those biscuits are infested with ants? And you can keep going and going and going. You can have rules of thumb, but your decision making isn't specified by precise rules.
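Purely for illustration, here's a toy sketch of what that rules-plus-exceptions picture looks like as code. The biscuit data and the exception clauses are all made up, but they show the structural problem: the procedure only copes with conditions somebody thought to write down in advance, and falls over on anything else, which is exactly what a human shopper doesn't do.

```python
# A toy "biscuit chooser" built from a rule plus hand-written exceptions.
# Every unforeseen condition needs yet another clause; anything not
# covered makes the procedure fail outright.

def choose_biscuits(biscuits):
    """Pick the best value per gram, unless one of the listed exceptions applies."""
    for b in sorted(biscuits, key=lambda b: b["price"] / b["grams"]):
        if b.get("on_fire"):        # exception 1
            continue
        if b.get("ant_infested"):   # exception 2
            continue
        # ...and so on, forever: the rulebook only handles what we predicted.
        return b["name"]
    raise RuntimeError("no rule covers this situation")  # a human shopper just shrugs and picks one

shelf = [
    {"name": "Value Crunch", "price": 1.00, "grams": 400, "on_fire": True},
    {"name": "Choco Rounds", "price": 1.50, "grams": 300},
]
print(choose_biscuits(shelf))  # -> "Choco Rounds"
```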
Now don't get me wrong, you could write an algorithm which predicted statistically which biscuits people will buy, and indeed the supermarket might pay you a lot of money if you do, but don't confuse something's being algorithmic with its being modellable algorithmically. This is a very common confusion. But just remember, you can predict the weather with an algorithm, but the weather is not controlled by an algorithm. Dreyfus thinks that human thought is basically a botch. It functions systematically up to a point, and then it just makes a decision.
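To keep those two ideas apart, here's an equally toy sketch of the statistical side, with invented names and data. A model like this can predict shoppers' choices reasonably well without anyone's actual decision being produced by the rule the model uses, in the same way a weather model predicts the weather without the weather running an algorithm.

```python
# A trivially simple statistical "predictor": guess that a shopper will buy
# whatever has been bought most often so far. Useful to a supermarket,
# but it says nothing about how the shopper's decision is actually made.

from collections import Counter

past_purchases = ["Choco Rounds", "Value Crunch", "Choco Rounds", "Ginger Snaps"]

def predict_next(purchases):
    """Return the most frequently purchased item in the history."""
    return Counter(purchases).most_common(1)[0][0]

print(predict_next(past_purchases))  # -> "Choco Rounds"
```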
And that makes sense from an evolutionary standpoint. Our brains didn't evolve to be perfect decision-making engines, they just needed to be good enough and flexible enough to deal with inputs that were not predicted in advance. Our decision-making abilities just needed to be close enough for rock and roll in order to survive.
So there's a case for thinking that human minds are not like computers, and therefore that no computer could ever be a mind. Dreyfus actually compares the search for AI to alchemy: not in that it's silly, but in that it's after something it can't get, and the reasons it can't get it are actually the most interesting things about studying it. Just as the failure of alchemy ultimately taught us loads about chemistry, Dreyfus thinks that the failure to achieve AI is teaching us loads about technology and cognitive science. So maybe computers couldn't think. Or at least, not in the way we do.
In her book The Creative Mind, Margaret Boden has some interesting remarks about how, even if computers had some process that we didn't really think of as thought, we might still like to treat them as more than inanimate. Like I said, a lot of this is still up in the air. What do you guys think? Could a computer think?
And is there more to thinking than just the systematic processing of representations? This episode was sponsored by the AISB, the British Society for Artificial Intelligence and the Simulation of Behaviour.
And they are either a brilliant society of academics and industrialists, or an evil supercomputer pretending to be that society, which emailed me in order to commission this episode and prepare you for the coming machine uprising. They made this video, which you can watch, on whether the Turing Test is actually any good; they have a website you can visit, there's a link in the description; and they also have annual conferences and publish a quarterly magazine. So if you are into Artificial Intelligence...
they are definitely the people to see. That's all the time we've got this week, I'll do some comment replies in the next video. Thank you very much for watching and I will see you then.
Bye!