Book Review: What to Think About Machines That Think

Intelligentsia impresario John Brockman has done an admirable job of assembling some very impressive thinkers for his website Edge.org. Although most are scientists, there are a fair number of people from the other estates, and the cross-fertilisation of ideas frequently draws even hardened specialists out of their shells to make pronouncements on things well outside their fields of expertise.

[Image: Hal and Dave]

Although this is not always a good thing, it does make for entertaining, stimulating discussion, and I can recommend the site wholeheartedly.

Every year Brockman sets a question to his group (whose members are, cringingly, called “Edgies”) and turns the resulting answers into a book. This year’s question, “What do you think about machines that think?”, supplies the book’s title, What to Think About Machines That Think.

The responses, which take the form of short essays, are only very roughly organized by theme. These themes bleed slowly from one to another, without dividing section headings. This provides a surprisingly effective minimalist structure to the book, hinting at emergent concepts that transcend the distinct points made by individual authors.

There are so many excellent ideas presented in the book that I can recommend it, too, without reservation. So rather than write a normal critical review here, I thought it would be more useful to look at these themes, especially how the thinkers have thought through them, rather than just analyse what they’ve written. From this we can glean a lot about the state of the fields of AI and machine intelligence.

So here are some of the quasi-invisible themes that run through the book:

The Danger of the Singularity is Overblown

The hottest topic in Artificial Intelligence right now is the Singularity: the notion that the moment machines become smarter than us, some form of world-historical transformation will occur. Perhaps because the press likes a good story, public thinkers who believe the Singularity will involve machines usurping mankind, Terminator-style or otherwise (Elon Musk, Stephen Hawking and David Chalmers among them), have owned the airwaves. Essays from more sober thinkers help redress that imbalance here.

Steven Pinker, for example, argues that in any event, machine intelligence approaching human intelligence is a long way off. Progress is, as he writes, “…hype-defyingly slow…”. Recent successes, like Deep Blue beating Kasparov at chess, or Watson beating world champions at the quiz game Jeopardy!, are few and far between, and always occur in very constrained domains. We are hardly setting the T-1000 amongst the general populace.

Pinker also questions where the motivation towards malevolence might come from. He (along with other writers in the book) points out a bad tendency to conflate intelligence with motivation, ambition, the will to dominate, or other human characteristics. “Being smart is not the same as wanting something”, he rightly says. Given the lack of an obvious connection between intelligence and wanting to take over the world, plus the amount of time the current snail’s pace of development will give us to anticipate and control any problems as they arise, Pinker is all for the project, calling the thought of thinking machines “exhilarating”.

Curiously, one person taking the middle-of-the-road position on the threat to mankind seems to be Nick Bostrom. This is curious because Bostrom has written the definitive book speculating on how the coming disaster will unfold, Superintelligence: Paths, Dangers, Strategies. In that work, he labours (and belabours) wildly improbable scenarios, everything from an artificial intelligence misinterpreting the goals it has been programmed for (for example, deciding that “maximising human happiness” might entail manipulating pleasure centres in the human brain via implants — without permission), to hiding its own intentions, or playing dumb until it can get control of the means of power. Many of the scenarios he presents in his book are Murphy’s Law gone horrifically mad.

Mercifully, in his article for Brockman’s book, Bostrom takes a less fantastical approach, accepting, as per Pinker, that time is probably on our side. So although he still sees potentially disastrous results (“Superintelligence could well be the best thing or the worst thing that will ever happen in human history…”), he feels the best approach is to bring some cool-headed discussion to the table. And the time to do that is now.

[Image: Hal and Dave]

The More Pressing Issue is Ceding Control to Machines We Don’t Understand

Daniel Dennett is another who urges a balanced view, but balanced at a different centre of gravity. He dismisses “robots run amok” as childish urban myth, and considers Bostrom’s “let’s start thinking now” position as perhaps valid, but ultimately something of a distraction: there is a more pressing and realistic problem confronting us right now. We risk not being taken over by thinking machines, but ceding our control to machines that cannot even think, “…putting civilisation on autopilot”, as he puts it.

Dennett’s analysis: because of their usefulness, we are tempted to employ AI programs in situations where they are competent enough to do a good job in most cases, but lack the intelligence or judgement required in the most important ones. Consider the self-driving car (my example, not Dennett’s). Today everyone marvels at its ability to hold its lane, brake suddenly and efficiently, and navigate and park itself, all the while employing a kind of probability-based decision-making calculus. Who wouldn’t want a car that can never be distracted or reckless?

But how good is a self-driving car at moral decision making? What is it going to do when confronted with an inescapable choice like hitting either a child or a pensioner? Perhaps on the face of it the answer to that scenario seems evident, but what happens when the odds of killing the child are, say, 15%, and of killing the pensioner 95%? The moral calculus is not so easy for the machine to compute, precisely because it is not at all easy for us humans to compute. And we don’t even call it moral calculus.
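To see why, here is a minimal sketch (my illustration, like the car example itself, not Dennett’s or anything from the book) of the expected-harm arithmetic such a machine might be asked to run. The probabilities come from the scenario above; the harm weights are entirely hypothetical, and that is the point: the machine’s “decision” flips on a single parameter nobody can defend.

```python
# A hypothetical expected-harm calculation for the child-vs-pensioner
# dilemma. The probabilities (15% and 95%) come from the scenario in
# the text; the harm weights are invented for illustration. There is
# no principled way to choose them, which is precisely the problem.

def expected_harm(p_fatal: float, harm_weight: float) -> float:
    """Expected moral cost of a manoeuvre: probability of a fatality
    times how heavily we (arbitrarily) weight that life."""
    return p_fatal * harm_weight

# Two equally defensible-sounding weightings give opposite answers.
weightings = {
    "weight both lives equally": {"child": 1.0, "pensioner": 1.0},
    "weight the child 10x more": {"child": 10.0, "pensioner": 1.0},
}

for label, w in weightings.items():
    cost_child = expected_harm(0.15, w["child"])          # swerve towards the child
    cost_pensioner = expected_harm(0.95, w["pensioner"])  # swerve towards the pensioner
    choice = "hit the child" if cost_child < cost_pensioner else "hit the pensioner"
    print(f"{label}: child={cost_child:.2f}, pensioner={cost_pensioner:.2f} -> {choice}")

# Output:
#   weight both lives equally: child=0.15, pensioner=0.95 -> hit the child
#   weight the child 10x more: child=1.50, pensioner=0.95 -> hit the pensioner
```

The arithmetic is trivial; choosing the weights is not, and no amount of engineering makes that choice for us.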

This is not to say we should not go ahead with self-driving cars; the number of lives they will save in day-to-day situations very likely far outweighs the number of debatable choices they will make in rare and heart-breaking situations, where humans might not have done any better. To sharpen this point: you yourself don’t refuse to get into your own car simply because you haven’t got a definitive answer, in advance, to the child vs. pensioner conundrum.

So they will come, and soon. We will be able to prove they are great drivers, but not necessarily great moral decision makers. So Dennett’s admonition is well-placed:

“The real danger is basically clueless machines being ceded authority far beyond their competence.”

There Already Are Machines That Think

Many of the essayists make the point that there already are machines that think: us. It is interesting to see how they diverge on this point, though.

Machines That Don’t Think Sometimes – Humans

Finally, one leitmotif that appears sprinkled throughout the essays is both telling and ironic. A substantial number of authors make the unequivocal — and unsupported — claim that machines cannot think. Because the essays are short, it is sometimes difficult to distinguish between “don’t yet think” and “can never think”, but it is clear that for some authors the former means something like “not in 1,000 years” (indeed, Dobelli includes this phrase in his title), and the latter position must at least be strongly represented.

What is curious is that this claim is frequently offered with no support whatsoever, or at best a lame justification. To paraphrase some recurrent themes: “…it took evolution billions of years…”, “…computers are not brains…”, and so on.

This strikes me as a form of thoughtlessness, an “unthinkingness”, if you will, on two levels.

First, it betrays a failure to question one’s own beliefs. The history of science is riddled with — indeed, is the story of — disproven theories and hypotheses. Some of these have “fallen” despite having substantial supporting evidence; the transition from Newtonian gravity to General Relativity is a classic example, and there was nothing embarrassing for Newton in that. But other, surprisingly persistent ideas have long blocked the progress of science, despite being founded more on psychological whim than on concrete evidence. The whole series of “X-centric” theories comes to mind: geocentric, heliocentric, homo-sapiens-centric, etc. You’d think we’d know better by now; and yet the trope “machines can’t think” is a perfect example of this same thoughtlessness.

Second, it represents a complete failure to recognise that a large part of the author’s audience very probably does not agree. The whole thrust of the AI movement accepts as its premise that it is at the very least conceivable that machines might one day think. That is the presupposition of the book’s title, after all. Further, it is evident that a large proportion of scientists and general science readers firmly believe the idea to be plausible, possible, or even true. Surely, writing in such a forum, a conscious author can see he or she needs to do more than offer ex cathedra “machines cannot think” pronouncements?

On the whole, supporters of AI and machine intelligence have done a much better job of defending their hypotheses in the book: they have proffered their evidence, and they have taken the time to explain why they believe what they believe.

Here we can take Wolfgang Pauli’s withering criticism of a poor explanation one step further. When a theory was too flimsy even to be tested, Pauli dismissed it as “not even wrong”. But to fail to provide any explanation whatsoever, to present not even a theory but just the unsupported claim that machines cannot think: well, that is “not even not even wrong”.
