Category Archives: Human-Level Artificial Intelligence

Notes on Human-Level Artificial Intelligence

AlphaGo: Destined to Only “Just” Win?

Out on a Limb: 5 Close Games

A good friend recently pointed out that although the Go-playing artificial intelligence AlphaGo has now beaten world champion Lee Sedol at Go in both matches to date, the results have been very close. AlphaGo has only “just” won. As I write, it is Friday March the 11th, and results of the third match have yet to be announced.

Time to go out on a limb….


Go

Although I think AlphaGo has a very good chance of winning all five matches (I’ll say why in a second), it wouldn’t surprise me to see all of the matches being close (with the proviso that Sedol doesn’t have a complete, utter and very human meltdown).


Creating Is Not Understanding in AI

Creating and Understanding

The following line was found scrawled on Richard Feynman’s blackboard after he died:

“What I cannot create, I do not understand.”

It would make a fitting epitaph, coming from someone who was so fanatical about understanding things.


Feynman’s blackboard at Caltech

Unfortunately, the converse, “What I can create, I do understand”, is not always true. We need only turn to modern developments in Artificial Intelligence to find an important counterexample.

Bricks and Mortar in Human-Level Artificial Intelligence

Bricks and Mortar:

Strategic Positioning of a Long-Term, Human-Level AI Project

Bricks and Mortar

Précis

  • Any 10-year project needs a robust strategy for dealing with change during its lifetime
  • This is particularly true for a human-level AI project, as all key aspects of the field are changing dramatically
  • To date, AI projects have been narrowly focused and highly specialized: designed to achieve one goal (diagnose a disease, play chess, decide when to sell a stock, etc.)
  • Any human-level AI project, by contrast, will be orders of magnitude more complex, integrating sub-systems that will need to work together to achieve many sub-goals simultaneously, so…
  • Any human-level AI project will end up being a cooperative affair, involving many manufacturers/labs producing many specialist components. It is useful to think of these components as bricks, and the larger project as an edifice to be built
  • This heterogeneous nature will lead to novel constructs not seen in previous AI projects. Most notably, a human-level AI project will be a distributed system, not a monolithic program. Some data and third-party AI sub-systems will be called as services, not bolt-on components
  • To carry the “bricks” metaphor to its logical conclusion, a kind of highly dynamic “mortar” will also be required to bind any human-level AI project together: a mechanism that will allow the bricks to discover one another; to coordinate and sequence their activities; to recover gracefully when a component fails, gets upgraded or goes off-line; to communicate with outside sensors, AI services and data
  • The best business strategy involves developing the mortar, not the bricks
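To make the “mortar” idea above concrete, here is a minimal sketch of what its core service might look like: a registry through which bricks announce themselves, send heartbeats, and get discovered, with stale (off-line) bricks dropped automatically. All names and the heartbeat mechanism are hypothetical illustrations of the concept, not anything from an actual project.

```python
import time

class Mortar:
    """Toy service registry: the 'mortar' binding specialist 'bricks' together.

    Bricks register a capability, send periodic heartbeats, and can be
    discovered by capability; bricks that stop checking in are treated
    as off-line and excluded from discovery.
    """

    def __init__(self, heartbeat_timeout=5.0):
        self.heartbeat_timeout = heartbeat_timeout
        self._bricks = {}  # name -> (capability, time of last heartbeat)

    def register(self, name, capability):
        # A brick (specialist component) announces itself and what it can do.
        self._bricks[name] = (capability, time.monotonic())

    def heartbeat(self, name):
        # Bricks check in periodically; prolonged silence means off-line.
        capability, _ = self._bricks[name]
        self._bricks[name] = (capability, time.monotonic())

    def discover(self, capability):
        # Return the names of live bricks offering a capability.
        now = time.monotonic()
        return [name
                for name, (cap, seen) in self._bricks.items()
                if cap == capability and now - seen < self.heartbeat_timeout]

mortar = Mortar()
mortar.register("vision-1", "object-recognition")
mortar.register("planner-1", "task-planning")
print(mortar.discover("object-recognition"))  # ['vision-1']
```

A real system would of course need a distributed, fault-tolerant version of this registry, plus the coordination and sequencing machinery described above; the point of the sketch is only that the mortar is generic infrastructure, independent of what any individual brick actually computes.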


Book Review: What to Think About Machines That Think

Intelligentsia impresario John Brockman has done an admirable job of assembling some very impressive thinkers for his web site Edge.org. Although most are scientists, there is a fair number of people from the other estates, and the cross-fertilisation of ideas frequently draws even hardened specialists out of their shells, to make pronouncements on things well outside their fields of expertise.

Hal and Dave

Although this is not always a good thing, it does make entertaining, stimulating discussion, and I can recommend the site wholeheartedly.

Every year Brockman sets a question to his group (whose members are, cringingly, called “Edgies”), and turns the resulting answers into a book. This year’s question, from which the book takes its title, is “What do you think about machines that think?”

The responses, which take the form of short essays, are only very roughly organized by theme. These themes bleed slowly from one to another, without dividing section headings. This provides a surprisingly effective minimalist structure to the book, hinting at emergent concepts that transcend the distinct points made by individual authors.

There are so many excellent ideas presented in the book that I can recommend it, too, without reservation. So rather than write a conventional critical review here, I thought it would be more useful to look at these themes, and at how the thinkers have worked through them, rather than simply analyse what they’ve written. From this we can glean a lot about the state of the fields of AI and machine intelligence.
