Bricks and Mortar in Human-Level Artificial Intelligence

Strategic Positioning of a Long-Term, Human-Level AI Project

Précis

  • Any 10-year project needs a robust strategy for dealing with change during its lifetime
  • This is particularly true for a human-level AI project, as all key aspects of the field are changing dramatically
  • To date, AI projects have been narrowly focused and highly specialized: each designed to achieve one goal (diagnose a disease, play chess, decide when to sell a stock, etc.)
  • Any human-level AI project, by contrast, will be orders of magnitude more complex, integrating sub-systems that will need to work together to achieve many sub-goals simultaneously, so…
  • Any human-level AI project will end up being a cooperative affair, involving many manufacturers/labs producing many specialist components. It is useful to think of these components as bricks, and the larger project as an edifice to be built
  • This heterogeneous nature will lead to novel constructs not seen in previous AI projects. Most notably, a human-level AI project will be a distributed system, not a monolithic program. Some data and third-party AI sub-systems will be called as services, not bolt-on components
  • To carry the “bricks” metaphor to its logical conclusion, a kind of highly dynamic “mortar” will also be required to bind any human-level AI project together: a mechanism that will allow the bricks to discover one another; to coordinate and sequence their activities; to recover gracefully when a component fails, gets upgraded or goes off-line; to communicate with outside sensors, AI services and data
  • The best business strategy involves developing the mortar, not the bricks

Introduction

I recently applied for a job with a company that is mounting a long-term (10+ years) project to build human-level artificial intelligence.

Any business venture — of any sort — with the phrase “10+ years” in the mission statement is bound to be ambitious, and will require excellent long-term strategic planning at the outset.

This is particularly true of a project to develop human-level AI, because of the rapidly changing foundations upon which such a project would rest. Artificial intelligence theory, computer hardware technology, the sciences of cognition and neurology, and many related philosophical issues are among the fastest-moving academic and technical fields today. So, the following question arises:

How is it possible to produce a long-term strategic plan in a field that will, at best, be unrecognizable well before the project reaches its mid-term?

I want to answer that question here.

Change

Change In AI

It is hard to date the practical inception of AI. Theoretical discussions certainly go back to Alan Turing (and before him), but it really wasn’t until some time in the 1960s and 1970s that advances in hardware technology made practical, in silico AI any kind of realistic possibility.

Since then, the field has ridden a rollercoaster of success and failure, and it might be said that the biggest changes have come from the troughs, rather than the peaks, of its course. In 1965, Herbert A. Simon stated, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Marvin Minsky pronounced, “Within a generation…the problem of creating artificial intelligence will substantially be solved.”

The twenty years, and the generation, have long since passed, and we are nowhere near fulfilling those promises.

Nonetheless, and despite these setbacks, the field has made great advances, and the naysayers on the opposite side of the fence have seen their own pronouncements battered and their noses bloodied. At one point chess was considered a quintessentially human activity, requiring creativity, originality, insight, strategy and theory of mind; skeptics said a computer could never play the game at any reasonable level of skill. Yet Kasparov fell to Deep Blue in 1997. In another very human domain, Watson defeated two all-time Jeopardy! champions in 2011.

The past 10 years truly have seen revolutionary advances in the theory of AI and machine learning. Algorithms have matured even as they have become more efficient. Fruitful sub-fields have flourished and show no signs of dying out.

AI is changing, and changing fast.

Change In Computing Power

The first integrated circuit was produced in 1958, and the first proper computers to use integrated circuits followed in 1963. By the standards of the day, they were mind-boggling. But by the standards of today, they were mind-bogglingly crude. The point has been made so well and so often that it would be tiresome to belabour it here.

Moreover, even this comparison of integrated circuits, then and now, fails to capture the point completely: almost no useful integrated circuit operates in isolation today; everything is linked; the machine under your desk or the tablet on your lap spends much of its time drawing on and integrating the power of many machines world-wide. With the Internet it is trivial to pull current data from a weather station in New Zealand and from another at its (near) antipode in Lisbon, process the data on a server in California, and use it in a presentation on climate change in Johannesburg.

No chip is an island. We’ve moved from a “one chip per computation” model to one where a single computation can span continents, and it is hard to say exactly where the “thinking” gets done.

But even this revolution is about to be revolutionized. The old technology, silicon doped with gallium-something-or-other, will soon be replaced with memristors, graphene and/or carbon nanotubes. Even the names sound much more advanced. Great strides are being made in quantum computing. The fundamental architecture invented by Turing and von Neumann is about to be overthrown.

Change in Cognition and Neural Studies: Capacities vs. Functions

Although it is not a given that an AI project absolutely must attempt to mimic the functioning of the human brain (i.e., the faithful mirroring of its sub-systems), many AI projects, those aiming at “strong AI”, do attempt to mimic its capacities. The goal is to produce a machine which, depending on the ambition of the project, approaches, equals or exceeds human brainpower.

The human brain is our benchmark.

As the human brain is the only system that currently has these capabilities, it is the logical object of study. And although many AI researchers believe human capacities can be mimicked using non-human algorithms, the sharper point is this: the human brain is the only system that proves a tractable, functioning solution to the problem exists.

Therefore, any project which attempts to avoid mimicking human brain functioning risks venturing down a blind alley; any which stays on that path at least knows it is following a plan which already has been proven to work.

On the other hand, powerful though it is, evolution is commonly assumed to be blind. (In fact I disagree with this assessment, as you shall see in another post to this blog.) It is therefore reasonable to assume that by designing new algorithms, rather than slavishly following the brain’s functioning, we can outperform it. To cite a trivial example, even the most basic of computers does arithmetic millions of times faster than a human.

In either case (mimicking the brain’s functioning, or re-designing it), studies of how the human brain actually works are going to inform how any AI project will proceed.

Suffice it to say that studies in human cognition are proceeding apace. The US BRAIN Initiative (2013) will command a budget of roughly US$300 million per year for 10 years. The European Commission’s Human Brain Project (2013) promises around €1.2 billion in total over 10 years. The Japanese Brain/MINDS project (2014) will be funded at a rate of roughly US$35 million per year.

Conclusion: Human brain research is a hot topic, backed by powerful institutions and big money. What we know about how the brain functions will change dramatically over the next decade.

Change == Uncertainty

Predictive Impossibility

The result of all of the change outlined above is uncertainty. Predicting the future is a difficult task.

But despite this difficulty, most businesses, institutions and indeed people do engage in forward planning over a 5-year, 10-year or even longer horizon. They take up hedge positions, they build nuclear deterrents, they put money away for retirement. However, most anchor their plans to whatever solid ground they can find: relative economic stability, long-term international alliances and enmities, job security and stable health.

However, a long-term human-level AI project is unique in that all of its foundations, as described above, are shifting. There is no solid ground on which to take a stand. Whatever the pundits may claim, no one really knows what will happen to AI, IT and neurological studies over a 5- or 10-year time frame.

It is my position that anyone who thinks differently is kidding themselves. Any realistic long term planning must embrace uncertainty, not ignore it.

Advancing in the Face of Predictive Impossibility

Cooperation Will Be the Key

So, do we throw up our hands in despair?

No.

Although we don’t know what the future will look like, we can be pretty sure about how some aspects of it will play out. I believe the thing we can be most certain about is that any human-level AI project will be a cooperative affair.

Up until now, all AI projects (none of which have seriously attempted to approach human-level AI) have been narrowly focused and highly specialized. Each has been designed to achieve one main goal, such as diagnosing a disease, playing chess or deciding when to sell a stock. And there have been some tremendous successes.

But I would contend that all of these projects have been trivial in comparison to the overarching project of equaling or surpassing human intelligence. At best, each has represented one very small facet, a sub-system of a sub-system, of what must be accomplished to achieve human-level AI. And yet many have been major, long-term development efforts, involving dozens of researchers and programmers, and a lot of time and money.

Accepting this as fact provides a new way of looking at a long-term AI project strategically. It means the solution will be highly granular: it will come in bits and pieces, over extended periods of time, from many different labs and companies. No one project can build the whole thing itself.

The ultimate solution will be built from a very large number of very heterogeneous bricks. Those bricks will be highly functional, highly specialized and very complex. But to reiterate: complex and powerful though they might be, each will only be solving a small facet of the overall problem.

Conclusion: Any human-level AI project will be a cooperative venture, pulling in dozens of home-built and third-party components. There will be no one killer app, no one great discovery.

Bricks and Mortar

But there is something missing here.


The specialist components, the “bricks” mentioned above, cannot just be thrown together and expected to work. They need to be assembled in a way that is ordered yet creative; hierarchical yet re-entrant; self-referential; sequential and rigidly iterative, but with a dash of the random, the stochastic, thrown in.

Therefore, one of the key technological challenges will be how these bricks bind together: how they discover one another, how they talk to one another, how they cooperate, how they react when one drops out or modifies its capabilities. The bricks can be heterogeneous, but the mortar that binds them together cannot be; the mortar has to be uniform and standardized, yet highly functional and very intelligent.

The trick is not to build the components, it is to get them to work together.

My business strategy:

Forget the bricks; create the mortar.

The Mortar

Extending the Bricks and Mortar Metaphor

Although I will stick with the “bricks and mortar” metaphor for the rest of this document, I need to extend it somewhat. The materials for building a real-world wall are the antithesis of intelligence and functionality. Rocks will do. And you might even skip the mortar: dry-stone construction is an option.

A human-level AI project exists at the opposite end of this intelligence spectrum: each individual brick will be highly complex and specialized. And far from being dispensable, the mortar will be an essential element that supplies much of the intelligence and organization of the overall system.

More Than an Interface

At this stage in the discussion, the experienced programmer might think that what I am referring to as mortar is what is traditionally called, in the programming world, an interface. In fact I am thinking of something much more dynamic.

Programming interfaces are typically static definitions of how a given object or set of objects can be seen and used by the outside world. This definitional role will certainly be important in allowing individual AI components to work together to provide an overall solution.
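
For contrast, a traditional static interface might look like the minimal Python sketch below, built on an abstract base class. The component name is an invented example, not a real module.

```python
# A traditional, static interface: the contract is fixed at design time,
# and every implementation must honour exactly this signature.
from abc import ABC, abstractmethod

class DiagnosisModule(ABC):                        # hypothetical component type
    @abstractmethod
    def diagnose(self, symptoms: list[str]) -> str:
        """Return a diagnosis for the given list of symptoms."""
```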

But a working human-level AI solution will need to be much more dynamic. Not only will the underlying components be from different manufacturers, they will live on different cores, machines and networks, and very possibly be offered as distributed remote services, rather than “bolt-on” components. They will evolve. They will change. They will merge. They will die out.

So the mortar that binds the bricks together will itself have to be dynamic. It will need to provide a set of services so that a project can be defined initially. But beyond that, it will allow new components to be discovered and wired together dynamically, in real time. And that wiring will provide a smart switching mechanism between individual components, so that output from one module can be conditionally routed to whichever module, chosen from a group of candidates, can best provide the downstream processing.
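
To make that concrete, below is a minimal sketch, in Python, of what dynamic discovery and smart switching might look like. Every name in it (ComponentRegistry, advertise, route and so on) is a hypothetical illustration, not a reference to any existing library or standard.

```python
# Hypothetical sketch of dynamic component discovery and smart switching.
# All names are illustrative assumptions, not an existing API.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Component:
    name: str
    capabilities: set[str]           # what this "brick" advertises it can do
    handler: Callable[[Any], Any]    # the actual processing function
    online: bool = True

class ComponentRegistry:
    """The 'mortar': tracks which bricks exist and what they claim to do."""
    def __init__(self) -> None:
        self._components: list[Component] = []

    def advertise(self, component: Component) -> None:
        """A brick announces itself at run time; nothing is wired statically."""
        self._components.append(component)

    def candidates(self, capability: str) -> list[Component]:
        """Discover every online brick offering a given capability."""
        return [c for c in self._components
                if c.online and capability in c.capabilities]

def route(registry: ComponentRegistry, capability: str, payload: Any,
          score: Callable[[Component], float]) -> Any:
    """Smart switching: send the payload to the best-scoring candidate,
    falling back gracefully if the first choice has dropped off-line."""
    for component in sorted(registry.candidates(capability),
                            key=score, reverse=True):
        try:
            return component.handler(payload)
        except ConnectionError:
            component.online = False   # recover gracefully and try the next brick
    raise RuntimeError(f"no online component provides {capability!r}")
```

The essential property is that no upstream module ever names a specific downstream brick: routing decisions are deferred to run time, so a component can be upgraded, replaced or lost without the rest of the edifice noticing.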

The mortar will further need to play a coordinating role. Processing will be massively parallel. Issues of queuing and timing (especially if my idea of “AI As A Service” holds true) will be crucial. And a whole set of mortar services will be required to translate between modules (unit conversion is a good example).
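
One plausible shape for this coordination is a queue-based pipeline. The sketch below, again using invented module names, shows two modules running in parallel under asyncio, joined by a queue, with a unit-conversion “mortar service” translating between their conventions.

```python
# Hypothetical coordination sketch: parallel modules joined by a queue,
# with a 'mortar' translation service sitting between them.
import asyncio

def celsius_to_fahrenheit(c: float) -> float:
    """A trivial mortar utility: translate between module conventions."""
    return c * 9 / 5 + 32

async def sensor_module(out: asyncio.Queue) -> None:
    """Stand-in for a sensor brick that emits Celsius readings."""
    for reading_c in [18.5, 19.0, 21.3]:
        await out.put(reading_c)
    await out.put(None)                  # end-of-stream marker

async def reasoning_module(inp: asyncio.Queue) -> None:
    """Stand-in for a downstream brick that expects Fahrenheit."""
    while (reading_c := await inp.get()) is not None:
        print(f"processing {celsius_to_fahrenheit(reading_c):.1f} F")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=8)  # queuing and backpressure
    await asyncio.gather(sensor_module(queue), reasoning_module(queue))

asyncio.run(main())
```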

At this point, the bland, real-world concept of mortar starts to break down. What we will require is a dynamic, intelligent, proactive mortar. And it will need to supply some pretty sophisticated features.

Essential Components of Mortar

The mortar will need to provide the following essential features:

  • A project definition language that describes, at code level, how components are wired together. A real-world equivalent of this is UML
  • A marshaller that can read the project definition language and actually bind the components together into a functioning system
  • A component definition language that will allow individual components to “advertise” what they do, and describe how they are employed, at code level. This is very much what a traditional programming interface does (a toy fragment of these definition languages appears in the sketch after this list)
  • A set of dynamically-configurable flow-control components, so that the output of one module can be examined and, according to its contents, forwarded on to one or more other components. Somewhat like the control and decision boxes on a flow chart
  • A system to coordinate timing, maintain state, and provide multi-tasking and parallelization features. Similar to the core components of an operating system.
  • IO (Part 1): Something to manage outside sensors, to work with the “Internet of Things”, and the Internet itself
  • IO (Part 2): Persistent storage for components and system state
  • External Services Manager: Outside, third-party services will supply public data and “AI As A Service” components. The External Services Manager will know how to contact and interact with these services
  • Data Description Language: Will provide semantic descriptions of data sources, internal and external
  • User Interface Manager: Software to allow users to interact with the system
  • Utilities: Provides features like unit conversions, date conversions, calendar services, and so on
  • Integrated Development Environment and Debugging Services: The tools and toolboxes that will allow programmers to develop the system
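
As a concrete illustration of the first three items, here is a guess at what tiny fragments of the project and component definition languages might look like, expressed as plain Python data that a toy marshaller could consume. The schema, names and capability strings are all hypothetical.

```python
# Hypothetical fragments of the component and project definition languages.
# The schema is an illustrative assumption, not a proposed standard.

component_definitions = [
    {   # a brick "advertising" what it does and how it is employed
        "name": "speech-to-text-v2",
        "provides": ["transcription"],
        "inputs": {"audio": "pcm16, 16 kHz"},
        "outputs": {"text": "utf-8, sentence-segmented"},
        "location": "service://vendor.example/stt",  # remote service, not bolt-on
    },
    {
        "name": "nlu-engine",
        "provides": ["language-understanding"],
        "inputs": {"text": "utf-8"},
        "outputs": {"intent": "json"},
        "location": "inproc://nlu",
    },
]

project_definition = {   # how the bricks are wired into an edifice
    "pipeline": [
        {"step": "transcribe", "needs": "transcription"},
        {"step": "understand", "needs": "language-understanding"},
    ],
}

def marshal(project: dict, catalogue: list[dict]) -> dict:
    """A toy marshaller: resolve each pipeline step to some brick that
    provides the needed capability. A real one would re-resolve continuously
    as components appear, upgrade and disappear."""
    return {
        step["step"]: next(c["name"] for c in catalogue
                           if step["needs"] in c["provides"])
        for step in project["pipeline"]
    }

print(marshal(project_definition, component_definitions))
# -> {'transcribe': 'speech-to-text-v2', 'understand': 'nlu-engine'}
```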

Conclusion: Mortar as Inevitable; Mortar as a Standard

Much of the foregoing is based on the premise that any human-level AI project will be a cooperative effort between many third-party suppliers. It further presupposes the “AI As A Service” model. The reader might not buy into either of these assumptions.

Rather than try to argue the point further, I will simply note that some form of mortar is going to be inevitable for any human-level AI project, even one built entirely in-house. The very fact that we are talking about a 10-year-plus venture shows that the end result is going to be a great deal more complex than anything that exists today.

Although previous projects may have been programmed as monolithic code sets, it seems inevitable that modularity will rule the day, and therefore inter-module communication and coordination will be essential.

Separating the bricks from the mortar that binds them together makes good design sense.

Moreover, mortar could become a saleable product, allowing other development ventures to concentrate on the logic of their AI task rather than the superstructure that binds their modules together. And anyone to whom you sell the mortar automatically becomes a supplier of technology you yourself can use.

Sell enough mortar, and it becomes a standard. And as so often is the case in business, he who controls the standards controls the market.

Don’t Really Forget the Bricks

The distilled statement of my strategy was “Forget the bricks; create the mortar.” I only half meant this.

Any mortar project is going to need real-life bricks as a kind of test harness for debugging. And as those bricks won’t exist at the outset, a mortar project will have to build them. So there is no reason why the mortar-building company cannot have, as a parallel goal, the development of a functional human-level AI system. And indeed, I would guess that is where much of the fun will happen.

Eyes on the Prize

But I think any businessperson who is really keeping their eyes on the prize will always ensure that the mortar project takes priority. Human-level AI is a staggeringly complex venture, a point already acknowledged in the 10-year-plus time frame. But victory will only come to the swift, and the swift will be those who build only what they need to build themselves, and who are wise enough to buy in the rest.

Getting disparate AI components working together will be a key, the key, to bringing human-level AI to life.

©2016 Brad Varey
