Ben Goertzel

As the Singularity Institute's Director of Research, Ben Goertzel, Ph.D., is responsible for overseeing the direction of the Institute's research division. He has contributed over 70 publications, concentrating on cognitive science and AI, including Chaotic Logic, Creating Internet Intelligence, Artificial General Intelligence (edited with Cassio Pennachin), and The Hidden Pattern. He is chief science officer and acting CEO of Novamente, a software company aimed at creating applications in the area of natural language question-answering.

 

He also oversees Biomind, an AI and bioinformatics firm that licenses software for bioinformatics data analysis to the NIH's National Institute of Allergy and Infectious Diseases and the CDC. Previously, he was founder and CTO of Webmind, a 120+ employee thinking-machine company. He has a Ph.D. in mathematics from Temple University, and has held several university positions in mathematics, computer science, and psychology in the US, New Zealand, and Australia.

 

This interview was conducted by Norm Nason and was originally published in the website, Machines Like Us, on October 20, 2007. © Copyright Norm Nason - all rights reserved. No portion of this interview may be reproduced without written permission from Norm Nason.

 

 

NORM: It's a pleasure being able to talk to you, Ben. Thanks for joining me.

 

BEN: Thanks for having me. I've enjoyed your site for a while, so it's a pleasure to be an active participant.

 

NORM: Perhaps we can begin by defining a few terms. What is intelligence? What is artificial general intelligence?

 

BEN: Sure. Whenever I talk about AI I like to make the distinction between:

 

  • narrow AI – programs that solve particular, highly specialized types of problems
  • general AI or AGI – programs with the autonomy and self-understanding to come to grips with novel problem domains and hence solve a wide variety of problem types

 

Unlike an AGI, a narrow AI program need not understand itself or what it is doing, and it need not be able to generalize what it has learned beyond its narrowly constrained problem domain.

 

Pretty much the whole field of AI deals with narrow-AI systems -- systems that do one or another special thing intelligently. But if you want to apply such a system to a new sort of problem, you need to change the program itself -- these programs aren't flexibly adaptable. They lack general intelligence.

 

Building narrow AI programs is important—it can lead to useful software, and can teach you a lot. But in itself, it's not going to get you to the grand goal of AI—a thinking machine with general intelligence at the human level or beyond.

 

A lot of AI researchers believe narrow AI work will eventually lead to AGI—for instance, Minsky and Danny Hillis have professed that "intelligence is a lot of little things," meaning that you can make narrow AI approaches to a lot of small problems, then piece together the solutions into a solution to the big problem of AGI. I don't really think that can work. I think AGI and narrow AI are qualitatively very different problems.

 

NORM: Novamente—your AGI system—follows human developmental psychology. What is this philosophy of mind, and what does your system have in common with it? How closely must we model artificial intelligence on human intelligence?

 

BEN: Well, the philosophy of mind underlying Novamente is a long story. I wrote a book on it in 2006, called The Hidden Pattern: A Patternist Philosophy of Mind. I view pattern as the foundational concept underlying intelligence, and view intelligent systems as collections of patterns that are able to recognize patterns in themselves and the world—and in particular, to recognize patterns regarding which actions they may take in order to cause certain other patterns to arise in themselves and/or the world.
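To make that definition a bit more concrete: in the patternist view, a pattern in something is, roughly, a representation of it as something simpler. Below is a toy, illustrative sketch of that idea (my own construction, not Goertzel's code); the function names are hypothetical, and raw length stands in for a proper complexity measure.

```python
def complexity(data: bytes) -> int:
    """Crude complexity proxy: raw length (a real treatment might use
    program length or compressed size instead)."""
    return len(data)

def is_pattern(program: str, entity: bytes) -> bool:
    """A pattern in an entity is a representation of it as something simpler:
    the program must (a) reproduce the entity and (b) be shorter than it."""
    produced = eval(program)  # toy only: the 'representation' is a Python expression
    return produced == entity and len(program.encode()) < complexity(entity)

entity = b"abab" * 1000             # 4000 bytes of highly regular structure
program = "b'abab' * 1000"          # a 14-character description that regenerates it
print(is_pattern(program, entity))  # True: short program, same output
```

An intelligent system, on this view, is itself a collection of such patterns, busy recognizing further patterns in itself and its world.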

 

You can then go and interpret various results from empirical psychology, cognitive neuroscience, philosophy of mind and other disciplines in terms of this patternist philosophy of mind, a train of thought that can lead to many, many different AGI architectures. Novamente is just one of many possible AGI architectures consistent with patternist philosophy of mind.

 

One of the many theories from standard psychological theory that I've integrated into the patternist perspective is Piaget's theory of cognitive development, which articulates the stages that young minds pass through as they transition from infancy to adulthood. Stephan Bugaj wrote a paper—which is on the Novamente website—that articulates exactly how the Piagetian theory of development can be reinterpreted in the context of AI systems that, like Novamente, are fundamentally reliant on uncertain logical reasoning for learning how to interact in the world.

 

I think it's incredibly important, when you start an AGI project, to have a clear understanding of the conceptual and philosophical foundations. Most AGI projects in the past have gone wrong for purely conceptual reasons, not because of insufficient hardware or lack of funding. Of course, getting the conceptual picture right is only the first step—after you have that, there's still a hell of a lot of work left. But if you don't have the conceptual picture right, you're just screwed—you may learn interesting stuff along the way, but you're very unlikely to create an AGI.

 

NORM: You have said that many of the phenomena we humans take for granted are illusory. What do you mean by this?

 

BEN: As human beings carrying out our lives in the everyday world, we use a certain vocabulary for describing and thinking about ourselves. Many of the concepts involved in this vocabulary are what philosophers call "folk psychology"—i.e., concepts without any rigorous grounding in reality. Examples are free will, consciousness and self. The everyday interpretations of these terms are just full of contradictions and confusions. These concepts, in their standard forms, are not at all useful to the AGI designer—in fact they're damaging and distracting. They can be refined into useful concepts, but it takes a lot of work. In the Novamente design there is something called "attentional focus," which is related to consciousness; there is agentive causal inference, which is related to what humans do when they ascribe will to themselves or others; and there is a notion of a psychosocial self as a pattern a system recognizes in its own behavior. But these rigorous concepts we use in Novamente theory are very different from the cruder, less coherent concepts used in everyday discourse.

 

On a more personal level, I think humans have a lot of deep-seated illusions about their own lives that are rooted in taking these folk psychology concepts too seriously. The notion of free will is one of the most absurd and dangerous ones. The idea that there is some "me" in my head somehow "deciding" stuff is quite absurd, yet it's how all of us feel intuitively sometimes—due to what combination of innate neural wiring and cultural conditioning, no one is quite certain. These sorts of cognitive illusions are something I strive to overcome in my own life, just as I strive to overcome basic errors of probabilistic reasoning as have been identified by psychologists working in the area of Heuristics and Biases.

 

The human brain is a wonderful machine but it has a lot of problems—it often assesses probabilities badly wrong even when it has adequate information; it often uses a badly false model of itself, including largely bogus concepts like "free will." This is one of the reasons I don't think AGI researchers should strive to precisely emulate the human brain. Believe me, we can do better!!! Human brain emulation is important and interesting, because there is a lot to learn from the brain, and because a lot of us humans would like to see ourselves emulated for personal and aesthetic reasons. But I believe we can make AGI's with much more intelligence than humans, and much greater ethicality and reliability as well.

 

NORM: As I understand it, your approach to AI centers around writing algorithms that will eventually control embodied agents in rich virtual worlds such as Second Life, where they will be constrained by physical laws and can interact with real people in a wide variety of situations. You hope to begin with primitive, infant-like agents—limited but flexible autonomous exploratory systems—that will learn over time and grow to achieve human-level intelligence, and more. Please tell us more about this aspect of Novamente. How far along are you with this project?

 

BEN: I often say there are four key aspects to creating a human-level AGI:

 

  1. Cognitive architecture (the overall design of an AGI system: what parts does it have, how do they connect to each other).
  2. Knowledge representation (how does the system internally store declarative, procedural and episodic knowledge; and how does it create its own representations for knowledge of these sorts in new domains it encounters).
  3. Learning (how does it learn new knowledge of the types mentioned above; and how does it learn how to learn, and so on).
  4. Teaching methodology (how is it coupled with other systems so as to enable it to gain new knowledge about itself, the world and others).

The virtual-worlds aspect addresses the fourth of these points: teaching methodology. The other three points are addressed in the Novamente software design itself, which is really the hard part, and comes out of the last 6 years of collaborative design and prototyping work between myself and the rest of the Novamente team, as well as about 15 years of theorizing and prototyping on my part before that.
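For readers who want something concrete, here is a minimal sketch of how the second point, knowledge representation, splits into the three knowledge types Ben mentions. The structure is purely illustrative: it mirrors the distinction being drawn, not Novamente's actual internals, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DeclarativeFact:
    statement: str          # e.g. "blocks fall when dropped"
    truth: float            # strength of belief in [0, 1]

@dataclass
class Procedure:
    name: str
    run: Callable[..., object]   # executable know-how: how to do something

@dataclass
class Episode:
    timestamp: float
    observations: List[str]      # what the agent perceived and did, in order

@dataclass
class Memory:
    """Toy memory separating the three knowledge types named above."""
    declarative: List[DeclarativeFact] = field(default_factory=list)
    procedural: List[Procedure] = field(default_factory=list)
    episodic: List[Episode] = field(default_factory=list)
```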

 

The really hard part is making an AGI design that is conceptually correct according to philosophy and theoretical psychology, and is also computationally tractable on current computers. That's what the Novamente design accomplishes, and there are some papers about this on novamente.net, given at various academic conferences. Granted, they don't really tell the whole story, as Novamente is a big idea and hard to capture in a brief conference paper. I have a 350-page book manuscript describing the design and its motivations, and keep debating whether to publish it or keep it secret!!

 

Once you have a workable AGI design, though, the next step is figuring out how to teach it. This is where virtual worlds come in. They give you a wonderful combination. You have a relatively simple-to-deal-with setting in which perception, action, cognition, socialization and language can all be dealt with as a unified whole. And then you have potentially millions of people to teach your dumb baby AI and help it get smarter and smarter. The latter point is really important: look how smart Google got just by utilizing the collective intelligence in the links people put in their Web pages. Directing human intelligence into artificial intelligence is an important trick to use ... and the combination of the Novamente AGI design with virtual world embodiment will enable us to use it very effectively.

 

NORM: Would you care to speculate about how intelligent Novamente might become in this environment, and how quickly it may learn?

 

BEN: How intelligent: It's hard to say what the upper limit might be, but I'm sure it'll be well beyond human intelligence. My strong feeling is that the simplicity of the virtual environment is not going to be our limitation in terms of achieving superhuman levels of intelligence, and neither is hardware. The limitations are going to be: do we have the right AGI design, and have we taught it long enough and well enough.

 

About how long we'll need to teach it: this is harder to say. It partly depends on how many people have the incentive to simultaneously teach it the right stuff in the right way. We don't really know how much it will learn through explicit teaching versus through general, ambient interactions, for example. My gut feeling is that the learning will be faster, rather than slower, than the learning of a human child. Which is why I have posited that a Singularity by 2015 or so is not an absurdity, and one by 2020 should be patently achievable, if we put a concerted focus on it now. Build the AGI, put it online, create a situation where the residents of the online world are motivated to teach it—and from that point on, it may be years rather than decades before we have something like a fully-fledged "artificial scientist." Now, let this scientist read some advanced computer science and math books, asking questions as it goes along, and see how fast it learns to improve its own self in a manner consistent with its initial goal structure.

 

I'm not saying any of this is inevitable, just that it looks like a highly plausible course of development. I see no good reason, at this stage, why things COULDN'T progress in this manner. The Novamente design and the virtual world environment seem sufficient to support it. So let's try it and see what happens—keeping a very careful eye out for ethical gotchas along the way, of course. This is what we plan on doing with Novamente LLC, assuming funding holds up and all the business aspects as well as the technical aspects work out. Which they seem to be doing, at the moment.

 

NORM: For the benefit of readers who may be unfamiliar with the term, Singularity here refers to the hypothesized creation—usually by AI or brain-computer interfaces—of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in that progress.

 

A common problem with AI architectures is that there are typically many different learning algorithms written to handle the wide variety of cognitive processes, and they tend to "blow up" when combined. You've used this fact to argue that AI systems should be designed as integrated wholes, rather than as separate components. Can you elaborate on this problem, and discuss your solution?

 

BEN: Well, that's really a very technical question. It's hard to answer without going into an awful lot of detail. But Novamente does contain three key cognitive algorithms:

 

  • Probabilistic Logic Networks (PLN), which will be described in a book coming out in 2008 published by Springer. This is a logic engine that for the first time combines probability theory and formal logic in a systematic and coherent way.
  • MOSES, which synthesizes evolutionary theory and probability theory to enable very efficient learning of computer programs based on specifications. This started out as Moshe Looks' PhD thesis (see metacog.org) but has already grown a fair bit beyond there.
  • Economic attention allocation, a novel approach to deciding which pieces of knowledge and which procedures inside the Novamente system's mind deserve attention at any particular point in time.
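To give a flavor of what "combining probability theory and formal logic" means in the PLN bullet above, here is a toy version of the independence-based deduction rule from the PLN literature: given strengths for A→B and B→C plus the base rates of B and C, it estimates the strength of A→C. The core formula follows the published heuristic; the surrounding scaffolding and the example numbers are my own illustration.

```python
def pln_deduction(s_ab: float, s_bc: float, s_b: float, s_c: float) -> float:
    """Toy PLN-style deduction: estimate P(C|A) from P(B|A), P(C|B),
    P(B) and P(C), assuming independence in the cases where B fails:

        s_ac = s_ab*s_bc + (1 - s_ab) * (s_c - s_b*s_bc) / (1 - s_b)
    """
    if s_b >= 1.0:
        return s_bc  # degenerate case: B is certain, so A->C reduces to B->C
    s_ac = s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)
    return min(1.0, max(0.0, s_ac))  # clamp: toy inputs may be inconsistent

# "Ravens are birds" (0.95), "birds fly" (0.90), with made-up base rates
# P(bird) = 0.10 and P(flies) = 0.12:
print(round(pln_deduction(0.95, 0.90, 0.10, 0.12), 3))  # ~0.857
```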

 

And the point regarding integrative design and combinatorial explosion is that each of these algorithms, on its own:

A) would in principle be enough to lead to a thinking machine; but

B) in practice, given a realistic amount of computational resources, would never be adequate to lead to a thinking machine, because as you feed them more and more complex problems, the amount of resources they use would scale up exponentially. This is called a combinatorial explosion, because as problems get more complex, the number of combinations of factors involved in them gets bigger exponentially.
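A quick back-of-the-envelope illustration of the explosion he describes: with b choices available at each step, an unguided search over d-step solutions must consider on the order of b^d possibilities. The numbers below are arbitrary, but they show how fast that becomes hopeless.

```python
b = 10  # assumed branching factor: options available at each step
for d in (5, 10, 20, 40):
    print(f"depth {d:>2}: ~{b ** d:.1e} candidate solutions to examine")
```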

 

The Novamente design explains how each of these three cognitive algorithms can help each other out—basically delegating subproblems to each other—in such a way as to avoid this kind of combinatorial explosion except for very, very large and complex problems.

 

But to go deeper into this would require a lot of AI details. For example, MOSES helps PLN by enabling very efficient inference tree pruning. PLN helps MOSES by enabling it to carry out probabilistic modeling of evolving populations using background knowledge derived from long-term memory. Etc. All this is explained in my 350-page book on Novamente, which may or may not ever get published ;-)

 

NORM: If your work leads you to create the world's first artificially intelligent agent, I wouldn't blame you for keeping your recipe secret. But that raises the question of artificial agent ownership: should a brain be patented?

 

BEN: The patent system is badly broken, such that this question isn't very interesting to me. Let's build a superhuman AGI lawyer and ask it to fix the patent system!!

 

NORM: The internet helps connect human minds together, allowing for vastly more efficient communication and information exchange than was possible before. Once AGI systems are built, might such networks facilitate the evolution of a distributed artificial consciousness?

 

BEN: I think distributed intelligence may play a role in the evolution of mind on Earth in a couple of different ways:

 

In a paper in 2003 I introduced the notion of a MindPlex, which is a mind that has an emergent level of consciousness—explicitly-goal-directed intelligence—but also has components that are individually conscious, explicitly-goal-directed minds.

 

You could have a mindplex formed solely out of software, which could emerge for instance from a bunch of separate Novamente systems collaborating very closely together and sharing mind-material, yet retaining their own separate goal systems. Or, you could have a mindplex formed from humans and AI systems interacting with each other frequently and deeply.

 

What's interesting is the possibility that a higher level of awareness and intelligence emerges, which has us humans among its parts—yet without forcing us humans into any kind of borg-like obedience or homogeneity. I think this is a very real possibility—though I stress that it's not the most out-there and advanced possibility that a technological Singularity may bring us. In the end, an emergent global brain composed of humans is still a fairly primitive thing due to its reliance on humans.

 

I wrote a lot about this kind of possibility in my 2001 book, Creating Internet Intelligence.

 

NORM: An autonomous system by definition is under its own control, which has its risks. Tell us a little about designing Novamente's psychology of empathy.

 

BEN: It's based on simulation. A Novamente system builds a little internal simulation of each agent it interacts with. So it empathizes with you because it has a little subsystem inside itself that tries to BE you. This is, in broad strokes, how human empathy works too. But I think a machine can ultimately be more empathic than a human, because it won't make as many stupid or emotionally-biased cognitive errors in assessing what it's like to be someone else. A machine, being smarter and less biased, can actually better put itself in someone else's shoes, and thus be more empathic. But it has to be wired to want to put itself in others' shoes—which Novamente is.
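Here is a deliberately tiny sketch of that simulation idea: the agent keeps an internal model of each peer, updates it from observed behavior, and answers empathy queries by consulting the model instead of the real peer. Everything here (the class names, the frequency-based prediction) is a hypothetical stand-in for a far richer simulation.

```python
from collections import Counter
from typing import Dict

class PeerModel:
    """A tiny internal simulation of another agent: it remembers how that
    agent reacted to situations before, and predicts the most frequent
    past reaction."""
    def __init__(self) -> None:
        self.reactions: Dict[str, Counter] = {}

    def observe(self, situation: str, reaction: str) -> None:
        self.reactions.setdefault(situation, Counter())[reaction] += 1

    def predict(self, situation: str) -> str:
        seen = self.reactions.get(situation)
        return seen.most_common(1)[0][0] if seen else "unknown"

class EmpathicAgent:
    """Empathizes by querying its internal model of a peer, i.e. the
    subsystem that 'tries to be' that peer."""
    def __init__(self) -> None:
        self.models: Dict[str, PeerModel] = {}

    def observe_peer(self, peer: str, situation: str, reaction: str) -> None:
        self.models.setdefault(peer, PeerModel()).observe(situation, reaction)

    def empathize(self, peer: str, situation: str) -> str:
        model = self.models.get(peer)
        return model.predict(situation) if model else "unknown"

agent = EmpathicAgent()
agent.observe_peer("norm", "greeted", "smiles")
print(agent.empathize("norm", "greeted"))  # smiles
```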

 

NORM: How does Novamente differ from other AI research projects, and why do you think yours will succeed when others fail?

 

BEN: Basically, we got the AGI design right, and I don't know of anyone else who has. The design is based on a sound, coherent philosophy of mind; it's computationally scalable, and it's engineered well by a great team of programmers. And, the methodology of teaching the system via embodying it in virtual worlds makes an awful lot of sense.

 

NORM: Novamente is a commercial as well as a research venture. How are you paying the bills? What are your future business goals?

 

BEN: From 2001 through 2006 we paid the bills by doing an unholy variety of software consulting gigs, in various domains like data mining, bioinformatics, natural language processing, computational finance, and so on.

 

In early 2007 we shifted gears and decided to focus single-mindedly on the virtual agents domain—for two reasons. One, it is more harmonious with our long-term AGI goals than the other business areas in which we were doing consulting. Two, purely from a business perspective, it's common sense that focusing on a narrower vertical market niche is a better way for a small software firm to make money.

 

NORM: You are also Director of Research for the Singularity Institute. Why are you associated with the Institute, and what are its research program goals?

 

BEN: Novamente is narrowly focused on creating a thinking machine and rolling it out in a series of exciting products in virtual worlds.

 

The Novamente team cares about ethics and thinks about it, but still we're first and foremost concerned with making the AGI.

 

One of the things I think is critical about SIAI is that it focuses a lot of attention on the broader ethical issues—on how to maximize the odds that AGI's, once they're created, are positive and beneficial forces. This is a complex issue with many aspects including scientific and sociopolitical ones.

 

Furthermore, SIAI also has a valuable role to play in terms of guiding the development of open-source AGI tools. We've been considering a project called OpenCog, in which SIAI and Novamente and others would collaborate on releasing a suite of software tools to help move AGI development forward. This would include versions of some of Novamente's core cognitive algorithms, and also non-Novamente stuff from other developers.

 

Doing open-source AGI development in an ethically responsible manner is a big issue, given the huge possible implications of success at creating AGI—but this is exactly the kind of issue that SIAI was created to explore.

 

NORM: A basic premise of this website is that science is incompatible with religion. As Sam Harris says, "The difference between science and religion is the difference between a willingness to dispassionately consider new evidence and new arguments, and a passionate unwillingness to do so." Do you agree?

 

BEN: Not really.... I'm not religious, and one of the reasons my first marriage dissolved is that my first wife became religious during the course of our marriage. So I'm certainly not a religion fanatic!

 

However, I'd make two points in response to your claim...

 

1) Science just ain't all that dispassionate. Read Paul Feyerabend and Imre Lakatos. Or my article on philosophy of science from a couple years back:

 

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

 

which served as the basis for a chapter in The Hidden Pattern. Science is what Lakatos called progressive, meaning it has been effective at generating new ideas and adaptively responding to new evidence -- but it has not, in terms of the historical record, been very objective or dispassionate. Maybe it will be once the scientists are AGI's!

 

2) Many religious disciplines are actually specifically devoted to dispassionateness, and to clearing the mind of predispositions and confronting the world as it is. I would say that Zen priests are on the average a lot more dispassionate than scientists, for example. So we shouldn't tar all religious pursuits with the brush of contemporary US-style Christianity.

 

So, I think there is plenty of value to be gained by studying religions, especially the mystic aspects of various religions that have focused on the purification of the mind and the freeing-up of the mind from predispositions and illusions. Many scientists would benefit a lot from this kind of mental discipline and deeper self-awareness.

 

But in the end, although they do have these lessons to teach us, I do think that all current and historical human religions are bound up with foolish superstitions ... so I'm not religious and have taught my children not to be.... The idea of a religious AGI is a pretty funny one: but an AGI built on the Novamente design would surely be too rational to fall for any traditional human superstitions ... be they free will, God, nationalism or whatever.

 

NORM: Finally, Ben: Let's pretend that you have cloned yourself and are now both interviewer and the one being interviewed. Is there any question you've never been asked before, but have always wished to hear? If so, please ask, then answer, that question.

 

BEN: Ummm ... "How does it feel to have a clone?"

 

No question comes to mind, but I'll make a closing statement, which is that I've always believed, since I was a child, that human beings are incredibly held back and tied down by the cultural preconceptions that are pounded into their heads from childhood. Yes, culture is what makes us what we are—without culture, each of us would be little more than a monkey—but it also holds us back from manifesting the potential it has given us. Something like creating a thinking machine is outrageous from the perspective of mainstream culture—and yet, using the technological, analytical and conceptual tools provided by human culture, it could have been done a decade or two ago if anyone had managed to marshal the resources together in pursuit of the right AGI design.

 

Kurzweil says the Singularity will come in 2045—some say sooner, some say later, some say never. But the point I want to make is that the date is really determined by cultural psychology, not by technology per se. We could have a Singularity in 5-10 years if we wanted one badly enough. Or we could delay it forever if we collectively get lazy and don't really try.

 

I firmly believe that the technology and the ideas to create AGI at the human level and beyond are at our disposal right now. The Novamente design can work, and I'm sure it's not the only workable path. The obstacle is our culture and our psychology, which make it very difficult to marshal the needed level of resources toward the goal in a coherent way. This is the obstacle I'm trying to overcome with Novamente LLC—and it's a difficult effort, but also a very rewarding one, in terms of the process as well as, obviously, the end goal.

_________________________

 

For more information about Novamente and AI implementation in Second Life, see Ben's article in KurzweilAI.net called AI Meets the Metaverse: Teachable AI Agents Living in Virtual Worlds.