Steve Grand

Steve Grand, OBE, an honorary research fellow at Cardiff University's School of Psychology and a NESTA Dreamtime fellow, has carved out a reputation at the cutting edge of artificial life. He is Director of Cyberlife Research Ltd. and was formerly Technical Director of Creature Labs, where he was responsible for the architecture and programming of the artificial life game Creatures. Grand is currently developing artificial life applications, as well as an intelligent living machine that embodies a set of hypotheses about the neurological mechanisms present in various species of animal.

 

He theorizes that every cortical map must be thinking about something all the time, and that if there are no signals demanding its attention then the map will generate some itself. He feels this is the explanation for the endless monologue that runs in everyone's head, and the visual daydreaming we do in vacant moments. In his book Creation: Life and How to Make It, Grand explores what constitutes the conscious essence of existence, what intelligence is, even "how we can make a soul." In Growing Up With Lucy: How to Make an Android in Twenty Easy Steps, he describes his progress in building a robot capable of developing a mammal-like intelligence. Steve is currently working on an even more ambitious artificial life simulation, tentatively called Grandroids.

 

This interview was conducted by Norm Nason and was originally published on the website Machines Like Us on August 27, 2007. © Copyright Norm Nason - all rights reserved. No portion of this interview may be reproduced without written permission from Norm Nason.

 

 

NORM: Thank you for joining me, Steve. It's great having you here.

 

STEVE: Hi Norm, thanks for the invitation, and particularly for all your work running this site—I wouldn't have a clue what was going on in the world without MLU!

 

NORM: It's an exciting time in the field of Artificial Life: last August a team from Vanderbilt University published a detailed blueprint for assembling a synthetic cell from scratch (Molecular Systems Biology, DOI: 10.1038/msb4100090). It includes 151 man-made genes, which would be combined with various biochemicals to make a self-assembling cell able to live under carefully controlled lab conditions. As of this writing, other scientists and industrialists are meeting in Switzerland amidst claims that the world's first entirely human-made genome may be only weeks away from creation. Swiss and international civil society groups are calling for swift action to control this technology, but the scientists themselves are advancing preemptive proposals to evade regulation. As scientists meet in Zurich, the UK's Royal Society and the Swiss government have announced plans to investigate synthetic biology. What do you think about this news?

 

STEVE: I'm not sure what to make of the claims. They seem to have arisen mostly from Craig Venter's recent announcement that his team has successfully transferred the genome from one bacterium into the cytoplasm of another. I imagine this is an important step forward, but the result of it is no closer to being a fully synthetic organism than a mule is to being a man-made horse. On the other hand, Forster and Church's paper is a theoretical blueprint for a genuinely synthetic organism, and that's very exciting. I'm in no position to judge how close they actually are to being able to cook the recipe shown in that paper, but it does at least sound like they're well on the way towards understanding the minimum requirements for a self-replicating chemical network.

 

The result will certainly be life, but not necessarily life as we know it. Scientists and politicians alike are right to think about the implications and dangers of such technology, but as far as public perception is concerned it's important to remember that 150 genes operating in a tiny, cell-like bubble is a very, very, very long way from the extremely complex, multi-billion-cell colonies that we normally think of as animals. I think Dr. Frankenstein would be pretty amused at the level of our fears in relation to the current level of our ambitions.

 

But creating a self-replicating, almost self-sustaining mixture of chemicals will count as a huge achievement; something that von Neumann and Turing would have loved to see. And at a mere 150 genes there's a good chance we might actually understand what it is we've made—the principles that are at work. That's the part that interests me.

 

NORM: Your work might be summarized as a quest to truly understand those underlying principles of life. It's a tough job, as they say, and someone's got to do it—but few others seem up to the challenge. What motivates you to explore this difficult terrain, and what keeps you going?

 

STEVE: Actually, I don't think I've ever met anyone who isn't motivated to explore it. Some are more motivated than others, of course. Most people vaguely wonder what life was all about after it's too late; a few of us desperately want to understand what life is all about before it runs out. But life is so precious, so demanding, so short and just so startlingly weird that we all want to make sense of it. All humans, at least.

 

But some of us get more involved in the quest than others, I guess. It helps if you think you have an angle—something you already understand or have an instinct for that might eventually shine a little light on the problem. In my case the way I think tends to be a bit different from the more conventional ways of looking at things, and sometimes that seems to help. Some people think mostly in words; some in pictures. I'm one of those people who think in dynamics—feedback loops and suchlike. It's handy for understanding life, because a living thing is a collection of self-sustaining feedback loops; physical bodies are just the transient manifestation of those loops. Viewing life as a process, rather than a thing, is a tenet of A-life and it helps a lot if that's the way your mind already works.
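To make that idea concrete, here is a toy numerical sketch (my illustration of the general concept, not anything from Grand's own systems): a ring of components in which each one is produced from its neighbour while everything continually decays. Matter flows through the system, yet the loop as a whole sustains itself.

```python
# A self-sustaining loop: each component is produced from its ring
# neighbour and decays away. The parts turn over constantly, but the
# organisation persists. All figures here are arbitrary.

N = 10
x = [1.0] * N                       # the "body": amount of each component
dt, make, decay = 0.1, 1.05, 1.0    # production slightly outpaces decay

for step in range(1, 501):
    # synchronous update: every new amount is computed from the old state
    x = [xi + dt * (make * x[i - 1] - decay * xi) for i, xi in enumerate(x)]
    if step % 100 == 0:
        print(f"step {step:3d}: total = {sum(x):.2f}")
```

Every unit of material in the ring is destroyed within a few time constants, yet the total slowly grows: the thing that endures is the loop, not the matter.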

 

As for what keeps me going, there's a Pink Floyd lyric that I think sums it up nicely: "When I was a child I caught a fleeting glimpse, out of the corner of my eye; I turned to look but it was gone; I cannot put my finger on it now; the child is grown, the dream is gone; I have become comfortably numb." Well, that's how I feel too, except I still think I might be able to put my finger on it, if only I can avoid becoming too comfortably numb.

 

That's my personal driver—the feeling that I keep getting glimpses of something fascinating and revealing and I desperately want to know what it is. But I also have wider motivations to do with our ability as humans to understand what life is and isn't. Basically, I want to discourage other people from getting too comfortably numb as well. I guess these are my political motives (with a small 'p').

One of the things I often find myself talking to the public about, for example, is the increasingly inescapable evidence that we are machines; that a human mind in all its glory is a mechanical consequence of the lawful interactions between trillions of very simple moving parts, and not some kind of vitalistic magical essence attached by a silver thread to a body. For most people this is very hard news to take, and they rebel against it. "What about culture?", they say. "What about free will?" "How dare you suggest I'm some kind of jumped-up pocket calculator!" But as an engineer I have a huge respect for machinery and see things differently. Recognizing that we are machines doesn't demean us at all; it just shows us what astounding and beautiful things machines are capable of. I find it awe-inspiring.

 

The human mind is an emergent entity, which is not independent of, and yet transcends, the mechanics of its parts. And if simple mechanics can lead to something as subtle and complex as the human mind, who is to say what else it might be capable of? The universe is an endlessly inventive place and we should be proud to be one of its most recent creations. But most people don't see this—they prefer to believe we're some kind of ectoplasmic goo, concocted by a rather stern and humourless god. More worryingly, they prefer to believe there are simplistic, dogmatic answers to questions like when life starts and stops, which kinds of living things have rights, and who they should beat the crap out of in support of their creator's supposed wishes. If I ever achieve anything useful, I'd hope it was that I helped to upset people's cozy, glib and often dangerous preconceptions about life and the nature of reality. I want people to gain a better-grounded respect and understanding for "machines like us."

 

NORM: I want to ask you about the specifics of your work, and what your intuition tells you about life and consciousness, but first: I know you have strong feelings about working alone, unencumbered by others. How is it that you find this to be an asset, rather than a liability?

 

STEVE: I wouldn't dream of suggesting that it would be an asset for everyone, but it works for me. It partly follows from having an odd way of looking at the world. When I talk to myself I find that I perfectly understand every word I say, but that's rarely the case when I try to collaborate with others. There's a well-known relationship between the number of people in a team and the percentage of time spent dealing with communication problems, and if it takes one man one day to dig a hole, you can be pretty sure that two men will do it in a day and a half. Another difficulty with teams is that you have to divide the problem up, and that means nobody owns or can see the whole thing at once—it's a bit like trying to understand the picture on a jigsaw puzzle by giving each piece to a different person. And then by assigning tasks to specialists you lose that magical ability to see new connections between apparently unrelated things, which is the wellspring of most creativity. When I build my own robots, for instance, I discover things as I'm building them that make me rethink my theoretical or experimental paradigm. If they were built by a technician then I'd miss all that. And by learning new skills in electronics, say, I come up with new analogies for the brain that otherwise would never have occurred to me. Teams are good for some things, but history tells us that creative insights tend to arise from single minds working in relative isolation.

 

I hasten to point out, though, that even a loner like me rests on the shoulders of thousands of other researchers, who've made their hard-won data publicly available. I'm very grateful for that data. What I don't have to do, unlike academics, is pay any attention to these other people's theories.

 

NORM: You have certainly shown us that it is possible for a single individual to make significant strides in the field of artificial life—in contrast to, say, the large teams required to produce a moon landing. Here is what a few others have said about you and your work:

 

"Very occasionally somebody from outside academia comes along and shows us academics how to do something we've been working on for years. Steve Grand showed us how to build a universe of evolving creatures, without the prevailing academic biases."

 

~ Rodney Brooks, Director, Artificial Intelligence Laboratory, MIT.

 

"Steve Grand is the creator of what I think is the nearest approach to artificial life so far.... He illuminates more than just the properties of life; his originality extends to matter itself and the very nature of reality."

 

~ Richard Dawkins, author, evolutionary biologist, Oxford University.

 

"When Steve Grand set out to create norns and the world they lived in, he made some inspired guesses that went against a lot of received wisdom. My favorite was his decision to model about two levels deeper than most AIniks would have recommended -- for reasons of economy. (Hey, if you’re modeling a grazing cow, you’re not going to model every blade of grass -- you’re going to just have this renewable resource of undifferentiated stuff, right? Wrong.) If you do it right, as Steve did, your “uneconomical” modeling efforts pay for themselves many times over in providing foundations for realistic side-effects and multiple functions. And there’s a deeper point: hyper-idealized, oversimplified models often yield results that are just plain wrong. So my maxim, thanks to Steve, is: Always model more than you think you need."

 

~ Daniel Dennett, philosopher, co-director of the Center for Cognitive Studies at Tufts University.

 

"Steve and I have appeared together in TV documentaries over the years and have met at conferences. If someone asked me to describe him in a few adjectives, I would say: ideosyncratic, visionary, courageous."

 

~ Hugo de Garis, Head, Artificial Intelligence Group, International School of Software, Wuhan University, China.

 

What do these guys know about you and your work that we should know?

 

STEVE: Aw shucks! I could say equally nice things about each of them, too.

 

I think maybe their kind comments have something to do with the fact that after 20 years of hard work I became an overnight success. If you think Creatures emerged fully formed out of a vacuum, you'd probably conclude that I'm a genius, but really I'd spent all my adult life thinking about these things quietly to myself and doing experiments that no-one else saw. In a way this is the downside of working alone—nobody even knows you exist until you produce something. That's more or less the position I'm in now, because it's ten years since Creatures was published, and the work that I started with Lucy the Robot is going to take a while yet to come to fruition (forever, unless I earn some money!). But the good side of it is that nobody is looking over my shoulder all the time, tracking every little bit of progress.

 

Even so, one of the most important things that these guys happen to know is just how hard the problem is, and I'm glad they think I'm on the right track. I suppose it's fair to say that Richard Dawkins is a (surprisingly rare) believer that there are fundamental principles in biology, which is why he was present at the first ever conference on artificial life, and in my work I try to discover and exploit some of those principles, looking for the simplest, most elegant computational structures that can create lifelike richness. Meanwhile, Dan Dennett and Rod Brooks have always pointed out the importance of building complete organisms, in which the whole can become more than the sum of its parts, and that's what I try to do as well.

 

But I think one of the most important aspects of my work is the way I think about computation. Since the development of the computer we've all been conditioned to think of computation as a digital, serial, stepwise process in which varying sets of instructions are used to control data. So many of our metaphors for understanding the world are now set within this paradigm. But living things, brains, social systems, and for that matter most things in the universe compute almost instantaneously, in a massively parallel, analogue way. What's more, it's the 'data' that drive the 'code', in the sense that the laws of physics (the "instructions") are fixed and universal, and all the richness we see around us is due to the changing relationships between objects. It seems to me that my job as a creator of simulations should involve taking the serial, top-down, digital computer and turning it as quickly as possible into a simulation of a parallel, analogue, bottom-up, data-driven system. From that point on I should simply arrange virtual objects in space and alter their parameters—the way the real world works. Basically I start out as a programmer and then switch as early as possible to being a biologist. I think this is important, but most people in AI, A-life and computer science find the digital paradigm a hard habit to shake.
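A minimal sketch of that shift, assuming nothing about Grand's actual engines (the class, numbers and ring topology below are invented for illustration): one fixed update rule plays the role of the laws of physics and is applied to every object at once, while all the variety lives in the objects' parameters and their arrangement in space.

```python
import random

class Unit:
    """A virtual object; its parameters are 'the data'."""
    def __init__(self, leak, gain):
        self.leak = leak
        self.gain = gain
        self.state = 0.0
        self.neighbours = []   # arrangement in space defines the circuit

def step(units, dt=0.05):
    """The one universal 'law', applied to every unit in parallel."""
    # read phase first, so every unit sees the same instant in time
    inputs = [sum(n.state for n in u.neighbours) for u in units]
    for u, x in zip(units, inputs):
        u.state += dt * (u.gain * x - u.leak * u.state)

# Programming ends here. From now on we work like biologists:
# arrange the objects into a ring and perturb the medium.
units = [Unit(leak=0.5, gain=random.uniform(0.2, 0.3)) for _ in range(20)]
for i, u in enumerate(units):
    u.neighbours = [units[i - 1], units[(i + 1) % 20]]

units[0].state = 1.0
for _ in range(200):
    step(units)
print([round(u.state, 3) for u in units])
```

Note that `step` never changes: once the "physics" is written, all further work consists of arranging virtual objects and altering their parameters.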

 

Once you do shake free of the world of IF/THEN statements, you find that there are many forms of computation in the world besides algorithms. When you hit a bell and it rings with a characteristic tone, it doesn't sit there applying sequences of rules to decide which of the millions of vibrations from the collision should be kept and which discarded; the vibrations just interfere with each other and themselves, and those that interfere constructively almost instantly win out over those that don't. When we try to understand the computations of life, and especially of brains, it's these sorts of processes we should be looking at (as Steve Lehar points out in your interview with him). The digital computer was originally based on analogies of how the mind was supposed to work: memory, central processing unit, rules, procedures and suchlike. But this paradigm is unhelpful and holds us back—the brain is not like a computer at all. Happily, the computer is a fantastic device for pretending to be other kinds of machines, so we can use digital computers to simulate parallel, analogue machines, and then arrange those virtual machines inside cyberspace to produce more interesting, more biologically relevant systems. That's what I try to do—I design circuitry using virtual objects. It's an art form.
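The bell analogy can be caricatured in a few lines, assuming a struck bell behaves as a sum of independently decaying vibrational modes (the frequencies and damping rates below are arbitrary):

```python
import math

# (frequency in Hz, damping per second) for a handful of made-up modes;
# the strike excites all of them at once, with equal amplitude
modes = [(220.0, 8.0), (523.0, 3.0), (660.0, 0.5), (1310.0, 6.0)]

for t in (0.1, 0.5, 2.0):
    amps = {f: round(math.exp(-d * t), 4) for f, d in modes}
    dominant = max(amps, key=amps.get)
    print(f"t = {t}s  amplitudes = {amps}  ringing at {dominant} Hz")
```

No rule ever selects a frequency: the characteristic tone is simply whatever survives, almost instantly, while everything else dies away.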

 

NORM: A big part of discovering answers is asking the right questions. What are the questions you are asking yourself these days?

 

STEVE: Hmm, good point. Maybe the question I should have been asking myself is "what are the questions I'm asking myself?"... Let me think about that...

 

Actually, maybe that's a good place to start. Breakthroughs so often occur when people examine their own previously unquestioned assumptions and find that quite fundamental things they'd always taken completely for granted aren't actually true. The snag with this is the implicit paradox, of course: if your assumptions are unquestioned, how do you know what they are? I think this is where it helps to see the world with childlike eyes—all innocence and ignorance. Experts are the last people to question the foundation stones of their towering edifice of acquired knowledge, for fear the whole thing will topple.

 

Basically I try to ask really dumb questions about life in general and the brain in particular, in the hope they'll uncover assumptions that need abandoning. In broad terms I'm trying to understand consciousness, since that's the most precious commodity in the universe to me (and I presume to most conscious entities). But since nobody yet has a clue what consciousness actually is, I try to answer more tangential questions, like why did it evolve, and what structures and functions are needed to support it?

One of the most fundamental questions to ask, I think, is where we are, as conscious beings. We tend to assume implicitly that we are situated in and consciously aware of the outside world, and experience events in it as they happen. But that's not really true—by the time we have processed the perception of an event and become aware of it, it's already over. If we relied on that belated information we'd never be able to interact with a fast-moving environment and survive. In truth we're conscious of a different, imagined world; one that usually more-or-less corresponds to events in the outside world, but presages them by a fraction of a second or more. The world we are conscious of is a simulation of reality, designed to make us aware of events that are probably happening now or are about to happen, but won't actually be perceived until it's too late. The beauty of this simulation (and this may have been a lucky accident) is that it can be fast-forwarded, allowing us to make plans; it can be conditional, allowing us to ask "what if?" questions; it can be translocated, allowing us to see the world from someone else's point of view; it can even be completely fictional, allowing us to visualise things that don't exist. In short we humans (and doubtless some but not all other animals) have the capacity for imagination. So if you look for answers to consciousness in our passive awareness of the world, you're looking in the wrong place. We need to understand where our capacity to generate virtual worlds comes from and why it evolved, because those virtual worlds are where we, as conscious beings, live. It is from a particular kind of simulation that consciousness emerges. There seems to be a theme to my life here.
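The delay-compensation half of that argument reduces to very simple arithmetic. This sketch illustrates the logic only; the 100 ms delay and the constant-velocity target are assumptions made for the example, not claims about real neural circuitry:

```python
SENSORY_DELAY = 0.1   # seconds; an assumed, illustrative value

def true_position(t, speed=2.0):
    return speed * t                          # a target moving at constant speed

def perceived_position(t):
    return true_position(t - SENSORY_DELAY)   # raw percepts arrive stale

def simulated_position(t, estimated_speed=2.0):
    # the "imagined world": the stale percept plus a prediction of the
    # motion that happened during the sensory delay
    return perceived_position(t) + estimated_speed * SENSORY_DELAY

t = 1.0
print(f"actual    now: {true_position(t):.2f}")       # 2.00
print(f"perceived now: {perceived_position(t):.2f}")  # 1.80, always behind
print(f"simulated now: {simulated_position(t):.2f}")  # 2.00, back in step
```

The same machinery, run forward further than the delay requires, gives planning; run with altered inputs, it gives "what if?"; run from another vantage point, it gives someone else's view.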

 

NORM: This next question comes from Peter Hankins, a philosopher and frequent contributor to Machines Like Us: Why did you decide to build Lucy—your ongoing AI experiment—as a physical robot, rather than simulate her in a computer?

 

STEVE: Good question, to which I have both good and bad answers. One of the more shameful ones is that I got fed up with people complaining that my simulated creatures are not "real." I can understand why it's hard for people to accept, but I've gone to some lengths to address this issue at a philosophical level and all I get in reply is "they're not real because they just plain aren't. Yah boo sucks." So I made Lucy a "real" robot, and that makes them happy. What they don't seem to realise is that Lucy's brain is just as virtual as ever. Her mind emerges from neurons that aren't "real." But because her body is a physical object nobody seems to care. I find that quite funny.

 

A more noble reason is that you can't bluff the laws of physics. If a robot doesn't do what you expected then you know your theory was wrong, whereas in a simulation it's very easy to cheat, whether knowingly or unknowingly. In Creatures I chose to cheat a lot—after all, it was an entertainment product that had to work, not a research exercise that would be equally valid if it failed. So although the creatures' brains are made from simulated neurons and neuromodulators that really do learn and show other lifelike characteristics, the input to those brains is highly simplified. When a creature "sees" a carrot it doesn't receive a million different coloured blobs of light, which it then has to interpret as an object of a particular category; it just receives a signal on its "food" neuron. Bypassing perception like that avoids 99.9% of the problems, but with a robot it isn't possible, and so I'm forced to face up to the real challenges. It's a bit like pushing a box of chocolates out of reach to remove yourself from temptation! I wish more researchers would do this.
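In caricature, the two kinds of input look like this (a sketch only; the single "food" line and the 640x480 camera are invented for illustration, not taken from Creatures' or Lucy's actual code):

```python
import random

def creature_input(carrot_in_view):
    # Creatures-style: perception arrives pre-solved, as one labelled line
    return {"food": 1.0 if carrot_in_view else 0.0}

def robot_input(width=640, height=480):
    # Robot-style: a grid of brightness values; the category is not included
    return [[random.random() for _ in range(width)] for _ in range(height)]

print(creature_input(True))        # {'food': 1.0}
pixels = robot_input()
print(len(pixels) * len(pixels[0]), "raw values to interpret")  # 307200
```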

 

But the most compelling reason was purely pragmatic. If you want to understand the whole process of thought, from perception to action, you need a very high-fidelity simulation. Natural brains receive messy, complex inputs from many millions of sensory cells, and every action they take involves hundreds of muscles, interacting with gravity, inertia, friction, wind and so on. Not only that, but brains positively thrive on this complexity. Therefore, any simplifications you make in a simulation are seriously likely to mislead your thinking. At some point building a physical robot, even with all the challenges it entails, becomes an easier problem than creating a sufficiently realistic and complex simulation. With a physical robot, all the light rays, sound waves, gravity and friction come for free. Simulating such things with sufficient fidelity in realtime is not currently feasible.

 

Oh, and building robots is a lot of fun!

 

NORM: What progress do you feel you have made? What successes? Where have you come up short?

 

STEVE: Ask me again in twenty years! On the outside I've not achieved very much at all—a robot that can (on a good day) recognize bananas. Big deal. It only takes a few lines of code to look for something yellow in an image, so simulating thousands of complex neural columns to recognise a banana by shape seems like overkill. What's more, the methods I've used almost certainly don't match anything that's actually happening in the mammalian brain. Nonetheless I do believe I'm starting to get a feel for some of the principles that might be at work. I've been exploring ways in which large-scale patterns of nerve activity can be used to compute useful things, especially coordinate transforms. Moving one's eyes toward a stimulus requires a coordinate transform from eye-centred coordinates into head-centred ones. Moving an arm to point at a banana requires a similar transform from body-centred space into shoulder vectors. Most interestingly, a series of coordinate transforms might in principle be able to project the image of a banana, as Lucy sees it, into a form that looks the same from all angles, sizes and positions, and this is an important concept. A banana looks like a banana to us, no matter which way up it is or how far away it is, and yet the pattern it forms on our retinas is radically different from moment to moment. This is the central mystery of perception (and indeed action) and I feel like I'm just about starting to get a handle on the problem now. And I think all this is highly relevant to understanding the virtual world generator that gives rise to consciousness.
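The transforms themselves are ordinary arithmetic; the hard part, which this sketch deliberately ignores, is computing them with large-scale patterns of nerve activity rather than with an adder. In one dimension, with invented angles:

```python
def eye_to_head(retinal_offset, eye_angle):
    """Stimulus direction relative to the head."""
    return retinal_offset + eye_angle

def head_to_body(head_angle, neck_angle):
    """Chain a second frame: direction relative to the body."""
    return head_angle + neck_angle

# A banana 5 degrees left of the fovea, eyes turned 10 degrees right in
# the head, head turned 20 degrees right of the body:
h = eye_to_head(-5.0, 10.0)   # 5.0: just right of the head's midline
b = head_to_body(h, 20.0)     # 25.0: well right of the body's midline
print(h, b)
```

Chaining enough of these transforms, in the right order, is what would let the same banana produce the same internal representation regardless of where it falls on the retina.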

 

NORM: Certain animals seem pretty intelligent: elephants, dolphins, chimpanzees. Are they conscious?

 

STEVE: I doubt consciousness is an all-or-nothing phenomenon, but the kind of awareness that I've been alluding to in this interview definitely requires certain features to be present in the brain. The question, therefore, is: "do those animals have these structures?" I'm pretty sure the answer is yes. Certainly all three groups show evidence that they can imagine things in their heads. Interestingly, if you'd mentioned certain other mammals, such as duck-billed platypuses, I'd venture to speculate that the answer is no. Imagination requires a fusion of bottom-up (sensory) and top-down (volitional or attentional) pathways in the brain, and I think it may be true that this form of organisation didn't substantially evolve until after the mammalian line began. So a rat might be conscious while a bat is not. I'm not at all sure of my facts here, but it's an interesting line of reasoning that I'd like to explore further.

 

NORM: John Searle is among those who believe that even if you could reproduce intelligent behavior with a computer, there would be no reason to assume that real consciousness exists. He feels that the physiological processes of the human brain are essential to consciousness, and anything built using a different substrate could never be truly conscious. What do you say to this?

 

STEVE: I say balderdash and poppycock! Actually, that's not true. I think Searle had a very good point at the time he first argued this. Classical AI does indeed seek to simulate intelligent behaviour in a way that more resembles a portrait than a person. Searle's argument was against computational functionalism, and up to a point I agree with him. Explicitly trying to reproduce the outward behaviour of an intelligent system without regard to its internal structure is, I believe, futile; never mind its relevance to consciousness. That's why I focus on biologically inspired AI, because I think the substrate is very important. But the physicality of the substrate is neither here nor there. I think any attempt to locate consciousness in quantum behaviour, for instance, is just a tacit form of dualism. If physical neurons can collectively be conscious, then I think simulated ones can too. Searle's Chinese Room argument doesn't seem to me to apply. After all, real neurons don't individually understand Chinese either, but collectively they can.

NORM: Why is it important that we understand how our brains function?

 

STEVE: In AI terms I think it's important because the brain is the only general-purpose intelligent machine we know of. It could be that there are many different ways to make intelligence, but given that there are an almost infinite number of ways to make a machine that's stupid, it's like looking for a needle in a haystack. It seems so obvious to me that we should start our search in a place where we already know that something suitable exists.

 

In broader terms it's hard to mend something when it breaks unless you understand how it works, and broken brains are very distressing to their owners and those around them. It's also hard to pin down many moral issues (such as when life starts or stops, or which creatures deserve rights) unless you understand what minds are and where they come from. Anyway, how dare we call ourselves Homo sapiens when we don't even understand how our wisdom arises? It's true that most people use video recorders and computers and electricity without much of a clue how any of it works. That's a travesty. It's disrespectful. But at least somebody somewhere knows these things. When it comes to the brain—the most important machine in our lives—we don't even understand its basic principles of operation yet. It's embarrassing.

 

NORM: Several well-known futurists are predicting that machines will achieve human-level intelligence early in this century. Is AI the last thing humanity need ever invent, as they assert?

 

STEVE: I think it's possible that we'll have human-level intelligence some time soon—very unlikely, but possible. But the futurists' predictions are made on the basis of present trends, and present trends are pretty much meaningless in this instance. I liken it to trying to reach the moon by jumping. Last year you could jump so high; this year you've learned to jump twice as high. If present trends continue, you'll reach the moon in no time. But present trends won't continue—no matter how hard you try, you can't reach the moon by jumping. It requires a completely different method of propulsion. AI is like that: fifty years ago we had only just started; today we can make computers do several things that humans use intelligence to do. If present trends were to continue, we'd get to human-level intelligence in a few decades. But present trends will NOT continue. The methodology and paradigm we have now simply won't work. We've been cheating by doing things the easy way up until now, like someone jumping for the moon. Quoting Moore's Law is meaningless, incidentally: computer power is not the limiting factor. What we need here is a breakthrough—a completely new way of looking at the problem—because none of the old ways work. Breakthroughs can't be predicted. You can't say you're 75% of the way towards a breakthrough.

 

If and when human-level AI is created, I see no reason to suppose it will romp ahead of us and get rapidly smarter. I think that's based on a false understanding of intelligence, or at least learning. And the dire warnings about being usurped and enslaved by machines are nothing but sensationalist nonsense. Intelligence is nothing to be feared; the smarter we humans become, the more accepting and sensitive we are towards other people, races and species. Conquering and enslaving is what stupid people do.

 

NORM: Gloves off, Steve: What's wrong with A-life research today? What can (or should) be done to make faster, more effective progress in the field?

 

STEVE: Gloves off? Ah, yes, I was being much too reserved, I can tell... ;-)

 

Artificial Life, as a science, is pretty much moribund. Several generations of received wisdom and grant-chasing bandwagons have made it too fragmented and stultified. Maybe biology will mop up the remains, now that chemical synthesis can do what we used to have to simulate with computers. Despite a few valiant attempts, I don't think the field sufficiently embraced the concept that I have always used as a mantra: there is no such thing as half an organism. Life is a property of organisation—a systems-level concept—so trying to reduce the problem into its component parts without reassembling them into complete systems misses the point. The whole is always greater than the sum of its parts. I think we need less reductionist science and more practical engineering attempts to create complete artificial organisms, both virtual and physical.

 

As for Artificial Intelligence, it would help enormously if we admitted to ourselves that we don't have a clue how to do it. For a start it would be useful if we made a stronger distinction between "hard" and "soft" AI. Soft AI seeks to automate tasks that humans use intelligence to do, which is laudable but doesn't actually require the machines to be intelligent (for instance I need intelligence to do arithmetic; a pocket calculator can do arithmetic too, but you wouldn't call it intelligent). I think this is very misleading if it's confused, as it so often is in both the public and academic mind, with the attempt to create genuinely intelligent artifacts. At the moment I don't believe we know how to do that beyond a trivial level.

 

The answer lies not in computer science but in neuroscience, since the brain is the only example of a fully-working intelligent machine that we have. But we don't know how that works either. I predict that the solutions to the problems of AI will come from computational neuroscience, but we need some changes to the prevailing paradigm before that is likely to happen. People who study the brain need to stop burying their heads in the sand about observations that ought to invalidate their models. I don't have space to give examples, but it's easy to make everyday observations about the brain that completely fly in the face of most existing theories. Again, I think there's an urgent need to take an holistic approach. Too many people work on memory, associative learning, action selection, visual perception or some other subcomponent for years, without realizing that their part of the story makes no sense in relation to the whole. Neuroscience in general tends to get bogged down in the details, and I think that computational neuroscience ought to be to neuroscience in general what Artificial Life was to biology: an attempt to abstract the principles from the detail, without losing sight of any awkward truths.

 

What a strange coincidence—this is exactly what I'm trying to do myself! :-)