Daniel Araya: Chris, you have a very interdisciplinary academic background, spanning engineering, philosophy, psychology, and neuroscience. Has this been helpful in designing Spaun? How did you start in this field, and what motivates you now?
Chris Eliasmith: My main motivation is curiosity, and figuring out new and interesting ways of accomplishing sophisticated behaviors. That’s also how I started out… when I was doing my engineering undergrad, I was very interested in how the mind worked, and so I took a lot of extra philosophy and psychology courses. Then I just pursued my interest in understanding how people worked, regardless of the discipline.
Thinkers like Hans Moravec and Ray Kurzweil have argued that reverse engineering the human brain is inevitable. Do you believe that AI will eventually simulate all aspects of human intelligence? How do you define human intelligence?
I think reverse engineering sophisticated behavior is inevitable: the brain is far more complicated than a machine needs to be to produce that behavior. In my view you have to be very specific about what you mean by 'reverse engineering the brain.' Doing that to the point where you can copy and reproduce an individual brain is enormously more difficult (and not inevitable) than discovering basic computational principles for coordinating and guiding complex behavior (which I'd suggest is inevitable). So the somewhat facetious answer to your first question is no. But I do think we'll be able to simulate pretty much all the aspects we care about, i.e. those aspects that generate sufficiently sophisticated behavior that we'll treat these machines as full-fledged agents.
On defining intelligence: it's the ability to generate adaptive behaviors in line with long- and short-term goals in a complex, uncertain environment. Of course, you need to say much more about which behaviors 'count' as intelligent, or even a simple control system like a thermostat can meet such a 'definition.' But I think we generally use behavioral outcomes (i.e., we look at what people can actually do in various circumstances) to determine what kinds of environments and behaviors are of interest.
You’ve developed a series of technologies based on the Neural Engineering Framework (NEF) and your Semantic Pointer Architecture (SPA). How does your body of work relate to deep learning? What commercial applications of your work are you focused on now? What applications are there in the future?
There are some interesting technical relationships between the NEF and deep learning, but I think the big-picture view is this: deep learning is a way of generating hypotheses about how certain problems might be solved by the brain. Those problems are quite limited at the moment (e.g. to things like image/phoneme categorization). The SPA and NEF provide methods for incorporating these hypotheses into large-scale, integrated models of complex behavior.
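To make that concrete: the binding operation at the core of the SPA, as described in Eliasmith's published work, is circular convolution, which combines two high-dimensional vectors ("semantic pointers") into a third that encodes their pairing. The sketch below shows that operation in plain NumPy, outside any neural model; the vocabulary, dimensionality, and helper names are illustrative, not taken from the lab's code.

```python
import numpy as np

def bind(a, b):
    """Bind two semantic pointers via circular convolution (FFT-based)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(a, b):
    """Approximately recover a's partner from a bound pair, using a's involution."""
    return bind(np.roll(a[::-1], 1), b)

d = 512                               # semantic pointers are high-dimensional vectors
rng = np.random.default_rng(0)
dog = rng.standard_normal(d) / np.sqrt(d)
subject = rng.standard_normal(d) / np.sqrt(d)

pair = bind(subject, dog)             # encode the role/filler pair SUBJECT * DOG
recovered = unbind(subject, pair)     # noisy reconstruction of DOG

# The recovered vector is most similar to the original filler.
print(np.dot(recovered, dog))         # close to 1 for large d
```

In Spaun-style models, this same operation is computed by populations of spiking neurons rather than by an FFT, which is what lets structured, symbol-like representations live inside a neural architecture.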
In our Spaun model, we have a deep network as the visual system, but the visual system is only one small part (less than 10%) of the model overall. It’s that overall model that really allows us to address cognitive behavior, sophisticated decision making, and so on. As a result, the applications we’re thinking about now are largely integrative: e.g. integrating state-of-the-art visual and motor systems for rapid, adaptive robotic control; and integrating motor and perceptual systems with decision making to have full (if currently simple) agents (think much more interesting toy robots).
Our company, Applied Brain Research (ABR), is commercializing the technologies developed in my lab. Nengo is our neuromorphic compiler, which allows developers to build complex applications with spiking neurons. We used Nengo to build Spaun (and pretty much every model we've done). In general, Nengo is well suited to developing applications for the new neuromorphic chips that, in the next few years, will be moving out of labs and into early commercial applications. I believe this will bring advanced AI algorithms, including but going significantly beyond deep learning, to devices such as smartphones and robots.
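For a sense of what development with Nengo looks like, here is a minimal sketch using the open-source Nengo Python package: two populations of spiking neurons represent a time-varying signal, and the framework solves for connection weights that compute a chosen function between them. The specific network, input signal, and parameter values are illustrative, not drawn from any ABR application.

```python
import numpy as np
import nengo

model = nengo.Network(label="minimal NEF example")
with model:
    # A node supplies an input signal to the network.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    # Ensembles of spiking (LIF) neurons represent the 1-D signal.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(stim, a)
    # The NEF solves for connection weights so that b represents a
    # function of a's represented value (here, its square).
    nengo.Connection(a, b, function=lambda x: x ** 2)

    probe = nengo.Probe(b, synapse=0.01)  # filtered, decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# sim.trange() and sim.data[probe] give the decoded estimate of sin(2*pi*t)**2.
```

Because the model is described at this level, the same script can in principle be targeted at different backends, from ordinary CPUs to neuromorphic hardware, which is the sense in which Nengo acts as a compiler.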
The applications we're focused on now both advance the state of the art and build large, integrated systems. In the first case, we have new adaptive motor control algorithms that significantly outperform deep learning and other approaches. In the second, we are integrating state-of-the-art visual and motor systems for rapid, adaptive robotic control.
In the future, we intend to integrate more language abilities (beyond recognition, to reasoning), more sensory modalities, and more degrees of freedom in control. One way to think about our target is that we’d like to build something like Commander Data from Star Trek.
There’s a lot of competition to be the first research team to engineer the fundamental algorithms used in the brain and eventually develop general purpose AI. Do you have any concerns about where this race will lead? What things along the way to general AI might be of concern?
Absolutely I have concerns, and pretty much everyone in the 'race' does. I often think of AI as being like nuclear research: there are good and bad applications. We have to identify, support, and focus on the good ones, and avoid, outlaw, and banish the bad. To me, the somewhat odd idea that AI will do whatever it wants gets more airplay than it ought to; it would take an awful lot of willful negligence to let that happen. Nevertheless, that could be a dangerous circumstance.
The things along the way that might be an issue are already cropping up: giving robots the ability to determine who to fire weapons at, for instance. Essentially, as we give machines the ability to make more autonomous decisions, we'll have to be careful about likely outcomes, and those will depend on the system under discussion (e.g. is the robot controlling a small arm, or a 2,000 lb vehicle?). But notice that letting a robot control a 2,000 lb vehicle might be a better option overall than letting people do it, if we're trying to minimize fatal accidents.
Aside from AI applications of your work, what is your lab's work telling us about the brain, and to what use will that knowledge be put? Does your research tell us anything that can help with the treatment and/or prevention of diseases of the brain?
In many ways, the AI applications are a more recent addition to our interests than the health applications. We have built models to help us understand the effects of certain kinds of strokes, how various interventions help, and what the limitations of those interventions are. We are able to predict observed changes in neural activity down to the impulses individual neurons generate, and how those can change with the application of various drugs.
In some ways, it is this experience with the biological system that makes it clear to me that biology is more complicated than a machine will have to be to get sophisticated behavior. Regardless, I believe models like Spaun are helping us move towards brain simulations that can be used the way automotive simulations are now: we can test ideas for designing improvements before trying them out in the real world. Notably, this isn't a replacement for biological studies, but it's likely to make them more informative and efficient (both in terms of dollars spent and animals or people harmed).
What is your view on the capacities of AI to recursively improve itself at some point? Stephen Hawking, Elon Musk, Bill Gates and others have warned against AI getting out of control. How do you think AI will impact our society?
I must admit, I find your last question more interesting than whether AI will 'get out of control,' because I think the latter will only occur if we let it (despite all the movies to the contrary). The far more important and immediate impact of AI will be on society. AI is already having a big impact on our economies: services like Uber wouldn't be possible without advanced optimization algorithms. I think these impacts will grow as we build more sophisticated machines able to do 'mundane' jobs. My suspicion is that more countries will have to follow Finland's lead in exploring basic income guarantees for people. In short, my biggest worry about AI is its capacity to amplify the already growing gulf between rich and poor.
Chris Eliasmith is currently director of the Centre for Theoretical Neuroscience at the University of Waterloo and head of the Computational Neuroscience Research Group (CNRG). He is the co-inventor of the Neural Engineering Framework (NEF), the Neural Engineering Objects (Nengo) software environment, and the Semantic Pointer Architecture, all of which are dedicated to understanding how the brain works. His team developed the Semantic Pointer Architecture Unified Network (Spaun), the largest functional brain simulation to date.