From ENIAC to brainiac: neuromorphic networks

Emulating the structure and function of the human brain promises enormous advances in healthcare and AI, and could open the door to technologies we haven't yet imagined.

Constraints of computers

John von Neumann isn’t exactly a household name, but his influence is everywhere you look, and probably in your pocket, too. In 1945, von Neumann described a simple computer architecture — a way to organize the necessary components to create a computing system [1]. He drew up this design for EDVAC, the successor to ENIAC, the first general-purpose electronic computer, and it became the blueprint for the stored-program machines that followed.

The von Neumann architecture comprises two key components: a central processing unit (CPU) that handles the computation, and memory, which stores data and instructions. These are connected by a bus, which is the communication channel that carries data back and forth between the two components.

Though the field of computing has advanced significantly since the 1940s, almost all modern computing systems – from laptops and smartphones to the humble calculator – are still built according to von Neumann’s design principle.

But this architecture has one critical limitation. As the CPU and memory are separate components, computing performance is restricted by the speed at which the bus can transport data between the two. This is known as the von Neumann bottleneck.
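To see the bottleneck in miniature, consider the following Python sketch. It is a toy machine, not any real instruction set: because instructions and data live in the same memory, every instruction fetch and every operand must cross one shared bus, so bus traffic, not arithmetic, dominates even trivial work.

```python
# A toy von Neumann-style machine (illustrative only, not a real ISA).
# Instructions AND data share one memory, reached over one bus; every
# access, whatever its purpose, is a trip across that single channel.

memory = {
    0: ("LOAD", 100),   # load memory[100] into the accumulator
    1: ("ADD", 101),    # add memory[101] to the accumulator
    2: ("STORE", 102),  # write the accumulator back to memory[102]
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,
}

bus_trips = 0

def bus_read(addr):
    global bus_trips
    bus_trips += 1          # instruction or data, it crosses the bus
    return memory[addr]

def bus_write(addr, value):
    global bus_trips
    bus_trips += 1
    memory[addr] = value

acc, pc = 0, 0
while True:
    op, addr = bus_read(pc)     # fetching the instruction uses the bus...
    pc += 1
    if op == "LOAD":
        acc = bus_read(addr)    # ...and so does touching the data
    elif op == "ADD":
        acc += bus_read(addr)
    elif op == "STORE":
        bus_write(addr, acc)
    elif op == "HALT":
        break

print(f"result = {memory[102]}, bus trips = {bus_trips}")
# result = 5, bus trips = 7: a single addition costs seven bus crossings
```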

The bottleneck hasn't yet manifested as an insurmountable roadblock to progress. Indeed, the average smartphone today is considerably more powerful and versatile than turn-of-the-century supercomputers. This is partly due to solutions such as caches (auxiliary memory units that store data for quicker retrieval) and increased bus bandwidth to speed up communication between components. We've also developed faster processors and memory to ensure we use bandwidth as efficiently as possible. But these solutions, in turn, create additional energy demands to both power and cool these modern systems.

Although it’s unlikely that von Neumann architecture will ever reach obsolescence, its limitations are becoming increasingly apparent with the growth of more computational- and data-intensive disciplines like artificial intelligence (AI). So, to ensure the only barrier to progress is our imagination, and not the von Neumann bottleneck, we need to fundamentally rethink our approach to designing computer systems.

Quantum computers are built using entirely different, non-von Neumann architectures, and exploit phenomena inherent to quantum mechanics to accelerate very specific kinds of calculation. They promise the raw computing power to solve certain problems that are practically impossible for even the most powerful supercomputers of today [2]. Whether quantum computing will ever be able to do this in an energy-efficient way, however, is a question we're far from answering.

Did you know?

  • 20 W: the average power consumption of the human brain [4]
  • 10: the number of IBM Summit supercomputers needed to equal brain capacity [3]
  • 86 billion: the number of neurons in the average human adult brain [5]

A billion neurons for one computer

Neuromorphic computing is one of the most promising non-von Neumann avenues. As the name suggests, neuromorphic architectures aim to mimic the complex web of neurons and synapses that make up the brain’s structure — and how it processes information.

Neuromorphic architectures are not to be confused with artificial neural networks. The latter are collections of algorithms designed to loosely simulate, not emulate, the processing pathways of the brain. While they are powerful tools, they are effectively complex software environments running on von Neumann computers, and face the same limitations. Neuromorphic networks, however, are designed to replicate the structure of the brain in physical hardware.

“Take a biological neuron, which is a very complicated object. In the neuromorphic paradigm, you build a piece of hardware that works like a model neuron should work. In an artificial neural network, you're essentially simulating how this model neuron would behave. But of course, simulating is always slower than just letting the model neuron behave,” explains Mathias Winkler, Senior AI Research Scientist.

There are several benefits to modeling a computing system on the human brain. For one, even the most powerful supercomputers of today are leagues behind the raw computational power of the human brain, not to mention our gray matter's plasticity — its ability to change structurally and functionally to learn and adapt to different challenges.

To equal brain capacity, you'd need roughly 10 of the second-fastest supercomputers in the world [3]. These would occupy 20 tennis courts, weigh 3,400 metric tons, and consume 150 megawatts of power – about the same as 100,000 average households. Our brains, on average, consume just 20 watts – 7.5 million times less, and less power than a standard lightbulb – weigh less than three pounds, and run at a cool 37 °C [4].

3:0 for the brain

There are three main properties that make the brain both powerful and efficient. First is its ability to execute parallel processing — performing countless different tasks independently at the same time. Within the roughly 86 billion neurons of the adult brain [5], there is no distinction between processor and memory. Each of these neurons can have thousands of synapses connecting it to other neurons, meaning thousands of different buses per neuron sending and receiving data in all directions.

By comparison, most existing computer systems have just one bus, and so can only perform one task after another – an approach known as synchronous computing, because everything happens in step with a central clock. The brain, by contrast, computes asynchronously and in parallel. This doesn't just mean performing more tasks at once, though.

The speed at which a computer processor can perform tasks is governed by an internal clock. For a 1 GHz processor, that clock ticks one billion times per second, and the interval between ticks – a single nanosecond – is the shortest time between one operation and the next.

This leads us on to the second property of the brain, which is energy efficiency. Whereas your standard computer processor is always on, with the clock constantly ticking, an asynchronous system only draws power when it is actively engaged. “The advantage of being non-clock is that the system is just operating whenever there is an actual need, when there is an incoming data flow,” says Stephan Dertinger, Senior Director and Head of Innovation Projects. “This is a natural way to be highly energy efficient, as the human brain is.”
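As a rough illustration of that difference, here is a minimal Python sketch. The tick count and the sparse event pattern are assumptions chosen for illustration, not measurements; it contrasts a clocked loop, which wakes on every tick, with an event-driven one, which wakes only when input arrives.

```python
# A minimal sketch of why event-driven beats clock-driven on sparse input.
import random

random.seed(0)
TICKS = 1_000_000
events = set(random.sample(range(TICKS), 100))  # 100 inputs in a million ticks

def clock_driven(events, ticks):
    """Wake on every clock tick; do useful work only if input is pending."""
    wakeups = useful = 0
    for t in range(ticks):
        wakeups += 1          # the clock fires whether or not there is work
        if t in events:
            useful += 1
    return wakeups, useful

def event_driven(events):
    """No clock: the system is activated only by an incoming event."""
    wakeups = useful = 0
    for _ in sorted(events):
        wakeups += 1          # one activation per event, zero idle cycles
        useful += 1
    return wakeups, useful

cw, cu = clock_driven(events, TICKS)
ew, eu = event_driven(events)
print(f"clock-driven: {cw:,} wakeups for {cu} useful actions")
print(f"event-driven: {ew:,} wakeups for {eu} useful actions")
# If every wakeup costs energy, the event-driven system does the same
# useful work with 10,000x fewer activations on this assumed input.
```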

The third main efficiency of the brain is its innate, biological method of data compression. Transistors on a chip are on or off, 1 or 0. Computers communicate in this simple, binary, digital language. Neurons, however, communicate with analog impulses, or spikes. Information is encoded in the timing and frequency of these impulses, creating the opportunity for not just 1s and 0s, but intermediate values. Communication between neurons across their thousands of connections is therefore not just quick and efficient, but information rich.
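To illustrate, below is a minimal leaky integrate-and-fire neuron in Python (a standard textbook model, not any particular chip's circuitry; the parameters are illustrative, not biological fits). The strength of a constant input is read out of when the neuron first fires and how often it fires, rather than from a single 1 or 0.

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward zero while integrating its input; crossing a threshold emits a
# spike and resets the potential.

def lif_spike_times(input_current, steps=200, dt=1.0, tau=20.0, threshold=1.0):
    """Simulate one neuron; return the times at which it spikes."""
    v, spikes = 0.0, []
    for step in range(steps):
        v += dt * (-v / tau + input_current)   # leak + integrate
        if v >= threshold:                     # threshold crossed:
            spikes.append(step * dt)           # ...emit a spike...
            v = 0.0                            # ...and reset
    return spikes

for current in (0.04, 0.12, 0.25):
    spikes = lif_spike_times(current)
    if spikes:
        print(f"input {current:.2f}: {len(spikes)} spikes, first at t={spikes[0]:.0f}")
    else:
        print(f"input {current:.2f}: no spikes (too weak to reach threshold)")
```

A too-weak input stays silent, a moderate one fires occasionally, and a strong one fires early and often – so a single connection carries far more than one bit per event.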

1,000 times more efficient

Reproducing these communication spikes is already being explored using artificial neural networks, but we have only just begun to scratch the surface of what could be possible with neuromorphic hardware. “Currently, people are trying to build on the existing chip-making architecture as much as they can,” notes Ralph Dammel, Technology Fellow.

Companies including IBM, Intel, BrainChip and SynSense (a firm we have invested in [6]), as well as academic organizations, are pioneering experimental platforms that merge thousands of processing cores to simulate neurons and stitch them together with artificial synapses. IBM's TrueNorth chip, as an example, boasts more than one million neurons connected by over 250 million synapses [7].

We are bringing our expertise in materials and architecture to collaborations with several leading academic and industry partners, such as a project with the Transylvanian Institute of Neuroscience to better understand computation in the brain [8]; and MemryX, a startup working on in-memory computing chips that aren't limited by buses [9]. Through our subsidiary Intermolecular, we are developing advanced types of digital and analog memory that will help realize future neuromorphic designs [10].

In the near term, neuromorphic architectures promise substantial energy savings compared to von Neumann designs, which will be important for the sustainability of new technologies. Intel has already reported that its Loihi neuromorphic chip is 1,000 times more energy-efficient than traditional hardware in training neural networks [11].

“When you look at autonomous vehicles and how much energy these systems consume just to recognize cars, we are in the range of hundreds of watts, which is a lot of energy. Autonomous cars are essentially mobile computer centers, because there's so much computing that needs to be done. With a highly efficient brain architecture, the energy consumption goes down, and performance is probably the same or even better. And this solves issues of sustainability and cost,” says Dertinger [12].

A brainstorm about the future

Looking to the future, neuromorphic architectures will do much more than simply reduce power consumption. They have the potential to become the foundation of AI applications that aren't currently possible, and those we haven't even thought of yet.

The neural networks of today allow computers to learn to complete individual tasks: identifying objects in pictures or playing Go, for example [13]. But in mimicking the way the human brain behaves, neuromorphic computing will enable neural networks that are capable, to some extent, of neuroplasticity — adapting what they have learned to solving new problems.

“The hope behind neuromorphic systems is that they have really high plasticity, so are super flexible to adapt to situations. Imagine an autonomous vehicle driving through a tunnel and then out into the sunshine. Then you drive somewhere and it's raining, and instead of people, you have rabbits crossing the street. This is a normal situation in life, and it is what neuromorphic systems are very good at understanding compared to classical machine learning,” says Thomas Ehmer, Innovation Incubator Lead. “The hope is that we could build systems that are trained on one aspect and then self-adjust to new situations.”

Smarter, more flexible AI that requires less training and human intervention will make it more feasible to develop cost-effective, specialized solutions for various problems. “Modern AI techniques like deep learning require high implementation efforts and that limits their economical use to high-value scenarios, for example in mass markets like autonomous vehicles. If we can create algorithms which really are smarter, which need less training data, and are more robust to changes in input, then we could realize a lot of niche use cases,” says Winkler.

A game changer for healthcare

Neuromorphic computing and advanced AI will undoubtedly enrich our lives, but our company is also pioneering these technologies because they promise to have broad applications in healthcare. Researchers are already using neuromorphic architectures to build more human-like prosthetics [14], and some day, these same prosthetics may even be able to process neurological inputs and function like true artificial limbs.

Just as neuroscience is helping us to better understand the brain, so will it inform the development of neuromorphic architectures. And neuromorphic computing will, in turn, accelerate progress in neuroscience. Neuromorphic computing is at the heart of the European Union's Human Brain Project, for example, a multidisciplinary initiative that “is building a research infrastructure to help advance neuroscience, medicine, computing and brain-inspired technologies [15].”

The ability to model biological systems with neuromorphic architectures, as well as emulate disease progression and neurotoxicity, could yield countless breakthroughs. As Ehmer explains: “If we can use this new computing paradigm to better understand how the brain works biochemically, we could mimic how the blood-brain barrier works. That's one of the fundamental questions in making your medicines safe: does it pass the blood-brain barrier or not? And this would feed back into the generation of novel medicines.”

“If we could one day emulate Alzheimer's disease, we could then say ‘what's our paradigm for treatment?’ Then by giving a digital treatment to the digital brain, we can test if it would help to stop the spread of the emulated disease, confirming whether our paradigm might work in the real brain, too.”

Thomas Ehmer

Innovation Incubator Lead

In the far future, we may also be able to use neuromorphic models to better understand higher brain functions, like cognition, self-recognition, memory, and learning. But building systems of this complexity is a tall order. “We are still light years away from what the human brain can do,” warns Dertinger.

“There are still many steps in between,” adds Winkler. “As for the next level, recognizing a cat in a photo without having to train a system with a million cat images would already be a breakthrough. But we are still very far away from strong AI, which would essentially be able to do everything the human brain can do. Well, except for the things you need a body for.”

Neuromorphic networks are in their infancy, but the possibilities they hold for empowering huge advances in AI, healthcare, and new technologies seem endless.

“The creativity of humankind will come up with some really interesting applications that we cannot even foresee today because they are just not technically feasible,” muses Dertinger. “Let's be surprised by what people make out of it.”

In 2012, the United Nations began work on 17 Sustainable Development Goals (SDGs) to address the urgent environmental, political, and economic challenges facing our world. Three years later, these goals were adopted by all member states. We are committed to ensuring that our work helps to achieve these ambitious targets. Our research and collaborations in the field of neuromorphic computing fall under ‘Goal 9 — Industries, innovation and infrastructure; Target 9.5 — Enhance scientific research.’ By helping to explore and support the development of neuromorphic technologies, we aim to drive the creation of new technologies, as well as advances in artificial intelligence and medical science, that will enrich lives and improve healthcare worldwide.
