From ENIAC to brainiac: neuromorphic networks
Complaints of computers
John von Neumann isn’t exactly a household name, but his influence is everywhere you look, and probably in your pocket, too. In 1945, while working with the team behind ENIAC, the first general-purpose electronic computer, von Neumann described a simple computer architecture — a way to organize the necessary components to create a computing system. This architecture was then used to build ENIAC's stored-program successor, the EDVAC.
The von Neumann architecture comprises two key components: a central processing unit (CPU) that handles the computation, and memory, which stores data and instructions. These are connected by a bus, which is the communication channel that carries data back and forth between the two components.
Though the field of computing has advanced significantly since the 1940s, almost all modern computing systems – from laptops and smartphones to the humble calculator – are still built according to von Neumann’s design principle.
But this architecture has one critical limitation. As the CPU and memory are separate components, computing performance is restricted by the speed at which the bus can transport data between the two. This is known as the von Neumann bottleneck.
The bottleneck hasn't yet manifested as an insurmountable roadblock to progress. Indeed, the average smartphone today is considerably more powerful and versatile than turn-of-the-century supercomputers. This is partly due to solutions such as caches (auxiliary memory units that store data for quicker retrieval) and increased bus bandwidth to speed up communication between components. We've also developed faster processors and memory to ensure we use bandwidth as efficiently as possible. But these solutions, in turn, create additional energy demands to both power and cool these modern systems.
Although it’s unlikely that von Neumann architecture will ever reach obsolescence, its limitations are becoming increasingly apparent with the growth of more computational- and data-intensive disciplines like artificial intelligence (AI). So, to ensure the only barrier to progress is our imagination, and not the von Neumann bottleneck, we need to fundamentally rethink our approach to designing computer systems.
Quantum computers are built using entirely different, non-von Neumann architectures, and make use of features inherent to quantum mechanics to accelerate very specific components in calculations. They promise raw computing power for easily solving mathematical calculations that are impossible for the most powerful supercomputers of today. Whether quantum computing will ever be able to do this in an energy-efficient way, however, is a question we're far from answering.
Did you know?
20 watts – the average power consumption of the human brain
10 – IBM Summit supercomputers to equal brain capacity
86 billion – neurons in the average human adult brain
A billion neurons for one computer
Neuromorphic computing is one of the most promising non-von Neumann avenues. As the name suggests, neuromorphic architectures aim to mimic the complex web of neurons and synapses that make up the brain’s structure — and how it processes information.
Neuromorphic architectures are not to be confused with artificial neural networks. The latter are collections of algorithms designed to loosely simulate, not emulate, the processing pathways of the brain. While they are powerful tools, they are effectively complex software environments running on von Neumann computers, and face the same limitations. Neuromorphic networks, however, are designed to replicate the structure of the brain in physical hardware.
“Take a biological neuron, which is a very complicated object. In the neuromorphic paradigm, you build a piece of hardware that works like a model neuron should work. In an artificial neural network, you're essentially simulating how this model neuron would behave. But of course, simulating is always slower than just letting the model neuron behave,” explains Mathias Winkler, Senior AI Research Scientist.
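As a concrete illustration of what "simulating a model neuron" means in software, here is the standard artificial-neuron abstraction – a weighted sum of inputs passed through a sigmoid nonlinearity. The weights and inputs below are arbitrary examples, not values from any real network:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The software 'model neuron' of an artificial neural network:
    a weighted sum of inputs squashed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Every 'neuron' in a deep network is just this arithmetic, executed
# over and over on conventional von Neumann hardware:
output = artificial_neuron([0.5, 0.3], weights=[0.8, -0.2], bias=0.1)
print(f"activation: {output:.3f}")
```

A neuromorphic chip replaces this simulated arithmetic with physical circuits that simply behave like the neuron.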
There are several benefits to modeling a computing system on the human brain. For one, even the most powerful supercomputers of today are leagues behind the raw computational power of the human brain, not to mention our gray matter's plasticity — its ability to change structurally and functionally to learn and adapt to different challenges.
To equal brain capacity, you'd need roughly 10 of the second-fastest supercomputers in the world. These would occupy 20 tennis courts, weigh 3,400 metric tons, and consume 150 megawatts of power – the same as roughly 100,000 average households. Our brains, on average, consume just 20 watts – less than your standard lightbulb – weigh less than three pounds, and run at a cool 37ºC.
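Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The household draw of roughly 1.5 kW is an assumption, chosen to be consistent with the 100,000-household comparison above:

```python
# Back-of-the-envelope check of the figures quoted above.
brain_power_w = 20          # human brain: ~20 watts
cluster_power_w = 150e6     # 10 Summit-class supercomputers: 150 megawatts
household_w = 1500          # assumed average household draw (~1.5 kW)

power_ratio = cluster_power_w / brain_power_w
households = cluster_power_w / household_w

print(f"the cluster draws {power_ratio:,.0f}x the power of a brain")
print(f"...enough to supply {households:,.0f} average households")
```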
3:0 for the brain
There are three main properties that make the brain both powerful and efficient. First is its ability to execute parallel processing — performing countless different tasks independently at the same time. Within the roughly 86 billion neurons of the adult brain, there is no distinction between processor and memory. Each of these neurons can have thousands of synapses connecting it to other neurons, meaning thousands of different buses per neuron sending and receiving data in all directions.
By comparison, most existing computer systems have just one bus, and so can only perform tasks one after another. This is called synchronous computing. The brain, by contrast, uses asynchronous, parallel computing. This doesn't just mean performing more tasks at once, though.
The speed at which a computer processor can perform tasks is governed by an internal clock. This clock ticks at a fixed rate – one billion times per second for a 1GHz processor, for instance – and the interval between ticks is the shortest gap between actioning different tasks.
This leads us on to the second property of the brain, which is energy efficiency. Whereas your standard computer processor is always on, with the clock constantly ticking, an asynchronous system only draws power when it is actively engaged. “The advantage of being non-clock is that the system is just operating whenever there is an actual need, when there is an incoming data flow,” says Stephan Dertinger, Senior Director and Head of Innovation Projects. “This is a natural way to be highly energy efficient, as the human brain is.”
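The contrast Dertinger describes can be caricatured in a few lines: a clocked system burns a cycle on every tick whether or not there is anything to do, while an event-driven one only works when data actually arrives. The event rate here is an arbitrary illustration:

```python
import random

random.seed(0)

TICKS = 1_000_000     # simulated clock ticks
EVENT_PROB = 0.001    # chance of incoming data on any tick (illustrative)

events = [random.random() < EVENT_PROB for _ in range(TICKS)]

clocked_work = TICKS      # synchronous: the clock ticks, so the chip works
event_work = sum(events)  # asynchronous: work happens only when data arrives

print("clocked cycles:     ", clocked_work)
print("event-driven cycles:", event_work)
```

With sparse input, the event-driven tally is a tiny fraction of the clocked one – the same sparsity the brain exploits.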
The third main efficiency of the brain is its innate, biological method of data compression. Transistors on a chip are on or off, 1 or 0. Computers communicate in this simple, binary, digital language. Neurons, however, communicate with impulses, or spikes. Each individual spike is all-or-nothing, but information is encoded in the timing and frequency of these impulses, allowing far richer signals than just 1s and 0s. Communication between neurons across their thousands of connections is therefore not just quick and efficient, but information rich.
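This timing-based encoding can be illustrated with a leaky integrate-and-fire model, the simplest spiking-neuron abstraction. The parameters below are illustrative, not biologically calibrated:

```python
def lif_spike_times(input_current, steps=100, dt=1.0,
                    leak=0.05, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential
    integrates its input, leaks over time, and fires a spike
    (then resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (input_current - leak * v)  # integrate with leak
        if v >= threshold:
            spikes.append(t)  # the spike's timing carries the information
            v = 0.0           # reset after firing
    return spikes

# A stronger input fires earlier and more often – the spike train's
# timing and rate encode the input's magnitude:
weak, strong = lif_spike_times(0.08), lif_spike_times(0.20)
print(len(weak), len(strong))
```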
The conventional architecture
Data is exchanged between the central processing unit (CPU) and memory via a so-called "bus," one item after another. The consequence: the bottleneck effect.
The neuromorphic architecture
Each individual unit acts as both CPU and memory, so the computer can independently perform countless different tasks simultaneously. This makes neuromorphic computing faster and more efficient.
1,000 times more efficient
Reproducing these communication spikes is already being explored using artificial neural networks, but we have only just begun to scratch the surface of what could be possible with neuromorphic hardware. “Currently, people are trying to build on the existing chip-making architecture as much as they can,” notes Ralph Dammel, Technology Fellow.
Companies including IBM, Intel, BrainChip and SynSense (a firm we have invested in), as well as academic organizations, are pioneering experimental platforms that merge thousands of processing cores to simulate neurons and stitch them together with artificial synapses. IBM's TrueNorth chip, as an example, boasts more than one million neurons connected by over 250 million synapses.
We are bringing our expertise in materials and architecture to collaborations with several leading academic and industry partners, such as a project with the Transylvanian Institute of Neuroscience to better understand computation in the brain; and MemryX, a startup working on in-memory computing chips that aren't limited by buses. Through our subsidiary Intermolecular, we are developing advanced types of digital and analog memory that will help realize future neuromorphic designs.
In the near term, neuromorphic architectures promise substantial energy savings compared to von Neumann designs, which will be important for the sustainability of new technologies. Intel has already reported that its Loihi neuromorphic chip is 1,000 times more energy-efficient than traditional hardware in training neural networks.
“When you look at autonomous vehicles and how much energy these systems consume just to recognize cars, we are in the range of hundreds of watts, which is a lot of energy. Autonomous cars are essentially mobile computer centers, because there's so much computing that needs to be done. With a highly efficient brain architecture, the energy consumption goes down, and performance is probably the same or even better. And this solves issues of sustainability and cost,” says Dertinger.
A brainstorm about the future
Looking to the future, neuromorphic architectures will do much more than simply reduce power consumption. They have the potential to become the foundation of AI applications that aren't currently possible, and those we haven't even thought of yet.
The neural networks of today allow computers to learn to complete individual tasks: identifying objects in pictures or playing Go, for example. But in mimicking the way the human brain behaves, neuromorphic computing will enable neural networks that are capable, to some extent, of neuroplasticity — adapting what they have learned to solve new problems.
“The hope behind neuromorphic systems is that they have really high plasticity, so are super flexible to adapt to situations. Imagine an autonomous vehicle driving through a tunnel and then out into the sunshine. Then you drive somewhere and it's raining, and instead of people, you have rabbits crossing the street. This is a normal situation in life, and it is what neuromorphic systems are very good at understanding compared to classical machine learning,” says Thomas Ehmer, Innovation Incubator Lead. “The hope is that we could build systems that are trained on one aspect and then self-adjust to new situations.”
Smarter, more flexible AI that requires less training and human intervention will make it more feasible to develop cost-effective, specialized solutions for various problems. “Modern AI techniques like deep learning require high implementation efforts and that limits their economical use to high-value scenarios, for example in mass markets like autonomous vehicles. If we can create algorithms which really are smarter, which need less training data, and are more robust to changes in input, then we could realize a lot of niche use cases,” says Winkler.
A game changer for healthcare
Neuromorphic computing and advanced AI will undoubtedly enrich our lives, but our company is also pioneering these technologies because they promise to have broad applications in healthcare. Researchers are already using neuromorphic architectures to build more human-like prosthetics, and some day, these same prosthetics may even be able to process neurological inputs and function like true artificial limbs.
Just as neuroscience is helping us to better understand the brain, so will it inform the development of neuromorphic architectures. And neuromorphic computing will, in turn, accelerate progress in neuroscience. Neuromorphic computing is at the heart of the European Union's Human Brain Project, for example, a multidisciplinary initiative that “is building a research infrastructure to help advance neuroscience, medicine, computing and brain-inspired technologies.”
The ability to model biological systems with neuromorphic architectures, as well as emulate disease progression and neurotoxicity, could yield countless breakthroughs. As Ehmer explains: “If we can use this new computing paradigm to better understand how the brain works biochemically, we could mimic how the blood-brain barrier works. That's one of the fundamental questions in making your medicines safe: does it pass the blood-brain barrier or not? And this would feed back into the generation of novel medicines.”
“If we could one day emulate Alzheimer's disease, we could then say ‘what's our paradigm for treatment?’ Then by giving a digital treatment to the digital brain, we can test if it would help to stop the spread of the emulated disease, confirming whether our paradigm might work in the real brain, too.”
In the far future, we may also be able to use neuromorphic models to better understand higher brain functions, like cognition, self-recognition, memory, and learning. But building systems of this complexity is a tall order. “We are still light years away from what the human brain can do,” warns Dertinger.
“There are still many steps in between,” adds Winkler. “As for the next level, recognizing a cat in a photo without having to train a system with a million cat images would already be a breakthrough. But we are still very far away from strong AI, which would essentially be able to do everything the human brain can do. Well, except for the things you need a body for.”
Neuromorphic networks are in their infancy, but the possibilities they hold for empowering huge advances in AI, healthcare, and new technologies seem endless.
“The creativity of humankind will come up with some really interesting applications that we cannot even foresee today because they are just not technically feasible,” muses Dertinger. “Let's be surprised by what people make out of it.”
Sources:
www.top500.org/lists/top500/2021/06/ (June 2021)
Horst Simon, 17th SIAM Conference, Paris 2016