What are neuromorphic computers?

Mar 13, 2023

To make computers faster and more efficient, scientists are using the brain as a model in this blossoming area of computer science.
[Feature image: artist’s rendition of a neuron]

With the development of computer chips based on silicon and other semiconductor materials, we have witnessed a technological revolution over the last several decades.

Over time, computers have shrunk from the size of whole rooms to single chips. This trend has been driven by Moore’s law: Gordon Moore’s 1965 observation that the number of components per integrated circuit doubles roughly every two years, yielding exponentially more powerful computers.

But with more sophisticated computers, robots, the internet of things (IoT), and intelligent machines, computational demands are only growing, and the semiconductor industry is reaching the limits of its ability to miniaturize computer chips; you can only realistically fit so many transistors on one chip!

Therefore, computer scientists are turning to a new type of computer architecture called neuromorphic computing, in which computers are built to process information and interact with the world like the human brain.

This area of research is rapidly gaining popularity and is widely regarded as a crucial step in building hardware for next-generation computers and artificial intelligence systems. In this Explainer, we delve into everything you need to know about this burgeoning field and what it means for the future of computer science.

How does the brain store and process information?

Before we jump into neuromorphic devices and their applications, it’s best to first introduce the biological phenomenon that inspired this field: synaptic plasticity. This is our brain’s incredible ability to adapt and change in response to new information. To better understand this, we must first describe the basic mechanisms of how our own “computing center” works.

Neurons are the brain’s messenger cells. They are interconnected through synapses, junction points that link them together in an expansive network through which electrical impulses and chemical signals are transmitted. Neurons interact with one another through “spikes”: short, millisecond-long voltage pulses.

While computer memory is expanded by simply adding more memory units, memories in the brain are created through new and strengthened connections between neurons. When two neurons become more strongly connected, we say that the synaptic weight of the connecting synapse has increased. Our brain has a staggering 10¹² or so neurons that communicate with one another through ≈10¹⁵ synapses. These connections and the degree of communication between them change over time according to the stimuli or spikes received, enabling the brain to respond to a changing environment and to create and store memories.

This capability is key to understanding the two main mechanisms behind synaptic plasticity, called potentiation and depression, in which the connections between neurons become stronger or weaker over time, playing an important role in learning and memory. This can happen over all time scales, from seconds to hours or longer.

Intuitively, higher-frequency spikes, which occur, for example, when learning a new skill, are associated with potentiation, or strengthening, of certain synapses and therefore with the establishment of long-term memory. On the flip side, lower-frequency stimuli cause depression, a weakening of the connection (or synaptic weight) at the corresponding synaptic junction, similar to forgetting something learned.

This is a bit of a simplification, and it should be noted that potentiation and depression depend not just on the frequency of the spikes, but on their timing too. For example, if a synapse receives spikes from several neurons at the same time, the synaptic weight increases much faster than when the spikes arrive one after the other.
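To make this concrete, here is a minimal Python sketch of a simple spike-timing-dependent plasticity (STDP) rule of the kind described above. The exponential form and all of the constants are illustrative assumptions for the sake of the example, not a model of any particular synapse or device.

```python
import math

# Illustrative spike-timing-dependent plasticity (STDP) rule:
# a presynaptic spike arriving just BEFORE the postsynaptic one
# potentiates the synapse; arriving just AFTER, it depresses it.
A_PLUS, A_MINUS = 0.05, 0.055   # learning rates (illustrative values)
TAU = 20.0                      # time constant in milliseconds

def stdp_update(weight: float, t_pre: float, t_post: float) -> float:
    """Return the new synaptic weight after one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post before pre -> depression
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, weight))  # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)  # close pairing -> strong change
w = stdp_update(w, t_pre=80.0, t_post=30.0)  # reversed, far apart -> tiny change
```

Closely paired spikes produce large weight changes, while widely separated ones barely move the weight, echoing how timing shapes learning at real synapses.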

The process is intricate and complex, and researchers have had to get creative in order to replicate it artificially.

How does a neuromorphic computer work?

Current computers are built using the von Neumann architecture, formalized by John von Neumann in 1945 and rooted in theoretical principles put forth by Alan Turing in the 1930s. This setup keeps the processing and memory units separate, creating a speed bottleneck as data is shuttled back and forth between them and needlessly increasing power consumption.

Neuromorphic computers, on the other hand, use chip architectures that mix both memory and processing in the same unit. On a hardware level, this involves innovative new designs and a range of materials, as well as new computer components, and the field is exploding.

Whether using organic or inorganic materials, researchers all over the world are trying to design and build networks of artificial neurons and synapses that mimic the brain’s plasticity. Many existing large-scale neuromorphic computers, such as IBM’s TrueNorth, Intel’s Loihi, and BrainScaleS-2, use transistors based on well-established metal-oxide-semiconductor technology.

Transistors are among the most common electronic building blocks in von Neumann computers. There are hundreds of different types, the most common being the metal–oxide–semiconductor field-effect transistor, or MOSFET for short. Within a computer chip, they act mainly as switches (and to a lesser degree as amplifiers) for electric currents: they either prevent or allow the passage of a current, so each transistor exists in an on or off state, which can be equated to a binary 1 or 0.

This working principle allows information to be stored and computed very easily, which is why electronic memory cells and logic gates have become the building blocks of our digital world. However, our brain’s electrical signals do not consist simply of 0s and 1s. A synaptic connection, for example, can exist at a whole range of “weights” or strengths.
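The difference is easy to see in a few lines of Python. A conventional memory cell holds one of exactly two states, while an artificial synapse holds one of many possible strengths; the 64 levels assumed below are purely illustrative.

```python
# A conventional memory cell stores exactly one of two states ...
bit = 1          # on/off, i.e., binary 1 or 0

# ... while an artificial synapse stores a graded "weight".
# Here we assume, purely for illustration, a device with 64 distinct
# conductance levels between fully depressed and fully potentiated.
LEVELS = 64
synaptic_weight = 37 / (LEVELS - 1)   # one of many possible strengths
```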

Numerous devices have been built to mimic this behavior in neuromorphic computers. One specialized semiconductor device, called a polymer synaptic transistor, contains an “active layer” that is responsible for modulating the signal between units. This layer is usually made from a conducting polymer whose precise composition affects the conductance and therefore the signal output.

Applying a voltage with a specific frequency across the transistor changes the active layer, producing either depression or potentiation of the electrical signal, similar to activity spikes in the brain. This essentially triggers plasticity, with numerical information encoded in properties of the spike such as its frequency, timing, magnitude, and shape. Binary values can be turned into spikes and vice versa, but the precise way to perform this conversion is still an active area of study.
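One common conversion scheme is rate coding, in which larger values produce more frequent spikes. The short Python sketch below shows the idea; it is one simple possibility, not the method used by any particular chip.

```python
import random

def encode_rate(value: float, n_steps: int = 100) -> list[int]:
    """Encode a value in [0, 1] as a spike train: at each time step,
    a spike (1) occurs with probability equal to the value."""
    return [1 if random.random() < value else 0 for _ in range(n_steps)]

def decode_rate(spikes: list[int]) -> float:
    """Recover an estimate of the value from the spike frequency."""
    return sum(spikes) / len(spikes)

train = encode_rate(0.8)    # high value -> high spike frequency
print(decode_rate(train))   # approximately 0.8
```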

Neuromorphic hardware is also not limited to just transistors — researchers have been reporting increasingly creative ways to mimic the brain’s architecture using artificial components, including memristors, capacitors, spintronic devices, and even some interesting attempts to realize neuromorphic computing using fungi.

How is a neuromorphic computer programmed?

Neuromorphic computers typically use an artificial neural network (ANN) to perform computational tasks. Among the many types of ANNs, spiking neural networks (SNNs) are especially interesting because they are based on artificial neurons that communicate through electrical signals called “spikes” and incorporate time into their models. This contributes to the energy efficiency of such systems, as the artificial neurons are not constantly active but only transmit information once the sum of the received spikes reaches a certain threshold.
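A standard way to model such a thresholded neuron is the leaky integrate-and-fire (LIF) model. The sketch below is a bare-bones Python version; the threshold and leak constants are illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a common building
# block of spiking neural networks. Constants are illustrative.
THRESHOLD = 1.0   # membrane potential at which the neuron fires
LEAK = 0.9        # fraction of potential retained each time step

def lif_neuron(input_currents: list[float]) -> list[int]:
    """Integrate inputs over time; emit a spike (1) only when the
    accumulated potential crosses the threshold, then reset."""
    potential, spikes = 0.0, []
    for current in input_currents:
        potential = potential * LEAK + current
        if potential >= THRESHOLD:
            spikes.append(1)
            potential = 0.0   # reset after firing
        else:
            spikes.append(0)
    return spikes

# The neuron stays silent (consuming little energy) until enough
# input has accumulated:
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]
```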

Before the network can begin its operations, it first needs to be programmed or, in other words, the network has to learn. This is done by providing it with data to learn from. Depending on the type of ANN, the learning method can vary. If, for example, the network’s task is to identify cats or dogs in images, one could feed it thousands of images labeled “cat” or “dog” so that it can identify the subject independently in future tasks. Identification requires vast amounts of demanding calculations to process the color of each pixel in the image.
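As a toy illustration of learning from labeled data, the following Python sketch trains a single artificial neuron (a perceptron) on made-up two-number “features” standing in for images. A real image classifier works in the same spirit, but with pixel values as inputs and vastly more weights.

```python
# Toy supervised learning: a single artificial neuron learns to
# separate two labeled classes by nudging its weights whenever it
# makes a mistake.
def train(examples, labels, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in zip(examples, labels):
            prediction = 1 if weights[0]*x1 + weights[1]*x2 + bias > 0 else 0
            error = label - prediction       # 0 when the guess was right
            weights[0] += lr * error * x1    # nudge the weights toward
            weights[1] += lr * error * x2    # the correct answer
            bias += lr * error
    return weights, bias

# Made-up features: label 1 for "cat-like", 0 for "dog-like".
examples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
print(train(examples, labels))
```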

There is a vast number of ANNs, and choosing the right one depends on the user’s requirements. SNNs, while attractive for their lower power consumption, are still difficult to train, mainly owing to the complex dynamics of their neurons and the non-differentiable nature of spike operations.
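To see why spikes are troublesome for training, note that the spike operation is a hard step function whose derivative is zero almost everywhere, so the gradients used to train conventional ANNs vanish. One widely used workaround is the so-called surrogate gradient method, which substitutes a smooth stand-in derivative during training; the sigmoid-based surrogate below is one common, illustrative choice.

```python
import math

# Forward pass: the spike operation is a hard step, so its true
# derivative is zero almost everywhere and gradient descent stalls.
def spike(v: float, threshold: float = 1.0) -> float:
    return 1.0 if v >= threshold else 0.0

# Backward pass: substitute the derivative of a sigmoid centered on
# the threshold in place of the step's true (zero) derivative.
def surrogate_grad(v: float, threshold: float = 1.0, beta: float = 5.0) -> float:
    s = 1.0 / (1.0 + math.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)   # largest near the threshold
```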

Where is neuromorphic computing being used?

Experts predict that neuromorphic devices won’t necessarily replace conventional computer hardware but will instead complement it, especially when it comes to addressing specific technical challenges. That said, there have been reports of neuromorphic computers modeling Boolean logic, a key concept in every programming language used today, suggesting that neuromorphic computers could also be capable of general-purpose computing.
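The Boolean-logic point is easy to illustrate: a single threshold neuron of the kind used in SNNs can act as a logic gate. The weights and thresholds in this Python sketch are illustrative choices.

```python
# A single threshold neuron "fires" (outputs 1) only when its
# weighted inputs cross a threshold, which lets it mimic logic gates.
def threshold_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```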

Regardless, neuromorphic computing will be very interesting for disciplines and applications in which the brain outperforms classical computers in terms of energy efficiency and computation time.

These include cognitive tasks, such as audio or image recognition, and efficient implementations of artificial intelligence (AI), as well as new opportunities for brain–machine interfaces, robotics, sensing, and healthcare (to name a few).

There are challenges to overcome, as the field is still relatively new, but the rising popularity and innovative new designs of neuromorphic computing make it a promising complement to traditional computer architectures.

Written by: Victoria Corless and Jan Rieck

The editors at Advanced Science News would like to thank Mario Lanza, associate professor of materials science and engineering at King Abdullah University of Science and Technology (KAUST) and Yuchao Yang, Boya Distinguished Professor at Peking University for their contributions to this article.

Feature image credit: Stefano Bucciarelli on Unsplash

