Scientists create supervised randomness to mimic our own neural network

Mar 26, 2020

Controlling the probability of a series of seemingly random events is the key to mimicking the human brain and optimizing neuromorphic learning.

Since the 1960s, the scaling of silicon technologies, also known as Moore’s law, has driven advances in computing. In simple terms, Moore’s law refers to the doubling of the number of transistors on a chip every two years. This scaling has long been regarded as the key to meeting the demands placed on new electronic devices: lower power consumption, higher speed, and higher density.

However, many in the semiconductor industry are predicting the demise of Moore’s law, as feature sizes now approach a fundamental limit of roughly 1.5 nm derived from Heisenberg’s uncertainty principle. Researchers forecast that this scaling-down will come to an end within the next 10 years. Now is the time to ask ourselves: will the end of scaling mean the end of advancement?

The answer, we hope, is a resounding no! Even if Moore’s law were to fail, there is still tremendous demand for improving our computing capabilities, as we’ve entered an age in which the amount of data we use, store, and process has exploded. Scientists are now seeking new types of computing algorithms that can cope with such enormous volumes of data, rather than relying on digital computation and the continued scaling of the complementary metal oxide semiconductor (CMOS) devices that currently dominate electronics.

To support new computing algorithms, such as machine learning, there is growing demand for non-digital devices, also called neuromorphic devices. In this regard, memristors are an important emerging technology for memory and neuromorphic computing, and one of the most promising candidates for next-generation computing beyond Moore’s law. The name is a blend of “memory” and “resistor”: the device is a two-terminal resistor whose resistance can be changed by electrical stimulation and stored in a non-volatile manner. What distinguishes these devices from conventional CMOS logic devices is that they can effectively mimic the connections and learning behavior of the synapses found in the human brain.
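To make the synapse analogy concrete, here is a minimal toy model in Python. The class name, parameter values, and update rule are illustrative choices of our own, not the device physics of any particular memristor: the point is simply a two-terminal element whose conductance is nudged by pulses and retained between them.

```python
import numpy as np

class ToyMemristor:
    """A cartoon memristor: a two-terminal element whose conductance is
    nudged up or down by voltage pulses and retained between pulses
    (non-volatile). All values here are illustrative assumptions."""

    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=1e-5):
        self.g_min, self.g_max = g_min, g_max  # conductance bounds (siemens)
        self.g = g_init                        # stored state persists: the "memory"

    def pulse(self, polarity, step=1e-6):
        # A positive pulse raises conductance, a negative pulse lowers it,
        # loosely analogous to strengthening or weakening a synapse.
        self.g = np.clip(self.g + polarity * step, self.g_min, self.g_max)

    def read(self, v_read=0.1):
        # Ohm's law: current at a small read voltage reveals the stored
        # weight without rewriting it.
        return self.g * v_read

m = ToyMemristor()
m.pulse(+1); m.pulse(+1)  # two potentiating pulses
print(m.read())           # the state survives between operations
```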

As a result, memristors, especially when arranged into cross-point arrays, which are highly compact and thus cost- and power-efficient, have been favored as a promising means of enabling next-generation machine learning. In fact, this arrayed architecture has been adopted in one of Intel’s latest products, 3D XPoint memory, which is being highlighted for its high performance and capacity; researchers hope it can bridge the gap between fast DRAM and dense NAND SSDs or hard disk drives in the incumbent computing architecture.
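For readers curious why cross-point arrays are so efficient for machine learning, the NumPy sketch below shows the standard idea (array sizes and values are made up for illustration): each junction’s conductance acts as a weight, and Ohm’s and Kirchhoff’s laws carry out an entire vector-matrix multiplication in one analog step.

```python
import numpy as np

# In a cross-point array, the conductance G[i, j] at each junction acts
# as a weight. Applying voltages V[j] to the columns and summing the
# currents on each row computes I[i] = sum_j G[i, j] * V[j] in one step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 8))  # 4x8 array of conductances (S)
V = rng.uniform(0.0, 0.2, size=8)         # input voltages on 8 columns (V)

I = G @ V  # row currents = analog dot products, all computed in parallel
print(I)
```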

From an algorithmic perspective, stochastic learning has proven more efficient than deterministic learning, which involves no randomness and is therefore more vulnerable to getting stuck, especially when processing huge amounts of data. The idea is based on the stochastic behavior of synapses, in which a single event (i.e., the transfer of a signal between neurons) seemingly happens at random, but, due to partial coupling between local events, the long-term average follows a certain probability; call it “supervised randomness”. In this type of learning, a machine’s learning efficiency depends on that probability, so controlling the probability of a series of seemingly random events becomes the key enabler for optimizing neuromorphic learning for various tasks.
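A short Python sketch illustrates this “supervised randomness” in software (the function name, probability, and step size are hypothetical choices, not from the paper): each individual update fires at random, yet the long-run average drift is set precisely by the probability p.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_update(weight, grad, p=0.3, step=0.01):
    """Apply a fixed-size update with probability p. Each call looks
    random, but over many calls the mean update is p * step * sign(grad):
    the probability itself acts as the effective learning rate."""
    if rng.random() < p:                # the "seemingly random" event
        weight -= step * np.sign(grad)  # coarse, cheap update
    return weight

# The long-run drift matches the supervised probability p.
w = 0.0
for _ in range(10_000):
    w = stochastic_update(w, grad=1.0, p=0.3)
print(w)  # close to -10_000 * 0.3 * 0.01 = -30
```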

Nonetheless, actually implementing hardware that generates such controlled random probabilities has been a major challenge. There are very few reports in the literature on how to accomplish this, and most of those that do exist are still only proposals, with no experimental demonstration or results.

Now, in a recent paper published in Advanced Intelligent Systems, Woorham Bae from UC Berkeley and Kyung Jean Yoon from Intel suggest that the noise of an electrical ring oscillator can be used to scramble a deterministic sequence. Ring oscillators are circuits that produce a periodic, oscillating signal and are commonly used in microprocessors and application-specific integrated circuits (ASICs). While every single event is scrambled by the noise, a feedback loop supervises the long-term probability. That is, the circuit monitors the history of the events and drives the long-term probability back toward a desired value whenever it deviates due to the random noise.
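The paper describes an analog circuit; as a rough software analogue of the feedback idea (with assumed parameters throughout, and Gaussian noise standing in for the oscillator’s phase noise), the sketch below shows how an integrating feedback loop can pin the long-term fraction of 1s to a target value while each individual bit remains unpredictable.

```python
import numpy as np

rng = np.random.default_rng(7)

def supervised_random_bits(p_target, n, gain=0.01):
    """Software analogue of the feedback idea, not the authors' circuit:
    a noisy comparator emits 0/1 events while an integrating feedback
    loop tracks the running history and nudges a bias so the long-term
    fraction of 1s converges to p_target. 'gain' is an assumed knob."""
    bias = p_target                  # control variable the loop adjusts
    bits = np.empty(n, dtype=int)
    for i in range(n):
        noise = rng.normal(0.0, 0.2)      # stand-in for oscillator phase noise
        bit = int(noise + bias > 0.5)     # each single event is scrambled
        bits[i] = bit
        bias += gain * (p_target - bit)   # supervise: correct any drift
    return bits

bits = supervised_random_bits(p_target=0.7, n=20_000)
print(bits.mean())  # long-term average ~0.7 despite per-event randomness
```

At equilibrium the feedback term averages to zero only when the fraction of 1s equals p_target, which is what keeps the long-term probability locked to the desired value.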

Unlike most applications, which suppress a system’s noise, the new technique exploits it. Since noise levels grow as operating power is reduced, the approach benefits twice: it obtains the stronger noise it needs while consuming less power. Though there are few examples to compare against, the technique achieves more than a 1000x improvement in energy efficiency over traditional random number generators, while also providing controllable probability, something the true random number generators reported in the literature could not offer.

The proposed way of harnessing circuit noise for probability generation dedicated to stochastic learning is anticipated to accelerate the development of novel devices and architectures for machine learning, overcoming the current limitations of digital von Neumann computing systems that still rely on static computation.

Reference: W. Bae, K. J. Yoon, ‘Weight Update Generation Circuit Utilizing Phase Noise of Integrated CMOS Ring Oscillator for Memristor-Crossbar-Array Neural Network Based Stochastic Learning,’ Advanced Intelligent Systems (2020). DOI: 10.1002/aisy.202000011
