Society, and in turn business, needs to find more sustainable alternatives to the power-hungry calculation that has brought us to where we are today. I think it’s time to turn for inspiration to the most efficient and powerful computer of all – the human brain. Excitingly, the rise of neuromorphic computing, which mimics our neural systems, promises both extraordinary performance and transformative energy efficiency.
Before I get into its wider benefits and potential applications, let me put the energy savings into perspective. Conventional computer technology is based on the so-called von Neumann architecture, where data processing and transmission take place intensively and continuously. Next-generation computers are expected to operate at exascale, or 10^18 calculations per second. But the downside is the power consumption.
Data computation and transmission account for a large portion of this consumption, and the rapid development of machine learning and AI neural network models is increasing demand even further. Some AI learning algorithms could draw as much as 10 megawatts of power on an exascale computer. Data-centric computing requires a hardware system revolution: computing system performance, particularly energy efficiency, sets the fundamental limit on AI/ML capability. As for neuromorphic computing? It has the potential to deliver high-performance computing while consuming as little as 1/1000 of the energy.
The neuromorphic approach uses silicon artificial neurons to form a spiking neural network (SNN) that performs event-triggered computations. This is the key difference between an SNN and other networks, such as the convolutional neural network (CNN): spiking neurons process input information only after receiving an incoming spike signal. In effect, SNNs try to make artificial neurons behave more like real ones.
The process does not work in discrete time steps. Instead, events arriving over time build up signals within the neurons. These signals accumulate until a threshold is crossed, at which point a computational operation is triggered.
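To make this concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The weight, leak and threshold values are arbitrary illustrations, not the parameters of any particular chip:

```python
def lif_neuron(input_spikes, weight=0.6, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: weighted input events
    accumulate in the membrane potential until it crosses a
    threshold, at which point the neuron fires and resets."""
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential *= leak            # signal leaks away between events
        potential += weight * spike  # integrate the incoming event
        if potential >= threshold:   # threshold crossed: fire
            output_spikes.append(t)
            potential = 0.0          # reset after firing
    return output_spikes

# Bursts of input events drive the neuron over threshold:
print(lif_neuron([1, 1, 0, 0, 1, 1, 1, 0]))  # → [1, 5]
```

Note that the neuron does no work at all on ticks with no input, which is the source of the efficiency discussed below.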
Ultra-low power operation can be achieved due to SNNs being effectively in an “off” state most of the time and only kicking in when a change, or “event”, is detected.
Once triggered, an SNN can compute quickly without running a power-hungry fast clock, because each event sets off a large number of parallel operations (equivalent to thousands of CPUs working in parallel). As a result, it consumes only a fraction of the power a CPU or GPU needs for the same workload.
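A back-of-envelope way to see where the savings come from: a clocked dense layer touches every synapse on every tick, while an event-driven layer only does work when an input actually spikes. The layer sizes and activity level below are assumed purely for illustration:

```python
def mac_counts(active_inputs_per_tick, n_in, n_out):
    """Tally multiply-accumulate operations: a clocked dense layer
    touches every synapse every tick; an event-driven layer updates
    an input's fan-out only when that input actually spikes."""
    ticks = len(active_inputs_per_tick)
    clocked = ticks * n_in * n_out                       # all synapses, every tick
    event_driven = sum(active_inputs_per_tick) * n_out   # spiking inputs only
    return clocked, event_driven

# 1,000 inputs, 100 outputs, 100 ticks, ~1% of inputs active per tick
clocked, event = mac_counts([10] * 100, n_in=1000, n_out=100)
print(clocked // event)  # → 100: the clocked layer does 100x more work
```

At 1% input activity the event-driven layer does roughly 1% of the operations, which is the intuition behind the orders-of-magnitude energy claims for sparse, event-driven workloads.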
That’s why neuromorphic computing is well suited to edge AI: implementing low-power AI on end devices without a connection to the cloud. This is especially true for TinyML applications, which tend to focus on battery-powered sensors, IoT devices and so on.
Next-generation neuromorphic systems are expected to have the inherent ability to learn and handle complex data just as our brain does. They have the potential to process large amounts of digital information with much lower power consumption than conventional processors.
In the medium term, hybrid computers that pair traditional processors with neuromorphic chips could significantly outperform conventional machines. In the longer term, fully neuromorphic computers will be fundamentally different, designed for specific applications from natural language processing to autonomous driving.
When it comes to design, instead of the conventional architecture of partitioning chips into separate processor and memory, the computer can be built with silicon “neurons” that perform both functions.
Building extensive “many-to-many” neuron connections will enable an efficient pipeline for signal interaction and facilitate massively parallel operation. The trend is to pack ever-increasing numbers of electronic neurons, synapses and so on into a single chip.
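As a rough sketch of the idea, the toy NumPy snippet below treats a fully connected (“many-to-many”) spiking layer as one weight matrix: a single parallel update fans each spike out to every target neuron, and each neuron’s membrane state lives alongside the computation that uses it. The layer size and the uniform 0.3 weights are arbitrary assumptions:

```python
import numpy as np

def propagate(spikes, potential, weights, threshold=1.0):
    """One event-driven step in a fully connected spiking layer:
    every firing neuron updates all of its targets in one parallel
    operation; membrane state and computation sit together."""
    potential = potential + spikes @ weights       # parallel many-to-many fan-out
    fired = potential >= threshold
    potential = np.where(fired, 0.0, potential)    # reset neurons that fired
    return fired.astype(int), potential

n = 8
weights = np.full((n, n), 0.3)   # toy uniform synapse strengths
state = np.zeros(n)

fired, state = propagate(np.ones(n), state, weights)
print(fired)  # every neuron receives 8 * 0.3 = 2.4 and fires: [1 1 1 1 1 1 1 1]
```

On neuromorphic hardware this matrix product is not a sequential loop but a physical fan-out across the synapse array, which is what makes the parallelism cheap.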
The design methods for neuromorphic processor chips broadly follow one of several distinct paths. The ASIC-based digital neuromorphic chip offers highly optimized computing performance tailored to application requirements. For AI applications, it can potentially perform both inference and real-time learning.
The FPGA-based chip is similar to the ASIC-based digital design but also offers portability and reconfigurability. Due to their highly reconfigurable nature and parallelism, FPGAs are considered a suitable platform for mimicking, to some extent, the natural plasticity of biological neural networks.
Analog neuromorphic chips, which include so-called ‘in-memory computing’, have the potential to achieve the lowest power consumption. They are primarily suited to ML inference rather than real-time learning.
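The in-memory-computing idea can be illustrated numerically: weights are stored in place as device conductances, and applying input voltages performs the multiply-accumulate where the data sits (Ohm’s law, with currents summing on each column). The values below are arbitrary illustrations:

```python
import numpy as np

# In analog in-memory computing, weights are stored as conductances G
# in a crossbar array. Applying input voltages v yields column currents
# i = v @ G in a single physical step, so the matrix-vector product
# happens inside the memory rather than in a separate processor.
G = np.array([[0.2, 0.5],
              [0.4, 0.1]])     # toy conductance (weight) matrix
v = np.array([1.0, 0.5])       # input voltages (activations)
i = v @ G                      # column currents ≈ [0.4, 0.55]
```

Because no weight data has to move between memory and processor, the energy per multiply-accumulate can be far lower than in a von Neumann design, at the cost of analog precision.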
The photonic integrated circuit (PIC) based neuromorphic chip offers photonic computation that can achieve very high speed at very low power consumption, while the mixed-signal NSoC (Neuromorphic System-on-Chip) design combines extremely low-power analog circuitry for ML inference with a digital SNN-architecture processor for real-time learning.
I expect that neuromorphic computing will generate development opportunities in several technological areas, such as materials, devices, neuromorphic circuits, and new neuromorphic algorithms and software development platforms—all critical factors for the success of neuromorphic computing.
There are countless potential applications. Applying neuromorphic techniques to vision applications represents a large market opportunity for many different sectors, including smart vision sensors and gesture control applications in smart homes, offices and factories.
Another use case is neuromorphic computing for myoelectric prosthetic control. Myoelectric prostheses help people with reduced mobility by sensing and processing muscle spikes. However, current systems have inefficiencies that hold back the user experience: motion classification needs finer granularity, and computational demands must fall to reduce energy consumption.
Low-power edge computing represents a key area with high commercial potential. As IoT applications in smart homes, offices, industries and cities grow, there is an increasing need for more intelligence at the edge as control moves from data centers to local devices. Applications such as autonomous robots, wearable healthcare systems, security and IoT all share a common requirement: battery-powered, ultra-low-power, autonomous operation.
One potential application that I find particularly fascinating is that of “Parametric Insurance”. With global attention increasingly turning to climate-related issues, this unconventional form of “catastrophe insurance” is playing an increasingly important role. It’s a product that offers pre-defined payouts based on a trigger event – and can help provide protection when standard policies are harder to come by.
To me, the connection to neuromorphic computing is clear. Parametric insurance can be linked to a catastrophe bond (CAT bond) for events such as hurricanes, earthquakes and so on. Neuromorphic-powered edge computing has a major role to play, as it would enable highly detailed and sophisticated risk analysis, assessment and payment settlement. Everything would be on the edge – with the associated low cost.
About the author
Dr. Aidong Xu, Head of Semiconductor Capability, Cambridge Consultants
Aidong has over 30 years of experience from various industries, including some of the leading semiconductor companies. He has led large internationally based engineering teams and brought innovative industry-leading products to the global market that have achieved rapid and sustained growth. Aidong holds a Ph.D. in power electronics and power semiconductors.