SNNs learn more rapidly than other machine-learning systems while using less processing power

By Brian Santo, contributing writer

BrainChip Holdings Ltd. claims that it’s the first company to
deliver a spiking neural network (SNN) architecture to market. The company
will begin sampling its new neuromorphic system-on-a-chip (SoC) in the third
quarter of 2019, preceded by an FPGA-based development board for designers
anticipating the introduction. The Akida Development Environment is
available now for early-access customers.

BrainChip’s
forthcoming Akida Neuromorphic SoC (NSoC) will implement a newer approach
to machine learning called spiking neural networks. SNNs are said to learn much more rapidly than other machine-learning systems, in near real time, using smaller data sets and significantly less processing, which translates into lower power consumption.

This combination
of traits will make SNNs particularly suitable for applications at the network
edge, argues BrainChip. But first, the company needs to get the chips into the market and into use.

There are SNN
chips available today. Prominent among them are Intel’s Loihi and IBM’s TrueNorth,
the latter developed under a DARPA contract.

Bob Beachler,
BrainChip’s senior vice president of marketing, explained to Electronic
Products that Loihi and TrueNorth are essentially experimental or research
devices, whereas BrainChip’s Akida Neuromorphic SoC will be a commercial
product for the commercial market. Given that most other companies appear to be
pursuing a different network architecture known as convolutional neural
networks (CNNs), BrainChip appears to have a good shot at being first with a
commercial SNN-based machine-learning chip.

The Akida chip will
have a neuron fabric with 1.2 million neurons and 10 billion synapses, on-chip
processing (for system management and training/inference control), memory
interfaces (for flash or LP/DDR4), a set of data interfaces for co-processor
applications, and a chip-to-chip interface so that multiple Akida SoCs can be
ganged. An on-board sensor interface currently supports five different sensor
types, including pixel-based imaging and dynamic vision sensors; support for additional sensor types can be added.

There are few
standardized measures for machine-learning systems, but there is a large,
commonly used data set of images called CIFAR-10 on which different machine-learning systems are trained. The figures of merit for performance comparison are
accuracy (correctly identifying the image) and the efficiency of recognition,
expressed as frames per second per watt (fps/W).
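
As a rough illustration of that efficiency metric, here is a minimal sketch in Python; the throughput and power figures are invented for the example and are not vendor data:

    # Hypothetical fps/W calculation; the numbers are made up for illustration.
    frames_per_second = 1200.0   # measured inference throughput
    power_watts = 2.5            # average power draw during inference

    efficiency = frames_per_second / power_watts
    print(f"Efficiency: {efficiency:.0f} fps/W")   # prints "Efficiency: 480 fps/W"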

BrainChip’s Akida
system is among the most accurate machine-learning systems, and it shares the mark for greatest efficiency. When both measures are considered together, however, it stands alone atop the list as the most accurate and efficient machine-learning system, according to data supplied by the company.

The first three
application targets for the Akida SoC are embedded vision systems (for
autonomous vehicles, surveillance, robotics, etc.), cybersecurity (packet
inspection), and financial systems (trading pattern detection, price prediction).
The first two categories will be supervised applications, meaning that the system is trained first and then looks for familiar patterns.

The financial
applications would be unsupervised; the system would be set to look for (learn)
patterns not yet detected. Analysts can then evaluate to what extent these
patterns are meaningful and act accordingly.
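
To make the supervised/unsupervised distinction concrete, here is a minimal sketch; scikit-learn is used purely for illustration, not BrainChip’s Akida tooling, and the data is synthetic:

    # Supervised vs. unsupervised learning in miniature.
    # scikit-learn is used for illustration only; this is not Akida code.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(100, 4))       # stand-in sensor or market data

    # Supervised: train on labeled examples, then recognize familiar patterns.
    labels = (features[:, 0] > 0).astype(int)  # known ground-truth labels
    clf = KNeighborsClassifier().fit(features, labels)
    print(clf.predict(features[:5]))           # classifies new observations

    # Unsupervised: no labels; the system finds groupings on its own,
    # which an analyst can then evaluate for meaning.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    print(km.labels_[:5])                      # discovered cluster assignments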

Machine-learning types
The first generation of neural networks works by assigning values to the connections
between any given processing node and the next. Logic pathways are formed based
on the relative strength of the connections between those nodes — the “synaptic
weights” between “neurons.” The values are variable and change as the system
learns. Classic neural networks rely heavily on backpropagation, the process of
continually feeding results back into the system to fine-tune synaptic weights.
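
A toy sketch of that weight-update loop, assuming a single sigmoid “neuron” trained by gradient descent (illustrative only; real networks stack many such layers):

    # Toy backpropagation-style weight update for one sigmoid "neuron".
    # Illustrative only; real systems train many layers of such weights.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=(200, 3))          # input signals
    true_w = np.array([0.5, -1.0, 2.0])
    y = (x @ true_w > 0).astype(float)     # target outputs to learn

    w = np.zeros(3)                        # "synaptic weights," variable over time
    learning_rate = 0.1
    for _ in range(500):
        pred = 1 / (1 + np.exp(-(x @ w)))  # forward pass through the "neuron"
        grad = x.T @ (pred - y) / len(y)   # error fed back to adjust the weights
        w -= learning_rate * grad          # strengthen or weaken connections
    print(w)                               # weights settle toward a solution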

These neural
networks can work extraordinarily well, but training them becomes a steadily
more laborious process as the tasks they are given become more complex. In
straight digital logic, you simply add more processing resources, but in neural
networks, adding neurons has diminishing returns.

Neural-network
researchers long ago anticipated that this would happen and, for some time, have been
pursuing newer approaches to machine learning to improve neural-network
performance. Among the most promising network types are CNNs and SNNs. Whereas first-generation neural networks rely heavily on backpropagation, both CNNs and
SNNs are largely feed-forward systems, which is the way that biological brains work (hence, the term “neuromorphic”).

CNNs, like classic neural networks, manage synaptic weights; the process relies on convolutional algorithms. SNNs differ in that they rely on “threshold” logic: weighted inputs accumulate in a neuron until they hit a preset threshold, and then the neuron fires, or “spikes,” after which its accumulated value drops back to a reset level.
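
A minimal sketch of that accumulate-and-fire behavior, using a generic integrate-and-fire model (an illustration under simple assumptions, not Akida’s actual circuit):

    # Generic integrate-and-fire spiking neuron; illustration only,
    # not BrainChip's implementation.
    THRESHOLD = 1.0
    RESET = 0.0

    def run_neuron(weighted_inputs):
        """Accumulate weighted inputs; emit a spike when the threshold is hit."""
        potential = RESET
        spikes = []
        for value in weighted_inputs:
            potential += value           # inputs accumulate over time
            if potential >= THRESHOLD:
                spikes.append(1)         # the neuron fires, or "spikes"
                potential = RESET        # then drops back to the reset level
            else:
                spikes.append(0)
        return spikes

    print(run_neuron([0.4, 0.3, 0.5, 0.2, 0.9]))   # -> [0, 0, 1, 0, 1]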

CNNs are quite computationally
intensive compared with SNNs, and they typically need to be trained on large data
sets before deployment. SNNs, in contrast, can learn quickly and can learn in
place — after being deployed.

That will make
them more suitable for applications at the network edge that need to be
immediately responsive, according to Beachler.

Thus far, development work on CNNs appears to be the more extensive, judging by the number of companies involved and the amount of research available. A designer familiar with this new generation of neural networks is much more likely to be familiar with CNNs than with SNNs.

BrainChip said that the
Akida SNN chip can replicate most CNN functionality. In other words, Beachler
said, work done on a CNN system can be brought over to the BrainChip
environment.

“I
would use the analogy of levels of abstraction for designing a chip,” he explained. “If you’re designing a chip at the transistor level, you’re going to
get a certain amount of efficiency on that chip. If you’re writing register
transfer language, Verilog or VHDL code, and going through a synthesis tool,
you’re not going to get the optimum performance, but you’re probably going to
get to your design faster. Similarly, if you start with a CNN and try to
convert that into an SNN, you’ll get to something working faster, but it’s not as optimal as if you had designed it as an SNN.”