There are a variety of neural networks currently in focus within the Artificial Intelligence community. Deep Neural Networks (DNNs), based on traditional, non-spiking, feed-forward neural networks, have been impressively successful in performing image recognition tasks. Spiking neural networks (SNNs) are receiving a great deal of attention because they more closely mimic the human brain, and they can perform powerful computations with less power consumption than DNNs. Recently, researchers from Sandia have designed an algorithm called Whetstone [1], which allows one to train DNNs and then “sharpen” them into Spiking Deep Neural Networks (SDNNs). These are similar to SNNs, but contain some enhanced functionality beyond SNNs.
Abstract. Brain-inspired spiking neural networks (SNNs), which can be applied to visual information processing and speech recognition, are attracting great attention, especially when combined with emerging electronic synapses (i.e., memristors). The neuron, which transmits and receives spiking signals, is a vital component for realizing the biological rules of SNNs, such as leaky integrate-and-fire (LIF), the inhibitory period, winner-take-all (WTA) and bidirectional transmission. In this work, we propose and implement a novel spiking neuron circuit based on complementary metal-oxide-semiconductor (CMOS) technology. Experimental results show that the neuron circuit can generate spiking pulses, realize lateral inhibition and change the weight of a synapse.
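The WTA and lateral-inhibition rules mentioned above can also be illustrated behaviourally in software, even though the paper realizes them in CMOS. The sketch below is only an assumption-laden illustration (the function and parameter names are my own, not the paper's): the neuron with the highest supra-threshold membrane potential fires, and its spike resets the potentials of its competitors.

```python
import numpy as np

def wta_lateral_inhibition(potentials, threshold=1.0):
    """Behavioural sketch of winner-take-all with lateral inhibition: the
    neuron with the highest supra-threshold potential fires, and its spike
    resets the membrane potentials of the competing neurons."""
    potentials = np.asarray(potentials, dtype=float)
    spikes = np.zeros_like(potentials)
    above = potentials >= threshold
    if above.any():
        winner = int(np.argmax(np.where(above, potentials, -np.inf)))
        spikes[winner] = 1.0
        potentials[:] = 0.0  # lateral inhibition resets all competitors
    return spikes, potentials

# Neuron 1 has the highest supra-threshold potential, so it wins and fires.
print(wta_lateral_inhibition([0.4, 1.3, 1.1]))
```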
Abstract. In this paper, the complexity algorithm is used to locate the human eyes, and the optimal threshold method is then used to locate the eyes accurately. This method is more accurate than the gray projection method, and it is faster and less affected by light and noise. Spiking neural networks, which inherit their parallel mechanism from biological systems, are used to extract the face features. The spiking neural networks can remember the key features of a visual image through the synapse strength distribution and recall the visual image by triggering a specific neuron. Based on the key features, the nearest neighbor classifier is used for matching faces. Experimental results show that the proposed eye-location algorithm works well and offers advantages for eye location across multiple positions and in complex backgrounds, and that the face feature extraction based on spiking neural networks can achieve a high recognition rate and reduce processing time. Furthermore, the algorithm can be ported to a GPU platform and sped up dramatically.
Abstract: Deep spiking neural networks (SNNs) hold the potential to improve the latency and energy efficiency of deep neural networks through data-driven, event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error-backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten-digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, on which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that, thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
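The abstract does not reproduce the derivation, but the core idea, treating the spike-time discontinuity as noise and backpropagating through the underlying membrane potential, can be sketched with a custom autograd function. This is only an illustrative approximation of the approach, not the authors' implementation; all names and the fixed threshold are my own assumptions.

```python
import torch

THRESHOLD = 1.0  # illustrative firing threshold

class SpikeThroughMembrane(torch.autograd.Function):
    """Forward: emit a binary spike when the membrane potential crosses the
    threshold. Backward: treat the discontinuity at spike time as noise and
    pass the gradient straight through to the membrane potential."""

    @staticmethod
    def forward(ctx, v_mem):
        return (v_mem >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_output):
        # The threshold discontinuity is ignored (treated as noise), so the
        # incoming gradient flows through the membrane potential unchanged.
        return grad_output

spike = SpikeThroughMembrane.apply

# Toy check: gradients reach v_mem even though the forward output is binary.
v_mem = torch.randn(4, requires_grad=True)
spike(v_mem).sum().backward()
print(v_mem.grad)  # all ones under this straight-through view
```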
Abstract. Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using binary spikes. While artificial Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons [1, 2], the current performance is far from that of deep ANNs on hard benchmarks, and these SNNs use much higher firing rates compared to their biological counterparts, limiting their efficiency. Here we show how spiking neurons that employ an efficient form of neural coding can be used to construct SNNs that match high-performance ANNs and exceed the state of the art in SNNs on important benchmarks, while requiring much lower average firing rates. For this, we use spike-time coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up to an order of magnitude fewer spikes compared to previous SNNs. Adaptive spike-time coding additionally allows for the dynamic control of neural coding precision: we show how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention. AdSNNs thus hold promise as a novel and efficient model for neural computation that naturally fits temporally continuous and asynchronous applications.
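As a rough illustration of the firing-rate-limiting adaptation described above (not the authors' adaptive spiking neuron model or its derived transfer function; the dynamics and parameter values below are assumptions), the sketch raises the firing threshold after every output spike and lets it decay back, so the output rate grows sublinearly with the input drive.

```python
import numpy as np

def adaptive_spiking_neuron(drive, n_steps=200, theta0=1.0, dtheta=0.5,
                            tau_theta=20.0, dt=1.0):
    """Illustrative adaptive spiking neuron: each output spike raises the
    firing threshold, which then decays back toward its baseline, so the
    average firing rate is limited even for strong constant drive."""
    v, theta = 0.0, theta0
    spikes = np.zeros(n_steps)
    for t in range(n_steps):
        v += max(drive, 0.0) * dt               # integrate rectified drive
        theta += (theta0 - theta) / tau_theta   # threshold decays to baseline
        if v >= theta:
            spikes[t] = 1
            v -= theta                          # subtractive reset
            theta += dtheta                     # spike-triggered adaptation
    return spikes

# A stronger drive produces more spikes, but adaptation compresses the rate.
for d in (0.1, 0.5, 1.0):
    print(d, adaptive_spiking_neuron(d).sum())
```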
Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention. So far, however, SNNs have not shown performance competitive with artificial neural networks (ANNs), due to the lack of effective learning algorithms and efficient programming frameworks. We address this issue from two aspects: (1) we propose a neuron normalization technique to adjust neural selectivity and develop a direct learning algorithm for deep SNNs; (2) by narrowing the rate-coding window and converting the leaky integrate-and-fire (LIF) model into an explicitly iterative version, we present a PyTorch-based implementation method towards the training of large-scale SNNs. In this way, we are able to train deep SNNs with tens of times speedup. As a result, we achieve significantly better accuracy than reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and comparable accuracy to existing ANNs and pre-trained SNNs on non-spiking datasets (CIFAR10). To the best of our knowledge, this is the first work that demonstrates direct training of deep SNNs with high performance on CIFAR10, and the efficient implementation provides a new way to explore the potential of SNNs.
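The conversion of the LIF model into an explicitly iterative form, suitable for a step-by-step PyTorch loop, can be sketched as follows. This is a minimal illustration under assumed dynamics (leak factor, hard reset, unit threshold), not the authors' released code, and it omits the surrogate gradient that actual backpropagation through the spike function would require.

```python
import torch

def iterative_lif(x_seq, w, tau=0.5, v_th=1.0):
    """Explicitly iterative LIF layer (illustrative). x_seq: (T, batch, n_in)
    input spikes; w: (n_in, n_out) weights. The membrane potential leaks by a
    factor `tau`, integrates the synaptic current, fires a binary spike at
    threshold `v_th`, and resets on firing."""
    T = x_seq.shape[0]
    v = torch.zeros(x_seq.shape[1], w.shape[1])     # membrane potential
    out = []
    for t in range(T):
        v = tau * v + x_seq[t] @ w                  # leak + integrate
        spk = (v >= v_th).float()                   # fire
        v = v * (1.0 - spk)                         # reset fired neurons
        out.append(spk)
    return torch.stack(out)                         # (T, batch, n_out)

# Toy usage with random Poisson-like input spikes.
x = (torch.rand(10, 2, 5) < 0.3).float()
w = torch.randn(5, 3) * 0.5
print(iterative_lif(x, w).shape)                    # torch.Size([10, 2, 3])
```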
Abstract—Spiking Neural Networks (SNNs) are third-generation neural networks and are considered the most biologically plausible so far. As a relative newcomer to the field of artificial learning, SNNs are still exploring their own capabilities, as well as dealing with the singular challenges that arise from attempting to be both computationally applicable and biologically accurate. This paper explores the possibility of a different approach to solving linearly inseparable problems by using networks of spiking neurons. To this end, two experiments were conducted. The first experiment was an attempt at creating a spiking neural network that would mimic the functionality of logic gates. The second experiment relied on the addition of receptive fields in order to filter the input. This paper demonstrates that a network of spiking neurons utilizing receptive fields or routing can successfully solve the XOR linearly inseparable problem.
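As a minimal illustration of how spiking threshold units can solve XOR through routing and inhibition (this is not the networks or receptive fields used in the paper; the weights and thresholds below are hand-chosen assumptions), an inhibitory AND-like interneuron suppresses the output when both inputs fire:

```python
def lif_step(inputs, weights, threshold):
    """Single-step integrate-and-fire unit used as a spiking logic gate:
    fires (returns 1) iff the weighted input spikes reach the threshold."""
    return int(sum(w * s for w, s in zip(weights, inputs)) >= threshold)

def xor_network(x1, x2):
    """XOR from spiking threshold units: the hidden unit fires only when both
    inputs fire, and its inhibitory weight then silences the output unit."""
    h = lif_step([x1, x2], [1.0, 1.0], threshold=2.0)   # fires only for (1, 1)
    return lif_step([x1, x2, h], [1.0, 1.0, -2.0], threshold=1.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))   # classic XOR truth table
```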
Biological Neurons and Spiking Neural Networks. SNNs are the third and most recent generation of NNs and were inspired by the firing paradigm of biological neurons. In SNNs, each neuron accumulates inputs and fires only upon reaching its firing threshold; the change in the internal state of the firing neuron is rapid and approximately identical every time, and has the appearance of a spike (Hodgkin and Huxley, 1952). Thus, the state of each neuron is analogue, but the passing of information between neurons is binary. This accumulation of signals within the neurons offers a novel way of incorporating the temporal relations within the dataset into the network's activity. It has been proposed that this transient collective synchronisation makes SNNs particularly suitable for unsupervised processing of spatio-temporal patterns (Hopfield and Brody, 2001).
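A minimal sketch of this accumulate-and-fire behaviour, with an analogue internal state and a binary output spike train, might look as follows (parameter values and names are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def simulate_if_neuron(input_spikes, weights, threshold=1.0, leak=0.02, dt=1.0):
    """Minimal accumulate-and-fire neuron: the internal membrane value is
    analogue, while the output is a binary spike train; every emitted spike
    is stereotyped and the state resets after firing."""
    n_steps, _ = input_spikes.shape
    v = 0.0
    output = np.zeros(n_steps)
    for t in range(n_steps):
        v += input_spikes[t] @ weights * dt   # accumulate weighted input spikes
        v -= leak * v * dt                    # passive leak toward rest
        if v >= threshold:                    # firing threshold reached
            output[t] = 1.0                   # emit a stereotyped spike
            v = 0.0                           # reset the internal state
    return output

# Toy usage: three presynaptic inputs firing randomly for 100 time steps.
rng = np.random.default_rng(0)
spikes_in = (rng.random((100, 3)) < 0.2).astype(float)
print(simulate_if_neuron(spikes_in, weights=np.array([0.3, 0.5, 0.2])).sum())
```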
Real-time simulations of biological neural networks (BNNs) provide a natural platform for applications in a variety of fields: data classification and pattern recognition, prediction and estimation, signal processing, control and robotics, prosthetics, and neurological and neuroscientific modeling. BNNs possess an inherently parallel architecture and operate in the continuous signal domain. Spiking neural networks (SNNs) are a type of BNN with a reduced signal dynamic range: communication between neurons occurs by means of time-stamped events (spikes). SNNs allow a reduction of algorithmic complexity and communication data size at the price of a small loss in accuracy. Simulation of SNNs using traditional sequential computer architectures results in a significant time penalty. This penalty prohibits the application of SNNs in real-time systems.
The spiking neural network architecture (SpiNNaker), inspired by the mammalian brain, is a massively parallel, million-core architecture that aims to model real-time, large-scale spiking neural networks [29], [30]. The system blocks are realized with RISC processors, such as the ARM9 processor, in each processing node. It suffers from the drawbacks of general multi-core processor architectures because it uses the same memory hierarchies and organization as conventional CPUs. The system is implemented with 18 ARM968 processor cores per node, with 96 kB of local memory, 128 MB of shared memory and a packet router for each core. The project uses Address-Event Representation (AER) encoding for communicating neural activity between neurons. It introduces a lightweight multicast packet-routing mechanism, which is a modification of conventional AER. In this protocol, a significant number of small data packets are transmitted across the interconnect. SpiNNaker provides a fast simulation platform for large-scale spiking neural networks and can support different types of networks.
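Address-Event Representation itself is simple to illustrate in software: only the neurons that actually fire generate traffic, each event carrying a timestamp and the address of the firing neuron. The sketch below is a generic illustration of that encoding, not SpiNNaker's packet format or its multicast routing protocol.

```python
from typing import List, Tuple

def to_aer(spike_raster: List[List[int]], dt_us: int = 1000) -> List[Tuple[int, int]]:
    """Generic Address-Event Representation encoding: each spike in a dense
    time-step raster becomes a (timestamp_us, neuron_address) event, so only
    active neurons generate communication traffic."""
    events = []
    for step, row in enumerate(spike_raster):
        for address, fired in enumerate(row):
            if fired:
                events.append((step * dt_us, address))
    return events

# Toy raster: 3 time steps x 4 neurons.
raster = [[0, 1, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 0, 0]]
print(to_aer(raster))   # [(0, 1), (1000, 0), (1000, 3)]
```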
The human brain has the amazing capacity to learn and recall patterns from SSTD at different time scales, ranging from milliseconds to years and possibly to millions of years (e.g. genetic information, accumulated through evolution). Thus the brain is the ultimate inspiration for the development of new machine learning techniques for STPR. Indeed, brain-inspired Spiking Neural Networks (SNN) have the potential to learn SSTD by using trains of spikes (binary temporal events) transmitted among spatially located synapses and neurons. Both spatial and temporal information can be encoded in an SNN as locations of synapses and neurons and the time of their spiking activity, respectively. Spiking neurons send spikes via connections that have a complex dynamic behaviour, collectively forming an SSTD memory.
GReaNs is an artificial life software platform developed by Michal Joachimczak and Borys Wrobel and used to evolve Gene Regulatory Networks (GRNs). GReaNs has been used previously for evolving gene regulatory networks for tasks including controlling single cells, as unicellular entities or as parts of multicellular bodies, in two dimensions [80, 81] (it has been used to transform the structures into soft-bodied animats swimming in a fluid-like environment) and in three dimensions [30–32], processing signals [33], and controlling animats [26, 27]. All the networks evolved by GReaNs in previous work consisted of non-spiking nodes. As I mentioned in Section 1.1, my work is part of a broader research program to compare the computational properties of spiking neural networks and networks that are not spiking, so the first task I conducted in my project was to convert the GRN model in GReaNs to an SNN.
Abstract—Recent studies have shown that the dielectric properties of normal breast tissue vary considerably. This dielectric heterogeneity may mean that the identification of tumours using Ultra Wideband (UWB) Radar imaging alone may be quite difficult. Significantly, since the dielectric properties of benign tissue have been shown to overlap with those of malignant tissue, breast tumour classification using traditional UWB Radar imaging algorithms could be very problematic. Rather than simply examining the dielectric properties of scatterers within the breast, other features of the scatterers must be used for classification. Radar Target Signatures have previously been used to classify tumours due to the significant differences in size, shape and surface texture between benign and malignant tumours. This paper investigates Spiking Neural Networks (SNNs) applied as a novel tumour classification method. This paper describes the creation of 3D tumour models, the generation of representative backscatter, the application of a feature extraction method, and the use of SNNs to classify tumours as either benign or malignant. The SNN classifier is shown to outperform existing UWB Radar classification algorithms.
Neuromorphic engineering is a discipline that studies, designs and implements hardware and software that mimic the way in which nervous systems work, drawing its main inspiration from how the brain solves complex problems easily. Currently, the neuromorphic community has a set of neuromorphic hardware, such as sensors [1], learning circuits [2], neuromorphic information filters and feature extractors [3], and robotic and motor controllers [4]. In the field of neuromorphic sensors, diverse neuromorphic cochleae can be found [5]. These sensors are able to decompose audio signals into frequency bands, represent them as streams of short pulses, called spikes, using the Address-Event Representation [6], and then interface them with other neuromorphic layers. On the other hand, there are several software tools in the community, like NENGO [7] or BRIAN [8], for spiking neural network simulation with or without learning, or jAER [9], for real-time visualization and software processing of AER streams captured from the hardware using specified interfaces [10]. The aim of the software presented in this paper, called NAVIS, is to help the neuromorphic community work with cochlea data in order to visualize this information and adapt it to build training sets for later learning. To demonstrate these software functionalities, a 64-channel binaural Neuromorphic Auditory Sensor (NAS) for FPGA [5] has been used together with a USB-AER interface [10]. NAS responses are stored as aedat files through jAER. However, since this software works with aedat files, it can work with any cochlea sensor connected to jAER using its aedat files.
The mammalian brain contains more than 10^10 densely packed neurons that are connected in an intricate network. In every small volume of cortex, thousands of spikes are emitted each millisecond. It is generally agreed that information is transmitted from one neuron to another by an action potential. The action potential can travel along the neuronal fibres at a speed of about forty meters per second. However, a few questions remain unanswered, such as what information is contained in such a spatio-temporal pattern of pulses, what code the neurons use to transmit that information, and how other neurons might decode the signal. Therefore, there is a great deal of ongoing research on neuronal spikes, which has resulted in several coding schemes. Among the potential coding schemes are rate coding, temporal coding, and population coding. However, in this thesis only rate coding and temporal coding are discussed, in order to compare the coding logic of these two. The next section analyses the most widely accepted coding schemes, rate coding and temporal coding, in traditional neural networks and spiking neural networks respectively.
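The two schemes compared in the following section can be sketched for a single scalar value: rate coding carries the value in how often a neuron fires, while temporal (time-to-first-spike) coding carries it in when the neuron fires. The encoding functions below are illustrative assumptions, not a specific scheme from the literature.

```python
import numpy as np

def rate_code(value, n_steps=100, rng=None):
    """Rate coding sketch: a value in [0, 1] sets the probability of a spike
    in each time step, so information is carried by the mean firing rate."""
    rng = rng or np.random.default_rng()
    return (rng.random(n_steps) < value).astype(int)

def latency_code(value, n_steps=100):
    """Temporal (time-to-first-spike) coding sketch: a larger value produces
    an earlier single spike, so information is carried by spike timing."""
    train = np.zeros(n_steps, dtype=int)
    t = int(round((1.0 - value) * (n_steps - 1)))
    train[t] = 1
    return train

x = 0.8
print("rate code spike count:", rate_code(x).sum())          # ~80 on average
print("latency code spike time:", latency_code(x).argmax())  # early (~step 20)
```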
However, there is a lack of efficient methods for modeling such data and for spatio-temporal pattern recognition (STPR) that can facilitate the discovery of complex STP from streams of data and the prediction of new spatio-temporal events. Brain-inspired spiking neural networks (SNN) (Gerstner & Kistler, 2002b; Maass & Zador, 1999) are considered the third generation of neural networks. They are a promising paradigm for STPR, as this new generation of computational models and systems is potentially capable of modeling complex information processes due to its ability to represent and integrate different information dimensions, such as time, space, frequency and phase, and to deal with large volumes of data in an adaptive and self-organizing manner.
Abstract—Speech emotion recognition (SER) is an important part of the affective computing and signal processing research areas. A number of approaches, especially deep learning techniques, have achieved promising results on SER. However, there are still challenges in capturing the temporal and dynamic changes of emotion expressed through speech. Spiking Neural Networks (SNN) have been demonstrated as a promising approach in machine learning and pattern recognition tasks such as handwriting and facial expression recognition. In this paper, we investigate the use of SNNs for SER tasks and, more importantly, we propose a new cross-modal enhancement approach. This method is inspired by auditory information processing in the brain, where auditory information is preceded, enhanced and predicted by visual processing in multisensory audio-visual processing. We have conducted experiments on two datasets to compare our approach with state-of-the-art SER techniques in both uni-modal and multi-modal settings. The results demonstrate that SNNs can be an ideal candidate for modeling temporal relationships in speech features and that our cross-modal approach can significantly improve the accuracy of SER.
Spiking neural networks (SNNs), the third generation of ANNs, play an essential role in biological information processing (Gerstner and Kistler, 2002). Compared with ANNs, which use rate coding to represent neuronal activity, spiking models provide a more in-depth description of biological neuronal behavior. Computations in real neurons use more information than the average firing rate alone. Furthermore, instead of rate coding, the difference in firing times may be used (Belatreche et al., 2006).
The main aim of this paper was to explore a hypothesized role for ITDP in the coordination of ensemble learning, and in so doing to present a biologically plausible architecture, with attendant mechanisms, capable of producing unsupervised ensemble learning in a population of spiking neural networks. We believe this has been achieved through the development of an MoE-type architecture built around SEM networks whose outputs are combined via an ITDP-based mechanism. While the architecture was successfully demonstrated on a visual classification task, and performed well, our central concern in this paper was not to absolutely maximize its performance (although of course we have striven for good performance). There are various methods and tricks from the machine learning literature on ensemble learning that could be employed in order to increase performance a little, but a detailed investigation of such extensions is outside the scope of the current paper, which it would make far too long, and some would involve data manipulation that would move the system away from the realms of biological plausibility, which would detract from our main aims. However, one interesting direction for future work related to this involves using different input data subsets for each ensemble member. This can increase diversity in the ensemble, which has been shown to boost performance in many circumstances [18, 49], a finding that seems to carry over to our spiking ensemble system according to the observations on diversity described in the previous section. Preliminary experiments were carried out in which each SEM classifier was fed its own distinct and separate dataset (all classifiers were fed in parallel, with an expanded, separate set of input neurons for each classifier, rather than them all using the same ones as in the setup described earlier in this paper). A significant increase in the overall ensemble performance after training was observed, as shown in Fig 15. Further work needs to be done to investigate the generalization of these results and to analyse differences in learning dynamics for the ensemble system with single (one set for all classifiers) and multiple (a different set for each classifier) input datasets. The issue of how such multiple input datasets might impinge on biological plausibility must also be examined. A related area of further study is applying the architecture to multiple sensory modes, with data from different sensory modes feeding into separate ensemble networks. Some of the biological evidence for ensemble learning, as discussed in the Introduction section, appears to involve the combination of multiple modes. Although we have tested the architecture using a single sensory mode, there is no reason why it cannot be extended to handle multiple modes.
We show that we can thus compose computationally efficient Adaptive Spiking Neural Networks (ASNNs) through drop-in replacement of ReLU neurons in standard ReLU-based feedforward and convolutional ANNs, and we demonstrate identical performance to these ANNs without additional modifications. The ASNNs outperform previous SNNs like [9, 10] on a selection of standard benchmarks, including MNIST; they require an order of magnitude fewer spikes, albeit analog rather than binary spikes (and are still more efficient in terms of communicated bits); and they are also up to an order of magnitude faster, in the sense that they need much less temporal averaging over output spike trains. Additionally, the presented network is able to carry out ongoing asynchronous neural computation in continuous time: neurons are updated at high temporal precision and information is exchanged only sparingly, in an asynchronous manner. This contrasts with traditional ANNs, which compute in a fully synchronous fashion: in a streaming classification problem, we show that ASNNs require much sparser network updating than traditional ANNs.