The brain is what it is because of the structural and functional properties of its interconnected neurons. The basic function of a neuron is to transfer information to other parts of the body. Neurons communicate with one another through synapses: the sending neuron releases a chemical messenger called a neurotransmitter into the synaptic cleft, the neurotransmitter crosses the cleft and binds to receptors on the next neuron, and this binding causes a change inside the receiving neuron, delivering the message. Neuromorphic chips are electronic systems that mimic this behaviour of the human brain; neuromorphic hardware is thus a link between an electronic system and the brain. Such hardware is implemented with micro- and nano-scale transistors, whose non-linearity is used to replicate the electrochemical function of biological neurons. Transistors in neuromorphic chips act as switches and also provide neuroplasticity, the brain's ability to modify synapses while processing data. Neuromorphic computing is in strong demand because it offers flexibility, robust memory, asynchronous spiking signals, and precise timing.
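The spiking behaviour that neuromorphic chips emulate in silicon is commonly abstracted as an integrate-and-fire neuron. A minimal leaky integrate-and-fire sketch in Python; all parameter values here are illustrative, not taken from any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: membrane potential
# integrates input, leaks toward rest, and fires on crossing a threshold.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.1, dt=1.0):
    """Return spike times for a sequence of input-current samples."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) * leak + i_in)  # leaky integration
        if v >= v_thresh:                        # threshold crossing -> spike
            spikes.append(t)
            v = v_rest                           # reset after the spike
    return spikes

# A constant drive produces a regular spike train.
spikes = simulate_lif([0.15] * 50)
print(spikes)
```

Driving the neuron harder (larger input samples) shortens the interval between spikes, which is how analog quantities are rate-encoded in many spiking systems.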
Memristors, mathematically predicted by Leon O. Chua in 1971 as the fourth basic circuit element, were experimentally demonstrated in 2008. Since that first prediction, they have been regarded as a potential candidate for future neuromorphic computing systems. Among the many advantages of memristors, the nonlinear charge-flux relationship is particularly important for mimicking the synaptic plasticity of biological neuronal systems such as the human brain [3-7].
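The charge-flux nonlinearity can be illustrated with the linear ion-drift memristor model from the 2008 demonstration: resistance depends on the total charge that has flowed through the device. A sketch with purely illustrative parameter values:

```python
# Sketch of a linear ion-drift memristor model: the device resistance
# interpolates between R_ON and R_OFF according to a charge-driven
# state variable, so conductance "remembers" past current (synapse-like).
import math

def memristance(q, r_on=100.0, r_off=16000.0, q_max=1e-6):
    """Resistance as a function of accumulated charge q (state clipped to [0, 1])."""
    x = min(max(q / q_max, 0.0), 1.0)       # normalised internal state
    return r_on * x + r_off * (1.0 - x)     # mix of ON and OFF resistance

# A sinusoidal voltage drives charge through the device; as charge
# accumulates, the resistance drifts, which feeds back into the current.
dt, q = 1e-5, 0.0
for step in range(1000):
    v = 0.5 * math.sin(2 * math.pi * 50 * step * dt)
    q += (v / memristance(q)) * dt          # i = v / M(q); q integrates i
print(f"final memristance: {memristance(q):.1f} ohms")
```

The state-dependence of the resistance is what produces the pinched hysteresis loop characteristic of memristors, and is the device-level basis for synaptic weight storage.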
achieved by acting on the perturbation's duration and strength. These results, obtained with off-the-shelf devices operating at telecom wavelengths, combined with the unique attributes of VCSELs, offer great prospects for their use as a single platform for photonic excitatory and inhibitory neurons. We foresee that these will be key building blocks in future brain-inspired networks of photonic neurons for novel ultrafast neuromorphic computing systems.
The layout of this chapter is as follows: experimental evidence for a memristive response of individual nanowire junctions, and a global memristive response of a NWN, is presented in section 5.1. An empirical model for individual junctions is also introduced in this section. In section 5.2, a computational routine to simulate a large network of memristive junctions is introduced and the memristive responses of a network to increasing current flow are presented. The conductance of both junctions and nanowire networks is shown to scale as a power law with increasing current levels in section 5.2. The exponent of the network's power law is shown to be the same as that of the individual junctions, revealing for the first time a self-similarity between the individual and the collective. The activation patterns of nanowire networks are shown to vary according to certain measurable parameters of the memristive junction model. In section 5.3, a mapping technique is presented that allows for the visualisation of current flow through a network at any stage in its junction evolution. This mapping shows that for certain nanowire parameters the current flows through a single pathway between electrodes in a winner-takes-all manner. This behaviour had not been reported previously in the literature and was later confirmed with experimental measurements. The existence of localised current flows has potential application in neuromorphic computing, and a novel method to achieve independent and associative conductive states in a NWN is presented in section 5.4. Finally, a chapter summary is presented in section 5.5.
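A power-law relation G ∝ I^α, as reported for both junctions and networks, is typically identified as a straight line in log-log coordinates. A small sketch of how the exponent can be recovered by a log-log least-squares fit; the synthetic exponent 0.7 and prefactor are purely illustrative, not values from the chapter:

```python
# Fit the exponent of a power law G = G0 * I^alpha from (current,
# conductance) data via linear regression on log-transformed values.
import math

def fit_power_law_exponent(currents, conductances):
    """Least-squares slope of log G versus log I."""
    xs = [math.log(i) for i in currents]
    ys = [math.log(g) for g in conductances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data spanning two decades of current (arbitrary units).
currents = [10 ** (k / 4) for k in range(-8, 1)]
conductances = [2.5 * i ** 0.7 for i in currents]   # G = G0 * I^0.7
print(round(fit_power_law_exponent(currents, conductances), 3))
```

Applying the same fit to junction-level and network-level data and comparing the two slopes is one direct way to exhibit the self-similarity claimed in the chapter.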
This work provides a review of the application of RRAM synapses to mixed-signal neuromorphic computing and the challenges involved in their interface with CMOS neuron circuits. The interplay of devices, circuits, and algorithms is important, and their co-development is critical in optimizing the overall energy-efficiency of the NeuSoC architecture and bringing it closer to biology-like efficiency. With continued progress, such neuromorphic architectures pave the path for computing beyond the limitations set by Moore's-law scaling of CMOS transistors and the energy bottleneck of von Neumann computers. Moreover, such NeuSoCs open the possibility of realizing general-purpose Artificial Intelligence in portable devices instead of always relying upon energy-intensive Cloud infrastructure. In doing so, NeuSoCs provide a new avenue for memory technology development, where memory itself can be the next-generation platform, integral to computing. The in-memory computation occurring in these NeuSoC architectures will place the emphasis on dense integration of memory arrays with peripheral neural circuits, extending to 3D stacking and networks on chip. Future work includes simulation and evaluation of device parameters for simultaneous development and fine-tuning of learning algorithms for targeted applications.
Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling approaching its end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computing in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the path towards a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics, and hence are either incompatible with RRAM synapses or require extensive peripheral circuitry to modulate a synapse, and are thus deficient in learning capability. As a result, they eliminate most of the density advantages gained by the adoption of nanoscale devices and fail to realize a functional computing system.
Abstract: By using Internet technology, the cloud provides virtualized IT resources as a service. Cloud computing is a combination of grid computing and cluster computing: a computer grid is created over the Internet whose purpose is to share resources, such as computer software and hardware, on a pay-per-use model. The main appeal of cloud computing is that you can access your data from any corner of the world over the Internet. Cloud computing is a general term for services delivered through the Internet; it offers virtualized compute power and storage delivered via platform-agnostic infrastructures of abstracted hardware and software accessed over the Internet. Cloud computing systems usually operate under various deployment models, such as public, private, hybrid, and community models.
Increasing the power and speed of a data centre is not always efficient and sometimes incurs additional cost, so one should not expect to increase efficiency beyond a required limit. Distributing data centres and using the closest one is a better and far more optimal choice. It has been predicted that storage and computing on personal computers will be abandoned in favour of distributed clouds. Therefore, the architecture and evaluation of data centres should be designed for the future of computing through suitable prediction. According to the review and evaluation performed in the field of high-performance computing, high-performance distributed computing through grids, clusters, and clouds still falls short in performance evaluation, and special measures are required for this work. It is better to consider delay in evaluations, or to implement a criterion for evaluating service-level agreements, because these agreements are most important to users; more accurate evaluation can be achieved in the future by specifying the type of users' requests or by specifying and distinguishing all users.
Abstract: Advancements in the study of the human sense of touch are fueling the field of haptics. This is paving the way for augmenting sensory perception during object palpation in tele-surgery and reproducing the information through tactile feedback. Here, we present a novel tele-palpation apparatus that enables the user to detect nodules of various distinct stiffnesses buried in an ad-hoc polymeric phantom. The contact force measured by the platform was encoded using a neuromorphic model and reproduced on the index fingertip of a remote user through a haptic glove embedding a piezoelectric disk. We assessed the effectiveness of this feedback in allowing nodule identification under two experimental conditions of real-time telepresence: In Line of Sight (ILS), where the platform was placed within the visible range of the user, and the more demanding Not In Line of Sight (NILS), with the platform 50 km away. We found that the percentage of identification was higher for stiffer inclusions than for softer ones (an average of 74% within the duration of the task) in both telepresence conditions evaluated. These promising results call for further exploration of tactile augmentation technology for telepresence in medical interventions. Keywords: nodule detection, neuromorphic touch, polymeric phantom, sensory augmentation, tactile telepresence, teleoperation, tele-palpation, vibro-tactile stimulation.
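The core signal path here is force-to-spike encoding: measured contact force becomes a train of pulses driving the piezoelectric disk. A minimal integrate-and-fire sketch of such an encoder; the paper's actual neuromorphic model, and every constant below, may differ from this illustration:

```python
# Integrate-and-fire encoding of a force signal into a spike train:
# the force accumulates over time and each threshold crossing emits
# one pulse (here, one vibro-tactile stimulation event).

def force_to_spikes(force_samples, gain=1.0, threshold=10.0):
    """Integrate force samples; emit a spike each time the sum crosses threshold."""
    accumulator, spike_train = 0.0, []
    for f in force_samples:
        accumulator += gain * f
        if accumulator >= threshold:
            spike_train.append(1)         # spike -> one piezo pulse
            accumulator -= threshold
        else:
            spike_train.append(0)
    return spike_train

# A stiffer nodule produces larger contact forces, hence a denser
# spike train at the fingertip (illustrative force values).
soft = force_to_spikes([3.0] * 20)        # cumulative force 60 -> 6 spikes
stiff = force_to_spikes([8.0] * 20)       # cumulative force 160 -> 16 spikes
print(sum(soft), sum(stiff))
```

The spike density difference between soft and stiff inclusions is what would let a remote user discriminate stiffness from vibro-tactile feedback alone.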
One must also consider the interface between computational blocks. A 1024 × 1024 imager computing at a 60 Hz image rate requires a parallel data rate (1024 signals) of 60 kHz. If two blocks are adjacent on the same IC, this data rate is trivial to accommodate. However, if these signals must be passed between chips, over 100 mega analog samples per second are required, which is a more challenging specification. This rate is similar to reading out pixels from any standard CMOS array. Each pixel could be directly read out in a transform imager, since a column scan is equivalent to multiplication by a digital value moving by one position at each step. In general, this issue is significant when interfacing to a digital system, since multiple "images" could be transmitted to the controlling digital system.
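The quoted rates can be checked with back-of-envelope arithmetic: the raw pixel rate works out to about 63 mega-samples/s, so the "over 100" figure presumably includes readout overhead and margin. A sketch of the calculation (the margin factor is our assumption, not from the text):

```python
# Back-of-envelope data rates for a 1024 x 1024 imager at 60 frames/s:
# in parallel, each of the 1024 column signals carries one sample per
# row time; serialized onto a chip-to-chip link, all pixels share one wire.

width, height, frame_rate = 1024, 1024, 60

parallel_rate_hz = height * frame_rate          # per-signal row rate (~60 kHz)
serial_rate_sps = width * height * frame_rate   # raw pixel rate on one link

print(f"parallel: {parallel_rate_hz / 1e3:.1f} kHz across {width} signals")
print(f"serial:   {serial_rate_sps / 1e6:.1f} mega-samples/s raw "
      f"(> 100 MS/s once overhead/margin is included)")
```

The contrast between the modest 60 kHz parallel rate and the tens-of-MS/s serial rate is the crux of the on-chip versus chip-to-chip interface argument.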
In earlier days, there were time-shared computing systems. Grid computing is a processor architecture that associates computer resources from various areas; an individual computer can connect with a network of computers that perform tasks together, working as a super processor (Devika Rani). The idea of cloud computing is to increase reliability and flexibility by transforming computers (Ian Foster). Technically speaking, grid computing enables computing and data-storage resources to be combined into a single system image, granting users and applications information technology (IT) capabilities (Rahul
To mitigate the environmental hazards due to computing devices, we need to concentrate on green and sustainable computing. Several computational measures, as shown in Fig 1, have been adopted that help in minimising the energy consumption of computers. But this is not sufficient for fully realising sustainable computing. One strategy to attain sustainable computing is to utilise existing resources optimally and fully, which will minimise the requirement for new devices and ultimately reduce the problems arising from the production process as well as from e-waste. Several approaches have been adopted in this regard, as mentioned in Table 1. Among them, Grid and Cloud computing are the most prominent initiatives, and they have considerably minimised the requirement of owning personal computer systems. They have also replaced the need for centralised HPC systems such as supercomputers and mainframes to some extent. Though Grid computing intends to fully utilise existing resources (desktops), Cloud computing is not as successful in this respect: it needs additional resources to provide cloud services. The centralised resources at the cloud service provider's end, such as data centres, consume massive amounts of energy, leading to substantial greenhouse gas emissions.
This allows the proposed design to serve as a suitable learning and computational component for large-scale, low-power neuromorphic circuits with high biological capability. However, one should keep in mind that any analog VLSI design will be affected by mismatch due to fabrication imperfections. Therefore, besides area and energy consumption, mismatch should also be taken into account when designing an analog synaptic plasticity circuit for learning and computational purposes. Process variation and transistor mismatch: apart from power consumption and silicon area, transistor mismatch is another challenge that is always associated with analog VLSI designs, especially designs for synaptic plasticity circuits. The functionality of these circuits depends on the synaptic parameters, and changes in the values of these parameters, which can happen due to process variations, result in deviations from the synaptic circuit's expected behaviour. These deviations can degrade the synaptic plasticity capability. Mismatch may be taken into account from two different design perspectives.
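The impact of such parameter deviations is often estimated with a Monte Carlo analysis. A sketch of the idea: perturb a threshold-voltage-like parameter with Gaussian variation and observe the spread in a subthreshold current. The simple exponential current model and all values (sigma, thermal-voltage slope) are illustrative assumptions, not data from the design under discussion:

```python
# Monte Carlo sketch of transistor mismatch: because subthreshold
# current is exponential in the threshold voltage, a few millivolts of
# device-to-device variation produces a large current spread.
import math
import random

def subthreshold_current(v_gate, v_t=0.45, i0=1e-12, n_ut=0.035):
    """Simple subthreshold drain-current model, exponential in (Vg - Vt)."""
    return i0 * math.exp((v_gate - v_t) / n_ut)

random.seed(0)                                  # reproducible sampling
nominal = subthreshold_current(0.3)
currents = [subthreshold_current(0.3, v_t=random.gauss(0.45, 0.01))
            for _ in range(1000)]               # 10 mV sigma on Vt
spread = max(currents) / min(currents)
print(f"nominal {nominal:.2e} A, max/min spread across devices: {spread:.1f}x")
```

A spread of several-fold in a "matched" current mirror is exactly the kind of deviation that shifts a synaptic plasticity circuit away from its designed learning rule.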
Provenance refers to the derivation history of a data product, including all the data sources, intermediate data products, and the procedures that were applied to produce it. In Grids, provenance management has generally been built into workflow systems, from early pioneers such as Chimera to modern scientific workflow systems such as Swift, Kepler, and VIEW, to support the discovery and reproducibility of scientific results. It has also been built as a standalone service, such as PreServ, to facilitate the integration of a provenance component into more general computing models and to deal with trust issues in provenance assertions.
Principles of design for network-based neuromorphic systems have been presented by Partzsch et al. A reconfigurable memristive dynamical system has been presented by Bavandpour et al., which can be applied to learning and dynamical systems. Memristive crossbar circuits have also been demonstrated to be suitable for efficient neural network training by Irina et al., who show low error rates using batch and stochastic training approaches on a handwritten digit recognition dataset. Neuro-inspired devices have been developed for unsupervised learning by Chabi et al. [51, 52], as well as for an inference engine by Querlioz et al. A general model for voltage-controlled memristors has been developed by Kvatinsky et al. Further, Prezioso et al. present transistor-free metal-oxide memristor crossbars for binary image classification using a single-layer perceptron. Memristor-based self-healing circuits have been presented by Gu et al. Sampath et al. present a CMOS-memristor based FPGA architecture for memory cells.
In this paper, we propose a new computationally efficient artificial neuron model that accounts for all known neural behaviors. In this model, the axon is an independent computational unit complementary to the classic somatic computational unit, which evokes action potentials. Compared to the soma, which integrates dendritic inputs on a timescale of milliseconds to seconds, the axon integrates the spikes evoked by the soma on a timescale of tens of seconds to minutes, and consequently determines the persistent firing behavior of the axon. Besides the computational model, a neuromorphic model of persistent firing neurons and its analog circuit are also proposed. In addition, a polychronous spiking network with persistent firing inhibitory interneurons is simulated, which may assist the development of spiking-network-based memory and bio-inspired computer systems.
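The two-timescale idea can be sketched numerically: a fast somatic integrator emits spikes, and a much slower axonal accumulator integrates those spikes, eventually sustaining output after the input stops. This is a toy illustration of the concept only; the time constants, thresholds, and update rules below are our assumptions, not the paper's model:

```python
# Two-timescale neuron sketch: fast soma (ms) evokes spikes, slow axon
# (tens of seconds) integrates them; once the axonal state is charged,
# firing persists even without somatic input (persistent firing).

def two_timescale_neuron(inputs, dt=0.001,
                         tau_soma=0.02, soma_thresh=1.0,
                         tau_axon=30.0, axon_thresh=0.5):
    v_soma, a_axon, output = 0.0, 0.0, []
    for i_in in inputs:
        v_soma += dt / tau_soma * (i_in - v_soma)    # fast integration
        spike = v_soma >= soma_thresh
        if spike:
            v_soma = 0.0
            a_axon += 0.01                           # each spike charges the axon
        a_axon -= dt / tau_axon * a_axon             # slow decay
        # output fires on a somatic spike, or persistently once the
        # axonal accumulator exceeds its threshold
        output.append(1 if (spike or a_axon >= axon_thresh) else 0)
    return output

# Drive for 60 s, then 10 s of silence: firing outlasts the input.
out = two_timescale_neuron([2.0] * 60000 + [0.0] * 10000)
print(sum(out[:60000]), sum(out[60000:]))
```

The point of the separation of timescales is that the slow variable acts as a short-term memory of recent activity, which is what makes persistent firing possible.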
In terms of synaptic mechanisms, many studies have explored the implementation of the simple yet naive pair-based STDP rule using memristive devices. Only a few studies report implementations of other, more powerful synaptic plasticity mechanisms such as suppressive STDP. These mechanisms, which have advanced synaptic plasticity (learning) abilities compared to the PSTDP rule, can improve the performance of the developed neuromorphic architectures in learning and computation. In order to reach higher learning capabilities in future neural architectures, this paper proposes a novel CMOS-memristive design for a higher-order STDP rule, namely triplet STDP, which has advantages over its previous CMOS as well as memristive counterparts and significantly improves the learning capabilities of neuromorphic synapses. The proposed synaptic circuit is composed of two memristors along with several CMOS transistors to account for the non-linearities of the triplet rule proposed by Pfister and
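For reference, the triplet STDP rule in its standard trace form adds a triplet term to each pair-based update: potentiation on a post-spike scales with a second, slower postsynaptic trace, and depression on a pre-spike with a second presynaptic trace. A behavioural sketch (the amplitudes and time constants below are illustrative, not the values the circuit implements):

```python
# Triplet STDP in trace form: fast/slow presynaptic traces (r1, r2) and
# fast/slow postsynaptic traces (o1, o2). Weight updates use the trace
# values just before the current spike's own increment.

def triplet_stdp(pre_spikes, post_spikes, t_end, dt=0.1, w0=0.5,
                 tau_plus=16.8, tau_minus=33.7, tau_x=101.0, tau_y=125.0,
                 a2p=5e-3, a2m=7e-3, a3p=6e-3, a3m=2e-4):
    """Simulate the rule over [0, t_end] ms; spike times in ms."""
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for k in range(int(t_end / dt)):
        t = k * dt
        pre = any(abs(t - s) < dt / 2 for s in pre_spikes)
        post = any(abs(t - s) < dt / 2 for s in post_spikes)
        if pre:
            w -= o1 * (a2m + a3m * r2)   # depression: pair + triplet term
        if post:
            w += r1 * (a2p + a3p * o2)   # potentiation: pair + triplet term
        if pre:
            r1, r2 = r1 + 1.0, r2 + 1.0
        if post:
            o1, o2 = o1 + 1.0, o2 + 1.0
        r1 -= dt / tau_plus * r1         # exponential decay of all traces
        r2 -= dt / tau_x * r2
        o1 -= dt / tau_minus * o1
        o2 -= dt / tau_y * o2
    return w

# Pre-before-post (+10 ms) potentiates; post-before-pre depresses.
w_ltp = triplet_stdp(pre_spikes=[10.0], post_spikes=[20.0], t_end=50.0)
w_ltd = triplet_stdp(pre_spikes=[20.0], post_spikes=[10.0], t_end=50.0)
print(w_ltp > 0.5, w_ltd < 0.5)
```

The triplet term is what lets the rule reproduce frequency-dependent effects, such as stronger potentiation at high pairing rates, that the pair-based rule cannot capture.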
Neuromorphic engineering is a discipline characterized by two main goals: first, to understand the computational properties of biological neural systems, using standard CMOS VLSI technology as a tool; and second, to make use of the known properties of biological systems to design and implement efficient devices for engineering applications. It was first observed by Carver Mead that CMOS circuits operating in the sub-threshold region have current-voltage characteristics similar to those of the ion channels present in neurons and synapses, and that they also consume little power; hence they can be used as analogues of neurons and synapses.
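Mead's observation in one picture: both the subthreshold MOS drain current and the open fraction of a voltage-gated ion channel vary exponentially with voltage, because both follow Boltzmann statistics. The constants below are typical textbook values used only for illustration:

```python
# Exponential voltage dependence shared by subthreshold MOS transistors
# and voltage-gated ion channels -- the basis of analog neuromorphic VLSI.
import math

KT_Q = 0.0258        # thermal voltage at ~300 K, volts
KAPPA = 0.7          # gate-coupling coefficient of the MOS transistor

def mos_subthreshold(v_gs, i0=1e-15):
    """Subthreshold drain current (saturation), exponential in Vgs."""
    return i0 * math.exp(KAPPA * v_gs / KT_Q)

def channel_open_fraction(v_m, v_half=-0.040, slope=0.010):
    """Boltzmann activation curve of a voltage-gated ion channel."""
    return 1.0 / (1.0 + math.exp(-(v_m - v_half) / slope))

# A 100 mV gate swing changes the drain current more than tenfold,
# at picoamp-to-nanoamp (hence very low power) current levels.
ratio = mos_subthreshold(0.4) / mos_subthreshold(0.3)
print(f"current ratio per 100 mV of gate swing: {ratio:.0f}x")
```

This shared exponential form is why a handful of subthreshold transistors can emulate an ion-channel conductance directly, rather than computing it digitally.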
computing. Similar to synaptic strength in neural networks, the conductance is controlled by the time difference of incoming voltage pulses. In addition, the threshold voltages for switching the conductance of the device were shown to be state-dependent, which emerges from the interplay of a memristance with a memcapacitance. In contrast to other realizations of memristors [47-49], memristance and memcapacitance switching of the presented device are observed between different terminals. The memristance is measured in the two-terminal geometry and the memcapacitance between the lateral gates and the wire. Thus the state of the device controls the charging and discharging voltages via the gate-channel capacitance (intrinsic feedback). This may enable the implementation of memory-dependent induction of learning or the realization of counters and integrate-and-fire neurons. Here we exploited the feedback to show the capability of performing arithmetic operations in different bases with clearly distinguishable reset states.
What is needed is a platform that can, in some sense, extract the computational characteristics that matter from biological neural networks and apply them in a concrete context that demonstrates why they matter and how we might use them to engineer systems that work with messy real-world data. In this paper, we demonstrate the integration of a "neuromorphic" chip, SpiNNaker, with a complex humanoid robot, the iCub, and show how such a system can learn to recognize and attend to objects of preference in an unsegmented scene, in real time, without relying on off-line training or imperative direction. We further indicate the implications for both neuroscience and neural engineering of a structured approach to learning and architectures that might guide design toward autonomous systems. While our neurorobot cannot yet be considered autonomous, we suggest that by demonstrating real-time learning for a simple real-world task, our system