Multiple-scale dynamics in neural systems: learning, synchronization and network oscillations

several spatial and/or temporal scales, thus providing strong interactions between the scales. For example, neurons produce spikes (very short electrical pulses that carry information from neuron to neuron) whose timing is in some cases precise to within one millisecond [57, 102, 10]. On the other hand, there are examples showing that much information is transmitted between neurons in the variations of the average firing rate [37, 15]. Those variations occur on the timescale of hundreds of milliseconds. The debate still rages among neuroscientists as to what matters most in the ‘neural code’ – precise spike timing or average firing rate? The emerging answer seems to be that both are essential and that their relative importance may depend on the situation and context [100, 63, 23, 55]. It is therefore impossible to separate temporal scales in this case, and dynamical processes with timescales from 1 ms to 100 ms should be analyzed simultaneously. This and other examples demonstrate that in most cases the proper way to harness the power of the ‘separation of scales’ approach for neural systems has not yet been found. Moreover, it may turn out that it cannot be found in principle, and that scientists may have to deal with the problem of understanding the brain in its full integrity. It is not yet known when and how to ‘coarse-grain’ in order to move from one ‘scale’ to another, or whether this procedure is applicable at all.
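To make the two codes concrete, here is a minimal sketch (not from the text; the spike train and window sizes are hypothetical) that reads one and the same spike train at 1 ms resolution as a timing code and over 100 ms windows as a rate code:

```python
import numpy as np

# Illustrative only: a hypothetical spike train over a 1-second window,
# with spike times in milliseconds.
rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0, 1000, size=80))

# 'Timing code' view: spikes binned at 1 ms resolution.
timing_code = np.histogram(spike_times, bins=np.arange(0, 1001, 1))[0]

# 'Rate code' view: average firing rate over 100 ms windows (spikes/s).
counts = np.histogram(spike_times, bins=np.arange(0, 1001, 100))[0]
rate_code = counts / 0.1  # counts per 100 ms converted to Hz

print(timing_code[:20])  # millisecond-precise pattern of 0s and 1s
print(rate_code)         # ten slowly varying rate values
```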

Reinforcement learning in a large-scale photonic recurrent neural network

While networks with multiple nodes are more challenging to implement, they offer key advantages in terms of parallelism and speed, and for realizing the essential vector-matrix products. Here, we demonstrate a network of up to 2025 nonlinear network nodes, where each node is a pixel of a spatial light modulator (SLM). Recurrent and complex network connections are implemented using a diffractive optical element (DOE), an intrinsically parallel and passive device [14]. Simulations based on the angular spectrum of plane waves show that the concept is scalable to well over 90,000 nodes. In a photonic RNN with N = 900 nodes, we implement learning using a digital micro-mirror device (DMD). The DMD is intrinsically parallel as well and, once weights have been trained, passive and energy efficient. The coupling and learning concepts’ bandwidth and power consumption are in practice not impacted by the system’s size, offering attractive scaling properties. Here, we apply such a passive and parallel readout layer to an analog hardware RNN, and introduce learning strategies that improve the performance of such systems. Using reinforcement learning, we implement time series prediction with excellent performance. Our findings open the door to novel and versatile photonic NN concepts.
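As a rough software intuition for learning with a Boolean, passive readout, here is a hedged numerical sketch: a fixed random coupling matrix stands in for the DOE, binary readout weights stand in for DMD mirror states, and a greedy flip-and-keep-if-better rule stands in for the reinforcement learning. None of the details below are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 900, 1000
W = rng.normal(0, 1 / np.sqrt(N), (N, N))        # fixed recurrent coupling
w_in = rng.normal(0, 1, N)

u = np.sin(np.linspace(0, 40 * np.pi, T + 1))    # toy signal to predict
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # nonlinear node update
    states[t] = x
target = u[1:T + 1]                              # one-step-ahead prediction

def nmse(yv):
    g = (yv @ target) / (yv @ yv + 1e-12)        # optimal scalar output gain
    return np.mean((g * yv - target) ** 2) / np.var(target)

mirrors = rng.integers(0, 2, N).astype(float)    # Boolean readout weights
y = states @ mirrors
err = nmse(y)
for _ in range(3000):
    k = rng.integers(N)
    sign = 1.0 - 2.0 * mirrors[k]                # +1 turns mirror on, -1 off
    y_new = y + sign * states[:, k]              # incremental readout update
    e = nmse(y_new)
    if e < err:                                  # greedy: keep improving flips
        mirrors[k] += sign
        y, err = y_new, e
print(round(err, 4))                             # final prediction error
```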

Neural Differentiation Dynamics Controlled by Multiple Feedback Loop Motifs in a Comprehensive Molecular Interaction Network

aPKC_PAR3_PAR6, and PI3K, represented as GSK3B_ca, aPKC_ca, and PI3K_ca in the digested model respectively, are consistent with previous experimental results [20] (Figure 5B, C). The results of simulating Id2 knockdown or overexpression are also consistent with experimental results [14]. Therefore, our digested model could adequately simulate the dynamics not only of HES1 and ASCL1 but also of other molecules. Our model suggests that three loops (HES1 negative self-feedback, positive feedback between aPKC_PAR3_PAR6 and PI3K, and negative feedback between GSK3B and HES1) are important to maintain undifferentiated-state oscillations. We suggest that the negative-feedback loop between beta-catenin and HES1 in the comprehensive regulatory network is most important because it makes the greatest contribution to the characteristic dynamics (Figure 7D). A relation between beta-catenin and HES1 plays a role in tumorigenesis [46]. As HES1 controls cancer stem cells [47], this hitherto overlooked negative feedback loop may be related to the proliferation and differentiation of cancer stem cells. Further experimental studies, such as perturbing the loop by knockdown, are expected to reveal the detailed mechanism of neural differentiation. These findings could only be made using an analysis based on a large-scale regulatory network, highlighting the effectiveness of our approach. We demonstrated that focusing on feedback loop motifs instead of the whole network when constructing a model was sufficient for agreement with experimental results. Our approach could be applied to the analysis of various biochemical networks by simulation. By streamlining large-scale regulatory network construction, our approach could help to analyze various biological
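For intuition about the first of the three loops, here is a minimal sketch of delayed HES1 negative self-feedback producing oscillations; it is a generic illustration with invented parameters, not the authors' digested model:

```python
import numpy as np

# Delayed negative self-feedback: HES1 represses its own production after a
# transcriptional delay. All parameter values are illustrative.
dt, tau = 0.1, 20.0                      # time step and delay (minutes)
steps, lag = 6000, int(tau / dt)
k_syn, k_deg, K, n = 1.0, 0.05, 1.0, 4   # synthesis, decay, Hill params

h = np.ones(steps) * 0.5                 # HES1 level over time
for t in range(1, steps):
    h_delayed = h[max(t - lag, 0)]
    # Production repressed by delayed HES1 (Hill function), linear decay.
    dh = k_syn * K**n / (K**n + h_delayed**n) - k_deg * h[t - 1]
    h[t] = h[t - 1] + dt * dh

# With a sufficient delay and a steep enough Hill coefficient, h oscillates.
print(h[::500].round(3))
```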

Future Directions in Nano-Scale Systems, Dynamics and Control

of assembling submicron particles and other building blocks using an array of hundreds of optical traps combined with chemical assembly. Jason Gorman from the National Institute of Standards and Technology (NIST) also mentioned multiple-optical-trap-based manufacturing of micro/nano-scale structures and devices. He showed a multi-level control scheme for the optical-tweezers-based nano-manipulation system, as given in Figure 8. The main limitations of optical-tweezers-based nano-manipulation are: the minimum single object size that can be trapped stably is around 40-50 nm (the standard trapped particle size is around 0.5 or 1 µm); precision, distributed, and high-speed XYZ position control of each trap is challenging; and assembled objects must be bonded by chemical or other means to form a mechanically strong and stable material. Finally, electrophoretic and dielectrophoretic forces are used to assemble carbon nanotubes in parallel [11] for nano-electronic circuit applications in the near future. Microfabricated electrodes align and trap the nanotubes in liquid and are short-circuited when a nanotube contacts both electrodes. By this method, massively parallel manufacturing of nano-circuits would become possible. Moreover, dielectrophoretic forces have many important biotechnology applications when used to trap, rotate, and sort biological specimens. However, precision control and in-situ manufacturing process monitoring are not possible in all of these manipulation processes, which limits the yield and future commercialization.

Life on the Edge: Latching Dynamics in a Potts Neural Network

How can the human brain produce creative behaviour? Systems neuroscience has mainly focused on the states induced, in particular in the cortex, by external inputs, be these states simple distributions of neuronal activity or more complex dynamical trajectories. It has largely eschewed the question of how such states can be combined into novel sequences that express, rather than the reaction to an external drive, spontaneous cortical dynamics. Yet, the generation of novel sequences of states drawn from even a finite set has been characterized as the infinitely recursive process deemed to underlie language productivity, as well as other forms of creative cognition [1]. If the individual states, whether fixed points or stereotyped trajectories, are conceptualized as dynamical attractors [2], the cortex can be thought of as engaging in a kind of chaotic saltatory dynamics between such attractors [3]. Attractor dynamics has indeed fascinated theorists, and a major body of work has shown how to make

Incorporating scale invariance into the cellular associative neural network

vector length used in the system. If one of the cell’s neighbours does not have any information to pass, then an empty vector will be transferred and hence included in the input to one or more modules, leading to the requirement of an arity network. For example, in Figure 1a, the “combiner” module has a total of 5 inputs—all four neighbours, and the state of the cell itself (the output of the combiner module). If a vector weight of 4 is chosen, then when all the inputs contain information the total weight is 5 × 4 = 20—this is used as the value for Willshaw’s threshold. If one of the cell’s neighbours does not pass any information, however, then the total weight will only be 4 × 4 = 16. If this were to be recalled from a CMM with a threshold value of 20, then it could never result in an output. As such, it is stored in a separate CMM with a threshold value of 16. A recall operation can then present the input vector to the correct CMM in this arity network, using the relevant threshold value.
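A hedged sketch of this arity-network mechanism; the vector length and the exact storage scheme are illustrative assumptions, keeping only the weight-4 inputs and the 20/16 thresholds from the text:

```python
import numpy as np

N, WEIGHT = 32, 4                      # per-input vector length and set bits
rng = np.random.default_rng(1)

def random_vec():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, WEIGHT, replace=False)] = 1
    return v

def store(cmm, x, y):
    cmm |= np.outer(y, x)              # Willshaw storage: OR of outer product

def recall(cmm, x, threshold):
    return (cmm @ x >= threshold).astype(np.uint8)  # thresholded recall

neighbours = [random_vec() for _ in range(4)]
own_state = random_vec()
target = random_vec()

# One CMM per arity; a recall routes the input to the matching threshold.
cmms = {a: np.zeros((N, 5 * N), dtype=np.uint8) for a in (4, 5)}

full = np.concatenate(neighbours + [own_state])        # weight 5 * 4 = 20
store(cmms[5], full, target)
print((recall(cmms[5], full, 20) == target).all())     # True

empty = np.zeros(N, dtype=np.uint8)                    # one neighbour silent
partial = np.concatenate(neighbours[:3] + [empty, own_state])  # weight 16
store(cmms[4], partial, target)
print((recall(cmms[4], partial, 16) == target).all())  # True
```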

Simulation Infrastructure for Modeling Large Scale Neural Systems

Computational efficiency is critical for large scale simulations. The run-time environment must effectively utilize all available computational resources, such as the CPUs and system memory. In addition, the run-time environment should support both time-step and discrete time event update modes to allow the selection of the most appropriate mode for a particular model. For example, since neuron models are continuously and frequently updated they should use the time-step mode. In contrast, models of environmental stimuli may use fewer resources with the higher-overhead, but far less frequent, discrete time event updates.
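A minimal sketch of a run-time loop supporting both modes; all class and method names are hypothetical, not taken from the described infrastructure:

```python
import heapq
import itertools

class Simulator:
    def __init__(self, dt):
        self.dt, self.now = dt, 0.0
        self.stepped = []              # models updated every time step
        self.events = []               # heap of (time, id, callback)
        self._ids = itertools.count()  # tie-breaker for same-time events

    def schedule(self, t, callback):
        heapq.heappush(self.events, (t, next(self._ids), callback))

    def run(self, t_end):
        while self.now < t_end:
            # Fire discrete events that fall inside this time step.
            while self.events and self.events[0][0] <= self.now + self.dt:
                t, _, cb = heapq.heappop(self.events)
                cb(t)
            self.now += self.dt
            for model in self.stepped:  # frequent, regular updates
                model.update(self.now)

class Neuron:
    def update(self, t):
        pass                            # membrane dynamics integrated here

sim = Simulator(dt=0.1)
sim.stepped.append(Neuron())            # neurons: time-step mode
sim.schedule(5.0, lambda t: print(f"stimulus at t={t}"))  # sparse stimulus
sim.run(10.0)
```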

A general deep learning framework for network reconstruction and dynamics learning

A considerable number of methods have been proposed for reconstructing networks from time series data. One class is based on statistical inference, such as Granger causality (Quinn et al. 2011; Brovelli et al. 2004) and correlation measurements (Stuart et al. 2003; Eguiluz et al. 2005; Barzel and Barabási 2013). These methods, however, can usually discover only functional connectivity and may fail to reveal structural connections (Feizi et al. 2013): they implicitly require that areas strongly correlated in function also be directly connected in structure. Nevertheless, this requirement is seldom satisfied in many real-world systems such as the brain (Park and Friston 2013) and climate systems (Boers et al. 2019). Another class of methods was developed to reconstruct structural connections directly under certain assumptions. For example, methods such as driving response (Timme 2007) or compressed sensing (Wang et al. 2011; Shen et al. 2014) either require the functional form of the differential equations, or target specific dynamics, or require sparsity of the time series data. Although the model-free framework presented by Casadiego et al. (Casadiego et al. 2017) does not have these limitations, it can only be applied to dynamical systems with continuous variables, so that the derivatives can be calculated. Thus, a general framework for reconstructing network topology and learning dynamics from the time series data of various types of dynamics, including continuous, discrete and binary ones, is necessary.
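To illustrate the functional-versus-structural distinction in the first class of methods, here is a small sketch that thresholds pairwise correlations of simulated time series; the dynamics, sizes and threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, T = 10, 2000

# Hypothetical directed structural network driving a stable linear dynamics.
A = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)
np.fill_diagonal(A, 0)
scale = max(np.abs(np.linalg.eigvals(A)).max(), 1.0)
A_dyn = 0.5 * A / scale                   # scaled for stability

x = np.zeros((T, n_nodes))
for t in range(1, T):
    x[t] = x[t - 1] @ A_dyn.T + rng.normal(size=n_nodes)

# Correlation-based 'reconstruction': threshold the (undirected) correlations.
C = np.corrcoef(x.T)
np.fill_diagonal(C, 0)
functional = (np.abs(C) > 0.2).astype(float)

# Functional edges need not coincide with structural ones: common drivers
# induce correlations between nodes that share no direct link.
print("structural edges:", int(A.sum()),
      "thresholded functional edges:", int(functional.sum()))
```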

PID Neural Network Motor Synchronization Control Based on the Improved PSO Algorithm

The learning convergence of a general neural network is slow, it is easily trapped in a local minimum, and there is no guarantee of real-time performance for a control system. At the same time, the number of hidden layer units and the connection weights are difficult to determine, which restricts its wide application in control systems. Literature [6] proposes a new type of neural network model, the PID neural network (PIDNN). The PIDNN is an amalgam of PID control and neural networks; it therefore has the advantages of both and overcomes the shortcomings of traditional control methods and of the general neural network.
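A minimal sketch of the PIDNN idea: a hidden layer with one proportional, one integral and one derivative neuron, whose output weights play the role of PID gains. The plant, weights and step size are illustrative, and the weight training by backpropagation described in [6] is omitted here:

```python
import numpy as np

class PIDNeuronLayer:
    """Hidden layer of three neurons: P (identity), I (accumulator), D (difference)."""
    def __init__(self):
        self.integral, self.prev = 0.0, 0.0

    def forward(self, e, dt):
        p = e                                  # proportional neuron
        self.integral += e * dt                # integral neuron
        d = (e - self.prev) / dt               # derivative neuron
        self.prev = e
        return np.array([p, self.integral, d])

w = np.array([2.0, 0.5, 0.1])                  # output weights = PID gains
layer, dt = PIDNeuronLayer(), 0.01
y, setpoint = 0.0, 1.0

for step in range(2000):
    e = setpoint - y
    u = w @ layer.forward(e, dt)               # control signal
    y += dt * (-y + u)                         # illustrative first-order plant
    # In a full PIDNN the weights w would be updated by backpropagating
    # the tracking error; they are fixed here for brevity.

print(round(y, 3))                             # settles near the setpoint
```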

Random neural network learning heuristics

An ABC algorithm was proposed in Karaboga and Basturk [65], and the performance of the ABC was compared with GA, PSO and the Particle Swarm Inspired Evolutionary Algorithm (PS-EA). Results showed that the ABC algorithm outperformed the GA, PSO and PS-EA algorithms. An ABC algorithm was also used for training an ANN in Karaboga et al. [64], where it was compared with BP(GD), BP(LM) and GA. It was found that the ABC algorithm can be applied to training ANNs. In Shah et al. [75], the authors compared ABC training algorithms for ANNs with BP algorithms and showed that the performance of ABC was better than that of BP. The ABC algorithm was also applied to training radial basis function (RBF) neural networks for classification problems in Kurban and Beşdok [66]. The performance of the ABC algorithm was compared with GD, the Kalman Filter (KF) method and GA. It was found that the performance of ABC was better than that of the other algorithms. The ABC algorithm was also used for the synthesis of ANNs in Garro et al. [25], which included not only the weights, but also the architecture and transfer function of the ANN. The methodology maximised the accuracy and minimised the number of connections of the ANN.
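For concreteness, here is a compact, hedged sketch of an ABC-style search training a tiny 2-4-1 network on XOR; it follows the usual employed/onlooker/scout structure but is not a reimplementation of any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
D = 2 * 4 + 4 + 4 + 1                    # weights of a 2-4-1 network

def mse(w):
    W1, b1 = w[:8].reshape(4, 2), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1.T + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

SN, LIMIT = 20, 30                       # food sources, scout threshold
food = rng.uniform(-2, 2, (SN, D))       # each source = one weight vector
cost = np.array([mse(w) for w in food])
trials = np.zeros(SN)

def try_neighbor(i):
    # Perturb one dimension toward/away from a random other source.
    k, j = rng.integers(D), rng.integers(SN)
    cand = food[i].copy()
    cand[k] += rng.uniform(-1, 1) * (food[i, k] - food[j, k])
    c = mse(cand)
    if c < cost[i]:                      # greedy selection
        food[i], cost[i], trials[i] = cand, c, 0
    else:
        trials[i] += 1

for _ in range(300):
    for i in range(SN):                  # employed bee phase
        try_neighbor(i)
    fit = 1 / (1 + cost)
    for i in rng.choice(SN, SN, p=fit / fit.sum()):  # onlooker phase
        try_neighbor(i)
    worst = trials.argmax()              # scout phase: abandon stale source
    if trials[worst] > LIMIT:
        food[worst] = rng.uniform(-2, 2, D)
        cost[worst], trials[worst] = mse(food[worst]), 0

print(round(cost.min(), 4))              # best training MSE found
```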

Learning to Adaptively Scale Recurrent Neural Networks

fit the temporal dynamics over time. Although patterns at different scale levels require distinct update frequencies, they do not always stick to a certain scale and can vary across time steps. For example, in polyphonic music modeling, distinguishing different music styles demands that RNNs model various emotion changes throughout music pieces. While emotion changes are usually controlled by the lasting time of notes, it is insufficient to model such patterns using only fixed scales, as the notes last differently at different times. Secondly, stacking multiple RNN layers greatly increases the complexity of the entire model, which makes RNNs even harder to train. Unlike this, another group of multiscale RNNs models scale patterns through gate structures (Neil, Pfeiffer, and Liu 2016; Campos et al. 2017; Qi 2016). In such cases, additional control gates are learned to optionally update hidden states at each time step, resulting in more flexible sequential representations. Yet such a modeling strategy may fail to retain information that is more important for future outputs but less related to the current state.
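A hedged sketch of the gate-structure idea, in the spirit of Skip RNN (Campos et al. 2017); the published models use binary gates with straight-through estimators and budget losses, while this simplification uses a soft gate:

```python
import torch
import torch.nn as nn

class GatedUpdateRNN(nn.Module):
    """At each step a learned gate decides how much to update the hidden state."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.gate = nn.Linear(hidden_size, 1)   # update probability from state

    def forward(self, x):                       # x: (seq_len, batch, input)
        h = x.new_zeros(x.size(1), self.cell.hidden_size)
        outputs = []
        for x_t in x:
            u = torch.sigmoid(self.gate(h))     # soft update gate in [0, 1]
            h_new = self.cell(x_t, h)
            h = u * h_new + (1 - u) * h         # update or copy forward
            outputs.append(h)
        return torch.stack(outputs)

rnn = GatedUpdateRNN(8, 32)
out = rnn(torch.randn(50, 4, 8))                # 50 steps, batch of 4
print(out.shape)                                # torch.Size([50, 4, 32])
```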

Learning of N-layers neural network

If ‖grad Ch‖ < 1, the changes to the learning coefficient are more drastic, which enables a quick escape from the neighbourhood of a local extremum of the error function and faster convergence of the learning in […]
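A tiny sketch of that rule; the adjustment factors are hypothetical, since the excerpt only states the qualitative behaviour:

```python
import numpy as np

def adapt_learning_rate(lr, grad, gentle=1.05, drastic=1.5):
    # Small gradient norm suggests a flat region near a local extremum:
    # change the learning coefficient more drastically to escape it.
    if np.linalg.norm(grad) < 1.0:
        return lr * drastic
    return lr * gentle               # otherwise adjust cautiously

lr = 0.01
print(adapt_learning_rate(lr, np.array([0.1, -0.2])))  # small gradient: 0.015
print(adapt_learning_rate(lr, np.array([2.0, 1.0])))   # large gradient: 0.0105
```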


A Network Model of the Periodic Synchronization Process in the Dynamics of Calcium Concentration in GnRH Neurons

Abstract Mathematical neuroendocrinology is a branch of the mathematical neurosciences that is specifically interested in endocrine neurons, which have the uncommon ability of secreting neurohormones into the blood. One of the most striking features of neuroendocrine networks is their ability to exhibit very slow rhythms of neurosecretion, on the order of one or several hours. A prototypical instance is the pulsatile secretion pattern of GnRH (gonadotropin releasing hormone), the master hormone controlling the reproductive function, whose origin has remained a puzzling issue since its discovery in the seventies. In this paper, we investigate the question of GnRH neuron synchronization on a mesoscopic scale, and study how synchronized events in calcium dynamics can arise from the average electric activity of individual neurons. We use as reference seminal experiments performed on embryonic GnRH neurons from rhesus monkeys, where calcium imaging series were recorded simultaneously in tens of neurons, and which have clearly shown the occurrence of synchronized calcium peaks associated with GnRH pulses, superposed on asynchronous, yet oscillatory, individual background dynamics. We design a network model by coupling 3D individual dynamics of FitzHugh–Nagumo type. Using phase-plane analysis, we constrain the model behavior so that it meets qualitative and quantitative specifications derived from the experiments, including the precise control of the frequency of the
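For intuition, here is a hedged sketch of coupling FitzHugh-Nagumo-type units through a mean field; the authors use 3D individual dynamics with experimentally constrained parameters, whereas this 2D version uses invented values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 20, 0.05, 20000
eps, coupling, I = 0.08, 0.05, 0.5       # timescale ratio, coupling, drive
a = 0.7 + 0.05 * rng.normal(size=N)      # mild heterogeneity across units

v = rng.uniform(-1, 1, N)                # fast (voltage-like) variables
w = rng.uniform(-0.5, 0.5, N)            # slow recovery variables

mean_v = np.zeros(steps)
for t in range(steps):
    # Mean-field diffusive coupling acts on the fast variable.
    dv = v - v**3 / 3 - w + I + coupling * (v.mean() - v)
    dw = eps * (v + a - 0.8 * w)
    v += dt * dv
    w += dt * dw
    mean_v[t] = v.mean()

# Synchronized episodes show up as large-amplitude swings of the mean field.
print(mean_v[::2000].round(3))
```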

A Neural Network Approach for Intrusion Detection Systems

The purpose of the proposed neural network component is to deal with the above identified IDS issues. Although the integration of this component will likely improve detection, it is wise to take into consideration possible performance issues caused by this process. First of all, adding another layer (the neural network) to the existing system will increase IDS complexity. This will require careful design and implementation so that the overall performance is not affected. Another possible performance issue is related to the selected data set. Complex calculations on long data sets might affect detection time. Detection time is a very important factor, since malicious activities should ideally be detected in real time. Furthermore, some of the preferred attacking strategies against IDS systems are based on packet flooding in order to confuse detection systems. In this context, the added component should not in any way affect the overall detection or reaction speed.

Deep Learning in an Adaptive Function Neural Network

In this paper, we improved the general learning rule in ADFUNN and applied it to the Iris dataset and a phrase recognition problem. Two function smoothing methods were compared for removing noise from the learned ADFUNN. Applying ADFUNN to some classification problems is highly effective even with no hidden nodes. Of the two smoothing methods, the simple moving average is more effective and computationally efficient than least-squares polynomial smoothing, and the smoothed curves work well when substituted back into the ADFUNN neurons.
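A small sketch of moving-average smoothing applied to a tabulated activation curve of the kind an ADFUNN neuron stores; the grid, underlying curve and noise level are invented:

```python
import numpy as np

def moving_average(f_values, window=5):
    # Smooth a sampled activation curve; edges reuse the boundary values.
    kernel = np.ones(window) / window
    padded = np.pad(f_values, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Hypothetical noisy learned activation sampled on a grid.
x = np.linspace(-2, 2, 41)
learned = np.tanh(x) + np.random.default_rng(0).normal(0, 0.1, x.size)
smoothed = moving_average(learned)
print(learned.std() > smoothed.std())  # smoothing reduces the noise: True
```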


Hybrid Neural Network Architecture for On Line Learning

Approaches to machine intelligence based on brain models use neural networks for generalization but they do so as signal processing black boxes. In reality, the brain consists of many modules that operate in parallel at different levels. In this paper we propose a more realistic biologically inspired hybrid neural network architecture that uses two kinds of neural networks simultaneously to consider short-term and long-term characteristics of the signal. The first of these networks quickly adapts to new modes of operation whereas the second one provides more accurate learning within a specific mode. We call these networks the surfacing and deep learning agents and show that this hybrid architecture performs complementary functions that improve the overall learning. The performance of the hybrid architecture has been compared with that of back-propagation perceptrons and the CC and FC networks for chaotic time-series prediction, the CATS benchmark test, and smooth function approximation. It is shown that the proposed architecture provides a superior performance based on the RMS error criterion.

Improved spikeprop algorithm for neural network learning

The performance of an SNN is dictated by its architecture and algorithm. It has been important for this research work to develop a learning algorithm for the SNN so that it is able to classify data. Biologically inspired SNNs are normally capable of implementing supervised learning (Šíma, 2009). However, a supervised learning rule is implementable if it operates in conjunction with backpropagation (Šíma, 2009). This learning rule, called SpikeProp, utilizes spike-time temporal coding, with backpropagation adapted to a network of spiking neurons. In this case, the input and output variables are encoded according to the spike times of the input and output spikes.

Continuous Learning in a Hierarchical Multiscale Neural Network

Several works have been devoted to dynamically updating the weights of neural networks during inference. A few recent architectures are the Fast-Weights of Ba et al. (2016), the Hypernetworks of Ha et al. (2016) and the Nested LSTM of Moniz and Krueger (2018). The weight update rules of these models use as inputs one or several of (i) a previous hidden state of an RNN or higher-level network and/or (ii) the current or previous inputs to the network. However, these models do not use the predictions of the network on the previous tokens (i.e. the loss and gradient of the loss of the model) as in the present work. The architecture most related to the present work is the study on dynamic evaluation of Krause et al. (2017), in which a loss function similar to that of the present work is obtained empirically and optimized using a large hyper-parameter search on the parameters of the SGD-like rule.
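A hedged sketch of the dynamic-evaluation idea (adapting weights at test time from the loss on previous tokens); the model, data stream and learning rate are all illustrative:

```python
import torch
import torch.nn as nn

vocab, dim = 50, 32
embed = nn.Embedding(vocab, dim)
rnn = nn.LSTMCell(dim, dim)
head = nn.Linear(dim, vocab)
params = list(embed.parameters()) + list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=1e-3)

tokens = torch.randint(vocab, (100,))            # hypothetical test stream
h = c = torch.zeros(1, dim)
for t in range(len(tokens) - 1):
    h, c = rnn(embed(tokens[t]).unsqueeze(0), (h, c))
    logits = head(h)
    # Loss of the prediction on the token just observed...
    loss = nn.functional.cross_entropy(logits, tokens[t + 1].unsqueeze(0))
    opt.zero_grad()
    loss.backward()                              # ...drives an SGD-like step
    opt.step()                                   # weights adapt at test time
    h, c = h.detach(), c.detach()                # truncate the graph per step

print(round(loss.item(), 3))                     # loss on the last adapted step
```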

‘Neural Network’ a Supervised Machine Learning Algorithm

Abstract- As a machine learning algorithm, the neural network has been widely used in various research projects to solve various critical problems. The concept of neural networks is inspired by the human brain. The paper explains the actual concept of Neural Networks so that a non-skilled person can understand the basic concept and make use of this algorithm to solve various tedious and complex problems. The paper demonstrates the design and implementation of a fully designed Neural Network along with the code. It presents various ANN architectures as well as their advantages, disadvantages and applications.

Neural oscillations and the decoding of sensory information

(B) PN axon (black) projects to the MB calyx (orange) and to the LH (Ernst et al., 1977; Hansson and Anton, 2000). LHI axon (green) projects to the calyx (this study). PN and LHI axons terminate on KC dendrites (red). Neurons were stained by iontophoresis of cobalt hexamine (KC, PN) or neurobiotin (LHI) in separate preparations and were drawn with a camera lucida. Note varicosities in LHI and PN axon collaterals. Asterisk, KC axon. Bar, 50 µm. (C) Representative odor-evoked responses of two LHIs and simultaneously recorded LFPs (5-40 Hz bandpass). Note membrane potential oscillations, locked to the LFP. Identity and delivery (1 s long) of stimulus indicated by black bar. LHI, 20 mV; LFP, 400 µV; 200 ms. (D) Instantaneous firing rate of LHI1 [in (C)] in response to various odors. Lower edge of profile shows mean instantaneous rate averaged across trials; profile thickness, SD. All LHIs responded to all odors tested, with response profiles that varied little across different odors. (E) Sliding cross-correlation between LFP and LHI2 traces (spikes clipped). Red, maxima; blue, minima. Strong locking is present throughout the response (odor delivery, vertical bar). Lower edge of correlation stripes just precedes stimulus onset due to width of the correlation window (200 ms). (F) Phase relationships between PN, KC, and LHI action potentials, and LFP. (Upper) Polar plots. LFP cycle maxima defined as 0 rad, minima as π rad (PNs: 3 cell-odor pairs, 388 spikes; LHIs: 17 cell-odor pairs, 2632 spikes; KCs: 18 cells, 862 spikes). Mean phases are shown in red. Gridlines are scaled in intervals of 0.10 (probability per bin). (Lower) Schematic diagram showing LFP and mean firing phases. (G) Circuit diagram. [LHI anatomy and
