Human beings are able to accomplish extremely complex motor tasks in all kinds of environments by means of a highly organized architecture including sensors, processing units and actuators. From a cognitive and developmental perspective, and from a rehabilitation standpoint, it is necessary to fully understand the complex interactions between the controller (the Central Nervous System) and the controlled object (all parts of the body). These interactions describe the process of motor control, for which many theories have been developed. As far as the generation of motor commands is concerned, it is generally acknowledged in the literature that the nervous system generates motor commands based on internal models able to take account of the kinematics and the dynamics of the biomechanical structures [2-4]. These models can be described as groups of neural connections that intrinsically contain information about biomechanical properties of the human body in relation both to the environment and to the subject's experience.
We have presented in this paper a hierarchical network architecture inspired by the mammalian ventral pathway to sparsely represent visual features for use in sensorimotor control. This sparse representation provides an intrinsically low-power and fault-tolerant computing substrate for sensorimotor control systems. Through unsupervised learning, the learned visual models enable sensorimotor control systems to adapt automatically to uncertain and novel environments. We also show that in such a model, V2 cell receptive fields develop end-stopping properties. According to Hubel and Wiesel, the optimal stimulus for an end-stopped cell is a line that extends for a certain distance and no further. For a cell that responds to edges and is end-stopped at one end only, a corner is ideal; for a cell that responds to slits or black bars and is stopped at both ends, the optimum stimulus is a short white or black line, or a line that curves so that it is appropriate in the activating region and inappropriate (different by 20 to 30 degrees or more) in the flanking regions. We can thus view end-stopped cells as sensitive to corners, to curvature, or to sudden breaks in a line. These contours are crucial for shape representation in cortical area V4 (Gallant, Braun et al. 1993; Wilkinson, James et al. 2000; Pasupathy and Connor 2001), and thus very important for object representation and recognition in IT. Our approach is related to Hoyer's contour coding network (Hoyer and Hyvarinen 2002). However, Hoyer computed the complex cell responses with a simple energy model, so the receptive fields in his V1 layer are fixed, or pre-calculated. In contrast, our approach uses end-to-end learnt receptive fields, and thus represents natural images sparsely and sufficiently (see Fig. 4). Moreover, the properties of the receptive fields and their sizes in our architecture are richer and more in line with the diversity known from biology.
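As a point of comparison, the fixed complex-cell stage that Hoyer used can be sketched with the standard energy model: a quadrature pair of oriented filters whose squared responses are pooled, making the output roughly invariant to stimulus phase. The 1-D filter shapes and sizes below are illustrative placeholders, not the ones used in either model:

```python
import math

def gabor_pair(size, freq):
    """Quadrature pair of 1-D Gabor-like filters (illustrative, not fitted)."""
    centre = (size - 1) / 2.0
    even, odd = [], []
    for i in range(size):
        x = i - centre
        env = math.exp(-x * x / (2.0 * (size / 4.0) ** 2))  # Gaussian envelope
        even.append(env * math.cos(2 * math.pi * freq * x))
        odd.append(env * math.sin(2 * math.pi * freq * x))
    return even, odd

def complex_cell_response(patch, even, odd):
    """Energy model: pool the squared responses of a quadrature filter pair."""
    s_even = sum(p * f for p, f in zip(patch, even))
    s_odd = sum(p * f for p, f in zip(patch, odd))
    return math.sqrt(s_even ** 2 + s_odd ** 2)
```

Because the two filters differ in phase by 90 degrees, the pooled energy changes little as the grating phase shifts, which is exactly why such receptive fields are "fixed" rather than learnt.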
Note that repeating Hoyer's experiments using 100,000 image patches and 100 iterations took 2 days on the same computer mentioned above; the resulting selective basis patterns are shown in Fig. 7. Practically, using the responses of V2 cells in our architecture, we have trained the
The current state of saliency and attentional models serves as a strong foundation for solutions to a broad set of vision problems. However, key aspects of perception necessary for supporting end-to-end tasks and activities, especially temporal mechanisms, are not well established. This motivates the development of a unifying framework that addresses these concerns. This dissertation looks into designing such a framework using a neuronal architecture inspired directly by biological vision systems. The choice of fairly realistic neuronal units with temporal properties holds the key to solving some of the challenges highlighted above. Beyond detecting salience in images (and video streams), the framework enables us to perform a variety of visual tasks without making significant changes to the underlying implementation. One way of specifying task-specific constraints and intentions is via a structured interface that cognitive agents can interact with. This helps us understand how common low-level perceptual processes can be woven together using cognitive control to achieve seemingly complex high-level behavior, in both computational and biological systems.
Chapter 7 describes work using the detection and distribution of edges (and lines) in images to control quadrotor flight. Section 7.2 introduces a statistical model, based on the sampling properties of image sensors and the geometric properties of many natural scenes, which can estimate altitude. The implementation was robust, performing well over multiple flights in several environments. The model is computationally efficient, biologically plausible, and hints at a possible way that real organisms could use this kind of information to estimate altitude. The model was simulated and tested successfully on real flights with real data. Section 7.4 presents another altitude controller built on a similar idea: that the distribution of edges in the environment is locally static and reasonably predictable. By detecting the structure of the distribution of these edges, a robot can maintain altitude and avoid oncoming obstacles. Unlike strategies based on image motion, the implementation does not require self-motion. It is also biologically plausible, computationally efficient, and non-iterative. To the best of my knowledge, the ‘maximum near-peak avoidance strategy’ was the first edge-based biologically inspired UAV control system. It was able to successfully replicate the altitude control behaviour [Straw et al., 2010] of Drosophila and control a quadrotor helicopter. I also believe that the treatment of edges in an image as a Poisson process, and its application to estimating real-world state, is a novel first.
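The Poisson treatment of edges can be illustrated with a toy estimator: per-row edge counts are drawn from a Poisson law, the rate is recovered by its maximum-likelihood estimator (the sample mean), and the assumed rate-altitude relation is inverted. The inverse relation used here (lambda = c / altitude) is purely a placeholder for exposition, not the thesis's actual statistical model:

```python
import math
import random

def simulate_edge_counts(altitude, c=400.0, rows=100, rng=None):
    """Draw per-row edge counts from a Poisson law with rate c / altitude.
    The inverse rate-altitude relation is an illustrative assumption."""
    rng = rng or random.Random(0)
    lam = c / altitude
    counts = []
    for _ in range(rows):
        # Knuth's method for Poisson sampling (fine for moderate lambda)
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        counts.append(k)
    return counts

def estimate_altitude(counts, c=400.0):
    """Poisson MLE of the rate is the sample mean; invert the assumed relation."""
    lam_hat = sum(counts) / len(counts)
    return c / lam_hat
```

The point of the sketch is only that a Poisson model turns edge statistics into a state estimate via a one-line MLE.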
VisNet is a neural network architecture for shape recognition developed by Rolls and Deco and described in their book. The general philosophy involves a feature hierarchy going from simple features (oriented lines) to complex ones (curves and corners). The network has 4 layers of neurons that learn and classify using mutual inhibition (over short ranges) and competition. The units from one layer converge onto neurons in the higher layers; in other words, several neurons in the lower layers are afferent to neurons in the higher layers. This implies an increasing size of receptive fields as we go higher up in the network. The input to the network is a set of 2D spatial filters implemented as Differences of Gaussians, intended to mimic the orientation and spatial frequency sensitivities of simple cells in V1. Rolls and Deco do not assume any pre-existing affinity for any combination of features. Through self-organization, the layers learn to represent the entire feature space. Feature combinations are not replicated at all positions. Instead, a representative sample of images with all possible features is learnt by the lower layers of the network; these images are presented at all possible positions in the input layer. The higher layers learn feature associations specific to objects and do not concern themselves with translational invariance. Rolls and Deco also suggest a trace rule for learning. This is similar to Hebbian learning except that there is a temporal aspect to it, which brings about some of the invariance to scale and translation. This form of Hebbian-like learning also makes the network biologically plausible. Additionally, the competitive nature of learning brings about a distributed representation like that exhibited by neurons in IT.
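The trace rule mentioned above can be written as dw = alpha * ybar_t * x, where the trace ybar_t = (1 - eta) * y_t + eta * ybar_(t-1) mixes the current postsynaptic output with its recent history, so inputs seen close together in time (e.g. successive transforms of the same object) become associated. A minimal sketch, with illustrative parameter values:

```python
def trace_rule_update(w, x, y_trace_prev, y_now, eta=0.8, alpha=0.05):
    """One trace-rule step: the Hebbian postsynaptic term is a temporal
    trace of recent outputs rather than the instantaneous output alone."""
    y_trace = (1.0 - eta) * y_now + eta * y_trace_prev
    w_new = [wi + alpha * y_trace * xi for wi, xi in zip(w, x)]
    return w_new, y_trace
```

Setting eta to zero recovers plain Hebbian learning; larger eta stretches the association window over more frames, which is where the translation and scale invariance comes from.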
Overall, this hierarchical representation can differentiate between different spatial arrangements of features, and in this respect resembles the behavior of cells between the V1 and IT stages of the ventral pathway.
EARA-QoS is an on-demand multipath routing algorithm for MANETs, inspired by ant foraging intelligence. This algorithm incorporates positive feedback, negative feedback and randomness into the routing computation. Positive feedback originates from destination nodes to reinforce the existing pheromone on good paths. Ant-like packets, analogous to the ant foragers, are used to locally find new paths. Artificial pheromone is laid on the communication links between nodes, and data packets are biased towards strong pheromone, but the next hop is chosen probabilistically. To prevent old routing solutions from remaining in the network, exponential pheromone decay is adopted as the negative feedback.
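The three ingredients (probabilistic next-hop choice, exponential pheromone decay, and destination-driven reinforcement) can be sketched in a few lines; the pheromone values and rates below are illustrative, not taken from EARA-QoS:

```python
import math
import random

def choose_next_hop(pheromone, rng=None):
    """Pick the next hop with probability proportional to link pheromone."""
    rng = rng or random.Random(42)
    hops = list(pheromone)
    weights = [pheromone[h] for h in hops]
    return rng.choices(hops, weights=weights, k=1)[0]

def decay_pheromone(pheromone, rate=0.1):
    """Exponential decay: negative feedback that forgets stale routes."""
    return {h: v * math.exp(-rate) for h, v in pheromone.items()}

def reinforce(pheromone, hop, amount=1.0):
    """Positive feedback from the destination strengthens a good path."""
    out = dict(pheromone)
    out[hop] += amount
    return out
```

The probabilistic bias keeps traffic flowing over strong links while still occasionally sampling weaker ones, which is what lets the decay-plus-reinforcement loop discover when an old "good" path has degraded.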
In this paper, we propose a new Convolutional Neural Network (CNN) with a biologically inspired retinal structure and ON/OFF Rectified Linear Units (ON/OFF ReLU). The retinal structure enhances input images by the center-surround difference of green-red and blue-yellow components, which creates positive as well as negative features, like the ON/OFF visual pathways of the retina, to make a total of 12 feature channels. This ON/OFF concept is also adopted in each convolutional layer of the CNN; we call this ON/OFF ReLU. In contrast, conventional ReLU passes only the positive features of each convolutional layer and may lose important information contained in the negative features. Moreover, it may also lose the chance to learn if results saturate to zero. In our proposed model, however, we use both positive and negative information, which provides the possibility of learning from negative results. We also present experimental results on the CIFAR-10 dataset and on atrial fibrillation prediction for health monitoring, and show how effectively the negative information and the retinal structure improve the performance of a conventional CNN.
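The core of the ON/OFF ReLU idea can be stated compactly: each response is split into an ON channel (its positive part) and an OFF channel (its rectified negative part), doubling the channel count so that negative evidence is preserved rather than discarded. A minimal sketch on a flat feature vector (the paper applies this per convolutional channel):

```python
def on_off_relu(features):
    """Split each response into an ON channel (positive part) and an OFF
    channel (rectified negative part); conventional ReLU would keep only
    the ON half and zero out everything else."""
    on = [max(f, 0.0) for f in features]
    off = [max(-f, 0.0) for f in features]
    return on + off
```

Note that no information is lost: the original response is recoverable as on - off, whereas plain ReLU maps all negative responses to an indistinguishable zero.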
The output of V1 is projected to the peri-striate cortex (V2), where retinal images are probably reconstructed. This triple-stage convolution in the visual pathway has inspired convolutional neural networks acting as a coarse-to-fine process, though research has focused mostly on magnitude data. Some works have included phase information to form an associative memory network. Table 1 compares some basic holonomic phase-magnitude encoding approaches.
vigorous activity. The midbrain periaqueductal grey (PAG) is generally viewed as the principal intermediate-level structure responsible for co-ordinating the wide range of somatomotor, autonomic, and endocrine reactions involved in defensive behaviour. For example, it appears that the outputs of nociceptors (pain receptors) located deep within the body selectively activate regions of the PAG associated with behavioural suppression and quiescence, while nociceptors from the body surface activate parts of the PAG producing active avoidance and escape. It is particularly interesting to note that these intermediate-level systems evolved in creatures like fish, frogs and lizards, but anatomical and physiological evidence suggests that many have been retained in mammals, including humans. In addition to the mid-brain components noted above, the (fore-brain) basal ganglia are also implicated in the control of intermediate-level defense reactions. It is known that the basal ganglia are among the oldest regions of the forebrain and are present, with the same basic connections and cell types, in all jawed vertebrates including fish.
Most of the mathematical models of collective behavior describe uncertainty in individual decision making through additive uniform noise. However, recent data-driven studies on animal locomotion indicate that a number of animal species may be better represented by more complex forms of noise. For example, the popular zebrafish model organism has been found to exhibit a burst-and-coast swimming style with occasional fast and large changes of direction. Based on these observations, the turn rate of this small fish has been modeled as a mean-reverting stochastic process with jumps. Here, we consider a new model for collective behavior inspired by the zebrafish animal model. In the vicinity of the synchronized state and for small noise intensity, we establish a closed-form expression for the group polarization, and through extensive numerical simulations we validate our findings. These results are expected to aid in the analysis of zebrafish locomotion and contribute a new set of mathematical tools for studying the collective behavior of networked noisy dynamical systems.
testing unnecessary, since the structure could adapt its shape to account for shape changes introduced by the space environment [3]. An example would be a space telescope that could adapt its optics in space to account for unpredictable thermal expansion in order to increase its accuracy. Further applications are envisioned in the field of solar sails, where the morphing structure can replace the entire attitude control system by changing the solar sail surface area exposed to the solar wind. Space-based solar power satellites are also in need of shape-changing structures, in order to direct the sunlight via a shape-changing concentrator on a geostationary platform while the Earth rotates and orbits the Sun.
On the other hand, social robots are robots that are not only aware of their surroundings; they are also able to learn from, recognize, and communicate with other individuals. While other strategies are possible, robot learning by imitation (RLbI) represents a powerful, natural, and intuitive mechanism for teaching social robots new tasks. In RLbI scenarios, a person can teach a robot by simply demonstrating the task that the robot has to perform. The behaviour included in the attentive stage of the proposed attention model is an RLbI architecture that provides a social robot with the ability to learn and to imitate upper-body social gestures. A detailed explanation of this architecture can be found in Bandera et al. The inputs of the architecture are the face and the hands of the human demonstrator, and her silhouette. The face and the hands are obtained using the face detector proposed by Viola and Jones, which is executed over the most salient skin-coloured proto-objects obtained in the semi-attentive stage. In order to obtain these proto-objects, the weights used to compute the final saliency map give more importance to the skin colour feature map (as mentioned in Section 4).
is determined by the two human-inspired mechanisms. We applied the human-inspired MLE algorithm to combine the sensed echo collection from M = 30 UWB radars, and then processed the combined data using the discrete cosine transform (DCT) to obtain the AC values. In our experience, an echo containing a target generally has high and non-fluctuating AC values. We plot the power of the AC values in Figures 7(a) and 7(b), using the MLE and DCT algorithms, for the two cases (with target and without target), respectively.
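For concreteness, a naive DCT-II and an AC-power score can be written directly from the definition; this is only a sketch of the kind of computation described, not the paper's implementation:

```python
import math

def dct2(signal):
    """Naive (unnormalized) DCT-II; coefficient k = 0 is the DC term and
    coefficients k >= 1 are the AC values."""
    N = len(signal)
    return [sum(signal[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def ac_power(signal):
    """Total power in the AC coefficients, a crude target-presence score."""
    coeffs = dct2(signal)
    return sum(c * c for c in coeffs[1:])
```

A flat (target-free) echo segment concentrates all its energy in the DC term and scores near zero, while structured returns push energy into the AC coefficients.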
Here, in order to realize distributed routing protocols that combine both stable route selection and adaptation to unexpected environmental changes beyond their local scope, we propose a distributed routing protocol based on the attractor selection model. The proposed protocol has an active route exploration mechanism that can adapt to rapid traffic variation by infrequently and stochastically changing paths in order to obtain information about unused paths. In order to suppress excess flapping due to route exploration and to ensure stable behaviour of the model, our extended attractor selection model has a short-term memory which stores the internal states of the model before active explorations. If the exploration gives unsatisfactory results, the internal state of the protocol promptly returns to the memorized state. In order to avoid serious quality deterioration due to packet loops, or congestion due to active route changes, we also introduce simple loop avoidance methods into our model. In this paper, we first explain the routing algorithm based on the biologically-inspired attractor selection model in Section II, and propose our routing protocol in Section III. Then we describe the configuration of our numerical simulation in Section IV. Discussions and conclusions are given in Sections V and VI.
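The attractor selection principle, deterministic reinforcement while measured route quality ("activity") is high and noise-driven exploration when it drops, can be sketched as a stochastic difference equation. The specific dynamics below are a simplified stand-in for the extended model in the paper:

```python
import math
import random

def attractor_selection_step(m, activity, dt=0.1, noise=0.3, rng=None):
    """One Euler step of dm_i/dt = activity * (target_i - m_i) + noise.
    With high activity, the drift pulls the state toward the currently
    preferred path; with low activity, the noise term dominates and the
    system stochastically explores alternative paths."""
    rng = rng or random.Random(7)
    best = max(range(len(m)), key=lambda i: m[i])
    out = []
    for i, mi in enumerate(m):
        target = 1.0 if i == best else 0.0
        out.append(mi + dt * activity * (target - mi)
                   + noise * math.sqrt(dt) * rng.gauss(0, 1))
    return out
```

The short-term memory and loop avoidance described above would sit around this core: a snapshot of m is taken before an exploration and restored if the activity measured afterwards is unsatisfactory.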
Holland has shown that Genetic Algorithms (GAs) offer a robust approach to evolving effective adaptive control solutions. More recent work has demonstrated the effectiveness of distributed GAs using an unbounded gene pool and based on local action (as would be required in a multi-owner internetwork). In addition, Ackley and Littman demonstrated that, to obtain optimal solutions in an environment where significant changes are likely within a generation or two, the slow learning in GAs based on mutation and inheritance needs to be supplemented by an additional rapid learning mechanism. Our bacterial algorithm is a distributed GA with an additional rapid learning mechanism, and forms the basis of the adaptation performed by the autonomous controller in our architecture. In this paper we aim to identify the role of autonomous control in our policy-driven management system, describe how the autonomous controller is integrated, and provide only a brief sketch of the bacterial algorithm.
A Time Series (TS) is a sequence of observations ordered in equally spaced, discrete time intervals. A basic assumption in any time series analysis or modelling is that some aspects of the past pattern will continue into the future. A suitable forecasting time series model can then be developed with minimum forecasting error. At least 50 observations are necessary for performing TS analysis, as propounded by Box and Jenkins, who were pioneers in TS modelling. The four main objectives in time series analysis are description, explanation, prediction and control. Time series analysis starts by plotting the data and looking for non-stationary components. These components are then eliminated using various methods in order to obtain stationary data. After identifying a suitable probability model for the time series, this model can be used for prediction. The statistical methodology available for analysing time series is referred to as Time Series Analysis.
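A common first step in eliminating a non-stationary component is differencing, which removes a locally linear trend:

```python
def difference(series, lag=1):
    """First-order differencing: subtract the value lag steps back, so a
    series with a linear trend becomes a constant (stationary) series."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]
```

Seasonal non-stationarity is handled the same way with lag set to the season length; a series may need both a seasonal and a first difference before a probability model is fitted.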
Learning a new skill over a period of time involves a shift from initially conscious activity, engaging large brain areas, to the final subconscious, intuitive, automatic actions that engage only a few well-localized, specialized brain centers. Skill learning expressed in terms of diminishing conscious control seems to be a rather mysterious process. How does it proceed in the articon that controls a robot body or some other device? Conscious control is an illusion; there is only a flow of inner states. The task to be learned has to recruit a number of modules that can control the effectors, correlate their activity with the activation of the sensory modules, and compare the result with the desired (or imagined) one. This may require retraining of existing sensorimotor maps, or the formation of new modules for prototype actions, tuned during further learning.
Light capture (the interception of light by leaves for photosynthesis) by the tree canopy is defined by rule-based geometry, canopy volume, total leaf area density and the angular distribution of leaf surfaces. Beer's law for light interception can be used to compute these quantities. This approach to solar orientation and absorption of light energy by biochemical processes constitutes a set of responsive measures, a dynamic system. Could this approach, nature's adaptive functions realized through biologically inspired intelligent materials, enable the development of real-time, reactive materials that form the surfaces of glass buildings? Glass could then develop from a mere material entity into a dynamic energy system that regulates its own thermal conductivity levels by the hour, season and weather conditions. This regulation could be pre-programmed or self-programmable intelligence, driven by mechanical and algorithmic controls in response to solar and climatic environmental influences.
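Beer's law itself is compact enough to state directly: transmitted light decays exponentially with leaf area index (LAI), I = I0 * exp(-k * LAI), so the canopy intercepts I0 * (1 - exp(-k * LAI)). A small sketch with an illustrative extinction coefficient:

```python
import math

def light_interception(i0, k, lai):
    """Beer's law for canopies: light transmitted through the canopy is
    I0 * exp(-k * LAI); the remainder is intercepted by the leaves."""
    transmitted = i0 * math.exp(-k * lai)
    return i0 - transmitted
```

The extinction coefficient k folds in the angular distribution of leaf surfaces mentioned above, which is why the same formula serves both dense spherical canopies and more erect-leaved ones with different k values.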