evidence to a 'less is more' design philosophy for swarm robotic systems. After all, a major motivation for these systems is that they should be flexible enough to operate in environmental conditions that are not well-defined a priori.
In the case of object clustering, we investigated how the performance of the system benefits from an additional state of the line-of-sight sensor. We compared the performance of three different sensor configurations: in the first, Robots as Robots, a ternary sensor could distinguish between robots, objects, and the background; in the second, Robots as Objects, a binary sensor could identify the presence of robots or objects against the background, but not distinguish between robots and objects; in the third, Robots as Background, a binary sensor could only identify the presence of objects against the background, but not the presence of robots. It turned out that all of these configurations deliver satisfactory performance when the robots-to-objects ratio is sufficiently small. When this ratio increases, the Robots as Objects sensor configuration breaks down, because the robots attempt to cluster themselves as well as the objects. On the other hand, both the Robots as Robots and Robots as Background configurations continue to perform successfully, implying that the third sensor state is not strictly required. At the same time, however, it was statistically shown that this third state does lead to a significant improvement in performance, in terms of the speed of clustering. This suggests that even at the minimal extreme of information acquisition and processing, the robots in a swarm can successfully co-operate to better effect than if they had to act individually. In other words, this means that at physically small scales, it should still be possible to exploit one of the main powers of swarm systems: that the whole can be more than the sum of the parts.
This paper presents ARDebug, a novel augmented reality tool specifically designed for analysing and debugging the behaviour of swarm robotic systems. It builds on the success of Ghiringhelli et al. (2014), offering a more generalised tool that can meet the requirements of a wider variety of swarm systems. It provides an experimenter with a single intuitive interface through which they can view the internal state of robots in a swarm in real-time, making the process of identifying, locating, and fixing bugs significantly easier. Similar mixed reality (Hoenig et al., 2015) applications exist that allow robots to perceive augmented reality environments through the use of virtual sensors (Reina et al., 2015, 2017; Antoun et al., 2016), which can aid the debugging process through the creation of reproducible virtual environments, but ARDebug differs from these tools in its focus on presenting detailed debugging information to a human experimenter.
2. Case study: A Wireless Connected Swarm
We have developed a class of algorithms which make use of local wireless connectivity information alone to achieve swarm aggregation (Nembrini et al., 2002; Nembrini, 2005). These algorithms use situated communications, in which connectivity information is linked to robot motion so that robots within the swarm are wirelessly 'glued' together. This approach has several advantages: firstly, the robots need neither absolute nor relative positional information; secondly, the swarm is able to maintain its coherence (i.e. stay together) even in unbounded space; and thirdly, the connectivity needed for, and generated by, the algorithms means that the swarm naturally forms an ad-hoc communications network. Such a network would be a significant advantage in many swarm robotics applications. In this case study we
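The link between connectivity and motion can be illustrated with a minimal sketch (this is an illustration of the idea only, not the published algorithm; the threshold ALPHA, the communication range, and the turn-back reaction are assumed parameters):

```python
import math

# Minimal sketch: a robot reacts to loss of wireless connectivity by
# turning back towards the swarm. ALPHA (the minimum number of
# neighbours) and COMM_RANGE are assumed illustrative parameters.
ALPHA = 2          # connectivity threshold (assumed)
COMM_RANGE = 1.0   # wireless range in arbitrary units (assumed)

def neighbours(robot_pos, all_positions, comm_range=COMM_RANGE):
    """Indices of robots within wireless range of robot_pos."""
    return [i for i, p in enumerate(all_positions)
            if p != robot_pos and math.dist(p, robot_pos) <= comm_range]

def coherence_action(robot_pos, heading, all_positions):
    """Keep moving forward while connected; turn back when the
    number of wireless neighbours drops below the threshold."""
    if len(neighbours(robot_pos, all_positions)) < ALPHA:
        return (heading + math.pi) % (2 * math.pi)  # 180-degree turn back
    return heading  # connectivity OK: carry on
```

No positional information is needed: the rule reacts only to the number of wireless links, which is what keeps the swarm 'glued' together.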
These evaluations made available sets of annotated data for English and other languages, used for training and evaluation. One natural question to ask is whether it is feasible to adapt the training and test data made available in these competitions to other languages for which no such data yet exist. Since the annotations are largely of a semantic nature, few changes need to be made to the annotations once the textual material is translated. In essence, this would be a fast way to create temporal information processing systems for languages for which there are no annotated data yet.
In the GRN-based model, it is assumed that each robot corresponds to a single cell in a cell-robot metaphor. Within each cell, there are two different types of protein products, namely types G and P. Protein type G consists of two proteins, which correspond to the x and y positions of a robot in a 2D environment, respectively. If a 3D shape is to be formed, then three proteins of type G are needed to describe the position of the robot. Similarly, protein type P consists of two proteins for a 2D environment and three proteins for a 3D environment, which represent an internal state vector of the robot. Meanwhile, proteins of type G can diffuse into the neighboring cells, thus influencing the protein production in these cells. This kind of local diffusion through cell-cell signaling can, in the robot metaphor, prevent the robot from colliding with its neighbors. Finally, the production of proteins is regulated by a maternal morphogen gradient M, which corresponds to the embedded information of the target pattern for the robots to form.
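As an illustration only (the rate constants, the Euler update, and the regulatory form below are placeholder assumptions, not the model's actual equations), the diffusion of type-G proteins from neighbouring cells could enter each cell's update like this:

```python
# Sketch of the cell-robot metaphor: each robot holds type-G proteins
# (its x, y position) and type-P proteins (an internal state vector).
# Type-G proteins diffuse in from neighbours, providing the repulsive
# signal that keeps robots apart. All constants are placeholders.
DECAY = 0.1     # assumed protein decay rate
DIFFUSE = 0.05  # assumed diffusion strength

def g_update(g, p, neighbour_gs):
    """One Euler step for the type-G proteins of a single cell.
    The diffusion term pushes g away from neighbours' g values,
    acting as collision avoidance in the robot metaphor."""
    new_g = []
    for k in range(len(g)):
        # repulsion: move away from each neighbour's position
        diffusion = sum(g[k] - ng[k] for ng in neighbour_gs)
        new_g.append(g[k] + p[k] - DECAY * g[k] + DIFFUSE * diffusion)
    return new_g
```

With a neighbour close by, the diffusion term dominates and displaces the cell's position proteins away from that neighbour, which is the collision-avoidance effect described above.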
A hypothesis relating oscillations, waves and odor representations in the procerebral lobe can be summarized in the following way. When odor A is experienced, a short-term memory trace of odor A lasting tens of seconds is laid down in a band of neurons in the procerebral lobe. This is based on the observation that toxicosis can follow odor exposure by tens of seconds and still lead to associative conditioning (Gelperin, 1975). When the aversive stimulus is applied, neuromodulators liberated in the procerebral lobe trigger biochemical changes in the band of procerebral cells which lead to the formation of a long-term (days) memory of odor A (Fig. 3A). Some consequence of these biochemical events leads to membrane recycling, which results in internalization of Lucifer Yellow in slugs injected with Lucifer Yellow following odor training. When two different odors, A and B, are learned, two bands form in the same procerebral lobe, with the apical–basal distance between the bands being proportional to the chemical or perceptual similarity of the two odors (Fig. 3A). If two closely related odors, A and A′, are learned, two spatially contiguous bands form in the procerebral lobe with minimal spacing along
If the environment in which robots are required to operate is highly dynamic, then implicit communication is not considered suitable, as it relies on the changes brought into the environment or on the observations of other robots. So, for distributing vision-based tasks and sharing knowledge among robot swarms, a wireless communication medium can be considered a sensible choice. A layered protocol structure based on data transport layers (inspired by computer network systems), used to define protocols for wireless robot communication, is presented in , but it assumes the provision of communication network layers in the robots, which could be possible if some communication middleware (third-party software that connects other software together) is used. A cooperative distributed problem solving system is discussed in , but it relies entirely on a centrally controlled shared memory architecture for data exchange, which makes it unsuitable for systems where data management or storage is not centralised, such as a swarm of robots. A comparison of the TCP, UDP, TEAR and Trinomial protocols for formation control of robot swarms is carried out in  with the help of middleware and a centralised base station. A comparison of MANET routing protocols in mobile robotics is performed in , where a 7x7 grid of closely spaced WiFi nodes (equipped with a high-performance system to simulate wirelessly connected robot modules) is used and a mobile robot moves around them. This is shown in Figure 2.8.
Intelligence can evolve using evolutionary algorithms that try to minimize the sensory surprise of the system. We will show how to apply the free-energy principle, borrowed from statistical physics, to quantitatively describe this optimization method (sensory surprise minimization), which can be used to support lifelong learning. We provide our ideas about how to combine this optimization method with evolutionary algorithms in order to boost the development of specialized Artificial Neural Networks, which define the proprioceptive configuration of particular robotic units that are part of a swarm. We consider how optimization of the free energy can promote the homeostasis of the swarm system, i.e. ensure that the system remains within its sensory boundaries throughout its active lifetime. We will show how complex distributed cognitive systems can be built in the form of a hierarchical modular system consisting of specialized micro-intelligent agents connected through information channels. We will also consider the co-evolution of various robotic swarm units, which can result in the development of proprioception and a comprehensive awareness of the properties of the environment. Finally, we will give a brief outline of how this system can be implemented in practice and of our progress in this area.
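The notion of sensory surprise can be made concrete under a simple Gaussian sensory model, where surprise is the negative log-probability of an observation (an illustrative choice of model, not this work's formulation):

```python
import math

def gaussian_surprise(x, mu, sigma):
    """Sensory surprise = -log p(x) under a Gaussian generative
    model with mean mu and standard deviation sigma
    (an illustrative model choice)."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)
```

Observations far from what the model predicts are more surprising, so minimising average surprise keeps the agent within its expected sensory states, which is the homeostatic reading of the free-energy principle sketched above.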
Robot-robot correspondence is satisfied quite easily, as the e-puck robots can simply be modelled as circles with the same circumference as a real e-puck. Their positions are updated using two-wheel differential drive kinematics, with wheel speeds calculated from measurements of a real robot's movement. In O'Dowd's original implementation, each step of the simulation represented 40 ms of real-time. For this research, this was reduced to 10 ms, to increase the fidelity of simulating the robots' movement to 100 updates per second. Robot-environment correspondence is harder to achieve, as it relies upon the use of an accurate IR sensor model. It has been shown that the response of active IR sensors depends not only on the distance from an obstacle, but also on the angle, and the proportion of the beam that is reflected [82]. However, in this minimal simulator the IR sensor
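The two-wheel differential drive update takes the following standard form (a generic kinematic sketch, not O'Dowd's code; the wheel base and timestep values are illustrative):

```python
import math

WHEEL_BASE = 0.053  # approximate e-puck axle length in metres (assumed)
DT = 0.01           # 10 ms timestep, i.e. 100 updates per second

def step(x, y, theta, v_left, v_right, dt=DT, L=WHEEL_BASE):
    """Advance a differential-drive robot by one timestep.
    v_left / v_right are wheel linear speeds in m/s."""
    v = (v_left + v_right) / 2.0    # forward speed
    omega = (v_right - v_left) / L  # angular speed
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

Equal wheel speeds give straight-line motion; opposite speeds give rotation on the spot, which is why this model is a close fit for the e-puck's drive.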
These key/value pairs can also be visualised using real-time charts. For example, ARDebug will display any array of numerical values as a bar chart. This feature can be used to graphically represent a robot’s sensor readings, such as ambient and reflected infra-red light. Strings, such as the robot’s current state, are instead displayed as a pie chart showing the distribution of values across the selected robots, which are assigned colours in the visualiser in relation to the segments of the chart (as shown in Figure 1). This information can be useful to determine whether a robot is getting stuck in a particular state under certain conditions. Finally, single numerical values, such as a robot’s battery voltage, are visualised as a line chart displaying the recent history of reported values over time.
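The mapping from value type to chart type described above can be sketched as follows (the function name and return labels are illustrative, not ARDebug's actual API):

```python
def chart_for(value):
    """Map a debug key/value to a chart type, mirroring the rules
    described above. Names here are illustrative only."""
    if isinstance(value, (list, tuple)) and all(
            isinstance(v, (int, float)) for v in value):
        return "bar"    # arrays of numbers, e.g. IR sensor readings
    if isinstance(value, str):
        return "pie"    # strings, e.g. the robot's current state
    if isinstance(value, (int, float)):
        return "line"   # single numbers, e.g. battery voltage
    return None         # unsupported value type
```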
which utilize one or more basic NLP algorithms. For example, Arazy and Woo  attempted to enhance information retrieval by applying the standard vector space model to calculate text collocation indexes. Yazdani and Popescu-Belis  proposed a method for computing semantic relatedness and applied it to tasks of semantic annotation, IR, and text classification. Their method improves upon graph-based random walk algorithms for NLP problems. Similarly, a new approach for multi-document summarization  utilizes hierarchical Bayesian models. Some studies used multiple prototypical tasks to solve real-world problems. For example, Wierzbicki et al.  used sentiment analysis and TC to improve the computational trust representation in existing trust management systems. Valencia-Garcia et al.  presented an intelligent framework for simulating robot-assisted surgical operations that employed semantic annotation and inference.
At the beginning of this section, experiments that test hypothesis 2 directly are described. The focus is to subject the ASR decoding process to frames missing acoustic likelihood scores, and to see how the decoding error rate changes accordingly. We are interested in using the presence vs. absence of an acoustic landmark as a heuristic for choosing which frames to keep or drop. To quantify the importance of the information kept vs. the information discarded, the landmark-based dropping strategies (Landmark-keep and Landmark-drop) are compared to the non-landmark-based Random strategy. Note that the Regular strategy has been shown to be more effective than Random (e.g., in Fig. 3.5); however, to make the PER result meaningful, the same number of frames should be dropped across the different patterns being compared. When we keep only landmarks (Landmark-keep) or drop only landmarks (Landmark-drop), the percentage of frames dropped cannot be precisely controlled by the system designer: it is possible to adjust the number of frames retained at each landmark (thus changing the drop rate), but it is not possible to change the number of landmarks in a given speech sample. Therefore, precisely adjusting the drop rate to match a different pattern is not practical. Depending on the test set selected, the portion of frames containing landmarks ranges from 18.5% to 20.5%. As opposed to Random, Regular does not give us the ability to select a drop rate that exactly matches that of the Landmark-drop or Landmark-keep strategies. Therefore, it is not covered in the first two experiments. However, in the third experiment, we compare a frame dropping strategy using landmarks as a heuristic against Regular dropping; that experiment serves a slightly different purpose.
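The three dropping patterns can be sketched over a boolean landmark mask as follows (a simplified illustration; the real experiments operate on frames of acoustic likelihood scores):

```python
import random

def landmark_keep(frames, is_landmark):
    """Keep only the frames flagged as landmarks."""
    return [f for f, lm in zip(frames, is_landmark) if lm]

def landmark_drop(frames, is_landmark):
    """Drop exactly the landmark frames."""
    return [f for f, lm in zip(frames, is_landmark) if not lm]

def random_drop(frames, n_drop, seed=0):
    """Drop n_drop frames chosen uniformly at random, so the drop
    rate can be matched to a landmark-based strategy."""
    rng = random.Random(seed)
    dropped = set(rng.sample(range(len(frames)), n_drop))
    return [f for i, f in enumerate(frames) if i not in dropped]
```

Matching Random's `n_drop` to the landmark count is what makes the PER comparison between the strategies meaningful.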
The above-mentioned techniques have the potential to reduce both the computational and data movement costs in CNN implementations. In general, the data movement (memory access) cost tends to dominate the overall energy consumption in data-intensive computing systems. This is especially true for large-scale CNN implementations [36, 37]. Thus, research has focused on reducing the data movement cost by maximally reusing data locally [36, 37] or by in-memory computing [38, 39]. Once these techniques have aggressively trimmed down the data movement cost of large-scale CNNs, as well as in the case of small-scale CNNs, the computational cost will be of the same order as, or will even dominate, the overall energy cost. In these cases, how to reduce the computational cost of CNNs becomes the primary concern. In line with this direction, one key observation is that the matrix-vector multiply (MVM) is the most power-hungry kernel in CNNs, accounting for 90% of the computational cost in state-of-the-art integrated circuit implementations. In an MVM, an input vector x is projected onto a set of weight vectors, i.e.:
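In generic notation (the symbols and dimensions here are illustrative, written as the standard MVM form), this projection reads:

```latex
y_i = \mathbf{w}_i^{\top}\mathbf{x} = \sum_{j=1}^{n} w_{ij}\, x_j, \qquad i = 1, \dots, m
```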
Minimal processing. The whole fruits were pre-washed for 2 minutes in chilled water (4°C) containing 100 mg/l of sodium hypochlorite (adjusted to pH 6.5 with citric acid). The peel and stone were manually removed. Each fruit was cut into slices with sharp stainless steel knives, washed in tap water at 4°C for 2 min, and then dried with a stream of cold air (4 min). The peach slices (around 90 g) were packaged in polypropylene (PP) trays thermosealed with a PP film (TECAPACK, S.L., Cordoba, Spain). Quality analyses on peach slices were carried out at the beginning of each experiment, and after 3, 6, and 9 days of storage at 4°C.
How can emerging nanotechnologies be exploited to design the next generation of intelligent computers? This is the central question explored in this dissertation and the underlying theme of neuromemristive systems (NMSs). An NMS is a brain-inspired, special-purpose computing platform based on nanoscale resistive memory (memristor) technology. NMSs represent a subclass of a broader movement in brain-inspired computing called neuromorphic systems, which were pioneered by Carver Mead in the late 1980s. The primary goal of both neuromorphic and neuromemristive systems is to provide levels of intelligent information processing, adaptation/learning, energy/area efficiency, and noise/fault tolerance in niche application domains that are not achievable using conventional computing paradigms. Conventional computer architectures are limited in these aspects because of their adherence to the von Neumann model, where the hardware is digital and immutable, computation is sequential and precise, and a distinct separation exists between computation and memory. Although the von Neumann model is unparalleled for well-defined sequential problems (e.g. arithmetic and logic), it is ill-suited to application domains such as visual information processing, where problems are not well-posed, data are analog and noisy, and solutions are inherently parallel. Mead and many researchers before him recognized
annotation makes it possible to identify the users' search intention and to adjust the results according to the context of the information. The present research proposes a model for information retrieval with semantic annotation that helps the user retrieve the most relevant information among all the information available on the web. In the model, three components (Trace-Indexing, Processing, and Presentation) are developed, which identify the user's information need through the processing, selection, and subsequent publication of the retrieved information. The crawling and indexing component identifies available web sites, extracts information, and performs semantic annotation by applying different information processing techniques. The processing component analyzes the user's preferences and processes the submitted query to calculate the similarity of the indexed information. Subsequently, the results are sorted according to relevance so that the Presentation component shows a quantity of information that can be assimilated by the users. For the validation of the proposal, the metrics of precision and recall were used to demonstrate the quality and relevance of information retrieval with semantic annotation.
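These two evaluation metrics take their standard IR form; the following is a generic sketch (not the study's evaluation code):

```python
def precision_recall(retrieved, relevant):
    """Standard IR metrics: precision is the fraction of retrieved
    items that are relevant; recall is the fraction of relevant
    items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```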
"DICOM Network" is the only information system in this region that handles the storage of medical image data from different sources at the national and international level. The application is based on cloud technologies, which make it possible to distribute it across different locations and to dynamically increase resources on demand. One of the benefits of this system is that it uses the resources of a scientific cloud infrastructure, making the investigations accessible both to specialists for daily activities and to scientists for research. Other applications in this domain are restricted to use within one specific institution and do not provide tools for secure data exchange within the application or between organizations. Most such information systems are either based on open-source software, which often has insufficient quality or limited functionality, or are specialized PACS provided by the equipment supplier, adapted only to the target equipment and without the ability to be extended into larger information systems.
Dynamic task allocation in a robotic swarm is a necessary process for proper management of the swarm. It allows the identified tasks to be distributed among the swarm of robots in such a way that a pre-defined proportion of execution of those tasks is achieved. In this context, there is no central unit to take care of the task allocation, so any proposed algorithm must be distributed, allowing each and every robot in the swarm to identify the task it must perform. This paper proposes a distributed control algorithm to implement dynamic task allocation in a swarm robotics environment. The algorithm is inspired by particle swarm optimization. In this context, each robot in the swarm must run the algorithm periodically in order to control the underlying actions and decisions. The algorithm was implemented on real ELISA III swarm robots and extensively tested. The algorithm is effective and the corresponding performance is promising.
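As an illustration of the target-proportion idea (a deliberately simplified greedy rule, not the paper's PSO-inspired algorithm), each robot could switch to the task whose observed share falls furthest below its pre-defined proportion:

```python
def choose_task(target_proportions, observed_counts):
    """Pick the task with the largest deficit between its target
    proportion and its currently observed share of the swarm.
    Both arguments are dicts keyed by task name (illustrative)."""
    total = sum(observed_counts.values()) or 1  # avoid division by zero
    deficits = {
        task: target_proportions[task] - observed_counts.get(task, 0) / total
        for task in target_proportions
    }
    return max(deficits, key=deficits.get)
```

Run periodically on every robot with locally observed counts, a rule of this shape drives the swarm's task distribution towards the pre-defined proportions without any central unit.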
Background: In recent years, information technology has been introduced in the nursing departments of many hospitals to support their daily tasks. Nurses are the largest end-user group in Hospital Information Systems (HISs). This study was designed to evaluate data processing in the Nursing Information Systems (NISs) utilized in many university hospitals in Iran. Methods and Materials: This was a cross-sectional study. The population comprised all nurse managers and NIS users of the five training hospitals in Khorramabad city (N = 71). The nursing subset of the HIS-Monitor questionnaire was used to collect the data. Data were analyzed by the descriptive-analytical method and inductive content analysis. Results: The results indicated that the nurses participating in the study did not take desirable advantage of paper (2.02) or computerized (2.34) information processing tools to perform nursing tasks. Moreover, the less work experience nurses have, the more they utilize computer tools for processing patient discharge information. The "readability of patient information" and "repetitive and time-consuming documentation" were stated as the most important expectation and problem regarding the HIS by the participating nurses, respectively. Conclusions: The nurses participating in the present study utilized paper and computerized information processing tools together to perform nursing practices. Therefore, it is recommended that the nursing process redesign coincide with NIS implementation in health care centers.