repertoire of behaviours from an instructor, which it may subsequently refine and recall using neural adaptive techniques. Their methodology has been successfully tested with a simulated robot performing a variety of navigation tasks: travelling from a start point to a target point without colliding with the obstacles present in its path. Althoefer et al.  have reported a navigation system for robotic manipulators. Their control system combines a repelling influence, related to the distance between the manipulator and nearby obstacles, with an attracting influence, produced by the angular difference between the actual and final manipulator configurations, to generate the actuating motor commands. They used fuzzy logic to implement these behaviours, which leads to a transparent system that can be tuned by hand or by a learning algorithm. The proposed learning algorithm, based on reinforcement-learning neural network techniques, can adapt the navigator to the idiosyncratic requirements of particular manipulators, as well as the environments they operate in. Their method, however, is suited to the navigation of a single mobile robot.
The problem of motion planning in the presence of obstacles has been extensively studied over the last decade. The main task of path planning for autonomous robot manipulators is to find an optimal collision-free trajectory from an initial to a final configuration. Since the 1960s, research on mobile robots has been an emerging area in the field of automation and development. A vision-guided autonomous mobile machine named Shakey, designed in 1966 at the Stanford Research Institute, is one of the earliest mobile robots. Until recently, work has concentrated on the control of individual robots. Over the last two decades, however, there has been great interest in the scientific community in the coordination of multiple mobile robots. This interest has stemmed both from practical considerations (multiple robots can handle tasks that individual machines cannot, for instance carrying large, bulky and heavy loads) and from a desire to create artificial systems that mimic nature, in particular by exhibiting some of the primary behaviours observed in human and other animal societies. Many important contributions to this problem have been made in recent years. The design goal of path planning is to enable a mobile robot to navigate safely and efficiently, without collisions, to a target position in an unknown and complex environment.
This work addresses the problem of intelligent control and path planning of multiple mobile robots. Soft-computing methods based on three main approaches are presented: 1) the Bacterial Foraging Optimization Algorithm, 2) the Radial Basis Function Network and 3) the Bees Algorithm. Initially, the Bacterial Foraging Optimization Algorithm (BFOA) with a constant step size is analyzed for the navigation of mobile robots. The step size is then made adaptive to develop an Adaptive Bacterial Foraging Optimization (ABFO) controller. Further, another controller using a radial basis function neural network has been developed for mobile robot navigation. A number of training patterns are used to train the RBFN controller for the different conditions that arise during navigation. Moreover, the Bees Algorithm has been used for path planning of the mobile robots in unknown environments. A new fitness function has been used to perform the essential navigational tasks effectively and efficiently.
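The move from a constant to an adaptive step size can be sketched in a few lines. The fragment below is an illustrative chemotaxis loop, not the thesis's implementation: the cost is simply the distance to the goal, each bacterium's step shrinks as its cost decreases (one common adaptive scheme), and all names and parameter values are our own assumptions.

```python
import numpy as np

def abfo_navigate(start, goal, n_bacteria=4, n_steps=40,
                  step0=1.0, lam=5.0, n_swim=4, seed=0):
    """Illustrative ABFO chemotaxis: tumble in a random direction, swim
    while the cost improves, with a step size that shrinks near the goal."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, dtype=float)
    cost = lambda p: float(np.linalg.norm(p - goal))
    pos = np.tile(np.asarray(start, dtype=float), (n_bacteria, 1))
    best_p, best_j = pos[0].copy(), cost(pos[0])
    for _ in range(n_steps):
        for i in range(n_bacteria):
            j = cost(pos[i])
            step = step0 * j / (j + lam)   # adaptive: large far away, small near goal
            d = rng.standard_normal(2)
            d /= np.linalg.norm(d)
            pos[i] = pos[i] + step * d      # tumble
            for _ in range(n_swim):         # swim while the cost keeps improving
                j_new = cost(pos[i])
                if j_new >= j:
                    break
                j = j_new
                pos[i] = pos[i] + step * d
            if cost(pos[i]) < best_j:
                best_j = cost(pos[i])
                best_p = pos[i].copy()
    return best_p, best_j
```

With a constant step the bacteria keep oscillating once they are close to the goal; tying the step to the remaining cost removes that oscillation, which is the motivation for the adaptive (ABFO) variant described above.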
ABSTRACT: In this paper, we study how and to what extent multiple mobile robots can be guided in a desired direction to explore an unknown environment and map an area of interest. The scope of the work focuses mainly on controller design for multiple robots' path planning and motion coordination. A fuzzy control methodology is developed to: 1) ensure effective path planning and motion coordination of the mobile robots, and 2) avoid collisions and store the positions of the obstacles. A genetic algorithm is implemented to optimize the different parameters of the controllers and to optimize the robots' paths in the case of collision avoidance. To store the obstacles and map the area, the sensors localize the obstacles and store their coordinates in a binary matrix. The algorithms and their implementation are explained, together with experimental results that illustrate the efficiency and performance of the study.
Abstract: This paper describes developments in basic techniques for mobile robot navigation during the last 10 years. Nowadays, mobile robots are widely used in many industries for performing different activities. Controlling a robot is generally done using a remote control, which can only operate the robot up to a fixed distance. We discuss three basic techniques for mobile robot navigation. The first is a neuro-fuzzy approach, which combines a neural network with fuzzy logic. The second is a radio-frequency (RF) technique, a control system in which the mobile robot is controlled through mobile and wireless RF communication. The third is robot navigation using a sensor network embedded in the environment. The sensor nodes act as signposts for the robot to follow, thus obviating the need for a map or localization on the part of the robot. Navigation directions are computed within the network (not on the robot) using value iteration.
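The third technique can be made concrete with a small sketch (our illustration; the node names and unit hop cost are assumptions, not taken from the cited system). Each sensor node repeatedly updates its value to one plus the smallest value among its neighbours; the robot then simply hops to the lowest-valued neighbour of its current node:

```python
import math

def value_iteration(neighbours, goal, iters=100):
    """Compute hop-count-to-goal values over a sensor-node graph.
    neighbours: dict mapping each node to its list of adjacent nodes."""
    V = {n: (0.0 if n == goal else math.inf) for n in neighbours}
    for _ in range(iters):
        for n in neighbours:
            if n != goal:
                V[n] = 1.0 + min(V[m] for m in neighbours[n])
    return V

def next_node(V, neighbours, current):
    """The robot needs no map or self-localization:
    it just follows decreasing values toward the goal."""
    return min(neighbours[current], key=lambda m: V[m])
```

For a chain of nodes `{'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}` with goal `'C'`, the values converge to 2, 1 and 0 hops, so a robot at `'A'` is directed to `'B'`.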
In the last three decades, the idea of robots doing human work, earlier considered impossible and mere imagination, has become reality. Robotic devices and machinery are now becoming part of human life. Nowadays, robots are commonly used in the service industry, the armed forces, the manufacturing industry, etc. One of the most challenging and demanding fields of robotics is wheeled mobile robots. One reason for this is that a mobile robot needs mobility that allows it to move unrestrained all over its environment. As given in , a mobile robot can move through its surroundings in many different ways. For a mobile robot, therefore, how it moves in its environment depends on the type of approach used for its motion. A substantial portion of wheeled-mobile-robotics research involves developing robot models that mimic car-like motion without the help of any human being. It has also been observed in well-known work in the field of mobile robotics that the system strategy need not be very complex; even with a simple one we can achieve the desired result. In mobile robotics it sometimes becomes necessary, in order to perform a specific job, to have a group of mobile robots in formation. The requirement for a group of mobile robots to perform a specific task in formation has led to the growth of a new research field. The formation-control problem for mobile robots can be defined as finding a system which ensures that the group of mobile robots can maintain a given formation or a precise set of formations. The objective of this work is to develop a leader-follower structure such that multiple mobile robots can move in formation under given conditions. We consider an obstacle-free environment for our work. First, we need to know how a mobile robot moves in its surroundings; to understand this process we need to understand the kinematic equations of motion.
Kinematics is the geometrical or mathematical analysis of how a system moves in its environment, without taking into consideration the forces that cause this motion.
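For a differential-drive mobile robot, the standard unicycle kinematic model is x' = v cos(theta), y' = v sin(theta), theta' = omega, where v and omega are the linear and angular velocities. A minimal Euler-integration sketch of these equations (the function and step scheme are our own illustration):

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle kinematic model:
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Driving straight (omega = 0) from the origin with v = 1 for one second moves the robot to (1, 0) without changing its heading; a nonzero omega curves the path, which is exactly the behaviour a formation controller must account for.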
Navigation of multiple mobile robots using a neuro-fuzzy controller is discussed in this paper. In the neuro-fuzzy controller, the output from the neural network is fed as an input to the fuzzy controller, and the final outputs from the fuzzy controller are used for motion control of the robots. The inputs to the neural network are obtained from the robot sensors (the left, front and right obstacle distances and the target angle). The neural network consists of four layers, and the back-propagation algorithm is used to train it. The output from the neural network is the initial steering angle. The inputs to the fuzzy controller are the initial steering angle (the output from the neural network) and the left, front and right obstacle distances. The outputs from the fuzzy controller are the crisp values of the left and right wheel velocities, from which the final steering angle of a robot is calculated. The neuro-fuzzy controller is used to avoid obstacles of various shapes and to reach the target. A Petri-net model has been developed and is used to handle inter-robot collisions during multiple-mobile-robot navigation. A piece of software has been developed under the Windows environment to implement the neuro-fuzzy controller for robot navigation (appendix-1). Six real mobile robots have been built in the laboratory for navigation in reality (appendix-2). Using the above algorithm, it is envisaged that multiple mobile robots (up to one thousand) can navigate successfully, avoiding the obstacles placed in the environment.
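The fuzzy stage of such a pipeline can be illustrated with a deliberately tiny two-rule controller (ours, not the paper's rule base; the shoulder memberships, gains and the 0.8 avoidance bias are all assumed values): when the front obstacle is far, drive fast and steer by the neural stage's initial steering angle; when it is near, slow down and turn away harder.

```python
def fuzzy_wheels(front_dist, steer, v_max=1.0):
    """Map the initial steering angle and the (normalized, 0..1) front
    obstacle distance to crisp left/right wheel velocities via two rules."""
    near = max(0.0, 1.0 - front_dist)   # membership degree of "obstacle near"
    far = 1.0 - near                    # membership degree of "obstacle far"
    # rule outputs, combined by weighted-average defuzzification:
    v = far * v_max + near * 0.2 * v_max        # near obstacle -> slow down
    turn = far * steer + near * (steer + 0.8)   # near obstacle -> turn harder
    return v - 0.5 * turn, v + 0.5 * turn       # left and right wheel speeds
```

With a clear path (`front_dist = 1.0`) and zero steering angle, both wheels run at `v_max`; with a wall ahead (`front_dist = 0.0`), the right wheel spins faster than the left and the robot turns away from the obstacle.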
Topological maps represent the environment as a graph (Siegwart and Nourbakhsh, 2004). Reactive techniques, on the other hand, do not require a previous environment model; these approaches rely on a sensorial system to determine the state of the vehicle and to execute an action (Dudek and Jenkin, 2000). The navigation challenge for a robot operating in a greenhouse involves planning a reference trajectory and reacting to unforeseen events (workers, boxes, tools, etc.). For this reason, the objective of this project is to develop a hybrid solution (figure 2). When the robot navigates the greenhouse, a deliberative method is employed if a map exists; when there is no map, a pseudo-reactive method is used instead. Moreover, along the path a sensorial map is built, to be employed by the deliberative module in later runs. These layers are discussed in the following section. The two previous approaches use a security layer, based on on/off sensors, to avoid collisions. Finally, there is a low-level control or servo-control layer, composed of two PID controllers that regulate the speed of the tracks. This article discusses each method separately, but the combination of both techniques is relatively easy.
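The servo layer's track-speed regulation follows the textbook PID form u = Kp*e + Ki*integral(e) + Kd*de/dt. A minimal discrete-time sketch (our illustration; the article does not give its gains or sampling time):

```python
class PID:
    """Discrete PID regulator, one instance per track."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulated error term
        deriv = (err - self.prev_err) / self.dt     # finite-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Two such controllers, one per track, turn the track-speed setpoints coming from the upper layers into motor commands; commanding different left/right setpoints produces the skid-steer turning of the tracked platform.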
Robots are becoming more and more autonomous, as they operate in increasingly complex environments and carry out increasingly complex tasks. Whatever their autonomy skills (from "simple" route-following capabilities to embedded complex AI planning algorithms), their autonomy always relies on a navigation architecture. This navigation architecture provides both localization information and locomotion capabilities.
Evidence from behavioral experiments suggests that insects use panoramic views of their environments as cues for visual navigation. In particular, the appearance of ground objects in front of the sky influences the navigational behavior of insects. However, changes in lighting conditions (over hours, days, or possibly seasons) significantly affect the appearance of the sky and of ground objects. One possible solution to this problem is to extract a binary skyline by an illumination-invariant classification of the environment into two classes, ground objects and sky. In this section we examine the idea of using two different color channels available to many insects (UV and green) to perform this classification. First, we collected skyline databases of suburban scenes, where the skyline is dominated by trees and artificial objects such as houses, as well as of mineral skylines (stones, sand, earth). On these databases, we show that a 'local' UV classification with adaptive thresholds applied to individual images leads to the most reliable classification. Furthermore, we show that a 'global' classification with fixed thresholds (trained on an image dataset recorded over several days) using UV-only information is only slightly worse than using both the UV and green channels. Second, we collected a wide variety of ground objects to examine their spectral characteristics under different lighting conditions. On the one hand, we found that the special case of diffusely illuminated minerals makes it more difficult to reliably separate ground objects from the sky. On the other hand, the spectral characteristics of this collection of ground objects agree well with the data collected in the skyline databases, increasing, due to the high variety of ground objects, the validity of our findings for novel environments. Third, we collected omnidirectional images of skylines, as often used for visual navigation tasks, using a UV-reflective hyperbolic mirror.
We could show that 'local' separation techniques can be adapted to panoramic images by splitting the image into segments and finding an individual threshold for each segment. In contrast, this is not possible for 'global' separation techniques. This chapter is a compilation of our publications Differt and Möller (2015) and Differt and Möller (2016).
The navigation system was validated over the course of four experimental sessions, one at the Gràcia site and three at the Campus. An external computer, connected to the on-board computer via wireless, was used to send manual go-to requests (XY coordinates over the map) to the navigation system, and for on-line monitoring using our GUI. Note that these are high-level requests, equivalent to "send a robot to the south-east door of the A5 building". Goals in the experiments include both long distances across the campus (the longest possible path between two points being around 150 m) and goals closer to each other, to force the robot (and thus the path-planning algorithm) through more complex situations, such as around the trees in the square or around the columns of the A5/A6 buildings. Requests were often chained to keep the robot in motion, sending the robot to a new location just before it reached the current goal. We typically chose closer goals to keep some control over the trajectories and have the robot explore all of the area.
The paper is organized as follows. Before introducing our environment modelling and robot localization methodology, we present a brief state of the art on metric and topological representations in section II. In section III, we design the hybrid model, mixing both topological and geometrical aspects for navigation purposes. It is shown that the structure of the topological model proceeds from the application of the sensor-based control strategy presented in Part I of this work. In section IV, we focus on the construction of the geometrical model; a motion-estimation method based on laser telemetric measurements is presented. Experimental results are discussed in section V, and in section VI some conclusions and comments are presented.
Each WMR can share its state information with its neighbouring follower. Note that the WMR receiving the input reference commands is called the leader mobile robot, and the other robots are the followers. The linear and angular velocity inputs of each follower are formulated using a first-order consensus protocol, and the heading angle and velocity of the followers are synchronized to the corresponding values of the leader robot. The WMRs are thus synchronized to move off in formation with the same speed and orientation using consensus protocols. The separation distance and deviation angle between the leader and the follower robot motion are not controlled through the consensus protocol. The second approach, also called the distance-angle approach, aims to control the desired distance and deviation angle between the leader and the follower robot. This approach is formulated based on Lyapunov analysis [24]. The linear and angular velocities of the follower are formulated such that the system is asymptotically stable in the sense of Lyapunov. In order to prescribe a formation maneuver, the leader's velocity commands need to be specified from the desired position and angle between the leader and the follower.
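A first-order consensus protocol of the kind described can be sketched on the heading angles alone (our illustration; the communication graph, gain and step size are assumed). Each follower steers toward its neighbours' headings while the leader holds the reference:

```python
def consensus_step(theta, neighbours, gain, dt):
    """One step of first-order consensus on heading angles.
    theta[0] is the leader and is held fixed; neighbours maps each
    follower index to the indices it can receive state from."""
    new = theta[:]
    for i in range(1, len(theta)):  # followers only
        u = sum(theta[j] - theta[i] for j in neighbours[i])
        new[i] = theta[i] + gain * u * dt
    return new
```

Iterating this step on a chain graph (leader 0 to follower 1 to follower 2) drives every heading to the leader's value, which is the synchronization behaviour described above; applying the same protocol to the linear velocities synchronizes the speeds.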
In recent years, the use of mobile robots for performing different tasks has been increasing rapidly. It is now common to find robotic systems in industrial environments, military applications, agricultural systems and even in businesses, carrying out increasingly sophisticated tasks. Development in the mobile robotics field has moved on from research in universities and businesses to use in everyday environments. The advances made in mobile robotics have been transferred to other fields, such as autonomous driving or space exploration. The obstacle-avoidance algorithms, local and global path-planning techniques and perception systems developed in mobile robotics are used in the new self-driving vehicles.
Learning the relationship between perceptual information and actions is dominant in the literature; we call this end-to-end learning. Previous research works differ in how this relationship is represented. The first paradigm learns the mapping from perceptual data to action commands 5–8 directly. To learn this mapping, De Rengervé et al. 5 used an artificial neural network to recognize places from the panorama. The recognized places, combined with odometry and compass data, are used to learn the motion commands with Gaussian mixture models. Similarly, in the study by Choi et al., 7 Gaussian process regression, another statistical technique, was leveraged to obtain the navigation policy from sequences of sensor-data and action pairs. This method also allows demonstrations from casual or novice users, not limited to experts. The associations between percepts and actions can also be described by a set of fuzzy rules, 6 and a predictive sequence learning (PSL) algorithm 6 is used to learn these associations and to predict the expected sensor events in response to executed control commands. In addition, with PSL and simulation theory, the robot can generate the experience of novel sequences of events according to the learned relationships. 8
the distance to the goal. On the other hand, for RL methods with delayed reward signals, the outcome is given only at the end of a trial. Whereas in the former case hints are given at every moment, the latter case defines a much less informative reward function, which can slow down the learning process, although it is more biologically plausible. RL algorithms such as Q-learning [?] learn an action-value function Q(s, a) which indicates the utility of an action a in a state s. They try to maximize the mean expected future reward by using a value-iteration update rule. Whereas these rules are applied in an online fashion, in fitted Q iteration  Q(s, a) is learned in batch mode with supervised learning techniques such as linear regression and artificial neural networks.
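The online update rule can be sketched on a one-dimensional corridor with a delayed reward (a toy example of ours; the states, parameters and the reward of 1 at the goal are assumed, not taken from the cited work):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
               eps=0.2, seed=0):
    """Tabular Q-learning where the reward is given only at the end
    of a trial. Actions: 0 = step left, 1 = step right; the goal is
    the rightmost state."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else int(Q[s][1] > Q[s][0])
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else 0.0   # delayed reward signal
            # value-iteration style update of the action-value function
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves right in every non-goal state, even though informative feedback only appeared at the end of each trial, which illustrates why delayed-reward settings learn more slowly than shaped, distance-based rewards.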
In order to form a testbed, an electric wheelchair is used. The two original motors are connected to an industrial PLC that performs the low-level control of the device (odometric localization, path following, safety). A Windows PC, connected to the PLC, runs the high-level application, which consists of the HMI, the communication with external devices (Kinect, eye tracker and monitor) and the path planning. However, the user interface of the developed prototype needs to be improved, and its functionality needs to be extended to offer competitive value to the start-up company owning this product. The thesis was performed as part of an exchange program between the Tampere University of Technology and the University of Trento, Italy. It was carried out as part of a teamwork project: not a standalone research topic, but part of a large continuous activity started more than one year ago.
and the simulation environment ready, the next step is to put things together to complete and test the implementation of the shared-control algorithm. As seen in Figure 3.1, all the different components implemented communicate with each other using ROS. ROS is an open-source meta operating system that works on top of Linux and macOS. It can be seen as a set of tools, libraries and conventions put together to simplify robotics applications and enable rapid prototyping and testing of robotic software. ROS provides APIs for multiple programming languages, including Python, C++ and MATLAB, giving the user complete freedom to choose the preferred coding language. ROS also has its own messaging medium, based on publish/subscribe services, which lets code written in different languages communicate.
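The publish/subscribe pattern ROS builds on can be shown in a few lines of plain Python (a sketch of the pattern only, not ROS's actual API; the topic names are assumptions):

```python
from collections import defaultdict

class Broker:
    """Toy in-process message broker: publishers and subscribers share
    only a topic name, never a direct reference to each other."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # deliver the message to every callback registered on this topic
        for cb in self._subs[topic]:
            cb(message)
```

Here `broker.subscribe('/cmd_vel', handler)` followed by `broker.publish('/cmd_vel', msg)` delivers `msg` to every registered handler, regardless of where each callback lives; in real ROS the broker role is played by the master (for name resolution) plus peer-to-peer connections between nodes, which is what lets nodes written in different languages interoperate.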
Suppose that a mobile robot (and therefore its camera) attempts to move under pure translation. Due to an uneven floor surface and hysteresis in the robot's mechanics, the motion is unlikely to be pure translation. However, if the rotation is relatively small with respect to the translation, assuming pure translation and enforcing a homography corresponding to pure translation allows correlation-based techniques to be used. The key point here is that this allows all ground-plane pixels which have local intensity/color variation to be used in the simultaneous estimation of the ground-plane homography and the grouping of ground-plane pixels.
The goal of this section, presented in , is to develop an open-source control framework that allows the Crazyflie nano quadrotors to be employed as a test bench for the development of advanced control and coordination algorithms. In particular, the guidance, control and navigation layers have been designed to let a group of nano quadrotors be controlled simultaneously using a motion-tracking system. The control layer is based on the trajectory-tracking controller presented in Chapter 3. This makes it possible to exploit the agility of the selected nano quadrotors to perform acrobatic maneuvers, as well as to effectively recover the vehicle from an arbitrary initial configuration. The cascade structure of the controller is exploited to distribute the computation between the ground and the on-board embedded processor. More specifically, the attitude loop is implemented on the on-board processor, while the outer position loop as well as the overall guidance layer are implemented on a remote ground station, which consists of a PC. The guidance layer is in charge of generating the reference position and orientation to be tracked by the cascade controller, and of coordinating the multiple nano quadrotors.