service robot designed to interact with people with mild dementia. It was equipped with telepresence software, allowing remote medical consultation, and with a speech recognition system allowing the user to communicate with the robot. The robot navigated an indoor environment by first building a map using learning techniques and then moving to arbitrary locations using this map. ‘CompanionAble’, developed in 2011 by Gross et al. [11], was designed to assist elderly people suffering from mild cognitive impairment in home environments. This project was geared towards developing home robots capable of detecting hazardous events such as falls and of using telepresence to let the patient communicate with caregivers. With a growing fraction of the elderly living alone in the US [12], such robots are well placed to fill a void in the care afforded to this section of the population. ‘Hein II’, developed in 2011 by Tani et al. [13], was designed as a person follower for home oxygen therapy patients. Such patients must remain tethered to an oxygen supply tank, which can be physically exhausting. A large number of people in Japan, where this robot was designed, depend on home oxygen therapy [14], and such a robotic follower would improve their quality of life. Thus service robots, or ‘socially assistive robots’ as they are also called [15], are gradually maturing into a useful technology.
models. The velocity obstacle (VO) approach is considered a simple method to assess the risk of collision between two circular objects, using the collision cone principle. Many studies have implemented the VO algorithm in simulation, assuming that obstacles are circular and that their velocities and positions are known [Yamamoto et al., 2001; Kluge, 2003; Kluge and Prassler, 2004; Zhiqiang et al., 2008; Fulgenzi et al., 2007; Shiller et al., 2010]. However, the VO approach has two limitations when applied in indoor environments. The first challenge is that, in the real world, not all obstacles are circular; the second is that the mobile robot sometimes cannot move towards its goal because all the velocities leading to the goal lie within the collision cones. Hence, in this project, a method is proposed to extract the collision cones of non-circular objects, where the obstacle size and collision time are considered in weighting the robot's velocities, and a virtual obstacle principle is proposed to avoid unobserved moving objects. Many simulations and experiments have been posted on the following website: https://www.facebook.com/pages/Robotics-Group/173949736063997
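For the circular case that the VO literature above assumes, membership of a relative velocity in the collision cone can be sketched as follows; the function and variable names are illustrative, not taken from the cited works:

```python
import math

def in_collision_cone(rel_pos, rel_vel, combined_radius):
    """Check whether the relative velocity lies inside the collision cone
    formed by the tangent lines from the robot to the inflated obstacle."""
    dist = math.hypot(rel_pos[0], rel_pos[1])
    if dist <= combined_radius:
        return True  # the two discs already overlap
    if math.hypot(rel_vel[0], rel_vel[1]) < 1e-9:
        return False  # no relative motion, so no future collision
    # Half-angle of the cone subtended by the inflated obstacle.
    half_angle = math.asin(combined_radius / dist)
    # Angle between the relative velocity and the line of sight to the obstacle.
    angle_to_obstacle = math.atan2(rel_pos[1], rel_pos[0])
    vel_angle = math.atan2(rel_vel[1], rel_vel[0])
    diff = abs((vel_angle - angle_to_obstacle + math.pi) % (2 * math.pi) - math.pi)
    # A collision is possible only when moving toward the obstacle inside the cone.
    return diff <= half_angle
```

A velocity heading straight at the obstacle falls inside the cone, while one perpendicular to the line of sight (at sufficient distance) does not; the non-circular case the project addresses requires replacing the single tangent pair with cone edges computed from the obstacle's contour.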
Navigation is a fundamental requirement of autonomous mobile robots. Referring to , navigation can be defined as the "science of determining position, location, course, and distance travelled to a known destination". Navigation relies on localization of the robot to carry out a required task. Localization is a very important issue for a mobile robot moving autonomously towards a goal position in any environment: the robot cannot execute navigation operations without localization information. As the mobile robot moves around its environment, its actual position and orientation always differ from the position and orientation it is commanded to hold. There are several approaches to robot localization, for example trajectory planning in the presence of obstacles . The goal of this method is to find the optimal robot trajectory, consisting of both the path and the motion along the path, that avoids collision with moving obstacles. In previous research, a widely used localization algorithm is dead reckoning. Dead reckoning is the process of estimating the value of a variable quantity by taking an earlier value and adding whatever changes have occurred in the meantime.
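Applied to robot pose, the dead-reckoning idea above (take an earlier estimate and add the incremental change) can be sketched as a simple odometry update; the names and the midpoint-heading integration are illustrative assumptions:

```python
import math

def dead_reckon(pose, delta_dist, delta_theta):
    """Update an (x, y, theta) pose estimate from incremental wheel odometry:
    a distance travelled and a heading change since the last update."""
    x, y, theta = pose
    # Integrate the translation along the mean heading over the interval.
    theta_mid = theta + delta_theta / 2.0
    x += delta_dist * math.cos(theta_mid)
    y += delta_dist * math.sin(theta_mid)
    theta = (theta + delta_theta) % (2 * math.pi)
    return (x, y, theta)
```

Because each step adds onto the previous estimate, odometry errors accumulate without bound, which is exactly why the paragraph above stresses that the actual pose always drifts from the commanded one.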
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to the completion of their mission. Essential capabilities required for the operation of such robots are mapping, localization and navigation. Solving all of these tasks robustly presents a substantial difficulty as these components are usually interconnected, i.e., a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they highly depend on the sensors used by the robot and on the type of the environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or for detecting dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that can work seamlessly with different types of sensors, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that can operate robustly in indoor and outdoor environments equally well and thus extend the application areas of mobile robots.
PID controllers date back to 1890s governor design and were subsequently developed for automatic ship steering. Elmer Sperry developed a PID-type controller in 1911; however, the first published theoretical analysis of a PID controller was given by the Russian-American engineer Nicolas Minorsky in 1922. Early PID controllers were implemented as mechanical devices; these used a lever, spring, and mass, and were often energized by compressed air.
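In software, the same three-term law (proportional, integral, derivative) reduces to a few lines; this discrete-time sketch uses illustrative gain names and a fixed sample period:

```python
class PID:
    """Minimal discrete PID controller:
    u = Kp*e + Ki*integral(e)dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None  # no derivative on the very first sample

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt  # accumulate the integral term
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

With only the proportional gain set, the output is simply Kp times the error, which mirrors the behavior of the earliest purely proportional governors mentioned above.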
facing each other and having a discussion, the robot will come close to them and interact with them using voice commands to communicate available services, as can be seen in Fig 2.7. In contrast, when two humans are close to each other and located at certain angles, the robot will infer that they are having a private discussion and will not interrupt them. The system uses an LRF for detecting and tracking legs, a high-resolution color camera for face detection at long distances, and a depth camera for torso tracking. To decrease the processing time of face detection, a Haar-feature-based cascade classifier is used to detect the upper body, and a particle filter is then used for face tracking. Furthermore, a Kalman filter is used to track the position of each person. The positions of humans and their orientations are analyzed using the F-formation model to extract social cues. A decision tree is used by the robot to select the correct response. The experimental results show that the average time to identify the social situation for a group is 4.78 seconds, with accuracy ranging between 80% and 90%.
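A Kalman filter of the kind used here for person tracking can be illustrated, in its simplest scalar form, by one predict-update cycle; the noise values are assumed for the example, not taken from the paper:

```python
def kalman_1d(x, p, z, q=0.01, r=0.5):
    """One predict-update cycle of a scalar Kalman filter tracking a
    position estimate x with variance p, given a new measurement z.
    q is the process noise and r the measurement noise (assumed values)."""
    # Predict: a static-position model, so only the uncertainty grows.
    p = p + q
    # Update: the Kalman gain blends the prediction with the measurement.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p
```

Each measurement pulls the estimate toward the observation while shrinking the variance; the full person tracker would use a 2D state (position and velocity) per tracked human, but the gain computation follows the same pattern.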
In our first approach to the robot navigation topic, presented in Chapter 5, our initial hypothesis is that making the robot navigate according to a human model, the ESFM, will make it more accepted by humans. We posed the following questions in the introductory Chapter 1: should robots behave like a person in order to be considered as equals by people, or should robots take account of humans' trajectories in their own way? After this work, we conclude that our initial hypothesis is only valid in simple situations (few obstacles and moving people). In more complex situations, the robot is not able to behave like a person, because the robotic platform studied in this thesis has wheels and its motion model differs from a pedestrian's. In addition, we realized that the embodiment of the robot would never be perceived as a person, and hence it is wrong to make it move as if it were a person if no one would treat the robot as such. Chapter 5 has motivated a change in methodology for studying robot navigation: from trying to implement a fully characterized human navigation model embodied in a robot, to implementing a robot navigation that considers the people in the scene while developing its own specific strategies to succeed in social environments.
As a fundamental capability for mobile robots, navigation involves multiple tasks including localization, mapping, motion planning, and obstacle avoidance. In unknown environments, a robot has to construct a map of the environment while simultaneously keeping track of its own location within the map. This is known as simultaneous localization and mapping (SLAM). For urban and indoor environments, SLAM is especially important since GPS signals are often unavailable. Visual SLAM uses cameras as the primary sensor and is a highly attractive but challenging research topic. The major challenge lies in robustness to lighting variation and uneven feature distribution. Another challenge is to build semantic maps composed of high-level landmarks. To meet these challenges, we investigate feature fusion approaches for visual SLAM. The basic rationale is that since urban and indoor environments contain various feature types, such as points and lines, combining these features should improve robustness, while high-level landmarks can be defined as, or derived from, these combinations.
A road mapping and feature extraction method for mobile robot navigation in road roundabout and road-following environments is presented in this chapter. In this work, online mapping by the mobile robot, using a sensor fusion technique, extracts the road characteristics (such as road curbs, road borders, and roundabouts) that are then used with a path planning algorithm to enable the robot to move from a given start position to a predetermined goal. The sensor fusion combines several sensors, namely a laser range finder, a camera, and odometry, mounted on a new wheeled mobile robot prototype, to determine the optimal path of the robot and localize it within its environment. The local maps are developed using image preprocessing and processing algorithms and an artificial threshold on the LRF signal to recognize road environment parameters such as road curbs, width, and roundabouts. Path planning in the road environment is accomplished using a novel approach, the so-called Laser Simulator, to find the trajectory in the local maps developed by sensor fusion. Results show the capability of the wheeled mobile robot to effectively recognize the road environment, build a local map, and find the path in both road-following and roundabout scenarios.
The global navigation problem deals with navigation on a larger scale, in which the robot cannot observe the goal state from its initial position. A number of solutions have been proposed in the literature to address this problem. Most rely either on navigating with a pre-specified map or on constructing a map on the fly, and most also rely on some localization technique. Some work on robot navigation is landmark-based, relying on topological maps , which have a compact representation of the environment and do not depend on geometric accuracy. The downside of such approaches is that they suffer from sensor noise and from perceptual aliasing (i.e. the difficulty of distinguishing between similar landmarks). Metric approaches to localization based on Kalman filtering  provide precision; however, the representation is unimodal and hence cannot recover from a lost situation (misidentified features or states). Approaches developed in recent years based on 'Markov localization'  provide both accuracy and the multimodality needed to represent probability distributions of different kinds, but require significant processing power per update and hence are impractical for large environments. One attempt to solve this problem is presented in , where a sampling-based technique is used: rather than storing and updating a complex probability distribution, a number of samples are drawn from it. Other approaches utilize partially observable Markov decision process (POMDP) models to approximate distance information given a topological map and sensor and actuator characteristics . POMDP models for robotic navigation provide reliable performance, but fail in certain environments (e.g. symmetric ones) or suffer from large state spaces (i.e. state explosion). These approaches have different advantages, but also disadvantages and failure cases. Note that all of the above approaches assume that a map of the environment (topological and/or metric) is given a priori.
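The sampling-based idea (maintain a set of weighted samples instead of a full distribution) hinges on a resampling step that concentrates samples in likely regions; a low-variance resampler can be sketched as follows, with illustrative names:

```python
import random

def resample(particles, weights):
    """Low-variance resampling: draw len(particles) samples in proportion
    to their importance weights, walking a single comb of equally spaced
    pointers through the cumulative weight distribution."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    start = random.uniform(0.0, step)  # one random offset for the whole comb
    new_particles = []
    cum, i = weights[0], 0
    for m in range(n):
        target = start + m * step
        while cum < target:
            i += 1
            cum += weights[i]
        new_particles.append(particles[i])
    return new_particles
```

Samples with negligible weight (e.g. poses contradicted by the sensors) are dropped, while high-weight samples are duplicated, which is what lets the filter recover from the "lost robot" situations that unimodal Kalman-based localization cannot.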
None of the above approaches deal with highly dynamic environments in which the topology might change. Our approach, presented here, instruments the environment with a sensor network. An ant-like trail-laying algorithm is presented in , where 'virtual' trails are formed by a group of robots and navigation is accomplished through trail following. The shortcoming of the algorithm is that it depends on perfect communication among the members of the group. In addition, the 'virtual' trails are shared between the robots, which entails redundant sharing of the state space within the group. Moreover, a common localization space is assumed. We are broadly interested in the mutually beneficial collaboration between mobile robots and a static sensor network.
Vitor Filipe uses the Kinect sensor to identify optical imprints and uses them to guide a blind person. The Microsoft Kinect sensor supports a large feature set and has the ability to work in low-light environments. In Shrewsbury et al., the sensor is used to calculate the distance from the user to objects within its field of view . The depth image obtained is mapped and fed to a haptic glove wirelessly. Figure 2.3 shows the Kinect sensor, which includes a depth sensor and an RGB sensor. The application was developed using the C# programming language in order to test the system in real time. Based on depth data acquired by the Kinect sensor, the system is able to give information about the surrounding environment. However, this sensor needs to be plugged into a computer, making it inconvenient and uncomfortable for the user to carry. Moreover, there are no sound devices to deliver real-time information to the user.
Mobile robotics is a research area that deals with the control of autonomous or semi-autonomous vehicles. One of the most challenging topics in mobile robotics concerns operation (locomotion) in large-scale, complex environments that change dynamically and contain both static and mobile obstacles. To operate in this type of environment, the robot must be able to acquire and use knowledge about the environment, estimate its position within it, recognize obstacles, and respond in real time to the situations that can occur there. Moreover, all of these functionalities must operate together. The tasks of perceiving the environment, localizing itself within it, and moving across it are fundamental problems in the study of autonomous mobile robots .
In future work, an AMR will be equipped with an Nvidia Jetson AGX, which has 512 CUDA cores with 32 GB of 256-bit LPDDR4x memory and 137 GB/s of bandwidth. These specifications surpass those of the Asus and Quadro hardware thanks to the higher computational capacity afforded by its memory. Moreover, the training and path-following algorithms will be improved by adopting new scenarios and mathematical formulations, such as a path planning algorithm that can intelligently move the AMR to a specific location. To improve the AMR's adaptability to other indoor scenarios, other types of network will be tested. These networks will be implemented on the Nvidia Jetson AGX to improve the application's response time. An automatic labeler will also be developed.
fourth industrial revolution, but the surveillance industry still relies on humans for patrol. This puts the industry at risk because of human limitations. Using a mobile robot with a vision sensor for patrol can bring this industry to a new level. However, indoor corridor navigation poses a major challenge to this method. The objective of this project is to develop a navigation system using a vision sensor and to navigate the mobile robot in an indoor corridor environment. To perform this operation, a control system communicating over WLAN is developed to guide the movement of the mobile robot. In addition, a corridor-following system is needed that uses the vision sensor, applying the Sobel edge detection method and the Hough transform to obtain the vanishing point, to help the robot travel safely along the corridor. Both systems are executed in MATLAB and linked to the mobile robot through the WLAN connection. The system analyses the corridor condition based on different features and decides the direction in which to drive the robot. Images captured by the mobile robot are streamed to MATLAB in real time, and feedback is returned within a short time.
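The Sobel step named above computes image gradients whose strong responses form the edge map passed to the Hough transform; a minimal gradient-magnitude sketch (pure NumPy, illustrative of the method rather than the project's MATLAB code) is:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels; strong responses mark
    the edges (e.g. corridor floor lines) later fed to a Hough transform."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid convolution: slide the 3x3 window over the interior pixels.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

On a synthetic vertical step edge the magnitude is zero in flat regions and peaks along the step; in the real pipeline, the Hough transform would then accumulate votes for lines through these edge pixels and intersect them to estimate the vanishing point.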
Abstract— Building upon previous work that demonstrates the effectiveness of WiFi localization information per se, in this paper we contribute a mobile robot that autonomously navigates in indoor environments using WiFi sensory data. We model the world as a WiFi signature map with geometric constraints and introduce a continuous perceptual model of the environment generated from discrete graph-based WiFi signal strength sampling. We contribute a WiFi localization algorithm that continuously uses the perceptual model to update the robot's location in conjunction with its odometry data. We then briefly introduce a navigation approach that robustly uses the WiFi location estimates. We present the results of exhaustive tests of the WiFi localization, both independently and in conjunction with the navigation of our custom-built mobile robot, in extensive long autonomous runs.
This chapter focuses on visual motion analysis for the safe navigation of mobile robots in dynamic environments. A general framework has been designed for the visual steering of UAVs in unknown environments containing both static and dynamic objects. A series of robot vision algorithms are designed, implemented, and analyzed, particularly for solving the following problems: (1) flow measurement; (2) robust separation of camera egomotion and independent object motions; (3) 3D motion and structure recovery; and (4) real-time decision making for obstacle avoidance. Experimental evaluation based on both computer simulation and a real UAV system has shown that it is possible to use the image sequence captured by a single perspective camera for real-time 3D navigation of a UAV in dynamic environments with an arbitrary configuration of obstacles. The proposed framework, with integrated visual perception and active decision making, can be used not only as a stand-alone system for autonomous robot navigation but also as a pilot assistance system for remote operation.
This thesis explored the application of deep reinforcement learning to the problem of robot navigation. Q-learning, which has been shown to be capable of determining appropriate control actions directly from high-dimensional image inputs for Atari games and obstacle avoidance, was applied to learn policies that can successfully navigate to desired goal locations in simple and in complex, cluttered environments. Two modifications of the D3QN neural network architecture are proposed for estimating the value of control actions directly from depth images and the heading to the goal location. Each architecture features a different method of incorporating direction information into the D3QN architecture, so that the reinforcement learning agent knows in which direction to drive in order to approach target locations.
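At the core of both architectures is the standard Q-learning update; in its tabular form (a simplification of the deep variant used in the thesis, with illustrative names) it is:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q is a dict mapping state -> {action: value}."""
    # Bootstrap from the best action value available in the next state.
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    td_target = reward + gamma * best_next
    # Move the current estimate a fraction alpha toward the TD target.
    q[state][action] += alpha * (td_target - q[state][action])
    return q
```

In the deep variant, the table lookup is replaced by the D3QN network evaluated on a depth image plus goal heading, and the same temporal-difference target drives the gradient step instead of a direct table write.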
Where human beings cannot work safely and various obstacles are present, work can be carried out easily with the help of a mobile robot. In the current research, different environments are considered to find the navigation path for the mobile robot. The structure of the kinematic models of various types of wheeled mobile robots is also studied. The simulation is performed by writing code in MATLAB for different environment scenarios while avoiding obstacles present on the robot's path. The success of the robot navigation depends upon accurate measurement of the positions (of the mobile robot, obstacles, and destination point), the velocity of the mobile robot, and its rate of change of heading angle. Here, we use the fuzzy logic technique (FIS) for navigation of an autonomous mobile robot in an unknown environment. The simulation results show that the proposed method enables the mobile robot to reach the destination (or goal) safely without colliding with the obstacles. Experiments are also carried out on a lab-built mobile robot, and a comparison between the simulation and experimental results is made to establish the authenticity of the technique developed. The mobile robot, along with the technique, can be used in various remote and hazardous applications.
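A fuzzy inference step of the kind an FIS performs can be sketched as a two-rule, Mamdani-style blend; the membership functions and rule outputs below are invented for illustration and are not the thesis's actual rule base:

```python
def fuzzy_steering(obstacle_dist, goal_angle):
    """Blend two fuzzy rules into one steering command:
    rule 1: IF obstacle is near THEN turn hard away (normalised output 1.0);
    rule 2: IF obstacle is far  THEN turn toward the goal angle."""
    # Membership of 'near' falls linearly from 1 at 0 m to 0 at 2 m (assumed).
    near = max(0.0, min(1.0, (2.0 - obstacle_dist) / 2.0))
    far = 1.0 - near
    avoid_turn = 1.0        # consequent of rule 1
    goal_turn = goal_angle  # consequent of rule 2
    # Weighted-average defuzzification over the two rule activations.
    return (near * avoid_turn + far * goal_turn) / (near + far)
```

With no obstacle nearby the output simply steers toward the goal, and as the obstacle distance shrinks the avoidance rule smoothly dominates, which is the behavior that lets a fuzzy navigator trade off goal seeking against collision avoidance without a crisp switching threshold.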
As MARVIN operates in an indoor human environment, it is imperative that it has a robust method for detecting and avoiding obstacles. As explained in section 3.2.1, MARVIN utilises four sensing systems to detect obstacles: the Kinect, the LiDAR, whisker sensors and the ultrasonic network. The sensor data from these four sources have been processed into LaserScan messages, as explained in Chapter 4, but all have different ranges and FOVs. This sensor data must be fused so that the motion planner knows about all the sensor sources, while taking into account their differing parameters. This is typically achieved using an occupancy grid. An occupancy grid is essentially a local map centred at the robot's origin. The cells of the map are classed as either occupied or free space and are set by converting the sensor data from polar coordinates into local Cartesian coordinates. This is achieved in MARVIN using the ROS costmap_2d package.
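The polar-to-Cartesian conversion described above can be sketched as follows; this is a simplification of what costmap_2d does, with illustrative parameter names and a single occupied/free marking per beam:

```python
import math

def scan_to_grid(ranges, angle_min, angle_inc, size, resolution):
    """Mark occupied cells in a square occupancy grid centred on the robot
    from a polar LaserScan-style reading (one range per beam).
    resolution is the cell size in metres; size is the grid side in cells."""
    grid = [[0] * size for _ in range(size)]  # 0 = free, 1 = occupied
    origin = size // 2                         # robot sits at the grid centre
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # skip out-of-range / invalid returns
        angle = angle_min + i * angle_inc
        # Polar -> Cartesian -> grid indices.
        gx = origin + int(round(r * math.cos(angle) / resolution))
        gy = origin + int(round(r * math.sin(angle) / resolution))
        if 0 <= gx < size and 0 <= gy < size:
            grid[gy][gx] = 1  # mark the cell where the beam hit something
    return grid
```

Because every sensor's data has already been normalised into the same LaserScan form, the same conversion serves the Kinect, LiDAR, whisker and ultrasonic sources alike; the real costmap_2d additionally ray-traces each beam to clear the free cells between robot and hit point and inflates obstacles by the robot's footprint.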