In this work, we focused on the question of how to benefit from background information made available by the user in the form of a topometric graph. Metric information is available in many cases, for example when floor plans or, in our case, archeological drawings are available. When the user provides the background information as a sketch map, however, the proportions of the areas drawn in the sketch are likely to be incorrect or distorted. Hence, two questions remain: how to interpret information given by the user, and how to align the spatial representations of the robot with the given background information. Other research groups have presented interesting approaches for solving these questions. Skubic et al. have presented extensive research on how to interpret hand-drawn sketch maps for wayfinding, including the role of landmarks, spatial relations, and connections to linguistics [31, 155, 156]. Shah and Campbell [149, 150] also presented systems for interpreting sketches drawn by human users and showed in user studies that the interface is natural and easy to use even for novice users. While these approaches focus on spatial relations between the robot and landmarks, Boniardi et al. [25, 26] instead use floor plans sketched by a human user to localize and navigate in indoor environments. Based on a Monte Carlo localization algorithm, they additionally track two scaling variables in the robot's state space to account for distortions in the user's drawings, making the system robust against metric inaccuracies. Integrating such a sketch interpretation system with our system for generating efficient exploration strategies would lead to a natural interface for guiding a robot in exploration tasks. The sketch interpretation might be improved even further by including recent advances in sketch interpretation with neural networks that allow the algorithm to recognize shapes and symbols within the drawing.
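The idea of tracking scale variables in the particle state can be sketched as a toy particle filter. Everything below is an illustrative assumption, not Boniardi et al.'s implementation: the true scale factors, the beacon-style absolute position measurement, and all noise parameters are invented for the example.

```python
import math
import random

random.seed(1)

# Assumed setup: the user's sketch is distorted by unknown per-axis scale
# factors; the robot receives odometry in sketch units and noisy absolute
# position fixes in metres.
TRUE_SX, TRUE_SY = 1.4, 0.7          # metres per sketch unit (unknown to the filter)

N = 500
# Particle state: (x, y, sx, sy) -- pose in metres plus the two scale variables.
particles = [(0.0, 0.0, random.uniform(0.5, 2.0), random.uniform(0.5, 2.0))
             for _ in range(N)]

true_x = true_y = 0.0
for _ in range(25):
    d = (0.5, 0.5)                   # commanded motion, in sketch units
    true_x += TRUE_SX * d[0]
    true_y += TRUE_SY * d[1]
    z = (true_x + random.gauss(0, 0.1), true_y + random.gauss(0, 0.1))

    # Motion update: each particle converts sketch-frame odometry to metres
    # using its own scale hypothesis.
    particles = [(x + sx * d[0] + random.gauss(0, 0.05),
                  y + sy * d[1] + random.gauss(0, 0.05), sx, sy)
                 for x, y, sx, sy in particles]

    # Measurement update and resampling.
    w = [math.exp(-((x - z[0]) ** 2 + (y - z[1]) ** 2) / (2 * 0.3 ** 2))
         for x, y, _, _ in particles]
    particles = random.choices(particles, weights=w, k=N)
    # Roughen the scale variables slightly to avoid sample depletion.
    particles = [(x, y, sx + random.gauss(0, 0.01), sy + random.gauss(0, 0.01))
                 for x, y, sx, sy in particles]

est_sx = sum(p[2] for p in particles) / N
est_sy = sum(p[3] for p in particles) / N
```

Because particles whose scale hypotheses disagree with the true distortion accumulate position error at every step, resampling gradually concentrates the scale estimates near the true values, which is what makes the approach robust to metric inaccuracies in the drawing.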
A Commander verbally instructs a robot at a remote location, guiding the robot to move around and explore a physical space. The sensors and video camera on board the robot populate a map as it moves, enabling it to describe that environment and send photos at the Commander's request, but the communications bandwidth prohibits real-time video streaming or direct teleoperation. The robot is assumed capable of performing low- to intermediate-level tasks, but not more complex tasks involving multiple or quantified goals, without clear directions or plans for ordering subgoals. The physical implementation of the testbed is an indoor environment, containing several rooms and connecting hallways, located in a separate building from the Commander. We use a Clearpath Robotics Jackal, fitted with an RGB camera and LIDAR sensors, to operate in the environment.
Mobile, interactive robots that function in close proximity with humans require the ability to predict and plan around uncertain, dynamic human agents within their surroundings. Path prediction can be improved through the consideration of biomechanical aspects of human motion. Through a biomechanical study, we confirmed the existence of anticipatory turn indicators for human walking motion. The efficacy of these indicators was demonstrated using a target prediction algorithm to predict human motion in environments with predefined goals. Lastly, the effect of this predictive capability for planning paths was evaluated in simulation using anytime safe-interval path planning, and significant benefits were observed. Future directions for this work include evaluation of the system within an uncontrolled, physical scenario requiring human-robot interaction.
Navigation modules are capable of driving a robotic platform without direct human participation. However, in some specific contexts, it is preferable to give control to a human driver. The participation of a human driver in the control process while the navigation module is running raises the shared-control issue. This work presents a new approach for two-agent collaborative planning using optimal control theory and the three-layer architecture. In particular, the problem of a human and a navigation module collaboratively planning to follow a trajectory is analyzed. The collaborative plan executed by the platform is a weighted summation of each agent's control signal. As a result, the proposed architecture can be set to work in autonomous mode, in human direct-control mode, or in any aggregation of these two operating modes. A collaborative obstacle avoidance maneuver is used to validate this approach. The proposed collaborative architecture could be used for smart wheelchairs, telerobotics, and unmanned vehicle applications. Keywords: Robotic Architecture; Shared Control; Three-Layer Architecture; Cooperative Control
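The weighted summation of the two agents' control signals can be expressed in a few lines. This is a minimal sketch: the weight `alpha` and the two-element (linear, angular) velocity commands are illustrative assumptions, not the paper's notation.

```python
def blend(u_human, u_auto, alpha):
    """Collaborative plan as a weighted summation of the two agents' control
    signals. alpha = 1.0 gives direct human control, alpha = 0.0 gives fully
    autonomous operation; intermediate values aggregate the two modes."""
    return tuple(alpha * h + (1.0 - alpha) * a
                 for h, a in zip(u_human, u_auto))

# Example: (linear velocity, angular velocity) commands from each agent.
u = blend((1.0, 0.2), (0.8, -0.1), alpha=0.5)
```

Exposing `alpha` as a runtime parameter is what lets a single architecture cover the whole spectrum from autonomous mode to human direct control.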
If a manned vehicle is involved in an accident, generally a human driver will take the ‘blame’, with little or no controversy. If a similar injury to a human is caused by an unmanned vehicle, who is to ‘blame’? The owner of the vehicle? The manufacturer? The engineers? The OEM equipment supplier? Perhaps all parties would share some blame. It would be certain, though, that unlike a human ‘mistake’, doubt would fall upon all other unmanned vehicles in operation as well. This situation must be avoided at all costs if the technology is to move forward.
The time to produce a 'plan', by whatever method is eventually adopted, is preferably kept below the time taken for the vehicle to reach the location where that decision is needed. The term 'real time' thus has a more acute meaning, distinct from that normally employed when discussing, for example, an airline reservation system. This is especially true for a vehicle that can damage both itself and its environment, which would almost certainly be an undesirable action within most environments. The objective of 'planning' as discussed in the literature is to convert the sensed qualities, via their electrical form, into a condensed symbolic form. With a common 'measure' of these symbols and their relations, the aim is to manipulate the symbols and relations within this abstraction to produce decisions whose domain of validity extends beyond the immediate sensor boundary, in a manner similar to route determination before setting off on a journey in a car. The automatic production of plans or task schedules, to produce the sequence of (X, Y and ) shown in figure 1.2, which is one of the inputs to the guidance algorithms, is not examined further.
In future work, one of the first steps would be to relax the assumption of an a priori known map, so that the experiments better resemble reality. To do this, instead of letting the autonomous controller know every obstacle in the complete map, the implementation can be changed such that the autonomous controller only perceives obstacles that are within sight or in memory. This way, the behavior of the total system can be investigated when the autonomous controller cannot predict which path will eventually lead to the target. The interaction between the human operator and the autonomous controller can also be investigated further. The subsequent step, removing the a priori known map completely, would be to implement a SLAM algorithm. This would allow the system to operate outside of the OptiTrack area or the simulation environment. In this case, it is best to extend the current implementation of the algorithms, which is entirely in 2D, to work in 3D. At the moment, the flying height is set to be constant and the UAV is assumed to have clearance, but in real and unknown environments this is not guaranteed.
Advanced robot systems have already substituted for the human factor in many fields of work, and we can only imagine what the future will bring. For robots to be intelligent and to perform tasks better than humans, they need to be autonomous. We developed a robot for the Field Robot Event competition, where it needs to be able to autonomously navigate through a corn field. In this thesis, we developed a robot application which is responsible for autonomous navigation with a LIDAR sensor, and proposed how to improve the navigation with camera recognition. The data collected with LIDAR usually contains a lot of noise, which we reduce with the DBSCAN clustering algorithm. In conclusion, we present the results from the competition, the DBSCAN algorithm, and camera recognition. We also propose a few improvements and future plans.
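The denoising step can be illustrated with a minimal pure-Python DBSCAN over 2D points. The parameters `eps` and `min_pts` and the sample point set are chosen arbitrarily for the example; a real-time system would replace the brute-force neighbour query with a spatial index.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point, a cluster id >= 0 or -1
    for noise. Points in sparse regions (fewer than min_pts neighbours within
    eps) that are not reachable from a core point are labelled noise."""
    UNVISITED = None
    labels = [UNVISITED] * len(points)

    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # provisional noise; may become a border point
            continue
        labels[i] = cluster             # i is a core point: grow a cluster from it
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point reclaimed from noise
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cluster
            j_neigh = neighbours(j)
            if len(j_neigh) >= min_pts:  # j is also a core point: keep expanding
                queue.extend(j_neigh)
        cluster += 1
    return labels

# Two dense clumps (e.g. two plant rows seen by the LIDAR) plus one stray return.
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (0.05, 0.05),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1), (5.05, 5.05),
       (10, -10)]
labels = dbscan(pts, eps=0.3, min_pts=4)
```

Isolated returns end up labelled `-1` and can simply be dropped before the navigation logic runs, which is the denoising effect exploited here.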
In last year’s United States Air Force Report on Technology Horizons (Dahm ), an executive-level articulation of the Air Force (AF) S&T advancements necessary to achieve mission readiness, the Chief Scientist of the AF stated that “two key areas for AF S&T investment are (i) increased use of autonomy and (ii) augmentation of human performance . . . and the need to certify these high levels of adaptability and autonomy”. Furthermore, conversations with AF Research Lab (AFRL) program management clarify the motivation underlying the Chief Scientist’s mandate: estimates suggest that far too many AF personnel are needed to control a single remotely piloted aircraft; simultaneously, the AF wishes to deploy as many as 10,000 drones at a time. Obviously, the current paradigm does not scale quickly enough to meet these AF goals (see Figure 6.1 for an illustration of the “crowds” in typical DoD facilities). Insofar as operating multiple drones simultaneously is a special case of the model outlined in Section 6.2.2, we suggest that the method of IGP for shared autonomy might be applicable in this scenario.
Mobile robots have become increasingly popular in automated industrial environments. Surveillance, Mars exploration, and underwater exploration are other applications of mobile robots. In many situations, the robot does not have prior knowledge about the environment, and providing this information is either difficult or impossible. In all of these applications, collision-free path planning is an important necessity. Furthermore, using a robot in any of the aforementioned situations is acceptable only if the robot provides high efficiency and low cost. High efficiency can be defined as fast customer service, low power consumption, and low maintenance costs. Therefore, the robot should have the ability to autonomously find a path with a minimum risk of collision and high efficiency with no knowledge, or with minimal knowledge, about the surrounding environment. This chapter reviews previous work and research that has been done to make robot navigation safe and efficient.
Much previous work on visual gaze estimation has focused on its application to surveillance systems. In their most recent work, Benfold and Reid described a system to detect and track subjects’ heads in order to estimate their gaze direction. The HOG descriptor was used for head detection, whilst tracking was performed using a modified Kalman filter. Head poses were grouped into forty-five-degree classes and classified using randomised ferns. A combination of HOG and Colour Triplet Comparisons was found to be the most effective feature for head pose classification. The authors’ previous paper described a similar method for head pose classification, using lower-resolution images. Random ferns were again used for pose classification, but a hidden Markov model was used to ensure temporal pose consistency. Sheikhi and Odobez used a humanoid Nao robot to estimate the subject of a person’s attention. The method was motivated by the desire for humanoid robots to know who is speaking to whom in a particular scenario. Using a provided head pose, this “visual focus of attention” was evaluated using a continuous HMM, with a state per target and the Gaussian means set to the target head positions. The authors state that this obtains more temporally consistent results than previous solutions that use GMMs.
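The one-state-per-target HMM idea can be illustrated with a small Viterbi decoder over head-pan angles. The target angles, Gaussian width, and switching penalty below are invented for the example; Sheikhi and Odobez's system uses a continuous observation model and learned parameters rather than this discrete sketch.

```python
def vfoa_viterbi(observations, target_angles, sigma=15.0, switch_penalty=2.0):
    """Decode the most likely sequence of attention targets from noisy
    head-pan observations (degrees). One HMM state per target; the emission
    model is a Gaussian centred on the pan angle that points at that target,
    and switching targets between frames pays a fixed log-penalty."""
    n = len(target_angles)

    def log_emit(obs, angle):
        return -((obs - angle) ** 2) / (2 * sigma ** 2)

    score = [log_emit(observations[0], a) for a in target_angles]
    back = []
    for obs in observations[1:]:
        prev, ptr, score = score, [], []
        for k, angle in enumerate(target_angles):
            best = max(range(n),
                       key=lambda j: prev[j] - (0 if j == k else switch_penalty))
            ptr.append(best)
            score.append(prev[best] - (0 if best == k else switch_penalty)
                         + log_emit(obs, angle))
        back.append(ptr)

    # Backtrack from the best final state.
    k = max(range(n), key=lambda j: score[j])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return list(reversed(path))

# Three hypothetical targets at pan angles -30, 0 and 30 degrees: the subject
# looks at the middle target, then shifts attention to the right one.
path = vfoa_viterbi([1, -2, 0, 29, 31, 30], [-30, 0, 30])
```

The switching penalty is what produces the temporally consistent output the authors highlight: brief observation noise is absorbed rather than read as a change of attention.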
The autonomous underwater vehicle (AUV) DAGON is specifically designed as a scientific AUV for visual mapping and localization with stable hovering capabilities. Its high-quality stereo camera system usually acts as the main sensor system and is supplemented by an internal IMU and a pressure sensor. Using visual odometry and SLAM approaches, a map of the seafloor and the vehicle’s trajectory can be generated [Hildebrandt and Hilljegerdes, 2010]. In addition to the visual main sensory system, the AUV is equipped with additional navigational instruments like an Acoustic Long Baseline Navigation System (LBL), a Doppler Velocity Log (DVL) and a Fibre Optic Gyroscope (FOG), usually used to establish a ground truth to evaluate novel underwater localization techniques. The AUV has a lithium-ion battery with a capacity of 1.6 kWh, resulting in a corresponding nominal operating time of six hours, which may vary with the type of mission. DAGON can either be used as a completely autonomous vehicle, with the only communication available being the low-bandwidth acoustic modem, or connected to a fiber-optic cable for telemetry. Using this cable, a hybrid-ROV mode is also possible, where the vehicle is controlled by a human operator or a control station onshore [Hildebrandt et al., 2012].
This study used forward chaining as the method of inference. Forward chaining is a method that uses available facts to derive new facts until the final conclusion is drawn. Previous researchers have taken advantage of forward chaining in various cases, for instance (1) detection of human disease [13, 14, 15, 16, 11, 17]; (2) detection of plant diseases; (3) detection of vitamin deficiency; (4) smart houses; (5) campus environments; (6) financial planning; (7) pregnant mothers and infants.
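The inference loop itself is short: keep firing any rule whose premises are all known until no rule produces anything new. The rule base below is an invented illustration in the spirit of the cited disease-detection use case, not rules from this study.

```python
def forward_chain(facts, rules):
    """Forward chaining: fire every rule (premises -> conclusion) whose
    premises are satisfied, adding the conclusion as a new fact, and repeat
    until a full pass derives nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base for a small diagnosis example.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "see_doctor"),
]
derived = forward_chain({"fever", "cough", "fatigue"}, rules)
```

Note that the second rule only becomes applicable after the first one fires, which is exactly the "facts acquiring newer facts" behaviour that distinguishes forward chaining from a single rule lookup.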
The fitness value of that individual is calculated using Eq. 4. These steps are repeated for all individuals in the population. The individual with the highest fitness is selected as the elite. The genetic operations of selection, crossover, and mutation are then applied to generate the next generation. The same steps of calculating the fitness values are then applied to each individual to find the individual with the highest fitness. In this paper, the total number of generations is 6000, and the highest fitness value in each generation is shown in Fig. 6. It can be seen that the highest fitness value of the first generation is 2.063, and this value improves over the generations until it stabilizes at 3.614 in the last generation. The individual with the highest fitness value (the elite) in the last generation is used later in the execution task to measure its efficiency in providing the required safe navigation and smooth movement.
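The generation loop described above can be sketched as follows. Since Eq. 4 is not reproduced here, a simple stand-in fitness (the number of ones in a bit string) is used, and the truncation selection scheme, crossover rate, and mutation rate are illustrative assumptions rather than the paper's settings.

```python
import random

def evolve(fitness, population, generations, cx_rate=0.8, mut_rate=0.02):
    """Elitist genetic algorithm: evaluate all individuals, carry the elite
    over unchanged, then apply selection, one-point crossover and bit-flip
    mutation to build the next generation."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        next_gen = [ranked[0][:]]                    # elitism: best survives
        while len(next_gen) < len(population):
            a, b = random.choices(ranked[:len(ranked) // 2], k=2)  # truncation selection
            if random.random() < cx_rate:            # one-point crossover
                cut = random.randrange(1, len(a))
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [1 - g if random.random() < mut_rate else g for g in child]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = evolve(sum, pop, generations=60)
```

Because the elite is copied verbatim into each new generation, the best fitness seen so far can never decrease, which matches the monotone rise to a plateau described for Fig. 6.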
A commonly encountered approach in motion planning that is strongly related to clustering is spatial decomposition. For example, roadmap methods typically construct a global, one-dimensional graphical representation (skeleton) of an environment based on its Voronoi decomposition. Hierarchical decomposition methods, based on quadtrees and octrees, are also successfully applied for representing environments at multiple resolutions via adaptive cells. In particular, their recursive constructions yield computationally efficient solutions for robots operating in unknown and sparse environments. In brief, spatial decomposition methods are generally employed to build efficient data structures that approximately model environments, independent of any specific robot configuration and model, whereas our intended use of clustering is to explicitly extract immanent local geometric and topological structures in configuration spaces around a given robot configuration.
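The adaptive-cell idea behind quadtree decomposition can be shown with a recursive construction over a toy occupancy grid; the grid contents and the tuple-based leaf encoding are invented for the example.

```python
def quadtree(grid, x, y, size):
    """Split a square region of a boolean occupancy grid until each cell is
    uniformly occupied or free. Leaves are ('leaf', occupied, size); internal
    nodes are ('node', [TL, TR, BL, BR])."""
    cells = [grid[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size == 1 or all(cells) or not any(cells):
        return ('leaf', cells[0], size)
    h = size // 2
    return ('node', [quadtree(grid, x,     y,     h),
                     quadtree(grid, x + h, y,     h),
                     quadtree(grid, x,     y + h, h),
                     quadtree(grid, x + h, y + h, h)])

def count_leaves(node):
    return 1 if node[0] == 'leaf' else sum(count_leaves(c) for c in node[1])

# 4x4 grid: one fully occupied quadrant, one mixed quadrant, two free ones.
grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 0]]
tree = quadtree(grid, 0, 0, 4)
```

Uniform regions collapse into single large leaves while only the mixed quadrant is refined to unit cells, which is why the representation stays compact in sparse environments.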
Where human beings cannot work properly and various obstacles are present, work can be done easily with the help of a mobile robot. In the current research, different environments are considered to find the navigation path for the mobile robot. The structure of the kinematic models of various types of wheeled mobile robots is also studied. The simulation is done by writing code in MATLAB for different environment scenarios while avoiding obstacles present on the robot's path. The success of robot navigation depends upon the accuracy of complete measurements of the positions (of the mobile robot, obstacles, and destination point), the velocity of the mobile robot, and its rate of change of heading angle. In this work, we use a fuzzy logic technique (FIS) for navigation of an autonomous mobile robot in an unknown environment. The simulation results showed that the proposed method enables the mobile robot to safely reach the destination (or goal) without colliding with the obstacles. Experiments are also carried out on a lab-built mobile robot. A comparison between the simulation and experimental results is made to establish the authenticity of the technique developed. The mobile robot, along with the technique, can be used in different remote and hazardous applications.
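A minimal fuzzy steering rule base conveys the flavour of such an FIS. The membership breakpoints, rule consequents, and the zero-order Sugeno-style weighted-average defuzzification below are illustrative assumptions, not the controller developed in this research.

```python
def falling(x, a, b):
    """Shoulder membership function: 1 below a, 0 above b, linear between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def steer(obstacle_dist, heading_error):
    """Tiny two-rule fuzzy controller returning a turn rate (rad/s):
      IF obstacle NEAR THEN turn hard away (fixed avoidance rate)
      IF obstacle FAR  THEN turn toward the goal (proportional to heading error)
    The two memberships are complementary, so the weighted average blends
    smoothly between obstacle avoidance and goal seeking."""
    near = falling(obstacle_dist, 0.3, 1.0)   # metres, assumed breakpoints
    far = 1.0 - near
    avoid_rate = 1.2                          # rad/s, assumed consequent
    seek_rate = 0.8 * heading_error           # rad/s, assumed gain
    return (near * avoid_rate + far * seek_rate) / (near + far)
```

With a clear path, `steer(2.0, 0.5)` simply turns toward the goal, while `steer(0.1, 0.5)` ignores the goal and commits to avoidance; intermediate distances blend the two behaviours, which is what gives fuzzy navigation its smooth, collision-avoiding trajectories.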