In this section, the package files used in this project, as well as their nodes, are explained. Four packages are used, although only three of them are called directly in the simulation; the fourth is called indirectly by these main files. First, one package displays the Gazebo environment and the Husky. Gazebo is an open-source, high-fidelity 3D robot simulator. It allows the characteristics of both the robot and the world to be defined, and lets the operator interact with the robot via ROS in the same way they would interact with the real thing. Secondly, a package is used to display the RViz visualizer. RViz is a powerful 3D visualization tool for ROS that can display a wide variety of information; in this case, it is used to view the laser sensor data and our robot. Lastly, a package called husky_navigation computes the SLAM technique; it is based on the use of a planar laser sensor for SLAM. In addition, another package is required, husky_description, provided by Clearpath Robotics, which contains the information needed to represent the Husky robot (URDF). This is the package that is called indirectly.
Kinect for Xbox 360 is the RGB-D sensor used in this project. The device was initially intended as a natural user interface (NUI) for gaming and office applications, and was the first consumer-grade sensor to utilize structured light. Possible use cases were inspired by early NUI research at the Massachusetts Institute of Technology (MIT) and, later on, by the science fiction movie Minority Report, where Tom Cruise interacts with a computer using hand gestures [WA12]. The Kinect sensor is equipped with a depth sensor, a regular color camera, a microphone array and a tilt motor (figure 3.2). The color camera in combination with the depth sensor forms what is usually referred to as an RGB-D sensor, i.e. a combined color and depth camera (figure 3.2). The exact details of how the sensor works are not publicly available (to the knowledge of this author), but it does involve a variant of structured light. A projector on the Kinect projects an infrared speckle pattern onto the surroundings. This speckle pattern is perceived by the leftmost camera, which is equipped with a visible light filter. The sensor is able to calculate a depth map based on how the projected infrared pattern is distorted by the surroundings.
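As a rough illustration of the triangulation principle behind structured-light depth sensing, the following sketch computes depth from the observed shift (disparity) of a projected pattern feature. The focal length and baseline values are illustrative assumptions, not the Kinect's actual (undisclosed) parameters:

```python
# Sketch of the triangulation principle behind structured-light depth
# sensing. The Kinect's exact algorithm is proprietary; the focal
# length and baseline below are illustrative, not device specs.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth via triangulation: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a pattern feature shifted by 20 px, assuming a 580 px focal
# length and a 7.5 cm projector-camera baseline.
z = depth_from_disparity(580.0, 0.075, 20.0)
```

The farther a surface is, the smaller the observed shift of the pattern, which is why depth resolution degrades with distance.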
Localization is a concept that focuses on the ability of a robot to determine its current position in a given map, based on the information it has from its environment and the map. Localization can prove to be a challenge in the sense that, if the robot has an ever-changing map or environment, knowing exactly where it is at any point in time can be difficult. The concept of Simultaneous Localization and Mapping (SLAM) solves this challenge using various techniques. SLAM is the process of building a map of an unknown environment while simultaneously determining the location of the robot in the map. The localization challenge can also be addressed by having artificial landmarks help the robot determine where it is, by embedding information in data technologies like RFID tags and QR codes in the environment (Rusu, Gerkey, & Beetz, 2008). This project will focus on using the SLAM GMapping technique implemented as a ROS package.
However, an issue that comes with using an assistive navigation scheme is the decrease of the feeling of embodiment caused by forcing the user to take the computed direction to reach the chosen interaction area. A solution to this issue could be to update the assistive control to offer the same kind of motion as the manual navigation, guiding the user rather than constraining him. This could be done by dynamically adjusting the gains on each proposed motion so as to guide the user toward the computed direction without forcing him onto it. This assistive control still needs to ensure collision-free navigation in order not to disturb the performance of the BCI system, as shown in the experiment. With this we could ensure safe arrival at the interaction area without breaking the embodiment.
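The proposed guide-rather-than-constrain behavior could be sketched as a dynamic blend between the user's commanded heading and the computed heading, with a gain that grows as obstacles get closer. This is a hypothetical illustration, not the authors' implementation; the gain law and distances are invented:

```python
# Illustrative sketch (not the authors' implementation): blend the
# user's commanded heading with the assistive heading using a gain
# that grows with obstacle proximity, guiding rather than forcing.

def blended_heading(user_heading, assist_heading, obstacle_dist,
                    safe_dist=1.0):
    """Gain k in [0, 1]: 0 = pure user control, 1 = full assistance."""
    k = max(0.0, min(1.0, 1.0 - obstacle_dist / safe_dist))
    return (1.0 - k) * user_heading + k * assist_heading
```

Far from obstacles the user's command passes through unchanged, preserving the feeling of embodiment; only near obstacles does the assistive direction dominate.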
Getting robots immersed in our daily lives symbiotically has been the grand challenge of robotics since its early beginnings. However, in order to achieve this feat, robust autonomous robot motion is one of the many necessary precursors. This area of research alone spans multi-disciplinary fields, such as artificial intelligence, computer vision and machine learning, to name a few. Therefore, in order to decompose the challenge into one that is practicably feasible in scope, interesting robotics research platforms were introduced to the scientific community. One of them is the robot soccer game, a popular robotics platform that fosters artificial intelligence research in computer vision [1, 2], robot navigation with target pursuit, obstacle avoidance, opponent evasion, game strategy and cooperative multi-agents. This is the benchmarking testbed of choice for the algorithms presented herewith, and we present navigation simulations that adhere to the platform specifications defined by the Federation of Robot Soccer Association (FIRA).
Implementation of a mobile robotic platform (referred to in this work as the Arlo Robot, or Arlobot for short) integrated with various sensors, including a depth camera, on-board processors with local processing capability, and wireless communication.
Saifizi et al. (2012) state that for local navigation, vision strategies are especially effective in close-range navigation. The main consideration in using image sensors and processing is their influence on the accuracy of the navigation system. To navigate, a mobile robot needs to translate its sensing information into concrete data about its surroundings so that its location can be identified. The robot must then choose the proper way to accomplish its goal, after which its driving system is activated to reach its destination. The vision system in their research is designed to recognize a circle as a landmark and to determine distance and orientation from the target image for short-range navigation. The system is well suited to indoor local navigation via a feature (circle) pursuing algorithm that uses fuzzy logic to generate the target trajectory of the moving robot, as depicted in Figure 2.4 below.
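The feature-pursuit idea can be sketched with simple triangular fuzzy sets that map the landmark's horizontal offset in the image to a steering command. The membership shapes and output values below are invented for illustration and are not the rules used by Saifizi et al.:

```python
# Hypothetical fuzzy steering sketch: the landmark's horizontal offset
# in the image (normalized to [-1, 1]) drives three triangular fuzzy
# sets (left / center / right), defuzzified by a weighted average.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(offset):
    """offset in [-1, 1]: landmark position relative to image center."""
    left = tri(offset, -2.0, -1.0, 0.0)
    center = tri(offset, -1.0, 0.0, 1.0)
    right = tri(offset, 0.0, 1.0, 2.0)
    total = left + center + right
    # Rule outputs: steer left (-1), straight (0), steer right (+1).
    return (-1.0 * left + 0.0 * center + 1.0 * right) / total
```

A centered landmark yields zero steering, and the command grows smoothly as the landmark drifts toward either image edge.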
Regarding the local planner, it is capable of effectively following the global path and executing each of the maneuvers, but its ability to avoid dynamic obstacles is quite limited; in general, the whole framework ends up finding a new path when an unexpected obstacle is encountered. This problem is mainly caused by the kinematic and dynamic limitations of the robot, but the algorithm could be modified to overcome it, for example by increasing the dynamic window time interval, thus allowing the local planner more time to react to changing conditions.
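The effect of the dynamic window time interval mentioned above can be sketched as follows: the window is the set of velocities reachable within dt under the robot's acceleration limits, so enlarging dt widens the window and gives the local planner more options. The limits used here are assumptions, not the actual robot's specification:

```python
# Sketch of the dynamic-window computation: velocities reachable
# within dt given (assumed) acceleration and velocity limits.

def dynamic_window(v, w, dt, a_max=0.5, alpha_max=1.0,
                   v_max=1.0, w_max=1.5):
    """Return ((v_lo, v_hi), (w_lo, w_hi)) reachable within dt."""
    v_lo = max(0.0, v - a_max * dt)
    v_hi = min(v_max, v + a_max * dt)
    w_lo = max(-w_max, w - alpha_max * dt)
    w_hi = min(w_max, w + alpha_max * dt)
    return (v_lo, v_hi), (w_lo, w_hi)
```

With a short interval the window is narrow and evasive maneuvers may be infeasible; a longer interval admits a wider range of candidate velocities.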
In this paper, the authors present a series of works aimed at exploring an unknown environment consisting of paths and obstacles with an Ackermann-model (car-like) wheeled mobile robot. The Robot Operating System (ROS) is used as the basic operating platform to handle the entire operation, such as sensor interfacing, 2D/3D mapping, and path planning. ROS is an open-source framework comprising a large collection of methods. The Ackermann mobile robot is a car-like robot of the kind commonly seen, and the techniques applied in this experiment can be transferred to commercial vehicles as part of an autonomous navigation system, which is emerging as a major topic nowadays. In this work, we composed robust existing methods to solve the mapping problem with an Ackermann mobile robot. It was concluded that the performance of the proposed work is robust for large-scale mapping within an unknown building.
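The Ackermann (car-like) motion model underlying such a robot can be sketched with the standard bicycle approximation; the wheelbase and inputs below are illustrative assumptions, not the paper's parameters:

```python
import math

# Bicycle approximation of Ackermann (car-like) kinematics: the robot
# cannot turn in place, and its heading rate depends on the steering
# angle and wheelbase. Values used here are illustrative.

def ackermann_step(x, y, theta, v, steer, wheelbase, dt):
    """One Euler integration step of the kinematic bicycle model."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / wheelbase * math.tan(steer) * dt
    return x, y, theta
```

This nonholonomic constraint is what distinguishes mapping with a car-like robot from the differential-drive case: candidate trajectories are arcs, not point turns.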
Abstract— In robotics, it is essential to model and understand the topologies of configuration spaces in order to design provably correct motion planners. The common practice in motion planning for modelling configuration spaces requires either a global, explicit representation of a configuration space in terms of standard geometric and topological models, or an asymptotically dense collection of sample configurations connected by simple paths. In this short note, we present an overview of our recent results that utilize clustering for closing the gap between these two complementary approaches. Traditionally an unsupervised learning method, clustering offers automated tools to discover hidden intrinsic structures in generally complex-shaped and high-dimensional configuration spaces of robotic systems. We demonstrate some potential applications of such clustering tools to the problem of feedback motion planning and control. In particular, we briefly present our use of hierarchical clustering for provably correct, computationally efficient coordinated multirobot motion design, and we briefly describe how robot-centric Voronoi diagrams can be used for provably correct safe robot navigation in forest-like cluttered environments, and for provably correct collision-free coverage and congestion control of heterogeneous disk-shaped robots.
The autonomous underwater vehicle (AUV) DAGON is specifically designed as a scientific AUV for visual mapping and localization with stable hovering capabilities. Its high-quality stereo camera system usually acts as the main sensor system and is supplemented by an internal IMU and a pressure sensor. Using visual odometry and SLAM approaches, a map of the seafloor and the vehicle’s trajectory can be generated [Hildebrandt and Hilljegerdes, 2010]. In addition to the visual main sensory system, the AUV is equipped with additional navigational instruments like an Acoustic Long Baseline Navigation System (LBL), a Doppler Velocity Log (DVL) and a Fibre Optic Gyroscope (FOG), usually used to establish a ground truth to evaluate novel underwater localization techniques. The AUV has a lithium-ion battery with a capacity of 1.6 kWh, resulting in a corresponding nominal operating time of six hours, which may vary with the type of mission. DAGON can either be used as a completely autonomous vehicle, with the only communication available being the low-bandwidth acoustic modem, or connected to a fiber-optic cable for telemetry. Using this cable, a hybrid-ROV mode is also possible, where the vehicle is controlled by a human operator or a control station onshore [Hildebrandt et al., 2012].
groups of disks was first introduced to robotics [221, 222], most research into vector field planners has embraced the navigation function paradigm. A recent review of this two-decade-old literature is provided in , where a combination of intuitive and analytical results yields a nonsmooth centralized planner for achieving goal configurations specified up to rigid transformation. As noted in , the multirobot generalization of a single-agent navigation function is challenged by the violation of certain assumptions inherited from the original formulation. One such assumption is that obstacles are “isolated” (i.e., nonintersecting). In the multirobot case, every robot encounters others as mobile obstacles, and any collision between more than two robots breaks down the isolated obstacle assumption. In some approaches, the departure from isolated interaction has been addressed by encoding all possible collision scenarios, yielding controllers with terms growing superexponentially in the number of robots, even when the workspace is not compact. In contrast, our recourse to the hierarchical representation of configurations affords a computational burden that grows merely quadratically in the number of agents. In , the problem is circumvented by allowing critical points on the boundary (with no damage to the obstacle avoidance and convergence guarantees), but, as mentioned above, very conservative assumptions about the degree of separation between agents at the goal state are required. In contrast, our recourse to hierarchy allows us to handle arbitrary (nonintersecting) goal configurations, albeit our reliance upon the homotopy type of the underlying space presently precludes the consideration of a compact configuration space as formally allowed in .
Abstract: Since it was proposed, the Robot Operating System (ROS) has fostered solutions for various problems in robotics in the form of ROS packages. One of these problems is Simultaneous Localization and Mapping (SLAM), a problem solved by computing the robot pose and a map of its environment of operation at the same time. The increasing availability of robot kits ready to be programmed, and also of RGB-D sensors, often poses the question of which SLAM package should be used given the application requirements. When the SLAM subsystem must deliver estimates for robot navigation, as is the case in applications involving autonomous navigation, this question is even more relevant. This work presents an experimental analysis of GMapping and RTAB-Map, two ROS-compatible SLAM packages, regarding their SLAM accuracy, the quality of the produced maps, and the use of the produced maps in navigation tasks. Our analysis targets ground robots equipped with RGB-D sensors for indoor environments and is supported by experiments conducted on datasets from simulation, from benchmarks, and from our own robot.
Most navigation is based on a map of the detected field, which is available in advance, so the location of the robot can be determined from map coordinates and an optimal navigation algorithm can be explored. There have been many algorithms in this field, but they can hardly adapt to the non-mapping condition. That is to say, for a novel environment, or a region whose condition is difficult to get to know, what these algorithms do is survey the whole region and acquire all of its information; only after the information about the region is known can the optimal algorithm be adopted. Therefore, energy may be wasted in acquiring information. In order to solve this problem, we explore a kind of non-mapping navigation algorithm to guide the robot to the place where the event occurs.
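The non-mapping idea can be illustrated with a minimal sketch: instead of surveying the whole region first, the robot greedily moves toward the neighboring cell with the strongest event signal, using only local sensing. The grid, signal model and event location here are invented for illustration and are not the algorithm proposed in the work:

```python
# Hypothetical sketch of map-free navigation: follow the local
# gradient of an event signal instead of building a full map first.

def greedy_step(pos, signal):
    """Move to the 4-neighbour with the highest signal reading."""
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return max(neighbours, key=signal)

# Example: the signal decays with Manhattan distance to an event
# at (5, 5); the robot starts at the origin with no map.
event = (5, 5)
signal = lambda p: -abs(p[0] - event[0]) - abs(p[1] - event[1])
pos = (0, 0)
for _ in range(10):
    pos = greedy_step(pos, signal)
```

The robot reaches the event after exactly as many steps as its initial distance, without ever acquiring information about cells it does not visit.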
The spring stiffness, magnet and travel distance were dimensioned using a model of the magnetic field so as to maximize the sensitivity of the force measurement and provide a range of ±1 N. Figure 5 shows the output signal of the sensor with respect to a longitudinal force. Some hysteresis was observed in practice, due to the friction between the two moving parts, which prevents them from returning to the exact same original position after a contact. This problem is corrected in software by resetting the zero offset after each contact. A picture of the sensor mounted on the real robot is shown in figure 6. Tests in flight made it possible to measure the standard deviation of the sensor in a real situation, in order to estimate the sensitivity of the force sensing. It was found that the standard deviation of the measurements when no force is applied is generally around σ = 0.01 N. A threshold of 3σ was used to detect contacts, which means the sensors are sensitive to axial forces as small as 0.03 N.
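The contact-detection rule described above reduces to a simple threshold test; the sketch below uses the reported σ = 0.01 N and 3σ threshold, with the zero-offset reset represented as a parameter:

```python
# Contact detection as described above: a contact is declared when the
# force reading exceeds 3 sigma of the measured no-load noise.

SIGMA = 0.01            # N, no-load standard deviation (from the text)
THRESHOLD = 3 * SIGMA   # 0.03 N

def contact(force_n, offset=0.0):
    """Detect contact; `offset` models the zero reset after a contact."""
    return abs(force_n - offset) > THRESHOLD
```

Resetting the offset after each contact cancels the hysteresis-induced bias, so the 3σ threshold keeps its meaning across repeated contacts.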
The main tasks of a mobile robot are to know its environment, to localize its position, and to plan toward a goal while avoiding obstacles. The sensors on the robot make it easy to localize its position in the environment and to detect obstacles. The task of generating an optimal path is accomplished by using the FA over a normal probability distribution in a complex, crowded environment. If the fireflies are viewed as a population, their theoretical interpretation can be studied under the normal probability distribution. A probabilistic model is generated here to incorporate the general behavior of fireflies. Statistical tools such as the arithmetic mean and variance are used as empirical data representing the decisions behind a firefly's movements. The experiment is performed in a random environment, and its outcomes are governed by the continuous probability of the chance mechanism and the sample space. Here, the firefly is first assigned a random variable, then its random movement is defined over the range of possible values, and finally its frequency follows the normal probability distribution. This is applied to the fireflies as the corresponding random variable continuously in the following section.
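Modelling a firefly's random movement as a normally distributed random variable can be sketched as follows; the mean, standard deviation and sample size are illustrative choices, not values from the experiment:

```python
import random
import statistics

# Sketch: a firefly's random step is drawn from a normal distribution,
# and the arithmetic mean and variance of the sampled steps serve as
# the empirical statistics mentioned in the text. Parameters assumed.

random.seed(42)  # fixed seed for reproducibility
steps = [random.gauss(mu=0.0, sigma=1.0) for _ in range(10_000)]

mean = statistics.fmean(steps)     # should approach mu = 0
var = statistics.variance(steps)   # should approach sigma^2 = 1
```

The sample mean and variance converge to the distribution parameters as the population grows, which is what lets the empirical statistics stand in for the firefly's decision model.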
In order to facilitate the learning phase both in terms of the exploration-exploitation trade-off and the convergence rate, the algorithms' parameters have been carefully tuned to figure out the correlation between their values and the performance of the navigation. In particular, different kinds of Euclidean distance-based reward functions have been analyzed to make the navigation quicker and more goal-oriented: a simple Euclidean distance reward function, an exponential Euclidean distance reward function and a Gaussian Euclidean distance reward function. The second has been found to be the most efficient (see Table 5.3), because it presents an always increasing trend also in proximity of the goal, which is not the case for the Gaussian Euclidean distance reward, which starts decreasing when the goal is closer. With the performed tuning choices, all the approaches have been demonstrated to be able to learn a (sub)optimal trajectory from the initial configuration of the robotic arm to the goal. SARSA on-policy algorithms have been found to be more "conservative" with respect to off-policy Q-learning: they prefer to learn longer and most likely safer trajectories, avoiding dangerous situations for the agent. As reported in Table 5.3, the discretized SARSA algorithm presents on average a smaller probability of collision with respect to discretized Q-learning, i.e. 31.2% versus 31.5%. On the contrary, Q-learning-based algorithms are more goal-oriented, shortening the required path as much as possible. As a matter of fact, as noticeable from figure 5.10, Q-learning-based algorithms require on average 78 actions to reach the goal, with respect to the 93 actions necessary for SARSA-based algorithms. Nevertheless, on-policy methodologies have been noticed not to be so efficient in over-constrained environments like the considered ones, because avoiding obstacles on one side of the pipe can result in a collision on the other side of the pipe itself.
Indeed, Deep SARSA collides even more often than Deep Q-learning with the exponential Euclidean reward: 36.2% versus 29.3%. Consequently, Q-learning-based algorithms have been identified as the most competent techniques for learning different configurations of the pipe network with minimum effort.
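The three reward shapes compared above can be written out explicitly (with unit scale parameters assumed); one way to see the difference near the goal is to compare their slopes with respect to the distance d:

```python
import math

# The three distance-based reward shapes discussed above, with unit
# scale parameters assumed (the actual tuned constants are not given).

def r_linear(d):
    return -d

def r_exponential(d):
    return math.exp(-d)

def r_gaussian(d):
    return math.exp(-d ** 2)

def slope(r, d, h=1e-6):
    """Central-difference derivative of a reward w.r.t. distance."""
    return (r(d + h) - r(d - h)) / (2 * h)
```

Near the goal (small d) the Gaussian reward flattens out, since its derivative -2d·exp(-d²) vanishes at d = 0, while the exponential reward keeps a steep slope, which is consistent with the exponential shape producing a more goal-oriented learning signal.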
Our first step begins with detecting both the white lanes and the potholes. We are using OpenCV and our own computer vision algorithm to detect these lanes. We then determine a point in between the lanes by looking at all of the points and averaging them. We calculate the difference in angle between this point and the center of the image and consider this the error in the robot's heading. The heading is controlled to minimize this error using a proportional controller, as seen in Figure 18 below, keeping this point at the center of the image and directly in front of the robot. Once the robot drives to this point, we repeat the previous step, looking for another waypoint. We detect that we are about to enter the no-lane section when an insufficient number of lane Hough lines is detected and our GPS is within a certain range of our first waypoint. Once we lose the lanes, we publish a false boolean under the /lane_detection topic. The GPS node listens to this topic and executes when this flag is set to false. We follow a set of predetermined waypoints until we re-enter the lane section. We then continue our previous method until the robot makes it to the finish section.
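The proportional heading control described above can be sketched as follows; the gain, camera field of view and sign convention are assumptions, not the team's tuned values:

```python
import math

# Sketch of the proportional heading controller: the lane midpoint's
# pixel column is converted to an angular error from the image centre,
# and the command is proportional to that error. KP and the field of
# view are assumed values, not the team's tuned parameters.

KP = 1.5                      # proportional gain (assumed)
HALF_FOV = math.radians(30)   # half horizontal field of view (assumed)

def heading_command(target_px, image_width):
    """Angular velocity command from the lane midpoint's pixel column."""
    # Normalized offset in [-1, 1], scaled to an angle.
    error = (target_px - image_width / 2) / (image_width / 2) * HALF_FOV
    # Negative sign: a midpoint right of centre commands a right turn
    # (ROS convention: positive angular z is a left turn).
    return -KP * error
```

When the midpoint sits at the image centre the command is zero, so the controller continuously steers the robot to keep the waypoint directly ahead.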
This is to certify that the thesis entitled “Navigation of Mobile Robot using Fuzzy Logic” is the bona fide work of Krushna Shankar Sethi and Sanjeev Pothen Jacob, carried out under the guidance of Dr. D. R. K. Parhi for the requirement of the award of the degree of BACHELOR OF TECHNOLOGY, specialization “Mechanical Engineering”, and submitted in the Department of Mechanical Engineering at National Institute of Technology Rourkela, during the period 2012-2013.
The second row of Fig. 6.5 shows the overall performance of the different methods with respect to the density of pedestrians in the scene (6.5), taking into account penalizations due to nearby people. As expected, using social interaction forces (red and green lines) greatly increases the performance; it is natural to suppose that a more aware robot navigation would help to improve its efficiency. The predictive behavior clearly enhances the performance of the task, as can be seen in the first and second columns. Under a low density of obstacles, the predictive information is of great utility. Nevertheless, with a high number of obstacles in the scene, the prediction strategy scores equally. This is due to the fact that acting predictively, especially in the third scenario, makes the robot decide on alternative paths rather than just follow the target; e.g., while moving, surrounding a column to the right when the target goes to the left, because the destination would be more accessible by doing so. Although this behavior denotes some intelligence, its performance is penalized under our metric, as can be seen in the figure. In any case, the companion performance using prediction improves the overall performance of the system, as can be seen in Fig. 6.5, third row.