Mobile robots are gaining increased autonomy due to advances in sensor and computing technology. In their current form, however, robots still lack algorithms for rapid perception of objects in cluttered environments and can benefit from the assistance of a human operator. Furthermore, fully autonomous systems will remain computationally expensive and costly for quite some time. A human can visually assess objects and determine whether a given path is traversable, but need not be involved in the low-level steering around detected obstacles that remote-controlled systems require. If used only for rapid perception tasks, an operator could potentially assist several mobile robots performing various tasks such as exploration, surveillance, industrial work, and search-and-rescue operations. There is therefore a need to develop better human-robot interaction paradigms that allow a human operator to effectively control and manage one or more mobile robots.
The time to produce a 'plan', by whatever method is eventually adopted, should preferably be kept below the time taken for the vehicle to reach the location where that decision is needed. The term 'real time' thus has a more acute meaning here, distinct from that normally employed when discussing, for example, an airline reservation system. This is especially true for a vehicle that can damage both itself and its environment, an outcome that would be undesirable in almost any setting. The objective of 'planning' as discussed in the literature is to convert the sensed quantities, via their electrical form, into a condensed symbolic form. With a common 'measure' of these symbols and their relations, the aim is to manipulate the symbols and relations within this abstraction to produce decisions whose domain of validity extends beyond the immediate sensor boundary, much as a route is determined before setting off on a car journey. The automatic production of plans or task schedules, which produces the sequence of (X, Y and θ) poses shown in figure 1.2 and forms one of the inputs to the guidance algorithms, is not examined further.
that the SLAM could be performed and optimized on the ARM-based architecture. A camera was used as input for feature extraction and matching, and visual landmarks were used in the application. Lee et al. presented an embedded visual SLAM. In Lee's approach, an upward-looking camera was used for indoor environment perception, taking advantage of the orthogonal structure and invariance properties of indoor environments. A graph-optimization stage was integrated into the standard Kalman filter. Llofriu et al. presented an embedded particle-filter SLAM implementation. In this approach, a camera and identifiable artificial landmarks were used, and the map consists of landmarks. OpenMP has been used for parallelizing GMapping. Here, we further explore two other popular task-based parallel programming models for C++, Boost Thread and Intel® TBB. A new, efficient implementation of calculating the score of a scan at a specified pose in the map, for the scan-matching problem, is used here: a lookup table is precomputed instead of performing a computationally expensive closest-point search every time. The well-known GMapping is compared with our EMB-SLAM implementation. The work presented in this section comprises the embedded RBPF-SLAM implementation, its optimization, and the performance evaluation method. The section is organized as follows. After explaining the theory of the Rao-Blackwellized particle filter for the SLAM problem, the lookup table and parallel programming models are described, and implementation details are provided. Experiments carried out on real datasets are then presented. Finally, results and conclusions are discussed. The results show that EMB-SLAM gains speedup from the lookup table and the parallel programming models.
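The lookup-table idea above can be sketched as follows. This is a minimal illustration, not the EMB-SLAM code: the decay profile, grid size, and function names are assumptions. A score is precomputed for every map cell from its distance to the nearest occupied cell, so scoring a scan at a candidate pose reduces to table lookups at the projected beam endpoints instead of a closest-point search per beam.

```python
import math

def build_score_table(occupied_cells, size, max_radius=3):
    """Precompute, for every grid cell, a score that decays with the distance
    to the nearest occupied cell (assumed Gaussian-like decay). This replaces
    a per-query closest-point search with a single table lookup."""
    table = [[0.0] * size for _ in range(size)]
    for ox, oy in occupied_cells:
        for dx in range(-max_radius, max_radius + 1):
            for dy in range(-max_radius, max_radius + 1):
                x, y = ox + dx, oy + dy
                if 0 <= x < size and 0 <= y < size:
                    score = math.exp(-(dx * dx + dy * dy) / 2.0)
                    if score > table[x][y]:
                        table[x][y] = score
    return table

def score_scan(table, pose, scan, size, resolution=1.0):
    """Score a scan (list of (range, bearing) beams) at a candidate pose
    (x, y, theta) by summing table values at the projected endpoints."""
    px, py, theta = pose
    total = 0.0
    for r, b in scan:
        ex = int((px + r * math.cos(theta + b)) / resolution)
        ey = int((py + r * math.sin(theta + b)) / resolution)
        if 0 <= ex < size and 0 <= ey < size:
            total += table[ex][ey]
    return total
```

Scan matching then evaluates `score_scan` at many candidate poses per particle; the table is rebuilt only when the map changes, amortizing its cost.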
In our original design we planned to use the global planners and local planners available for the move base node of ROS, which provides a general infrastructure to integrate different pairs of global and local planners. The node requires a 2-D occupancy map and a set of goal locations to compute a global plan and output velocity commands in real time to the drive motors until the destination is reached. Initially, the basic Navfn global planner in ROS was tested on the rover since it provides valid planning around mapped obstacles. However, it was found that nearly all ROS-based planners made critical assumptions about the underlying mobility hardware, in particular that commands and odometry feedback would be executed immediately with negligible time lag and that small incremental movements would be possible. As the rover platform uses four-wheel steering and takes time to accelerate and turn, the planning algorithms would repeatedly issue commands for small, incremental angular adjustments (or in the case of the ROS teb local planner, back-and-forth steering movements) and, when this failed due to either actuator lag or lack of precise movement, execute recovery behaviours. To remedy this problem, we developed a new and simplified planner that relaxed the constraints on timing and precision in favour of a best-effort approach.
-effective robotic rover platform for autonomous environment monitoring in agriculture, and discussed several key technologies implemented in ROS for odometry and localization, identification of objects, and path planning under limited kinematic precision. Our results to date regarding navigation indicate that the rover can successfully reach set navigational points with high precision so long as accurate and real-time localization is available, such as that provided by Hector SLAM against fixed obstacles. We have also established that the methods used for indoor and feature-rich localization and odometry are not suitable for use outdoors in uniformly-visual farm fields. We are nonetheless able, using a low-cost rover platform with a minimal sensor set, to traverse navigational goals efficiently and quickly using a simple, efficient, and kinematically-tolerant planning algorithm for our rover platform.
Last but not least is the Samsung VC-RP30W. The purpose of this mobile robot is to act as a vacuum cleaner. It draws a 3-D map of the environment to identify its relative location, enabling faster and more efficient cleaning of a defined area. A less advanced automated vacuum cleaner navigates randomly until it meets an obstacle, blindly crawling over the area. The smarter Samsung cleaner knows which areas still need to be cleaned, producing a much more accurate result. A user can also program the working time and cleaning options in advance, so that the robot cleans the area automatically while the user is away.
Since time immemorial there has been a constant effort to build a conscious machine (robot) capable of thinking like a human being, and this goal has generated tremendous enthusiasm among modern thinkers, philosophers, and researchers. Here we focus mainly on robotics and its potential utility in engineering, medicine, industry, mining, biomedical science, and many other fields. What is it about robotics that has attracted thousands of scholars from various backgrounds, each most probably with different requirements? It is that robots can work with an awareness comparable to a human being's, and moreover with a precision that leaves human observers wonder-struck. Robots need very little human involvement, which is another big advantage of adopting robotics in real life. We are now moving towards autonomous mobile robots, i.e., robots capable of operating in an undefined and unmodified environment without human intervention. In simple words, such a robot should be capable of acting in a real-world environment. A capable autonomous robot should be able to do many things, such as:
The development of mobile robots for navigation has attracted much interest for decades. One of the main reasons is the attempt to build a truly autonomous mobile robot able to explore and identify its surroundings, especially under hazardous conditions. One of the most famous problems in navigation is Simultaneous Localization and Mapping (SLAM), which requires the mobile robot to observe the area of interest and concurrently build a map while localizing itself within that map. Studies have covered a variety of issues such as estimation inconsistency, the existence of uncertainties, the data-association problem, and computational complexity.
The neuro-fuzzy navigation system is shown in Fig. 1. The robot detects its environment using two groups of sensors. Four IR range sensors detect obstacles nearby and four photoelectric detectors find the orientation of the light source placed on the goal. Navigation is performed using two fuzzy logic controllers, one to process information from the left side of the robot and another to deal with the right. The fuzzy logic controllers consist of a rule base that fuzzifies range values and goal direction and produces defuzzified control actions. In addition, the controllers implement a reactive control strategy. For example, if the robot finds an obstacle in front of it and in the path to the goal, both fuzzy logic controllers will determine the best steering direction and speed to avoid collision while heading to the goal.
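The fuzzify/defuzzify cycle described above can be illustrated with a toy controller. This is not the paper's rule base (which is not given here); the membership shapes, rule set, and the weighted-average defuzzification are all assumptions chosen for brevity.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(left_range, right_range):
    """Toy two-rule reactive fuzzy controller (illustrative only):
      IF obstacle NEAR on the left  THEN steer right (+1)
      IF obstacle NEAR on the right THEN steer left  (-1)
    Ranges are in metres; output is defuzzified by weighted average."""
    near_left = tri(left_range, -0.1, 0.0, 1.0)    # 'near' membership
    near_right = tri(right_range, -0.1, 0.0, 1.0)
    num = near_left * 1.0 + near_right * (-1.0)    # rule consequents
    den = near_left + near_right
    return num / den if den > 0 else 0.0           # no rule fires -> go straight
```

A real rule base would also fuzzify the goal direction from the photoelectric detectors and blend obstacle avoidance with goal seeking, as the two controllers in Fig. 1 do.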
The main tasks of a mobile robot are to perceive the environment, localize its own position, and plan a path towards the goal while avoiding obstacles. The robot's sensors allow it to localize its position in the environment and to detect obstacles. The task of generating an optimal path is accomplished by applying the Firefly Algorithm (FA) over a normal probability distribution in a complex, crowded environment. If the fireflies are regarded as a population, their behaviour can be interpreted theoretically under the normal probability distribution. A probabilistic model is generated here to incorporate the general behaviour of fireflies. Statistical tools such as the arithmetic mean and variance are used as empirical data for representing a firefly's movement decisions. The experiment is performed in a random environment, and its outcomes are governed by the continuous probability of the chance mechanism and the sample space. First, each firefly is assigned a random variable; second, its random movement is defined over the range of possible values; finally, its frequency follows the normal probability distribution. This is applied to the fireflies, as the corresponding random variables, continuously in the following section.
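One step of the firefly movement described above can be sketched as follows. The attraction term is the standard FA update; the normally distributed random step reflects the paper's use of the normal distribution, but the parameter values here are illustrative assumptions.

```python
import math
import random

def firefly_step(xi, xj, brightness_i, brightness_j,
                 beta0=1.0, gamma=1.0, alpha=0.1, rng=random):
    """One Firefly Algorithm move (illustrative sketch): firefly i at position
    xi is attracted towards a brighter firefly j at xj, plus a normally
    distributed random step of scale alpha."""
    if brightness_j <= brightness_i:
        return list(xi)  # no attraction towards a dimmer firefly
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * rng.gauss(0.0, 1.0)
            for a, b in zip(xi, xj)]
```

Iterating this update over the population, with brightness tied to path quality (length, clearance from obstacles), drives the swarm towards good paths; the Gaussian term supplies the exploration whose statistics the paper analyses.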
This is the issue we have mainly tackled in the present paper. A server-client architecture has been designed in which the server contains a GIS-based map of the area, commonly large, where a team of robots is supposed to work, and the clients, actually a robot or team of robots, can connect to the general database in order to retrieve a part or the whole of the map needed to perform their tasks. Clients can, at the same time, provide up-to-date information in order to keep the general database current. Thus, given a large area to be covered by a team of robots for which a human-like map format exists, we have provided a method that allows this team of robots to safely and robustly navigate in this large area. A communication protocol has been presented between the GIS server and the robot clients that uses a DBMS and SQL in its implementation. A real experiment in a common outdoor environment, a university campus, has been performed with a single robot in order to validate the GIS-based navigation system using point-to-point wireless network technology.
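The "retrieve a part of the map" request can be sketched with a minimal SQL exchange. The schema, table name, and bounding-box query below are hypothetical (the paper's actual GIS schema and protocol are not given); sqlite3 stands in for the server's DBMS.

```python
import sqlite3

# Hypothetical map-feature table; the real GIS server's schema will differ.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE map_features (
                  id   INTEGER PRIMARY KEY,
                  kind TEXT,
                  x    REAL,
                  y    REAL)""")
conn.executemany("INSERT INTO map_features (kind, x, y) VALUES (?, ?, ?)",
                 [("building", 10.0, 20.0),
                  ("path",     12.0, 22.0),
                  ("building", 90.0, 95.0)])

def fetch_region(conn, xmin, ymin, xmax, ymax):
    """Client-side request: retrieve only the portion of the map the robot
    needs for its current task (a bounding-box query)."""
    cur = conn.execute(
        "SELECT kind, x, y FROM map_features "
        "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
        (xmin, xmax, ymin, ymax))
    return cur.fetchall()
```

An UPDATE/INSERT in the opposite direction would implement the clients' role of keeping the server's database current.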
models. The velocity obstacle approach (VO) is considered to be an easy and simple method to detect the probability of collision between two circular-shaped objects, using the collision cone principle. Many studies have implemented the VO algorithm using simulation, where they considered that obstacles have circular shapes and their velocities and positions are known [Yamamoto et al., 2001; Kluge, 2003; Kluge and Prassler, 2004; Zhiqiang et al., 2008; Fulgenzi et al., 2007; Shiller et al., 2010]. However, the VO approach has two limitations when applied in indoor environments. The first challenge is that in the real world, not all obstacles are circular, while the second is that the mobile robot sometimes cannot move towards its goal because all its velocities to the goal are located within the collision cones. Hence, in this project, a method is proposed to extract the collision cones of non-circular objects, where the obstacle size and collision time are considered in weighting velocities of the robot, and a virtual obstacle principle is proposed to avoid unobserved moving objects. Many simulations and experiments have been posted on the following website: https://www.facebook.com/pages/Robotics-Group/173949736063997
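For the circular case the collision-cone test is geometrically simple, and a minimal sketch makes the two limitations concrete. This is the textbook VO test under the stated assumptions (known positions, velocities, and radii), not the project's extended method for non-circular obstacles.

```python
import math

def in_collision_cone(pos_r, pos_o, vel_r, vel_o, rad_r, rad_o):
    """Velocity-obstacle test for two circular objects: the relative velocity
    of the robot w.r.t. the obstacle leads to collision iff its angle to the
    line of centres is below the cone half-angle asin((rad_r+rad_o)/distance)."""
    dx, dy = pos_o[0] - pos_r[0], pos_o[1] - pos_r[1]
    dist = math.hypot(dx, dy)
    if dist <= rad_r + rad_o:
        return True  # already overlapping
    half_angle = math.asin((rad_r + rad_o) / dist)
    rvx, rvy = vel_r[0] - vel_o[0], vel_r[1] - vel_o[1]
    if rvx == 0 and rvy == 0:
        return False  # no relative motion, no collision
    angle = math.atan2(rvy, rvx) - math.atan2(dy, dx)
    angle = math.atan2(math.sin(angle), math.cos(angle))  # wrap to [-pi, pi]
    return abs(angle) < half_angle
```

The second limitation mentioned above is visible here: if every velocity pointing at the goal falls inside some obstacle's cone, this test rejects them all, which is what motivates the weighting scheme in the project.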
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to the completion of their mission. Essential capabilities required for the operation of such robots are mapping, localization and navigation. Solving all of these tasks robustly presents a substantial difficulty as these components are usually interconnected, i.e., a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they highly depend on the sensors used by the robot and on the type of the environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry, or to detect dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that can work with different types of sensors seamlessly, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that can operate robustly in indoor and outdoor environments equally well and, thus, extends the application areas of mobile robots.
Autonomous navigation of mobile robots in unknown, cluttered environments remains a difficult task. Different applications for mobile robots present different navigation problems. Intelligent mobile robots must have sensing, planning, and actuation capabilities in order to plan their course of action and generate a wide variety of intelligent behaviours [1-3]. These capabilities should be integrated by a control architecture to achieve the desired results. Various control schemes have been developed to design robust, flexible, reliable, and high-performance control systems for autonomous navigation. These control architectures can be classified as deliberative (centralized) navigation, reactive (behaviour-based) navigation, and hybrid (deliberative-reactive) navigation. This paper focuses mainly on hybrid navigation, but to realize all of these capabilities the system needs a driver that guarantees all functions complete within their deadlines without blocking system functionality. For that purpose the system needs a Real-Time Operating System (RTOS), whose purpose is to schedule tasks so as to guarantee that inputs are acquired and outputs are produced according to timing constraints [6-7][13-14]. In robotic applications, tasks periodically receive information about the environment through sensors or user interfaces, whereas commands to actuators and other outputs are sent at periodic intervals. The paper is organized as follows: Section I gives detailed information about the paper; Section II describes the hybrid control architecture, in which an RTOS is used to improve performance parameters such as accuracy, explained briefly in Section III; and Section IV introduces the SeART algorithm for improved task management, proposed by Fulvio Mastrogiovanni (2013).
For uneven and unstructured natural outdoor environments, as considered in this paper, a 2.5D map is appropriate (Stelzer et al., 2012). Here, each point on the map has an associated height. To create such a map, the disparity images are reconstructed and transformed into a world coordinate system using the computed ego-motion. We use a statistical approach for fusing the measurements with the existing information in the map. Next, the map is evaluated for traversability by calculating the roughness and slope. Figure 4 (left) shows a map that is colorized according to traversability. The area around stones is marked in red; green signals terrain that is traversable by the robot. This permits planning a safe path to a given target point. It is important to note that the map changes continuously as new disparity images are integrated. Thus, the path to the target point is re-planned accordingly, if necessary.
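The roughness-and-slope evaluation can be sketched on a plain height grid. The thresholds and the exact slope/roughness estimators below are assumptions for illustration; the paper's statistical fusion and its actual criteria are not reproduced here.

```python
def traversability(heights, slope_max=0.3, rough_max=0.05):
    """Classify each interior cell of a 2.5D height grid. Slope is approximated
    by the largest height difference to a 4-neighbour; roughness by the
    standard deviation of heights in the 3x3 neighbourhood (illustrative
    estimators, not the paper's)."""
    rows, cols = len(heights), len(heights[0])
    result = [["unknown"] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            h = heights[i][j]
            slope = max(abs(h - heights[i + di][j + dj])
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            patch = [heights[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            mean = sum(patch) / len(patch)
            rough = (sum((p - mean) ** 2 for p in patch) / len(patch)) ** 0.5
            result[i][j] = ("traversable"
                            if slope <= slope_max and rough <= rough_max
                            else "obstacle")
    return result
```

Re-running this classification whenever new disparity images are fused into the grid is what makes the red/green labelling, and hence the planned path, change over time.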
The RA is also known as the highway approach. It is a way to get from one place to another, in which the connections among the free spaces are represented by a set of one-dimensional curves. Once the roadmap is built, it is used as a set of homogeneous paths in which the planner seeks to discover the ideal route. Here, nodes play an important role in obtaining the desired path for the robot. The RA is used to find the shortest path from the robot's initial position to its target position; Voronoi diagrams and visibility graphs are used to develop the roadmap. The visibility-graph method connects the initial and goal positions with nodes from the map. This method is also used for environments with polygonal obstacles, in which the vertices of the polygons are represented by nodes and the edges act as connectors between the nodes. The Voronoi diagram is another roadmap algorithm used for the path planning of the robot. This method divides the region into sub-regions in which all edges of the figure are constructed from points equidistant from the two nearest points on the obstacles' boundaries. The application of the Voronoi diagram to mobile robot navigation around obstacles has been presented.
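Once either construction (visibility graph or Voronoi diagram) has produced a roadmap of nodes and weighted edges, finding the shortest route is a standard graph search. The sketch below uses Dijkstra's algorithm on an adjacency-list roadmap; the graph shape and names are illustrative.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a roadmap graph given as
    {node: [(neighbour, edge_length), ...]}. Returns the node sequence
    from start to goal, or None if the goal is unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))
```

For a visibility graph the edge lengths are straight-line distances between mutually visible vertices; for a Voronoi roadmap they are the lengths of the equidistant edges.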
value by correlating along a line of constant α (i.e., horizontally) in the first RP image of the pair. (We map the second to the first rather than vice versa, due to field-of-view considerations.) The maximum value of correlation is retained as the value of s(α). A correlation window size of 64 pixels was used in the RP image, with a search window of 150 pixels. Due to the high-frequency repetitive structure of the carpet microtexture shown in Figure 1a, many local maxima are often generated in the correlation process. However, due to low-frequency components of the spatial frequency, the global maximum in most cases corresponds to the correct image motion that we wish to recover. Even when this is not the case, it is possible to include pixels with significant local maxima, provided these strong local maxima are consistent with the extracted planar motion, that is, they lie sufficiently close to the extracted sinusoid model.
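The horizontal search that yields s(α) can be sketched as a one-dimensional normalized cross-correlation: slide a window from the second image along the constant-α row of the first and keep the shift with the highest correlation. The window lengths here are toy values, not the paper's 64/150-pixel windows, and the repeated-texture local maxima are not handled.

```python
def best_shift(row, template, max_shift):
    """Slide `template` along `row` and return the shift with the highest
    normalized cross-correlation (a toy stand-in for the per-row search
    whose winning shift becomes s(alpha))."""
    def ncc(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da > 0 and db > 0 else 0.0
    best, best_s = -2.0, 0
    for s in range(0, max_shift + 1):
        window = row[s:s + len(template)]
        if len(window) < len(template):
            break
        c = ncc(window, template)
        if c > best:
            best, best_s = c, s
    return best_s
```

Retaining the strong local maxima as well, and filtering them against the extracted sinusoid model, would require keeping all shifts whose correlation exceeds a threshold rather than only the argmax.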
Some of the commonly used SLAM techniques assume sufficient information in the environment for correspondence-based techniques to be successful. Indoor hallways, in which our system is intended to be used, do not provide consistently rich texture or depth variation for either visual odometry or scan-matching techniques to generate consistently accurate estimates. To cope with this, the proposed system uses a combination of Manhattan rotation estimates [59, 60] with translation estimates from dead reckoning to generate a map of the robot's environment. To obtain Manhattan rotation estimates, lines are detected in a 2-D scan of the workspace and a dominant direction for these lines is inferred. This dominant direction is compared with a reference established in the map-initialization frame. The Manhattan assumption is that lines in an indoor environment are orthogonal to each other. This property is used to estimate frame-by-frame rotation. Section 4.3 explains this mapping technique.
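The dominant-direction step can be sketched as follows. Under the Manhattan assumption, wall lines differ by multiples of 90°, so folding each detected line direction into [0°, 90°) collapses orthogonal walls onto one direction; the offset from the initialization-frame reference is the rotation estimate. This is an illustrative estimator, not necessarily the one used in the system.

```python
import math

def manhattan_rotation(line_angles_deg, reference_deg=0.0):
    """Fold detected line directions into [0, 90) so orthogonal walls collapse
    onto one dominant direction, average them on the 90-degree circle (to
    handle wrap-around near 0/90), and return the offset from the reference
    direction fixed at map initialization, in degrees in [-45, 45)."""
    folded = [a % 90.0 for a in line_angles_deg]
    # circular mean with period 90 degrees: scale angles by 4 (period 360/4)
    s = sum(math.sin(math.radians(4 * a)) for a in folded)
    c = sum(math.cos(math.radians(4 * a)) for a in folded)
    dominant = math.degrees(math.atan2(s, c)) / 4.0 % 90.0
    return (dominant - reference_deg + 45.0) % 90.0 - 45.0
```

Because the estimate is anchored to the fixed reference rather than chained frame to frame, rotation error does not accumulate, which is what makes the combination with dead-reckoned translation workable in texture-poor hallways.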
The principle of operation of an accelerometer can be explained by a simple mass (m) attached to a spring of stiffness (k) that in turn is attached to a casing, as illustrated in Figure 5. The mass used in accelerometers is often called the seismic mass or proof mass. In most cases the system also includes a dashpot to provide a desirable damping effect; the dashpot, with damping coefficient (c), is normally attached to the mass in parallel with the spring. When the spring-mass system is subjected to linear acceleration, a force equal to mass times acceleration acts on the proof mass, causing it to deflect. This deflection is sensed by suitable means and converted into an equivalent electrical signal. Some form of damping is required, as otherwise the system would not stabilize quickly under applied acceleration. To derive the equation of motion of the system, Newton's second law is used, whereby all real forces acting on the proof mass are balanced against the inertia force on the proof mass. Accordingly, the dynamic problem can be treated as a problem of static equilibrium, and the equation of motion can be obtained by direct formulation of the equations of equilibrium. This damped mass-spring system with an applied force constitutes a classical second-order mechanical system. From the stationary observer's point of view, the sum of all forces in the z direction yields the equation of motion.
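That force balance can be written out explicitly. The following is the standard second-order accelerometer model implied by the description, with z the displacement of the proof mass relative to the casing and a the applied acceleration of the casing:

```latex
% Newton's second law on the proof-mass, resolved along z
% (m = proof-mass, c = damping coefficient, k = spring stiffness,
%  a = applied acceleration of the casing):
m\ddot{z} + c\dot{z} + kz = -ma
```

In steady state the derivatives vanish and the deflection settles at z = -ma/k, so sensing the deflection gives a signal proportional to the applied acceleration, which is exactly how the deflection-to-signal conversion described above measures acceleration.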