Inertial Navigation Systems (INS) have been developed for a wide range of vehicles. Sukkarieh developed a GPS/INS system for straddle carriers that load and unload cargo ships in harbors. Bennamoun et al. developed a GPS/INS/SONAR system for an autonomous submarine; the SONAR added another measurement to improve accuracy and provided a positional reference when the GPS antenna was submerged and could not receive a signal. Ohlmeyer et al. developed a GPS/INS system for a new smart munition, the EX-171. Due to the high speed of the missile, the 1 s update rate of a GPS-only solution was too slow and could not provide the required accuracy. Jorge Lobo et al. describe a prototype of an inertial navigation system for use in mobile land vehicles, such as cars or mobile robots. As Dieter Fox et al. note, to operate autonomously, mobile robots must know where they are. Mobile robot localization, that is, the process of determining and tracking the position (location) of a mobile robot relative to its environment, has received considerable attention over the past few years. Robot localization has been recognized as one of the most fundamental problems in mobile robotics [9, 10]. Pirník et al. used data from inertial sensors for control of a wheeled chassis. Abramov and Božek applied an inertial measurement system in machinery.
Abstract: This paper describes the development of different basic techniques for mobile robot navigation over the last 10 years. Nowadays mobile robots are widely used in many industries for performing different activities. Controlling a robot is generally done using a remote control, which can control the robot only up to a fixed distance. We discuss three basic techniques for mobile robot navigation. The first technique combines a neural network with fuzzy logic into a neuro-fuzzy approach. The second is a radio-frequency technique, a control system in which the mobile robot is controlled using a mobile phone and wireless RF communication. The third is robot navigation using a sensor network embedded in the environment: sensor nodes act as signposts for the robot to follow, obviating the need for a map or localization on the part of the robot, and navigation directions are computed within the network (not on the robot) using value iteration.
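The in-network computation of the third technique can be illustrated with a minimal value-iteration sketch over a grid of sensor nodes; the grid layout, obstacle cell, and goal position below are illustrative assumptions, not taken from the paper.

```python
# Minimal value iteration over a 4-connected grid of sensor nodes.
# Each node stores a cost-to-goal value; the robot navigates by always
# moving to the neighbouring node with the lowest value.
# Grid size, obstacle cells, and goal position are illustrative only.

def value_iteration(rows, cols, goal, obstacles, iters=100):
    INF = float("inf")
    V = {(r, c): INF for r in range(rows) for c in range(cols)}
    V[goal] = 0.0
    for _ in range(iters):
        changed = False
        for r in range(rows):
            for c in range(cols):
                if (r, c) in obstacles or (r, c) == goal:
                    continue
                best = min(
                    (V[n] for n in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                     if n in V and n not in obstacles),
                    default=INF,
                )
                if best + 1.0 < V[(r, c)]:
                    V[(r, c)] = best + 1.0
                    changed = True
        if not changed:
            break
    return V

# 3x3 grid, goal at the far corner, one blocked node in the middle.
V = value_iteration(3, 3, goal=(2, 2), obstacles={(1, 1)})
print(V[(0, 0)])  # shortest path length around the obstacle: 4.0
```

Each node only needs its neighbours' values to update its own, which is why the computation can be distributed across the sensor network itself.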
mobile robot between urban buildings. GPS can be combined with LRF and dead reckoning for navigating vehicles on roads using beacons and landmarks; combined with camera vision for localizing a mobile robot on color-marked roads; combined with a 3D laser scanner for building compact maps to be used in miscellaneous mobile robot navigation applications; combined with an IMU, camera vision, and a sonar altimeter for navigating an unmanned helicopter through urban buildings; or combined with LRF and inertial sensors for leading the mobile robot ATRV-JR over paved and rugged terrain [11, 12]. GPS can be combined with INS and an odometry sensor to increase its positioning accuracy for vehicles. GPS is combined with odometry and GIS (geographical information system) for road-matching-based vehicle navigation. GPS can also be combined with LIDAR (light detection and ranging) and INS for guiding a wheelchair among urban buildings by 3D map-based navigation. GPS is combined with a video camera and INS for navigating vehicles by lane detection on roads. GPS is combined with INS for improving the position estimation of vehicles. GPS is combined with INS and odometry for localizing a mobile robot among urban buildings. A video camera with odometry is used for navigating a land vehicle, following roads by lane signs and avoiding obstacles. An omni-directional infrared vision system can be used for localizing a patrol mobile robot in an electrical-station environment. 3D map building and urban scenes are used in mobile robot navigation by fusing stereo vision and LRF.
The interdisciplinary nature of navigation leads us to draw on knowledge contained in solutions used in related technical fields. An example of this trend is combining it with elements of robotics, in which SLAM (Simultaneous Localization And Mapping) is commonly used for positioning a vehicle. To calculate position changes, the locations of characteristic objects on a continuously updated map of the environment are used. The attractiveness of implementing this technology in connection with marine navigational aids stems from the possibility of enhancing positioning accuracy in harbor, off-shore or narrow areas, that is, in areas with built-up hydro-technical infrastructure, such as breakwaters and waterfronts, or navigational infrastructure in the form of marked water fairways and anchorages. In this article an analysis of SLAM combined with INS (Inertial Navigation System) is carried out. It focuses on the possibilities of enhancing accuracy in fixing position coordinates for a submarine. The first part of the article presents a mathematical basis for combining INS and SLAM using the Extended Kalman Filter. The second part describes a study on the accuracy of positioning a mobile robot (in this instance a wheeled vehicle) which employs a navigation system based on INS and INS-aided SLAM. The final part of the article includes the results of the study and their analysis, as well as generalized conclusions indicating the advantages and disadvantages of the proposed solution.
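The INS/SLAM fusion via the Extended Kalman Filter can be sketched generically as a predict/update cycle: INS dead reckoning drives the prediction, and landmark observations drive the correction. The state layout, models, and noise values below are illustrative placeholders, not the article's actual formulation.

```python
import numpy as np

# Generic (linearized) Kalman predict/update cycle, as used when fusing
# INS dead reckoning (prediction step) with SLAM landmark observations
# (correction step). All matrices and noise levels are illustrative.

def ekf_predict(x, P, F, Q):
    """Propagate the state with the INS motion model x <- F x."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z = H x + noise."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Tiny 1-D position/velocity demo: predict one step, observe position.
dt = 1.0
x = np.array([0.0, 1.0])                  # [position, velocity]
P = np.eye(2)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity model
Q = 0.01 * np.eye(2)                      # process noise (assumed)
H = np.array([[1.0, 0.0]])                # we observe position only
R = np.array([[0.1]])                     # measurement noise (assumed)

x, P = ekf_predict(x, P, F, Q)
x, P = ekf_update(x, P, np.array([1.05]), H, R)
print(x[0])  # corrected position estimate, between prediction and measurement
```

In the full INS-aided SLAM formulation the state also contains landmark coordinates, and F and H become Jacobians of the nonlinear motion and observation models, but the predict/update structure is the same.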
Doitsidis, Murphy, Long, et al. [13–15] studied the use of a distributed framework for autonomous multi-robot control. The distributed field robot architecture, also known as DFRA or distributed-SFX, builds a layer on top of the existing software architecture used for individual robots. The layer itself is designed around Sun Microsystems' Jini, a network architecture for building modular service-based distributed systems. The entire architecture is based on Java and uses Jini as a middleware layer. Work done in [13, 14] describes an integration of distributed-SFX with the MATLAB environment for rapid prototyping of behavioral and control modules. Multi-sensor fuzzy logic controllers, implemented in MATLAB, evaluate the sensor data and determine the necessary inputs to the robot drive system for navigation. Experimental results show the applicability of the system for controlling two wheeled robots in an outdoor environment.
• Due to the variety of infrastructure, devices and deployment effort requirements, and the lack of uniform testing methods, comparison of different approaches is relatively difficult. So, in this thesis, an SHS system, currently the best method in the dead-reckoning approach, was re-implemented for comparison purposes. As a next step, other existing methods can also be re-implemented under similar indoor environment conditions. From the results of this experiment, it is observed that dead-reckoning systems employing low-cost sensors cannot track users for a long period of time because of the exponentially accumulating drift from the actual location, even with external corrections. So, dead reckoning can only be used to track for a short time period, with frequent initializations from reliable external location fixes. Fingerprinting systems and GPS fixes can be used for these initializations opportunistically. Indoor mapping and guidance systems can use these dead-reckoning systems to guide indoor pedestrians efficiently. This leads to hybrid method implementations, which have already been explored with the use of maps and particle filters.
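The drift behaviour discussed above follows directly from the basic step-and-heading update, in which an uncorrected heading error compounds into position error at every step. The step length and gyro bias below are illustrative numbers, not measurements from the thesis.

```python
import math

# Step-and-heading (SHS) dead reckoning: each detected step advances the
# position along the current heading estimate. A small constant gyro
# bias is added per step to show how heading drift accumulates into a
# growing position error. Step length and bias values are illustrative.

def dead_reckon(n_steps, step_len=0.7, heading=0.0, gyro_bias=0.01):
    x = y = 0.0
    for _ in range(n_steps):
        heading += gyro_bias            # uncorrected bias accumulates
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return x, y

# True path is straight along the x axis; the estimate bends away.
x, y = dead_reckon(200)
print(x, y)  # cross-track error y grows large without external fixes
```

After only 200 steps the cross-track error here is tens of metres, which is why the text concludes that frequent re-initialization from external fixes (fingerprinting, GPS) is needed.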
The pocket navigation system is divided into two subsystems, hardware and software. The hardware subsystem, the left box in Figure 1, consists of a magnetic and inertial measurement unit (MIMU). All of the experiments and the data provided in this article have been recorded with the Xsens MTw MIMU. The navigation system described in this article offers the advantage that the MIMU can be located in the front pocket of the trousers. No additional fastening is required unless the trousers are so loose that the movement of the MIMU in the pocket is completely decoupled from the movement of the pedestrian's leg; in these cases, the sensor can be attached directly to the leg.
Mobile positioning is of increasing interest to wireless telecom operators. Indeed, many applications require accurate location information about the mobile device (context-aware applications, emergency situations, etc.). While many outdoor solutions exist, based on GPS/AGPS, in indoor environments the received signals are too weak to provide an accurate location using those technologies. Currently, given that many buildings are equipped with WLAN access points (shopping malls, museums, hospitals, airports, etc.), it may become practical to use these access points to determine user location in indoor environments. Moreover, new regulations will require VoWiFi (voice over WiFi) operators to integrate a positioning solution into their terminals to comply with the E911 policy. The location technique is based on the measurement of the received signal strength (RSS) and the well-known fingerprinting method [2, 3]. The accuracy depends on the number of positions registered in the database. Besides, signal fluctuations over time introduce errors and discontinuities in the user's trajectory.
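The fingerprinting step can be sketched as a nearest-neighbour search in RSS space over the calibration database. The access points, calibration positions, and RSS values (in dBm) below are fabricated for illustration.

```python
# WLAN fingerprinting sketch: locate a user by comparing a measured RSS
# vector (one reading per access point) against a database of
# (position, RSS vector) calibration records.
# All positions and RSS readings (dBm) here are made up for illustration.

fingerprint_db = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-60, -50, -75]),
    ((5.0, 5.0), [-75, -55, -45]),
    ((0.0, 5.0), [-65, -72, -50]),
]

def locate(rss):
    """Return the database position whose RSS vector is closest (Euclidean)."""
    def dist(record):
        _, db_rss = record
        return sum((a - b) ** 2 for a, b in zip(rss, db_rss))
    return min(fingerprint_db, key=dist)[0]

print(locate([-42, -68, -79]))  # closest calibration point: (0.0, 0.0)
```

As the text notes, the achievable accuracy is bounded by the density of calibration positions, and RSS fluctuations over time can make consecutive estimates jump between database entries.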
b. A machine or device that operates automatically or by remote control. According to Isaac Asimov, robots must follow the three laws of robotics. The laws are: a. A robot may not injure a human being or, through inaction, allow a human being to come to harm. b. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. c. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
deleted along with outliers in succeeding algorithms. Therefore, the stage at which the building is deleted makes little difference for outlier detectors with a variable threshold, but can be crucial for algorithms based on fixed parameters. Glass buildings can be dangerous for entropy-minimization algorithms, as they are not Lambertian surfaces and mostly reflect the undesirable sky. If glass buildings are detected and deleted at this stage, they will not create an additional disruption for entropy-minimization algorithms. The other problem, typical of but not exclusive to urban environments, appears when the mobile robot is turned towards a tall nearby object, such as a building or dense forest. In such a scene, the image visible to the robot can entirely lack a sky area. The proposed sky-detection algorithm would still classify the brightest part of the image as sky. This issue was not addressed in the thesis, since the situation is not very common; moreover, in some cases it may be beneficial for the above-mentioned reasons. In the worst-case scenario, the algorithm would classify less than 50% of the image pixels, all located in the upper half, as sky. This way, the most critical part of the road can never be wrongly classified as sky, and road detection can still be safely performed.
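The worst-case guarantee described above (at most 50% of pixels labelled sky, all in the upper half) can be expressed as a hard constraint on the detector's output. The brightness threshold below is a simplified stand-in for the thesis' actual sky-detection algorithm.

```python
import numpy as np

# Simplified sky detector with the worst-case constraint from the text:
# sky labels are restricted to the upper half of the image and never
# exceed 50% of all pixels. The fixed brightness threshold is only a
# stand-in for the real detection algorithm.

def detect_sky(gray):
    h, w = gray.shape
    budget = (h * w) // 2                     # at most 50% of all pixels
    mask = np.zeros_like(gray, dtype=bool)
    upper = gray[: h // 2]                    # lower half is never sky
    # consider the brightest upper-half pixels, up to the budget
    flat = np.argsort(upper, axis=None)[::-1][:budget]
    rows, cols = np.unravel_index(flat, upper.shape)
    # keep only clearly bright pixels (illustrative fixed threshold)
    bright = upper[rows, cols] > 200
    mask[rows[bright], cols[bright]] = True
    return mask

img = np.full((10, 10), 100, dtype=np.uint8)
img[:3] = 250                                 # bright band at the top
mask = detect_sky(img)
print(mask.sum(), mask[5:].sum())  # sky pixels found; none in lower half
```

Even if the scene contains no sky at all, this construction can only mislabel bright upper-half pixels, so the road region in the lower half stays safe.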
Our first step begins with detecting both the white lanes and the potholes. We are using OpenCV and our own computer vision algorithm to detect these lanes. We then determine a point between the lanes by averaging all of the detected points. We calculate the angular difference between this point and the center of the image and treat it as the error in the robot's heading. The heading is controlled to minimize this error using a proportional controller, as seen in Figure 18 below, keeping this point at the center of the image and directly in front of the robot. Once the robot drives to this point, we repeat the previous step, looking for another waypoint. We detect that we are about to enter the no-lane section when an insufficient number of lane Hough lines is detected and our GPS is within a certain range of our first waypoint. Once we lose these lanes, we publish a false boolean on the /lane_detection topic. The GPS node listens to this topic and executes when this flag is set to false. We follow a set of predetermined waypoints until we re-enter the lane section. We then continue with our previous method until the robot makes it to the finish section.
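The heading correction described above reduces to a proportional control law on the pixel offset of the averaged lane midpoint. The gain, image width, and point coordinates below are hypothetical, and the ROS topic handling is omitted.

```python
# Proportional heading controller sketch: steer so that the averaged
# lane midpoint stays at the image centre. The gain KP and image width
# are illustrative values, not those used on the actual robot.

IMG_WIDTH = 640
KP = 0.005  # turn-rate command per pixel of error (hypothetical gain)

def average_midpoint(lane_points):
    """Average the x-coordinates of all detected lane points."""
    xs = [x for x, _ in lane_points]
    return sum(xs) / len(xs)

def heading_rate(lane_points):
    """Turn-rate command that drives the midpoint to the image centre."""
    error = average_midpoint(lane_points) - IMG_WIDTH / 2
    return -KP * error  # turn opposite to the pixel offset

# Midpoint 40 px right of centre -> turn left (negative command).
cmd = heading_rate([(350, 400), (370, 380)])
print(cmd)
```

A pure proportional term suffices here because each new camera frame re-measures the error, so steady-state offset is continually re-estimated rather than integrated.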
Giuseppina and Alberto (2002) show that a single camera can deal with indoor environments with a modestly changing framework as the mobile robot moves over a horizontal surface. This single-camera vision system is able to navigate a mobile robot on an indoor floor while avoiding obstacles and people, using only a little prior information and on-board computation, and without omni-directional vision, as depicted in Figure 2.5 below. The remaining tasks are: map learning; route decision making and collision avoidance; and self-localization. Map learning is a well-studied area, yet building a map using monocular vision offers advantages over other sensors, such as a laser rangefinder, that are more reliable but also more expensive. The ground-image data can be converted into a grid map, which consists of certainty values indicating whether there is an obstacle on the floor.
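The last step, turning ground-image evidence into a certainty grid, can be sketched as a simple evidence-accumulation update. The grid size, cell update rule, and step value below are assumptions for illustration, not the authors' method.

```python
# Certainty (occupancy) grid sketch: each floor observation nudges a
# cell's certainty toward 1 (obstacle) or 0 (free floor), starting from
# 0.5 (unknown). Grid size and the update step are illustrative.

def make_grid(rows, cols):
    return [[0.5] * cols for _ in range(rows)]  # 0.5 = unknown

def update_cell(grid, r, c, obstacle_seen, step=0.2):
    """Move the cell's certainty toward 1 (obstacle) or 0 (free)."""
    target = 1.0 if obstacle_seen else 0.0
    grid[r][c] += step * (target - grid[r][c])

grid = make_grid(4, 4)
for _ in range(5):                 # repeated obstacle sightings at (1, 2)
    update_cell(grid, 1, 2, True)
update_cell(grid, 0, 0, False)     # one free-floor observation at (0, 0)
print(round(grid[1][2], 3), round(grid[0][0], 3))
```

Repeated consistent observations push a cell's certainty asymptotically toward 0 or 1, so transient misdetections from a single noisy frame have limited effect on the map.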
The main tasks of a mobile robot are to know its environment, to localize its position, and to plan a path towards the goal while avoiding obstacles. The sensors present on the robot make it easy for the robot to localize its position in the environment and to detect obstacles. The task of generating an optimal path is accomplished by applying the FA (firefly algorithm) over a normal probability distribution in a complex, crowded environment. If the fireflies are regarded as the population, their theoretical interpretation can be studied under the normal probability distribution. A probabilistic model is generated here to capture the general behavior of fireflies. Statistical tools such as the arithmetic mean and variance are used as the empirical data representing the decisions behind the fireflies' movements. This experiment is performed in a random environment, and its outcomes are governed by the continuous probability of the chance mechanism and the sample space. Here, each firefly is first assigned a random variable, then its random movement is defined over the range of possible values, and finally its frequency follows the normal probability distribution. This is applied to the fireflies as the corresponding random variables continuously in the following section.
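A single firefly move, with the Gaussian random term playing the role of the normal distribution described above, can be sketched as follows. The parameter values (beta0, gamma, alpha) are standard illustrative choices, not those used in the experiment.

```python
import math
import random

# One firefly-algorithm move: a dimmer firefly i is attracted toward a
# brighter firefly j. Attractiveness decays with squared distance, and
# a Gaussian (normally distributed) perturbation adds random search.
# Parameter values beta0, gamma, and alpha are illustrative.

def move_firefly(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1,
                 rng=random.Random(0)):
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)        # attractiveness term
    return [
        a + beta * (b - a) + alpha * rng.gauss(0.0, 1.0)
        for a, b in zip(xi, xj)
    ]

# A dim firefly at (0, 0) moves part-way toward a brighter one at (1, 1).
print(move_firefly([0.0, 0.0], [1.0, 1.0]))
```

With the random term switched off (alpha = 0), the move toward the brighter firefly is deterministic and scales with exp(-gamma * r²), so distant attractors pull only weakly.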
This is to certify that the thesis entitled “Navigation of Mobile Robot using Fuzzy Logic” is the bona fide work of Krushna Shankar Sethi and Sanjeev Pothen Jacob under the guidance of Dr. D. R. K. Parhi for the requirement of the award of the degree of BACHELOR OF TECHNOLOGY, specialization “Mechanical Engineering”, and submitted in the Department of Mechanical Engineering at National Institute of Technology Rourkela, during the period 2012-2013.
were captured before and after the robot moved in the translation mode. The images were then converted using an interpolation process (this may be a simple linear interpolation or based on cubic splines) to RP (ρ, α) form. Figure 1a shows one of the original images, which has its FOE sixteen pixels above the top center edge of the image. Its RP transform is in Figure 1b and shows the angular (α) axis and lines of constant (reciprocal) radius in the vertical direction. The axis representing (scaled) reciprocal distance from the FOE (ρ) and reciprocal radial lines (constant angle) are in the horizontal direction. For more intuitive viewing, the horizontal rendering of the RP image is such that r increases from left to right, thus ρ increases from right to left. In this rendering of the RP image, the FOE is out at infinity to the left of the image. In order to usefully constrain the size of the RP image, any pixels less than 64 pixels from the FOE are not included in the RP plot. In any case, the motion near the FOE is so small that it is not possible to make accurate height measurements in this image region. In terms of implementation details, we used VGA-sized images (640 × 480) as the raw input images and we generated RP images such that there was no information loss due to pixel compression.
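The RP resampling step can be sketched with nearest-neighbour sampling in place of the linear or cubic-spline interpolation mentioned above; output resolution, the ρ scale factor, and the test image are illustrative assumptions.

```python
import numpy as np

# Reciprocal-polar (RP) resampling sketch: map an image into (rho, alpha)
# coordinates about the focus of expansion (FOE), where rho is a scaled
# reciprocal of the radial distance from the FOE. Pixels closer than
# R_MIN = 64 px to the FOE are excluded, as in the text. Output sizes,
# the scale factor, and nearest-neighbour sampling are simplifications.

R_MIN = 64.0

def rp_transform(img, foe, n_rho=64, n_alpha=90, scale=64.0):
    h, w = img.shape
    r_max = np.hypot(h, w)
    rho_vals = np.linspace(scale / r_max, scale / R_MIN, n_rho)
    alpha_vals = np.linspace(0.0, 2 * np.pi, n_alpha, endpoint=False)
    out = np.zeros((n_rho, n_alpha), dtype=img.dtype)
    for i, rho in enumerate(rho_vals):
        r = scale / rho                           # back to radial distance
        for j, a in enumerate(alpha_vals):
            x = int(round(foe[0] + r * np.cos(a)))  # nearest neighbour
            y = int(round(foe[1] + r * np.sin(a)))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = img[y, x]
    return out

img = np.arange(200 * 200, dtype=np.float64).reshape(200, 200)
rp = rp_transform(img, foe=(100.0, 100.0))
print(rp.shape)
```

Sampling on a uniform ρ grid is what makes the FOE sit at infinity on one side of the rendered RP image, since r = scale/ρ diverges as ρ approaches zero.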
In the experiments we used a program with which we commanded the robot to drive forward at a given speed, or to turn, until it reached the set goal. In the experiment where the robot drove straight, we tested three different speeds: 0.1, 0.25 and 1 m/s. Before the robot started driving, we marked its initial position on the floor so that we could later measure the distance travelled. The robot computed the distance travelled from its odometry data. It kept driving for some time after reaching the goal, because it could not stop immediately. The robot reported two values computed from odometry: the first was the distance from the start of the path to the goal at the moment the program stopped sending commands; the second was the distance from the start of the path to the robot's actual stopping point.
technique. Therefore, the proper selection of the navigational technique is the most important step in the path planning of a robot working in a simple or complex environment. At present, many techniques have been developed by various researchers in the field of mobile robot navigation, and it is among the most researched topics of today. Mobile robot navigation is classified into three categories: global navigation, local navigation and personal navigation. The capability to define the position of elements in the environment with respect to a reference axis, and to steer towards the pre-decided goal, is global navigation. Local navigation deals with the identification of the dynamic conditions of the environment and the establishment of positional relationships among various elements. Handling the various elements of the environment relative to each other, by considering their positions, is personal navigation. The basic steps involved in the functioning of the robot are presented in Fig. 1.
door environments, and the use of appearance-based mapping requires large numbers of distinct features for accurate odometry and loop closure. RTAB-Map does not provide any correction of the odometry estimation between two successive frames; only large loop closures are subject to optimization once a map has been created. In the agricultural context the greatest challenge is traversal across farmers' fields, where very few nearby visual references are available for odometry and loop closure. Figure 5 shows a navigational map obtained from a farm field with a fenced boundary, in which good visual odometry is maintained as long as distinct objects around the boundary are present. This is a relatively easy map to obtain at a speed of approximately 0.5 m/s, due to the diversity of features available, and navigation is straightforward as long as the boundary is in view of the rover. A more difficult situation is shown in Figure 6, where the farm field boundary is grassy and lacking in distinct features. A slower forward speed of 0.2 m/s or lower is necessary to perform mapping while travelling along the boundary of this field, and visual odometry is less consistent. It is still possible to maintain good odometry when travelling along the perimeter of this field as long as some features are tracked well. However, as soon as the boundary is out of the rover's view when entering the centre of the field, odometry is easily lost and the quality of the map is compromised.
Abstract Many algorithms have been devised for mobile robot navigation systems. Some of them are based on newer concepts like fuzzy logic or genetic algorithms; others stick to older algorithms like odometry. A simple, computationally light algorithm for a lightweight telepresence robot is required. To accommodate this requirement, an odometry algorithm is chosen specifically, and an obstacle-avoidance feature is proposed to enhance it. A mobile robot prototype is implemented using an Arduino UNO, two DC geared motors, a single caster wheel, and an ultrasonic sensor. Two kinds of experiments are performed: the first verifies that the odometry algorithm is working; the second verifies the obstacle-avoidance mechanism. The results are satisfying: the mobile robot can avoid the obstacle and reach the destination correctly. A further experiment is needed to decide on the ultrasonic sensor placement in a real telepresence robot.
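The odometry algorithm referenced in the abstract can be sketched for a differential-drive base like the two-motor prototype described; the wheel radius, track width, and encoder resolution below are illustrative values, not the prototype's actual geometry.

```python
import math

# Differential-drive odometry sketch: integrate wheel encoder ticks into
# an (x, y, theta) pose estimate. Robot geometry values are illustrative.

WHEEL_RADIUS = 0.03      # m (assumed)
TRACK_WIDTH = 0.15       # m between the two wheels (assumed)
TICKS_PER_REV = 360      # encoder resolution (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the pose estimate by one pair of encoder readings."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2                  # distance travelled by the centre
    dtheta = (dr - dl) / TRACK_WIDTH   # heading change
    x += d * math.cos(theta + dtheta / 2)   # midpoint heading integration
    y += d * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

# Equal tick counts -> straight-line motion along the current heading.
x, y, th = update_pose(0.0, 0.0, 0.0, 360, 360)
print(round(x, 4), round(y, 4), th)
```

Calling `update_pose` once per encoder sampling interval keeps the per-step arcs short, which is what makes this simple midpoint integration accurate enough for a lightweight telepresence robot.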