This paper has presented a solution to the problem of autonomous navigation for light vehicles using advanced sensors. The key issues addressed in this work were real-time generation of the disparity image despite its heavy processing cost, orienting the mobile robot towards the widest available gap among all obstacles, including pedestrians, and avoidance of dynamic obstacles during movement. The processing load of disparity image generation was shifted to the GPU, while the remaining post-processing for computing gaps between obstacles was done on the CPU, making the system real-time. Dynamic obstacles were avoided using a well-tuned Braitenberg algorithm. The various tasks that make up the navigation algorithm have been organized into software modules able to communicate with one another, giving the system great flexibility, scalability and reusability. The work has described the hardware/software platform used. The hardware is based on the Summit XL mobile robot equipped with a stereo vision setup and a laser rangefinder. The software control architecture was developed on the ROS robot programming framework. The paper has described the proposed solutions to the various problems of autonomous navigation; these solutions address both indoor and outdoor navigation with obstacle avoidance. The results obtained with this platform are very satisfactory. Since obstacle detection relies on depth, any kind of obstacle can be detected. The angular rotation the mobile robot performs when a dynamic obstacle appears is the smallest possible, which makes the system more goal-oriented.
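The Braitenberg-style avoidance behavior mentioned above can be illustrated with a minimal sketch. This is a generic cross-coupled sensor-to-wheel scheme, not the paper's tuned controller; the sensor layout, gain, and speeds are hypothetical.

```python
# Minimal Braitenberg-style obstacle avoidance (illustrative sketch).
# Wheel speeds are a weighted sum of range-sensor readings: obstacles
# detected on the left speed up the left wheel, steering the robot right.
# Sensor layout, gain k, and base speed are hypothetical values.

def braitenberg_step(ranges, max_range=2.0, base_speed=0.5):
    """ranges: list of distances from left to right across the front arc."""
    n = len(ranges)
    # Normalized "closeness": 1.0 when an obstacle touches, 0.0 when clear.
    closeness = [max(0.0, 1.0 - min(r, max_range) / max_range) for r in ranges]
    left_half = sum(closeness[: n // 2])        # obstacles on the left
    right_half = sum(closeness[n - n // 2:])    # obstacles on the right
    k = 0.4  # hand-tuned gain
    left_wheel = base_speed + k * left_half     # left obstacle -> turn right
    right_wheel = base_speed + k * right_half   # right obstacle -> turn left
    return left_wheel, right_wheel
```

Because the extra speed is applied to the wheel on the same side as the obstacle, the robot veers away by only as much as the proximity demands, consistent with the goal-oriented minimal-rotation behavior described above.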
Autonomous navigation of mobile robots is a currently active area of robotics research. Many navigation approaches have been developed with the aim of controlling robots to navigate successfully in unknown, unstructured environments. One of the most important functions of a mobile robot operating in such environments is its ability to avoid collisions. For this it uses one of two well-established approaches: the global approach and the local approach. A navigation method is said to be global if it calculates the whole trajectory at the beginning of the navigation task, and again whenever a new situation arises. A global algorithm can take into consideration every detail of the navigation map and therefore yields a more intelligent result than local ones. However, a global algorithm has the following drawbacks:
In the literature, there is a considerable amount of work on letting mobile robots navigate visually within unknown, unstructured environments. Bonin-Font et al. presented a substantial survey with a detailed summary of the most outstanding visual navigation studies from 1987 to the late 1990s. Ghazouan et al. presented a technique for updating a model from prior and current observations of the voxel state, noting that the "environment is modeled using depth information provided by a stereo vision system". They additionally state: "Workspace is decomposed into voxels which are the smallest volume of environment. A first observation on the state of the voxels is calculated based on stereo system provided 3D points and triangulation error propagation. The proposed update function uses a credibility value that denotes how strongly a new observation shall influence the voxel state based on the
tic processes as the Dynamic Bayesian Network formalism. It has been shown that this technique can be used to learn models for behaviors with controllable parameters, such as reactive navigation (Infantes, Ingrand, and Ghallab 2006). Even though stochastic methods allow robots to take into account environment uncertainty during the learning process, we worked with deterministic techniques because we are currently using deterministic planning and dealing with the non-determinism during execution by monitoring and replanning. Classical versions of both regression and decision trees have been used for action modelling in autonomous robots in ROGUE (Haigh and Veloso 1999). ROGUE used decision trees to acquire rules that prioritize its activities according to the values of its sensors. It learned rules that are compiled into heuristics (in the form of control rules) used when planning, whereas in our work we change the domain model. ROGUE used feature-based ML, while we use relational ML instead. Notice that PELEA will be able to integrate both learning approaches.
Navigation is fundamental for mobile robot applications. In order to complete any given task, the robot first needs the capability to reach the target safely. Navigation of mobile robots has thus received much research attention. Existing methods can be classified into two main categories: hierarchical architectures and reactive or behavior-based architectures. The hierarchical architecture operates through sequential steps of sensing, planning and acting based on a known model of the environment. This architecture is thus appropriate for static and structured environments. For unknown or unstructured environments, the behavior-based architecture is often used. This approach splits a complex navigation task into sub-tasks or behaviors. Each behavior has its own objective and executes independently. The behaviors are then combined, in accordance with the state of the environment, to generate a global response. As the combination only uses local data, the behavior-based architecture does not need a global map of the environment. The division into behaviors additionally enables modularization and extensibility of the architecture.
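The behavior combination described above can be sketched as a weighted blend: each behavior independently proposes a motion command plus an activation level, and an arbiter merges them. The specific behaviors, gains, and the averaging scheme here are illustrative assumptions, not a particular published architecture.

```python
# Toy behavior-based controller: each behavior proposes a (speed, turn_rate)
# command and an activation weight; the arbiter blends them by weighted
# average. Behaviors, gains, and weights are illustrative only.

def goal_seek(heading_error):
    """Propose turning toward the goal; always moderately active."""
    return (0.5, 1.5 * heading_error), 1.0

def obstacle_avoid(front_dist, safe_dist=1.0):
    """Propose slowing and turning away; activation grows as obstacles near."""
    activation = max(0.0, 1.0 - front_dist / safe_dist)
    return (0.1, 0.8), activation  # slow down, turn left (arbitrary choice)

def arbitrate(proposals):
    """Blend (command, weight) pairs into one (speed, turn_rate) command."""
    total = sum(w for _, w in proposals)
    speed = sum(c[0] * w for c, w in proposals) / total
    turn = sum(c[1] * w for c, w in proposals) / total
    return speed, turn

# Only local data (heading error, nearest range reading) enters the blend,
# so no global map is required -- the key property noted above.
cmd = arbitrate([goal_seek(0.2), obstacle_avoid(0.4)])
```

Adding a new behavior only means adding another (command, weight) pair to the list, which is the modularity and extensibility the text attributes to this architecture.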
The author applied the proposed fuzzy controller to an autonomous wheeled mobile robotic platform moving in an unstructured environment with obstacles. The control strategy was tested through simulations of wheeled mobile robot motion [24-27]. A simulation example of a wheeled autonomous mobile robotic platform is presented in Figure 3. The corresponding fuzzy control is implemented to perform obstacle and collision avoidance tasks. In particular, the navigation strategy proved to be extremely sensitive to the balance between the avoid-obstacle and reach-the-target behaviors. Simulation results are shown in Figure 3.
In "Modelling and Fuzzy Control of a Four-Wheeled Mobile Robot", Istvan et al. compare the results of PID and fuzzy controllers. For this, they upgraded the simplified kinematic bicycle model of a four-wheeled robot car. Their results show that the fuzzy controller performs well in terms of speed because its behaviour is closer to reality, although it requires more calculations than the PID controller. In "Adaptive Dynamic Motion Controller Design for a Four-Wheeled Omnidirectional Mobile Robot", Ching-Chih Tsai et al. developed a dynamic model and a dynamic motion controller for stabilization and trajectory tracking of an omnidirectional mobile robot with four independent driving omnidirectional wheels equally spaced at 90 degrees from one another.
Working with an architecture company, TEECOM, our team of six Cal Poly Computer Engineering students came together to create a product that facilitates the work done by the company. Through constant communication with TEECOM’s research and development division, our team was able to meet the needs of the company and gain context for how the product will be used. TEECOM works with companies to outfit business areas with technology ranging from wall-sized screen displays to webcams in meeting rooms. In order to plan how a space will look and how it will be organized, they create building floor plans. To make this process more efficient, the Planar Learning Autonomous Navigation Robot (PLANR) was created.
The output obtained after the execution of the AI is a set of position and velocity vectors, so the task of the trajectory module is to guide the robots through the opponents to reach the destination. The routine used for this purpose is the potential field method (an alternative method is also in progress, which models the robot's motion through opponents like the flow of a bulk of water through obstacles). In this method, different electrical charges are assigned to our robots, the opponents and the ball. By calculating the potential field of this system of charges, a path is suggested for the robot. At a higher level, predictions can be used to anticipate the positions of the opponents and make better decisions in order to reach the desired vector. In our path planning algorithm, an artificial potential field is set up in the space; that is, each point in the space is assigned a scalar value. The value at the goal point is set to 0 and the potential at all other points is positive. The potential at each point has two contributions: a goal force that causes the potential to increase with path distance from the goal, and an obstacle force that increases in inverse proportion to the distance to the nearest obstacle boundary. In other words, the potential is lowest at the goal, large at points far from the goal, and large at points next to obstacles. If the potential is suitably defined, then a robot that starts at any point in the space and always moves in the direction of the steepest negative potential slope will move towards the goal while avoiding obstacles. The numerical potential field path planner is guaranteed to produce a path even if the start or goal is placed in an obstacle.
If there is no possible way to get from the start to the goal without passing through an obstacle, the path planner will generate a path through the obstacle; if there is any alternative, however, the path will take it instead. For this reason it is important to make sure that some feasible path exists, although there are ways around this restriction, such as returning an error if the potential at the start point is too high. The path is found by repeatedly moving to the neighboring square with the lowest potential, starting at any point in the space and stopping when the goal is reached.
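The numerical potential field and the descent rule above can be sketched on a grid. The implementation choices here (Dijkstra-computed path distance, a fixed penalty for obstacle cells, 4-connectivity) are assumptions for illustration; penalizing rather than forbidding obstacle cells reproduces the property that a path through an obstacle is produced only when no free route exists.

```python
# Grid-based numerical potential field: the potential of a cell is its
# Dijkstra path distance to the goal, where stepping into an obstacle cell
# carries a large penalty instead of being forbidden. The robot then follows
# the steepest negative slope. Penalty and connectivity are illustrative.
import heapq

def potential_field(grid, goal, obstacle_penalty=1000.0):
    """grid[r][c] == 1 marks an obstacle; returns a dict of cell -> potential."""
    rows, cols = len(grid), len(grid[0])
    pot = {goal: 0.0}          # potential is 0 at the goal, positive elsewhere
    pq = [(0.0, goal)]
    while pq:
        p, (r, c) = heapq.heappop(pq)
        if p > pot.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + (obstacle_penalty if grid[nr][nc] else 0.0)
                if p + step < pot.get((nr, nc), float("inf")):
                    pot[(nr, nc)] = p + step
                    heapq.heappush(pq, (p + step, (nr, nc)))
    return pot

def descend(pot, start, goal):
    """Follow the steepest negative slope from start until the goal is reached."""
    path, cell = [start], start
    while cell != goal:
        r, c = cell
        cell = min(((r + dr, c + dc) for dr, dc in
                    ((1, 0), (-1, 0), (0, 1), (0, -1))),
                   key=lambda n: pot.get(n, float("inf")))
        path.append(cell)
    return path
```

Because the potential is an exact path distance, every cell has a neighbor of strictly lower potential, so the descent cannot get trapped in a local minimum, which is the guarantee claimed for the numerical planner.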
A classical line-following robot without a PID controller has a slow response: the robot cannot follow the line smoothly and sometimes tends to move off it. There is a maximum speed beyond which this classical algorithm cannot be used, otherwise the robot will overshoot the line. This problem also causes the motion of the robot to wobble, following a zigzag-like path and wasting valuable time and battery power. Although such a mobile robot can follow the line track, its motion and its overshoot at maximum speed need to be improved. The line-following robot has difficulty achieving stable and precise navigation: as its speed increases, it can no longer navigate effectively or follow the line precisely, and it cannot take a sharp turn without tending to leave the track. These operating problems are due to constraints such as slow sensor response, time delay and other disturbances. To overcome them, a better controller is needed to make the robot follow the line accurately and smoothly without leaving the track. In this project, a PID controller is used to improve the motion, forming an effective closed-loop system. In developing a PID controller for a line-following robot, a few requirements need to be considered in order to complete the project. Implementation of the PID controller is expected to overcome these problems and increase the performance of the line-following robot.
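A PID line-following loop of the kind described can be sketched as follows. The gains, the error signal (lateral offset of the line from the sensor center), and the differential-steering split are hypothetical, not values from this project.

```python
# Minimal PID loop for line following (sketch; gains and error signal are
# hypothetical). The error is the lateral offset of the line from the sensor
# center; the PID output is split across the two wheels to steer back.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt                    # accumulated error (I)
        derivative = (error - self.prev_error) / dt    # rate of change (D)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def drive(base_speed, correction):
    """Differential steering: split the correction across the two wheels."""
    return base_speed - correction, base_speed + correction

pid = PID(kp=1.2, ki=0.05, kd=0.3)
left, right = drive(0.6, pid.step(error=0.1, dt=0.02))  # line is to the right
```

The derivative term is what damps the zigzag wobble of the classical on/off follower: it opposes fast swings across the line, so higher speeds can be reached without overshoot.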
The Matlab/Simulink environment computes the locations of the goal and obstacles, simulates the sensory data, finds the distances to the obstacles, and feeds this information to the fuzzy controller, which makes the movement decision (speed and orientation); it is used to test the ability of the autonomous mobile robot to reach a goal while avoiding static obstacles on the way. Two fuzzy logic controllers are used for path planning among static obstacles: one controller for goal reaching and another for obstacle avoidance, depending on the sensing information from the unknown environment. The robot navigates from the start point to the end (target) point, stopping when the distance between the center of the mobile robot and the target point falls below 0.03 m. Figure (20) shows the navigation of the mobile robot from the starting point (0,0) toward the target point (12,16); the mobile robot moves among the obstacles without collision.
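A fuzzy decision of the kind this passage describes can be sketched with a toy rule base. The membership functions, rules, and output values below are invented for illustration; the authors' actual controllers are not specified here.

```python
# Toy fuzzy speed decision: obstacle distance is fuzzified into "near"/"far"
# with triangular memberships, and the crisp speed is the membership-weighted
# average of the rule outputs. All shapes and values are illustrative.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(obstacle_dist):
    near = tri(obstacle_dist, -1.0, 0.0, 2.0)   # fully "near" at 0 m
    far = tri(obstacle_dist, 0.0, 2.0, 5.0)     # fully "far" at 2 m
    # Rules: IF near THEN slow (0.1 m/s); IF far THEN fast (0.8 m/s).
    rules = [(near, 0.1), (far, 0.8)]
    total = sum(w for w, _ in rules)
    # Weighted-average defuzzification into a crisp speed command.
    return sum(w * v for w, v in rules) / total if total else 0.0
```

A goal-reaching controller would be built the same way over heading error, and the two blended as in the two-controller scheme the passage mentions.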
The spring stiffness, magnet and travel distance were dimensioned using a model of the magnetic field so as to maximize the sensitivity of the force measurement and provide a range of ±1 N. Figure 5 shows the output signal of the sensor with respect to a longitudinal force. Some hysteresis was observed in practice, due to friction between the two moving parts, which prevents them from returning to exactly the same position after a contact. This problem is corrected in software by resetting the zero offset after each contact. A picture of the sensor mounted on the real robot is shown in Figure 6. Tests in flight made it possible to measure the standard deviation of the sensor in a real situation, in order to estimate the sensitivity of the force sensing. The standard deviation of the measurements when no force is applied is generally around σ = 0.01 N. A threshold of 3σ was used to detect contacts, which means the sensors are sensitive to axial forces as small as 0.03 N.
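The 3σ thresholding and the software zero-offset reset can be sketched together. The exact reset logic below (re-zeroing on the reading after a contact ends) is an assumption about how the correction might be done, not the authors' implementation.

```python
# Contact detection by thresholding at 3*sigma: with a noise floor of
# sigma = 0.01 N, offset-corrected forces above 0.03 N count as contact.
# The zero offset is re-learned after each contact to cancel the hysteresis
# left by friction (reset logic is an illustrative assumption).

SIGMA = 0.01  # N, standard deviation of the force signal at rest

class ContactDetector:
    def __init__(self, threshold=3 * SIGMA):
        self.threshold = threshold
        self.offset = 0.0
        self.in_contact = False

    def update(self, raw_force):
        """Return True while the offset-corrected force exceeds 3*sigma."""
        contact = abs(raw_force - self.offset) > self.threshold
        if self.in_contact and not contact:
            # Contact just ended: re-zero on the new resting reading so the
            # friction-induced position shift does not bias later detections.
            self.offset = raw_force
        self.in_contact = contact
        return contact
```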
Then, for each planning query, a partial graph is built on-line to try to find a path connecting the current and goal configurations. When a solution path in the graph is found, the actual trajectory is built by appending the feasible motions associated with each of the edges of the path. The number of primitives in the control set directly affects the computational complexity of the graph search algorithm, so there is a trade-off between keeping the computational complexity low and exploiting the maneuverability of the robot as much as possible. Each motion primitive is assigned a cost that is taken into account when searching for the best path.
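The cost-weighted search over motion primitives can be sketched as follows. The three primitives, their costs, and the discrete (x, y, heading) state space are invented for illustration; a real control set would encode the robot's actual feasible motions.

```python
# Lattice-style planning sketch: each primitive maps a discrete state
# (x, y, heading) to a successor and carries a cost; Dijkstra finds the
# cheapest primitive sequence. Primitives and costs are illustrative.
import heapq

# (name, forward steps, heading change, cost): turning costs more.
PRIMITIVES = [("straight", 1, 0, 1.0), ("left", 1, 1, 1.4), ("right", 1, -1, 1.4)]

MOVES = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}  # heading -> (dx, dy)

def apply(state, prim):
    """Move forward along the current heading, then apply the heading change."""
    x, y, h = state
    _, step, dh, cost = prim
    dx, dy = MOVES[h]
    return (x + dx * step, y + dy * step, (h + dh) % 4), cost

def plan(start, goal_xy, max_cost=50.0):
    """Return (cost, primitive names) of the cheapest path to goal_xy, or None."""
    pq = [(0.0, start, [])]
    best = {start: 0.0}
    while pq:
        cost, state, seq = heapq.heappop(pq)
        if (state[0], state[1]) == goal_xy:
            return cost, seq
        for prim in PRIMITIVES:
            nxt, c = apply(state, prim)
            if cost + c < best.get(nxt, max_cost):
                best[nxt] = cost + c
                heapq.heappush(pq, (cost + c, nxt, seq + [prim[0]]))
    return None
```

The trade-off in the text shows up directly here: the branching factor of the search equals the size of PRIMITIVES, so a richer control set buys maneuverability at the price of a larger graph.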
The time to produce a 'plan', by whatever method is eventually adopted, is preferably kept below the time taken for the vehicle to reach the location where that decision is needed. The term 'real time' here has a more acute meaning, distinct from that normally employed when discussing, for example, an airline reservation system. This is especially true for a vehicle that can damage both itself and its environment, which would almost certainly be an undesirable outcome in most environments. The objective of 'planning' as discussed in the literature is to convert the sensed qualities, via their electrical form, into a condensed symbolic form. With a common 'measure' of these symbols and their relations, the aim is to manipulate the symbols and relations within this abstraction to produce decisions whose domain of validity extends beyond the immediate sensor boundary, in a manner similar to route determination before setting off on a journey in a car. The automatic production of plans or task scheduling, to produce the sequence of (X, Y and ) shown in figure 1.2, which is one of the inputs to the guidance algorithms, is not examined further.
A point to note is that systems that use this multifeature, multicue philosophy will be highly compute-intensive, due to having a range of processes with a high-bandwidth data input (raw video data). We envisage that the high-bandwidth low-level feature extractors will be implemented as a hardware layer with parallel processing pipelines, either via field-programmable gate arrays (FPGAs) or special-purpose DSPs, which output a feature stream. Higher-level layers of software will be able to access the much lower-bandwidth feature stream and select/combine/fuse those parts of the stream, thus providing information or constraints relevant to the particular visual task (in our case robot navigation). Of course, how best to design a framework that allows a computer vision system to understand how to combine features and cues in the context of the current visually-driven task and in the context of the current visual environment is a difficult, open, cross-disciplinary research question. What we aim to do is less ambitious: we aim to develop computer vision applications that are robust, because they are able to make use of multiple features/cues. There is no explicit scene understanding as such, but we need to understand how to build multifeature, multicue algorithms manually before we can develop systems that learn and adapt the way in which they integrate features and cues.
Robots are becoming ubiquitous in our modern world. In industry, they relieve humans from tedious, monotonous and dangerous work. At home, they do vacuum cleaning or lawn mowing. On other planets like Mars, robots explore the environment and take scientific measurements. And for search and rescue, systems for reconnaissance and tele-operated manipulation are in development. For many of these applications, robots have to be mobile and must navigate autonomously. For robots at home, autonomy is required for making them useful at all: if we had to control and supervise the systems closely, it would be easier to do the work ourselves. In planetary exploration, scientists want to be in control of defining the next task or next waypoint. However, a signal to Mars and back takes between 6 and 40 minutes, depending on the constellation of Mars and Earth. This large latency prevents effective remote control; therefore, going to the next waypoint is done autonomously. Similarly, in search and rescue applications, humans want to define the high-level tasks, but a certain level of autonomy is required to relieve the rescue workers from fine-grained control of single robots or of large teams of robots that may work in parallel.
This project is based on the design and construction of a mobile robot that uses two DC motors to move forward, backward, left and right. Motor control is used to govern the movement, orientation and current position of the mobile robot. To turn the robot left or right, one of the DC motors is switched off.
cope with various uncertainties to complete the tasks. These uncertainties arise for various reasons: knowledge about the environment is partial and approximate; environmental changes are dynamic and can only be partially predicted; the robot's sensors are imperfect and noisy; and the robot's control is imprecise (Kaplan, 2006).
MARVIN’s 2014 redevelopment replaced its original robotic development environment (RDE), Microsoft’s Robotics Developer Studio (MRDS), with Robot Operating System (ROS). ROS is an open-source RDE that runs on Linux. It uses a graph-based messaging network to communicate between subsystems (called nodes). Each node can subscribe and publish to message topics. This standardises and abstracts the inter-node communication, making it transparent. It allows nodes to be written in different styles and languages without having to redevelop communication methods between them, which reduces development time. ROS handles the processing of each node, allowing multiple nodes to run simultaneously. This provides distributed processing for each of the robot’s subsystems on the same machine, which leads to more efficient and robust control. ROS also has a highly active development community, which provides support and a vast library of packages for performing standard robotic tasks. This also dramatically reduces development times. For these reasons, ROS will continue to be used throughout this project.
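The publish/subscribe pattern described above can be illustrated with a minimal in-process topic bus. This is a toy stand-in, not the real ROS API; in ROS the same roles are played by node handles, topics, and the master's name resolution.

```python
# Toy illustration of ROS-style graph-based messaging (NOT the ROS API):
# nodes publish messages to named topics, and every callback subscribed to
# that topic receives them. Nodes never reference each other directly, which
# is the decoupling that lets ROS nodes mix languages and styles.
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on this topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all callbacks subscribed to this topic."""
        for cb in self.subscribers[topic]:
            cb(message)

# Two "nodes" communicating only through a topic name, as ROS nodes do:
bus = TopicBus()
received = []
bus.subscribe("/scan", lambda msg: received.append(msg))   # planner node
bus.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})          # driver node
```

Because the publisher only knows the topic name, either side can be rewritten or replaced independently, which is the development-time saving the passage attributes to ROS.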