Adaptation to change in the target position. To verify the adaptability of the system in a dynamic environment, we perform an experiment in which we continuously displace the target while the robot approaches it (see Fig. 15). During the reproduction, the position of the target is updated based on the output of the stereo vision system. Since the modulated dynamics preserves the asymptotic stability of the model, the system can adapt its motion on-the-fly to changes in the target position. Note that instant adaptation to the target position is an inherent property of the SEDS model; this experiment demonstrates that our approach preserves all the properties of SEDS while additionally enabling obstacle avoidance.

Adaptation to change in both the target and obstacle positions. To evaluate the performance of the system in the presence of a moving obstacle, we extend the previous example to a case where both the target and the obstacle positions change as the robot approaches the target. Please note that in this experiment we assume that the obstacle movement is "quasi-static". This assumption requires the obstacle approaching speed (the projection of the obstacle velocity onto the vector connecting the obstacle center to the robot end-effector) to be significantly smaller than the robot movement in that direction. Figure 16 demonstrates the obtained results. In this experiment, between t = 0 and t = 6 seconds, the target is moved from its original position, first opposite to and then along the direction of the y-axis. The box also starts moving in the period between t = 0 and t = 2 seconds. During the reproduction, the target position and the box center and orientation are continuously updated based on the output of the stereo vision system. Similarly to the previous example, the system remains
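The on-the-fly adaptation can be sketched with a minimal stand-in for the SEDS model (a simple linear, globally asymptotically stable dynamical system; the gain, time step, and target schedule below are illustrative assumptions, not values from the experiment). Because the attractor is re-read at every integration step, the trajectory adapts instantly when the vision system reports a new target position:

```python
# Minimal sketch (not the actual SEDS model): a linear, asymptotically
# stable dynamical system x_dot = -k * (x - x_target), integrated online
# while the target is displaced mid-motion.

def reproduce(start, targets, k=2.0, dt=0.01, steps=1000):
    """targets maps a step index to a new target position; returns final state."""
    x = list(start)
    target = targets[0]
    for t in range(steps):
        if t in targets:           # vision system reports a displaced target
            target = targets[t]
        # Euler step of the stable dynamics toward the current target.
        x = [xi + dt * (-k * (xi - gi)) for xi, gi in zip(x, target)]
    return x

# The target jumps along the y-axis halfway through the motion.
final = reproduce(start=(0.0, 0.0),
                  targets={0: (1.0, 1.0), 500: (1.0, -0.5)})
print(final)  # converges near the displaced target (1.0, -0.5)
```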
Obstacle avoidance is an important application in robotics and is widely used in areas such as security and the military. This project was implemented on an FPGA platform (the DE0-Nano board). The robot detects obstacles using an ultrasonic sensor (HRLV-MaxSonar). Perception and motion planning are the core parts of this project: the ultrasonic sensor detects obstacles, while the DE0-Nano serves as the platform, and the programmability of the FPGA makes the design easy to implement on different mobile robot platforms. The sensor was integrated with the DE0-Nano board. Structural VHDL coding was used to design the obstacle avoidance logic, with Quartus II 13.0sp1 as the development CAD tool. Implementing complex obstacle avoidance on the DE0-Nano is feasible because of its rich supply of logic elements. Sensor characterization tests were carried out to master the behaviour of the sensor and the robot. The results obtained show that the design on the DE0-Nano achieved a frequency of up to 1.3 GHz, used a total of 4,042 logic elements, and that the ultrasonic sensor detected and avoided obstacles with high precision and accuracy.
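The core computation an ultrasonic range finder performs can be sketched as follows (here in Python for clarity; the project itself implements this logic in VHDL on the FPGA, and the 30 cm avoidance threshold is an illustrative assumption, not a value from the paper). The sensor emits a ping and measures the echo round-trip time; the one-way distance is half the round trip at the speed of sound:

```python
# Hedged sketch of the echo-time-to-distance conversion behind an
# ultrasonic obstacle detector.

SPEED_OF_SOUND_M_PER_S = 343.0  # in dry air at roughly 20 degrees C

def echo_time_to_distance_m(echo_time_s):
    """Convert a measured echo round-trip time to a one-way distance."""
    return SPEED_OF_SOUND_M_PER_S * echo_time_s / 2.0

def is_obstacle(echo_time_s, threshold_m=0.30):
    """Flag an obstacle when it is closer than the avoidance threshold."""
    return echo_time_to_distance_m(echo_time_s) < threshold_m

d = echo_time_to_distance_m(0.001)   # 1 ms round trip
print(round(d, 4))                   # 0.1715 m
print(is_obstacle(0.001))            # True (closer than 0.30 m)
```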
techniques for doing this: Mean of Maximum, Centre of Area, and Centroid of the Largest Area (Yen & Pfluger 1995). The Mean of Maximum (MOM) method returns the average of those values sharing the maximum membership value for that set. The Centre of Area (COA) method returns the centre of gravity of the entire fuzzy command. The Centroid of Largest Area (CLA) method (developed by Yen & Pfluger (1995)) first divides a multiple-peak fuzzy command into several fuzzy subsets; COA is then applied to the subset with the largest area. The COA and CLA methods are shown in an example in Figure 2.13. As shown, COA selects a direction that sits between the two peaks in this example. This is very undesirable in the case of obstacle avoidance: rather than selecting just one of the two options (left or right) for avoiding the obstacle, the robot chooses to "compromise" between the two options and drives forwards, most likely resulting in a collision. The MOM approach will select a desirable outcome provided there is only one local maximum. If the left and right peaks are equal, however, it will respond in much the same way as COA. A major disadvantage of MOM is that it does not use all of the information contained in the fuzzy command, which makes it difficult to produce behaviour that changes smoothly over time. The CLA method is the most popular. In this case it selects the fuzzy subset on the right, as it has the largest area, and returns a result guiding the robot away from the obstacle. It is able to produce behaviour that changes smoothly over time because of its use of centre of gravity.
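The three methods can be sketched on a discretized fuzzy command, represented here as a list of (steering angle, membership) pairs (the two-peak shape below is a made-up example mimicking the left/right obstacle-avoidance scenario in the text, not the data of Figure 2.13):

```python
# Sketch of the three defuzzification techniques on a discretized
# two-peak fuzzy steering command.

def mom(cmd):
    """Mean of Maximum: average of the angles sharing the peak membership."""
    peak = max(m for _, m in cmd)
    vals = [v for v, m in cmd if m == peak]
    return sum(vals) / len(vals)

def coa(cmd):
    """Centre of Area: centre of gravity of the whole fuzzy command."""
    return sum(v * m for v, m in cmd) / sum(m for _, m in cmd)

def cla(cmd):
    """Centroid of Largest Area: COA of the largest contiguous subset."""
    subsets, cur = [], []
    for v, m in cmd:
        if m > 0:
            cur.append((v, m))
        elif cur:                 # a zero-membership gap closes a subset
            subsets.append(cur)
            cur = []
    if cur:
        subsets.append(cur)
    largest = max(subsets, key=lambda s: sum(m for _, m in s))
    return coa(largest)

# Two peaks: "turn left" (around -30) and a larger "turn right" peak.
cmd = [(-40, 0.0), (-30, 0.8), (-20, 0.0),
       (0, 0.0),
       (20, 0.0), (25, 0.8), (30, 0.8), (35, 0.0)]
print(coa(cmd))  # ~8.3: between the peaks, i.e. toward the obstacle
print(mom(cmd))  # ~8.3: equal peaks, so MOM compromises much like COA
print(cla(cmd))  # 27.5: centroid of the larger (right) subset only
```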
We have seen various techniques addressing monocular vision, and most of them rely on feature detection and matching. This leads research in two directions: comparing the current image against a database of reference images, or performing feature detection and matching on each frame using the previous frame as the reference. Analyzing these one by one, we can conclude the following. In the first approach, keeping reference images in a database, the method needs a large database of images to cover the different kinds of obstacles in the environment; for example, a tree in the database will probably not match a tree in a real-world scene, so making the system realistic would in the limit require an unbounded number of database images. In the second approach, where feature detection and matching are performed on the previous and current frames, the method requires considerable computational power and an advanced CPU to process the information quickly enough for the vehicle to react. Calculating features on each and every
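The second, frame-to-frame approach can be illustrated with a toy matcher (a pure-Python sketch; real systems use descriptors such as ORB or SIFT over full images, which is exactly the per-frame cost the text says demands significant computation). A small patch from the previous frame is matched against the current frame by sum of squared differences (SSD):

```python
# Toy sketch of frame-to-frame matching: locate a patch from the
# previous frame inside (one row of) the current frame by SSD.

def ssd(a, b):
    """Sum of squared differences between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_patch(prev_patch, cur_frame, patch_w):
    """Slide the patch over the current frame; return the best column."""
    best_col, best_cost = 0, float("inf")
    for col in range(len(cur_frame) - patch_w + 1):
        cost = ssd(prev_patch, cur_frame[col:col + patch_w])
        if cost < best_cost:
            best_col, best_cost = col, cost
    return best_col

prev_patch = [10, 50, 10]                  # feature seen in the previous frame
cur_frame  = [0, 0, 0, 10, 50, 10, 0, 0]   # same feature, shifted right
print(match_patch(prev_patch, cur_frame, 3))  # 3: feature found at column 3
```

Even this one-dimensional toy is O(frame width x patch width) per feature; doing it densely over full 2D frames at video rate is what drives the CPU requirement mentioned above.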
Fuzzy logic based approaches have recently been applied successfully to control nonlinear, uncertain and complex systems. Fuzzy control is well suited to autonomous mobile robots, which have complex control architectures. The design of autonomous robots is difficult because of the uncertain signals coming from unknown environments and from the sensors. An autonomous robot cannot be developed using only a microcontroller and raw input signals from the ultrasonic sensors; thus, a behaviour-based control architecture, with behaviours such as obstacle avoidance and wall following, will be used to control the robot.
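A behaviour-based architecture of the kind described can be sketched as follows (the priority scheme, thresholds, and crisp rules below are illustrative assumptions, not the paper's fuzzy controller): the obstacle avoidance behaviour, when triggered, suppresses the wall following behaviour.

```python
# Hedged sketch of behaviour-based arbitration: obstacle avoidance
# has priority over wall following.

def obstacle_avoidance(front_dist_cm):
    """Active only when something is close ahead; returns a turn command."""
    if front_dist_cm < 25:
        return "turn_left"
    return None  # behaviour not triggered

def wall_following(right_dist_cm, setpoint_cm=15):
    """Keep a fixed lateral distance to the wall on the right."""
    if right_dist_cm > setpoint_cm + 3:
        return "steer_right"
    if right_dist_cm < setpoint_cm - 3:
        return "steer_left"
    return "forward"

def arbitrate(front_dist_cm, right_dist_cm):
    """Higher-priority obstacle avoidance suppresses wall following."""
    return obstacle_avoidance(front_dist_cm) or wall_following(right_dist_cm)

print(arbitrate(100, 14))  # forward   (no obstacle, wall distance is fine)
print(arbitrate(10, 14))   # turn_left (obstacle avoidance takes over)
```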
Laser scanning is a navigation tool that has emerged with the development of laser technology; it is applicable to vehicle navigation, field exploration and other work under poor lighting. In recent years, many research results have been produced in the fields of obstacle avoidance, autonomous driving and map construction. Lindström and Eklundh  developed an algorithm that identifies range readings in areas earlier detected as free, without incorporating grid maps, which are inherently memory- and computation-intensive. Liu et al.  analyzed three typical obstacles from different categories (active and inactive obstacles) based on frequency analysis. Kondaxakis et al.  presented an approach that addresses compensation of the robot's relative motion, feature extraction, measurement clustering, data association and estimation of the targets' state vectors, exploiting a mix of probabilistic and deterministic techniques. Arnay et al.  developed a method that combines a low-cost 16-beam solid-state laser sensor with a conventional video camera for obstacle detection. Lundell et al.  presented a method to estimate distance directly from raw 2D laser data.
"translate" the readings of sonar sensors into occupancy values of each grid cell for building metric maps. Meng and Kak proposed a NEURO-NAV system for mobile robot navigation . In the NEURO-NAV, in order to drive the robot to move in the middle of the hallway, a feedforward neural network, which is driven by the cells of the Hough transformation of the corridor guidelines in the camera image, is used to obtain the approximate relative angles between the heading direction of the robot and the orientation of the hallway . self-organizing Kohonen neural networks are well known for their capability to carry out classification, recognition, data compression and association in an unsupervised manner . In , self-organizing Kohonen neural networks are applied to recognize landmarks using the measurements from laser sensors in order to provide coordinates of the landmarks for triangulation.
ABSTRACT: In order to allow hospital staff to spend a larger part of their time with actual patients, this paper focuses on implementing an autonomous robot to take over some of their work. The robot navigates a hospital environment to perform some basic functions while avoiding obstacles and humans. The solution this paper presents is a multi-sensor system able to navigate through an unknown environment without hitting any obstacles whilst maintaining a given heading. Analog distance sensors, a motion sensor for the vehicle's orientation and velocity, and a camera for location identification have been implemented. The LabVIEW program enables the mobile robot to travel from a starting point to a user-defined destination point, avoiding undefined obstacles on its route. This paper follows the robotics "Sense, Think and Act" approach: the robot senses random obstacles on its path via an ultrasonic sensor, makes a decision based on a non-colliding threshold distance in order to execute the collision avoidance routine, and then returns to the process of reaching the predefined destination point. A PID controller is implemented to constantly maintain a threshold distance to the obstacle, controlling the speed of the vehicle to avoid collisions.
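The PID distance-keeping loop can be sketched as follows (the gains, control period, and the trivial first-order vehicle model are illustrative assumptions, not values from the paper): the controller adjusts the speed so that the measured distance to the obstacle settles at the threshold.

```python
# Hedged sketch of a PID loop that regulates distance to an obstacle.

KP, KI, KD = 1.2, 0.5, 0.05   # assumed gains
DT = 0.1                      # control period in seconds

def pid_step(error, integral, prev_error):
    """One PID update; returns (control output, updated integral)."""
    integral += error * DT
    derivative = (error - prev_error) / DT
    return KP * error + KI * integral + KD * derivative, integral

distance, threshold = 2.0, 0.5     # metres; start 2 m from the obstacle
integral, prev_err = 0.0, 0.0
for _ in range(300):
    err = distance - threshold     # positive -> room to keep moving forward
    speed, integral = pid_step(err, integral, prev_err)
    prev_err = err
    distance -= speed * DT         # toy model: moving forward closes the gap
print(round(distance, 2))          # settles near the 0.5 m threshold
```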
The application and complexity of mobile robots are steadily growing. They are gradually making their way into real-world settings in fields such as the military, medicine, space exploration and everyday housekeeping . Motion, a vital characteristic of mobile robots in obstacle avoidance and path recognition, has a major impact on how people react to and perceive an autonomous system; it enables an autonomous robot to navigate from one place to another without human intervention. Computer vision and range sensors are the primary object detection methods used in mobile robots. Computer vision is a more rigorous and expensive detection technique than range sensing, and most commercial autonomous robots therefore use range sensors to detect obstacles. The use of radar, infrared (IR) and ultrasonic sensors for developing obstacle detection systems started as early as the 1980s . After testing these technologies, it was concluded that radar was the most suitable, as the other two options were prone to environmental constraints such as rain, ice, snow, dust and dirt; the radar approach was also very cost-effective, both for the present and the future.  presented a method using a single charge-coupled device (CCD) camera in conjunction with a spherically shaped curved reflector
In this step, an image Region Of Interest (ROI) with a diagonal 62° Field of View (FOV) is extracted and processed instead of the whole image, as shown in Figure 4. The selection of the 62° diagonal ROI is based on experimental results: it was found that any object detected outside this ROI poses no danger to the UAV, and only objects detected within the 62° diagonal ROI need to be considered obstacles. Furthermore, processing the 62° diagonal ROI instead of the whole 92° diagonal image significantly reduces computation time. The tests performed proved the viability of this approach, and the results are discussed in the following sections.
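The crop can be sketched under a pinhole-projection assumption (the paper only states the 62°-of-92° diagonal reduction; the scaling rule and the frame size below are assumptions for illustration): a 62° diagonal FOV inside a 92° image spans tan(31°)/tan(46°) of the image diagonal, and the ROI is taken as a centred crop of that scale.

```python
# Hedged sketch of extracting a reduced-FOV centred ROI from a frame,
# assuming a pinhole camera model.

import math

def centred_roi(width, height, full_fov_deg=92.0, roi_fov_deg=62.0):
    """Return (x, y, w, h) of the centred ROI covering the reduced FOV."""
    scale = (math.tan(math.radians(roi_fov_deg / 2))
             / math.tan(math.radians(full_fov_deg / 2)))
    w, h = int(width * scale), int(height * scale)
    return ((width - w) // 2, (height - h) // 2, w, h)

print(centred_roi(640, 480))  # roughly the central 58% of the frame
```

Since the ROI covers about 58% of each dimension, only about a third of the pixels remain to be processed, which is consistent with the significant reduction in computation time reported above.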
The main objective of this autonomous car is to detect and avoid obstacles. The car is controlled through IR sensors via a DC motor and a relay circuit board, drawing power from both a battery and a solar panel: whenever the 12 V battery is discharged, the solar panel converts energy from the Sun and supplies it to the battery.
The navigation controller consists primarily of a set of three neural subnetworks (see Figs. 4 and 5). The first two are responsible for the most important behaviours of an intelligent vehicle: locating the target and obstacle avoidance. Both controllers are classifiers trained with supervised backpropagation. The third neural network acts as a supervisor and is responsible for the final decision based on the outputs of the first two networks. This supervisor network is trained with a variant of the associative reward-penalty learning algorithm. Due to its hierarchical structure, system complexity is reduced, resulting in a fast response time.
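The hierarchical decision scheme can be sketched with tiny hand-written stand-ins for the trained networks (the scoring functions and the fixed blending weight below are assumptions for illustration; in the actual system the two classifiers are trained by backpropagation and the supervisor learns its combination rule by associative reward-penalty):

```python
# Hedged sketch of two behaviour classifiers and a supervising
# decision layer that blends their outputs.

def target_seeking_net(bearing_to_goal):
    """Stand-in for classifier 1: scores for (left, straight, right)."""
    return [max(0.0, -bearing_to_goal), 1.0 - abs(bearing_to_goal),
            max(0.0, bearing_to_goal)]

def obstacle_net(left_dist, right_dist):
    """Stand-in for classifier 2: steer toward the freer side when blocked."""
    danger = max(0.0, 1.0 - min(left_dist, right_dist))
    side = [danger, 0.0] if left_dist > right_dist else [0.0, danger]
    return [side[0], 1.0 - danger, side[1]]

def supervisor(goal_scores, avoid_scores, danger_weight=0.7):
    """Stand-in for the supervisor: a fixed blend, weighted toward safety."""
    blended = [(1 - danger_weight) * g + danger_weight * a
               for g, a in zip(goal_scores, avoid_scores)]
    return ["left", "straight", "right"][blended.index(max(blended))]

# Goal slightly to the right, but the right side is blocked: avoidance wins.
print(supervisor(target_seeking_net(0.3), obstacle_net(2.0, 0.2)))  # left
```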
Abstract — Unexpected events and unmodelled properties of the robot environment are some of the challenges presented by the situated robotics research field. Collision avoidance is a basic safety requirement, and this paper proposes a probabilistic approach called Bayesian Programming, which aims to deal with the uncertainty, imprecision and incompleteness of the information handled in solving the obstacle avoidance problem. Examples illustrate the process of embodying the programmer's preliminary knowledge in a Bayesian program, and experimental results from implementing these examples on an electric vehicle are described and commented on. A video illustration of the developed experiments can be found at http://www.inrialpes.fr/sharp/pub/laplace
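The kind of inference such a program rests on can be sketched with a single Bayes update (the sensor model probabilities below are made-up illustrative values, not the paper's models): the belief that an obstacle is present is updated from a noisy proximity detector.

```python
# Hedged sketch of Bayesian belief updating for obstacle presence
# from a noisy binary detector.

P_DETECT_GIVEN_OBST = 0.9    # sensor fires when an obstacle is there
P_DETECT_GIVEN_FREE = 0.2    # false-alarm rate

def bayes_update(prior, detected):
    """Posterior P(obstacle | reading) by Bayes' rule."""
    if detected:
        num = P_DETECT_GIVEN_OBST * prior
        den = num + P_DETECT_GIVEN_FREE * (1 - prior)
    else:
        num = (1 - P_DETECT_GIVEN_OBST) * prior
        den = num + (1 - P_DETECT_GIVEN_FREE) * (1 - prior)
    return num / den

belief = 0.5                       # no prior knowledge
for reading in [True, True, False]:
    belief = bayes_update(belief, reading)
print(round(belief, 3))            # ~0.72: two hits then a miss
```

Uncertain, imprecise readings thus never force a hard yes/no decision; the program carries a graded belief that downstream control can act on.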
The principle of this algorithm is the same as Dijkstra's algorithm, with only minor differences, and it is also easy to implement. Compare Figure 3.3a) with Figure 3.1a). As can be seen immediately, the path found by best-first search is longer than the path found with Dijkstra's algorithm. This is because each node is scored by its distance from the goal instead of its distance to the starting point; the closer a node is to the goal, the lower its score. The algorithm thus tends to head straight for the goal, and when an obstacle is found the path is redirected around this obstacle. With Dijkstra's algorithm, all nodes at the same radius from the starting point have the same score, and thus a better path is found.
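The contrast can be sketched with a generic textbook implementation (not the code behind the figures); the only difference between the two searches is the priority used in the queue. Best-first examines far fewer cells, and its path can never be shorter than Dijkstra's; on maps like the one in Figure 3.3a) it comes out longer.

```python
# Dijkstra vs. greedy best-first on a small grid; the searches differ
# only in the priority pushed onto the heap.

import heapq

GRID = ["........",
        "...#....",
        "...#....",
        "S..#...G",
        "...#....",
        "........"]

def cell(ch):
    return next((r, c) for r, row in enumerate(GRID)
                for c, x in enumerate(row) if x == ch)

def search(greedy):
    """Return (path_length, nodes_expanded)."""
    start, goal = cell("S"), cell("G")
    frontier, seen, expanded = [(0, 0, start)], {start}, 0
    while frontier:
        _, steps, (r, c) = heapq.heappop(frontier)
        expanded += 1
        if (r, c) == goal:
            return steps, expanded
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                # Dijkstra: cost from start; best-first: distance to goal.
                prio = (abs(nr - goal[0]) + abs(nc - goal[1]) if greedy
                        else steps + 1)
                heapq.heappush(frontier, (prio, steps + 1, (nr, nc)))
    return None, expanded

print(search(greedy=False))  # Dijkstra: shortest path, many nodes expanded
print(search(greedy=True))   # best-first: same-or-longer path, fewer expansions
```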
In order to reduce a UAV's dependence on human operators, autonomous control is required. In navigating a desired path, an autonomous UAV must take into account static and dynamic obstacles such as buildings, terrain, other vehicles, and people. Successful avoidance of such obstacles depends on accurate obstacle detection logic . There are a number of sensing schemes that can provide the needed environment data, the most commonly implemented of which are computer vision, LIDAR  and sonar  schemes. Ground-based robots have matured enough to use suites of combined sensor types , though their relatively constant proximity to the ground enables novel approaches not available to air-based platforms . Instead, most research on UAV obstacle detection in recent years has taken advantage of the relatively low cost and light payload weight of vision-based systems . These systems are also flexible in their design: different numbers of sensors may be used, with different obstacle detection algorithms for each situation. Most commonly, either one or two cameras are used to provide environment data. In the case of a single camera, features such as edges or corners are identified and tracked from one sensor frame to the next in a process known as optical flow. In dual-camera implementations, the disparity of pixels between the left and right sensors is used to compute a depth map . It has also been shown that a combination of these methods is possible, using cameras of different frame rates to perform both stereo correspondence and feature tracking . These computer vision approaches are made easier by the availability of software packages built for implementing computer vision algorithms. One such package, OpenCV, was selected for use in this project due to the completeness of its feature set and its implementation in both C++ and Python .
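The dual-camera depth computation mentioned above follows standard pinhole stereo geometry, depth = focal_length x baseline / disparity (the focal length and baseline values below are made-up examples, not this project's calibration):

```python
# Hedged sketch of converting stereo pixel disparity to metric depth.

FOCAL_LENGTH_PX = 600.0   # assumed focal length in pixels
BASELINE_M = 0.125        # assumed distance between the two cameras

def depth_from_disparity(disparity_px):
    """Depth of a point from its pixel disparity between left and right."""
    if disparity_px <= 0:
        return float("inf")   # zero disparity: point at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(25))   # 3.0 m: a nearby obstacle
print(depth_from_disparity(7.5))  # 10.0 m: farther away
```

The inverse relationship is why stereo depth resolution degrades quickly with range: distant obstacles produce only fractions of a pixel of disparity.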
A field experiment was performed in which a group of people were instructed to perform obstacle avoidance tasks at two speed levels, normal and high. The trajectories of the participants were extracted from the video recordings with the following aims: (i) to find the impact of total speed, (ii) to observe the impact of speed on the movement direction, and (iii) to find the impact of speed on the lateral direction. The results of the experiments could be used to enhance current pedestrian simulation models.
Abstract — Design highlights of a ‘Three-wheeled Autonomous Navigational Robot’ are presented in this paper. An efficient modular architecture is proposed for ease of adding various modules to the robot. Obstacle detection, pattern recognition and obstacle avoidance are the key aspects of the design. The robot has built-in intelligence that enables it to recognize and pick up balls of a particular colour and ignore other objects in its path. A single-board computer mounted on the robot acts as the central controller; it communicates with ultrasonic sensors and motors through multiple microcontrollers and controls the entire motion of the unit. As part of the robot design, a modified H-bridge circuit for driving DC motors efficiently is proposed in this paper.
The visual tracking system installed on the follower enables it to follow the leader robustly in a variety of environments. Despite this advantage, vision systems have quite a number of disadvantages. One of the main ones is that they are costly to build, owing to the high computational power required to process the raw data obtained from the vision sensor. This makes vision-based tracking systems less preferable.
RACCOON, developed at Carnegie Mellon University, is a vision-based system that tracks car taillights at night, as described in . A prototype was built and integrated with the RACCOON system, enabling the autonomous vehicle to chase the leading car effectively in low-light conditions. According to , this project was inspired by the following reason:
The Internet of Vehicles (IoV) is a new field of research that studies remote agents (people, vehicles, robots) as they interact and collaborate to sense the environment, process the data, propagate the results and, more generally, share resources. However, there are several untrusted zones (cloud services) where private data may be exposed to hacking. The driving habits of individuals differ, and no existing system monitors these conditions. Here we provide secure, privacy-preserving access control, which guarantees that any member of a group can anonymously use the cloud resources. A Driver Behaviour Reporting System works by collecting and sending actual, real-time data directly from a nearby car whenever it is being driven, so the driver stays aware and informed and can reinforce responsible driving habits or immediately address areas of concern.