Vision Based Obstacle Avoidance For Mobile Robot Using Optical Flow Process

Abstract: This paper discusses the development and implementation of vision-based obstacle avoidance for a mobile robot using an optical flow process. The project comprises several stages: image pre-processing, the optical flow process, filtering, object distance measurement, and obstacle avoidance. The optical flow process consists of image resizing, parameter setting, color-to-grayscale conversion, the Horn-Schunck method, and conversion of the grayscale image into a binary image. Filtering is then performed with a smoothing filter, after which the image center is defined. The maximum object distance from the camera was set to 20 cm. The robot's decision to move left or right is therefore based on the direction of the optical flow. The avoidance algorithm allows the mobile robot to avoid obstacles of different shapes, whether square or rectangular. A user-friendly graphical user interface (GUI) was used to monitor the robot's activity while the system was running.
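As a concrete illustration of the pipeline described in this abstract, the following is a minimal Python/OpenCV sketch, not the authors' code. OpenCV does not ship a Horn-Schunck implementation, so Farneback dense optical flow stands in for it, and the image size, flow threshold, and function names are illustrative assumptions.

```python
# Minimal sketch of the described pipeline (not the authors' code).
# Farneback dense optical flow stands in for the Horn-Schunck method.
import cv2
import numpy as np

def steer_from_flow(prev_bgr, curr_bgr, size=(160, 120), mag_thresh=2.0):
    # Image pre-processing: resize and convert color to grayscale.
    prev = cv2.cvtColor(cv2.resize(prev_bgr, size), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.resize(curr_bgr, size), cv2.COLOR_BGR2GRAY)

    # Dense optical flow between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    # Threshold the flow magnitude into a binary mask and smooth it.
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.GaussianBlur(mask, (5, 5), 0)

    # Compare flow on either side of the image center: turn away from
    # the side with the stronger apparent motion (the nearer obstacle).
    mid = size[0] // 2
    left, right = mask[:, :mid].sum(), mask[:, mid:].sum()
    return "turn_right" if left > right else "turn_left"
```

The left/right comparison mirrors the abstract's rule of steering away from the side with the stronger flow; the 20 cm range check would come from a separate distance estimate and is not modelled here.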

Vision based obstacle avoidance for a small, Low cost robot

It would be possible to implement this algorithm with a less powerful CPU. Since only 10% of the CPU time is spent on processing data, a CPU running at 20 MHz would be sufficient. So instead of the Gumstix computer we could use a micro-controller such as a Brainstem or a BASIC Stamp for both image processing and motion control without any decrease in performance. The memory usage is nearly 1 MB, which is rather large; we did not try to optimise memory usage while implementing the code, so improvements could be made. We plan to implement this control algorithm on a micro-controller instead of the Gumstix. This change will reduce the cost and power usage of the robot by a large amount. To the best of our knowledge, there has not been a mobile robot that can perform reliable obstacle avoidance in unconstrained environments using such low-resolution vision and such a slow microprocessor. A robot with only an obstacle avoidance behaviour might not be very useful apart from applications such as exploration or surveillance. However, given that our algorithm is very computationally efficient and requires low-resolution images, it can be incorporated into a more complex system at a small price and leaves plenty of resources for higher-level behaviours.
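For readers checking the numbers, a back-of-envelope version of the CPU argument above, assuming the roughly 200 MHz clock that the quoted 10% figure and the 20 MHz target jointly imply (the excerpt does not state the Gumstix clock explicitly):

```python
# Back-of-envelope check of the CPU claim (assumed figures).
gumstix_clock_mhz = 200      # implied by "10% of CPU time" vs "20 MHz sufficient"
cpu_fraction_used = 0.10     # fraction of CPU time spent on image processing
required_mhz = gumstix_clock_mhz * cpu_fraction_used
print(required_mhz)          # 20.0 MHz -> a small micro-controller could cope
```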

Obstacle Avoidance In Indoor Environment Using Sensor Fusion For Mobile Robot

The vision-based obstacle avoidance technique is an appearance-based technique which classifies the input colour image from a monocular camera into defined classes such as ground, walls and obstacles. The robot is able to use the information from these images to localize itself in the environment. Vision sensors provide detailed information about the environment which can be used for the navigation of mobile robots to follow a path and avoid obstacles [14].
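A minimal sketch of what such an appearance-based classifier can look like, assuming a simple colour-range model of the ground; the HSV thresholds and function names are illustrative and not taken from the cited work.

```python
# Illustrative appearance-based ground/obstacle labelling (made-up thresholds).
import cv2
import numpy as np

def label_ground(bgr, hsv_lo=(0, 0, 80), hsv_hi=(40, 80, 220)):
    """Label each pixel as ground or obstacle by its colour."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ground = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    obstacle = cv2.bitwise_not(ground)          # walls and obstacles
    # Per-column free space: rows of ground seen from the bottom of the image,
    # which a planner can use to steer toward the widest open direction.
    flipped = ground[::-1, :] > 0
    free_rows = np.argmax(~flipped, axis=0)
    free_rows[flipped.all(axis=0)] = ground.shape[0]
    return ground, obstacle, free_rows
```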

Obstacle Avoidance Based on Ultrasonic Sensors and Optical Encoders

Abstract— Mobile robot applications have attracted interest in various fields, and the navigation method of a mobile robot depends mainly on the application for which the robot is used. A robot's safe movement should be considered in any navigation method, to avoid collisions with other objects that may introduce errors into the navigation path or prevent the robot from reaching its targets properly.

Application of optical flow sensors for dead reckoning, heading reference, obstacle detection, and obstacle avoidance

Dead reckoning is the estimation of position and can also be referred to as self-localization or position tracking. Odometry is the estimation of speed and distance. In the case of ground robots, sensors are usually attached to the wheels, and the collected data are analyzed to estimate the motion of the robot. The most popular and widely used sensors are absolute and incremental rotary optical encoders. Unfortunately, these sensors generate unbounded errors, especially when paths are not straight. Inertial sensors are also used for dead-reckoning and odometry applications, but they suffer from the same type of errors. One factor behind the lack of precision of rotary encoders is that when a wheel-based robot slips, the wheels do not spin, so the sensors collect no data. This is a major handicap for indoor and outdoor robots; therefore, sensors based solely on optical flow computation must be adopted in robotics to solve the problems of inaccuracy and unreliability of traditional sensors. Among the optical flow sensors, the use of an array of high-speed optical flow mice has been proposed as a solution for dead reckoning and odometry [9]–[10].
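A minimal sketch of the dead-reckoning integration that such optical flow sensors feed, assuming each step yields a body-frame displacement and a heading increment; the real multi-mouse arrays in [9]–[10] also recover rotation from the sensor geometry, which is not modelled here.

```python
import math

class DeadReckoning:
    """Integrate planar displacement reported by an optical-flow sensor.

    Assumes the sensor returns per-step (dx, dy) in metres in the robot frame
    and that the heading increment comes from a separate reference (e.g. a gyro).
    """
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def update(self, dx_robot, dy_robot, dtheta):
        self.theta += dtheta
        # Rotate the body-frame displacement into the world frame.
        c, s = math.cos(self.theta), math.sin(self.theta)
        self.x += c * dx_robot - s * dy_robot
        self.y += s * dx_robot + c * dy_robot
        return self.x, self.y, self.theta
```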

Optimisation-based verification process of obstacle avoidance systems for unicycle-like mobile robots

In this paper, a safety analysis of collision avoidance systems is presented. The optimisation-based verification process method is applied to the verification of collision avoidance algorithms for a unicycle robot. First, the kinematic and dynamic equations of the unicycle robot are introduced and a controller is presented based on these equations. An inner-outer-loop control architecture is used, where the inner-loop controller is a proportional speed controller. A local planner in the outer loop is developed using the artificial potential field method. An optimisation-based approach is then developed to find the worst cases, defined by the minimum distance to the obstacle in the presence of all the described variations. Mass and inertia variations are considered in this case study. The local optimisation method does not give a unique solution: the optimal solutions do not converge to the global minimum, and different worst cases are identified when the optimisation starts from different initial conditions. Therefore, local optimisation is not suitable for verification of collision avoidance algorithms in this case study. As demonstrated by Fig. 8 and Fig. 9, this is a non-convex nonlinear optimisation problem and it is possible to miss the worst case. To overcome this problem, global optimisation algorithms are required.
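The verification idea can be sketched as follows: treat the minimum clearance to the obstacle as a cost over the uncertain parameters (mass and inertia) and minimise it from many starting points. The simulator below is a deliberately multi-modal toy surrogate and the parameter bounds are made-up values, but the multi-start behaviour illustrates why a single local optimisation can miss the true worst case.

```python
# Sketch of optimisation-based worst-case search (toy surrogate model).
import numpy as np
from scipy.optimize import minimize

def min_clearance(params):
    """Stand-in for the closed-loop simulation: minimum distance to the obstacle
    for a given (mass, inertia). Deliberately multi-modal to mimic the
    non-convex landscape described above."""
    m, j = params
    return 0.5 + 0.3 * np.sin(3.0 * m) * np.cos(4.0 * j) + 0.05 * (m - 10.0) ** 2

bounds = [(8.0, 12.0), (0.8, 1.2)]        # assumed mass/inertia variation ranges

# Multi-start local optimisation: different starts converge to different
# local minima, which is exactly the failure mode described in the excerpt.
rng = np.random.default_rng(0)
starts = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(20, 2))
results = [minimize(min_clearance, x0, bounds=bounds, method="L-BFGS-B")
           for x0 in starts]
worst = min(results, key=lambda r: r.fun)
print("worst-case clearance:", worst.fun, "at", worst.x)
```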

Efficient Optical flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

Our main contribution is that the presented method provides both velocity and distance estimates while still being computationally efficient enough to run close to the frame rate on a very limited embedded processor. As such, the method enables unstable MAVs such as tiny quadcopters to perform fully autonomous flights in unknown environments. The EdgeFlow and EdgeStereo methods are explained in more detail in Section II. Off-line results for velocity estimation on a set of images are shown in Section III. From there, the algorithm is embedded on the lightweight stereo camera and placed on a 40 g pocket drone for velocity estimation (Section IV-B). Finally, the velocity estimate is used together with EdgeStereo-based obstacle detection to perform fully autonomous navigation in an environment with obstacles (Section IV-C). This is followed by some concluding remarks.
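A rough sketch of the histogram-compression idea behind EdgeFlow, not the published algorithm itself: each frame is reduced to a per-column edge-intensity histogram and the horizontal shift between consecutive histograms is found by brute-force matching; converting that pixel shift to a metric velocity additionally needs the camera height, focal length, and frame rate.

```python
import cv2
import numpy as np

def column_edge_histogram(gray):
    # Compress the frame into one value per column: summed gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    return np.abs(gx).sum(axis=0)

def horizontal_pixel_shift(prev_gray, curr_gray, max_shift=20):
    """Estimate the global horizontal image shift (in pixels) between two frames
    by matching their column edge histograms (an EdgeFlow-style reduction)."""
    h_prev = column_edge_histogram(prev_gray)
    h_curr = column_edge_histogram(curr_gray)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = h_prev[max(0, s): len(h_prev) + min(0, s)]
        b = h_curr[max(0, -s): len(h_curr) + min(0, -s)]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best   # scale with camera height, focal length and frame rate for m/s
```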

Implementation of Robot Operating System in Beaglebone Black based Mobile Robot for Obstacle Avoidance Application

Abstract— The Robot Operating System (ROS) is a collection of tools, libraries, and conventions that aims to simplify the creation of complex and advanced robotic systems. Its standard framework can be shared with other robotic systems built on a similar platform, which makes it suitable as an educational tool in robotics. However, the robot platforms currently available on the market are expensive and encapsulated, so the development of an open-source robot platform is encouraged. This research was therefore carried out to design and develop a ROS-based obstacle avoidance system for an existing differential-wheeled mobile robot. ROS was installed under Ubuntu 14.04 on a BeagleBone Black embedded computer. ROS was then implemented together with the obstacle avoidance system to establish communication between program nodes. The mobile robot was designed and developed to examine the obstacle avoidance application. A debugging process was carried out to check the obstacle avoidance application based on the communication between nodes; this process is important for examining message publishing and subscribing across all nodes. The obstacle avoidance mobile robot was successfully tested, with the communication between nodes running without any problem.
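A minimal rospy sketch of the publish/subscribe pattern described above, compatible with the ROS releases that ran on Ubuntu 14.04; the topic names, message types, and distance threshold are illustrative and not the actual node graph of this work.

```python
#!/usr/bin/env python
# Minimal rospy sketch of an obstacle-avoidance publish/subscribe pattern.
import rospy
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist

def on_range(msg, cmd_pub):
    cmd = Twist()
    if msg.data < 0.30:          # illustrative threshold: obstacle close, turn in place
        cmd.angular.z = 0.5
    else:                        # otherwise drive forward
        cmd.linear.x = 0.2
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("obstacle_avoidance")
    cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("obstacle_range", Float32, on_range, callback_args=cmd_pub)
    rospy.spin()
```

Debugging the node graph then amounts to inspecting the published and subscribed topics, for example with `rostopic echo cmd_vel`, which matches the message-checking process described in the abstract.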

Vision-based Obstacle Avoidance for Small UAVs

In order to reduce the dependence of a UAV on human operators, autonomous control is required. In navigating a desired path, an autonomous UAV must take into account static and dynamic obstacles such as buildings, terrain, other vehicles, and people. The successful avoidance of such obstacles is dependent on accurate obstacle detection logic [11]. There are a number of sensing schemes that may provide the needed environment data, the most commonly implemented of which are computer vision, LIDAR [12] and sonar [13]. Ground-based robots have developed far enough to use suites of combined sensor types [14], though their relatively constant proximity to the ground enables the use of novel approaches not available to air-based platforms [12] [15]. Instead, most research in UAV obstacle detection in recent years has taken advantage of the relatively low cost and light payload weight of vision-based systems [16]. These systems are also flexible in their design: different numbers of sensors may be used, with different obstacle detection algorithms in each situation. Most commonly, either one or two cameras are used to provide environment data. In the case of a single camera, features such as edges or corners are identified and tracked from one sensor frame to the next in a process known as optical flow. In dual-camera implementations, the disparity of pixels between the left and right sensors is used to compute a depth mapping [17] [18]. It has also been shown that a combination of these methods is possible, using cameras with different frame rates to perform both stereo correspondence and feature tracking [19]. These computer vision approaches are made easier by the availability of software packages built for the implementation of computer vision algorithms. One such package, OpenCV, was selected for use in this project due to the completeness of its feature set and its implementation in both C++ and Python [20].
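For the dual-camera case mentioned above, a short OpenCV sketch of turning a rectified stereo pair into a depth map; the focal length and baseline are placeholder calibration values, not figures from the cited project.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Depth map from a rectified stereo pair using OpenCV block matching.
    focal_px and baseline_m are placeholder calibration values."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # depth = f * B / d
    return depth
```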

The Integration of Fuzzy Logic System for Obstacle Avoidance Behavior of Mobile Robot

In this section we compare and evaluate the performance of both the Mamdani and the Sugeno FLC in terms of the path travelled, the smoothness, and the efficiency of the FLC for single and multiple robots. For the integration of the FLS into the obstacle avoidance behaviour of the mobile robot, the simulation begins upon a successful remote application program interface (API) configuration between V-REP and MATLAB. If the distance between the target and the robot position is more than 0.1 m, the robot is considered far from the target location, and it starts moving towards the target under three conditions: the first (distance greater than 0.1 m) keeps the robot moving towards the target; the second is that if the angle is larger than 0.1 rad the robot rotates left; and the third is that if the angle is smaller than 0.1 rad the robot rotates right. While executing these three conditions, if any of the robot's proximity sensors detects an obstacle, the FLC (either Mamdani or Sugeno) controls the left and right wheel velocities of the robot based on the 15 rules defined. This process repeats until the distance between the robot and the target location is no more than 0.1 m.
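The control logic described above can be summarised in a short sketch; the angle thresholds are interpreted symmetrically (±0.1 rad), the obstacle range and wheel speeds are assumed values, and the fuzzy controller itself (the 15 Mamdani/Sugeno rules) is only passed in as a stub.

```python
import math

GOAL_TOL = 0.10       # metres
ANGLE_TOL = 0.10      # radians

def navigate_step(robot_xy, robot_theta, goal_xy, proximity, flc, obstacle_range=0.3):
    """One control step mirroring the conditions described above.
    Returns (left_wheel_speed, right_wheel_speed); flc is the fuzzy controller."""
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    if math.hypot(dx, dy) <= GOAL_TOL:
        return 0.0, 0.0                          # target reached: stop
    if any(d < obstacle_range for d in proximity):
        return flc(proximity)                    # FLC (Mamdani or Sugeno, 15 rules)
    e = math.atan2(dy, dx) - robot_theta
    heading_error = math.atan2(math.sin(e), math.cos(e))   # wrap to [-pi, pi]
    if heading_error > ANGLE_TOL:
        return 0.2, 0.6                          # rotate left (right wheel faster)
    if heading_error < -ANGLE_TOL:
        return 0.6, 0.2                          # rotate right (left wheel faster)
    return 0.5, 0.5                              # head straight for the target
```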

Mobile Robot Following Obstacle Avoidance And Collision

The visual tracking system installed on the follower enables it to follow the leader robustly in a variety of environments. Despite this advantage, there are quite a number of disadvantages to using a vision system. One of the main disadvantages is that it is costly to build, owing to the high computational power required to process the raw data obtained from the vision sensor. This makes a vision-based tracking system less preferable.

Vision-Based Obstacle Avoidance Strategies for MAVs Using Optical Flows in 3-D Textured Environments

Abstract: Due to payload restrictions for micro aerial vehicles (MAVs), vision-based approaches have been widely studied owing to their light weight and cost effectiveness. In particular, optical flow-based obstacle avoidance has proven to be one of the most efficient methods in terms of obstacle avoidance capability and computational load; however, existing approaches do not consider complex 3-D environments. In addition, most approaches are unable to deal with situations where there are wall-like frontal obstacles, and the algorithms that do consider them cause jitter or unnecessary motion. To address these limitations, this paper proposes a vision-based obstacle avoidance algorithm for MAVs using optical flow in 3-D textured environments. The image obtained from a monocular camera is first split into two half planes, horizontally and vertically. The desired heading direction and climb rate are then determined by comparing the sums of the optical flow between the horizontal and the vertical half planes, respectively, for obstacle avoidance in 3-D environments. The proposed approach is also capable of avoiding wall-like frontal obstacles by considering the divergence of the optical flow at the focus of expansion, and it navigates to the goal position using a sigmoid weighting function. The performance of the proposed algorithm was validated through numerical simulations and indoor flight experiments in various situations.
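A schematic sketch of the decision rule described in this abstract: steer and climb away from the half plane with the larger optical flow sum, and blend avoidance with goal seeking through a sigmoid weight. The frontal-obstacle cue used here (centre flow versus mean flow) is a crude stand-in for the paper's divergence at the focus of expansion, and all signs and gains are illustrative.

```python
import numpy as np

def sigmoid(x, k=1.0):
    return 1.0 / (1.0 + np.exp(-k * x))

def avoidance_command(flow, goal_heading, goal_climb, k=5.0):
    """flow: HxWx2 dense optical flow from a monocular camera.
    Returns (heading, climb_rate) commands; gains are illustrative."""
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    left, right = mag[:, : w // 2].sum(), mag[:, w // 2:].sum()
    top, bottom = mag[: h // 2, :].sum(), mag[h // 2:, :].sum()

    # Steer/climb away from the half plane with the larger flow sum.
    avoid_heading = -1.0 if right > left else 1.0      # positive = turn left (assumed sign)
    avoid_climb = 1.0 if bottom > top else -1.0        # positive = climb

    # Crude stand-in for flow divergence at the focus of expansion:
    # strong centre flow relative to the mean suggests a wall-like frontal obstacle.
    centre = mag[h // 3: 2 * h // 3, w // 3: 2 * w // 3].mean()
    wgt = sigmoid(centre - mag.mean(), k)
    heading = wgt * avoid_heading + (1.0 - wgt) * goal_heading
    climb = wgt * avoid_climb + (1.0 - wgt) * goal_climb
    return heading, climb
```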

Path Planning and Obstacle Avoidance for Boe Bot Mobile Robot

References [6, 7] propose specific systems for exception handling. The approach described in [6] introduces the concept of a supervisor, which acts as a handler for a group of the system's agents. It defines two types of exceptions: internal exceptions, which are treated by the agent itself, and external ones, handled by the supervisor, which has global access to the agents that it supervises. Hagg [7] proposes a strategy for exception handling using sentinels. Sentinels are guardian agents which protect the multi-agent system from falling into undesirable states. They have the authority to monitor communications in order to react to faults. This approach is costly in terms of computation and communication, and it introduces a point of failure since the sentinels themselves are also subject to faults. These two works are based on the idea of a controlling agent, that is, a generic agent that controls the execution process and handles exceptions when they occur.

Monocular vision, optical blur and visual looming as a basis for mobile robot obstacle avoidance

The averaging approach to blur estimation has disadvantages, in that it is simplistic and assumes that all objects in the area are on the same plane and at the same orientation relative to the camera. However, for the controlled test environment this was considered an acceptable assumption, and the averaging approach provided consistent results (see below). Although such a method could not be used in a real mobile application (the assumption that the area contains one flat object would almost certainly be violated), the overall method of looming-through-blur could be implemented by substituting a simple object tracking method for this area-averaging technique. The critical point is to obtain the frame-by-frame history of the blur radius r for a given object. In the experiments we were able to assume that the area of interest contained the same object in each frame. The advantages of this experimental setup are that, while it is not real-time, it is controlled and produces clear data, illustrating the efficacy of looming-through-blur. While modifications would be needed to implement this concept in real-time avoidance for mobile robots, they were considered relatively minor.
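A tiny sketch of the looming-through-blur quantity implied above: given the blur radius r of the tracked object in consecutive frames, the relative expansion rate and a time-to-contact estimate follow directly; how r is measured (area averaging or object tracking) is left abstract, and the example numbers are made up.

```python
def looming_from_blur(r_prev, r_curr, dt):
    """Relative expansion of the blur radius between frames, and the
    corresponding time-to-contact estimate (r / r_dot)."""
    r_dot = (r_curr - r_prev) / dt
    looming = r_dot / r_curr if r_curr > 0 else 0.0
    ttc = r_curr / r_dot if r_dot > 0 else float("inf")
    return looming, ttc

# Example: blur radius grew from 4.0 px to 4.4 px over 0.1 s
print(looming_from_blur(4.0, 4.4, 0.1))   # ~0.91 1/s expansion, ~1.1 s to contact
```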

Real time implementation of obstacle avoidance for an autonomous mobile robot using monocular computer vision

While far more efficacious than any other edge detection method considered, the edge tracing algorithm was not selected for final use in the finished program, for a number of reasons. By the time it came to the author's attention, the writing of the final segmentation algorithm (see below) was already nearing completion. It was initially felt that the additional expenditure of time could not be justified, especially as virtually all existing work would have to be rewritten, and for a method that was bound to have unforeseen and time-consuming side issues. Secondly, it was believed that the edge-based approach suffered from a serious flaw: by definition, only the outlines of objects are considered. While this requires far less computational power than other methods (only a tiny fraction of the image is under consideration (Billingsley, 26th May 2011)), it renders most of the image a blank map with no information about those pixels. If the machine were to approach a blank wall, an edge detection device could only extract information about the line where the floor meets the wall.
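The trade-off described here is easy to quantify: an edge map marks only a small fraction of the pixels, which is why edge-based methods are cheap but leave the rest of the image as a blank, information-free map. A short sketch (the Canny thresholds are arbitrary):

```python
import cv2
import numpy as np

def edge_coverage(gray, lo=50, hi=150):
    """Fraction of pixels marked by a Canny edge map (thresholds are arbitrary).
    Typically only a few percent of the image: cheap to process, but every
    unmarked pixel carries no information for the obstacle map."""
    edges = cv2.Canny(gray, lo, hi)
    return np.count_nonzero(edges) / edges.size
```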

Real-Time On-Board Obstacle Avoidance for UAVs based on Embedded Stereo Vision

In past years a substantial amount of work has been done in the field of navigating mobile robots securely through known or unknown environments. Ground-operating robots, or slow-moving and heavy UAVs, typically use light detection and ranging (LiDAR) technology to perceive the surrounding environment (Bachrach et al., 2012). Scherer et al. (2008) used a heavy helicopter able to carry a maximum payload of 29 kg and flying at speeds of up to 10 m/s. Its LiDAR system has a range of up to 150 m, which is suitable given the speed and mass of the system. However, for fast reactions in dynamic or cluttered scenes, heavy LiDAR systems are not practical. They consume significantly more power than a camera-based system, thus requiring a correspondingly larger power supply, making them heavier and more sluggish. Due to higher speeds and more agile movements, UAVs need fast, light and energy-efficient solutions for robust obstacle avoidance. Systems relying on a single camera typically use optical flow information, computed between two consecutive frames, to perceive the structure of the environment. The main advantage of such structure-from-motion approaches is that they are lightweight and energy-efficient and can therefore be used on small UAVs. Merrell and Lee (2005) proposed a probabilistic method of computing the optical flow for more robust obstacle detection. Ross et al. (2012) used a data-driven optical flow approach, learning the flight behaviour from a human expert. Nonetheless, since the information gained by a single-camera system is limited, such systems are typically infeasible for robot navigation.

Vision-Based Mobile Robot

This article was written jointly by Shiao Ying Shing, Yang Jui Liang, and Su Ding Tsair from the Department of Electrical Engineering, National Chang-hua University of Education, Taiwan. The article provides an alternative approach to vision-based navigation. The control objective for the vision-based tracking problem is to manipulate the WMR so that it follows the desired trajectory. The system is composed of a ceiling-mounted fixed camera whose output is connected to a host computer, and a WMR carrying two round marks of different colours on top, used to differentiate the front and rear of the WMR, as shown in Figure 2.4.
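A short sketch of how such a ceiling camera can recover the WMR pose from the two differently coloured round marks: threshold each colour, take the blob centroids, and use their midpoint as the position and their connecting direction as the heading. The HSV ranges are calibration placeholders and this is not the authors' implementation.

```python
import cv2
import numpy as np

def wmr_pose(bgr, front_lo, front_hi, rear_lo, rear_hi):
    """Pose of the WMR from the ceiling camera: centroids of the front-colour
    and rear-colour marks give position (midpoint) and heading (their direction).
    The HSV ranges for the two colours are calibration placeholders."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    def centroid(lo, hi):
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    front, rear = centroid(front_lo, front_hi), centroid(rear_lo, rear_hi)
    if front is None or rear is None:
        return None                       # a mark was not found in this frame
    position = (front + rear) / 2.0       # image coordinates of the robot centre
    heading = np.arctan2(front[1] - rear[1], front[0] - rear[0])
    return position, heading
```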

Object Recognition and Obstacle Avoidance Robot

The master, at the same time, constantly sends messages to the SONAR module. The ultrasonic sensors of the SONAR module respond with an acknowledgement whenever an obstacle is detected. The SONAR module also calculates the distance to the object and sends it to the master. When the distance is within the reach of the robot arm, a message is sent by the master to the base unit to apply the brakes and stop the unit. With the help of three sonar sensors, the base is aligned to be exactly in front of the object; this ensures that the webcam is properly focused to capture the image of the obstacle accurately. A frame is then captured and the image is analysed to identify the shape and the colour of the object. If it matches the specification of the target object, the arm unit is activated to pick up the ball; otherwise, the robot moves around the obstacle and continues on its path.
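The colour-and-shape test at the end of this loop can be sketched as follows; the HSV range (here, a yellow ball), the area and circularity thresholds, and the OpenCV 4 findContours return signature are all assumptions for illustration.

```python
import cv2
import numpy as np

def is_target_ball(bgr, hsv_lo=(20, 100, 100), hsv_hi=(35, 255, 255)):
    """Check the colour and rough circularity of the largest blob in the frame.
    The HSV range and the thresholds below are illustrative."""
    mask = cv2.inRange(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
                       np.array(hsv_lo), np.array(hsv_hi))
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    c = max(contours, key=cv2.contourArea)
    area, perimeter = cv2.contourArea(c), cv2.arcLength(c, True)
    if perimeter == 0:
        return False
    circularity = 4.0 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    return area > 500 and circularity > 0.7
```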

NAO robot fuzzy obstacle avoidance in virtual environment

noise during the walk, as the Nao walk is unstable. A neural approach for robot navigation based on cognitive map learning is presented in [3]; it describes a neural network architecture in which a robot learns a new navigation behaviour by observing a human's movement in a room. There are also many works that study the control of autonomous robots using neural networks, fuzzy logic, PSO, and ANFIS. In [4] a fuzzy logic technique for Romeo autonomous vehicle navigation is proposed; an on-line navigation technique for a wheeled mobile robot in an unknown dynamic environment using fuzzy logic was developed in [5]; a new behaviour-based fuzzy control method for Voyager II mobile robot navigation is presented in [6]; and a fuzzy discrete event systems framework for the control of sampled-data systems that have to fulfil multiple objectives was introduced by Schmidt K.W. et al. in [7]. Nao and humanoid robot control have also been the subject of other research works by the authors, such as Melinte O. et al. in [8] for haptic robots, Wang X. et al. for rescue robots [9], Feng Y. in [10], and Wang H. for rehabilitation robots in [11]; those results are applied in this paper.
