Vision-based Obstacle Avoidance for Small UAVs

In order to reduce the dependence of a UAV on human operators, autonomous control is required. In navigating a desired path, an autonomous UAV must take into account static and dynamic obstacles such as buildings, terrain, other vehicles, and people. The successful avoidance of such obstacles depends on accurate obstacle detection logic [11]. There are a number of sensing schemes that may provide the needed environment data, the most commonly implemented of which are computer vision, LIDAR [12] and sonar [13] schemes. Ground-based robots have developed far enough to use suites of combined sensor types [14], though their relatively constant proximity to the ground enables the use of novel approaches not available to air-based platforms [12] [15]. Instead, most research in UAV obstacle detection in recent years has taken advantage of the relatively low cost and light payload weight of vision-based systems [16]. These systems are also flexible in their design: different numbers of sensors may be used, with a different obstacle detection algorithm in each configuration. Most commonly, either one or two cameras are used to provide environment data. In the case of a single camera, features such as edges or corners are identified and tracked from one sensor frame to the next in a process known as optical flow. In dual-camera implementations, the disparity of pixels between the left and right sensors is used to compute a depth mapping [17] [18]. It has also been shown that a combination of these methods is possible, using cameras with different frame rates to perform both stereo correspondence and feature tracking [19]. These computer vision approaches are made easier by the availability of software packages built for implementing computer vision algorithms. One such package, OpenCV, was selected for use in this project due to the completeness of its feature set and its implementation in both C++ and Python [20].
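The two camera configurations described above map onto standard OpenCV primitives. The sketch below is illustrative only and is not taken from the project's code: the feature tracker stands in for the single-camera optical-flow case, the block matcher for the dual-camera disparity case, and the focal length and baseline values are placeholder assumptions.

```python
# Illustrative sketch of the two approaches described above (not project code).
import cv2
import numpy as np

def sparse_optical_flow(prev_gray, curr_gray):
    """Single-camera case: track corner features between consecutive frames."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    mask = status.ravel() == 1
    return corners[mask], tracked[mask]               # matched feature positions

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Dual-camera case: pixel disparity between left and right views -> depth."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    with np.errstate(divide='ignore'):
        depth_m = focal_px * baseline_m / disparity   # invalid where disparity <= 0
    return disparity, depth_m
```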

Monocular vision, optical blur and visual looming as a basis for mobile robot obstacle avoidance

significant, especially when the image is sharpest. We believe that at sharp points in the image the blur recovery is strongly affected by noise, with small noise points being erroneously reported as ‘sharp’ objects. While the variance decays rapidly, its final state is still quite high. Qualitative examination of the blur recovered illustrates that a ‘penumbra’ of less accurate blur exists around each edge in the image, as the method of Hu and de Haan is most accurate on the edge [11]. These increasingly inaccurate blur values could account for the relatively wide spread of the mean. However, this was not considered a serious problem due to the smooth evolution of the mean and its clear relationship with range. With a more advanced tracking system, this might not be an issue.
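As a concrete illustration of summarising the recovered blur near edges, the hedged sketch below assumes a blur map has already been produced by an external estimator (such as the Hu and de Haan method cited above, which is not reproduced here) and simply computes the mean and variance inside a narrow band around detected edges; the edge detector thresholds and band width are arbitrary choices.

```python
# Hedged sketch: summarise a recovered blur map in a band around image edges,
# where the estimate is most reliable. `blur_map` is assumed to come from an
# external blur-estimation step; thresholds and band width are illustrative.
import cv2
import numpy as np

def blur_statistics(gray, blur_map, edge_band_px=3):
    edges = cv2.Canny(gray, 50, 150)
    kernel = np.ones((2 * edge_band_px + 1, 2 * edge_band_px + 1), np.uint8)
    near_edges = cv2.dilate(edges, kernel) > 0     # 'penumbra' band around each edge
    samples = blur_map[near_edges]
    return samples.mean(), samples.var()
```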

Design And Construction Of A Small Scale Underwater Vehicle With Obstacle Avoidance System

Dagon AUV is a hovering AUV developed in 2010, designed specifically for algorithm evaluation and visual mapping. It weighs around 70 kg in air and has a depth rating of more than 150 meters. To withstand the high underwater pressure, the pressure hull is made from a strong metal alloy. The AUV has five DC thrusters for propulsion: two for forward thrust, two for vertical thrust, and one for lateral thrust. It can be operated in tethered mode over a fiber-optic link or in untethered mode over an acoustic modem, depending on the type of analysis. The AUV's depth is measured by a pressure sensor and a Doppler velocity log. Its orientation is controlled using an Attitude and Heading Reference System (AHRS) and a fiber-optic gyroscope. For visual mapping, a scanning sonar as well as high-resolution stereo cameras have been fitted to the AUV. An Intel i7 processor-based controller was selected for its high processing capability, both for basic operation of the AUV and for analysis in several experiments. Two lithium-ion battery blocks with a total capacity of 1.5 kWh power the AUV [9]. Figure 2.2 shows the hardware of the Dagon AUV in different conditions.

Obstacle Avoidance Gloves for the Blind Based on Ultrasonic Sensor

The World Health Organization estimates that there are 40 to 45 million blind people in the world, not counting those with low vision. According to surveys, the number of patients with low vision is three times the number of blind people. That means there are about 140 million people with low vision worldwide, of whom 75%, more than 100 million patients, cannot restore or improve their eyesight without surgery or diopter correction. The remaining 25% of low-vision patients need low-vision care, for example wearing low-vision devices and using visual rehabilitation instruments [1].

Vision-Based Obstacle Avoidance Strategies for MAVs Using Optical Flows in 3-D Textured Environments

To navigate through unknown environments, micro aerial vehicles (MAVs) need to detect and avoid obstacles. There are many types of sensors for detecting obstacles, such as RADAR (RAdio Detection And Ranging), LiDAR (LIght Detection And Ranging), and ultrasonic sensors. RADAR and LiDAR perform well in terms of operating range and accuracy when detecting obstacles around the MAV. However, their heavy weight and energy consumption reduce the MAV's operating time. Although lightweight, low-power products have been developed recently [1], they are still expensive. Ultrasonic sensors are small and light but typically have poor range accuracy. Vision sensors, on the other hand, offer various advantages such as light weight, a large detection area, fast response times, low cost, and rich information about the environment around the MAV.

Monocular vision based obstacle detection

Detecting and preventing incidents with obstacles is a challenging problem. Most common obstacle detection techniques are currently sensor-based. Mobile robots such as small Unmanned Aerial Vehicles (UAVs) are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo and mono techniques. Mono methods fall into two groups: foreground-background separation and brain-inspired methods. Brain-inspired methods are highly efficient for obstacle detection. Recent research in this field has focused on matching Scale-Invariant Feature Transform (SIFT) points, along with a SIFT size-ratio factor and the area ratio of convex hulls, in two consecutive frames to detect obstacles. However, this method cannot distinguish between near and far obstacles or detect obstacles in a complex environment, and is thus sensitive to wrongly matched points. This paper aims to solve these problems by using the distance-ratio of matched points. Every matched point is then examined to distinguish between far and near obstacles. The results demonstrate the high efficiency of the proposed method in complex environments. The lowest accuracy achieved by the algorithm was 60.0%, and the overall accuracy was 79.0%.
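The sketch below gives a hedged illustration of the looming cue described above: SIFT points are matched between two consecutive frames, and growth in the spread of the matched points is used as an expansion signal. The centroid-spread ratio and the 1.1 threshold are illustrative stand-ins, not the paper's exact distance-ratio formulation.

```python
# Hedged sketch of a looming check from matched SIFT points in two consecutive
# frames: if the matched points spread apart, the region is expanding and is
# treated as an approaching obstacle. Threshold and ratio test are illustrative.
import cv2
import numpy as np

def expansion_check(prev_gray, curr_gray, expand_thresh=1.1):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    if len(good) < 3:
        return None, False
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Spread of the matched points around their centroid, now vs. previous frame.
    d1 = np.linalg.norm(p1 - p1.mean(axis=0), axis=1).mean()
    d2 = np.linalg.norm(p2 - p2.mean(axis=0), axis=1).mean()
    ratio = d2 / max(d1, 1e-6)
    return ratio, ratio > expand_thresh     # True -> likely approaching obstacle
```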

Nonlinear Coordinated Steering and Braking Control of Vision-Based Autonomous Vehicles in Emergency Obstacle Avoidance

A great deal of practical work and research on lateral motion control has been carried out in recent years. A nested PID steering control architecture with two independent control loops has been proposed for vision-based autonomous vehicles; it can reject curvature disturbances that increase linearly with time. To emulate human decision making and analogical reasoning, an intelligent fuzzy steering control strategy has also been proposed.
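To make the two-loop idea concrete, the hedged sketch below pairs an outer loop that turns a vision-measured lateral offset into a yaw-rate reference with an inner loop that turns the yaw-rate error into a steering command. It is not the architecture from the cited work; the gains, sign conventions, and sample time are illustrative assumptions.

```python
# Minimal two-loop (nested) steering sketch: outer loop converts lateral offset
# to a yaw-rate reference, inner loop converts yaw-rate error to a steering
# command. Gains, signs, and sample time are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

outer = PID(kp=0.8, ki=0.05, kd=0.2)    # lateral offset [m] -> yaw-rate reference [rad/s]
inner = PID(kp=1.5, ki=0.10, kd=0.0)    # yaw-rate error [rad/s] -> steering angle [rad]

def steering_command(lateral_offset_m, yaw_rate_rad_s, dt=0.02):
    yaw_rate_ref = outer.step(-lateral_offset_m, dt)
    return inner.step(yaw_rate_ref - yaw_rate_rad_s, dt)
```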

Real time implementation of obstacle avoidance for an autonomous mobile robot using monocular computer vision

While far more efficacious than any other edge detection method, the edge tracing algorithm was not selected for final use in the finished program for a number of reasons. By the time it came to the author's attention, work on the final segmentation algorithm (see below) was already nearing completion. It was felt that the additional expenditure of time could not be justified, especially as virtually all existing work would have had to be rewritten for a method that was bound to have unforeseen and time-consuming side issues. Secondly, it was believed that the edge-based approach suffered from a serious flaw: by definition, only the outlines of objects are considered. While this requires far less computational power than other methods, since only a tiny fraction of the image is under consideration (Billingsley, 26 May 2011), it renders most of the image a blank map with no information about those pixels. If the machine were to approach a blank wall, an edge detection device could only extract information about the line where the floor meets the wall.
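The sparsity claim is easy to check numerically. The short sketch below, with arbitrary Canny thresholds, reports what fraction of pixels an edge map actually covers; everything else is the 'blank map' referred to above.

```python
# Illustration of the sparsity argument above: an edge map marks only a small
# fraction of the pixels, leaving the rest of the scene without information.
# Canny thresholds are arbitrary.
import cv2

def edge_pixel_fraction(gray):
    edges = cv2.Canny(gray, 100, 200)
    return (edges > 0).sum() / edges.size   # typically a few percent of the image
```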

NAO robot obstacle avoidance based on fuzzy Q-learning

In Fig. 3(a), the side length of each small square is 0.1 meter. The red star represents the obstacle and the blue point is the target location. The size of the checkerboard path planning model is (h_max + 0.8) × (l_max + 0.8), where h_max denotes the maximum transverse distance between obstacles and l_max denotes the maximum longitudinal distance between obstacles. The safe distance between the robot and an obstacle is set to 0.3 meter in consideration of the robot's outline, so the red crosses denote the danger area: the robot will collide with the obstacle if it enters this area. In the checkerboard path planning model, the robot can move from one intersection to an adjoining intersection. These intersections constitute the state set S. Fig. 3(b) shows an optimal obstacle avoidance path along which the robot moves from its current position to the target position.
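A hedged sketch of that construction follows: intersections on a 0.1 m grid form the state set S, and any intersection within the 0.3 m safety radius of an obstacle is marked as a danger state. The grid extent and obstacle coordinates in the example call are illustrative, not taken from the paper.

```python
# Hedged sketch of the checkerboard model: grid intersections form the state
# set S; intersections within the safety radius of an obstacle are danger
# states (the red crosses). Extent and obstacle positions are illustrative.
import numpy as np

CELL = 0.1         # grid spacing in metres
SAFE_DIST = 0.3    # safety radius around each obstacle in metres

def build_state_set(width_m, height_m, obstacles):
    xs = np.arange(0.0, width_m + 1e-9, CELL)
    ys = np.arange(0.0, height_m + 1e-9, CELL)
    states, danger = [], []
    for x in xs:
        for y in ys:
            p = np.array([x, y])
            if any(np.linalg.norm(p - np.array(o)) <= SAFE_DIST for o in obstacles):
                danger.append((x, y))     # collision risk: excluded from S
            else:
                states.append((x, y))     # admissible state in S
    return states, danger

S, danger_zone = build_state_set(2.0, 1.5, obstacles=[(0.8, 0.7)])
```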

Efficient Optical flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone

with the accompanying video and YouTube list in Fig. 10(c). The mentioned failure case for the autonomous flights is difficult to overcome. If the MAV turns and faces a large open space, the distance for EdgeStereo can be large enough to compromise the quality of the velocity estimate, due to the small baseline of the stereo camera. As we already observed in Fig. 6, this would cause the pocket drone to drift, which is problematic when near a wall or obstacle after the turn. If an obstacle is not in its FOV, the chance of collision increases significantly. This could be solved by merging the check and turn nodes of the FSM, so that it only stops turning when facing a sufficiently clear path. Another solution is to add a lightweight short-range sensor on the sides of the pocket drone, so that it immediately detects when it is flying close alongside an obstacle. The obstacle avoidance logic will need some additional work; however, the experiments show that Edge-FS can be used for navigation overall. During the autonomous flight, the pocket drone stabilized itself using the velocity estimates from its forward camera alone.

A Simplified Fuzzy Logic Control System Approach to Obstacle Avoidance combining Stereoscopic Vision and Sonar

image points do indeed have matching intensities. As discussed, the search can be limited to only occur along conjugate epipolar lines. It is likely that several points along epipolar lines will have closely matching intensities, however, so matching on a point-by-point basis will not be sufficient. Instead, matching can occur on a region-by-region basis. The size of the regions must be selected carefully. If they are too small, correct matches may be missed. If they are too large, localised variations in depth within individual regions may be too great. One enhancement to the basic intensity based method involves the use of multiple resolutions of the image pairs. A reduction in the resolution of an image pair increases the likelihood that they will match, and allows smaller search regions to be used. Once the disparity at low resolution has been established, increasingly higher resolutions can be considered, using the disparity of the previous lower resolution as a guide for calculating the more detailed disparity (Iocchi & Konolige 1998). A further variation employs stochastic relaxation at each resolution to refine the matching of the previous resolution (Nalwa 1993).
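The multi-resolution idea can be sketched with OpenCV's block matchers, as below. Since these matchers do not accept an explicit disparity prior, the coarse, half-resolution result is used only to bound the disparity search range of the full-resolution pass; this is a simplification of the pyramid schemes cited above, and the block sizes and disparity ranges are illustrative.

```python
# Hedged coarse-to-fine sketch: a half-resolution pass establishes the rough
# disparity range, which then bounds the search window of the full-resolution
# pass. A simplification of the cited pyramid methods; parameters illustrative.
import cv2
import numpy as np

def coarse_to_fine_disparity(left_gray, right_gray):
    # Coarse region-by-region matching on half-resolution images.
    small_l, small_r = cv2.pyrDown(left_gray), cv2.pyrDown(right_gray)
    coarse = cv2.StereoBM_create(numDisparities=32, blockSize=15)
    d_small = coarse.compute(small_l, small_r).astype(np.float32) / 16.0

    # Use the coarse disparities (scaled back up by 2) to bound the fine search.
    valid = d_small[d_small > 0] * 2.0
    d_min = int(max(valid.min() - 8, 0)) if valid.size else 0
    d_num = 16 * max(1, int(np.ceil((valid.max() + 8 - d_min) / 16))) if valid.size else 64

    fine = cv2.StereoSGBM_create(minDisparity=d_min, numDisparities=d_num, blockSize=7)
    return fine.compute(left_gray, right_gray).astype(np.float32) / 16.0
```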

DEVELOPMENT OF AN ARDUINO-BASED OBSTACLE AVOIDANCE ROBOTIC SYSTEM FOR AN UNMANNED VEHICLE

The application and complexity of mobile robots are slowly growing every day. They are gradually making their way into real-world settings in fields such as the military, medicine, space exploration, and everyday housekeeping [1]. Motion, a vital characteristic of mobile robots in obstacle avoidance and path recognition, has a major impact on how people perceive and react to an autonomous system. It enables an autonomous robot to navigate from one place to another without human intervention. Computer vision and range sensors are the primary obstacle detection methods used in mobile robots. Computer vision is a more rigorous and expensive obstacle detection technique than range sensing; consequently, most commercial autonomous robots use range sensors to detect obstacles. The use of radar, infrared (IR), and ultrasonic sensors for obstacle detection systems began as early as the 1980s [2]. After testing these technologies, it was concluded that radar was the most suitable, as the other two options were prone to environmental constraints such as rain, ice, snow, dust and dirt. The radar approach was also considered a very cost-effective technology, both for the present and the future. [3] presented a method using a single charge-coupled device (CCD) camera in conjunction with a spherically shaped curved reflector

Optimisation-based verification process of obstacle avoidance systems for unmanned vehicles

Stochastic global optimisation algorithms, including the GA and GLOBAL methods, were applied to the problem. To illustrate the proposed optimisation-based verification algorithm, a very simple unicycle mobile robot was considered first. A static obstacle avoidance algorithm based on the artificial potential field method was verified within the parameter range. Only two uncertain parameters, mass and inertia, were defined within lower and upper bounds. Both algorithms performed well for this case study. A moving-obstacle collision avoidance algorithm using the potential field method was then applied to the more complicated Pioneer 3DX robots. An initial robustness analysis was carried out, and the eight most significant uncertain parameters, including obstacle sensor data uncertainties, were chosen within lower and upper bounds. Finding the true worst-case scenario for this case study is very challenging, because it involves verifying a moving OAS with more design variables, including sensor data uncertainty. Furthermore, it is a non-linear analysis problem in a search space with many local minima, close to a real-world problem. The GA and GLOBAL algorithms converged to almost the same global solution; however, GA was much faster than the GLOBAL algorithm, taking only 2 hours 20 minutes to converge compared with around 5 hours 20 minutes for GLOBAL. After considering the verification of the OAS application for UGVs, this work was extended to UAVs. Based on a 6 Degree of Freedom (6DoF) kinematic and dynamic model of a UAV, the path planning and collision avoidance algorithms were developed in 3D space. Static and moving obstacle avoidance algorithms were developed for UAVs using potential field methods. The proposed OAS was verified at nominal parameters, and a clearance criterion of minimum distance to the obstacle was then defined as the objective function in the time domain. Variations in mass and two aerodynamic coefficients were considered for verification. The convergence results of GA and GLOBAL for these case studies are very close, and again GA is faster than the GLOBAL algorithm.
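For reference, the artificial potential field scheme that both case studies verify can be sketched in a few lines: an attractive pull towards the goal plus a repulsive push from any obstacle inside an influence radius. The gains and radii below are illustrative assumptions, not the parameters used in the verified controllers.

```python
# Minimal artificial potential field sketch: attractive force towards the goal
# plus a repulsive force from obstacles inside an influence radius. Gains and
# radii are illustrative, not the verified controller's parameters.
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:                          # within the influence radius
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return force                                       # followed as a velocity command

v_cmd = potential_field_step(np.array([0.0, 0.0]), np.array([5.0, 2.0]),
                             obstacles=[np.array([2.0, 1.0])])
```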


Mobile Robot Following Obstacle Avoidance And Collision

RACCOON is a vision-based system that tracks car taillights at night, as described in [2]. The RACCOON system was developed at Carnegie Mellon University. A prototype was built and integrated with the RACCOON system, enabling the autonomous vehicle to follow the leading car effectively under low-light conditions. According to [2], this project was inspired by the following reason:

A dynamical system approach to realtime obstacle avoidance

Adaptation to change in the target position. To verify the adaptability of the system in a dynamic environment, we perform an experiment in which we continuously displace the target while the robot approaches it (see Fig. 15). During the reproduction, the position of the target is updated based on the output of the stereo vision system. Since the modulated dynamics preserves the asymptotic stability of the model, the system can adapt its motion on-the-fly to the change in the target position. Note that the instant adaptation to the target position is an inherent property of the SEDS modeling. In this experiment we are demonstrating the fact that our approach preserves all the properties of the SEDS model, while enabling it to perform obstacle avoidance.

Adaptation to change in both the target and obstacle positions. To evaluate the performance of the system in the presence of a moving obstacle, we extend the previous example to a case where both the target and the obstacle positions are changed as the robot approaches the target. Please note that in this experiment we assume that the obstacle movement is "quasi-static". This assumption requires the obstacle approaching speed (the projection of the obstacle velocity onto the vector connecting the obstacle center to the robot end-effector) to be significantly smaller than the robot movement in that direction. Figure 16 demonstrates the obtained results. In this experiment, between t = 0 and t = 6 seconds, the target is moved from its original position, first against and then along the direction of the y-axis. The box also starts moving in the period between t = 0 and t = 2 seconds. During the reproduction, the target position and the box center and orientation are continuously updated based on the output of the stereo vision system. Similarly to the previous example, the system remains
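The modulation referred to above can be illustrated with a simplified 2-D sketch: the nominal velocity f(x) is multiplied by a matrix built from the obstacle's normal and tangent directions, whose eigenvalues shrink the normal component and amplify the tangential one as the robot nears the obstacle. A linear attractor stands in for the learned SEDS model and the obstacle is reduced to a circle; under the quasi-static assumption the obstacle centre is simply re-read at each control cycle. All of these are illustrative simplifications, not the cited implementation.

```python
# Simplified 2-D sketch of obstacle-modulated dynamics: the nominal velocity is
# deflected by a matrix whose eigenvalues damp the normal component and boost
# the tangential one near the obstacle. Linear attractor and circular obstacle
# are illustrative stand-ins for the learned SEDS model and the real obstacle.
import numpy as np

def modulated_velocity(x, target, obs_center, obs_radius):
    f = -(x - target)                              # nominal attractor dynamics
    d = x - obs_center
    gamma = np.linalg.norm(d) / obs_radius         # > 1 outside the obstacle
    if gamma <= 1.0:
        return np.zeros(2)                         # inside the obstacle: stop
    n = d / np.linalg.norm(d)                      # normal direction
    t = np.array([-n[1], n[0]])                    # tangential direction
    E = np.column_stack((n, t))
    D = np.diag([1.0 - 1.0 / gamma, 1.0 + 1.0 / gamma])
    return E @ D @ np.linalg.inv(E) @ f            # deflected velocity command

v = modulated_velocity(np.array([0.0, 0.5]), target=np.array([3.0, 0.5]),
                       obs_center=np.array([1.5, 0.4]), obs_radius=0.3)
```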

Design of obstacle avoidance controller for agricultural tractor based on ROS

compass to achieve the automatic navigation of tractors; however, the GPS signal reception and positioning accuracy are not ideal. This is because the branches and leaves of fruit trees block the GPS receiver, preventing it from receiving satellite signals stably. Meanwhile, Wei et al. [9] proposed a field obstacle detection system based on binocular vision. The proposed detection system can detect obstacles higher than the canopy of the field crop and obtain the range and distance of an obstacle through stereo

Embedded Stereokamera-basierte reaktive Kollisionsvermeidung für UAVs (Embedded stereo vision based reactive collision avoidance for UAVs)

...automatic collision avoidance for UAVs based on a stereo camera. To this end, we have implemented a real-time-capable and energy-efficient system that copes with limited resources. For obstacle detection, disparity maps are computed from the stereo camera images. These are converted into so-called U- and V-maps, from which objects are detected using pattern recognition methods. Collision avoidance is based on a reactive approach that searches for the shortest path to fly around an obstacle located at a critical distance.
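The U- and V-maps mentioned above are simply per-column and per-row histograms of the disparity image, in which obstacles show up as compact blobs or lines that basic pattern recognition can pick out. The hedged sketch below builds both maps from a given disparity image; the disparity range is an illustrative assumption.

```python
# Hedged sketch of U/V-disparity maps: the U-map histograms disparities per
# image column, the V-map per image row; obstacles appear as compact structures
# in these maps. The disparity range is illustrative.
import numpy as np

def u_v_maps(disparity, max_disp=64):
    disp = np.clip(disparity.astype(int), 0, max_disp - 1)
    h, w = disp.shape
    u_map = np.zeros((max_disp, w), dtype=np.int32)   # disparity histogram per column
    v_map = np.zeros((h, max_disp), dtype=np.int32)   # disparity histogram per row
    for row in range(h):
        for col in range(w):
            d = disp[row, col]
            if d > 0:                                 # skip invalid disparities
                u_map[d, col] += 1
                v_map[row, d] += 1
    return u_map, v_map
```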

Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

The selected field of view of 62° ensured the capability of detecting the border of an object the size of the actual drone (58.4 × 1.3 × 54.4) located in the center of the image at distances greater than 15 cm, which allows obstacles to be avoided. For larger obstacles or closer distances, the drone proved able to stop and hover, avoiding the collision. Bigger obstacles located at longer distances were avoided thanks to the use of a high-quality camera able to detect obstacles at long range. If faster speeds are required, the frame rate calculation and the angle should be adjusted so that the drone can complete the calculation in time for proper detection. However, changing the camera's field of view would only be advisable to allow further maneuverability in extremely dense scenarios with short-distance detection requirements, which are not common in the aerial scenarios where UAVs are deployed.
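The expansion criterion at the heart of this approach can be expressed as a one-line check: the apparent size of the detected obstacle is compared across frames, and the drone stops and hovers once the growth ratio crosses a threshold. The threshold and the example numbers below are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of a size-expansion check: stop and hover once the obstacle's
# apparent area grows faster than a threshold between frames. The threshold
# and example values are illustrative.
def should_stop(prev_area_px, curr_area_px, expansion_threshold=1.2):
    if prev_area_px <= 0:
        return False
    return (curr_area_px / prev_area_px) >= expansion_threshold

# Example: a detected region growing from 400 to 520 pixels triggers avoidance.
print(should_stop(400, 520))   # True
```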
