Collision Avoidance and Navigation of UAS Using Vision-Based Proportional Navigation

The aviation industry, specifically those interested in SAA, sees TCAS as a viable option due to its current and widespread implementation, as well as its functionality in both Visual Meteorological Conditions (VMC) and Instrument Meteorological Conditions (IMC) (Yu & Zhang, 2015). Automatic dependent surveillance-broadcast (ADS-B) is another similar, and proven, cooperative technology that provides participating receivers with an aircraft's global positioning system (GPS) coordinates, velocity, mission intent, and a unique identification value (Zeitlin & McLaughlin, 2007). While it is not yet a required or fully supported system, ADS-B can exchange data over 240 km to and from ground stations, making it very favorable as a leading SAA technology. Both of these cooperative solutions, however, are limited in several respects as SAA technologies. TCAS is very capable with individual vehicles, but has not been proven able to handle multiple vehicles. ADS-B has the disadvantage of not working with ground or stationary objects (Yu & Zhang, 2015). The largest drawback of both cooperative systems as SAA technologies stems from the necessity for all other vehicles to use the same technology to ensure collision trajectories are detected and avoidance maneuvers are made (Angelov, 2012).

A Collision Avoidance System for Piloted UAS.

There has been a considerable volume of research on autonomous indoor navigation of UAS, as evidenced by the work in [SMK11], [FHHLMTP12], [WCPCL13] and [BHR09]. This is a different approach to collision avoidance than the one taken in this project: with autonomous navigation, collisions are avoided by the path planning algorithm, so the UAS never ends up in a situation where it has to actively take action to avoid an imminent collision. This type of research, however, has a lot in common with this project in the type of sensing it requires. The team in [CLDLCL14] built a UAS to navigate in a forest environment. This shares many similarities with our plan in that it uses a 2D laser scanner for obstacle detection, and an IMU combined with SLAM for motion estimation. The group in [SMK11] uses SLAM along with data from an IMU to create a sophisticated self-localization and mapping system; loop closure is used to reduce the accumulated SLAM error as the UAS traverses the environment. In both [WCPCL13] and [BHR09], the location of the UAS is estimated using Kalman-filter-fused data from SLAM, optical flow and an IMU, while in [CLDLCL14] SLAM and IMU readings are used. The teams in [CLDLCL14], [SMK11] and [WCPCL13] used advanced motion capture and characterization systems to characterize the dynamics of their UAS and determine the parameters of their respective controllers. This allowed them to develop a model relating the controllable inputs (pitch, roll, throttle, yaw) to the motion of their platform. With this model they could safely simulate most facets of the algorithm and tune it off-line before testing on a live platform. The system in [WCPCL13] handles everything on-board and live, while the system in [BHR09] handles most processing at a ground station and has a 1-2 second delay for SLAM scan-matching, which makes navigation slow. That system, however, was developed over 5 years ago; given current technology, the same processing could most likely be handled on-board and closer to real time.
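As a rough illustration of the fusion scheme these groups describe, the sketch below runs a scalar Kalman filter that propagates position with an IMU-derived velocity and corrects it with SLAM position fixes. All noise values and measurements are made up for the example; a real system would carry the full vehicle state.

```python
# Scalar Kalman filter: IMU velocity drives the prediction, SLAM
# position fixes drive the correction. Noise values are illustrative.

x, P = 0.0, 1.0       # position estimate and its variance
Q, R = 0.05, 0.5      # process noise (IMU drift), measurement noise (SLAM)

def predict(x, P, imu_velocity, dt):
    # Dead-reckon with the IMU-derived velocity; uncertainty grows.
    return x + imu_velocity * dt, P + Q

def update(x, P, slam_position):
    # Blend in a SLAM position fix, weighted by the Kalman gain.
    K = P / (P + R)
    return x + K * (slam_position - x), (1.0 - K) * P

for vel, z in [(1.0, 1.1), (1.0, 2.05), (0.9, 2.9)]:  # fake data
    x, P = predict(x, P, vel, dt=1.0)
    x, P = update(x, P, z)
    print(f"fused position {x:.2f}, variance {P:.3f}")
```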

Vision-Based navigation system for unmanned aerial vehicles

(II) Obstacle Detection and Collision Avoidance, in which the UAV is able to sense and detect frontal obstacles situated in its path. The detection algorithm mimics human behavior for detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. Then, by comparing the area ratio of the obstacle and the
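A minimal sketch of the expansion-ratio cue described above, using OpenCV: a convex hull is fitted around the tracked feature points in consecutive frames, and the ratio of hull areas flags an approaching obstacle. The 1.2 threshold and the sample points are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def hull_area(points):
    # Area of the convex hull fitted around the tracked feature points.
    return cv2.contourArea(cv2.convexHull(points.astype(np.float32)))

def is_approaching(prev_pts, curr_pts, threshold=1.2):
    # An expansion ratio well above 1 suggests a looming frontal obstacle.
    ratio = hull_area(curr_pts) / max(hull_area(prev_pts), 1e-6)
    return ratio > threshold, ratio

prev_pts = np.array([[10, 10], [50, 12], [48, 55], [12, 52]])  # frame t-1
curr_pts = np.array([[5, 5], [58, 8], [55, 62], [6, 60]])      # frame t
looming, ratio = is_approaching(prev_pts, curr_pts)
print(f"expansion ratio {ratio:.2f} -> approaching: {looming}")
```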


Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

For collision avoidance and path following, different types of sensors such as cameras, infrared sensors, ultrasonic sensors, and GPS can detect different aspects of the environment. Each sensor has its own capability and accuracy, whereas integrating multiple sensors enhances overall performance and obstacle detection. Many researchers have used sensor fusion to fuse data from various types of sensors, which improved the decision-making process of routing the mobile robot. A hybrid mechanism was introduced by [2] which uses a neuro-fuzzy controller for collision avoidance and path planning behavior for mobile robots in an unknown environment. Moreover, an adaptive neuro-fuzzy inference system (ANFIS) was applied to an autonomous ground vehicle (AGV) to safely reach the target while avoiding obstacles, using four ANFIS controllers [3]. Another sensor fusion approach, based on the Unscented Kalman Filter (UKF), was used for mobile robot localization; accelerometers, encoders, and gyroscopes supplied the data for the fusion algorithm. The proposed work was tested experimentally and successfully tracked the motion of the robot [4]. In [5], a Teleoperated Autonomous Vehicle (TAV) was designed with collision avoidance and path following techniques to explore the environment. The TAV includes GPS, infrared sensors, and a camera. A behavior-based architecture is proposed, consisting of an Obstacle Avoidance Module (OAM), Line Following Module (LFM), Line Entering Module (LEM), Line Leaving Module (LLM) and U-Turn Module (UTM). Sensor fusion based on fuzzy logic was used for collision avoidance, while neural-network fusion was used for the line-following approach.
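As one concrete instance of fusing gyroscope and accelerometer data of the kind cited above, here is a minimal complementary-filter sketch. This is a simpler alternative to the UKF used in [4], shown only to illustrate the fusion idea; the blend factor and sample readings are invented.

```python
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    # Gyro integral is smooth but drifts; the accelerometer tilt from
    # gravity is noisy but drift-free. Blend the two estimates.
    accel_angle = math.atan2(ax, az)
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for gyro, ax, az in [(0.10, 0.05, 0.99), (0.12, 0.09, 0.98)]:  # fake IMU data
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(f"fused tilt estimate: {angle:.4f} rad")
```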

Collision Free Mobile Robot navigation using Fuzzy Logic Approach

Autonomous mobile robot navigation has become a very popular and interesting topic in computer science and robotics in the last decade. Many algorithms have been developed for robot motion control in unknown (indoor/outdoor) and various (static/dynamic) environments. Fuzzy logic control is an important technique developed for robot navigation problems. The aim of this research is to design and develop a fuzzy logic controller that enables the mobile robot to navigate to a target in an unknown environment, using the WEBOTS commercial mobile robot simulator and MATLAB. The algorithm is divided into two stages: in the first stage, the mobile robot is driven toward the goal, and in the second stage, obstacle avoidance is realized. Robot position information (x, y, θ) is used to move the robot to the target, and six sensor readings are used during the obstacle avoidance phase. The mobile robot used (E_PUCK) is equipped with 12 IR sensors to measure the distance to obstacles. The fuzzy control system is composed of six inputs, grouped in pairs as left, front and right distance sensors, and two outputs, the mobile robot's left and right wheel speeds. The WEBOTS simulator and MATLAB were used to check the simulation results for the proposed methodology, and the simulator was used to model environments of different complexity and design. The experimental results show that the proposed architecture provides an efficient and flexible solution for autonomous mobile robots, and the objective of this research has been successfully achieved. This research also indicates that WEBOTS and MATLAB are suitable tools for developing and simulating mobile robot navigation systems.
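A minimal sketch of the kind of controller the abstract describes: three paired distance inputs pass through simple "near" membership functions and two heuristic rules to produce left/right wheel speeds. The membership shape, rule weights, and the 6.28 rad/s cruise speed are illustrative choices, not the paper's actual rule base.

```python
def near(d, limit=0.3):
    # Triangular membership in "obstacle near": 1 at contact, 0 beyond limit.
    return max(0.0, min(1.0, (limit - d) / limit))

def fuzzy_controller(left, front, right, cruise=6.28):
    n_l, n_f, n_r = near(left), near(front), near(right)
    # Rule 1: obstacle ahead -> turn toward the freer side.
    # Rule 2: obstacle on one side -> steer away from it.
    turn = n_f * (1.0 if n_l > n_r else -1.0) + 0.5 * (n_l - n_r)
    slow = max(n_l, n_f, n_r)          # strongest obstacle sets the slowdown
    v_l = cruise * (1.0 - slow) + 0.5 * cruise * turn
    v_r = cruise * (1.0 - slow) - 0.5 * cruise * turn
    return v_l, v_r

# Obstacle dead ahead and slightly to the right: slow down, veer left.
print(fuzzy_controller(left=0.5, front=0.1, right=0.2))
```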

Computer Vision for Mobile Robot Navigation

After obtaining a reliable set of inliers, the relative transformation between the current and the past view can be calculated in closed form by singular value decomposition, from the 3D reconstructions of the corners (Haralick et al., 1989; Arun et al., 1987). Thereafter, the reprojection error, or, for the sake of speed, the ellipsoid error (Matthies and Shafer, 1987), can be minimized by non-linear optimization. Finally, the error of the transformation can be calculated by error propagation (Stelzer et al., 2012). The transformation error estimate is important for the following processing steps. Pure incremental visual odometry suffers from drift, since small errors accumulate over time. We use a keyframe-based approach that computes the transformation of the current frame to a few keyframes. The transformation with the minimal error is used for further processing. Then, the current frame is inserted into the keyframe list such that it replaces a keyframe which is no longer visible, or one with a high transformation error. In this way, visual odometry is drift-free while the system is standing still. Furthermore, the keyframe approach reduces drift if the system is moving slowly relative to the frame rate.
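A sketch of the keyframe bookkeeping described above. The matching and visibility checks are stubbed out (randomized) purely so the example runs; in the real pipeline they would be the SVD-based transformation estimate and its propagated error.

```python
import random
from dataclasses import dataclass

@dataclass
class Keyframe:
    frame_id: int
    last_error: float = float("inf")

def match(kf, frame_id):
    # Stand-in for SVD-based matching: (transform, propagated error).
    return (f"T_{kf.frame_id}_{frame_id}", random.random())

def visible(kf, frame_id):
    # Stand-in visibility test: old keyframes drop out of view.
    return frame_id - kf.frame_id < 10

def process_frame(frame_id, keyframes):
    # Match against every keyframe; keep the lowest-error transform.
    transform, error = min((match(kf, frame_id) for kf in keyframes),
                           key=lambda c: c[1])
    # Evict a keyframe that left the view, else the least reliable one.
    gone = [kf for kf in keyframes if not visible(kf, frame_id)]
    victim = gone[0] if gone else max(keyframes, key=lambda k: k.last_error)
    keyframes.remove(victim)
    keyframes.append(Keyframe(frame_id, error))
    return transform, error

keyframes = [Keyframe(i) for i in range(5)]
for fid in range(5, 15):
    transform, error = process_frame(fid, keyframes)
print(transform, round(error, 3))
```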

Implementation of Fuzzy Decision Based Mobile Robot Navigation Using Stereo Vision

platform for stereo vision based navigation. There are many tasks that are essential for KSU-IMR to achieve while relying on its active vision system. These include: (i) obstacle detection, (ii) autonomous navigation, (iii) floor landmark extraction, (iv) acquisition of new landmark information, and (v) metric distance measurements. In reality, the mobile robot is to perform such tasks sequentially, and in parallel once it is in motion. Given these required system functionalities, a system hierarchy is needed to manage the massive amount of visual information during motion. A hierarchy management scheme was therefore developed for the active stereo vision system, so that it can serve multiple purposes. As shown in Fig. 1, the KSU-IMR system architecture consists of three fundamental layers of intelligence: an upper layer, a mid-layer, and a lower layer.

Vision Navigation Based on the Fusion of Feature based Methods and Direct Methods

Abstract. Vision navigation is an alternative to the Global Positioning System (GPS) in environments where access to GPS is denied. Yet the classical feature-based method has to extract feature observations from the image as input, and performs poorly in environments with few features. To address this problem, a fusion of the feature-based method and the direct method is designed: the feature-based method is used in feature-rich regions, while the direct method is used in regions with few features. We thus utilize the fusion of the two methods to improve the environmental adaptability of vision navigation. To improve robustness to outliers, the Huber weight function is applied. Then a nonlinear optimization method is used to obtain the optimal camera pose. Experimental results demonstrate that the proposed method can meet the needs of real-time autonomous navigation.
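The Huber weighting mentioned above, as it typically appears in iteratively reweighted least squares: unit weight for small residuals, down-weighting proportional to 1/|r| for outliers. The tuning constant delta = 1.345 is a conventional choice, not necessarily the paper's.

```python
import numpy as np

def huber_weight(residual, delta=1.345):
    # Unit weight inside |r| <= delta, 1/|r| decay outside: outliers
    # influence the fit linearly rather than quadratically.
    r = np.abs(residual)
    return np.where(r <= delta, 1.0, delta / r)

residuals = np.array([0.1, -0.5, 2.0, 8.0])
print(huber_weight(residuals))  # the two outliers get weights < 1
```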

Vision based indoor navigation of a Micro Aerial Vehicle using a Planar Pattern

Indoor navigation of a micro aerial vehicle (MAV) is still a challenging task because GPS signals are blocked in cluttered indoor environments and are therefore unavailable for autonomous navigation. Several vision-based navigation algorithms have been developed over the last few decades for MAV navigation in indoor environments. This paper describes a vision-based pose estimation algorithm for a MAV using a planar pattern. A forward-looking Raspberry Pi 3 camera is mounted on the MAV to acquire image frames of the planar pattern (landmark), and is calibrated to obtain the intrinsic and extrinsic parameters. The position of the camera is estimated from the image corner points extracted from the planar pattern, the world coordinates, and the intrinsic parameters; the position of the MAV is then estimated from the camera position.
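A plausible sketch of this pose estimation step using OpenCV's solvePnP: given the pattern's corner points in world coordinates, their image projections, and the calibrated intrinsics, the camera pose follows, and the camera (hence MAV) position is recovered from it. The pattern size, corner coordinates, and camera matrix below are placeholders, not the paper's calibration values.

```python
import cv2
import numpy as np

# Corners of a 20 cm square pattern on the floor (world frame, metres).
object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]])
# The same corners detected in the image (pixels) -- placeholder values.
image_pts = np.array([[320.0, 240.0], [420.0, 238.0],
                      [424.0, 340.0], [318.0, 344.0]])
# Placeholder intrinsics from calibration; distortion assumed corrected.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, np.zeros(5))
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec   # camera (and hence MAV) in the world frame
print(ok, camera_position.ravel())
```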

Vision-based Navigation and Mapping Using Non-central Catadioptric Omnidirectional Camera

and mapping results. For this task, translation and orientation errors were introduced into the simulation model of the catadioptric system used for the experiments. From this sensitivity analysis, it is observed that an error of one degree in relative rotation, or of a few millimetres in translation during calibration, results in an error of decimetres in object space in navigation and mapping. This is primarily due to the shape of the omnidirectional mirror and the setup of the catadioptric system. A slight translation error leads to a miscalculation of the mirror coordinate of the object. Due to the reflective property of the mirror, the error in positioning the catadioptric system, or in mapping a tie point, increases proportionally with the distance between the camera and the object. In summary, estimation errors in the rotation angle of the camera relative to the mirror, or in the relative translation, change the estimated projection of the mirror point in the image, and this further translates into an error in navigation and mapping. For example, a miscalculation of 1 mm in the position of the camera leads to an average 18 cm change in the elevation of a point on a wall 3 metres away (Figure 5.35). Similar planimetric errors are obtained for estimation errors along the XY plane. This high error propagation is due to the change in the reflection angle at different points on the mirror. This means that a one millimetre error in translation (during calibration of the mirror-camera system) causes a miscalculation of the mirror coordinate, which in turn causes an error in the estimated reflection angle, which leads to a positioning error that grows proportionally with object-camera distance. Since the change

Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information

Methods based on pattern matching do not use image intensity values directly. The patterns are information at a higher level, typically represented with geometrical models. This property makes such methods suitable for situations where the terrain presents distinct landmarks which are not affected by seasonal changes (e.g., roads, houses). If recognized, even a small landmark can make a large portion of terrain unique. This characteristic makes these methods quite dissimilar from correlation-based matching, where small details in an image have low influence on the overall image similarity. On the other hand, these methods work only if there are distinct landmarks in the terrain, and a pattern detection algorithm is required before any matching method can be applied. A pattern matching approach which does not require geometrical models is the Scale Invariant Feature Transform (SIFT) method [24]. The reference and sensed images can be converted into feature vectors which can be compared for matching purposes. Knowledge of the altitude and orientation of the camera relative to the terrain is not required for matching. Correlation methods are in general more efficient than SIFT because they do not require a search over image scale. In addition, SIFT features cannot handle variation in illumination conditions between reference and sensed images [8, 25].
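A minimal sketch of SIFT-based matching between a geo-referenced reference image and a sensed image, with Lowe's ratio test to prune ambiguous correspondences. The file names and the 0.75 ratio are placeholders.

```python
import cv2

# Placeholder file names for the geo-referenced and onboard images.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
sensed = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_r, des_r = sift.detectAndCompute(ref, None)
kp_s, des_s = sift.detectAndCompute(sensed, None)

# Lowe's ratio test: keep a match only if it is clearly better than
# the second-best candidate.
matches = cv2.BFMatcher().knnMatch(des_r, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```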


On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

of applications, both civilian and military. One alternative for ensuring continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which offers limited operation in certain environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which is capable of operating at night and with increased vision through smoke, to expand the breadth of operations that can be supported by vision-aided navigation. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure from motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data, compared with using an IMU alone. Results also demonstrate that MWIR-based position solutions provide a trajectory reconstruction similar to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison with the combined solution using EO data leads to the conclusion that MWIR imagery (with its unique phenomenology) is capable of expanding the operating envelope of vision-aided navigation.

Autonomous robotic intracardiac catheter navigation using haptic vision

control and navigation, the learning curve involved in mastering a new procedure could be substantially reduced. While this would be of significant benefit during initial clinical training, it could also enable mid-career clinicians to adopt new minimally invasive techniques that would otherwise require too much re-training. And even after a procedure is mastered, there are many situations where an individual clinician may not perform a sufficient number of procedures to


A Modified Proportional Navigation Guidance for Accurate Target Hitting

Most IR missiles with a reticle seeker for target tracking use detectors sensitive to the center of the infrared (IR) radiation emitted from different parts of the target. Missiles with a detector sensitive to the target exhaust or nozzle cannot attack a target head-on, because in this case the aircraft nozzle lies in their blind area and cannot be observed. Missiles with detectors sensitive to the target plume, however, have no blind points and can be fired omnidirectionally, which increases their capability. Therefore, when an IR seeker with a detector sensitive to the target plume is used for tracking airborne targets, the seeker tends to follow the target hot point, which lies farther away from the target exhaust and outside the fuselage. Hence, most available IR missiles with homing guidance based on LOS measurements by the seeker show a
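For context on the guidance law named in the title, a minimal sketch of classic planar proportional navigation: the commanded lateral acceleration is proportional to the closing velocity times the line-of-sight (LOS) rate. The navigation constant N = 4 and the sample engagement geometry are illustrative; the paper's modification to this law is not reproduced here.

```python
import numpy as np

def pn_acceleration(r_rel, v_rel, N=4.0):
    # r_rel, v_rel: target position/velocity relative to the missile (2D).
    los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / r_rel.dot(r_rel)
    closing_velocity = -r_rel.dot(v_rel) / np.linalg.norm(r_rel)
    return N * closing_velocity * los_rate  # commanded lateral acceleration

r = np.array([1000.0, 200.0])   # target 1 km ahead, 200 m off-axis
v = np.array([-250.0, 10.0])    # closing at roughly 250 m/s
print(f"commanded acceleration: {pn_acceleration(r, v):.2f} m/s^2")
```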

VEHICLE NAVIGATION USING ADVANCED OPEN SOURCE COMPUTER VISION

Abstract: Current operational transport and vehicle systems consist of vehicles running on fossil fuels or battery power. Navigation requires control by a human driver, who is responsible for a safe and comfortable journey from one place to another. With human intervention, however, there are several drawbacks that may lead to poor system performance. Negligent driving leading to fatal accidents, environmental damage, infrastructure damage and destruction, health problems due to constrained sitting postures and long periods of operation, and several other issues have motivated researchers to look for solutions that automate the driving process. Considering these shortcomings of current systems, new research focuses on the use of self-driving cars for transport and navigation. The complexity of this problem became apparent when the initial systems were built using machine learning techniques that tried to understand and model the dynamic nature of the environment. As research progressed, it became clear that the system must be trained to respond to a number of unpredictable situations such as rain, snow, lightning, oil spills, potholes, passing pedestrians and animals, approaching vehicles, and many more. All these aspects must be considered before a fully functional real-time system can be deployed. We approach the autonomous vehicle problem by focusing on three major aspects of any self-driving car, which form the foundation of the entire system. First, we need to be able to detect the lane lines so that our vehicle can orient itself correctly and continue to follow a safe path while being aware of the dynamic environment. Further, it needs to know its departure from the center of the lane in the scenario where it needs to move to avoid potholes or other road obstacles.
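One common way to implement the lane detection step the abstract outlines, sketched with OpenCV: edge detection, a trapezoidal region-of-interest mask, and a probabilistic Hough transform. All parameters (blur kernel, Canny thresholds, ROI shape, Hough settings) are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only a trapezoidal road region ahead of the vehicle.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)

    # Probabilistic Hough transform over the masked edge map.
    return cv2.HoughLinesP(cv2.bitwise_and(edges, mask), rho=2,
                           theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=100)

frame = np.zeros((240, 320, 3), np.uint8)
cv2.line(frame, (60, 239), (150, 130), (255, 255, 255), 4)  # synthetic lane line
print(detect_lane_lines(frame))
```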

Android Based Steganography Using Navigation

Abstract - In today's world of technology, data security is essential, and in many organizations information is critical. One way to ensure security is to make sure the data is not visible to an intruder. This can be done by hiding the message behind some other object. Here we achieve data security through the technique of watermarking, also known as steganography. An algorithm for image steganography is proposed to hide a large amount of confidential data, represented by a secret color image. The algorithm is based on least significant bits (LSB), a technique used to hide data behind an image. An extra layer of security is added to the above algorithm, in which the in-built GPS technology of Android is used so that the file can only be decrypted at a specific location.
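A minimal sketch of the LSB embedding idea described above: payload bits replace the least significant bits of the cover image's pixels. The GPS-gated decryption layer the abstract adds on top is not shown.

```python
import numpy as np

def embed(cover, payload: bytes):
    # Write each payload bit into the LSB of consecutive pixels.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()                  # copy; cover stays untouched
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract(stego, n_bytes: int):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover, b"secret")
print(extract(stego, 6))   # b'secret'
```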

Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

Similarly, an increasing number of studies are examining the potential use of image-guided systems for oral and maxillofacial surgery (OMS) [12, 13]. Patient or image registration (overlay) is key to associating the surgical field with its virtual counterpart [14, 15]. The disadvantages of the current navigation systems used in OMS include bulky optical trackers, the low accuracy of electromagnetic trackers in locating surgical instruments, invasive and error-prone image registration procedures, and the need for an additional reference marker to track patient movement [16]. In addition, errors related to the position, angle, distance, and vibration of the optical tracker, the reference frame, and the probe tip of the equipment are high. With anatomical landmark-based registration, each observer is prone to human error based on personal preference for anatomical landmarks in the surgical field [17, 18]. Moreover, frequent hand-eye transformation, which corrects the displacement between the probe tip and the image reference frame, is required for constant comparison between the surgical field and the displayed image. Furthermore, images in real space are projected on a 3D display via binocular stereopsis, with the disadvantage that the observed video does not change with viewing position, since only relative depth is recognized. Thus, accurate 3D positioning cannot be reproduced without incurring motion parallax. Head-mounted displays and head-mounted operating microscopes with stereoscopic vision have often been used for AR visualization in the medical field. However, such video see-through devices have two views that present only horizontal parallax instead of full parallax. Projector-based AR visualization is appropriate for large operative field overlays; however, it lacks depth perception. As described in our previous study, we have developed an autostereoscopic 3D image overlay using a translucent mirror [15]. The integral videography (IV) principle applied in this study differs from binocular stereopsis, and allows both binocular parallax for depth perception and motion parallax, wherein depth cues are recognized even while the observer is in motion [15, 19-21]. Results from our previous research have shown that the 3D AR system using integral videographic images is a highly effective and accurate tool for surgical navigation in OMS [22].

Autonomous vision-based terrain-relative navigation for planetary exploration

In the original formulation of the segmentation-based crater detection algorithm described in Chapter 2, each group of connected pixels in the binary image is considered a potential part of a crater. This approach has a major problem: it happens frequently that the illuminated or the shaded part of a given crater is fused with other geographic structures such as mountains, ridges or other craters. The shape of this crater part is then significantly modified, and it is likely that the algorithm will not be able to find its corresponding part in the other binary image. Even if it succeeds, the detected crater will be inaccurate or a false alarm. To solve this issue, the candidate proposes an innovative solution that segments the illuminated and shaded areas of the image into smaller objects, called convex objects. The segmentation is done in three steps: distance transform, hierarchical watershed transform, and convex object characterisation. The first step applies a distance transform to the binary images, setting the intensity of the illuminated and shaded pixels to their distance from the closest background pixel; the distance transform is presented in detail later in this chapter. The regional maxima (groups of adjacent pixels with a higher value than their neighbours) in the distance transform correspond to the centres of the convex objects. As an example, the distance transform of a simple binary object built by merging several circles of various sizes has been computed, and the result is shown in the figures below:
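A rough sketch of these steps with OpenCV, approximating the regional-maxima extraction by thresholding the distance transform and using OpenCV's (non-hierarchical) watershed in place of the hierarchical transform the thesis describes; the 0.6 threshold factor is an assumption.

```python
import cv2
import numpy as np

def split_convex_objects(binary):   # binary: uint8 image with values {0, 255}
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

    # Approximate the regional maxima by thresholding the transform;
    # each surviving blob marks the centre of one convex object.
    _, peaks = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
    _, markers = cv2.connectedComponents(peaks.astype(np.uint8))

    # Shift labels: sure background = 1, object centres = 2..n,
    # remaining foreground = 0 (unknown, to be flooded).
    markers += 1
    markers[(binary > 0) & (peaks == 0)] = 0
    return cv2.watershed(cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR), markers)

# Two merged discs come apart into separate labels (2 and 3).
img = np.zeros((100, 100), np.uint8)
cv2.circle(img, (40, 50), 20, 255, -1)
cv2.circle(img, (65, 50), 15, 255, -1)
print(np.unique(split_convex_objects(img)))   # [-1, 1, 2, 3]
```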

Vision-based Path Estimation for the Navigation of Autonomous Electric Vehicle

This paper presents the design and implementation of an autonomous Electric Vehicle (EV) with intelligent driving control that provides driver assistance as well as unmanned driving. It is an automatically guided vehicle able to move along the tracks in a given region. For prototyping purposes, a buggy car was used and several sensors were installed. A camera was installed on the EV as a vision system, connected to a personal computer (PC) for processing the image information. An image processing algorithm is employed to detect the line and the center of gravity (COG) of the road. In future research, another PC will be installed to control the motors operating the acceleration pedal, brake pedal and steering wheel. Information from several sensors is fused to move the EV intelligently without human control.
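A minimal sketch of the line-and-COG step using image moments in OpenCV: threshold the bright line, take the centroid of the mask, and report its horizontal offset from the image centre as a steering error. The threshold value and synthetic frame are illustrative, not the paper's.

```python
import cv2
import numpy as np

def line_cog_offset(gray_frame, thresh=200):
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                            # no line visible
    cog_x = m["m10"] / m["m00"]                # centroid column
    return cog_x - gray_frame.shape[1] / 2     # + means line right of centre

frame = np.zeros((120, 160), np.uint8)
cv2.line(frame, (100, 0), (100, 119), 255, 3)  # synthetic bright line
print(line_cog_offset(frame))                  # ~ +20 px steering error
```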
