Optical Flow and Deep Learning Based Approach to Visual Odometry

For this system, ego-motion is estimated from two consecutive video frames. Instead of feeding the two images directly into the network, a flow image is first computed with the FlowNetS architecture, and that result is used as the network input. Each flow image represents the change between two video frames, so the corresponding motion differentials can be computed from the ground truth provided by the KITTI odometry dataset [8]. The KITTI dataset provides 11 video sequences with ground truth for every frame, intended for training, and 11 more sequences without ground truth to be used in the online evaluation of a visual odometry system. In this work, as in other works of similar scope, the 11 sequences without ground truth are ignored; instead, sequences 08, 09, and 10 are used for evaluation, and sequences 00 through 07 are used for training and fine-tuning the visual odometry system. The number of frames in each training and testing sequence is given in Table 3.1. Three example images from the dataset are shown in Figs. 3.3, 3.4, and 3.5. Examples of flow images, colored for visual representation, are shown in Figs. 3.6, 3.7, and 3.8; they show how pixels move for straight vehicle motion, left turns, and right turns, respectively. Although these are color images, they are shown only for human visualization: raw, two-channel optical flow images are used as input to the system.
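This two-stage design reads as flow estimation followed by pose regression. The following minimal sketch illustrates the data flow, assuming hypothetical flownet (a FlowNetS-style flow estimator) and pose_net (a pose regressor) modules; neither name nor interface comes from the source.

import torch

# flownet: maps a stacked image pair to a 2-channel flow map.
# pose_net: regresses a 6-DoF motion differential from that flow map.
def estimate_motion(flownet, pose_net, frame_prev, frame_curr):
    # FlowNetS-style networks take the two RGB frames stacked
    # along the channel axis as a single 6-channel input.
    pair = torch.cat([frame_prev, frame_curr], dim=1)  # (B, 6, H, W)
    with torch.no_grad():
        flow = flownet(pair)                           # (B, 2, H, W) raw flow
    # The raw two-channel flow, not a colorized rendering, is the VO input.
    return pose_net(flow)                              # (B, 6) motion differential

During training, the target motion differentials would come from the KITTI ground truth poses of sequences 00 through 07, as described above.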

Visual Odometry using Convolutional Neural Networks

The problem of visual odometry has been actively pursued over the past decade, as it is directly applicable to problems such as the creation of cost-effective self-driving cars, the advancement of mobile robotics, and even the improvement of augmented reality systems [10], [11], [12]. Visual odometry is the process of estimating the transformations of an agent using only onboard visual sensors [4], [11]. This project primarily focuses on the problem of monocular visual odometry, i.e., detecting agent motion with a single onboard camera. Thus far, several non-ML techniques, such as SVO, visual SLAM, and optical flow, have demonstrated reliable and accurate performance on standard datasets [10], [7], [9]. Convolutional neural networks have had great success in extracting complicated image features and performing well on robust object recognition tasks, irrespective of translational and rotational transformations and lighting conditions [3], [13], [4]. Significant work has also been done to improve their speed through both parallelization and hardware acceleration [14], which has the potential to make a CNN-based approach to visual odometry practical in real time.

Dynamic Attention-based Visual Odometry

A significant portion of research works [4], [10], [35] have introduced optical flow estimation into their VO models in recent years. Instead of directly feeding consecutive raw RGB frames into the VO models, flow maps can be used as inputs, since the displacements of pixels (and hence the movements of objects) between consecutive image frames can be better exploited by these models when estimating ego-motion. The authors in [4], [10] introduce the well-known FlowNet [36] in their VO modules, while the authors in [4] additionally introduce an autoencoder (AE) in their network architecture to enhance the flow representation. Architectures based on [36] that employ recurrent memory cells to learn the sequential dependencies and complex motion dynamics of an image sequence have also been investigated in [12], [13], [37]. Similar to [12], [13], a recent work that incorporates a cascade of multiple flow networks [29] followed by a number of recurrent neural network (RNN) cells and fully-connected (FC) layers is discussed in [11]. These works differ from DAVO in that their flow maps are fed directly into the pose estimation DCNNs, without any attention mechanism.
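The flow-then-recurrence pattern these works share can be summarized in a few lines. The sketch below is illustrative only: layer sizes and names are invented, and no attention mechanism is included, matching the works DAVO is contrasted against.

import torch
import torch.nn as nn

class RecurrentFlowVO(nn.Module):
    # Illustrative flow map -> CNN encoder -> LSTM -> FC pose regressor.
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(              # compress the 2-channel flow map
            nn.Conv2d(2, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 6)             # 6-DoF relative pose per step

    def forward(self, flows):                      # flows: (B, T, 2, H, W)
        B, T = flows.shape[:2]
        feats = self.encoder(flows.flatten(0, 1))  # (B*T, 32, 4, 4)
        out, _ = self.lstm(feats.flatten(1).view(B, T, -1))
        return self.fc(out)                        # (B, T, 6) pose sequence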

Visual Odometry Estimation Using Selective Features

In Moravec's stereo camera approach, the camera was mounted on a level slider with a fixed pose, so the setup had known epipolar geometry. The cameras' baseline distance was the length of the slider bar, and this information simplified the calculations. The main assumption is that neither the robot nor the surroundings move during the image-capturing stage. Once the images were captured, corners in one image were detected using Moravec's corner detector [9] and matched to the right image using NCC (normalized cross-correlation). These corners were then tracked into the next consecutive frame using optical flow, capturing the incremental motion of the robot. Variance in the overall flow and discrepancies in the neighboring pixel depth of the features can be used for outlier rejection. With a set of 3D points tracked between subsequent frames, a rigid body transformation is used to align the triangulated 3D points; a weighted least squares solution over the triangulated feature vectors was used to reduce the mean error when solving the equations obtained from the two sets of 3D points. Only after the camera captured the nine images and analyzed them for motion estimation would the robot move. The motion between image-capturing stages was minimal, so the speed at which the robot could travel was restricted; this was a major drawback. Moravec realized the stereo camera by setting up a single camera free to slide on an axis perpendicular to the scene being captured. Because the sliding is done at known distances and the images are captured by a single camera, they form stereo image pairs. This approach proved more accurate in terms of depth computation, as the stereo computation could be done over multiple images captured at discrete, known distances.
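The weighted rigid alignment step corresponds to the standard weighted Kabsch/Procrustes solution. The sketch below illustrates it under that assumption; variable names and the exact weighting are not taken from Moravec's formulation.

import numpy as np

def weighted_rigid_transform(P, Q, w):
    # Find R, t minimizing sum_i w_i * ||R @ P[i] + t - Q[i]||^2,
    # where P, Q are (N, 3) matched 3D points and w is an (N,) weight vector.
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)             # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - p_bar)).T @ (Q - q_bar)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

Down-weighting features with inconsistent flow or depth, as described above, makes the estimate robust to the remaining outliers.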

Learning and Searching Methods for Robust, Real-Time Visual Odometry.

Feature detectors have long been used in computer vision to focus processing on portions of the image with strong signal, or to represent the image abstractly with highly invariant image features. These detectors are typically categorized as corner, edge, or region detectors, and much work has been done in this domain. A few of the most well-known detectors include the Harris [19] and FAST [20] corner detectors, the Canny edge detector [25], and region detectors such as SIFT [26] and SURF [27]. Dense methods have also been a focus of much research, such as dense disparity estimation (dense stereo) and optical flow. Scharstein and Szeliski published the well-known Middlebury dense stereo datasets for evaluating dense stereo methods against ground truth [14, 15], including a review of approaches. Recent work by Newcombe et al. demonstrated monocular camera tracking with a dense representation on a GPU [47] and showed great resilience to motion blur, but heavy GPU usage may require too much power for use on small robotic platforms.
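For concreteness, two of the named detectors can be exercised with OpenCV's standard Python bindings; the file name here is a placeholder.

import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# FAST corner detector: thresholds intensity differences on a circle of pixels.
fast = cv2.FastFeatureDetector_create(threshold=25)
fast_kps = fast.detect(img, None)

# SIFT region detector: scale- and rotation-invariant keypoints plus descriptors.
sift = cv2.SIFT_create()
sift_kps, descriptors = sift.detectAndCompute(img, None)

print(f"FAST: {len(fast_kps)} corners, SIFT: {len(sift_kps)} keypoints")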

Learning Kalman Network: A deep monocular visual odometry for on-road driving

The pipeline of the proposed LKN-VO with 3D dense mapping is shown in Fig. 1. More specifically, the dense optical flow and depth are first obtained using FlowNet2 [44] and DepthNet [38], respectively. Subsequently, the LKN simultaneously estimates the ego-motion from the current measurement and filters the states from a sequence of measurements. The resulting sequence of filtered states, i.e., 6-DOF relative poses, can then be transformed into a global pose trajectory by the SE(3) composition layer [10]. Simultaneously, a point cloud is consistently generated from the estimated depth and incrementally mapped with the learned global pose. Furthermore, an octree depth fusion [43], [45] is employed for robust depth refinement, in which multi-view measurements are used to eliminate inaccurate predictions. Finally, a dense 3D map is obtained. As shown in Figs. 2 and 3, the LKN is a computation graph made up of a Kalman filter architecture with learned observation and transition models, which can be trained as a complete graph end to end. Note that only monocular RGB images are employed for localization and mapping.
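The SE(3) composition step is the simple part of the pipeline: each filtered relative pose is chained onto the running global pose. A minimal sketch, with 4x4 homogeneous matrices standing in for the paper's composition layer:

import numpy as np

def compose_trajectory(relative_poses):
    # relative_poses: list of 4x4 SE(3) matrices T_k, each the filtered
    # motion from frame k-1 to frame k (convention assumed, not the paper's).
    T_world = np.eye(4)                # start at the origin
    trajectory = [T_world.copy()]
    for T_rel in relative_poses:
        T_world = T_world @ T_rel      # chain each relative motion
        trajectory.append(T_world.copy())
    return trajectory

The resulting global poses are what the point cloud from the estimated depth is incrementally mapped with.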

Fast Uncertainty Estimation for Deep Learning Based Optical Flow

Abstract— We present a novel approach to reduce the processing time required to derive the estimation uncertainty map in deep learning-based optical flow determination methods. Without uncertainty-aware reasoning, an optical flow model can cause catastrophic failures, especially when it is used in mission-critical fields such as robotics and aerospace. Although several approaches, such as those based on Bayesian neural networks, have been proposed to handle this issue, they are computationally expensive. To speed up the processing time, our approach therefore applies a generative model, which is trained on input images and an uncertainty map derived through a Bayesian approach. Using synthetically generated images of spacecraft, we demonstrate that the trained generative model can produce the uncertainty map 100∼700 times faster than the conventional uncertainty estimation method used to train the generative model itself. We also show that the quality of the uncertainty map derived by the generative model is close to that of the original uncertainty map. With the proposed approach, a deep learning model operating in real time can avoid disastrous failures by taking the uncertainty into account, and can achieve better performance by removing uncertain portions of the prediction result.
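The abstract does not fix the Bayesian method used to produce the training-target uncertainty maps; a common choice for this kind of baseline is Monte Carlo dropout, sketched here purely for illustration.

import torch

def mc_dropout_uncertainty(flow_net, image_pair, n_samples=20):
    # flow_net: a flow network containing dropout layers.
    # image_pair: (B, 6, H, W) stacked consecutive frames.
    flow_net.train()                   # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([flow_net(image_pair) for _ in range(n_samples)])
    flow_net.eval()
    mean_flow = samples.mean(dim=0)              # (B, 2, H, W)
    uncertainty = samples.var(dim=0).sum(dim=1)  # (B, H, W) per-pixel variance
    return mean_flow, uncertainty

The n_samples forward passes are exactly the cost the paper's generative model avoids, which is where the reported 100∼700x speed-up comes from.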

Using Unsupervised Deep Learning Technique for Monocular Visual Odometry

ABSTRACT Deep learning-based visual odometry systems have recently shown promising results compared to feature matching-based methods. However, deep learning-based systems still require ground truth poses for training and additional knowledge to recover absolute scale from monocular images for reconstruction. To address these issues, this paper presents a novel visual odometry system based on a recurrent convolutional neural network. The system employs an unsupervised end-to-end training approach. Depth information of the scenes is used alongside the monocular images to train the network, in order to inject scale. Poses are inferred from monocular images alone, which makes the proposed visual odometry system a monocular one. Experiments are conducted, and the results show that the proposed method performs better than other monocular visual odometry systems. This paper makes two main contributions: 1) an unsupervised training framework in which camera ground truth poses are deployed only for system performance evaluation rather than for training, and 2) recovery of absolute scale without post-processing of the poses.

Deep learning for interference cancellation in non-orthogonal signal based optical communication systems

Highly increasing demands for data services push optical resource utilization to its limit. Improving spectral efficiency is therefore becoming a crucial requirement and is currently a hot research topic in optical communications. Because spectrally efficient signal waveforms introduce interference, complex interference compensation processing is required at the receiver. Alternative compensation algorithms may be based on learning methods, which can simplify signal processing without accurate mathematical modelling. Deep learning is a data-driven method that can learn interference from a large amount of data and mitigate its effects in communication systems. This work studied the impact of deep learning neural network architectures on a non-orthogonal signal waveform. The simulation results indicate the possibility of using neural networks to mitigate the ICI within the SEFDM signal waveform and to achieve a significant gain in signal-to-noise ratio compared to a typical hard decision detector. The aim of this work is to show the feasibility of using neural networks for signals with interference challenges. The results also clarify the correlation between neural network architectures and signal waveform characteristics: interference features can be modelled and extracted differently by different neural networks, and depending on the signal waveform characteristics, different neural networks lead to different performance. A joint design of neural networks and communication signal waveforms would therefore lead to an efficient system with a trade-off that optimizes error rate performance or complexity.

Advanced Machine Learning Approach: Deep Learning

The reports presented above illustrate that deep learning has a great deal of potential, but it must overcome a number of challenges before becoming a more versatile tool. Interest and enthusiasm for the field are nevertheless growing, and we already see remarkable real-world applications of this technology, such as the voices of Siri and Cortana, Google Photos' people-tagging feature, and Spotify's music recommendations.


Flow: Deep Reinforcement Learning for Control in SUMO

Flow supports the generation of a variety of starting vehicle arrangements, including uniformly spaced across the network and randomized within each lane. Custom initial configurations of vehicles are supported: a set of starting edges can be specified so that vehicles do not initially spread throughout the entire network but instead occupy a smaller section at greater density, and heterogeneous distributions of vehicles across lanes are supported as well. The order of vehicles can be shuffled to train policies capable of identifying and tracking vehicles across time; this shuffling can be set to occur once at the start of an experiment, or before each rollout to randomize the conditions in which the agent trains. To prevent instances of the simulation from terminating because the number of initial vehicles would lead to overlapping vehicles, a minimum gap parameter is implemented. This parameter ensures that the minimum bumper-to-bumper distance between two vehicles never drops below a certain threshold; Flow raises an error before an experiment begins if the density is too high to support this gap.

Vehicle Controller Design: Custom vehicle behavior is supported in Flow. Users can create car-following models of their choosing by instantiating an object corresponding to the model with a get_accel method that returns an acceleration for a vehicle. At each timestep, Flow fetches accelerations for each controlled vehicle using get_accel; these accelerations are then Euler-integrated to find the vehicle velocity at the next timestep, which is commanded using TraCI. The implementation of lane-changing controllers is supported as well, using objects corresponding to a lateral controller that define a get_action method returning a valid lane number. Target lanes are passed to a lane-change applicator function within the base env module, which uses TraCI to send a changeLane command. Desired lane-change behavior can be set on a per-vehicle basis by specifying the relevant 12-bit SUMO lane change mode value.
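The following sketch shows the controller pattern just described, with the Intelligent Driver Model (IDM) standing in for a user-defined car-following model; the class shape is illustrative rather than Flow's exact API.

import numpy as np

class IDMController:
    # Illustrative car-following model exposing a Flow-style get_accel().
    def __init__(self, v0=30.0, T=1.0, a=1.0, b=1.5, s0=2.0):
        self.v0, self.T, self.a, self.b, self.s0 = v0, T, a, b, s0

    def get_accel(self, v, v_lead, headway):
        # IDM: free-road term minus an interaction term from the desired gap.
        s_star = self.s0 + max(
            0.0, v * self.T + v * (v - v_lead) / (2 * np.sqrt(self.a * self.b)))
        return self.a * (1 - (v / self.v0) ** 4 - (s_star / headway) ** 2)

# Euler integration of the commanded acceleration over one simulation timestep.
dt, v = 0.1, 25.0
accel = IDMController().get_accel(v=v, v_lead=22.0, headway=15.0)
v_next = max(0.0, v + accel * dt)   # this velocity is then commanded via TraCI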

Deep Tweets Analyzer Model for Twitter Mood Visualization and Prediction Based Deep Learning Approach

decision-making. Data mining is an inter-disciplinary subfield of computer science representing the process of examining large pre-existing databases to generate new information; it is the practice of discovering patterns in large data sets using methods from machine learning, statistics, and database systems [2]. 'Web mining' is the application of data mining techniques to extract knowledge from the World Wide Web by discovering pertinent patterns; it can be divided into three types: Web usage mining, Web content mining, and Web structure mining [2], [3]. Visualization in social networks means presenting conceptual data so as to enhance a person's understanding and reveal hidden relations within the data. Visualization of web information has thus become indispensable for end users who want to obtain their preferred information easily, rapidly, and correctly from the extremely large Web [4]. Natural-language processing (NLP) is the area of computer science and artificial intelligence concerned with the interactions between computers and human languages, and in particular with how to program computers to process large amounts of natural language data effectively. Challenges in natural-language processing frequently involve speech recognition, natural-language understanding, and natural-language generation [2], [5]. Sentiment analysis (or opinion mining) is a term frequently used for arriving at a binary verdict: users like or dislike something, a product is good or bad, or someone is either with or against something. Sentiment analysis is the use of NLP, text analysis, and statistics to classify the 'emotional attitude' of a text as affirmative (positive), unenthusiastic (negative), or impartial (neutral) [4]-[8]. Machine learning is an artificial intelligence technique that simulates the way the human brain functions, aiming to furnish computers with intelligence. It is widely exploited in systems for knowledge discovery and data mining, referred to as knowledge-based systems and expert systems. Widely used machine learning techniques include the artificial neural network (ANN); the support vector machine (SVM) is a newer statistical machine learning and data mining tool [3], [5].

A Deep Learning Approach for Wi-Fi Based People Localization

Recently, a number of Wi-Fi based localization systems were proposed [1-3], [7-10]. A multi-person localization system was proposed by Adib et al. [1], who determined users' locations based on the reflections of Wi-Fi signals from their bodies; the results show that their system was able to localize up to five people at the same time with an average accuracy of 11.7 cm. Colone et al. [2] studied the use of Wi-Fi signals for people localization and conducted an ambiguity function analysis for Wi-Fi signals. They also studied the range resolution for both direct sequence spread spectrum (DSSS) and orthogonal frequency division multiplexing (OFDM) frames; in both the range and the Doppler dimensions, large sidelobes were detected, which explains the masking of closely spaced users. Chetty et al. [3] conducted an experiment in a high-clutter indoor environment using

Improved optical character recognition with deep learning

Furthermore, there is another challenge: the performance of a deep neural network can be affected by new training data distributions. For example, if a neural network is pre-trained to recognize good-quality 'O' characters and is then trained again on a different, 'broken' pattern of poor-quality 'O' characters, the weights adjusted in the network will be negatively affected by the new training data. Philippe Henniges et al. explain that training with over-represented class distributions causes the performance of a neural network to degrade [16]. Given these challenges, classification and training data distribution are the most crucial stages, and the central challenge, in this project. The aim of this work is to improve an OCR method with a deep learning network that applies the transfer learning concept and achieves high accuracy while keeping the training time short.
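The transfer learning recipe the passage refers to is typically implemented by freezing a pretrained backbone and retraining only the classification head, which also shields the learned low-level features from skewed or 'broken'-glyph training data. A sketch, with the backbone choice and class count as placeholders:

import torch.nn as nn
from torchvision import models

# An ImageNet-pretrained ResNet stands in as a placeholder backbone.
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():
    param.requires_grad = False       # freeze the pretrained features

# Only the new head adapts to the character classes, keeping training short.
num_classes = 36                      # e.g., A-Z plus 0-9
model.fc = nn.Linear(model.fc.in_features, num_classes)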

Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy

photoacoustic-computed tomography (PACT) and only suitable for cross-sectional B-scan images [5, 6]. Schwarz et al. [7] proposed an algorithm to correct motion artifacts between adjacent B-scan images for acoustic-resolution photoacoustic microscopy (AR-PAM). Unfortunately, the algorithm needs a dynamic reference, which is not feasible in high-resolution OR-PAM images. A method presented by Zhao et al. [8] has the capability of addressing these shortcomings but can only correct the dislocations along the direction of a slow-scanning axis. Recent methods based on deep learning have demonstrated state-of-the-art performance in many fields, such as natural language processing, audio recognition, and visual recognition [9–14]. Deep learning discovers an intricate structure by using a backpropagation algorithm to indicate how a network should change its internal parameters, which are used to compute the representation in each layer from that in the previous layer. A convolutional neural network (CNN) is a common model for deep learning in image processing [15]. In this study, we present a fully convolutional network [16] to correct motion artifacts in a maximum amplitude projection (MAP) image of OR-PAM instead of a volume. To evaluate the performance of this method, we conduct both simulation tests and in vivo experiments. The experimental results

SVO: Fast Semi-Direct Monocular Visual Odometry

The proposed Semi-Direct Visual Odometry (SVO) algorithm uses feature correspondence; however, feature correspondence is an implicit result of direct motion estimation rather than of explicit feature extraction and matching. Thus, feature extraction is only required when a keyframe is selected to initialize new 3D points (see Figure 1). The advantage is increased speed due to the lack of feature extraction at every frame and increased accuracy through subpixel feature correspondence. In contrast to previous direct methods, we use many (hundreds of) small patches rather than a few (tens of) large planar patches [18]–[21]. Using many small patches increases robustness and allows neglecting the patch normals. The proposed sparse model-based image alignment algorithm for motion estimation is related to model-based dense image alignment [8]–[10], [24]. However, we demonstrate that sparse depth information is sufficient to obtain a rough estimate of the motion and to find feature correspondences. As soon as feature correspondences and an initial estimate of the camera pose are established, the algorithm continues using only point features; hence the name "semi-direct". This switch allows us to rely on fast and established frameworks for bundle adjustment (e.g., [25]).
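The sparse model-based alignment objective can be sketched compactly; the notation below follows the common direct-method formulation rather than quoting the paper's exact equation. For sparse features \mathbf{u}_i with known depths d_i,

T_{k,k-1} = \arg\min_{T} \sum_{i \in \mathcal{S}} \left\| I_k\!\left( \pi\left( T\, \pi^{-1}(\mathbf{u}_i, d_i) \right) \right) - I_{k-1}(\mathbf{u}_i) \right\|^2,

where \pi projects a 3D point into the image, \pi^{-1} back-projects a pixel using its depth, and \mathcal{S} is the sparse set of patches. Minimizing over the relative pose T is what yields feature correspondence implicitly.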

Computationally-efficient visual inertial odometry for autonomous vehicle

computer vision projects. The observation of the same feature in three consecutive images yields geometric constraints between three camera poses. These constraints can be used to produce control commands for visual servoing [17]; the control law was simplified to obtain a fast trifocal control system while attaining global exponential stability and robust performance. In an application that helps a driver see through the vehicle ahead during overtaking maneuvers, Rameau et al. [121] used the TTG to filter out incorrect feature matches efficiently. This procedure also extracted the fundamental matrices and the camera trajectory needed to render the virtual objects, a process similar to some visual odometry applications [34, 93]. Moreover, their method [121] employed trifocal-tensor image synthesis and marker-based pose estimation to generate a seamless transparency effect from the rear car's viewpoint. Their implementation reduced the quantity of information communicated between vehicles and achieved good real-time performance. Overall, the TTG is employed as a straightforward computation without any recursive execution, a benefit that helps reduce the computational cost of hardware implementation.

Visual Scene Understanding by Deep Fisher Discriminant Learning

Visual scene understanding has always been a matter of interest to the computer vision community. The emergence of deep neural networks introduced automatic feature learning as a powerful approach to replace hand-crafted features across different tasks. Deep learning makes it possible to train models with huge numbers of parameters and to solve high-dimensional, non-convex optimization problems. One well-known statistical method for tackling high-dimensional spaces is discriminant analysis, which imposes better class separation. This thesis proposed Deep Fisher Discriminant Learning to link Fisher discrimination and deep learning, targeting semantic segmentation, texture classification, and object recognition as important challenges in visual scene understanding. The theoretical justifications, supported by experimental results, confirmed the advantage of the proposed algorithms in improving performance over various standard benchmarks in the literature.

Simultaneous monocular visual odometry and depth reconstruction with scale recovery

Visual odometry (VO) is a key technique for estimating camera poses by analyzing sequential camera images, and it has been used in a broad range of real-world applications in localization, mapping, and navigation for autonomous driving, robots, advanced driver assistance systems, and augmented reality. Geometric VO estimates camera poses by minimizing the projection error of three-dimensional (3D) points onto consecutive image planes, or by minimizing the gradients of pixel intensities across consecutive images [1]. Previous works show that geometric VO has achieved great success in structured and controlled environments. However,
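The projection-error objective mentioned here can be written out; the symbols below are assumed rather than quoted from the paper. Given tracked 3D points \mathbf{X}_i observed at pixels \mathbf{u}_i in the next frame,

\hat{T} = \arg\min_{T \in SE(3)} \sum_i \rho\left( \left\| \mathbf{u}_i - \pi\left(T\,\mathbf{X}_i\right) \right\|^2 \right),

where \pi is the camera projection function and \rho an optional robust loss; the direct (photometric) variant replaces the reprojection residual with pixel-intensity terms.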

Visual odometry in the wolf spider Lycosa tarantula (Araneae: Lycosidae)

In the absence of similar research conducted using spiders, we shall relate our results to those obtained for other walking arthropods. Similar studies with self-induced optic flow have been carried out using desert ants C. fortis (Ronacher and Wehner, 1995; Ronacher et al., 2000; Wittlinger and Wolf, 2013). Contrary to our results, optic flow in the lateral visual field of the ant is 'neither sufficient nor necessary for correct distance estimation' (Ronacher et al., 2000). Ronacher and Wehner carried out an experiment similar to this study, placing visual patterns on the substratum (Ronacher and Wehner, 1995). They trained ants using a stationary grating of 10 mm black-and-white stripes (λ = 20 mm) and then tested the ants using stationary patterns in which the stripe width was 5 or 20 mm. There was no significant influence of spatial frequency on the distance travelled. In addition, they covered the ventral half of the eyes of some of the ants and found that this condition did not affect the distance walked in a test channel. Similar results were obtained by Wittlinger and Wolf in a study in which they analyzed the effect of amputating two of the walking legs of C. fortis (Wittlinger and Wolf, 2013). That study included an experimental group in which the ventral half of the compound eye was covered, but the distance walked was not statistically different from that of the group in which the ventral half was uncovered.
