adaptive object-based reconstruction

Adaptive deformation correction of depth from defocus for object reconstruction

It is not necessary for one of the images to be captured with a pinhole camera setting, which generates very large diffraction. In Subbarao's method [3], the two images are captured with any two different known camera parameter settings. The standard deviations of the Gaussian PSFs that correspond to the two images are then determined. In both methods the depth is estimated using an inverse filtering technique, i.e., the PSF parameters are first determined from the two captured images and then used to obtain depth. One major problem in passive DfD is the shift-variance of the PSF, so different PSFs are used for different pairs of sub-images. Building on the work in [3], a Markov Random Field is used in [4] to model the intensity and depth value of every image pixel, so that the PSF can be modified by considering the neighbouring pixels. The method thus enforces smoothness while preserving discontinuities in depth. Finally, a maximum a posteriori function is maximised using simulated annealing to obtain the optimal depth estimate.
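
As a rough illustration of the Gaussian-PSF model these methods rely on, the relative blur between the two images can be estimated from the ratio of their spectra: for Gaussian PSFs, ln|I1(f)/I2(f)| = -2*pi^2*(sigma1^2 - sigma2^2)*|f|^2. The sketch below fits that relation by least squares; it is illustrative only, not Subbarao's exact procedure, and converting sigma1^2 - sigma2^2 into depth still requires the known camera parameters.

```python
import numpy as np

def relative_blur(img1, img2, eps=1e-8):
    """Estimate sigma1^2 - sigma2^2 between two registered, differently
    defocused images of the same scene, assuming Gaussian PSFs."""
    r = np.log(np.abs(np.fft.fft2(img1)) + eps) \
        - np.log(np.abs(np.fft.fft2(img2)) + eps)   # log spectral ratio
    fy = np.fft.fftfreq(img1.shape[0])[:, None]     # cycles/pixel
    fx = np.fft.fftfreq(img1.shape[1])[None, :]
    f2 = fx**2 + fy**2
    mask = f2 > 0                                   # skip the DC term
    # least-squares fit of r = -2*pi^2 * (s1^2 - s2^2) * f2
    return -np.sum(r[mask] * f2[mask]) / (2 * np.pi**2 * np.sum(f2[mask]**2))
```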

A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity

A series of numerical simulations was carried out to validate the proposed DDTV-based algorithm. To validate the superiority of the DDTV-based algorithm in adaptive directional sensitivity, two kinds of texture images with various directions were chosen as the simulation phantoms. Circular scanning with different sampling points was simulated, and the DDTV results were compared with those of FBP and TV. Then the Shepp–Logan image, which is often adopted to assess image reconstruction algorithms, was used to verify the effectiveness of the DDTV algorithm quantitatively and qualitatively through the circular, limited-view, and linear scanning options. The PSNR, convergence speed, and robustness of the FBP, TV and DDTV algorithms were also analyzed and compared. Finally, several medical images were used to test the universality of the algorithm. The adaptive tunable parameter for λ, which was proposed in [31] for the TV-based algorithm, was used in this study. In this case, the initial λ value was set to 2 for the first iteration and decreased to 0.2 when the iteration number exceeded 10. The iteration time of 10 was set for all cases under study, wherein λ was relatively large at the beginning of the iterations and decreased as the iterations continued, which provided a good balance between the two parts of the objective function. This adaptive tunable parameter proved to be the most effective for the iteration time of 10 [31]. The parameter λ for DDTV was set to maximize the PSNRs of the reconstructed results. In fact, λ determines the weight of the DDTV term in the optimization, and a large value implies that the DDTV term is dominant. This would result in quicker convergence of the algorithm, but too large a value of λ breaks the balance between the two parts of the objective function. Images reconstructed with too large a λ would differ greatly from the true ones, because the data fidelity of the reconstruction is sacrificed to image regularity. Based on this criterion, a moderate value of λ, neither too large nor too small, is preferred.
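
A minimal sketch of that λ schedule inside a generic TV-regularized loop is shown below. It uses a plain (isotropic) TV proximal step from scikit-image rather than the directional DDTV term, and the forward/adjoint operators A/At are placeholders for the photoacoustic system model, so treat it as an assumption-level illustration only.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_reconstruct(A, At, y, shape, n_iter=20, step=1.0):
    """Illustrative TV-regularized reconstruction with the adaptive
    lambda schedule described above (lambda = 2 early, 0.2 after the
    10th iteration). A/At are hypothetical forward/adjoint operators;
    this is NOT the DDTV algorithm, which additionally steers the TV
    term along locally estimated directions."""
    x = np.zeros(shape)
    for k in range(1, n_iter + 1):
        lam = 2.0 if k <= 10 else 0.2
        x = x - step * At(A(x) - y)                     # data-fidelity gradient step
        x = denoise_tv_chambolle(x, weight=lam * step)  # TV proximal step
    return x
```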

Fast Adaptive Reconstruction Algorithm for Line Spectrum Pair Parameters Based on Compressed Sensing

In the corpus, we select a number of male and female voice signals with different pronunciation contents as the coding objects. The speech signal is read in turn to determine the type of each subframe; the superframe type is determined from n consecutive subframe types, and superframes of the same type are saved into the same voice file. Using the algorithm proposed in this paper, the voice files storing the same type of superframe are read out, and the superframe line spectrum frequency parameters are encoded and decoded. In the reconstruction process, the parameter is adjusted in units of …
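
The superframe grouping step described above might look like the following sketch; the subframe labels and the framing parameter n are hypothetical, since the excerpt does not specify them.

```python
def group_superframes(subframe_types, n):
    """Form superframes from n consecutive subframe types and bucket
    superframes of the same type together (hypothetical helper that
    mirrors the preprocessing described above)."""
    buckets = {}
    for i in range(0, len(subframe_types) - n + 1, n):
        sf_type = tuple(subframe_types[i:i + n])   # superframe type
        buckets.setdefault(sf_type, []).append(i)  # start index of superframe
    return buckets

# e.g. subframe types: 'v' = voiced, 'u' = unvoiced (assumed labels)
print(group_superframes(list("vvuvvuuv"), n=2))
```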

Adaptive Object Oriented Software pdf

Adaptive programming, realized by the use of propagation patterns, extends the object-oriented paradigm by lifting programming to a higher level of abstraction. In their simplest form, which also turns out to be the worst in terms of adaptiveness, adaptive programs are nothing more than conventional object-oriented programs, where no traversal is used and where every class gets a method explicitly. But, for a large number of applications, represented by related customizers, nothing has to be done to an adaptive program to select the conventional object-oriented program corresponding to any of the customizers. Moreover, when changes to an adaptive program are indeed necessary, they are considerably easier to incorporate given the ability that adaptive programs offer to specify only those elements that are essential and to specify them in a way that allows them to adapt to new environments. This means that the flexibility of object-oriented programs can be significantly improved by expressing them as adaptive programs, which specify them by minimizing their dependency on their class structures.
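
The core idea, behaviour attached only to the classes that matter while a generic traversal follows the class structure, can be hinted at with a small reflective sketch. This is an assumption-laden toy, not the Demeter propagation-pattern machinery itself.

```python
class Company:
    def __init__(self, dept): self.dept = dept
class Dept:
    def __init__(self, salary): self.salary = salary
class Salary:
    def __init__(self, amount): self.amount = amount

def traverse(obj, visitor, seen=None):
    """Generic traversal over the object structure; behaviour is added
    only where a visit_<Class> method exists on the visitor."""
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return
    seen.add(id(obj))
    hook = getattr(visitor, "visit_" + type(obj).__name__, None)
    if hook:
        hook(obj)
    for value in vars(obj).values():       # follow the class structure
        if hasattr(value, "__dict__"):
            traverse(value, visitor, seen)

class SalaryTotal:                         # only the essential class is named
    def __init__(self): self.total = 0.0
    def visit_Salary(self, s): self.total += s.amount

v = SalaryTotal()
traverse(Company(Dept(Salary(1200.0))), v)
print(v.total)                             # 1200.0
```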

Adaptive Feature Fusion Object Tracking with Kernelized Correlation Filters

The correlation-filter object tracker MOSSE uses the gray-level feature as the object descriptor, but gray-level features are easily affected by environmental factors such as noise and illumination, so the robustness of tracking is low. Subsequently, HOG (Histograms of Oriented Gradients) was adopted as the object-tracking feature. The HOG feature [9] is constructed by computing and accumulating histograms of oriented gradients over local regions of the image, which capture the local appearance of the object. Object local appearance and shape are well described by the distribution of local gradients and edges, and the descriptor is largely insensitive to illumination changes and other such factors.
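
A short, hedged example of extracting a HOG descriptor with scikit-image follows; the parameter values are common defaults chosen for illustration, not necessarily those used in [9].

```python
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())
descriptor = hog(
    image,
    orientations=9,          # bins of the gradient-orientation histogram
    pixels_per_cell=(8, 8),  # local region over which each histogram is built
    cells_per_block=(2, 2),  # blocks used for contrast normalisation
)
print(descriptor.shape)      # one long feature vector for the whole patch
```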

Volume 2, Issue 7, July 2013 Page 453

Shadow areas are viewed as unwanted information that affects the quality of an image. When a light source illuminates an object, a shadow is observed on the opposite side of the object. This paper deals with how to detect and reconstruct the shadow regions. The shadow regions are not only detected but also classified using classifiers such as SVM. In the detection process we observe that shadow and non-shadow regions are separated. After reconstruction, some jagged information may remain in the shadow areas, and some noisy data may be left behind in the reconstructed area. This area can be smoothed using morphological filters such as contrast-sharpening and smoothing filters. The main aim of this paper is to reconstruct the shadow area without any information loss.
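
A minimal sketch of the SVM classification step is given below, assuming simple per-region features (mean intensity and saturation) invented here for illustration; the excerpt does not specify the paper's actual feature set or training data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training set: [mean intensity, mean saturation] per region.
X_train = np.array([[0.15, 0.60], [0.20, 0.55],   # dark regions (shadow)
                    [0.70, 0.30], [0.80, 0.25]])  # bright regions (non-shadow)
y_train = np.array([1, 1, 0, 0])                  # 1 = shadow

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.predict([[0.18, 0.58]]))                # -> [1], classified as shadow
```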

Null broadening adaptive beamforming based on covariance matrix reconstruction and similarity constraint

Null broadening algorithms can solve the problems described above and avoid the additional complexity of continuously updating the adaptive weight vector. In [12], the approach to robust beamforming with null widening imparts robustness into the adaptive pattern by judicious choice of null placement and width, through the concept of a covariance matrix taper (CMT); however, the performance degradation of the method is evident, because the sidelobes are relatively high and the null depth becomes somewhat shallower when the null width is broadened. Several null widening techniques based on matrix tapers have been proposed to overcome the pattern distortion resulting from nonstationary interference [13, 14]. Nevertheless, these methods are essentially similar and attain comparable performance in output SINR. In [15], a beamforming framework was proposed based on a set of beam-pattern shaping constraints; this method enjoys adaptive interference-rejection capability, controls the direct sidelobe, and achieves robustness against steering-direction errors with magnitude response constraints. However, these constraints consume adaptive degrees of freedom (DOFs), trading off the output SINR. Multiparametric quadratic programming for the covariance-matrix-taper minimum variance distortionless response beamformer was proposed in [16] to resolve the null broadening and sidelobe control problem. Nevertheless, the sidelobe domain constraint noticeably broadens the mainlobe of the beam pattern, which decreases the array gain.
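
A small sketch of the covariance matrix taper idea is given below, using the well-known sinc-style taper applied as an element-wise product; the taper width delta is illustrative, and this should be read as the generic CMT principle rather than the exact formulation of any cited paper.

```python
import numpy as np

def cmt_taper(R, delta):
    """Covariance matrix taper for null broadening: element (m, n) of
    the taper is sinc(delta * (m - n)) and the tapered covariance is
    the Hadamard product R o T. delta (in units of element spacing)
    controls the broadened null width; its value is an assumption."""
    idx = np.arange(R.shape[0])
    T = np.sinc(delta * (idx[:, None] - idx[None, :]))  # np.sinc = sin(pi x)/(pi x)
    return R * T                                        # element-wise product

# MVDR weights with the tapered covariance R_t and steering vector a:
#   w = inv(R_t) @ a / (a.conj().T @ inv(R_t) @ a)
```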

Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares

In order to reduce the radiation dose, one direct way is to lower the mAs level in CT data acquisition protocols. However, this approach results in an insufficient number of X-ray photons received by the detectors and hence increases the quantum noise level. This is a great challenge for the advanced methods that take noise models into account. For example, PWLS (penalized weighted least-squares) based methods [2] can only deal with noise-contaminated sinogram data to some extent. As a consequence, the radiation dose cannot be reduced substantially by this approach if the reconstructed images need to be qualified for clinical diagnosis. Another way to reduce the imaging dose is to decrease the number of X-ray projections by sampling fewer angles. Yet this leads to serious streaking artifacts in images reconstructed by analytic algorithms such as FBP (filtered backprojection) [3], since analytic algorithms require the number of projections to satisfy the Shannon/Nyquist sampling theorem [4].
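
The sparse-view effect can be reproduced in a few lines of scikit-image by reconstructing the Shepp–Logan phantom by FBP from many versus few projection angles; this demo only illustrates the artifact and has nothing to do with the paper's dictionary-learning method.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
for n_angles in (180, 20):                        # dense vs. sparse sampling
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)          # simulated projections
    recon = iradon(sinogram, theta=theta, filter_name="ramp")  # FBP
    rmse = np.sqrt(np.mean((recon - image) ** 2))
    print(f"{n_angles:3d} views -> RMSE {rmse:.4f}")  # few views: streaks, higher error
```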

Proposed Design for 3D Map Generation using UAV

The obtained images are analyzed and the control points within each image are extracted. As there are many images of a single object taken from different perspectives, the similar images are then overlapped. These steps, however, introduce noisy and redundant data into the images. The images then undergo a filtering process, which makes them more precise and accurate. The point cloud of the object is then generated by analyzing the control points. The point cloud is the basic building block for 3D map generation. Pix4DMapper is used to perform the above process.
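
The control-point step can be illustrated with OpenCV feature matching between two overlapping views; the file names are hypothetical, and the actual pipeline in this design is performed by Pix4DMapper rather than by this sketch.

```python
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical UAV images
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)          # candidate control points
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test filters the noisy/redundant matches mentioned above.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "tie points between the two views")
```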

An adaptive approach for Linux memory analysis based on kernel code reconstruction

Memory forensics plays an important role in security and forensic investigations. Hence, numerous studies have investigated Windows memory forensics, and considerable progress has been made. In contrast, research on Linux memory forensics is relatively sparse, and the current knowledge does not meet the requirements of forensic investigators. Existing solutions are not especially sophisticated, and their complicated operation and limited treatment range are unsatisfactory. This paper describes an adaptive approach for Linux memory analysis that can automatically identify the kernel version and recover symbol information from an image. In particular, given a memory image or a memory snapshot without any additional information, the proposed technique can automatically reconstruct the kernel code, identify the kernel version, recover symbol table files, and extract live system information. Experimental results indicate that our method runs satisfactorily across a wide range of operating system versions.
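
One ingredient of kernel-version identification can be sketched very simply: scanning the raw image for the familiar "Linux version ..." banner string. This toy scan is an assumption-level illustration, not the paper's kernel-code-reconstruction method, which does considerably more.

```python
import re

BANNER = re.compile(rb"Linux version (\d+\.\d+\.\d+\S*)")

def find_kernel_versions(image_path, chunk=1 << 24, overlap=256):
    """Scan a raw memory image in chunks (with a small overlap so a
    banner spanning a chunk boundary is not missed) and collect every
    kernel-version string found."""
    hits = set()
    with open(image_path, "rb") as f:
        tail = b""
        while True:
            block = f.read(chunk)
            if not block:
                break
            for m in BANNER.finditer(tail + block):
                hits.add(m.group(1).decode("ascii", "replace"))
            tail = block[-overlap:]
    return sorted(hits)

# print(find_kernel_versions("memdump.raw"))  # hypothetical image file
```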

SURVEY ON INFORMATION EXTRACTION FROM CHEMICAL COMPOUND LITERATURES: TECHNIQUES AND CHALLENGES

Video segmentation decomposes a video into many frames throughout the sequence. Video segmentation is applied in fields such as robotics, video surveillance, traffic monitoring and video indexing. In motion segmentation, groups of pixels that follow analogous motion are segmented together. A video is a series of frames (pictures) displayed sequentially at a fixed rate; all the frames in a video file have equal size, the video contains a continuous series of 25 frames per second, and all processing techniques are applied to the frames. A video segmentation technique accepts a video as input, and the processed output is data extracted from the input video or a new video. It is a technique used for detecting frame changes in a video. Video segmentation is classified into the following types: shape-based, edge-based, color-based and texture-based video segmentation. This work is based on shape-based video segmentation. In [1][2], mathematical morphological techniques are used in image processing, with operations such as dilation, erosion, opening and closing. Set theoretic, shape
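
The morphological operations mentioned in [1][2] can be sketched with OpenCV on a synthetic binary mask; the kernel size and shape here are arbitrary choices for illustration.

```python
import cv2
import numpy as np

mask = np.zeros((120, 120), np.uint8)
cv2.circle(mask, (60, 60), 35, 255, -1)                   # synthetic object mask

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
dilated = cv2.dilate(mask, kernel)                        # grow shapes
eroded = cv2.erode(mask, kernel)                          # shrink shapes
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small specks
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
```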

3D object reconstruction using multiple views

Automatic salient feature extraction is an essential task for the analysis of video streams in video surveillance systems. Of particular interest for the purpose of this study is the specific case of determining the features of faces (around 30×30 pixels) in a low resolution environment using multiple camera views, where the general face detectors proposed (79; 80; 81; 86) do not perform well. Recent developments in the field of face detection involve the use of local informative descriptors such as Haar wavelets in Viola and Jones (81), SIFT (Scale Invariant Feature Transform) in Lowe (46) and SURF (Speeded Up Robust Feature) in Bay et al. (3). The use of local descriptors versus global ones usually ensures the system a certain natural robustness to partial occlusion. Moreover, adequate normalisation of the descriptors allows them to be invariant to some geometrical transformations like rotation, scale changes or illumination. These are interesting properties for the detection of an object appearing at different scales or orientations in the images.

MouldingNet: Deep-learning for 3D Object Reconstruction

With the rise of deep neural networks, a number of approaches for learning over 3D data have gained popularity. In this paper, we take advantage of one of these approaches, bilateral convolutional layers, to propose a novel end-to-end deep auto-encoder architecture to efficiently encode and reconstruct 3D point clouds. Bilateral convolutional layers project the input point cloud onto an even tessellation of a hyperplane in the (d + 1)-dimensional space known as the permutohedral lattice and perform convolutions over this representation. In contrast to existing point-cloud-based learning approaches, this allows us to learn over the underlying geometry of the object to create a robust global descriptor. We demonstrate its accuracy by evaluating on the ShapeNet and ModelNet datasets, in order to illustrate two main scenarios: known and unknown object reconstruction. These experiments show that our network generalises well from seen classes to unseen classes.

II. RELATED WORK TESTING DISTRIBUTED SYSTEMS

Interaction is a natural property of a distributed system, and testing can exploit it to focus on the interesting part of the system while ignoring the rest. Once the selected interaction property is given, the scale of the problem is reduced. In this paper, testing based on interaction properties is considered, and the definitions related to all the test work are given. The inputs and outputs in an interaction property are selected not only randomly but also purposely. Meanwhile, in order to check whether the whole interesting work is finished correctly, several interactions are considered, from test generation and test verdict to test implementation. An algorithm is proposed to generate executable test sequences, and its complexity is fully analyzed. The advantages are that the pertinence of the test work is enhanced, so the scale of the problem is reduced, and the deployment of the test work is considered at the same time. The coverage of a test sequence is discussed and the verdict method is given. Future work is algorithm optimization, since we hope to find an algorithm that covers as many interesting transitions as possible while requiring the minimum number of executable test sequences. Moreover, if the selected PCOs (Points of Control and Observation) cannot be deployed in the expected positions, where to deploy them also needs to be considered.

Semantically Coherent 4D Scene Flow of Dynamic Scenes

Simultaneous semantically coherent object-based long-term 4D scene flow estimation, co-segmentation and reconstruction is proposed exploiting the coherence in semantic class labels both spatially, between views at a single time instant, and temporally, between widely spaced time instants of dynamic objects with similar shape and appearance. In this paper we propose a framework for spatially and temporally coherent semantic 4D scene flow of general dynamic scenes from multiple view videos captured with a network of static or moving cameras. Semantic coherence results in improved 4D scene flow estimation, segmentation and reconstruction for complex dynamic scenes. Semantic tracklets are introduced to robustly initialize the scene flow in the joint estimation and enforce temporal coherence in 4D flow, semantic labelling and reconstruction between widely spaced instances of dynamic objects. Tracklets of dynamic objects enable unsupervised learning of long-term flow, appearance and shape priors that are exploited in semantically coherent 4D scene flow estimation, co-segmentation and reconstruction. Comprehensive performance evaluation against state-of-the-art techniques on challenging indoor and outdoor sequences with hand-held moving cameras shows improved accuracy in 4D scene flow, segmentation, temporally coherent semantic labelling, and reconstruction of dynamic scenes.

Robust Adaptive Beamforming Based on Covariance Matrix Reconstruction for Look Direction Mismatch

During the past decade, several approaches, such as imposing multiple gain constraints in different directions in the vicinity of the presumed SV [1], diagonal loading [2], placing derivative constraints on the presumed SV [3] and eigenspace-based approaches [4], have been proposed to improve the robustness of adaptive beamformer design. Diagonal loading [2] has the advantage of being invariant to the type of mismatch, but the choice of the loading factor is not obvious. The authors in [5–7] proposed the robust Capon beamformer (RCB), based on the idea of allowing the presumed SV to lie within a sphere whose radius determines the uncertainty level; these approaches calculate the loading factor efficiently. Other variable-loading-based robust beamformers have also been proposed in [8, 9]. Motivated by a similar idea, the authors in [10, 11] introduced different optimization formulations and proposed solutions based on semi-definite programming. Recently, iterative approaches have been considered for the robust adaptive beamforming problem. Hassanien et al. [12] proposed a new optimization formulation to find the SV error, solved iteratively using sequential quadratic programming (SQP). Later, Gu et al. proposed pre-estimating the covariance matrix prior to solving the optimization [13], where the pre-estimated covariance matrix has the diagonal loading form. In [14], the authors propose robust adaptive beamforming based on the eigenstructure method to cancel the desired signal in a linearly constrained beamformer with imperfect arrays. The authors in [15] proposed an iterative RCB (IRCB) with an adaptive uncertainty level, where in each iteration the estimated steering vector is updated based on the re-adjusted uncertainty level.
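
To make the classic baseline concrete, below is a minimal diagonally loaded MVDR weight computation in the spirit of [2]; the loading heuristic is an assumption for illustration, since, as noted above, the optimal loading factor is not obvious.

```python
import numpy as np

def dl_mvdr_weights(R, a, loading=None):
    """Diagonally loaded MVDR beamformer: add gamma*I to the sample
    covariance R before inversion, then form the distortionless
    response toward the presumed steering vector a. The default gamma
    is a rule-of-thumb guess, not an optimal choice."""
    N = R.shape[0]
    if loading is None:
        loading = 0.1 * np.real(np.trace(R)) / N   # heuristic gamma
    Rl = R + loading * np.eye(N)
    w = np.linalg.solve(Rl, a)                     # inv(Rl) @ a without explicit inverse
    return w / (a.conj().T @ w)                    # unit gain toward a
```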

Compressed Sensing Reconstruction based on Adaptive Scale Parameter using Texture Feature

In CS, random projection matrices are commonly used, but structured matrices can be more efficient, reducing the number of required measurements or improving the reconstruction performance [9][10]. For fast encoding and decoding, structured matrices have been applied to hardware implementations of CS to achieve efficient storage, which is the focus of this work [11][12]; the cost of these constructions, however, is very poor recovery conditions. A matrix is called sparse when most of its m x n elements are zero-valued; its sparsity is the number of zero-valued elements divided by the total number of elements, and a matrix in which most elements are non-zero is considered dense. Sparse data is naturally easy to compress, so it requires significantly less storage, and it is not practical to work with very large sparse matrices using standard dense-matrix algorithms. A sparse projection matrix therefore allows the data to be stored efficiently, and its {-1, +1} non-zero entries give fast computation during acquisition. When the sparse projection matrix is combined with conventional reconstruction algorithms, the joint mechanism of acquisition and reconstruction gives better results in image recovery.
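
A sparse {-1, +1} projection matrix of the kind described above can be sketched with scipy.sparse; the dimensions and the number of non-zeros per column are illustrative assumptions.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
m, n, d = 64, 256, 8          # measurements, signal length, nonzeros per column

# d random rows per column, each entry +1 or -1, stored in compressed form.
rows = np.concatenate([rng.choice(m, d, replace=False) for _ in range(n)])
cols = np.repeat(np.arange(n), d)
vals = rng.choice([-1.0, 1.0], size=n * d)
Phi = sparse.csc_matrix((vals, (rows, cols)), shape=(m, n))

x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse signal
y = Phi @ x                   # fast acquisition: only nnz multiply-adds
print(Phi.nnz / (m * n))      # density, here 0.125
```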

Influence of Ultra Low Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT

The CNR change was also supported by the subjective scoring. Scores were lower with a decreasing dose and higher for the standard kernel than for the bone kernel. The ON showed better subjective visibility than the IRM. On maxillofacial CT, images are usually provided in only the bone kernel. With the bone kernel, the orbital soft tissues were highly visible at the reference dose; however, they were blurred and invisible at ultra-low doses. The standard kernel, which is typically used only for soft-tissue imaging, such as in oncology, significantly improved visibility. ASIR-50 had no remarkable effect over FBP on subjective scores, and ASIR-100 showed only a small effect. In agreement with other researchers, an insufficient number of photons may not be compensated for by simply increasing the iterative reconstruction strength [21]. MBIR showed the best subjective score in all images.

A Novel Approach for Super Resolution in Medical Imaging

In almost every application, it is desirable to generate an image that has a very high resolution. A high-resolution image could contribute to better classification of regions in a multi-spectral image, to more accurate localization of a tumour in a medical image, or to a more pleasing view in high definition television (HDTV) or web-based images. The resolution of an image depends on the resolution of the image acquisition device. However, as the resolution of the image generated by a device increases, so does the cost of the device, and hence it may not be an affordable solution. Therefore, our work emphasizes avoiding the hardware-upgrade solution, which is more costly and complex.

Four-Dimensional Object-Space Data Reconstruction Using Spatial-Spectral Multiplexing.

One of the most straightforward passive sensors for remote sensing is the human eye [10]. Human beings have been eager to know about the Earth and space since ancient times. Not limited to their eyes, humans began to make use of external tools, such as telescopes, to visualize areas outside their field of view. The invention of photography was another milestone in the history of remote sensing. Using photography, humans tried to observe the ground and take photographs from greater heights, on mountains or in balloons. During this initial period, remote sensing was still referred to as aerial photography [22]. Afterwards, the technique of aerial photography steadily evolved due to improvements in platforms, camera hardware and photo interpretation techniques. The rapid growth of interest in remote sensing started after the successful launch of artificial satellites. To date, there are three main platforms in remote sensing: ground-based, airborne and spaceborne [14]. As the names imply, they are classified by their height above the Earth's surface. Remote sensors are mounted on these platforms to detect the electromagnetic energy reflected from the Earth's surface, which can be mapped to prior knowledge of surface features [16]. Multispectral images are among the most common images acquired by remote sensors. In recent years, further improvements in sensors and satellites have greatly increased the quality of data and the accuracy of mapping by providing not only multispectral images but also hyperspectral images [25].
