Sampling and Reconstruction

Network reconstruction via density sampling

Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large or small link densities. We then introduce in detail our core technique for reconstructing both the topology and the link weights of the unknown network. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.
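
A minimal sketch of the density-sampling idea described above, using networkx: the global link density is estimated by averaging the density of subgraphs induced by uniformly random node subsets. The function name, subset size and the Erdős–Rényi test graph are illustrative assumptions, not taken from the paper.

```python
import random
import networkx as nx

def sampled_density(G, subset_size, n_samples=100, seed=0):
    """Estimate the global link density of G from uniformly random node subsets."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    estimates = []
    for _ in range(n_samples):
        subset = rng.sample(nodes, subset_size)
        estimates.append(nx.density(G.subgraph(subset)))  # density of the induced subgraph
    return sum(estimates) / len(estimates)

# Example: an Erdos-Renyi graph, where random subsets are representative by construction
G = nx.gnp_random_graph(1000, 0.01, seed=1)
print(nx.density(G), sampled_density(G, subset_size=50))
```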

Reconstruction of Wireless UWB Pulses by Exponential Sampling Filter

The measurement and reconstruction of some classes of signals containing discontinuities such as impulses and edges is difficult [1,2]. Sampling methods have historically relied on Shannon's theorem [3]. The perfect reconstruction (PR) of the continuous signal from the sampled version requires that the signal is band limited, i.e., its frequency spectrum has a maximum frequency f_M. PR is possible only if the sampling frequency f_s ≥ 2 f_M. For example, signals in optical devices and radiation detectors are not band limited, and classical sampling approaches are not relevant for extracting the information.
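
As a concrete illustration of the Shannon condition quoted above (not of the paper's exponential sampling filter), the sketch below reconstructs a band-limited signal from its uniform samples by sinc interpolation; the signal parameters are made up for the example.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    n = np.arange(len(samples))
    return samples @ np.sinc(fs * t[None, :] - n[:, None])

f_M = 1.0                       # highest frequency of the test signal
fs = 2.5                        # sampling rate, comfortably above 2*f_M
t_n = np.arange(0, 4, 1 / fs)   # sampling instants
x_n = np.cos(2 * np.pi * f_M * t_n)      # band-limited test signal
t = np.linspace(0, 4, 1000)
x_hat = sinc_reconstruct(x_n, fs, t)     # close to cos(2*pi*f_M*t) away from the edges
```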

Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography

With current transducer array technology, one transducer can both transmit and receive ultrasound signals. Thus, when setting up the actual measurement configuration, depending on the imaging quality requirement, we can arrange the transducers on the measurement system so that the distance between two transducers is 1°, 2°, etc. If the distance between two transducers is small, we can arrange multiple transmitters and receivers on the measurement system so that we can reconstruct high-resolution images (i.e. a large number of pixels in the range of interest); conversely, if the distance between two transducers is large, the number of transmitters and receivers will be smaller, and we can only reconstruct low-resolution images. The number of transmitters and receivers has to be chosen in an acceptable range in order to reconstruct a good-enough image, i.e. 0.5 < r < 1. However, it is more reasonable to arrange the configuration so that the distance between two transducers is small, about 1°. With this arrangement, when we create the deterministic sequence of the DCS, the indexes of this sequence correspond to the positions of the transducers on the measurement system. This creates a random-like system, and thus ensures the reconstruction conditions of the compressed sampling technique [6, 7]. This set-up does not make the imaging process more complex. In fact, not all transducers in the measurement system work, only the transducers whose indexes coincide with those of the deterministic sequence. Therefore, the volume of calculation only depends on the number of active transducers on the measurement system.
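
A toy sketch of activating only the transducers whose indexes fall in a deterministic, random-like sequence. The logistic-map rule, the 360 positions at 1° spacing and the count of 90 active elements are illustrative assumptions; the specific DCS sequence of the paper is not reproduced here.

```python
import numpy as np

N, M = 360, 90          # transducer positions at 1 degree spacing; number kept active

# Hypothetical deterministic, random-like index rule (chaotic logistic map)
x, picked = 0.37, []
while len(picked) < M:
    x = 4.0 * x * (1.0 - x)      # logistic map in its chaotic regime
    k = int(N * x)               # map the state to a transducer index
    if k not in picked:
        picked.append(k)

active_angles = np.sort(np.array(picked, dtype=float))   # active positions in degrees
print(active_angles[:10])
```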

Applied Sampling and Reconstruction of Signals on the Sphere

We have conducted numerical experiments to show the reconstruction accuracy of any band-limited signal on the sphere for band-limits of interest in HRTF analysis for four different configurations of the sampling scheme. Errors on the order of machine precision are obtained for the configurations where samples are taken over the whole sphere, demonstrating that the scheme allows for flexibility in the placement of samples along longitude and has a hierarchical structure. An acceptable reconstruction error can be obtained when samples are taken from a spatially limited region (excluding the south polar cap region) on the sphere. We also show that the SHT can be carried out with manageable computational complexity. Simulations have been carried out to show that the proposed sampling scheme and SHT allow for accurate reconstruction of the HRTF over the whole sphere, including unmeasured locations, provided that a suitable band-limit is chosen. The future work includes the use of the proposed scheme for the acquisition of HRTF measurements and further investigation into improving the numerical accuracy for the configuration when samples on the sphere are not taken from the south polar cap region. The contributions made in this chapter are as follows.

Spatio-Temporally Coherent 3D Animation Reconstruction from Multi-view RGB-D Images using Landmark Sampling

We presented a method to reconstruct spatio-temporal 3D animation from dynamic point clouds using a color- and depth-based landmark sampling approach. We showed that data from multiple Kinects can be used to create a dynamic representation of a real-world object, merged together to capture the object from 360°. Our new method for background subtraction reliably separates the foreground dynamic object from the static background. Our system can incorporate any number of cameras, as we demonstrated that it works not only for data acquired using Kinects but also for data from a traditional acquisition system comprising color cameras. Our work leads to a number of exciting directions for the future. We plan to use Microsoft's new Kinect SDK to capture not only the depth but also the pose of the human actor. This information can greatly enhance the landmark sampling algorithm. In addition, we would also like to explore 3D surface reconstruction from the dynamic 3D point cloud data. The spatio-temporal 3D animation can also be used for motion analysis, compression and parameterization of the 3D video data.

Sampling and Reconstruction of Spherical Signals for Applications in Cosmology, Acoustics and Beyond

This thesis is predominantly focussed on the extension of existing spherical signal processing techniques in order to achieve accurate reconstruction of the signals on the sphere. In particular, we extend the optimal dimensionality sampling scheme [63] and propose a sampling scheme that requires the optimal number of samples for the representation of spin-s functions on the sphere (described in the next section). In addition, we analyze the iterative residual fitting (IRF) algorithm, which is a well-known reconstruction method when the data is not present on a pre-defined grid on the sphere. We also propose a new iterative extrapolation algorithm for when the samples are not accessible on the sphere. We use the Slepian eigenfunctions to design a spatially constrained anti-aliasing filter in order to mitigate the effects of spatial aliasing in acoustics. In the remainder of this chapter, we first review the previous work on the development of signal processing techniques for sampling and reconstructing signals on the sphere. Then we discuss the research problems considered in this thesis, and finally we provide a summary of our contributions and an outline of this thesis.

Shape Reconstruction of Unknown Targets Using Multifrequency Linear Sampling Method

In this paper, the shape reconstruction of dielectric and/or conducting objects is estimated using the linear sampling method (LSM) with multifrequency data. The LSM equation is formulated as a function of frequency, so that the target's support is estimated through the solution of a regularized linear inverse problem. As a result, the proposed approach requires only a one-time estimation of the regularization parameter. It has been observed that the estimated results are better than in the single-frequency case. This multifrequency approach cannot completely eliminate the drawbacks of each frequency, but it certainly produces a better estimation than the individual frequencies. The numerical results have been tested with synthetic data as well as experimental data for various types of objects.
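
To make the "regularized linear inverse problem with a single regularization parameter" concrete, here is a generic Tikhonov-regularized solve over far-field operators stacked across frequencies. The operator matrices, sizes and the value of alpha are placeholders, not the paper's actual discretization.

```python
import numpy as np

def multifreq_lsm_solve(F_list, phi_list, alpha):
    """Tikhonov solution of the stacked multifrequency LSM system.

    F_list:   far-field operator matrices, one per frequency
    phi_list: right-hand sides (test-function vectors), one per frequency
    alpha:    single regularization parameter shared by all frequencies
    """
    F = np.vstack(F_list)                          # stack the equations over frequencies
    phi = np.concatenate(phi_list)
    A = F.conj().T @ F + alpha * np.eye(F.shape[1])
    g = np.linalg.solve(A, F.conj().T @ phi)
    return g                                       # the LSM indicator is typically 1/||g|| per test point

# Placeholder example: random complex operators at two frequencies
rng = np.random.default_rng(0)
F_list = [rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32)) for _ in range(2)]
phi_list = [rng.standard_normal(64) + 1j * rng.standard_normal(64) for _ in range(2)]
g = multifreq_lsm_solve(F_list, phi_list, alpha=1e-3)
```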

Flexible Analog Front Ends of Reconfigurable Radios Based on Sampling and Reconstruction with Internal Filtering

Internal filtering performed by these structures allows removal of conventional antialiasing and reconstruction filters or their replacement by wideband low-selectivity filters realizable on a chip. This makes the AMP technology uniform and compatible with the IC technology. The RCs with internal filtering utilize the D/A output current more efficiently than conventional devices when bandpass reconstruction takes place. The SCs with internal antialiasing filtering accumulate signal energy in their storage capacitors during the sample mode. This accumulation filters out jitter and reduces the charging current of the storage capacitors by 20–40 dB in most cases. Reduced jitter enables the development of faster A/Ds. The decrease in the charging current lowers both the required gain of an AMP and its nonlinear distortions. The reduced AMP gain allows sampling close to the antenna. Smaller charging current also lowers the input voltage of the SCs. Indeed, although the same output voltage has to be provided by an SC with internal antialiasing filtering and a conventional SHA, the SC input voltage can be significantly lower when the integrator operational amplifier has an adequate gain. As mentioned in Section 2.1, a conventional SHA does not suppress out-of-band noise and IMPs of all the stages between the antialiasing filter and its capacitor. As a result of sampling, these noise and IMPs fall within the signal spectrum. The SCs with internal antialiasing filtering operate directly at the A/D input and reject out-of-band noise and IMPs of all preceding stages. Thus, they perform more effective antialiasing filtering than conventional structures.

MR Image Reconstruction from Pseudo-Hex Lattice Sampling Patterns Using Separable FFT

When these more general sampling patterns are employed in k-space, many standard signal processing algorithms, such as multirate filtering (e.g., decimation and interpolation), can be applied. One sampling pattern that can provide certain advantages is a hexagonal lattice, which has a more isotropic nature than a rectangular lattice. Consider the task of nearest-neighbor regridding. For the same sampling density, the maximum distance from any point in k-space to a lattice point is about 13% less in a hexagonal lattice than in a square lattice, because of the more circular nature of the pattern. However, use of a hexagonal lattice in k-space leads to a hexagonal lattice in the target image space; this in turn would require regridding after image reconstruction to a square lattice, which is normally required for display and other post-processing operations. A pseudo-hex lattice, on the other hand, is a rational lattice that approximates a hexagonal one, and can lead to a rectangular lattice in the target image space [4]. That is, L_V, the lattice of support in k-space,
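
A quick numerical check of the "about 13% less" figure quoted above: the covering radius (largest distance from any k-space location to the nearest lattice point) for square and hexagonal lattices of equal sampling density.

```python
import numpy as np

a = 1.0                                  # square lattice spacing -> one sample per a^2
r_square = a * np.sqrt(2) / 2            # circumradius of the square Voronoi cell

s = a * np.sqrt(2 / np.sqrt(3))          # hex nearest-neighbour spacing at equal density
r_hex = s / np.sqrt(3)                   # circumradius of the hexagonal Voronoi cell

print(r_square, r_hex, 1 - r_hex / r_square)   # reduction ~0.12, i.e. about 13% less
```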

Unmanned aerial vehicle field sampling and antenna pattern reconstruction using Bayesian compressed sensing

Variational Bayesian techniques or Markov chain Monte Carlo (MCMC) sampling methods can be employed to infer the unknown parameters and estimate the sparse vector s. The Bayesian approach has many advantages over the deterministic formulation. The main advantage is that, instead of a point estimate, we obtain a distribution for the unknown elements of s via MAP estimation, which is more accurate; this can give us an estimate of the reconstruction error and help in optimizing the measurement matrix to reduce the uncertainty of the estimated sparse vector. Furthermore, the noise posterior distribution can be inferred. As can be observed in the experiments, the Bayesian method outperforms other formulations in terms of the reconstruction error. Given these advantages, we propose a pattern reconstruction technique based on Bayesian compressed sensing. We adopt the formulation proposed in [19], where a hierarchical form of the Laplace distribution is assumed as a sparse prior imposed on s:
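
As a hedged illustration of recovering a sparse vector together with an uncertainty estimate, the sketch below uses scikit-learn's ARDRegression (a sparse Bayesian solver with Gaussian–Gamma priors) as a stand-in for the hierarchical Laplace prior of [19]; the problem sizes and the measurement matrix are synthetic.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(2)
N, M, K = 200, 60, 5                     # basis size, field samples, sparsity level

s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N))          # placeholder measurement/basis matrix
y = A @ s_true + 0.01 * rng.standard_normal(M)

ard = ARDRegression(fit_intercept=False).fit(A, y)
s_hat = ard.coef_                                   # point estimate of the sparse vector
y_hat, y_std = ard.predict(A, return_std=True)      # predictive mean and uncertainty
print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```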

Compressive Sensing based Leakage Sampling and Reconstruction: A First Study

The rapid increase in the bandwidth of cryptographic devices makes it difficult to sample, store and process leakages. In this paper, we introduce Compressive Sensing (CS), a new and highly efficient data sampling technology, for side-channel leakage sampling and compare it with classical sampling. Our experiments, performed on power traces obtained from an AT89S52 micro-controller and the DPA contest v1.1, clearly demonstrate that CS can use a sampling rate much lower than the original one to obtain equivalent sampling performance. It projects the original power traces onto the observation space and obtains observation samples of dimension far below the original one. CS transfers a large amount of computation from sampling devices to advanced processors, so that the compute-intensive signal reconstruction can be carried out quickly without distortion. In this paper, we only introduce the basic techniques of CS for leakage sampling and verify their superiority by experiments. There are many studies on sparse representation of signals, observation matrix design and signal reconstruction which could be applied to the leakage sampling problem. As such, we believe this work provides a new research direction in SCA, with many avenues for investigation and opportunities for further improvements.
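
A minimal end-to-end sketch of the CS pipeline described above (random projection of a trace, then sparse reconstruction), using a DCT dictionary and orthogonal matching pursuit from scikit-learn; the trace is synthetic and the dictionary choice is an assumption, not necessarily the paper's.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
N, M, K = 512, 128, 10                   # trace length, measurements, sparsity

coeffs = np.zeros(N)
coeffs[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
trace = idct(coeffs, norm='ortho')       # synthetic "power trace", sparse in the DCT domain

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # observation (measurement) matrix
y = Phi @ trace                                   # sub-Nyquist observation samples

Psi = idct(np.eye(N), axis=0, norm='ortho')       # DCT synthesis dictionary
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(Phi @ Psi, y)
trace_hat = Psi @ omp.coef_                       # reconstructed trace
print(np.linalg.norm(trace_hat - trace) / np.linalg.norm(trace))
```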

Maximum likelihood parametric reconstruction of forest vertical structure from inclined laser quadrat sampling.

Results for the three example stands are shown in Figure 1. Qualitatively, the structural results agree well with field observations, both in terms of the relative density and height distribution of canopy elements. The beta mixture model was flexible enough to capture the roughly unimodal vertical distribution of tree and tall shrub canopies at Lyngsdalselva and Koppangen and the denser, more continuous canopy at Mefjordvaer, along with the dense low understory layer that was present at all three sites. Site conditions prevented destructive sampling (e.g. vertically stratified clip-and-weigh techniques; [14]). However, the profiles for overstory trees and shrubs conform well to those predicted using a basal area-weighted allocation of canopy elements to the live canopy interval of conventionally measured trees (not shown).

Optimal signal reconstruction : quantification and graphical representation of optimal signal reconstructions

So 54.3% of the system's energy is preserved by sampling and reconstruction if a sampling period h = 4 in combination with the optimal hold is used. In Example 3.3.1 the preserved energy with optimal HS was 65.0%, and now with this fixed sampler 54.3% is the best one can do. Another explanation of the formula above is that if the input signal is white noise (see Definition A.1.11) then 54.3% of its power is preserved by sampling and reconstruction with the optimal hold in combination with this fixed sampler and a sampling period h = 4.

Sampling and Reconstruction of Ordered Sets in PCIe 3.0

ABSTRACT: Serial protocols like PCI Express and USB have evolved over the years to provide very high operating speeds and throughput. This evolution has resulted in their physical layer protocol becoming very complex. One of the important tasks in the physical layer of PCIe 3.0 is the monitoring and sampling of the different Ordered Sets and Data Packets that come from the different layers.

Sampling and Reconstruction of Ordered Sets in PCIe 3.0

The LTSSM has been designed and verified using the UVM methodology. The verification architecture is as shown in Fig 3(a). The MAC driver has the driving LTSSM, which keeps track of the state machine transitions, whereas the MAC monitor observes the arrival and sampling of the data packets and ordered sets from the upper layers at the transmit side of the MAC agent. The receive side of the MAC agent samples the data packets and ordered sets from the PIPE interface and drives them to the upper layers. The data packets are scrambled and descrambled in the MAC agents. The transmit side of the PIPE agent encodes and serializes the data packets onto the PIPE interface. The receive side of the PIPE agent decodes and deserializes the data packets from the PIPE interface.

Cascaded reconstruction network for compressive image sensing

In traditional Nyquist sampling theory, the sampling rate must be at least twice the signal bandwidth in order to reconstruct the original signal losslessly. In contrast, compressive sensing (CS) theory is a signal acquisition paradigm which can sample a signal at sub-Nyquist rates and still achieve high-quality recovery [1]. Later, Gan et al. proposed block compressed sensing to reduce the algorithm's computational complexity and avoid directly applying CS to images of large size [2]. Due to its excellent sampling performance, CS has already been widely used in a great number of fields, such as communication, signal processing, etc.
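
A small sketch of the block compressive sensing idea mentioned above: each image block is measured with the same random matrix, and a naive pseudo-inverse recovery stands in for the cascaded reconstruction network. Block size, sampling rate and the Gaussian measurement matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
B, rate = 16, 0.25                                  # block size and sampling rate
m = int(rate * B * B)                               # measurements per block
Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)  # measurement matrix shared by all blocks

def sample_blocks(img):
    """Apply the same block measurement matrix to every BxB block of img."""
    H, W = img.shape
    return np.stack([Phi @ img[i:i + B, j:j + B].reshape(-1)
                     for i in range(0, H, B) for j in range(0, W, B)])

def naive_recover(measurements, H, W):
    """Minimum-norm per-block recovery; a learned network would replace this step."""
    pinv = np.linalg.pinv(Phi)
    out, k = np.zeros((H, W)), 0
    for i in range(0, H, B):
        for j in range(0, W, B):
            out[i:i + B, j:j + B] = (pinv @ measurements[k]).reshape(B, B)
            k += 1
    return out

img = rng.random((64, 64))                          # stand-in image (dimensions divisible by B)
recon = naive_recover(sample_blocks(img), 64, 64)
```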

Removal of Angular Sampling Artefacts in Respiratory Tracking of 4-D Self-Gated Sequential MR Imaging.

These image reconstruction experiments are mainly intended to show that the image quality obtained with the new angular sampling increments improves, or at least does not worsen, compared with that of the golden angular increments. To this end, a 640 × 640 Shepp-Logan phantom MRI image has been used for analyzing the performance of these angles. The image has been sampled radially at the angular increments (∆Φ = 61.18° and 72.3099°, respectively) for various durations (100, 200, 500 and 2000 time samples, respectively). Once the k-space profiles are generated from the image, they are input to the NUFFT subroutine to generate the forward model of the frequency transform operator. This forward model was in turn used to generate the inverse Fourier transform in order to reconstruct the image from the sparsely sampled k-space data. The lack of knowledge of the forward model for the phantom (water) dataset recorded using the MRI scanner led us to choose a synthetic dataset (the Shepp-Logan MRI head image) for this study. Moreover, the spatial sensitivity maps for the individual channels are unknown and could not be accounted for in the recorded dataset, making the synthetic dataset (the Shepp-Logan image) an ideal choice to work with.
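
To illustrate what sampling radially at a fixed angular increment means, and how evenly the resulting spokes cover k-space, the sketch below generates the spoke angles for the increments quoted above and reports the largest angular gap after a given number of spokes. The gap metric and the standard golden-angle reference value (111.246°) are added here for comparison and are not taken from the excerpt.

```python
import numpy as np

def spoke_angles(delta_phi, n_spokes):
    """Angles (degrees, modulo 180) of radial k-space spokes for a fixed increment."""
    return np.sort((np.arange(n_spokes) * delta_phi) % 180.0)

def largest_gap(angles):
    """Largest angular gap between adjacent spokes (wrapping around at 180 degrees)."""
    gaps = np.diff(np.concatenate([angles, [angles[0] + 180.0]]))
    return gaps.max()

for dphi in (61.18, 72.3099, 111.246):   # last value: the standard golden angle, for reference
    for n in (100, 200, 500, 2000):
        gap = largest_gap(spoke_angles(dphi, n))
        print(f"increment {dphi:8.4f} deg, {n:5d} spokes -> max gap {gap:.3f} deg")
```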

Reconstruction of Nonuniformly Sampled Bandlimited Signals by Means of Time-Varying Discrete-Time FIR Filters

derived earlier in [7], but there is a difference between the formulation in that paper and the one to be derived in this paper. The advantage of using the FB formulation in this paper is that it enables one to directly make use of the design procedure derived for the general case where the sampling is not periodically nonuniform. In this way, the filter orders can be reduced compared to those of the practical filters suggested up to now [11, 12] for approximating the ideal ones in [7] (as will be demonstrated in Examples 5 and 8). Finally, it is also pointed out that a special class of fractional-delay reconstructing filters was proposed in [13] for solving the same problem. The advantage of that approach is that the filters need not be redesigned in case the sampling pattern is changed, which is an advantage in real-time implementations. The drawback of the approach in [13] is that a certain amount of additional oversampling must be used. The approach in this paper (as well as that in [12]) overcomes this drawback, but it should be noted that the time-varying filters must be redesigned if the periodic nonuniform sampling pattern is changed during operation for whatever reason.

Digital Watermarking Using DCT & DWT Technique

These sets have a lower bandwidth than the image, so they are down-sampled. Reconstruction of the image is done by up-sampling, filtering and summing the sub-bands. The decomposed image is divided into four sub-bands: one part is the low-frequency approximation of the original image, written as LL1; the bottom-left block contains the vertical details of the original image, written as LH1; the top-right block contains the horizontal details of the image, written as HL1; and the bottom-right block contains the high-frequency details of the original image, written as HH1. In DWT decomposition the image size must be a multiple of 2^n, where n denotes the number of levels. The essential information of the original image lies in the low-frequency coefficients, so the watermark is embedded in the low-frequency coefficients [8], and the IDWT is used for the removal of the watermark [9].
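
A concrete sketch of embedding a watermark in the LL1 sub-band and recovering it via the inverse transform, using PyWavelets' dwt2/idwt2. The additive embedding rule and the strength alpha are illustrative assumptions, not the exact scheme of [8, 9].

```python
import numpy as np
import pywt

def embed_watermark(image, watermark, alpha=0.05, wavelet='haar'):
    """Embed a watermark additively in the LL1 (low-frequency) sub-band."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), wavelet)
    wm = np.resize(watermark.astype(float), LL.shape)     # match the sub-band size
    LL_marked = LL + alpha * wm
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

def extract_watermark(marked, original, alpha=0.05, wavelet='haar'):
    """Non-blind extraction: compare LL1 sub-bands of marked and original images."""
    LL_m, _ = pywt.dwt2(marked.astype(float), wavelet)
    LL_o, _ = pywt.dwt2(original.astype(float), wavelet)
    return (LL_m - LL_o) / alpha

# Toy usage with random arrays standing in for the host image and the watermark
img = np.random.default_rng(0).random((256, 256))
wm = np.random.default_rng(1).random((128, 128))
marked = embed_watermark(img, wm)
wm_hat = extract_watermark(marked, img)
```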

Parameter estimation and error calibration for multi-channel beam-steering SAR systems

Abstract: Multi-channel beam-steering synthetic aperture radar (multi-channel BS-SAR) can achieve high resolution and wide-swath observations by combining beam-steering technology and azimuth multi-channel technology. Various imaging algorithms have been proposed for multi-channel BS-SAR but the associated parameter estimation and error calibration have received little attention. This paper focuses on errors in the main parameters in multi-channel BS-SAR (the derotation rate and constant Doppler centroid) and phase inconsistency errors. These errors can significantly reduce image quality by causing coarser resolution, radiometric degradation, and appearance of ghost targets. Accurate derotation rate estimation is important to remove the spectrum aliasing caused by beam steering, and spectrum reconstruction for multi-channel sampling requires an accurate estimate of the constant Doppler centroid and phase inconsistency errors. The time shift and scaling effect of the derotation error on the azimuth spectrum are analyzed in this paper. A method to estimate the derotation rate is presented, based on time shifting, and integrated with estimation of the constant Doppler centroid. Since the Doppler histories of azimuth targets are space-variant in multi-channel BS-SAR, the conventional estimation methods of phase inconsistency errors do not work, and we present a novel method based on minimum entropy to estimate and correct these errors. Simulations validate the proposed error estimation methods.
