noising with the NL-means filter [7], aims at averaging only those pixels that share the same statistical properties, enforcing in this way the stationarity hypothesis on the averaged group. Many nonlocal denoising algorithms have been proposed in the last few years. The Block Matching 3D (BM3D) algorithm [16] deserves a special mention because it blends very effectively the non-local approach with other sophisticated tools (e.g., wavelet transforms, Wiener filtering) to achieve the best performance to date for images affected by additive white Gaussian noise (AWGN). The non-local approach, however, makes no assumptions on the noise model, and hence has been readily extended to other types of images and tasks, and in particular to SAR image despeckling. An iterative block-wise version of NLM, the probability patch-based (PPB) algorithm, was first proposed in [19], followed soon by a SAR-oriented version of BM3D [84]. Of course, while the Euclidean distance makes perfect sense for measuring block similarity in the additive Gaussian noise case, a different measure is needed for speckled SAR images. The problem is solved in [19], where a probability-based similarity measure is developed, adopted in [84] as well. Just as in the AWGN case, non-local techniques look extremely promising for SAR amplitude despeckling, as well as for filtering interferometric, polarimetric, and POL-InSAR data [18]. For this last kind of filter, the topography does not impair the estimation as long as a set of similar pixels (statistically homogeneous pixels) is found, ensuring in this way the validity of the WSS hypothesis among the averaged pixels; the local WSS hypothesis is no longer a limitation.
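
The patch-based averaging idea above can be sketched in a few lines. The following is a minimal 1-D illustration, not any cited implementation; the window sizes, filtering parameter h, and function name are our own choices:

```python
import math

def nl_means_1d(signal, patch=1, search=5, h=0.5):
    """Minimal 1-D NL-means sketch: each sample is replaced by a
    weighted average of samples whose surrounding patches look alike.
    Weights decay exponentially with the patch Euclidean distance."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared Euclidean distance between the patches around i and j
            d2 = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Samples in statistically homogeneous regions receive large weights and are averaged together, while dissimilar patches contribute almost nothing, which is precisely how the stationarity hypothesis is enforced on the averaged group.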

In addition, many works on baseline estimation beyond the traditional methods have been carried out. Jin et al. [11] proposed an initial baseline estimation for InSAR based on the flat-earth phase. A calibration estimation method [12] that uses co-registration offsets and a non-linear least-squares algorithm was proposed, but its vertical baseline is inaccurate. Two ENVISAT SLC (Single Look Complex) images covering Tibet, China, were used to test this method. For the vertical effective baseline, Chen et al. [13] proposed an InSAR baseline estimation based on the frequency-shift theory. There is also an estimation method based on subspace projection [14]. It is highly robust to the unwrapped phase and suppresses noise well, but it requires information about certain points on the ground. A new approach to interferometric calibration was proposed [15,16], based on the idea of maximizing the correlation between the original complex interferogram and reference values of it obtained from ground control points. Its main advantage with respect to traditional techniques is that it does not require the phase to be unwrapped in advance, although it is computationally more demanding than traditional techniques.

We assess predictive performance in high-dimensional gene expression data. Calon et al. (2012) used mouse experiments to identify 172 genes potentially related to the gene TGFB, and showed that these were related to colon cancer progression in an independent data set with n = 262 human patients. TGFB plays a crucial role in colon cancer, and it is important to understand its relation to other genes. Our goal is to predict TGFB in the human data, first using only the p = 172 genes and then adding 10,000 extra genes selected randomly from the 18,178 genes with distinct Entrez identifiers contained in the experiment. Their absolute Pearson correlations with the 172 genes ranged from 0 to 0.892, with 95% of them in (0.003, 0.309). Both response and predictors were standardized to zero mean and unit variance (data and R code in the Supplementary Material). We assessed predictive performance via the leave-one-out cross-validated R² coefficient between predictions and observations. For Bayesian methods we report the posterior expected number of variables in the model (i.e., the mean number of predictors used by BMA), and for SCAD and LASSO the number of selected variables.
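
The leave-one-out cross-validated R² can be computed as the squared Pearson correlation between each held-out observation and its prediction from the remaining data. A minimal sketch, with a placeholder mean predictor standing in for the actual regression methods compared in the study:

```python
def loocv_r2(y, predict):
    """Leave-one-out cross-validated R^2: for each observation, build a
    prediction from the remaining ones, then return the squared Pearson
    correlation between predictions and observations."""
    preds = [predict(y[:i] + y[i + 1:]) for i in range(len(y))]
    n = len(y)
    my, mp = sum(y) / n, sum(preds) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y, preds))
    vy = sum((a - my) ** 2 for a in y)
    vp = sum((b - mp) ** 2 for b in preds)
    return cov * cov / (vy * vp)
```

In the actual application, `predict` would fit BMA, SCAD, or LASSO on the n - 1 training rows and their gene covariates rather than on the responses alone.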

Finally, to further improve the Rician noise estimation required to apply bias correction to the NL-PCA filter and to provide accurate noise estimation to the PRI-NL-PCA method, we apply a low-pass filter to the estimated noise map. We do so to further regularize the estimated spatially varying noise field and produce a more realistic noise map (the noise field is expected to be slowly varying). We used a kernel size of 15 mm³ for that purpose. In the case that the noise is found to be constant across the entire image, the average of all local estimations can be used to provide a more accurate estimate. To detect this homogeneity condition we used the coefficient of variation (CoV) of the estimated noise field (the stationarity condition was met for CoV < 0.15). In Table 1 we summarize the described NL-PCA and PRI-NL-PCA methods.
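
The CoV-based homogeneity test described above can be sketched as follows (the function name is ours; the 0.15 threshold is the one stated in the text):

```python
import math

def noise_is_stationary(noise_map, cov_threshold=0.15):
    """If the coefficient of variation (std/mean) of the local noise
    estimates is below the threshold, treat the noise field as
    constant and return their average as the single global estimate."""
    n = len(noise_map)
    mean = sum(noise_map) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in noise_map) / n)
    return (std / mean < cov_threshold), mean
```

When the test passes, the averaged value replaces the spatially varying map; otherwise the low-pass-filtered local estimates are kept.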

behavior of the estimation. In some cases, the correction produced extremely large changes to the REML estimates of the percentiles, often reversing the direction of the lower and upper bounds of the prediction intervals. This can perhaps be attributed to the REML estimates hitting the bounds set in the optimization algorithm, and also to the fact that the theory is based on asymptotics and may be unreliable for small samples. In the cases where the Laplace approximation produced unreliable estimates, specifically where the approximation led to lower bounds for the prediction interval that were larger than the upper bounds, the REML percentile values were used for the empirical coverage probabilities. An area for future study is the cause of, and adjustment for, the sometimes erratic behavior of the Laplace approximation. If a suitable correction can be found, it is reasonable to expect that the empirical coverage probabilities may improve even beyond the improvements over the REML coverage probabilities seen here.

In this paper we have proposed a new approach to efficiently estimate high-dimensional non-linear non-Gaussian state space models. Due to the general applicability of the proposed approach, it will prove useful in a wide range of applications. We extend the recently developed precision-based samplers (Chan and Jeliazkov, 2009b; McCausland, Miller, and Pelletier, 2011) and sparse matrix procedures to build fast, efficient samplers for these non-linear models. We develop a practical way to sample the model parameters and the states jointly, to circumvent the problem of high autocorrelations in high-dimensional settings. This approach uses the cross-entropy method (Rubinstein and Kroese, 2004) to obtain the optimal candidate densities q(θ | y). We show via an empirical example that the efficiency of the sampling scheme is substantially improved by drawing the parameters and states jointly. Three samplers are presented, each with virtues in different circumstances. Finally, we apply these techniques to a TVP-VAR in which one of the variables is restricted to be strictly positive. Using this framework, we investigate the implications for the transmission of monetary shocks of accounting for the zero lower bound (ZLB) on interest rates.

ventricular mass index are likely to occur in most subgroups of pulmonary hypertension (with the exception of left heart disease), as they are related to the pressure differential between the left and right ventricles. The measures of pulmonary arterial structure and function (size and relative area change) are likely to be transferable across subgroups, as they are markers of pulmonary vascular compliance and remodelling. We therefore feel that it is reasonable to use these models in specific subgroups of pulmonary hypertension, although validation, such as in this paper, would be useful. The models used in this paper use parameters that are stated with a degree of precision (for example, an offset of 21.806 for CMR-PA/RV) likely greater than is required for this purpose. We have maintained the equations in their published form to reduce any bias in the calculations, but it is likely that fewer decimal places could be used for these parameters for the prediction of outcome and the presence of pulmonary hypertension in COPD.

ABSTRACT: Photosynthesis is the process through which plants produce food for themselves. Chlorophyll is the most important substance required for photosynthesis, as well as one of the most important biochemical parameters of plants; it is usually an indicator of a plant's nutritional status, photosynthetic capacity, and overall health, which is why it is an important information parameter in research on crop quality monitoring, ecosystem productivity estimation, carbon cycles, etc. In this paper we study non-destructive methods to determine chlorophyll content and concentration in different plants. Reflectance measurement makes it possible to quickly and non-destructively assess the chlorophyll content in leaves. Chlorophyll is a pigment that has a clear impact on the spectral responses of plants, mainly in the visible portion of the spectrum.

We can thus summarize the salient features of this exercise as follows. First, the REC10 approach always improves on the QML one, whatever the model and the subset of parameters investigated. Second, REC10 dominates in 3 of the 4 Total RMSE comparisons investigated, mostly because it provides estimates of the volatility parameters with smaller estimation errors. The results obtained for the distribution's parameter do not support such sharp conclusions: even though the ML estimates remain the most precise, the difference between the REC10 estimates and the ML ones remains small in every case. A notable exception is the GH-EGARCH case, with a lower Distribution RMSE for REC10 than for ML. This overall good performance of the REC10 approach comes at different costs. First, the rate of convergence is lower: in the EGARCH cases it drops to 50%, while in the APARCH case it falls to 20%. The APARCH model requires the estimation of a parameter that is the power to which the volatility process should be modeled, and this kind of parameter is typically more difficult to estimate than the usual linear GARCH parameters. Moreover, the time to estimate the parameters using the recursive approach is bound to be longer than for QML; it is around 6 to 7 times the QML estimation time. The interesting point is that ML remains a faster method than the recursive one. Hence, the gain in estimation quality obtained with the REC10 method comes with a lower rate of convergence and a longer estimation time.

Nonlinear system identification is a fast-evolving field of research, with contributions from different communities, such as the mechanical engineering, systems and control, and civil engineering communities [1]. Many identification methods have been developed over the last years, for a wide variety of model structures. These methods can be classed into two sets. In the first set, the identification procedure is transformed into a state estimation problem by discretizing the differential equations into discrete state equations and treating the parameters as state variables. In the second set, identification of the nonlinear parameters from the measured data is formulated as an inverse problem and is often carried out by solving an optimization problem. Various techniques have then been proposed to deal with the state estimation problem or the optimization problem [2].
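
As a toy instance of the second set of methods, the inverse problem can be posed as minimizing a prediction-error cost; here it is solved by a simple grid search (the one-parameter model and all names are illustrative, not taken from the cited references):

```python
def identify_decay(xs, grid):
    """Toy inverse-problem identification: choose the parameter a that
    minimizes the squared one-step prediction error of the discrete
    model x[k+1] = a * x[k]."""
    def cost(a):
        return sum((xs[k + 1] - a * xs[k]) ** 2 for k in range(len(xs) - 1))
    return min(grid, key=cost)

# data simulated from the model with a = 0.9 (illustrative values)
data = [1.0]
for _ in range(20):
    data.append(0.9 * data[-1])

a_hat = identify_decay(data, [i / 100 for i in range(101)])
```

Real identification problems replace the grid search with gradient-based or stochastic optimizers, and the first set of methods would instead append `a` to the state vector and run a nonlinear state estimator.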

Multicollinearity is a problem associated with strong intercorrelation among the explanatory variables of a linear regression model, and it is often encountered in the social sciences [1, 2]. Solving this problem has attracted, and is still attracting, the attention of many researchers because of the challenges it poses for parameter estimation and hypothesis testing when the Ordinary Least Squares (OLS) estimator is used. These challenges include imprecise estimates, large standard errors, non-conformity of the signs of the regression coefficient estimates with prior expectations, and insignificance of the true regression coefficients [2-4]. Various estimation methods developed to overcome this problem include the Ridge Regression estimator [5, 6], estimators based on Principal Component Regression [7-9], and estimators based on Partial Least Squares [10-12].
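
For illustration, a minimal closed-form ridge estimator for two predictors, showing how the penalty stabilizes the near-singular X'X that strong intercorrelation produces (a sketch under our own simplifications, not any cited implementation):

```python
def ridge_2d(X, y, lam):
    """Ridge estimator (X'X + lam*I)^{-1} X'y for two predictors;
    lam > 0 shrinks the coefficients and keeps the 2x2 system
    well-conditioned even when the columns of X are highly correlated."""
    s11 = sum(r[0] * r[0] for r in X) + lam
    s12 = sum(r[0] * r[1] for r in X)
    s22 = sum(r[1] * r[1] for r in X) + lam
    t1 = sum(r[0] * yi for r, yi in zip(X, y))
    t2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s11 * s22 - s12 * s12  # 2x2 inverse via the adjugate
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With lam = 0 this reduces to OLS; increasing lam trades a little bias for a large reduction in the variance of the estimates, which is exactly the remedy ridge regression offers against multicollinearity.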

concept of support vector machines (SVM) that was developed by Vapnik [41]. SVR has recently become competitive with the best available regression techniques, and it has now evolved into an active area of research. A comprehensive tutorial on SVR has been published in [42]. SVR has two important properties. First, it has better generalization ability than competing techniques because it chooses the maximal-margin hyperplane, thus minimizing the risk of over-fitting [38]. Second, it supports efficient learning of highly nonlinear functions by applying the kernel trick [41]. Given these two properties, SVR is expected to give better prediction results than ANNs. The objective is to estimate the parameters of a function that best fits the covariates of the training data. Such a function approximates all pairs while keeping the differences between estimated values and real values within a certain precision [40]. The kernel trick transforms the input space (the set of covariates) into a high-dimensional feature space, so that the SVR in this space becomes a nonlinear function of the original covariates. There are several kernels, and choosing one is application-dependent (it depends on the task at hand). The commonly used kernels are the polynomial kernel, the radial basis function (RBF) kernel, and the sigmoid kernel. In this paper we tested and applied the RBF kernel, which gave the best results.
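
A minimal sketch of the RBF kernel mentioned above (gamma is the usual width parameter; the function name is our own):

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """RBF (Gaussian) kernel k(x, z) = exp(-gamma * ||x - z||^2).
    Via the kernel trick, SVR fits a nonlinear function of the
    covariates while solving a linear problem in feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))
```

The kernel equals 1 when the two inputs coincide and decays smoothly with their distance, so nearby training points dominate the prediction.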

Schlumberger array. The inversion was performed in RES2DINV (Loke 2000), and the geostatistical calculations were made using the free software SGEMS. To facilitate the calculations, we established a local coordinate system: the Y-axis coincides with the western boundary of the plot, and the Z-axis follows the height system from the borehole drilling. The system is shown in Fig. 6.

Doppler ambiguity occurs frequently in mm-wave SAR, in which the Doppler frequency of a moving target is often larger than the pulse repetition frequency (PRF). In [15], a ground moving target indication (GMTI) method based on a dual-frequency SAR was proposed. By choosing co-prime wavelengths, range velocities below 110 m/s can be uniquely determined. In [16], an unambiguous slant-range velocity estimation algorithm and two other methods for azimuth compression were proposed. The resolution of carrier-phase ambiguity is also an important issue in global navigation satellite systems (GNSS). In [17], a three-carrier ambiguity resolution (TCAR) method was proposed that utilizes a linear combination of signals at different frequencies.
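
The idea behind dual-frequency ambiguity resolution can be illustrated with a brute-force search over the ambiguity integers. All numerical values below are hypothetical, chosen only to show the mechanism; they are not the parameters used in [15]:

```python
def resolve_velocity(f1, f2, lam1, lam2, prf, v_max, tol=1e-6):
    """Sketch: the Doppler frequency 2*v/lam is observed only modulo
    the PRF. With two suitably chosen wavelengths, the radial velocity
    in [-v_max, v_max] consistent with BOTH folded measurements is
    unique, so a search over the ambiguity integers recovers it."""
    n_max = int(2 * v_max / (min(lam1, lam2) * prf)) + 1
    # all velocities consistent with the first folded measurement
    cands1 = {round((f1 + n * prf) * lam1 / 2, 9)
              for n in range(-n_max, n_max + 1)}
    # find the one also consistent with the second measurement
    for n in range(-n_max, n_max + 1):
        v = (f2 + n * prf) * lam2 / 2
        if abs(v) <= v_max and any(abs(v - c) < tol for c in cands1):
            return v
    return None

# hypothetical example: true radial velocity 37 m/s, PRF 2 kHz,
# wavelengths ~3.9 mm and 5 mm (illustrative values only)
lam1, lam2, prf = 0.0039, 0.005, 2000.0
f1 = (2 * 37.0 / lam1) % prf   # folded Doppler at wavelength 1
f2 = (2 * 37.0 / lam2) % prf   # folded Doppler at wavelength 2
v_hat = resolve_velocity(f1, f2, lam1, lam2, prf, v_max=60.0)
```

Each wavelength alone admits a ladder of candidate velocities spaced by lam*PRF/2; choosing wavelengths whose ladders coincide only once inside the velocity window is what makes the estimate unambiguous.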

The mutation operator is based on mutation in biology. Mutation causes a gene to change its value to another value with some probability. Common types of mutation are boundary, non-uniform, uniform, Gaussian, and Cauchy. The uniform, Gaussian, and Cauchy mutations are named after the distributions of the random numbers used to generate the new values of the mutated genes. Cauchy mutation (Fig. 2) is used in the algorithm tested in this paper.
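
Cauchy mutation can be sketched via inverse-CDF sampling. This is a generic illustration, not the exact operator of the tested algorithm; the scale parameter and names are our own choices:

```python
import math
import random

def cauchy_mutate(gene, scale=0.1, rng=random):
    """Perturb a gene with a heavy-tailed Cauchy draw, generated from
    a uniform variate by inverse-CDF (tan) sampling. Large jumps occur
    far more often than with a Gaussian, helping escape local optima."""
    u = rng.random()
    step = scale * math.tan(math.pi * (u - 0.5))  # standard Cauchy draw
    return gene + step
```

The heavy tails are the point of the design choice: most mutations stay near the current value, but occasional very large steps keep the population exploring.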

average measurement under specific image quality and flow conditions, but recent advances have been made in the field of PIV uncertainty. Wilson and Smith [25] compared PIV velocity fields to the known particle displacement for a rectangular jet flow. For a specific combination of four parameters (displacement, shear, particle image density, and particle image diameter), the four-dimensional uncertainty response was determined, termed the "uncertainty surface". Flow gradients, large particle images, and insufficient particle image displacements resulted in elevated measurements of turbulence levels.

Nonlocal Methods: If both the scene and camera are static, we can simply take multiple pictures and use their mean to remove the noise. This method is impractical for a single image, but a temporal mean can be emulated by a spatial mean, as long as there are enough similar patterns in the single image. We can find the patterns similar to a query patch and take their mean, or other statistics, to estimate the true pixel value, e.g., as in [1, 6]. A more rigorous formulation of this approach is through sparse coding of the noisy input [12].

The motion of tectonic plates is usually parameterized on a sphere through an Euler vector or an Euler pole (DeMets et al., 1990; Altamimi et al., 2002; Sella et al., 2002). Such a parameterization is also useful to compare the estimates from different sources, such as space geodetic measurements, hot-spot tracks, transform fault azimuths, the spreading rates of ocean ridges, and earthquake slip vectors (Gripp and Gordon, 1990, 2002; Argus and Gordon, 1991; DeMets et al., 1994, 2010). However, all the non-geodetic methods give, in fact, only a relative measure of the plate motions. Even the direct geodetic measurements of plate motions depend on the underlying reference frame (Altamimi et al., 2002; Kreemer et al., 2003; Prawirodirdjo and Bock, 2004). Since only the relative motion of the plates can be directly observed, they are often referenced with respect to either a specific plate or a global plate circuit, called no-net-rotation. Thus, the Euler vectors are necessary to compute the global plate circuit closure and to quantify the relative motions of tectonic plates.
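
The surface velocity predicted by an Euler vector at a site is the cross product v = ω × r. A minimal sketch under our own simplifying assumptions (spherical Earth, Euler vector given in Cartesian components in rad/yr):

```python
import math

def plate_velocity(omega, lat_deg, lon_deg, radius=6.371e6):
    """Surface velocity v = omega x r for an Euler vector omega
    (Cartesian, rad/yr) at a site given in geographic coordinates;
    returns the Cartesian velocity in m/yr on a spherical Earth."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = (radius * math.cos(lat) * math.cos(lon),
         radius * math.cos(lat) * math.sin(lon),
         radius * math.sin(lat))
    return (omega[1] * r[2] - omega[2] * r[1],
            omega[2] * r[0] - omega[0] * r[2],
            omega[0] * r[1] - omega[1] * r[0])
```

Because the formula is linear in ω, differencing two plates' Euler vectors directly gives their relative motion, which is how the plate-circuit closure mentioned above is checked.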

dimension of the weight covariance matrix is not affected by the coregistration error [i.e., the noise subspace dimension of the corresponding covariance matrix with a coregistration error of μ (0 < μ ≤ 1) pixels is the same as that of the covariance matrix with accurate coregistration]. The key processing procedures of the approach are summarized as follows: after the coarse coregistration, the correlation-weight joint data vector is constructed, which can be used to estimate the corresponding weight covariance matrix. The noise subspace is obtained from the eigendecomposition of the estimated covariance matrix, and the signal subspace is spanned by the vectors obtained by the Hadamard product of the principal eigenvectors (i.e., the signal eigenvectors) of the weight correlation function matrix (approximated by the magnitude of the weight covariance matrix) and the steering vector. The terrain interferometric phase estimation is then performed by the projection of the signal subspace onto the corresponding noise subspace, where the optimum estimation corresponds to minimizing the projection. For a pair of SAR images that are not coregistered accurately, the method can auto-coregister them and accurately estimate the corresponding terrain interferometric phase.

Convergence of an MC algorithm is difficult to identify, so we examined the convergence performance of the MC REML algorithms by continuing for 10 additional REML rounds beyond those required by the corresponding analytical analyses. The resulting means and relative standard deviations of the parameter estimates over the additional REML rounds are shown in Table 2. Three convergence criteria presented in the literature were then calculated for the MC AI REML algorithm. The first is a commonly used criterion, presented by Booth and Hobert [22], which is based on the change in consecutive parameter estimates relative to their standard errors; a value of 0.005 can be used as the critical value. The second criterion, by Kuk and Cheng [12], relies on the gradient vector and its variance-covariance matrix; their stopping criterion is the 90-percent quantile of a chi-square distribution with the number of parameters as degrees of freedom. This criterion attempts to stop the iteration as soon as possible. Finally, from MC AI REML round 5 onwards, convergence was also checked by a method similar to the one in Matilainen et al. [4], where the approach was to predict the parameter estimates of the next round using linear regression on previous iteration rounds. Here we took the same approach but applied the prediction to the gradients instead of the estimates. Analyses were continued until the critical value of 10⁻¹⁰, as a norm for the predicted round-to-round change in the gradient, was reached.
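
The first criterion (change in consecutive estimates relative to their standard errors, with critical value 0.005) can be sketched as follows; this is a simplified illustration of a Booth-and-Hobert-style rule, not their exact statistic:

```python
def has_converged(theta_new, theta_old, std_errs, crit=0.005):
    """Simplified stopping rule: declare convergence when every
    parameter's change between consecutive rounds is small relative
    to its standard error."""
    return all(abs(n - o) / se < crit
               for n, o, se in zip(theta_new, theta_old, std_errs))
```

Scaling each change by its standard error makes the rule invariant to the parameters' units, so one critical value can be applied to all variance components at once.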
