successfully in clinical contexts (e.g., Florentine, Buus, & Geng, 2000), in studies with children (e.g., Wright et al., 1997), and in studies with large numbers of subjects (e.g., Amitay, Irwin, & Moore, 2006). For the same reason, it is suitable for studies in which subjects perform various tasks and each task must therefore consume only a portion of the subject's time. The ML procedure is widely known, used, and appreciated by the auditory community: it has collected more than one hundred and twenty citations, the majority of which come from journals specialising in auditory research [footnote 1]. Thus, users of this procedure can draw on a large background literature to optimise their own threshold estimation. As far as we know, MLP is the first software implementing an adaptive psychophysical procedure with a graphical interface in a freely downloadable version, and it is provided with several built-in, classic psychoacoustics experiments ready to use at a mouse click.
HANUŠ OTO, VYLETĚLOVÁ KLIMEŠOVÁ MARCELA, CHLÁDEK GUSTAV, ROUBAL PETR, SEYDLOVÁ RŮŽENA: Metaanalysis of ketosis milk indicators in terms of their threshold estimation. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis, 2013, LXI, No. 6, pp. 1681–1692. Real-time analyses of the main milk components are available in milking parlours today. Regular daily information without delay is advantageous: farmers can know the milk composition every day, calculate milk energy quotients, identify subclinical ketosis in the early lactation of dairy cows, and thus improve ketosis prevention and avoid economic losses. The aim was to improve, by metaanalysis, the reliability of threshold estimates for milk indicators of energy metabolism used to detect subclinical ketosis and support its prevention; a metaanalysis can achieve higher result reliability than individual studies. Results of similar papers were analysed, focused on ketosis indicators in milk (acetone (AC) and milk energy quotients (fat/crude protein, F/CP; fat/lactose, F/L)) and their thresholds for subclinical ketosis. The methods used for threshold derivation were specified: statistical comparison to a reference procedure; calculation according to the relevant data frequency distribution; qualified estimation; and combinations of these procedures. These served as a weighting source. Variability in the AC subclinical ketosis cut-off values was high (78.5%), while variability in the ketosis milk quotients was low (from 5 to 8%). The value 10.57 mg·l⁻¹ could be taken as the validated estimate of the milk AC cut-off limit for subclinical ketosis
To improve the received signal integrity, different IN mitigation techniques were proposed. Recently, reducing the peak-to-average power ratio (PAPR) of the OFDM symbol before transmission over the PLC channel was considered. These works assume that the coefficients of the IN are available at the receiver of the PLC system, so that blanking, clipping, or hybrid clipping-blanking can be used to mitigate the IN. We refer to the method that uses a priori knowledge of the IN together with the conventional optimal blanking threshold (OBT) scheme in PLC systems as the conventional optimal blanking (COB) in this study. In practice, this assumption does not hold, since the time and probability of IN occurrences cannot be predicted precisely. Moreover, OFDM signals are dynamic and exhibit non-constant peaks from one symbol frame to the next, which further undermines the use of perfect knowledge of the IN for IN removal in PLC systems. Based on this fact, the peak amplitude of each OFDM symbol is used as the blanking threshold for IN removal; this scheme is termed dynamic peak-based threshold estimation (DPTE).
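The DPTE idea can be sketched as follows, under the assumption that the receiver knows the per-symbol peak amplitude of the transmitted signal; the toy signal, function names, and spike positions below are illustrative, not the paper's implementation:

```python
import numpy as np

def dpte_blanking(rx, threshold):
    """Blank (zero) every received sample whose magnitude exceeds the
    threshold; with DPTE the threshold is the peak amplitude of the
    current OFDM symbol rather than a precomputed optimal value."""
    out = rx.copy()
    out[np.abs(out) > threshold] = 0.0
    return out

# toy time-domain OFDM symbol (stand-in) corrupted by two IN spikes
rng = np.random.default_rng(1)
clean = rng.normal(size=128) + 1j * rng.normal(size=128)
threshold = np.abs(clean).max()        # DPTE: per-symbol peak amplitude
rx = clean.copy()
rx[[10, 77]] += 25.0                   # impulsive-noise hits
blanked = dpte_blanking(rx, threshold)
```

Because the threshold tracks each symbol's own peak, no a priori knowledge of the IN arrival times is needed; only samples pushed above the symbol's natural dynamic range are suppressed.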
In this paper, we consider an integrated volatility estimation procedure for jump-diffusion models. We are inspired by the precision of the range-based technique and the efficiency of the threshold method. Combining the advantages of the two methods, we propose a realized range-based threshold estimator for the volatility of jump-diffusion models. The simulation results illustrate that our estimator is more accurate than the pure threshold estimator.
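A minimal sketch of how a range-based threshold estimator might work, assuming blocks of the path are treated as jump-free when their range stays below a fixed illustrative threshold and that squared ranges are rescaled by the Parkinson constant 4·ln 2 (the function and parameter choices are ours, not the paper's):

```python
import numpy as np

def realized_range_threshold_var(path, m, threshold):
    """Split the path into m blocks; sum each block's squared range,
    keeping only blocks whose range stays below the threshold (taken
    to be jump-free), and rescale by the Parkinson constant 4*ln(2),
    which equals E[range^2] / (sigma^2 * dt) for Brownian motion."""
    total = 0.0
    for block in np.array_split(path, m):
        r = block.max() - block.min()
        if r <= threshold:
            total += r ** 2
    return total / (4 * np.log(2))

# toy jump-diffusion path: sigma = 0.2 on [0, 1] plus one jump, so the
# integrated variance to recover is sigma^2 = 0.04
rng = np.random.default_rng(0)
n, sigma = 20_000, 0.2
x = np.cumsum(sigma * np.sqrt(1 / n) * rng.normal(size=n))
x[n // 2 + 50:] += 1.0                 # a jump landing inside one block
est = realized_range_threshold_var(x, m=200, threshold=0.1)
```

Without the threshold the jump block would contribute roughly 1/(4 ln 2) ≈ 0.36 on its own, swamping the diffusive part; excluding it brings the estimate back near 0.04.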
The paper is organized as follows: Section 2 presents the methods used in this paper. The Harris and Stephens method is introduced, and the threshold expressed in percent is defined. In a similar way, the Shi-Tomasi method and its appropriate threshold are introduced. After a brief introduction, the normalized cross-correlation method, which belongs to a different category than the first two methods, is presented; similarly to the first two methods, its threshold is defined. Section 3 explains how the new overlapping quality measure is defined, based on counting detections resulting from the application of the three different methods. The quality measure yields positive numbers, with larger values corresponding to better overlapping. To find the global maximum of the quality function, two approaches are used: one is brute force, calculating the results for all discrete threshold values, and a more sophisticated one uses the well-known hill-climbing algorithm. Section 4.1 presents the results of this novel method applied to the recorded radar image of a radar surveillance station. To test this procedure in depth, the simulation model described in Section 4.3 is developed. The final conclusions are given in Section 5.
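The hill-climbing search over discrete threshold values can be sketched as follows; the quality surface, grid size, and 4-neighbourhood are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def hill_climb(quality, start, max_steps=1000):
    """Greedy hill climbing over a discrete grid of threshold pairs:
    repeatedly move to the best 4-neighbour until no neighbour improves.
    `quality` is a 2-D array indexed by the two discretized thresholds
    (a stand-in for the overlapping quality measure)."""
    i, j = start
    for _ in range(max_steps):
        best = (i, j)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < quality.shape[0] and 0 <= nj < quality.shape[1]
                    and quality[ni, nj] > quality[best]):
                best = (ni, nj)
        if best == (i, j):
            break                      # local maximum reached
        i, j = best
    return i, j

# unimodal toy quality surface peaking at thresholds (30, 70)
ii, jj = np.mgrid[0:101, 0:101]
q = -((ii - 30) ** 2 + (jj - 70) ** 2).astype(float)
peak = hill_climb(q, (0, 0))
```

On a unimodal surface the greedy walk evaluates only a thin path of cells, whereas the brute-force alternative must score all 101 × 101 threshold combinations; on multimodal surfaces it can stop at a local maximum, which is the usual trade-off.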
In this paper, a new algorithm using wavelet properties to compress an image is proposed. This algorithm concerns reducing the number of wavelet coefficients produced by the Discrete Wavelet Transform (DWT) process. The proposed algorithm starts by calculating the threshold value using the proposed threshold value estimator at the wavelet detail subbands (the diagonal, vertical, and horizontal subbands). The algorithm estimates a suitable threshold value for each individual subband, and the calculated threshold values are then applied to their respective subbands. Coefficients with values lower than the calculated threshold are discarded, while the rest are retained. The novelty of the proposed method is its use of the standard deviation principle to derive the threshold value estimator equation. Experiments show that the proposed method can effectively remove a large number of unnecessary wavelet coefficients with a higher Peak Signal to Noise Ratio (PSNR) and compression ratio, as well as a shorter elapsed time.
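The per-subband scheme can be sketched as follows; the scale factor `k` and the synthetic detail subbands are illustrative assumptions, since the excerpt does not give the paper's exact standard-deviation-based estimator equation:

```python
import numpy as np

def std_threshold(subband, k=1.0):
    """Hypothetical per-subband threshold derived from the standard
    deviation of the coefficients; k is an illustrative scale factor,
    not the paper's exact estimator equation."""
    return k * np.std(subband)

def compress_subband(subband, k=1.0):
    """Discard coefficients below the subband's threshold, retain the rest."""
    t = std_threshold(subband, k)
    return np.where(np.abs(subband) >= t, subband, 0.0)

# stand-ins for the diagonal/vertical/horizontal DWT detail subbands
rng = np.random.default_rng(0)
ratios = {}
for name in ("diagonal", "vertical", "horizontal"):
    band = rng.laplace(scale=2.0, size=(64, 64))  # detail coefficients are heavy-tailed
    kept = compress_subband(band)
    ratios[name] = np.count_nonzero(kept) / band.size
```

Estimating the threshold separately per subband matters because the diagonal, vertical, and horizontal bands typically have different coefficient spreads; a single global threshold would over-prune one band and under-prune another.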
Issues of statistical investigation of aggregate instability (variability, uncertainty of composition, inconstancy of structure) remained outside the focused attention of specialists until recently. The various classifications of aggregates used in statistical theory do not fully meet modern requirements or make it possible to solve the whole variety of statistical tasks in conditions of increasing turbulence. This defined the necessity of investigating additional classificatory sections of real aggregates, such as dynamic and threshold aggregates.
In this paper we consider a three-regime threshold cointegration model. The fully modified ordinary least squares (FM-OLS) regression of Phillips and Hansen is used to develop new methods for estimating the cointegrating coefficients. After we remove the second-order biases of the parameter estimates from the three-regime threshold cointegration model, the FM-OLS estimates have a limit distribution that is mixed normal for all the nonstationary coefficients.
The wavelet threshold method, a popular de-noising method, removes noise by thresholding the wavelet coefficients and obtains the optimal de-noising effect in the mean-square-error sense. In other words, in the wavelet domain the threshold method can be used to filter out the uncertain components of the data, achieving an uncertainty separation of the data.
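As a concrete instance of thresholding wavelet coefficients, here is the classic soft-thresholding rule with the Donoho-Johnstone universal threshold σ√(2 ln n); the excerpt does not commit to a particular rule, so this is one standard choice, with synthetic detail coefficients standing in for a real decomposition:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: zero coefficients below t in magnitude and
    shrink the rest toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# detail coefficients = a few large "signal" entries plus Gaussian noise;
# the universal threshold sigma * sqrt(2 * ln(n)) is the classic choice
rng = np.random.default_rng(0)
n, noise_sigma = 1024, 0.5
detail = rng.normal(scale=noise_sigma, size=n)
detail[:8] += 5.0                       # sparse signal coefficients
t = noise_sigma * np.sqrt(2 * np.log(n))
denoised = soft_threshold(detail, t)
```

Almost all pure-noise coefficients fall below t and are zeroed, while the large signal coefficients survive (shrunk by t), which is exactly the "uncertainty separation" described above.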
to the subspace methods as in Eq. (10), the number of signal sources is estimated to be N = 6, which is not true. Although the MDL method does not determine the number of signals merely by observing the eigenvalues, it is still based on the signal and noise subspaces. With mutual coupling compensation considered, these subspaces are no longer separable, making estimation with the MDL method a challenge and leading to errors as well. The proposed method, however, estimates the correct number, as it combines the maximum eigenvalue λc,max with the noise variance to form the threshold.
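The flavour of an eigenvalue-threshold rule can be sketched as follows; note that the weighting `alpha` and the way the maximum eigenvalue is combined with the noise variance are hypothetical choices for illustration, since the excerpt does not give the paper's exact threshold formula:

```python
import numpy as np

def count_sources(eigvals, noise_var, alpha=0.1):
    """Illustrative eigenvalue-threshold rule: build a threshold from
    the largest eigenvalue and the noise variance, then count how many
    eigenvalues exceed it. alpha is a hypothetical weighting; the
    paper's exact combination may differ."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    thresh = alpha * lam[0] + (1 - alpha) * noise_var
    return int(np.sum(lam > thresh))

# toy covariance spectrum: 3 signal eigenvalues above a noise floor near 1.0
spectrum = [20.0, 12.0, 7.0, 1.1, 0.95, 1.02, 0.9, 1.05]
n_sources = count_sources(spectrum, noise_var=1.0)
```

Because the threshold adapts to the largest eigenvalue rather than relying on a clean signal/noise subspace split, it can still separate signal eigenvalues from the noise floor when mutual coupling compensation has blurred the subspaces.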
The upper and lower bounds for the cost per QALY thresholds in column (3) of Table 30 are based on making the necessary assumptions about the duration of the health effects of expenditure, and about how long a death might be averted, either optimistic (providing the lower bound for the threshold) or conservative (providing the upper bound for the threshold). The lower bound [see Table 30, lines (4)-(6)] is based on assuming that the health effects of expenditure are not restricted to 1 year but apply to the whole of the remaining disease duration of the population at risk in PBCs during 1 year. Although this combines optimistic assumptions, it is possible that at least some part of a change in expenditure may prevent disease and so will have an impact on populations that are incident to PBCs in the future. Such effects are not captured in any of the estimates presented in this report, so all estimates are conservative in this respect (the possibility of a longer and more complex lag structure for the effects of expenditure is discussed in Future research and improving estimates of the threshold). The upper bound [see Table 30, lines (7)-(9)] is based on the combination of assuming that health effects are restricted to 1 year for the population currently at risk and that any death averted is only averted for 2 years (see Chapter 4, Summary of cost per life-year estimates). The estimated QALY effects associated with each PBC can be decomposed into the part due to life-year effects adjusted for quality and the part associated with effects on QoL during disease. The proportionate shares of these different aspects of the total health effects are the same as reported in Table 28, where those PBCs for which mortality is the major concern have a much greater share of total QALY effects associated with avoidance of premature death (e.g. PBC 2 and PBC 10) than those where QoL is the major concern (e.g. PBC 7).
The energy required to send the energy status message is greater than the amount of energy needed to transmit the sensed data to the cluster head. Making use of the same centralized clustering scheme as in LEACH-C while reducing the number of communications between the nodes and the base station, an energy-efficient clustering scheme based on estimation, LEACH-CE, was proposed by Jim-Moo-Kim. The LEACH-CE protocol achieves a greater network lifetime than the previous protocols. The status message is received by the base station only during the setup phase of the first two rounds. From the third round onwards, the remaining energy level of each node is calculated by the base station itself: the average energy expenditure of a cluster head and of an ordinary node is calculated and subtracted, respectively, for the next round. The energy utilization of the nodes in LEACH-CE is better than in LEACH-C, and the system lifetime, the live node count in the network at a given time, and the useful data received by the base station are all greater in LEACH-CE than in LEACH-C. LEACH-CE applies estimation of energy consumption during the clustering process. It collects energy and location information from the nodes during the first two rounds; the average energy usage by cluster heads and by cluster members is derived after observing the energy usage during these two rounds. Later on, the same is iteratively applied to subtract the energy consumption from cluster heads or cluster members during the consecutive eight rounds. The frequency of energy information collection from members is shown in Figure 3. The results shown in Figure 5 for per-round energy drain for
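The base station's bookkeeping described above can be sketched as simple per-role subtraction; the role labels, drain values, and function name below are illustrative, not the protocol's actual message formats:

```python
def estimated_energy(initial, avg_ch_drain, avg_member_drain, roles):
    """Sketch of LEACH-CE-style base-station bookkeeping: after two
    observed rounds, the observed average per-round drain is subtracted
    according to each subsequent round's role, instead of polling the
    node with a status message. `roles` lists the node's role per later
    round, e.g. "CH" (cluster head) or "member" (labels are ours)."""
    e = initial
    for role in roles:
        e -= avg_ch_drain if role == "CH" else avg_member_drain
    return e

# node holding 2.0 J after the two observed rounds; CH rounds drain more
remaining = estimated_energy(2.0, avg_ch_drain=0.10, avg_member_drain=0.02,
                             roles=["member"] * 7 + ["CH"])
```

The saving is exactly the status messages avoided: after round two, the base station updates each node's energy estimate locally, so no per-round energy report has to be transmitted.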
This paper is organized as follows: the second section describes the OFDM-MIMO based system, followed by the switching algorithm. The third section deals with the effect of MIMO-OFDM channel estimation on the switching technique. After a brief review of estimation methods, the performance of the switching algorithm with non-ideal CSI is shown and analyzed. Then, two solutions are proposed to improve this performance. The first one, despite its additional complexity, reaches the performance of the ideal CSI case. The second one considerably enhances the performance of the switching algorithm without any change in the receiver design. In Section 4, conclusions and perspectives of the work are outlined.
An experiment was done with six viewers: five males and one female, ages 20-25. The bio-sensors were calibrated for each participant by setting the threshold separately. This was done manually by observing their bio-signal activity for a few minutes after they first put on the sensors. By the time of these experiments, a significant amount of informal experience had been gained by the experimenter in various applications of the final software. The viewers watched the film in a laboratory environment on a computer monitor, with the system adapting to their bio-signals. After completing a full film viewing, the participants watched the other three endings which they had not yet seen, this time without wearing the biosensors, as it did not make much sense to take biosensor readings out of context, i.e. without the preceding parts of the film. The experiment was not repeated multiple times with a single participant, as the narrative impact would change over multiple viewings in unpredictable ways.
While log-linear models were used for estimation and inference in cross-classifications, several alternative approaches were also applied to this class of response variable. Kullback and Ku (1968) and Ireland et al. (1969) investigated the minimum discrimination information procedure to estimate and make inferences about multinomial probabilities. Neyman (1949) derived best asymptotic normal (BAN) estimates of cell probabilities by minimising the usual Pearson chi-square statistic (X²). He introduced a modified chi-square by replacing the expected value in the denominator of the usual chi-square with the observed value. He also described an alternative way to minimise the modified chi-square by imposing constraint functions on the cell probabilities; using a first-order approximation for the constraint functions, he proved that the resulting estimates are BAN. This last approach to the problem of minimising X² has potential as a theoretical base for some later approaches.
random sampling. As for the running time, the most time-consuming part is steps 2 to 8, in which r is independent of the input graph. Clearly, when r is small the accuracy of EISE is low but the estimation time is short, and vice versa. Compared with MC simulation, Algorithm 2 is much faster: to estimate the EIS of a node, it only generates a constant number of paths, whereas if MC simulation were applied instead of Algorithm 2, the thresholds for all the nodes would have to be re-chosen each time, giving a time complexity of O((|V| + |E|)r) when most of the edges are accessed each time. In the experiment, we observed that the error is less than 3% when T = 5, using an appropriate number of samples (r = 1,000).
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of the type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages: in stage 1, patients are recruited from the full population, and stage 1 outcome data are then used in an interim analysis to decide whether the trial continues to stage 2 with the full population or with a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate the treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate the bias and subtract it from the naive estimate. We recommend one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator, where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial.
where the threshold is γ = 0. The true parameter values are β0+ = 0.5, β0− = 1, β1+ = 0.2, β1− = 0.8, σ = 1, which were also used by Guay & Scaillet (2003) in their simulation study. Sample sizes (T) and simulation lengths (N = TH) are T = 100, 250, 500, 1000 and H = 10, 20, 50, 100, respectively. For each combination of T and N we simulated data from the data generating process described above and estimated the parameter vector using EMM with the auxiliary models described in the previous section. The first auxiliary model is a linear AR(4) model. The second auxiliary model is a polynomial AR model with two lags and second and third powers. The third model is a mix of the previous two and was found to be the most successful auxiliary model among the three. We also used auxiliary model 4, an asymmetric MA model with two lags; since the estimation is more computationally burdensome for this auxiliary model, we only used T = 250 with H = 10, 50. The number of replications is set to 1000 for each experiment. In our calculations we used the Simulated GMM toolbox written in the Matlab programming language by P. L. Fackler and H. Tastan (see Fackler & Tastan (2009)). We calculated the bias and root mean squared error (RMSE) to compare the performance of the auxiliary models. Results are summarized in Tables 1 through 4. We also plot the bias, RMSE, and kernel density estimates for each auxiliary model in Figures 5 to 14.
Many recent studies in ecology have been devoted to the estimation of critical thresholds associated with human-induced natural habitat fragmentation (e.g., Andren 1994, Fahrig 2001). Critical thresholds occur when the response of a species or ecological process to habitat loss is not linear but changes abruptly at some threshold level of loss (Toms and Lesperance 2003). Abrupt changes in ecological processes can also occur in other systems. Plant and animal communities change within a threshold distance of habitat edges (edge effects; Wales 1972, Gates and Mosher 1981). Changes in management regimes may have threshold-type effects if processes are viewed through time. Human-produced disturbance from agriculture is the major cause of natural habitat loss for fish populations in lakes and rivers. In this paper, most of the statistical analyses are devoted to estimating threshold effects of agricultural stressors on fish populations in the US Great Lakes coastal margins.
In recent years, ranging algorithms for UWB systems have been extensively studied, with three main approaches. The first is the matched filter (MF) approach, a coherent algorithm with a high sampling rate. The second is the machine learning approach based on selected channel parameters; for example, a ranging method based on kernel principal components has been proposed, in which the channel parameters are projected onto a high-dimensional nonlinear orthogonal space and a subset of these projections is used for ranging. The third is the energy detection (ED) algorithm, based on a non-coherent receiver with a low sampling rate and low complexity [8, 9, 21]. The matched filter approach is not applicable in many practical situations due to its high complexity and hardware requirements. In contrast, energy detection is a non-coherent method for TOA estimation consisting of a square-law device, an integrator, a sampler, and a decision mechanism. The TOA value is estimated from the first signal sample exceeding a specific threshold, which is deemed the start of the received signal. The energy detection method is thus applicable in many cases because of its low complexity and low sampling rate; in this method, how to select an appropriate threshold is the key issue. In the literature, a threshold selection method based on kurtosis analysis of the energy blocks has been proposed, as has a threshold selection method based on skewness analysis of the energy blocks. However, the TOA estimation accuracy of these methods is not very high, because parameters such as the kurtosis of the received signals only reflect statistical characteristics in the time domain and ignore all characteristics in the frequency domain. At the same time, the received signals are affected by random noise, and this large randomness results in poor precision of the time-domain kurtosis.
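The first-crossing decision at the heart of the ED approach can be sketched as follows; the energy values, threshold, and block duration are illustrative, and the threshold itself would come from a selection rule such as the kurtosis- or skewness-based analyses mentioned above:

```python
import numpy as np

def toa_first_crossing(energy_blocks, threshold, block_duration):
    """Energy-detection TOA: the estimate is the start time of the
    first integrator output block exceeding the threshold (returns
    None if the threshold is never crossed)."""
    above = energy_blocks > threshold
    if not above.any():
        return None
    return int(np.argmax(above)) * block_duration

# toy integrator output: noise-only blocks, then the signal arrives
blocks = np.array([0.8, 1.1, 0.9, 1.0, 0.7, 6.5, 5.2, 3.0])
toa = toa_first_crossing(blocks, threshold=2.0, block_duration=4e-9)
```

This makes the accuracy trade-off concrete: the estimate is quantized to the block duration, and a threshold set too low fires on a noise block while one set too high skips the leading (possibly weak) edge of the signal, which is why threshold selection dominates ED ranging accuracy.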