The two investigators who used this procedure both felt that it detracted from the otherwise simple procedures. It was also felt that the relaxation for steady sounds would only apply in a minority of cases, and that any potential benefit of this section was outweighed by the risk of it being applied wrongly (as seemed to have happened in these cases). Consideration was given as to whether the identification of steady and fluctuating sounds could be carried out more easily, but it was decided that it was already in the simplest form possible. Consequently it was decided to remove this section and replace it with a short note to the effect that fluctuating sounds tend to be more disturbing than steady sounds. A reference to [Mo05a] was included for a more detailed treatment of fluctuations.


regression analyses (Konen et al., 2002; De Vos et al., 2005; Salehi et al., 2011; Reynolds et al., 2013). The basic assumption is that LOI is due only to combustion of soil organic matter (SOM) and that the content of SOC in SOM is constant (Christensen & Malmros, 1982). No standard protocol exists for LOI analysis, but it is well documented that LOI is affected by ignition temperature, duration of ignition and ignited sample mass (Abella & Zimmer, 2007; Salehi et al., 2011; Hoogsteen et al., 2015). Further, structural water loss (SWL) from soil minerals may contribute significantly to LOI (Sun et al., 2009; Hoogsteen et al., 2015) and the validity of the conventional LOI-to-SOC conversion factor of 0.58, although widely used, remains dubious (Pribyl, 2010). When LOI and SOC are both measured, regression models for converting LOI to SOC have been proposed (Grewal et al., 1991; De Vos et al., 2005; Abella & Zimmer, 2007). Regression models based on less accurate analytical approaches, such as dichromate oxidation followed by titration, and soils with confounding effects from differences in clay mineralogy have been found to be less reliable (Howard & Howard, 1990).

Figure 2 Soil organic carbon (SOC) content predicted by (a) the linear model including loss-on-ignition (LOI) and the quadratic clay expression (model O2.1, Table 3 [Eq. 3]), and (b) th…


For each assay, 18 flasks were used, and into each was pipetted 1.9 ml of unknown solution I or of standard insulin solution in buffer containing glucose and gelatin as described above. The fl…


Conclusions Through theoretical calculations of the concentrations of materials within the laboratory, based on a mass-transfer analogy assuming a constant air velocity of 0.2 m/s, we have obtained …

We consider the problem of variable selection for single-index random effects models with longitudinal data. An automatic variable selection procedure is developed using smooth-thresholding. The proposed method shares some of the desired features of existing variable selection methods: the resulting estimator enjoys the oracle property, and the procedure avoids a convex optimization problem and is flexible and easy to implement. Moreover, we use a penalized weighted deviance criterion for a data-driven choice of the tuning parameters. Simulation studies are carried out to assess the performance of our method, and a real dataset is analyzed for further illustration.
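As a rough illustration of the smooth-threshold idea only, the sketch below applies a generic Ueki-type smooth-threshold operator to an initial least-squares fit on synthetic data. This is not the authors' exact single-index random-effects procedure, and the tuning values `lam` and `gamma` are assumptions:

```python
import numpy as np

def smooth_threshold(beta_init, lam, gamma=1.0):
    # delta_j = min(1, lam / |beta_j|^(1+gamma)); coefficients with a
    # small initial estimate get delta_j = 1 and are set exactly to zero,
    # while large coefficients shrink only slightly (oracle-like behavior).
    delta = np.minimum(1.0, lam / np.abs(beta_init) ** (1.0 + gamma))
    return (1.0 - delta) * beta_init

rng = np.random.default_rng(2)
n, p = 200, 6
beta_true = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 1.0])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(0.0, 0.5, n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_st = smooth_threshold(beta_ols, lam=0.02, gamma=1.0)
# small coefficients are zeroed out; large ones are barely shrunk
```

Because the thresholding is applied through a smooth multiplicative factor rather than a discrete search over submodels, no convex optimization problem has to be solved, which is the flexibility the abstract refers to.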

ABSTRACT: Fourier regression represents a time series by a set of elementary basis functions. This work proposes a new procedure for Fourier regression that can reveal the periods of significant frequencies and can be used to fit a periodic trend. The procedure uses spectral analysis for component identification, the discrete Fourier transform for estimating the coefficients, and the 95% confidence bound of the autocorrelation function for the residual diagnostic check. The method was applied to Nigerian road accidental death time series data to test its efficiency. The spectral analysis magnitude plot suggested one- and three-component Fourier regression models, and the periodic trend of each was fitted. The three-component model was the most suitable, since it follows the pattern of the original series closely and reveals the cyclical movement in Nigerian road accidental deaths. This was validated by the three-component model's residual autocorrelation function values, which fell within the 95% confidence bound, indicating that the residuals are white. In conclusion, the proposed procedure for the Fourier regression model was adequate for studying the important periodicities and their frequencies, fitting the periodic trend, and forecasting the Nigerian road accidental death time series.
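The three steps of the procedure (spectral identification, DFT-based coefficient estimation, ACF residual check) can be sketched on synthetic data. The monthly series below is an illustrative stand-in with one annual cycle, not the Nigerian accident data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240
t = np.arange(n)
y = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 1.0, n)

# Step 1: spectral analysis -- locate the dominant frequency from the
# periodogram magnitudes (the mean is removed so bin 0 is negligible).
freqs = np.fft.rfftfreq(n, d=1.0)
mag = np.abs(np.fft.rfft(y - y.mean()))
k = int(np.argmax(mag))
period = 1.0 / freqs[k]          # should recover the 12-month cycle

# Step 2: estimate the Fourier coefficients by least squares at the
# identified frequency (equivalent to reading off DFT coefficients).
X = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * freqs[k] * t),
                     np.cos(2 * np.pi * freqs[k] * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta

# Step 3: residual diagnostic -- autocorrelations at lags 1..20 should
# lie inside the 95% white-noise band +/- 1.96/sqrt(n).
res = y - fit
band = 1.96 / np.sqrt(n)
acf = [np.corrcoef(res[:-lag], res[lag:])[0, 1] for lag in range(1, 21)]
```

A second or third component would be added the same way, by including sine/cosine pairs at the next-largest periodogram peaks and refitting.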

Before a definitive statement can be made about the usefulness of the suggested procedure, however, several topics remain to be investigated. These include determining the effects of internal pressure, combined loadings, use of other materials, behavior at elevated temperature (covered in Code Subsection NH) and at high strain rates, consideration of other sizes, schedules and bend angles of elbows, consideration of other components such as tees and branches, and the relationship between Code-defined collapse and actual physical collapse. A study of cyclic loading would also seem worthwhile; in that case it would be necessary to redefine what is meant by collapse, and then use this definition in the calculation of a B2′ stress index. If the beneficial attributes of the proposed procedure hold up under continued scrutiny, then …


Leakage in oil and gas pipelines can cause loss of production, environmental pollution and threats to human safety, as well as economic loss. The damage caused by oil and gas leaks necessitates the development of efficient leak detection systems; however, problems in the pipeline flow are difficult to detect with the naked eye, and many methods and techniques have been proposed to overcome this. The approaches developed to detect oil and gas leakage in pipelines include hardware-based methods, non-technical methods and software-based methods.


…tion of all available state proceedings for the hearing of the federal question when petitioner brings the writ of habeas corpus) was jurisdictional and that such f…


In this paper, we extend the work of Christopoulos and Leon-Ledesma (2010) and suggest a new nonlinear unit root test procedure with a Fourier function. In this procedure, structural breaks are modeled by means of a Fourier function, and nonlinear adjustment is modeled by means of an ESTAR model, as proposed by Hu and Chen (2016). Two basic problems encountered by unit root tests are thereby eliminated. This paper is organized as follows. The next section describes the new test procedure; Section 3 presents the Monte Carlo results together with the critical values, empirical size and power of the test; empirical applications are presented in Section 4; and Section 5 concludes.
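A minimal sketch of the two-step logic, under stated assumptions: a single Fourier frequency (k = 1) for the break, and a KSS-type auxiliary regression Δv_t = δ·v_{t−1}³ + e_t as a stand-in for the ESTAR adjustment. The data are synthetic, and the critical values of such tests are nonstandard and must be obtained by simulation, as the paper does:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
t = np.arange(T)

# Synthetic series: a smooth structural break (one Fourier frequency)
# plus a stationary AR(1), so the test should reject a unit root.
u = np.zeros(T)
eps = rng.normal(size=T)
for i in range(1, T):
    u[i] = 0.5 * u[i - 1] + eps[i]
y = 3.0 * np.sin(2 * np.pi * t / T) + 2.0 * np.cos(2 * np.pi * t / T) + u

# Step 1: remove the break with a single-frequency Fourier function.
F = np.column_stack([np.ones(T),
                     np.sin(2 * np.pi * t / T),
                     np.cos(2 * np.pi * t / T)])
v = y - F @ np.linalg.lstsq(F, y, rcond=None)[0]

# Step 2: KSS-type t-statistic for delta in  dv_t = delta * v_{t-1}^3,
# capturing ESTAR-style adjustment that strengthens far from equilibrium.
dv, v3 = np.diff(v), v[:-1] ** 3
delta = (v3 @ dv) / (v3 @ v3)
resid = dv - delta * v3
se = np.sqrt((resid @ resid) / (len(dv) - 1) / (v3 @ v3))
t_stat = delta / se   # strongly negative for a stationary series
```

Comparing `t_stat` against simulated critical values then delivers the unit root decision while accommodating both breaks and nonlinearity, the two problems the abstract mentions.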


Future enhancements to this system might address similar problems, such as filtering images that contain messages, and restricting one user from posting another user's image without proper permission from the owner. Moreover, filtering techniques may improve in the near future, so that more user-friendly options can be added to the proposed system.

The expert system is proposed for measuring the reusability degree of object-oriented code (Java), as well as for making decisions about reusability status (i.e., whether the software can be reused or not) based on specific metrics (cohesion and coupling).

The LMM is a direct analysis method which seeks to model the behaviour of a structure under the action of cyclic loads via repetitive linear simulations involving a matching modulus, which is used to replicate the actual nonlinear plastic response of a problem both spatially and in time. The LMM was derived on the basis of Koiter's upper bound formulation [34], with Refs [15, 16] illustrating convergence of the LMM upper bounds for the first time with respect to shakedown and plastic collapse limits; the novel numerical development presented in this paper is the latest phase in the development of the LMM. The initial development of the LMM for ratchet analysis involved a two-stage method devised by Ponter and Chen [17-19], which was capable of assessing two load points in the defined load cycle, before being extended to multiple load points in [20]. Recently, the two-stage LMM procedure for ratchet analysis has been expanded via the addition of a novel lower bound approach by Ure et al [35].


In case 4, applying reconfiguration with DG installation in the PSO method improves the power loss from 103.98 kW to 82.98 kW, a reduction of 21 kW; the same application in the GA method gives a reduction of 22.9 kW (135.3 kW to 112.4 kW). More importantly, the power loss from the PSO method after reconfiguration with DG in case 4 is only 82.98 kW, compared with 112.4 kW for the GA method, a difference of about 29.4 kW. From the perspective of power losses, PSO impacted positively on the analyzed distribution network, achieving a 51.4% improvement. Comparing CPU time, the PSO method requires only 13.4 seconds against 30 seconds for the GA method; hence the PSO method is 16.6 seconds faster.

We propose a protocol for protected mining of association rules in a horizontally distributed database. The current leading protocol is that of Kantarcioglu and Clifton. Our procedure, like theirs, is based on the Fast Distributed Mining (FDM) algorithm of Cheung et al., which is an unsecured distributed version of the Apriori algorithm. The main ingredients in our procedure are two novel secure multi-party algorithms: one computes the union of private subsets held by each of the interacting parties, and the other tests the inclusion of an element held by one party in a subset held by another. Our protocol offers improved privacy with respect to that protocol. In addition, it is simpler and significantly more efficient in terms of communication rounds, communication cost and computational cost.

Abstract. Properties of self-similarity and fractality in the processes of interactions of hadrons and nuclei at high energies are discussed. Different methods of fractal analysis (the box counting (BC) method and the system of the equations of p-adic coverage (SePaC) method) are described. Fractal analysis of mixed events was carried out by the BC and SePaC methods. A procedure was proposed for the separation of fractals and background, estimation of the number of fractals in the original data set, and estimation of the contamination of the extracted data. The dependence of event contamination on multiplicity and background is studied. Reconstruction of the spectrum of fractal dimensions is found to depend on the method and the background.
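As an illustration of the box-counting (BC) step alone (the SePaC method and the fractal/background separation procedure are beyond a short sketch), the BC dimension is estimated as the slope of log N(ε) versus log(1/ε), where N(ε) is the number of occupied grid boxes of side ε:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the BC fractal dimension of a 2-D point set."""
    counts = []
    for eps in epsilons:
        # count the occupied cells of a grid with box side eps
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    # slope of log N(eps) against log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / epsilons), np.log(counts), 1)
    return slope

# Sanity check on a non-fractal set: points on a straight segment
# should give a BC dimension close to 1.
pts = np.column_stack([np.linspace(0.0, 1.0, 2000),
                       np.linspace(0.0, 1.0, 2000)])
eps = np.array([0.1, 0.05, 0.025, 0.0125])
d = box_counting_dimension(pts, eps)   # ~1 for a line
```

Applying the same counting to event-by-event particle distributions, and comparing against mixed (background) events, is the analysis the abstract describes.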

In order to reduce the error introduced by the perturbation term AB(W_E^q − V_E^{q−1}), we present an iterative procedure, like the iterative ADI-FDTD method [21-23] and the iterative LOD-FDTD method [24], to improve the accuracy. Because the weighted Laguerre polynomials tend to zero as time t → ∞, the expanded quantities of the fields converge to zero as time progresses. When an iteration procedure is applied to the 3-D Laguerre-based FDTD method, the obtained solutions are therefore stable. Supposing that W_q^0 is the solution of (21), we can replace AB(W_E^q − V_E^{q−…

Various studies have thus focused on developing pathway-based methods for cancer prognosis prediction (Jones 2008; Jones et al. 2008; Lee et al. 2008; Reyal et al. 2008; Abraham et al. 2010; Teschendorff et al. 2010; Eng et al. 2013; Huang et al. 2014). These methods can be divided into two major categories. The first category focuses on employing sophisticated statistical methods for variable selection with grouped predictors or pathways, such as group lasso with an "all-in-all-out" idea, meaning that when one predictor in a group is chosen, all variables in that group are chosen (Park et al. 2007; Wei and Li 2007; Jones 2008). The second category, on the other hand, reduces the data dimension by generating pathway risk scores to be used in downstream data analysis. Abraham et al. (2010) adopted a gene set statistic to provide stability of prognostic signatures instead of individual genes. Huang et al. (2014) converted the gene matrix to a pathway matrix through a "principal curve," similar to principal components analysis. Neither of these methods incorporated outcome when generating the pathway scores from the individual genes. Eng et al. (2013) proposed a method to reduce the …
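The pathway-score construction of the second category can be sketched by summarizing each gene group with its first principal-component score, a crude stand-in for the principal-curve summary of Huang et al. (2014); the gene groupings and dimensions below are made up for illustration:

```python
import numpy as np

def pathway_scores(expr, pathways):
    """Reduce an (n_samples x n_genes) matrix to one score per pathway."""
    scores = {}
    for name, genes in pathways.items():
        sub = expr[:, genes]
        sub = sub - sub.mean(axis=0)        # center each gene
        # first right-singular vector = first principal component loading
        u, s, vt = np.linalg.svd(sub, full_matrices=False)
        scores[name] = sub @ vt[0]          # first PC score per sample
    return scores

rng = np.random.default_rng(3)
expr = rng.normal(size=(50, 12))            # 50 samples, 12 genes
pathways = {"pathA": [0, 1, 2, 3],
            "pathB": [4, 5, 6, 7, 8],
            "pathC": [9, 10, 11]}
scores = pathway_scores(expr, pathways)
# each pathway is reduced from its member genes to one score per sample
```

Note that, like the two methods the text criticizes, this construction is unsupervised: the outcome plays no role in forming the scores, which is the limitation the outcome-guided approaches address.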
