Figs. 4(a)-(g) present the evolution of the axial velocity profile for different values of (a) the Joule heating parameter, (b) the Brownian motion parameter, (c) the thermophoresis parameter, (d) the basic-density Grashof number, (e) the thermal Grashof number, (f) the Helmholtz-Smoluchowski velocity and (g) the Debye length. The axial pressure is fixed at unity in these plots. Fig. 4a clearly demonstrates that negative Joule heating produces a significant deceleration whereas positive Joule heating produces a marked acceleration. In all cases the distributions are semi-parabolic, since the maximum velocity arises at the channel centre (only the upper channel half-space is shown). At the upper microchannel wall the no-slip boundary condition enforces zero velocity (the same holds at the lower microchannel wall, not shown). When the axial electrical field is increased, the Joule parameter (for a constant temperature difference) is also enhanced. This boosts the momentum for positive S and decreases it for negative S; the case without Joule heating falls between these two. Fig. 4b shows that an increase in the Brownian motion parameter (Nb) induces a strong acceleration in the axial flow.
We analyzed the effect that temporal aggregation has on long-range dependence (LRD). We first considered the effect of temporal aggregation on LRD processes. Making only local assumptions on the spectral density in a neighborhood of the origin, we showed in Proposition 2.1.1 that the fractional order of an LRD process is invariant under aggregation. Long-range dependence is a phenomenon that characterizes the process at low frequencies. Temporal aggregation is a form of data compression that loses the high-frequency characteristics of the data but preserves the low-frequency behavior. We confirmed this result with our empirical analysis of the UNC network data. Provided sufficient data is available at all aggregation levels, the estimate of the LRD parameter d is invariant under aggregation, regardless of whether the bandwidth parameter m used in the estimation is fixed across aggregation levels (automatic method) or chosen to minimize the asymptotic MSE (tuned method). Furthermore, we established that the logarithm of the scale parameter varies linearly with aggregation. Again, this fact is confirmed in our empirical analysis of the UNC network data in Chapter 4.
Wavelets have been widely used in statistics, especially in time series analysis, as a powerful multiresolution analysis tool since the 1990s; see Vidakovic (1999) for a reference from the statistical perspective. The wavelet's strength rests in its ability to localize a process in both time and frequency simultaneously. This article presents a Bayesian wavelet estimation method for the long-memory parameter d and variance σ² of a stationary long-memory I(d) process, implemented in the MATLAB computing environment and the WinBUGS software package.
In view of the previous results, we conducted analyses on the three age categories, 3–4 years, 5–6 years, and 7–8 years, to test whether their responses fitted with EUT. We compared the frequency of choices of the lottery against 50%, i.e. the frequency obtained if children decided randomly whether to exchange or not. In accordance with EUT, the percentage of returned items for each age decreased from the combination of rewards # 0 to # 10. All children presented a percentage of return significantly smaller than 50% only for combination # 10 (Table 7), which was not consistent with EUT predicting that, under risk neutrality (d = 1), children should not exchange for combinations # 9 and # 10 (Table 2). Given that under EUT risk neutrality could not explain the behaviour of children, we inferred a risk-aversion parameter (d) by detecting the combination at which the percentage of return became significantly lower than 50%. The low return rates of 3–4 year-olds for most combinations (below 50%) did not allow finding this value. For children aged 5–6 and 7–8 years, the certain-loss combination (# 10) was the one for which the percentage dropped under 50% (Table 7); we found d ≈ 1.17, which is > 1 and indicates that they were risk-seekers. However, risk-seeking children (d > 1) should also exchange at a higher rate in combination # 5 than in combination # 6 (risk-neutral children, d = 1, should exchange at the same rate for combinations # 5 and # 6, and risk-averse children, d < 1, should exchange at a lower rate in combination # 5 than in combination # 6, Table 2). Contrary to this prediction, children aged 5–6 and 7–8 years exchanged significantly more often for combination # 6 than # 5 (Student's t-test, 5–6 years: t(47) = −3.67, p < 0.01; 7–8 years: t(47) = −1.97, p < 0.05, Table 7).
We have theoretically analyzed the variation of the axial response as a function of the detector radius and the distance parameter of the D-shaped aperture for the confocal microscope with D-shaped apertures. The results show that, for a given finite detector size, the axial resolution can be maximized by adjusting the distance parameter d, which is of practical significance in the design and setup of the microscope. For detector sizes ν_d less than 2.58, the optimal value of d is zero; for larger ν_d, the optimal value of d increases. When ν_d = 3.30, the optimal value of d becomes 0.1.
We have presented an exact analysis of the combined effects of heat and mass transfer on the MHD flow of an incompressible viscous fluid bounded by a loosely packed porous medium past an impulsively started vertical plate with variable heat and mass transfer. The expressions for the velocity, temperature and concentration are obtained using the Laplace transform technique, and the physical behaviour of the dimensionless parameters is discussed: the Hartmann number M, Darcy (permeability) parameter D, radiation parameter R, second-grade fluid parameter, thermal Grashof number Gr, mass Grashof number Gm, Prandtl number Pr and Schmidt number Sc. Figures (2-13) display the velocity, temperature and concentration; the skin friction, Nusselt number and Sherwood number are shown in Tables (1-3). The velocity, temperature and concentration profiles are computed for realistic values of the Prandtl number Pr (Pr = 0.71, 0.16, 3 for saturated liquid Freon at 273.3˚ and Pr = 7 for water) and the Schmidt number Sc (Sc = 0.2 for hydrogen). Figure (2) presents the velocity profile for different values of M with the other parameters fixed. We notice that the velocity decreases with increasing Hartmann number M. This is due to the fact that the application of a transverse magnetic field results in a resistive-type force (the Lorentz force), similar to a drag force, and increasing the intensity of the magnetic field decelerates the flow. Figure (3) is sketched in order to explore the variation of the permeability parameter D. It is found that the magnitude of the velocity increases with increasing values of the permeability parameter D. This is due to the fact that increasing the permeability reduces
in the pulsar's spin-down torque (7, 26). In addition, it can also be emphasized that the phase fluctuations over decades may be a result of the change in the pulsar's spin-down rate, characterized by the intrinsic rotational evolution of the pulsar and variations in the spin-down torque. Pulsars are born with different rotational periods; young pulsars tend to have shorter periods and are therefore found to exhibit more timing noise than old pulsars. Also, the very strong (|r| ≥ 0.87) correlations which characterize the relationships between the timing noise parameters and the characteristic age are a clear indication that young, active pulsars on average display more rotational instability than their feeble, old counterparts. This implies that timing noise depends on the pulse rotational variables, with aged and frail radio pulsars with small spin-down rates showing much less timing noise than youthful, energetic ones. The positive relationship between the timing noise activity parameter and the spin-down luminosity shows that pulsars with large spin-down energy loss rates show much more timing noise. As pulsars age, their spin energy lessens and they become more stable. Consequently, the strong relationship may be attributed to the periodic changes and irregular losses in the radio pulsar's rotational kinetic energy.
Steric strain between peri-atoms is relieved via naphthalene distortions which effectively change the geometry observed in unsubstituted naphthalene. Relief can be accomplished by stretching of the peri-atom bonds (a), in-plane (b) or out-of-plane (c) deflection of the peri-substituents, or distortion or buckling of the naphthalene ring (d) (Figure 1.3). Large amounts of energy are associated with even a small change in bond length, so few examples of peri-atom bond stretching are known. The most common forms of naphthalene distortion are in-plane and out-of-plane deviations of the exocyclic bonds, with few examples of buckled naphthalene ring systems reported. This can be accounted for because a relatively large relief of steric interaction is accomplished by minor molecular distortions.3
In Figure 7 (rhs) we plot the resulting empirical densities of models (a)-(d) presented in Table 1. We see that (a) and (b) provide similar results. If we model every development period individually, see models (c) and (d), we obtain the shift seen in Table 1. This shift comes from the fact that the observations for j = 20, 21 receive more weight in the latter models, see Figure 5; depending on the data, the sign could also go in the other direction. More interestingly, we see that the density is more widely spread the less information we have and the more parameters we have: the least uncertain prediction is obtained in Prior Model 3 with truncation index k = 9, the most uncertain in Prior Model 1 with every development period modeled individually. The shift in claims reserves from models (a)-(b) to models (c)-(d) may raise the question whether the tail decay is judged too optimistically under an exponential decay model (since the claims reserves from the individual MLE modeling are more conservative). In Figure 8 we include, in addition to Figure 5, confidence bounds for the MLEs (symmetric around the posterior estimate θ_post(k)). We observe large volatilities in these MLEs for large development year indexes j and, thus, our assumption of exponential decay cannot be rejected in view of Figure 8, because the MLEs are all within the confidence bounds. We also see that the estimates of σ could be smoothed to obtain more monotonicity in the confidence bounds. Nevertheless, if the exponential decay is too fast, we could also try to fit a power decay of the form
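Whether an exponential or a power decay fits the tail better can be checked with a simple log-linear least-squares comparison. The sketch below is illustrative only and not the authors' exact method, and the data and function names are hypothetical: under an exponential decay, log y_j is linear in j, while under a power decay it is linear in log j, so the family with the smaller residual sum of squares in log space is preferred.

```python
import math

def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def compare_decays(js, ys):
    """Fit log y against j (exponential decay) and against log j (power
    decay); the fit with the smaller residual sum of squares wins."""
    logy = [math.log(y) for y in ys]
    _, _, sse_exp = linfit(list(js), logy)
    _, _, sse_pow = linfit([math.log(j) for j in js], logy)
    return "exponential" if sse_exp <= sse_pow else "power"

# Synthetic (hypothetical) tail: y_j = 5 * j^(-1.3) is a pure power decay.
js = list(range(1, 21))
ys = [5.0 * j ** -1.3 for j in js]
print(compare_decays(js, ys))  # power
```

In practice one would also compare the two fits by an information criterion rather than raw SSE, but the log-linear trick already shows which decay shape the data favor.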
The sensitivity of the alignment of reacting molecules to the details of the molecule-surface interaction makes experiments addressing this topic ideal for testing electronic structure theories that attempt to model this interaction. Such tests are highly relevant, and pose huge challenges. About 90% of the chemical manufacturing processes used worldwide employ heterogeneous catalysts. However, the best ab initio theory that can now be used to map out PESs for elementary molecule-surface reactions, density functional theory (DFT) at the generalized gradient approximation (GGA) or meta-GGA level, can provide reaction barriers with an accuracy of no better than 2.2 kcal/mol for gas phase reactions. Even this accuracy has only recently become available, and it is therefore not surprising that quantum dynamics calculations using DFT PESs on the rotational quadrupole alignment parameter of H2 desorbing from metal surfaces such
Finally, we observe that for arbitrary computations, checking whether a program is well defined may not be efficiently computable. In particular, the difficult task is to check the first condition, i.e., whether a program always outputs the same value for all possible choices of the inputs that are not in T. However, for the case of arithmetic circuits over (exponentially) large fields and of polynomial degree, this check can be performed efficiently using probabilistic polynomial identity testing [18,36,32]. Below we state a simple proposition that shows how to do the testing for arithmetic circuits of degree d over a finite field of order p such that d/p < 1/2.
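The style of check the proposition refers to can be sketched with a generic Schwartz-Zippel identity test; this is an illustration of the standard technique, not the paper's exact construction. A nonzero polynomial of degree at most d vanishes at a uniformly random point of F_p with probability at most d/p, so when d/p < 1/2 each independent trial at least halves the error probability.

```python
import random

def identity_test(f, g, n_vars, p, trials=20):
    """Probabilistic polynomial identity test over F_p (Schwartz-Zippel).

    f and g are black-box circuit evaluators taking a tuple of field
    elements mod p.  If f != g as polynomials of degree <= d, one random
    evaluation exposes the difference with probability >= 1 - d/p, so
    `trials` independent rounds drive the error below (d/p)^trials.
    """
    for _ in range(trials):
        point = tuple(random.randrange(p) for _ in range(n_vars))
        if f(point) % p != g(point) % p:
            return False  # definitely not identical as polynomials
    return True  # identical with high probability

p = 2_147_483_647  # a large prime (2^31 - 1), so d/p is tiny
# (x + y)^2 and x^2 + 2xy + y^2 agree as polynomials...
lhs = lambda v: (v[0] + v[1]) ** 2
rhs = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
bad = lambda v: v[0] ** 2 + v[1] ** 2  # ...but this one differs
print(identity_test(lhs, rhs, 2, p))  # True
print(identity_test(lhs, bad, 2, p))  # False
```

For well-definedness one would compare the circuit evaluated on two independent random completions of the non-T inputs, but the probabilistic core is the same.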
Initially, models from the AMI 0.5 µm process were used until measured data from SiOG devices became available. As measured data arrived from the fabrication group, the AMI 0.5 µm models were no longer used. Instead, a SPICE level 3 model was used, with the model parameters chosen to best match the measured data. The SPICE model was initially a good choice since the parameter count is low and the parameter extraction techniques are straightforward. For a particular device geometry, parameters could be extracted which allowed a fairly accurate description of the on-state characteristics of the SiOG devices. Despite its ease of use, the SPICE model had many issues when applied to SiOG: the SPICE level 3 model is unable to capture subthreshold and leakage behavior, the capacitance characteristics are not accurately reproduced, the parameters had little physical meaning, and the model did not scale properly with geometry.
This research proposed a three-parameter probability distribution called the Gompertz-Lindley distribution, constructed using the Gompertz generalized (Gompertz-G) family of distributions. The mathematical properties of the distribution, such as the moments, moment generating function, survival function and hazard function, were derived. The parameters of the distribution were estimated using the method of maximum likelihood, and the distribution was applied to model the strength of glass fibres. The Gompertz-Lindley distribution performed best (AIC = 62.8537) when compared with other generalizations of the Lindley distribution.
and Dissociation Rates between Phosphorylated IKK-complex and IkB-NFkB, and the Dissociation Rate between IkB and NFkB, represent extremely critical (sensitive) points of the proposed reaction mechanism, with average D-values ranging between 0.12 and 0.58. Furthermore, our statistical analysis also revealed that the variability of the D-values for those parameters categorized as moderately and extremely sensitive was extremely large, as indicated by both the height of the bars and their corresponding whiskers. This result strongly suggests that the robustness properties of the network can be highly variable depending on its current position in biochemical reaction space. It is also interesting to analyze our simulation results from the viewpoint of sloppy and stiff multidimensional parameter spaces [11,12]. According to this well-supported theoretical framework, we may conclude that our proposed reaction scheme functions as a highly sloppy information processing system capable of performing robustly, despite undergoing simultaneous random perturbations in its internal reaction parameters. However, some stiff axes were found to be a defining feature of this multidimensional space, along which random perturbations lead predominantly to dramatic changes in the global dynamical behavior of the system. Therefore, such stiff axes in biochemical reaction space constitute key variational constraints of the proposed reaction mechanism. Following this direction, our simulation results strongly suggest that those biochemical processes relying on the reaction parameters identified as critical points of the network should represent the rate-limiting steps that most effectively control the global dynamical behavior of the system. We thus predict that such critical reaction steps represent ideal candidates for manipulating the dynamic activity of the TLR4 signaling network via multi-target therapeutic strategies, which
Consolidation settlement is a major topic discussed by engineers and geologists when designing structures. Past experience with many cases of building problems and failures has shown that settlement can continue for many years, with the total accumulated settlement becoming very large. This settlement may be due to creep, or secondary settlement. There are many methods used to predict settlement, such as the Casagrande oedometer method (Terzaghi, 1923; Casagrande, 1936). We can predict the primary and secondary settlement in the laboratory using the 1-D consolidation oedometer test. The reliability of the prediction depends on many factors, such as sample quality, human and apparatus error, and other uncertainties, especially in the soil itself. A fine-grained soil in a saturated condition subjected to an increasing compressive stress from a selected loading undergoes deformation, or strain, of the soil skeleton. Strain is a cumulative effect of grain distortion and particle rolling and slipping. This strain results in a reduction of void ratio, or void volume, which can only take place as the pore fluid is displaced. Since a fine-grained soil has a low coefficient of permeability, the pore fluid displacement is a rate process, i.e. time dependent. When the compression of a soil mass is time dependent, it is termed consolidation. A soil is in a fully consolidated state when its volume remains constant under a constant state of stress. A soil is in a normally consolidated condition when the current pressure corresponds to its maximum consolidation pressure. A soil is over-consolidated when the present-day overburden pressure is less than the highest historic consolidation pressure.
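As a concrete illustration of primary consolidation, the classical one-dimensional settlement formula for a normally consolidated layer can be evaluated directly. The numerical values below are assumed for illustration only and do not come from the text.

```python
import math

def primary_settlement(Cc, e0, H, sigma0, dsigma):
    """Primary consolidation settlement of a normally consolidated layer:
    s = Cc / (1 + e0) * H * log10((sigma0 + dsigma) / sigma0),
    with compression index Cc, initial void ratio e0, layer thickness H,
    initial effective stress sigma0 and stress increment dsigma."""
    return Cc / (1.0 + e0) * H * math.log10((sigma0 + dsigma) / sigma0)

# Illustrative (assumed) values: a 4 m clay layer, Cc = 0.3, e0 = 1.0,
# initial effective stress 100 kPa, and a 100 kPa load increment.
s = primary_settlement(Cc=0.3, e0=1.0, H=4.0, sigma0=100.0, dsigma=100.0)
print(round(s, 3))  # 0.181 (metres)
```

The oedometer test mentioned above supplies Cc and e0; secondary (creep) settlement would be added on top of this primary component using a secondary compression index.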
Table 1 reports the 10-fold cross-validation bias-variance results on the synthetic datasets. The first column lists three measurements: the average error, bias error and variance error, respectively. The second column shows the different parameters of RMCLP. The 3rd to 6th columns list the experimental results under different training samples. In these experiments, we make the following observations: (1) by using different parameters (H, D, d, c), we examine whether RMCLP varies significantly with the parameters; (2) by comparing the bias and variance errors, we determine where the error mainly comes from, and thus whether RMCLP is a strong or weak classifier, and a stable or unstable one. However, we do not report the noise error N(x) here because the noise error is independent of the classifier and related only to the nature of the dataset. Additionally, we approximate the bias error by subtracting the variance error from the average error. This approximation is reasonable because it does not change the relative order of the results when they are compared. All of the numeric results are listed in Table 1.
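The bias-variance bookkeeping described above can be sketched as follows. This is a generic 0-1 loss decomposition over resampled training sets, not the paper's exact procedure: the majority-vote "main prediction" convention follows the common Domingos-style decomposition, whereas the text approximates bias as average error minus variance error, which preserves orderings.

```python
from collections import Counter

def bias_variance_01(predictions, y_true):
    """Bias/variance decomposition of 0-1 loss for a classifier.

    predictions[r][i] is the label predicted for test point i by the
    model trained on resample r.  The 'main' prediction is the majority
    vote over resamples; bias(i) = 1 if the main prediction is wrong,
    variance(i) = fraction of resamples disagreeing with the main
    prediction.  Returns (average error, bias error, variance error).
    """
    n = len(y_true)
    runs = len(predictions)
    avg = sum(p[i] != y_true[i]
              for p in predictions for i in range(n)) / (runs * n)
    bias = var = 0.0
    for i in range(n):
        main = Counter(p[i] for p in predictions).most_common(1)[0][0]
        bias += main != y_true[i]
        var += sum(p[i] != main for p in predictions) / runs
    return avg, bias / n, var / n

# Hypothetical toy run: three resamples, three test points.
preds = [[0, 1, 1], [0, 1, 0], [0, 1, 1]]
print(bias_variance_01(preds, y_true=[0, 0, 1]))
```

A high bias with low variance marks a stable but weak classifier; low bias with high variance marks a strong but unstable one, which is exactly the distinction the observations in the text aim to draw for RMCLP.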
band component of the j-th DWT level. Two-dimensional signals, such as images, are analyzed using the 2-D DWT. Currently, the 2-D DWT is applied in many image processing applications such as image compression and reconstruction [Lewis and Knowles (1992)], pattern recognition [Kronland et al. (1987)], biomedicine [Senhadji et al. (1994)] and computer graphics [Meyer (1993)]. The 2-D DWT is a mathematical technique that decomposes an input image in multiresolution frequency space into four sub-bands known as low-low (LL), low-high (LH), high-low (HL) and high-high (HH).
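A single decomposition level can be illustrated with the simple averaging variant of the Haar wavelet (an assumption made here for brevity; practical codecs use normalized or longer filters). Filtering the rows first and then the columns yields the four sub-bands:

```python
def haar_step(seq):
    """One averaging Haar step: low-pass (means) and high-pass (differences)."""
    low = [(seq[2 * i] + seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    high = [(seq[2 * i] - seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    return low, high

def dwt2(img):
    """Single-level 2-D DWT: rows first, then columns, giving LL, LH, HL, HH."""
    L, H = [], []
    for row in img:                 # filter along rows
        lo, hi = haar_step(row)
        L.append(lo)
        H.append(hi)
    def filter_cols(block):         # filter along columns of a half
        lo_cols, hi_cols = [], []
        for col in zip(*block):
            lo, hi = haar_step(list(col))
            lo_cols.append(lo)
            hi_cols.append(hi)
        # transpose back to row-major order
        return [list(r) for r in zip(*lo_cols)], [list(r) for r in zip(*hi_cols)]
    LL, LH = filter_cols(L)
    HL, HH = filter_cols(H)
    return LL, LH, HL, HH

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
LL, LH, HL, HH = dwt2(img)
print(LL)  # [[1.0, 2.0], [3.0, 4.0]] -- a half-resolution approximation
```

Because this test image is piecewise constant on 2x2 blocks, all detail sub-bands (LH, HL, HH) come out zero and LL is a downsampled copy, which is the intuition behind using the LL band for compression and multiresolution analysis.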
Another conceptual approach is to view the partial derivatives of model performance with respect to model parameters as indicators of parameter sensitivity: the steeper the response surface around a given point, the more sensitive the parameter in that region. An analytical solution would allow explicit evaluation over the entire parameter space, but that is a practical impossibility for most, if not all, models. Instead, numerical approximations are computed at selected points and averaged to give an indication of the relative sensitivity of the model parameters. This is the Morris one-at-a-time (MOAT; Morris, 1991) approach. The sensitivity is evaluated by computing both the mean (MOAT-1) and the standard deviation (MOAT-2) of the partial derivatives at selected sample points. It is a robust method in the sense that no assumptions are made about the relationship between model parameter values and performance.
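A minimal MOAT sketch, assuming the model is wrapped as a plain function and the parameters are perturbed in normalized [0, 1] coordinates (both illustrative choices, not from the text):

```python
import random

def morris_oat(f, n_params, bounds, delta=0.1, n_trajectories=20, seed=0):
    """Morris one-at-a-time elementary effects.

    For each random base point, perturb one parameter at a time by `delta`
    in normalized coordinates and record the finite-difference (elementary)
    effect.  Returns per-parameter (means, stds) of the effects: the
    MOAT-1 and MOAT-2 sensitivity measures described in the text.
    """
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        u = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        x = [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]
        fx = f(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] = bounds[i][0] + (u[i] + delta) * (bounds[i][1] - bounds[i][0])
            effects[i].append((f(xp) - fx) / delta)
    means = [sum(e) / len(e) for e in effects]
    stds = [(sum((v - m) ** 2 for v in e) / len(e)) ** 0.5
            for e, m in zip(effects, means)]
    return means, stds

# A purely linear model gives constant elementary effects (std ~ 0);
# an interaction term shows up as a large std (MOAT-2).
f = lambda x: 3 * x[0] + x[1] + 5 * x[0] * x[1]
means, stds = morris_oat(f, 2, [(0, 1), (0, 1)])
```

A large MOAT-1 flags a parameter the model responds to on average; a large MOAT-2 flags nonlinearity or interaction with other parameters, which is why both statistics are reported.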