Finally, it should be noted that in the current implementation, we have not applied any of the possible optimizations that appear in the literature (Lafferty and Suhm, 1996; Wu and Khudanpur, 2000; Lafferty et al., 2001) to speed up normalization of the probability distribution q. These improvements take advantage of a model's structure to simplify the evaluation of the denominator in (1). The particular data sets examined here are unstructured, and such optimizations are unlikely to give any improvement. However, when these optimizations are appropriate, they will give a proportional speed-up to all of the algorithms. Thus, the use of such optimizations is independent of the choice of parameter estimation method.

Enzyme kinetic parameters have been estimated using MATLAB software via the Wilkinson nonlinear regression technique. The MATLAB script file written to implement this technique is short and very straightforward. Several software tools are commercially available for this purpose, with many graphical user interface (GUI) features. Routine use of these packages might offer the immediate satisfaction of interactive, hands-on experience; but in some cases researchers might wish to write their own code and compare the results for further confirmation. Today MATLAB is in use in almost all schools and laboratories as a standard software tool. This paper is therefore aimed at helping enzyme researchers make use of this powerful software for estimation of parameters. It incorporates the analytical steps behind parameter estimation in an easy-to-follow manner and furnishes better visualization.
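As a rough stand-in for the paper's MATLAB/Wilkinson script (the substrate/velocity data, parameter values, and SciPy routine below are illustrative, not the authors'), the same Michaelis-Menten fit can be sketched with nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten rate law: v = Vmax * S / (Km + S)
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Illustrative substrate concentrations and noisy velocities
# (hypothetical values, not data from the paper)
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
rng = np.random.default_rng(5)
v = michaelis_menten(s, 10.0, 2.0) + 0.1 * rng.standard_normal(s.size)

# Nonlinear least squares returns the estimates and their covariance,
# from which standard errors of Vmax and Km follow
(vmax_hat, km_hat), cov = curve_fit(michaelis_menten, s, v, p0=(5.0, 1.0))
se = np.sqrt(np.diag(cov))
```

The covariance matrix returned by the fit is what allows the kind of results comparison against a GUI package that the paragraph describes.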


This paper focuses on the development of parameter estimation techniques for models quantifying hysteresis and constitutive nonlinearities in ferroelectric materials. These models are formulated as integral equations with known kernels and unknown densities to be identified through a least squares fit to data. Due to the compactness of the integral operators, the resulting discretized models inherit ill-posedness, which often must be accommodated through regularization. The accuracy of regularized finite-dimensional models is illustrated through comparison with experimental data.
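The ill-posedness and its regularization can be illustrated on a discretized first-kind integral equation; the Gaussian kernel, grid, noise level, and regularization parameter below are illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Discretize d(x) = ∫ k(x, s) f(s) ds with a smooth (hence compact) kernel
n = 100
s = np.linspace(0.0, 1.0, n)
K = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.02) / n
f_true = np.sin(np.pi * s)
d = K @ f_true + 1e-4 * rng.standard_normal(n)   # small measurement noise

# Naive least squares amplifies the noise because the discretized compact
# operator is severely ill-conditioned; Tikhonov regularization trades a
# small bias for stability
alpha = 1e-6
f_naive = np.linalg.lstsq(K, d, rcond=None)[0]
f_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)
```

Even at this tiny noise level, the unregularized reconstruction is dominated by amplified noise, while the Tikhonov solution stays close to the true density.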

Usual discussions of inverse problems in the presence of uncertainty have been in the context of a given set or sets of data carried out under various assumptions on how the data were collected (e.g., independent sampling, absolute measurement error, relative measurement error). For many years now [4, 7, 20, 21, 22, 25, 26, 28] scientists (and especially engineers) have been actively involved in designing experimental protocols to best study engineering systems that include parameters describing mechanisms. Recently, with the increased involvement of scientists working in collaborative efforts with biologists and quantitative life scientists, renewed interest in the design of the “best” experiments to elucidate mechanisms has been seen [9, 11, 12, 13, 15, 16]. Thus, a major question that experimentalists and inverse problem investigators alike often face is how best to collect the data to enable one to efficiently and accurately estimate model parameters. This is the well-known and widely studied optimal design problem for parameter estimation and is a most important step leading up to control design (sensor and actuator design and placement).
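As a toy illustration of the optimal design question (the exponential-decay model and the two candidate sampling schedules are hypothetical, not drawn from the works cited), one can compare schedules through the determinant of the Fisher information matrix:

```python
import numpy as np

# Hypothetical model y(t) = a * exp(-k * t) observed with variance sigma2;
# the columns of S are the sensitivities dy/da and dy/dk
def fisher_information(theta, times, sigma2=0.01):
    a, k = theta
    S = np.column_stack([np.exp(-k * times), -a * times * np.exp(-k * times)])
    return S.T @ S / sigma2

theta = (1.0, 0.5)
early = np.linspace(0.0, 0.5, 10)    # all samples taken very early
spread = np.linspace(0.0, 8.0, 10)   # samples spread across the decay

# D-optimal design: prefer the schedule maximizing det(FIM), i.e. the one
# minimizing the volume of the asymptotic confidence ellipsoid
d_early = np.linalg.det(fisher_information(theta, early))
d_spread = np.linalg.det(fisher_information(theta, spread))
```

Here the spread schedule yields the larger determinant because it includes times near 1/k, where the observations are most sensitive to the decay rate.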

In the name of Allah S.W.T., the Most Merciful, Alhamdulillah, with His blessing I managed to complete this research, entitled “Identification and Parameter Estimation of Multifunctional Prosthetic Hand”. I would like to thank everyone who was involved in preparing this research. Thanks to my supervisor, Dr Rozaimi bin Ghazali, who has given me a lot of encouragement and motivation as well as brilliant ideas during the development of this research.


The estimation of γ is ill-posed for the parameter values, spatial nodes and temporal nodes in this problem. In Figures 3.6(a) and (b), we see that the difference γ̂ − γ₀ (γ₀ = 10⁻³) is very large compared to γ₀. This is true for both the realizations of D_ij(σ) in Figure 3.6(a) and the realizations of D^rand_ij(σ) in Figure 3.6(b). Though the differences γ̂_λ − γ₀ are smaller than γ̂ − γ₀ in Figures 3.6(a) and (b), in Figures 3.7(a) and (c) we see that the uncertainty associated with the estimate γ̂_λ is very large. In Figure 3.7(a), we see that for the realizations of D_ij(σ) the ratio SE(γ̂_λ)/γ̂_λ does not appear to grow exponentially with σ, but the values of SE(γ̂_λ)/γ̂_λ are too large to be confident of even the sign of γ̂_λ. The use of D^rand_ij(σ) in the OLS parameter estimation procedure does appear to affect the uncertainty associated with the parameter estimate γ̂_λ, as we see in the exponential growth of SE(γ̂_λ)/γ̂_λ with σ for realizations of D^rand_ij(σ) in Figure 3.7(c). The ratio SE(γ̂)/γ̂ varies on an exponential scale for both realizations of D_ij(σ) and D^rand_ij(σ) in Figures 3.7(b) and (d), respectively.


The parameter estimation problem is different depending on which variables are assumed to be random variables and on the choice of any assumed distribution for each random-variable parameter. In Section 3, we consider the solution to the RDE to be a collection of solution trajectories of a sample deterministic system. As such, we develop a method for parameter estimation which utilizes the sample deterministic system and methods for parameter estimation in deterministic systems. In Section 4, we also utilize deterministic methods for parameter estimation; however, the method developed in this section is based on the equivalence of an RDE model to a stochastic differential equation (SDE) model and the relationship between an SDE model and a deterministic system for large population sizes.


Multiscale methods such as averaging and homogenization have become an increasingly interesting topic in stochastic time series modelling. When applying the averaged/homogenized processes to applications such as parameter estimation and filtering problems, the resulting asymptotic properties are often weak. In this thesis, we focus on the above-mentioned multiscale methods applied to Ornstein-Uhlenbeck processes. We find that the maximum-likelihood-based estimators for the drift and diffusion parameters derived from the averaged/homogenized systems can use the corresponding marginal multiscale data as observations, and still provide strong convergence to the true value as if the observations were from the averaged/homogenized systems themselves. The asymptotic distribution of the estimators is studied in this thesis for the averaging problem, while that of the homogenization problem exhibits more difficulties and will be an interest of future work. In the case when applying the multiscale methods to the Kalman filter of Ornstein-Uhlenbeck systems, we study the convergence between the marginal covariance and marginal mean of the full-scale system and those of the averaged/homogenized systems, by measuring their discrepancies.
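To make the drift/diffusion estimation concrete, here is a minimal single-scale sketch (the parameter values and the Euler-Maruyama discretization are illustrative; the thesis itself concerns the multiscale setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
theta, sigma, dt, n = 1.0, 0.5, 0.01, 100_000
dW = np.sqrt(dt) * rng.standard_normal(n)
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * dW[i]

# Maximum-likelihood drift estimate from discrete observations:
# regress X_{t+dt} on X_t, then invert a = exp(-theta*dt)
a = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
theta_hat = -np.log(a) / dt

# Diffusion estimate from the realized quadratic variation
sigma2_hat = np.sum(np.diff(x) ** 2) / (n * dt)
```

The thesis's question is whether estimators of this form remain (strongly) consistent when the observations come from the marginal of a multiscale process rather than from the averaged/homogenized system itself.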


The SIR makes the analysis computationally simpler and, due to utilizing the whole error statistics, much more accurate than the Kalman filter. In addition, it offers more flexibility, allowing one to tune poorly known model parameters and easily to consider observations having non-Gaussian error statistics (as is the case for the tracer fields) and nonlinearly related to the state variables. The main problem of the method is that the solution becomes unstable when most of the ensemble members fit the data poorly due to undersampling. As was noted in Sect. 3, the EnKF performs better in this situation. This weakness of the SIR can be especially pronounced in the parameter estimation problem, when all but one of the members die at the resampling step while the lack of noise in Eq. (2) prevents the ensemble from regenerating in the parameter space with time. A possible solution could be adding noise to the ensemble if it is near collapse. This procedure makes it possible to restore the ensemble size and even to detect regular temporal oscillations of some model parameters (Losa et al., 2003). Though any procedure of this type (such as the forgetting factor) aimed at stabilizing the filter cannot be justified from a rigorous probabilistic viewpoint, it can significantly improve the filter performance and reduce the number of ensemble members necessary to track the true system trajectory (Pham, 2001). This problem will be studied in the future. Acknowledgements. The author thanks his colleagues Jens Schröter, Manfred Wenzel and Svetlana Losa for continuing support and discussions.
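The jittered-resampling remedy described above can be sketched as follows (the systematic resampler and the `jitter_std` tuning constant are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_resample_with_jitter(particles, weights, jitter_std=0.05):
    """Systematic resampling followed by small additive noise ("jitter")
    on the parameter particles, so the ensemble does not collapse onto a
    single member when one weight dominates."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    resampled = particles[idx]
    return resampled + jitter_std * rng.standard_normal(n)
```

After a degenerate update where nearly all weight sits on one particle, a plain resampler would return n identical copies; the jitter keeps the parameter ensemble spread out so it can continue to evolve, at the cost of an artificial diffusion that must be tuned.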

data. These parameters generally describe important processes, and we need to verify whether or not they improve the mathematical model of the system under study. In future formulations of the inverse problem, we can address the solution's tendency to undershoot early data points and closely track the higher data points late in the season by using a weighted least squares (WLS) or, more generally, a generalized least squares (GLS) statistical model [7, 8], instead of formulating the inverse problem with OLS. The OLS formulation assumes an absolute observation error, which is not necessarily a reasonable assumption for population data. Because we expect that the accuracy of a population count scales with the size of the population monitored, a weighted or generalized least squares formulation of the inverse problem may yield better results. We did not employ a WLS formulation in this first attempt at the inverse problem because our primary motivation was to establish the use of the ATN model and inverse problems when applied to our study system. However, accurately specifying a statistical model is a necessary step in estimating our system parameters; without the correct statistical model, we cannot provide a metric for the uncertainty in our parameter estimates.
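A minimal sketch of the relative-error idea, using a hypothetical logistic growth model and made-up data in place of the ATN model and the field observations:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical logistic growth model standing in for the ATN model
def model(params, t):
    K, r, p0 = params
    return K / (1 + (K / p0 - 1) * np.exp(-r * t))

t = np.linspace(0.0, 10.0, 20)
true = (100.0, 0.8, 5.0)
rng = np.random.default_rng(2)
# Relative (multiplicative) noise: the error scales with population size
y = model(true, t) * (1 + 0.05 * rng.standard_normal(t.size))

# OLS residuals assume absolute error; WLS divides by the model value,
# matching the relative-error statistical model discussed above
def wls_residuals(params):
    m = model(params, t)
    return (y - m) / m

fit = least_squares(wls_residuals, x0=(80.0, 0.5, 3.0),
                    bounds=([1e-3, 1e-3, 1e-3], [1e3, 10.0, 1e3]))
```

Under relative noise, the WLS weighting prevents the large late-season counts from dominating the fit, which is exactly the undershoot/overshoot imbalance the paragraph describes for OLS.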


which is evaluated using the prior p(θ) and the likelihood p(y|θ). The likelihood function specifies how plausible the observed data are given the model parameter values. Therefore, defining a proper likelihood function is the central problem in parameter estimation. The prior contains the information that we have about the parameters based on the accumulated information from the past. For an introduction to Bayesian estimation, see, for example, the book by Gelman et al. (2003). Traditionally, parameters of dynamical systems are estimated by comparing model simulations to observed data using a measure such as a sum of squared differences between z and y. This corresponds to the assumption that the observations are noisy realizations of the model values. The problem in applying these techniques directly to chaotic systems is that the dynamically changing model state x is not known exactly, and small errors in the state estimates can grow in an unpredictable manner, making direct comparisons of model simulations and observations meaningless over long time periods.
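The correspondence between the sum-of-squares measure and a Gaussian likelihood can be written out explicitly (a standard identity; sigma is the assumed observation-noise standard deviation):

```python
import numpy as np

def gaussian_log_likelihood(y, z, sigma):
    """log p(y | theta) when the observations y are the model values z
    plus independent N(0, sigma^2) noise. Maximizing this over theta is
    equivalent to minimizing the sum of squared differences sum((z - y)**2),
    since theta enters only through the quadratic term."""
    n = y.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum((y - z) ** 2) / sigma**2)
```

For chaotic systems the difficulty noted above is that z itself depends on an imperfectly known state x, so this direct comparison degrades as state errors grow.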


ing features are then used to build new cost functions with more effective convergence properties. These methods, however, are model-specific, which makes it difficult to apply them to general oscillatory systems. For example, the dark/light cycle characteristics that were introduced in the parameter fitting problem of [62] may not be a suitable feature for parameter fitting of a non-circadian biorhythm. Thus, these suggested cost functions cannot necessarily be generalized to all oscillatory systems. Recently, Siong provided a new cost function for kinetic parameter estimation in oscillatory circadian rhythms [54]. However, the performance of this approach was difficult to assess because no quantitative comparisons between this and other approaches commonly used for parameter estimation were given in [54].


For the simulation part, the AC1A excitation system model and the AC8B excitation system model have been implemented in MATLAB/Simulink, based on IEEE Standard 421.5, which was updated in 2005. For the optimization part, the goal is to look for suitable parameters such that, with the same input, the simulation output matches the field data from the real machine. We formulated the problem as a least-squares problem and applied the damped Gauss-Newton (DGN) method and the Levenberg-Marquardt (LM) method to solve it. We used both the MATLAB Parameter Estimation Toolbox and MATLAB programs developed by us to implement the algorithms and obtain the parameters. For both the AC1A and AC8B models, we carried out case studies and validation. This project was sponsored by Progress Energy, which provided two suites of “bump-test” field data for the AC1A and AC8B excitation systems as well. Besides the results, we
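A generic damped Gauss-Newton iteration of the kind described can be sketched as follows (a Python sketch rather than the authors' MATLAB code; step-halving is one common choice of damping):

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Minimize ||r(x)||^2: solve the linearized least-squares problem for
    the Gauss-Newton step, then halve the step length until the cost
    decreases (the "damping")."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        cost = r @ r
        alpha = 1.0
        while alpha > 1e-12:
            r_new = residual(x + alpha * step)
            if r_new @ r_new < cost:
                break
            alpha *= 0.5
        x = x + alpha * step
        if np.linalg.norm(alpha * step) < tol:
            break
    return x
```

Levenberg-Marquardt replaces the step-halving with a damping term added to the normal equations; both aim at the same robustness when the initial parameter guess is far from the optimum.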


The general method studied here allows handling heavy tails in other applications as well. We give two examples in Section 6. First, we consider parameter estimation using L1-regularized linear least squares regression (Lasso) under random subgaussian design. We show that using the above approach, parameter estimation bounds can be guaranteed for general bounded-variance noise, including heavy-tailed noise. This contrasts with standard results that assume sub-Gaussian noise. Second, we show that low-rank covariance matrix approximation can be obtained for heavy-tailed distributions, under a bounded 4+ moment assumption. These two applications have also been analyzed in the independent and simultaneous work of Minsker (2013).
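A small numerical sketch of the Lasso setting (an ISTA solver, with t-distributed noise as the heavy-tailed example; all constants are illustrative and this is not the estimator analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random subgaussian (here Gaussian) design, sparse truth, and
# heavy-tailed noise with finite variance but no subgaussian tail
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + rng.standard_t(df=3, size=n)

# Lasso objective (1/2n)||y - Xb||^2 + lam*||b||_1 solved by ISTA
# (proximal gradient); lam is an illustrative tuning choice
lam = 2.0 * np.sqrt(np.log(p) / n)
L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(2000):
    g = X.T @ (X @ b - y) / n
    z = b - g / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

Even with t(3) noise, the estimation error is well below that of the trivial zero estimate, which is the qualitative behavior the heavy-tail guarantees address.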


The goal of this study is to determine how this type of aggregate data can be treated in order to learn more about CAR T therapy by performing parameter estimation via a least squares problem. First, we determine that it is better to use the original data set as is, rather than averaging the data over each time point and reducing the number of data points from 15 to 3. Secondly, we find that by assuming aggregate data, we tend to overestimate the standard error of the parameter in question.


The differing basis elements are created by adjusting the standard deviations in a manner similar to (17). These basis elements have physical interpretations and do not require the decay constraints on the densities that are required for other density representations. The only requirement imposed is the positivity of the expansion coefficients. The parameter estimation problem can now be formulated strictly as a bounded optimization problem.
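With positivity as the only constraint, the discretized problem reduces to nonnegative least squares; a toy version (the basis matrix and coefficient values below are made up for illustration):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Hypothetical design matrix whose columns are discretized basis elements
A = np.abs(rng.standard_normal((40, 6)))
c_true = np.array([0.5, 0.0, 1.2, 0.0, 0.3, 0.0])   # nonnegative coefficients
d = A @ c_true                                       # noiseless synthetic data

# Bounded (here: nonnegativity-constrained) least squares fit
c_hat, residual_norm = nnls(A, d)
```

Because positivity is the only bound, off-the-shelf solvers such as NNLS apply directly, with no extra decay constraints on the density itself.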


However, abduction includes an intrinsic incompleteness, which means that abduction is formally equivalent to the logical fallacy of affirming the consequent. In this respect, abduction differs from conventional parameter estimation. Therefore, the study of the numerical aspect of abduction leads to parameter estimation which includes some kind of incompleteness.

If the answer is negative, one will not be able to estimate it except with some additional a priori knowledge: the model has to be reconsidered. There are three main approaches for studying the identifiability of controlled and uncontrolled nonlinear dynamical systems described by differential systems. The first is based on equivalence theory [2,3], the second considers the Taylor series expansion [4], and the last comes from elimination theory [5]. This last theory gives a possible procedure for estimating parameters without any information about them. Few studies use it efficiently [6]. The differential elimination method requires a model with smooth functions, which is not the case for our model; moreover, some output derivatives of high order may have to be estimated. A piecewise differential elimination is proposed in order to take into account the non-smooth functions which appear in our model. It is then shown how the resulting calculus can be used to build a parameter estimation procedure for this type of model.

Our final sampling method investigated the impact of removing a single data point as a means of identifying the data points which provide the most information for the estimation of the parameters. A baseline data set consisting of fifty evenly spaced points taken over the course of the outbreak was generated. Fifty reduced data sets were created by removing, in turn, a single data point from the baseline data set. Standard errors were then computed using the true covariance matrix Σ₀ for each reduced data set (Equation (8)). (For this experiment, use of the true covariance matrix allowed us to accurately observe the effects on standard errors that resulted from the removal of a single data point. Errors introduced by solving the inverse problem would make it impossible to ascertain trends.) The largest standard error values in this group of data sets correspond to the most informative data points, since the removal of such points leads to the largest increase in the uncertainty of the estimate.
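The leave-one-out calculation can be sketched generically (using a hypothetical two-parameter exponential model and an assumed observation variance in place of the outbreak model and Σ₀ from Equation (8)):

```python
import numpy as np

# Hypothetical model y = a * exp(-k * t); the sensitivity matrix has
# columns dy/da and dy/dk evaluated at the true parameters
def sensitivities(theta, t):
    a, k = theta
    return np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])

theta0 = (10.0, 0.3)
t_full = np.linspace(0.0, 20.0, 50)   # fifty evenly spaced points
sigma2 = 0.25                          # assumed observation-error variance

def standard_errors(t):
    S = sensitivities(theta0, t)
    cov = sigma2 * np.linalg.inv(S.T @ S)   # true (asymptotic) covariance
    return np.sqrt(np.diag(cov))

base = standard_errors(t_full)
# Remove each point in turn; larger standard-error increases flag the
# more informative observations
increases = [np.max(standard_errors(np.delete(t_full, i)) - base)
             for i in range(t_full.size)]
most_informative = int(np.argmax(increases))
```

Deleting a point can only shrink the information matrix S^T S, so every standard error weakly increases; ranking the increases reproduces the point-informativeness ordering described above.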


Maximum likelihood estimation is used to determine the best fit of the model. The idea is to have a parsimonious model that captures as much variation in the data as possible. Usually a simple graph model captures most of the variability in most stabilized data (Ngailo et al., 2014). We fit model system (1) to the observed data on dengue fever disease in Tanzania, specifically to the data for infected individuals. Parameter values were obtained from the literature (Rodrigue et al., 2013; Massawe et al., 2015; Dumont et al., 2008). Other parameter values are estimated to vary within realistic ranges and are given as shown below:
