a nonlinear model for arbitrary parameter distributions. This is what, e.g., variance-based methods like FAST and the Sobol' method [9, 10] do. This, of course, is computationally expensive; therefore many methods have been proposed with simplifying assumptions, such as linearity of the model (MLR); methods that produce less sophisticated results, e.g. partial or no information on interactions (Morris method [13, 14]); methods that are less robust, like DGSM ([15, 16]); or methods that use prior knowledge of the model, like Bayesian DGSM. In this paper we use the Sobol' method, where we have modified the original method for efficiency reasons (for more details see Section Global sensitivity analysis). Moreover, we introduce an approach to determine the sampling size a priori with an a posteriori error check. Thus, the proposed GSA is not likely to excel in computational efficiency, but it will excel in the predictability of its costs and the reliability of its results. In the biological field, GSA is mostly applied to models consisting of ordinary differential equations, e.g., in pharmacology [18, 19], neurodynamics, or gene expression and biochemical pathways in cells.
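To make the variance decomposition concrete, the sketch below estimates first-order Sobol' indices with a standard pick-freeze (Saltelli-type) estimator on a toy additive model. It is a minimal illustration, not the modified method described here; the test function and sample size are assumptions.

```python
import random

random.seed(0)

def model(x1, x2, x3):
    # Toy additive model with U(0,1) inputs: Var(Y) = 1/12 + 4/12,
    # so the analytic first-order indices are S1 = 0.2, S2 = 0.8, S3 = 0.
    return x1 + 2.0 * x2

def sobol_first_order(f, dim, n):
    """Pick-freeze estimator: S_i ~ mean(yB * (yABi - yA)) / Var(yA)."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(*row) for row in A]
    yB = [f(*row) for row in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # AB_i equals A except that column i is taken from B.
        yABi = [f(*(a[:i] + [b[i]] + a[i + 1:])) for a, b in zip(A, B)]
        indices.append(sum(yb * (yab - ya)
                           for yb, yab, ya in zip(yB, yABi, yA)) / (n * var))
    return indices

S = sobol_first_order(model, 3, 20000)
```

The estimates recover the analytic shares of the output variance; a noninfluential input (here x3) yields an index near zero.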
The HL-RDHM, developed by the United States NWS, is a modeling framework for building lumped, semi-distributed, and fully distributed hydrologic models (Koren et al., 2004; Reed et al., 2004; Smith et al., 2004; Moreda et al., 2006). The model is structured using a 4 km × 4 km grid resolution derived from the Hydrologic Rainfall Analysis Project (HRAP), which corresponds to the NEXRAD (Next Generation Weather Radar) precipitation products developed by the US NWS. The water balance within each grid cell is modeled with the Sacramento Soil Moisture Accounting (SAC-SMA) model (Burnash and Singh, 1995). Figure 1c shows the water balance components of the SAC-SMA model in each grid cell. Routing between grid cells is modeled with a kinematic wave approximation to the St. Venant equations. This study performs sensitivity analysis on 14 parameters of the SAC-SMA model within each cell of the HRAP grid as shown in Fig. 1c. Since the model contains 78 grid cells, a total of 78 × 14 = 1092 parameters are required to perform sensitivity analysis without spatial aggregation. The sampling ranges for these parameters are derived from prior work (Van Werkhoven et al., 2008b) and in consultation with the National Weather Service. Note that the correct choice of sampling ranges is critical to ensure representative model performance in sensitivity analyses (Sobol', 2001; Nossent
This section describes the global sensitivity analysis (SA) of an agro-climatic model embedded in a decision support system (DSS) for the water management of vineyards in the Languedoc-Roussillon region, France. The DSS is used in real time to recommend irrigation amounts in order to maintain optimal vine water stress dynamics, based on the production objective targeted by the winegrower (table wine, aging or laying-down wine, etc.). A major characteristic of agro-climatic models is the difficulty of estimating the numerous input parameters, because field measurements are both costly and tedious. This is particularly true when soil-related parameters are involved - which is the case here - because their estimation requires subsoil measurements. The operational use of the model thus requires finding the right balance between data-friendliness and precision: the fewer input parameters asked of the end-user, the better.
The use of a Gaussian Process to emulate the APSIM–Sugar model was an efficient and effective alternative for global sensitivity analysis. For this analysis, 800 simulations in APSIM were required (400 parameter sets × 2 treatments). By comparison, the extended-FAST method as implemented by Zhou et al. (2014) would have required as many as 28 000 simulations in APSIM (1000 parameter sets × 14 parameters × 2 treatments). In future, emulator accuracy could be improved by removing parameters found to have negligible influence on key agronomic parameters. Improvements could also be made if more realistic information about the prior parameter distributions were identified and incorporated into the analysis. Currently the GEM-SA software allows only uniform or normal prior distributions. Incorporating different prior distributions, when known, may affect the results. Furthermore, when uniform distributions are used, the chosen range will affect the results. For example, a change in LS was more influential at lower values (Figure 1). Reducing the prior distribution of LS to lower values could potentially increase the relative influence of LS. This methodology could be extended to include other likely influential parameters across a wider range of environments and crop classes to assess potential interactions. Future research should also consider the first-order interactions between influential parameters.
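The emulation idea can be sketched in a few lines: fit a Gaussian process to a small set of expensive simulator runs, then query the cheap predictor in place of the simulator. The sketch below uses a 1-D toy function, a squared-exponential kernel, and a hand-rolled dense solver; the function, lengthscale, and jitter are illustrative assumptions, not GEM-SA's settings.

```python
import math

def rbf(a, b, length=0.3):
    # Squared-exponential (Gaussian) covariance between two 1-D inputs.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, y):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(xs, ys, xq, jitter=1e-6):
    # Posterior mean of a zero-mean GP: k(xq)^T (K + jitter*I)^-1 y.
    K = [[rbf(a, b) + (jitter if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(alpha[i] * rbf(xs[i], xq) for i in range(len(xs)))

# Eleven "expensive" simulator runs, then one cheap emulator query.
simulator = lambda x: math.sin(2 * math.pi * x)
xs = [i / 10 for i in range(11)]
ys = [simulator(x) for x in xs]
pred = gp_mean(xs, ys, 0.37)
```

Once fitted, thousands of emulator queries cost essentially nothing, which is what makes variance-based sensitivity analysis affordable on the surrogate.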
The seasonal and spatial distribution of incoming solar radiation (insolation) at the top of the atmosphere is determined by three astronomical parameters: Earth's eccentricity, the longitude of the perihelion, and Earth's obliquity. The variation in these parameters causes sufficient changes in insolation to significantly affect climate, such as the distribution of surface temperature, vegetation cover, monsoon rainfall, Arctic sea ice, etc. These changes can be simulated and studied by means of experiments with global climate models. One classical approach consists in identifying two epochs in the past for which sufficient data are available, running the climate model (with the implicit assumption that the simulated climate is quasi-stationary with respect to the astronomical forcing), and then comparing the two resulting simulated climates. This is the approach followed, for example, by the Paleoclimate Modelling Intercomparison Project (Braconnot et al., 2007).
Additionally, SA can also be used to discover technical defects in a model, identify the key areas of input, discern the priority of research, simplify a model, and so on. Song et al. reviewed the common methods and application areas of SA. Saltelli et al. concentrated on the application of SA in chemical models. Borgonovo et al. studied measures of uncertainty and sensitivity, and Perz et al. summarized the global sensitivity analysis (GSA) and uncertainty analysis (UA) methods applied to ecological resilience. Although global sensitivity analysis is used in different fields of science, its use in the fuel cell field has rarely been studied. For example, Li et al. established a dimensionless steady-state calculation model for a fuel cell and evaluated the influence of various parameters on outputs using multi-parameter sensitivity analysis (MPSA) based on this model. Srinivasulu et al. focused on a sensitivity investigation of a proton exchange membrane fuel cell (PEMFC) electrochemical model using MPSA, aiming to determine the extent to which each parameter affects the modelling results.
Computing these sensitivities requires solving the original optimization problem with fixed parameters, followed by multiple large linear system solves. The randomized eigenvalue solver allows the loop over linear system solves to be parallelized, reducing the wall clock time to approximately that of four large linear system solves. The primary limitation of the method is that the sensitivities are local in parameter space and hence need to be evaluated at several different parameter samples. Challenges in high-dimensional sampling prohibit a complete exploration of parameter space; however, as observed in the numerical results, local sensitivities may be taken at a modest number of sample points and inferences may be drawn if the results do not change significantly between samples. Theoretical development is needed to determine when sparse sampling is sufficient. A possible extension of this work is to perform adaptive sampling in the parameter space and/or to utilize a multi-fidelity approach; this is made possible by the efficiency of the local sensitivity computation.
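The idea of evaluating local sensitivities at several samples and checking that the conclusions are stable can be illustrated with plain finite differences, as a stand-in for the adjoint/eigenvalue machinery; the toy model and sample points below are assumptions.

```python
def local_sensitivities(f, x, h=1e-6):
    """Central finite-difference gradient of f at the point x."""
    grads = []
    for i in range(len(x)):
        up, dn = list(x), list(x)
        up[i] += h
        dn[i] -= h
        grads.append((f(up) - f(dn)) / (2 * h))
    return grads

# Toy model: df/dx0 = 2*x0 varies over parameter space, while
# df/dx1 = 3 is constant, so the inference "x1 matters everywhere"
# survives a check across several samples.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
samples = [[0.2, 0.5], [0.8, 0.1], [0.5, 0.9]]
grads = [local_sensitivities(f, s) for s in samples]
```

Comparing the gradient components across the samples shows which local conclusions generalize and which are point-specific.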
Most estimation procedures for the Sobol' indices and Shapley effects are based on Monte Carlo sampling. These methods require large sample sizes in order to achieve a sufficiently low estimation error. When dealing with costly computational models, a precise estimation of these indices can be difficult or even unfeasible. Therefore, the use of a surrogate model (or metamodel) instead of the actual model can be a good alternative and can dramatically decrease the computational cost of the estimation. Various kinds of surrogate models exist in the literature. In this paper, we are interested in the use of kriging metamodels. One particular approach proposes an estimation algorithm for the Sobol' indices using kriging models which also provides the metamodel and Monte Carlo errors.
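As an illustration of the permutation definition behind Shapley effects, the sketch below computes them exactly for a toy case where the explained variance Var(E[Y|X_S]) has a closed form: Y = X1 + X2 with standard Gaussian inputs of correlation rho (an assumed example, not taken from the paper).

```python
import math
from itertools import permutations

def shapley_effects(players, explained_var):
    """Exact Shapley attribution of Var(Y), averaging the incremental
    explained variance of each input over all input orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        seen = frozenset()
        for p in order:
            phi[p] += explained_var(seen | {p}) - explained_var(seen)
            seen = seen | {p}
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

# Y = X1 + X2, Xi ~ N(0, 1), Corr(X1, X2) = rho.  Closed forms:
# Var(E[Y | Xi]) = (1 + rho)^2 and Var(Y) = 2 + 2*rho.
rho = 0.5
def explained_var(s):
    if not s:
        return 0.0
    if len(s) == 2:
        return 2.0 + 2.0 * rho
    return (1.0 + rho) ** 2

phi = shapley_effects(("X1", "X2"), explained_var)
```

By symmetry each input receives half of Var(Y), i.e. 1 + rho; unlike Sobol' indices, the effects sum to the total variance even with correlated inputs.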
used to quantify the uncertainty and important input factors controlling these projections. Global sensitivity analysis (GSA) apportions the total output uncertainty simultaneously onto all the uncertain input factors described by marginal probability density functions, and is thus preferred over the local, one-factor-at-a-time sensitivity analyses that have been previously reported (Homma and Saltelli, 1996; Saltelli, 1999). Monte Carlo filtering can identify sets of model simulations and input factors that meet a specified criterion or threshold. Thus global sensitivity analysis and Monte Carlo filtering offer an opportunity to gain insight into the sources of uncertainty and the drivers of particular types of wet/dry behavior when estimating future water deficit under projected climate change.
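A minimal Monte Carlo filtering sketch (the two-factor model and behavioral threshold are assumptions): sample the inputs, split the runs by a criterion on the output, and compare the input distributions between the two sets.

```python
import random

random.seed(1)

def model(a, b):
    return a + b  # toy stand-in for a simulated water deficit

behavioral, non_behavioral = [], []
for _ in range(5000):
    a, b = random.random(), random.random()
    # "Behavioral" runs are those exceeding the chosen threshold.
    (behavioral if model(a, b) > 1.2 else non_behavioral).append((a, b))

def mean(vals):
    return sum(vals) / len(vals)

# A factor whose distribution shifts between the two sets is a driver
# of the behavior; here both factors shift upward in the behavioral set.
shift_a = mean([a for a, _ in behavioral]) - mean([a for a, _ in non_behavioral])
shift_b = mean([b for _, b in behavioral]) - mean([b for _, b in non_behavioral])
```

In practice a distributional test (e.g. Kolmogorov–Smirnov) replaces the simple difference of means used here.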
Abstract. Complex, software-intensive, technically advanced, and computationally demanding models, presumably with ever-growing realism and fidelity, have been widely used to simulate and predict the dynamics of the Earth and environmental systems. The parameter-induced simulation crash (failure) problem is typical across most of these models despite considerable efforts that modellers have directed at model development and implementation over the last few decades. A simulation failure mainly occurs due to the violation of numerical stability conditions, non-robust numerical implementations, or errors in programming. However, the existing sampling-based analysis techniques such as global sensitivity analysis (GSA) methods, which require running these models under many configurations of parameter values, are ill equipped to effectively deal with model failures. To tackle this problem, we propose a new approach that allows users to cope with failed designs (samples) when performing GSA without rerunning the entire experiment. This approach deems model crashes as missing data and uses strategies such as median substitution, single nearest-neighbor, or response surface modeling to fill in for model crashes. We test the proposed approach on a 10-parameter HBV-SASK (Hydrologiska Byråns Vattenbalansavdelning modified by the second author for educational purposes) rainfall–runoff model and a 111-parameter Modélisation Environmentale–Surface et Hydrologie (MESH) land surface–hydrology model. Our results show that response surface modeling is a superior strategy, out of the data-filling strategies tested, and can comply with the dimensionality of the model, sample size, and the ratio of the num-
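The data-filling idea can be sketched on a toy one-parameter sample in which two runs crashed (all values assumed for illustration): median substitution replaces a crash with the median of the successful runs, while nearest-neighbor substitution borrows the output of the closest successful run in parameter space.

```python
# Toy 1-parameter GSA sample; None marks runs that crashed.
params  = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
outputs = [1.0, 1.2, None, 1.7, None, 2.3, 2.6]

valid = sorted(y for y in outputs if y is not None)
median = valid[len(valid) // 2]

def nearest_neighbor_fill(i):
    # Borrow the output of the closest successful run in parameter space.
    j = min((k for k, y in enumerate(outputs) if y is not None),
            key=lambda k: abs(params[k] - params[i]))
    return outputs[j]

median_filled = [y if y is not None else median for y in outputs]
nn_filled = [y if y is not None else nearest_neighbor_fill(i)
             for i, y in enumerate(outputs)]
```

Median substitution ignores where the crash sits in parameter space, which is why neighbor- and surface-based strategies generally preserve the output's parameter dependence better.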
The case is designed to inculcate the principles of the scientific method and critical thinking by requiring the student to proceed through a series of four modules. The first module establishes a traditional baseline valuation for a global energy project, and each of the subsequent three modules forces a reconsideration of the baseline projections because of future uncertainties involving an environmental threat to an endangered species, a litigation risk involving illicit payments, and a concern involving professional ethics.
The model extracted is simple, tracking only the head, torso, hands and feet, as the focus of this research is determining the activities and relationships of people in crowded scenes. The system is also capable of detecting carried objects. This approach is tested on around 200 sequences, operating at a rate of approximately 25 Hz for low-resolution imagery. The approach described in [Ning 04a] employs the Condensation framework [Isard 98] to track walking people, using a learned dynamical model and a manually defined shape model to assist the recovery of full-body kinematics. Successful extraction is demonstrated with 6 subjects walking in an indoor environment and 20 outdoor subjects, although the outdoor environment is relatively clean. Learned dynamical models are also employed in [Urtasun 04a], using PCA to extract a motion model for walking and running activities. This approach is evaluated on a small number of indoor video sequences. [Yoo 03] derives a skeletal model of a walking person from their silhouette using a mean anatomical segmentation. Although this approach is restricted to clean indoor data, it is evaluated on a large number of subjects (100, with 3 sequences per subject). In [Zhang 04] a 2D polygonal mesh is fitted to walking subjects, using a learned human shape model to guide a Bayesian template matching approach. This approach is demonstrated to operate effectively on the outdoor portion of the Southampton Gait Database (see Section 1.4), although the computational requirements of the approach are somewhat high, operating at a rate of around one frame per minute.
For the purpose of our analysis, we assume that the most likely severity (MS) is interpreted as the median of the individual loss severity distribution. We consider only sub-exponential distributions from the shape-scale family for sensitivity analysis, as these are commonly used in operational risk modelling and are suggested in the AMA guidelines. Sub-exponential distributions are those with slower tail decay than the exponential distribution. This class includes the Weibull distribution (shape < 1), the lognormal distribution, and the Pareto and Burr distributions, among others. We have not considered the gamma distribution or the Weibull distribution (shape > 1), as these are thin-tailed distributions. We have also not considered cases where the Pareto shape parameter declines below 1, as an infinite-mean distribution would result in unrealistic capital figures.
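For a lognormal severity, interpreting MS as the median pins down the scale parameter directly, since the median of LogNormal(mu, sigma) is exp(mu) regardless of sigma. A small simulation (the MS value and sigma grid are assumed for illustration) shows the sample median staying at MS while the tail heaviness varies:

```python
import math
import random

random.seed(2)

MS = 50_000.0          # assumed most likely severity (median)
mu = math.log(MS)      # lognormal scale parameter implied by MS

medians = {}
for sigma in (0.5, 1.0, 2.0):
    draws = sorted(math.exp(random.gauss(mu, sigma)) for _ in range(100_001))
    medians[sigma] = draws[50_000]
# The sample median is ~MS for every sigma, while the theoretical mean
# exp(mu + sigma**2 / 2) grows rapidly with sigma (heavier tail).
```

This separation of location (MS) from tail heaviness (sigma, or shape more generally) is what makes MS a convenient anchor for the sensitivity analysis of shape-scale families.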
We consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model.
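As an illustration of screening for noninfluential parameters, the sketch below computes a simplified Morris-style mu* (the mean absolute elementary effect) using independent one-at-a-time perturbations rather than the full trajectory design; the toy model is an assumption.

```python
import random

random.seed(3)

def morris_mu_star(f, dim, runs=200, delta=0.1):
    """Mean absolute one-at-a-time elementary effect for each input."""
    effects = [[] for _ in range(dim)]
    for _ in range(runs):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        y0 = f(x)
        for i in range(dim):
            xp = x[:]
            xp[i] += delta
            effects[i].append(abs((f(xp) - y0) / delta))
    return [sum(e) / len(e) for e in effects]

# Toy model: x2 (coefficient 5) dominates, x1 matters less, and x3 is
# noninfluential -- a candidate for fixing at a nominal value.
f = lambda x: x[0] + 5.0 * x[1]
mu_star = morris_mu_star(f, 3)
```

A mu* near zero flags a parameter as noninfluential; verification then consists of fixing it and confirming the outputs barely change.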
Our SA shows that the choice of sediment transport formula (SED) has a very strong impact on the model functions. As sediment transport formulae are also integrated into other LEMs and geomorphic models, they will affect their outcomes too. Looking at the sediment transport formulae themselves, Gomez and Church (1989) tested 11 different sediment transport formulae using the same data sets and showed widespread variation in predictions – in some cases over orders of magnitude. The variation in the model performance can be explained by the derivation of the sediment transport formulae themselves, which are often theory-based but fitted to limited laboratory and field data, sometimes representing temporal averages over equilibrium conditions (Gomez and Church, 1989). The formulae do not, and were likely never intended to, represent the full variation of actual flow conditions in a natural river. As LEMs commonly amalgamate a set of geomorphic models or transport formulae, their perfor-
Noncompliance with a therapeutic regimen is a real public health problem with tremendous socioeconomic consequences. Instead of direct intervention with patients, which can add extra burden to an already overloaded health system, alternative strategies oriented toward drugs' own properties turn out to be more appealing. The aim of this study was to establish a rational way to delineate drugs in terms of their “forgiveness”, based on drugs' PK/PD properties. A global sensitivity analysis has been performed to identify the parameters most sensitive to dose omissions. A Comparative Drug Forgiveness Index (CDFI), to rank drugs in terms of their tolerability to noncompliance, has been proposed. The index was applied to a number of calcium channel blockers, namely benidipine, nivaldipine, manidipine and felodipine. Using the calculation, benidipine and manidipine showed the best performance among those considered. This result is in accordance with what has been previously reported. The classification method developed here proved to be a powerful quantitative way to delineate drugs in terms of their forgiveness and provides a complementary decision rule for clinical and experimental studies.
One general approach to this end has been to perform snapshot experiments for specific time slices in the past. The general circulation model is run with a particular set of initial conditions for a perpetual year for a long computational time until equilibrium is reached. The epoch used for defining the astronomical forcing and boundary conditions is one for which specific efforts are being undertaken to collect observations. This is the general spirit of projects such as COHMAP (Anderson et al., 1988) and PMIP (Braconnot et al., 2007). Specifically, the COHMAP project focused on a series of time slices spaced every 3000 years throughout the deglaciation (Kutzbach and Guetter, 1986; Anderson et al., 1988), while PMIP historically focused on the mid-Holocene and the Last Glacial Maximum, though on this basis an increasing number of periods are being considered, including the Eemian (Braconnot et al., 2008) and the last interglacials (Yin and Berger, 2012).
measurement, simulation and analytic modeling. The measurement technique applies only to existing systems, so it is not suitable for performance evaluation in the early stages of software development. While an analytic model captures the essence of the modeled system as a set of mathematical equations, a simulation model "mimics" the structure and behavior of the real system. Simulation models are less constrained in their modeling power, so they can capture more details. However, simulation models are, in general, harder to build and more expensive to solve. In this thesis, analytic modeling is chosen because its cost (in terms of time and money) is the lowest among the three. The Layered Queuing Network (LQN), an analytic model, will be used in the quantitative performance analysis during architectural design. LQN modeling is very appropriate for such a use, because the model structure can be derived systematically from the high-level architecture of the system.
Life cycle assessment (LCA) calculates the environmental impact of a product or production process along the entire chain. Input parameters required to describe the production chain can be uncertain due to, e.g., temporal variability or unknowns about the true value of emission factors. Uncertainty in the input parameters will cause an uncertainty around the outcome of an LCA. In this paper, uncertainty can refer to variability or epistemic uncertainty (Chen and Corson 2014; Clavreul et al. 2013) of the input parameters. Variability (e.g. natural, temporal, geographical) is inherent to natural systems and cannot be reduced. Epistemic uncertainty refers to unknowns in the system and can be reduced by gaining more knowledge about the system. Analysing this uncertainty can be done by means of a sensitivity analysis and can help to gain more insight into the robustness of the result, to prioritize data collection or to simplify an LCA model. Many LCA studies have been performed over the last decade, and interest in addressing uncertainty propagation is increasing (Groen et al. 2014; Heijungs and Lenzen 2014; Lloyd and Ries 2007). However, few studies apply a systematic sensitivity analysis to address the effect of input uncertainties on the output (Mutel et al. 2013). An explanation might be that ISO 14044 recommends a sensitivity analysis as part of the LCA framework to identify the importance of the input uncertainties, but does not recommend a specific technique.
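A systematic analysis can start from simple Monte Carlo propagation: sample the uncertain inputs, recompute the LCA outcome, and summarize the spread. The sketch below uses a deliberately tiny inventory with assumed (not real) distributions:

```python
import random

random.seed(4)

# Toy LCA: impact = activity_level * emission_factor, with assumed
# input uncertainties (not drawn from any real inventory).
def draw_impact():
    activity = random.gauss(100.0, 5.0)   # e.g. kg of product
    factor = random.gauss(2.0, 0.4)       # e.g. kg CO2-eq per kg
    return activity * factor

n = 20000
samples = sorted(draw_impact() for _ in range(n))
mean = sum(samples) / n
p05, p95 = samples[int(0.05 * n)], samples[int(0.95 * n)]
# The spread of (p05, p95) around the mean shows how input
# uncertainty propagates to the LCA outcome.
```

Repeating the propagation with one input fixed at its nominal value gives a crude sensitivity measure: the reduction in output spread attributable to that input.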