comparing model

Top PDFs for "comparing model":

Simulation of herbicide impacts on a plant community: comparing model predictions of the plant community model IBC-grass to empirical data

Background: Semi-natural plant communities such as field boundaries play an important ecological role in agricultural landscapes, e.g., provision of refuge for plant and other species, food web support or habitat connectivity. To prevent undesired effects of herbicide applications on these communities and their structure, the registration and application are regulated by risk assessment schemes in many industrialized countries. Standardized individual-level greenhouse experiments are conducted on a selection of crop and wild plant species to characterize the effects of herbicide loads potentially reaching off-field areas on non-target plants. Uncertainties regarding the protectiveness of such approaches to risk assessment might be addressed by assessment factors that are often under discussion. As an alternative approach, plant community models can be used to predict potential effects on plant communities of interest based on extrapolation of the individual-level effects measured in the standardized greenhouse experiments. In this study, we analyzed the reliability and adequacy of the plant community model IBC-grass (individual-based plant community model for grasslands) by comparing model predictions with empirically measured effects at the plant community level.

Comparing model and measured ice crystal concentrations in orographic clouds during the INUPIAQ campaign

noting the area swept out by the ice crystal and η the number of splinters produced per µg of rime. η is defined as 350 × 10⁶ splinters kg⁻¹ following Mossop and Hallett (1974), whilst the ice crystals were assumed to be spherical with diameters of 500 µm, and falling at 2 m s⁻¹. As the model resolution is finite, we define the temperature thresholds within which splinters are produced, conservatively using a slightly wider temperature range than Hallett and Mossop (1974), with the production rate set to 0 if the temperature was greater than −2 °C or less than −10 °C. The extended range was to prevent the splinter concentration being underestimated due to any differences between the constant temperature field in the model and the real temperature. The cumulative number of splinters produced along each back trajectory was then calculated to provide a maximum number of splinters that could be produced along the back trajectory. The calculation of the total concentration of ice splinters along the back trajectory assumes that every ice splinter produced along the back trajectory is transported to Jungfraujoch and measured as an ice crystal, which is unlikely as the ice crystals would be reduced along the back …
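
The following is an illustrative sketch (not the authors' code) of the accumulation step described above, using the constants quoted in the excerpt. The riming rate is approximated as swept area × fall speed × liquid water content, and the trajectory inputs (temperatures, liquid water content, time step) are hypothetical assumptions.

```python
import numpy as np

ETA = 350.0e6                     # splinters produced per kg of rime accreted
RADIUS = 250.0e-6                 # m; crystals assumed spherical, 500 um diameter
FALL_SPEED = 2.0                  # m s^-1, assumed fall speed
SWEPT_AREA = np.pi * RADIUS**2    # m^2, area swept out by the crystal
T_WARM, T_COLD = -2.0, -10.0      # deg C, limits of the splinter-production window

def cumulative_splinters(temperature_c, lwc_kg_m3, dt_s):
    """Maximum number of splinters produced along one back trajectory.

    temperature_c : air temperature at each trajectory point (deg C)
    lwc_kg_m3     : supercooled liquid water content at each point (kg m^-3)
    dt_s          : time step between trajectory points (s)
    """
    # Rime accreted per step: swept area * fall speed * liquid water content * dt
    rime_kg = SWEPT_AREA * FALL_SPEED * lwc_kg_m3 * dt_s
    # Production is switched off outside the temperature window
    active = (temperature_c <= T_WARM) & (temperature_c >= T_COLD)
    return np.sum(ETA * rime_kg * active)

# Example: a 30-minute trajectory sampled every 60 s (hypothetical values)
temp = np.linspace(-12.0, -1.0, 30)
lwc = np.full(30, 0.2e-3)          # 0.2 g m^-3
print(cumulative_splinters(temp, lwc, 60.0))
```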

Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

As some of the first laser scanning campaigns carried out for inventory purposes at the turn of the millennium have been repeated in recent years, change estimation assisted by laser data has become an important research area. Bollandsås et al. (2013), Næsset et al. (2013a, 2015), Skowronski et al. (2014), McRoberts et al. (2015), and Magnussen et al. (2015) analysed different approaches to modelling of change in biomass, such as separate modelling of biomass at each point in time followed by estimating the difference, direct modelling of change with different predictor variables, such as the variables at each time point or their differences, and longitudinal models. These modelling techniques have been combined with different design-based and model-based estimators to produce change estimates and confidence intervals. Sannier et al. (2014) investigated change estimation based on a series of maps, which provided the auxiliary data for model-assisted difference estimation. A comprehensive review and discussion of change estimation can be found in McRoberts et al. (2014, 2015). Melville et al. (2015) evaluated three model-based and three design-based methods for assessing the number of stems using airborne laser scanning data. The authors reported that among the design-based estimators, the most precise estimates were achieved through stratification.

Comparing model reuse with model building: an empirical study of learning from simulation

that the building of a computer simulation can be a time-consuming process (Robinson et al., 2004). In fact, given the budget restrictions often found in industry and public sector projects, a simulation modeller may seek ways to reduce the scope of a simulation study; for example, only performing a limited amount of experimentation after the model is built. Thus many research projects in computer simulation have considered the problem of how models can be built more quickly. One possible approach that has gained popularity is the reuse of an existing simulation model specially designed to be used with multiple similar systems. For example, an NHS trust may wish to evaluate the impact and side effects of a policy in operating theatre management on the rest of the hospital. To develop a whole hospital model from scratch is clearly an expensive and time-consuming exercise and may not even be deliverable in time to aid a decision. If, however, a suitable model already existed (Günal and Pidd, 2010, currently provide the only example) that could be reused for the new objective, this would benefit the decision maker in two ways:

Comparing Model and Experimental Results of the Volume of Filtrate during Sludge Dewatering

Considering the graphs plotted in Figures 1-3, it can be observed that the model and experimental results lie very close to each other; in Figure 2, the two curves overlap. This closeness confirms that either the model or the experiment can be used to obtain a good estimate of the volume of filtrate. The results obtained from the research work also indicate that the filter was arranged very well and that the practical work was monitored efficiently. These two factors made the work successful and yielded very good results (Tables 1-3).

Comparing model sensitivities of different landscapes using the ecohydrological SWAT model

Lowland areas are characterised by specific properties, such as flat topography, low hydraulic gradients, shallow groundwater, and a high potential for water retention in peatland and lakes. Sensitivity analyses with the river basin model SWAT were carried out for North German lowland catchments to identify the dominant hydrological characteristics. The results show that groundwater and soil parameters were most sensitive in the studied lowland catchments and were the most influential factors on simulated water discharge. This indicates that the water flow in …

Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

As a caveat, the thresholds used to differentiate highly sensitive, sensitive, and insensitive parameters are based only on the relative magnitudes of the derivatives given in each column, making them subjective and somewhat arbitrary. The thresholds were determined by ranking each column in ascending order and then plotting the relative magnitudes of the derivatives. Results were classified as either highly sensitive or sensitive where the derivative values changed the most significantly. Insensitive parameters had small derivative values that could not be distinguished. Note that different thresholds were used for Tables 3 and 4 since the Box-Cox transformation reduced the original range of RMSE by approximately an order of magnitude. The results in Tables 3 and 4 show that PEST did not detect significant changes in parameter sensitivities for high flow (RMSE) versus low flow (TRMSE) conditions. Also, differences in the time-scales of predictions as well as watershed locations did not significantly change the PEST sensitivity designations in both tables. Overall, PEST found that the parameters for impervious cover (PCTIM, ADIMP) and those for storage depletion rates (UZK, LZPK, LZSK) significantly impacted model performance, especially for daily time-scale predictions. The mean water-equivalent threshold for snow cover (SI), upper zone storage parameters (UZTWM, UZFWM), and lower zone storage parameters (LZTWM, LZFSM, and LZFPM) were classified by PEST as being the least sensitive.
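
A minimal sketch of one reading of the threshold procedure described above: rank the derivative magnitudes for a column in ascending order, then use the two largest jumps between consecutive ranked values as the class boundaries. The parameter names and derivative values below are placeholders, not figures from the paper, and the "two largest jumps" rule is an assumption standing in for the visual inspection the authors describe.

```python
import numpy as np

def classify_sensitivities(names, derivatives):
    """Split parameters into highly sensitive / sensitive / insensitive classes."""
    order = np.argsort(np.abs(derivatives))            # ascending rank, as in the paper
    ranked = np.abs(np.asarray(derivatives))[order]
    jumps = np.diff(ranked)                            # change between consecutive ranks
    # Use the two largest jumps as the (subjective) class boundaries
    cut_hi, cut_lo = sorted(np.argsort(jumps)[-2:], reverse=True)
    classes = {}
    for rank, idx in enumerate(order):
        if rank > cut_hi:
            classes[names[idx]] = "highly sensitive"
        elif rank > cut_lo:
            classes[names[idx]] = "sensitive"
        else:
            classes[names[idx]] = "insensitive"
    return classes

params = ["UZK", "LZPK", "LZSK", "PCTIM", "ADIMP", "SI", "UZTWM"]
derivs = [0.80, 0.65, 0.55, 1.40, 1.10, 0.02, 0.05]    # hypothetical values
print(classify_sensitivities(params, derivs))
```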

Comparing domain- and facet-level relations of the HEXACO personality model with workplace deviance

Personality is most commonly described using five broad domains (i.e., the Big Five or Five-Factor Model): Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional Stability (or Neuroticism) (Goldberg, 1990; McCrae & Costa, 1992). However, re-analyses of lexical data suggest that personality may be more accurately described using six cross-culturally replicable domains (Ashton et al., 2004; Ashton et al., 2014). The most common conceptualization of such a six-dimensional personality framework is the HEXACO model (Ashton & Lee, 2007), which describes personality using the following six broad domains that form the HEXACO acronym: Honesty-Humility, Emotionality, eXtraversion, Agreeableness, Conscientiousness, and Openness to Experience. The HEXACO, just like the Big Five, is hierarchically organized, but differs from the Big Five in several aspects (see Lee & Ashton, 2004, 2018, for a detailed discussion). Two major differences are that it adds a sixth domain called Honesty-Humility, which reflects the tendency to be genuine and fair in dealing with others, and that some facets are shifted from one domain to another (e.g., Sentimentality from Big Five Agreeableness to HEXACO Emotionality). Each of the six broad HEXACO domains contains four facets, amounting to 24 facets in total. Altruism is included as a 25th interstitial facet because of the importance of behaviors associated with this trait for social functioning (K. Lee & Ashton, 2018); this facet loads on Honesty-Humility, Emotionality, and Agreeableness.

Crystallographically amorphous ferrimagnetic alloys: comparing a localized atomistic spin model with experiments

Using the LLG model we show the compositional dependence of the coercivity. The model reproduces qualitatively similar behavior to the experimental results shown in Fig. 3. The systems modeled are 62 500 spins in size due to limits on computational resources; therefore, a single domain state exists and reversal occurs via precessional switching over the energy barrier. Figure 8 shows the results of numerical calculations of the coercive field for a range of compositions of the TM-RE system. The sweep rate applied was 1 T/ns, which was required for computational efficiency. The system was first equilibrated at the given temperature and then the field was ramped in the direction opposing the dominant sublattice. The lines are guides to the eye applied above and below T_M for each composition. Qualitative agreement with the experimental results of Fig. 3 is good, showing that the divergence is due to the magnetization compensation point. This is another validation of the use of this simple atomistic model as a first approximation for this type of ferrimagnet. Complete agreement between experiment and theory is not possible at the moment, as the coercivity of a material depends on many factors, among them the presence of defects, morphology, chemical segregation, formation of magnetic grains, interfacial properties, and the time over which the field is swept, i.e., it is a time-dependent quantity. Quantitative agreement between the LLG simulations and experiments for the whole range of temperatures and compositions is therefore highly computationally expensive. This is because of the …
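
A hedged sketch of the post-processing step implied by the protocol above: after equilibration the field is ramped against the dominant sublattice, and the coercive field is read off where the net magnetization along the field axis changes sign. The magnetization trace below is synthetic and merely stands in for LLG output; it is not the authors' simulation.

```python
import numpy as np

def coercive_field(field_T, mz):
    """Return the field at which mz first crosses zero during the ramp."""
    sign_change = np.where(np.diff(np.sign(mz)) != 0)[0]
    if sign_change.size == 0:
        return None                         # no reversal within the sweep
    i = sign_change[0]
    # Linear interpolation between the two points bracketing the zero crossing
    f0, f1, m0, m1 = field_T[i], field_T[i + 1], mz[i], mz[i + 1]
    return f0 - m0 * (f1 - f0) / (m1 - m0)

# Synthetic example: a ramp from 0 to 5 T with reversal near 2.3 T
field = np.linspace(0.0, 5.0, 501)
mz = np.tanh((2.3 - field) / 0.1)           # toy magnetization trace
print(coercive_field(field, mz))            # ~2.3
```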

Comparing a Multiple Regression Model Across Groups

Comparison of the fit of the model from the Clinical and Experimental programs revealed that there was no significant difference between the respective R² values, Z = 1.527, p > .05. A comparison of the structure of the models from the two groups was also conducted by applying the model derived from the Clinical Program to the data from the Experimental Program and comparing the resulting “crossed” R² with the “direct” R² originally obtained from this group. The direct R² = .541 and crossed R² = .283 were significantly different, Z = 3.22, p < .01, which indicates that the …
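
As an illustration of the first kind of comparison reported above (two independent groups), the sketch below applies Fisher's r-to-z transformation to the multiple correlations. The sample sizes are hypothetical, and the direct-versus-crossed comparison within a single group involves dependent values, so the exact test used in the paper may differ from this one.

```python
import math

def compare_independent_R2(r2_a, n_a, r2_b, n_b):
    """Z statistic for the difference between two independent multiple correlations."""
    z_a = math.atanh(math.sqrt(r2_a))        # Fisher z of R for group A
    z_b = math.atanh(math.sqrt(r2_b))        # Fisher z of R for group B
    se = math.sqrt(1.0 / (n_a - 3) + 1.0 / (n_b - 3))
    return (z_a - z_b) / se

# Hypothetical group sizes for the Clinical and Experimental programs
print(round(compare_independent_R2(0.541, 60, 0.283, 60), 3))
```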

A Bayesian hierarchical model for comparing average F1 scores

This approach largely avoids the above-mentioned perils of NHST, except for the third one on complex performance measures. However, it is known that the value of the Bayes factor can be very sensitive to the choice of prior distribution in the alternative model [9]. Another problem with the Bayes factor is that the null hypothesis can be strongly preferred even with very few data and very large uncertainty in the estimate of the performance difference [9]. Furthermore, generally speaking, a single Bayes factor is much less informative than the entire posterior probability distribution of the performance difference provided by our Bayesian estimation based approach.

This study is aimed at comparing the team-assisted individualization learning model and the scramble learning model toward …

Another cooperative learning model is the Scramble learning model. The Scramble learning model helps the learning process in the classroom become active because it involves all students. The Scramble learning model is used to improve and develop the students' mindset because it requires students to use and combine right-brain and left-brain performance in the learning process (Huda, 2013). In its implementation, it begins with the teacher presenting the material according to the topic studied. The teacher creates study groups and distributes prepared worksheets to each group. Group members compile and match the questions and answers to get the right answer (Ma'ruf, 2018). In students' learning activities, the use of this learning model is very effective and efficient because it helps students understand the subject, encouraging them to think quickly and accurately and to be ready to answer questions from the teacher. Sugiarta (2012) proved that the Scramble learning model can improve students' activity and learning outcomes.

Comparing Technology Adoption Forecasts Generated by an Agent-Based Model and a Scaled and Adjusted Random Utility Model.

Multiple studies have concluded that influence from those physically nearby is significant. Discussed here are two studies, relevant to the simulation introduced in Chapter 4, which focus on the highly localized but significant influence of adopters in a potential adopter's physical vicinity on the adoption of residential solar systems. First, the work of Bollinger and Gillingham [53] found that one additional installation of a residential solar system increases the chance of another installation in the same ZIP code by 0.78%. In addition, the work of Rode and Weber [54] found that seed installations could feasibly increase the number of additional installations in the surrounding kilometer radius by more than one per year. The significance of localized imitation shown in these works is the reason the effect of physical vicinity was added into the social network construction model generated in this work.
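
The toy sketch below shows one simple additive reading of the Bollinger and Gillingham figure: each prior installation in the same ZIP code raises a potential adopter's per-period probability by 0.78 percentage points. The base probability, the additive form, and the cap at 1.0 are assumptions for illustration only; this is not the cited models or the agent-based model developed in the thesis.

```python
def adoption_probability(base_prob, installs_in_zip, effect_per_install=0.0078):
    """Per-period adoption probability for a potential adopter with local peer effects."""
    return min(1.0, base_prob + effect_per_install * installs_in_zip)

# Example: a 2% baseline with 5 prior installations in the same ZIP code
print(adoption_probability(0.02, 5))   # 0.059
```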

Comparing ABET-Accredited IS Undergraduate Programs and the ACM IS 2010 Model Curriculum

The ACM IS 2010 Model Curriculum “consists of seven core courses, which specify the required knowledge units and topics that have to be covered in every IS program” (ACM 2010). The ACM recognizes that the time to cover the core material and the depth to which it can be covered will vary due to the needs of each program. Consequently, while the depth and type of coverage will differ for each program, each core topic must be covered, whether in a specific course or as part of a larger course.


A Model for Comparing Free Cloud Platforms

- They make savings by increasing volume or productivity, thus lowering the cost per unit, project or product.
- They allow running streamlined processes, which leads to high efficiency when the results obtained are compared to the corresponding time and the people involved.


Performance Comparing and Analysis for Slot Allocation Model

of the airline's network or the commercial viability of the flight. As a result, certain slots may not be attractive enough to be actually operated by the assigned airport users, a fact that may lead to waste of a really scarce resource [16]. At present, this situation is very common in China's airports. In particular, the current schedule does not take into account the actual implementation difficulties. Although the scheduled slot is constrained by capacity, it cannot be implemented in actual operation, resulting in the accumulation and propagation of delays [27]. Acquiring the appropriate slots at two congested airports like PEK and PVG is extraordinarily difficult given the scarce capacity at both airports. This difficulty is common, as Debbage [25] has pointed out for many years, but surprisingly it has not yet been clearly integrated into any slot allocation model. Although network-based slot allocation models have emerged in recent years [15, 19], which explicitly consider the problem of flight time matching at hub airports, they are still subject to priority constraints. It is conceivable that if a new entrant is to operate such a competitive route, the priority of its application will be lower than that of applications with grandfather rights when the requested slot happens to be limited by capacity. The new entrant's application may be adjusted or rejected, while applications with grandfather rights may not face this difficulty yet still occupy a scarce slot. This is obviously unreasonable, because it increases the cost of new entrants on this route, which in turn affects the welfare of passengers on this route, although the new entrant may obtain the slot through secondary market transactions. Therefore, incorporating implementation difficulties into CAA rules seems to be one of the promising solutions to the over-concentration of departure …

A 'small-world-like' model for comparing interventions aimed at preventing and controlling influenza pandemics

The community model was based on a complex random graph realistically describing meetings between individuals. We first generated a set of individuals based on a particular demographic profile (gender, age groups, and household sizes) adapted from French national census data [25], in which each individual is assigned to a household and a place of occupation (for example, a school for a child, or a workplace for a working adult). Households and places of occupation were assigned to districts, and children were preferentially assigned to schools located in the district where they lived; 20% of working adults were assigned to workplaces located in other districts. In the reference simulation, 23% of individuals were children, 67% were adults (80% in employment), and 10% were elderly.
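
A minimal sketch, under assumed data structures, of the population-generation step described above: ages are drawn from the stated proportions, everyone gets a district and a place of occupation, children stay in their home district, and 20% of working adults commute to another district. The function and field names and the number of districts are assumptions, not taken from the paper.

```python
import random

N_DISTRICTS = 10                         # assumed number of districts

def make_individual(district):
    u = random.random()
    if u < 0.23:                         # 23% children
        group, occupation = "child", "school"
    elif u < 0.90:                       # 67% adults, 80% of them employed
        group = "adult"
        occupation = "workplace" if random.random() < 0.80 else None
    else:                                # 10% elderly
        group, occupation = "elderly", None

    occ_district = district              # children attend school in their home district
    if occupation == "workplace" and random.random() < 0.20:
        # 20% of working adults commute to a workplace in another district
        occ_district = random.choice([d for d in range(N_DISTRICTS) if d != district])
    return {"group": group, "district": district,
            "occupation": occupation, "occupation_district": occ_district}

# Example: a household of four in district 3
household = [make_individual(3) for _ in range(4)]
print(household)
```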

Comparing hybrid data assimilation methods on the Lorenz 1963 model with increasing non-linearity

based on stochastic EnKF. Both of these papers are examples of using variational methods within an ensemble Kalman filter. Other methods which use ensemble methods to update variational methods have also been popular. Wang et al. (2008) looked into a hybrid ETKF-3DVAR data assimilation method for the Weather Research and Forecasting (WRF) model. The ETKF-3DVAR uses ensemble information within a variational framework. The P_b from the ETKF is combined with the climatological background error covariance matrix in 3DVAR (B_c) at the start of each assimilation window. This is our motivation for the ETKF-4DVAR, which is the same but uses the 4DVAR framework instead of 3DVAR. Other examples such as Fairbarn et al. (2014) and Liu et al. (2008) looked at the four-dimensional ensemble variational method, 4DENVAR. 4DENVAR uses the four-dimensional covariance from an ensemble of model trajectories, which alleviates the need for the tangent linear and adjoint model in 4DVAR. In Fairbarn et al. (2014), both deterministic and stochastic ensemble versions of 4DENVAR (EDA-D and EDA-S) are tested against 4DVAR and the deterministic EnKF [DetEnKF, which is an approximation of the ETKF for small background error …
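
A sketch of the blending step described above, under assumed weights: the ensemble background-error covariance P_b is estimated from the ETKF perturbations and combined with the climatological 3DVAR matrix B_c at the start of the assimilation window. The 50/50 weighting and the small Lorenz-sized example are assumptions; the excerpt does not give the weights used.

```python
import numpy as np

def hybrid_covariance(ensemble, B_c, beta=0.5):
    """Return beta * B_c + (1 - beta) * P_b for an (n_members, n_state) ensemble."""
    perturbations = ensemble - ensemble.mean(axis=0)
    P_b = perturbations.T @ perturbations / (ensemble.shape[0] - 1)
    return beta * B_c + (1.0 - beta) * P_b

# Example on a 3-variable state (the size of the Lorenz 1963 model)
rng = np.random.default_rng(0)
ensemble = rng.normal(size=(20, 3))          # 20 members, hypothetical values
B_c = np.eye(3) * 2.0                        # hypothetical climatological matrix
print(hybrid_covariance(ensemble, B_c))
```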

Comparing TIGGE multi-model forecasts with reforecast-calibrated ECMWF ensemble forecasts

The second post-processing method under discussion is the calibration of single forecast systems with the help of specific training datasets. A number of different calibration methods have been proposed for operational and research applications, and a recent comparison of several methods can be found in Wilks and Hamill (2007). As most calibration methods are based on the idea of correcting the current forecast by using past forecast errors, they require some sort of training dataset. With this set of past forecast-observation pairs, correction coefficients for a regression-based calibration scheme can be determined. It has been shown that such calibration techniques are particularly successful when a “reforecast” training dataset is available (Hamill et al. 2004, 2006, 2008; Hamill and Whitaker 2006; Hagedorn et al. 2008). A reforecast dataset is a collection of forecasts from the past, usually going back for a considerable number of years or decades. In order to ensure consistency between reforecasts and actual forecasts, ideally the reforecasts are produced specifically with the same model and data assimilation system that is used to produce the actual forecasts. The availability of a large number of past forecast-observation pairs consistent with the current model system is a major factor in the success of the calibration technique used in this study.
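
A minimal sketch, assuming a simple linear regression scheme, of the kind of calibration described above: correction coefficients are fitted on past reforecast/observation pairs and then applied to the current ensemble-mean forecast. The regression form, variable names, and synthetic training data are assumptions; the paper's calibration scheme may be more elaborate.

```python
import numpy as np

def fit_calibration(reforecast_means, observations):
    """Least-squares fit of obs = a + b * forecast on the reforecast training dataset."""
    b, a = np.polyfit(reforecast_means, observations, deg=1)
    return a, b

def calibrate(forecast_mean, a, b):
    """Apply the fitted correction to a current forecast."""
    return a + b * forecast_mean

# Hypothetical training data: 30 years of reforecast means and verifying observations
rng = np.random.default_rng(1)
refc = rng.normal(15.0, 5.0, size=30)
obs = 0.9 * refc + 1.5 + rng.normal(0.0, 1.0, size=30)   # synthetic "truth"
a, b = fit_calibration(refc, obs)
print(calibrate(17.0, a, b))       # corrected version of a 17.0 forecast
```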

Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

the forcing-related processes explicitly treated in the models. For instance, the inclusion of modules for chemistry and aerosol microphysics improves the representation of volcanic plume development and hence the climatic impacts of volcanic eruptions (LeGrande et al., 2016). Nevertheless, a multi-model assessment of the volcanic forcing generated by different aerosol climate models for a Tambora-like equatorial eruption produces a substantial ensemble spread, raising questions as to the level of model complexity that is necessary to estimate the range of inherent climate uncertainty (Zanchettin et al., 2016). New reconstructions of solar irradiance have also been developed and progress is expected from a more consistent representation of spectral solar irradiance in the PMIP4 solar forcing datasets (Jungclaus et al., 2017). Improvements in atmospheric chemistry (particularly stratospheric and tropospheric ozone responses) also increase the magnitude of response to solar changes (Shindell et al., 2006). The use of climate model configurations with realistic stratospheric dynamics can also improve the simulation of “top down” mechanisms of solar forcing and the quantification of solar impacts on hydroclimate variability during the CE. In addition to determining changes in the radiative energy input, solar variability can affect the Earth system by altering the chemical composition of the high-latitude middle atmosphere through energetic particle forcing (Seppälä and Clilverd, 2014). This forcing and its transfer mechanisms toward the Earth's surface remain largely unexplored (Zanchettin, 2017), but are receiving increasing attention from the climate modeling community (Anet et al., 2014; Matthes et al., 2017). Future simulations of the CE that include stratospheric processes are anticipated over the next few years, which will provide important new insights into solar influences on hydroclimate.
