Finally, we have only evaluated the preferences of mostly healthy people from the UK general public at a given point in time. If preferences change over time, our estimates may no longer be valid, although we do not expect any dramatic departures from what we have reported. A possible avenue for future research is to repeat the analysis here using panel data on discrete choices collected over time. That aside, it is well known that end-user preferences for healthcare interventions in healthy states are not the same as when people are in sick states, and that people make decisions behind a “veil of experience”, i.e., they prefer products and services that they have previously experienced. Hence, our sample, which was dominated by healthy people, may bias our estimates. We did indeed recognize this issue, and it is for this reason that we linked the variable PRIOR_ILLNESS to the scale parameter in the HMNL model, to partially account for health-state-dependent preferences and the experience-good features of healthcare. Moreover, the variable for prior illness was not a statistically significant predictor of preference heterogeneity in the LCMNL model. We did not ask survey respondents what type of illness (acute or chronic) they had experienced. The variable for prior illness is therefore crude: it says nothing about the number of visits to a health facility in the past year, the severity of illness, or whether this affected respondents’ cognitive abilities. It is then possible that specific patient populations (for example, those suffering from Alzheimer’s, diabetes or some form of cancer) may have preferences that differ from those of the sample we studied. We leave this issue of health-state-dependent preferences for future research.
employment. This will be especially important to necessity entrepreneurs. Also, some sectors are only scalable by adding more personnel, which is the case with most service industries. The choice model sees independent own-account proprietors as favoured in sectors that have low entry barriers and are difficult to scale up, or that, if scalable, can be managed through easy aggregation of small numbers of personnel, especially where roles can be divided between individual partners and/or family and close networks. This can range across all sectors except those where large-scale production is essential, e.g. large-scale steel shipbuilding or iron and steel production. But sole proprietorship is most favoured where higher skills are required, and/or in specialised fields with high knowledge levels. Especially favoured are fields where the knowledge is not easily aggregated and sub-divided. This characterised specialist manufacturers in craft industries (such as watch and instrument making), as well as many professions, such as specialist engineers, architects, doctors and lawyers. Similarly, if the only way to scale up is by increasing personnel, small firms and the individual self-employed can often compete effectively on quality and/or price, as in care industries, much retailing, lodgings, and other services. In the nineteenth century it also characterised industries where large-scale factory manufacture was less able to compete with the individual, such as many small-scale artisan manufactures (jewellery, decorative arts and craft industries, instruments, watch and clock making), sectors where small manufacturers could compete on quality or some types of product specialism (shoes, clothing, many food manufactures), many building and construction trades (painters, plasterers, carpenters, bricklayers), washing and laundry (even after large-scale steam laundries began to take over), and local retail, merchanting and trading.
Many of these sectors have been regarded as ‘traditional’ industries compared to those where factories and corporations developed most strongly, or, in modern times, where electronic trading is possible, but many of them remain important parts of historic and modern small business and sole-trading activity.
Since the seminal papers by Lave and Train and Manski and Sherman, automobile demand and vehicle choice have been the subjects of multiple studies by transport researchers. Most studies (e.g., [3, 4, 8, 10, 21, 34]) are based on disaggregate discrete choice modelling of household behaviour. But some are also based on aggregate sales data, whereby one estimates total demand or the market shares held by various vehicle models (e.g., [1, 5, 14, 20, 22]). Common to most of these studies is that their data sets and methodology are too crude or too incomplete to allow for reliable predictions of the car fleet composition under varying fiscal and regulatory policy options. Some recent studies have, however, come a long way towards modelling the complex, joint decision processes of vehicle choice and usage [6, 9, 18, 19, 28]. The introduction of novel fuel and propulsion technologies, such as battery, (plug-in) hybrid and fuel cell electric vehicles, and the need to combat the exhaust emissions of local and global pollutants from the passenger car fleet, have heightened political interest in the vehicle purchase choices made by private households and firms, and in how these choices can be influenced through fiscal and regulatory penalties and incentives. In Norway, a large number of incentives have been implemented over the last 10–12 years, most importantly a steeply CO2-graduated vehicle purchase tax. These incite a
To properly represent the latent nature of needs-satisfaction and the correlation it introduces between the observed leisure choices and responses to the subjective needs-satisfaction statements, we develop a structural equation model (SEM). SEMs are common practice in mathematical psychology for relating a series of indicators to psychometric constructs (e.g. Song and Lee 2012). Recently, SEMs have been introduced in the discrete choice modelling literature to allow for the inclusion of latent constructs as explanatory variables of choices; they are also known as hybrid choice models or integrated choice and latent variable (ICLV) models (e.g., Walker & Ben-Akiva, 2002; Bolduc et al. 2005). The choice model applied in this paper is that of Random Regret Minimization (Chorus, 2010), a regret-minimization-based counterpart of the conventional Random Utility Maximization model. The regret-based approach was chosen based on the empirical performance (model fit and out-of-sample predictive ability) of the regret-based and utility-based approaches on our data. The ICLV model deals with the measurement error that arises because the subjective needs-satisfaction statements are imperfect measures of latent anticipated needs-satisfaction. Moreover, it accounts for the possible existence of a spurious relationship between socio-economic characteristics and leisure activity participation. That is, socio-economic characteristics may explain leisure choice both directly and indirectly by explaining variations in latent needs-satisfaction. The ICLV model should thus be preferred over the direct inclusion of the subjective needs-satisfaction statements as explanatory variables in the choice model.
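In generic notation (the symbols below are illustrative, not taken from this paper), an ICLV model of this kind combines a structural equation, a measurement equation, and a regret-based choice kernel:

```latex
\begin{align*}
\eta_n &= \Gamma z_n + \zeta_n
  && \text{(structural: socio-economics $z_n$ drive latent needs-satisfaction $\eta_n$)}\\
I_n &= \Lambda \eta_n + \varepsilon_n
  && \text{(measurement: statements $I_n$ as noisy indicators of $\eta_n$)}\\
RR_{in} &= \sum_{j \neq i} \sum_{m} \ln\!\bigl(1 + e^{\beta_m (x_{jmn} - x_{imn})}\bigr),
\qquad P_n(i) = \frac{e^{-RR_{in}}}{\sum_j e^{-RR_{jn}}}
\end{align*}
```

The last line is the Random Regret Minimization kernel of Chorus (2010): alternative $i$ accrues regret from every attribute $m$ on which some other alternative $j$ outperforms it, and the alternative with the smallest random regret is chosen.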
In recent years a large amount of research has centered on the estimation of structural discrete choice dynamic programming models of rational behavior. This type of model represents a theoretically appealing approach to modelling situations in which a forward-looking agent makes decisions in the presence of uncertainty about future events. Existing research shows that dynamic programming models are a valuable tool for examining a wide range of topics spanning many different fields of
The developed function, which predicts bicyclists’ routes, can afterwards be used for inexpensive and fast evaluation of possible changes to the infrastructure or to traveller information services for bicycle routes. The resulting algorithmic framework can be useful for all cyclists travelling within the area. Furthermore, the dataset can be enlarged by adding all kinds of information in order to make it more accurate and reliable. Moreover, this network could be used for other modes of transport or any other purpose, avoiding a repeat of the data collection process carried out by the project team. Cycling is a growing mode of transport thanks to its sustainability, safety and healthiness. It is worthwhile to pay more attention to this field and to make it more attractive for everyone. This paper improves the quality of service for this mode of transport, gives detailed planning information to its users, and is able to predict the route choice of the average bicyclist in Norrköping.
We employed a DCE technique with eight pairwise choice situations, each with six characteristics. Respondents had to choose eight times between treatment A and B. The coefficients were calculated using the maximum likelihood method. Depending on the assumed underlying distribution function, different estimation methods (in most cases probit or logit estimations) were used [17,20-23]. We tested models with all the sociodemographic variables in order to explore differences between subgroups. We generated interaction terms of each attribute with each parameter as product terms (i.e. each attribute was interacted with age, sex and so forth). We then calculated a model with all the main effects and all the interactions of one parameter (“forced entry”), that is, a single model for the whole set of variables. No strata or separate models for subgroups were applied, because in that case we would have had to deal with different constants. The combined “parsimonious model” was obtained by fitting a model with the main effects and all significant interactions of all parameters resulting from the analysis above, and then reducing the model by eliminating the non-significant interactions step by step. The resulting model contained all the main effects and the significant interactions. The aim of this model was to show all the subgroup effects “at a glance”.
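The product-term construction described here can be sketched as follows (attribute and covariate values are illustrative, not the study’s data):

```python
import numpy as np

# Effect-coded attribute columns for three choice situations (illustrative).
attributes = np.array([[1, -1],
                       [-1, 1],
                       [1, 1]], dtype=float)
# Sociodemographic covariates, e.g. age and sex (assumed values).
covariates = np.array([[45, 0],
                       [32, 1],
                       [60, 1]], dtype=float)

# Interact each attribute with each covariate: one product-term column per pair.
n, p = attributes.shape
q = covariates.shape[1]
interactions = np.empty((n, p * q))
for j in range(p):
    for k in range(q):
        interactions[:, j * q + k] = attributes[:, j] * covariates[:, k]

# "Forced entry" design matrix: all main effects plus all interactions.
X = np.hstack([attributes, covariates, interactions])
print(X.shape)  # (3, 8)
```

From this full design, the parsimonious model would be reached by refitting while dropping the non-significant interaction columns one at a time.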
This paper presents Discrete Element Model simulations of the packing of non-cohesive flexible fibres in a cylindrical vessel. No interstitial fluid effects are modelled. Each fibre-particle is modelled as a series of connected sphero-cylinders. In an initial study each particle is modelled as a single rigid sphero-cylinder; the method has been used before, but this study considers higher aspect ratios, up to 30. This posed some modelling challenges in terms of stability, which were overcome by imposing limits on the particle angular velocity. The results show very good agreement on packing volume fraction with experimental data in the literature and with more detailed in-house experiments. Model results on particle orientation are also shown. The model is then developed to include flexibility by connecting sphero-cylinders as sub-elements to describe a particle. Some basic tests are shown for the joint model that connects the sub-elements. The simulation results show trends similar to the rigid particle results, with increased packing fraction. The effects of the number of sub-elements, joint properties and contact friction are examined. The model has the potential for predicting the packing of fibrous particles and fibre bundles relevant to the preparation of preforms for the production of discontinuously-reinforced polymer, ceramic and metallic matrix composites.
Asphaltic materials are traditionally modelled using continuum-based approaches and finite element programs; however, this approach does not focus on the micromechanical behaviour of the mixture. An alternative approach is to use the discrete element method (DEM), which is commonly used to model the behaviour of granular materials in order to gain micromechanical insight. The first notable use of DEM to model asphaltic materials (to the authors’ knowledge) was by Rothenburg et al. (1992), followed by work by authors such as Chang & Meegoda (1993), Meegoda & Chang (1994) and You & Buttlar (2004, 2006), all of which were 2D models. Various three-dimensional models have since been developed, for example You et al. (2007), Collop et al. (2004, 2006, 2007), Carmona et al. (2007) and Liu & You (2009); these generally modelled idealised asphalt mixtures using various
contact forces does not consider any additional contacts on a particle; however, they took the particle coordination number into account by only allowing particles with 3 or fewer contacts to break. This imposition, however, is somewhat artificial, and may prevent a true fractal distribution from emerging: if only the smallest particles may break (larger particles will have more contacts), eventually larger particles (although only very few of them) will need to fragment in order to maintain fractal proportions. This casts doubt on the suitability of their breakage criterion, as does the fact that their model did not obey conservation of mass when replacing broken particles, meaning it would not be capable of modelling the evolution of voids ratio. This breakage criterion was also used by Marketos and Bolton (2009) and Elghezal et al. (2013), but in three dimensions, measuring particle stress as F_max/d
Abstract. Compaction behaviour and the mechanical response of a compact show strong dependence on particle shape. In this study, a numerical model based on the discrete element method (DEM) was developed to study the compaction behaviour of spheroidal particles. In the model, particle shape was approximated by gluing multiple spheres together. A bonded particle model was adopted to describe the interparticle bonding force. The DEM model was first validated by comparing the properties of packings of spheroids (packing density, coordination number) with literature data, and then applied to both die compaction and unconfined compression. In die compaction, the effect of aspect ratio on densification was mainly due to differences in the initial packing. In unconfined compression, the increase in compressive strength with increasing aspect ratio was attributed to the increase in the number of interparticle bonds. The findings facilitate a better understanding of the relation of particle shape to compaction behaviour and compact strength.
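One simple way to realise the multi-sphere idea is to place overlapping spheres along the major axis of the spheroid with radii following the local cross-section of the ellipse. The sketch below is a generic illustration of that construction, not the authors’ code; sphere count, spacing and the radius rule are assumptions.

```python
import numpy as np

def multisphere_spheroid(a, b, n_spheres):
    """Approximate a prolate spheroid (semi-axes a >= b) by glued spheres.

    Sphere centres are spaced evenly along the major axis, kept inboard so
    the end spheres fit, and each radius follows the ellipse cross-section
    at that station (a simple, approximate choice).
    """
    x = np.linspace(-(a - b), a - b, n_spheres)       # centre positions
    r = b * np.sqrt(1.0 - (x / a) ** 2)               # local ellipse radius
    return x, r

# Aspect ratio 2 spheroid built from 5 glued spheres (illustrative values).
centres, radii = multisphere_spheroid(a=2.0, b=1.0, n_spheres=5)
```

The middle sphere recovers the minor semi-axis b, and the radii taper symmetrically towards the ends, giving a clumped particle whose envelope approximates the spheroid.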
A cloaking tool allows the individual to only show those attributes that are important to them. This might be useful if there are many attributes that can describe the alternative. It might also justify making relatively obscure information available within the choice environment, as that information need not overwhelm those people who do not wish to see it. A literature has developed examining various attribute processing rules, where one such rule that the decision maker might employ is to ignore or not attend to certain attributes when making their choice (e.g. Rose et al., 2005). There is evidence to suggest that attribute attendance as stated by the respondent is not reliable, and that sounder methodologies include a stochastic treatment of attribute attendance (Hensher et al., 2007) or the use of other model outputs such as conditional parameter estimates (Hess and Hensher, 2010). Cloaking tools might not cover all attribute nonattendance, as revealed attributes might still be ignored, but attributes that are not revealed can be definitively considered as not attended to, and removed from the utility expressions of a choice model.
Abstract: Agent-based modeling is a promising method to investigate market dynamics, as it allows modeling the behavior of all market participants individually. Integrating empirical data in the agents’ decision model can improve the validity of agent-based models (ABMs). We present an approach that uses discrete choice experiments (DCEs) to enhance the empirical foundation of ABMs. The DCE method is based on random utility theory and therefore has the potential to enhance the ABM approach with a well-established economic theory. Our combined approach is applied to a case study of a roundwood market in Switzerland. We conducted DCEs with roundwood suppliers to quantitatively characterize the agents’ decision model. We evaluate our approach using a fitness measure and compare two DCE evaluation methods, latent class analysis and hierarchical Bayes. Additionally, we analyze the influence of the error term of the utility function on the simulation results and present a way to estimate its probability distribution.
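As a sketch of how DCE estimates can drive an agent’s decision under random utility theory (all coefficient and attribute values below are assumed, not the study’s estimates), a single supplier-agent choosing among three offers looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical part-worths standing in for DCE-estimated coefficients,
# e.g. for a long-term contract indicator and a price discount.
beta = np.array([0.8, -0.05])

# Attribute levels of three offers the agent chooses between (illustrative).
offers = np.array([[1, 10.0],
                   [0,  2.0],
                   [1, 25.0]])

v = offers @ beta                       # deterministic utilities
# Random utility: add i.i.d. Gumbel draws, the error distribution that
# underlies the multinomial logit model used to analyse DCE data.
u = v + rng.gumbel(size=v.shape)
choice = int(np.argmax(u))              # agent picks the max-utility offer

# The same error assumption gives closed-form logit choice probabilities:
p = np.exp(v) / np.exp(v).sum()
```

Simulating the Gumbel draw explicitly, rather than sampling from `p`, is what lets one study the influence of the error term’s distribution on the market outcome.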
It is clear that adding a univariate random variable to all random utilities in an ARUM does not affect choice probabilities. This means that the random utility vector U is not identified from observation of discrete choices. The following proposition makes this insight a little more precise by establishing a converse, namely that if two ARUMs yield identical choice probabilities, then their random utility vectors must be identical up to an additive random variable. This is then the limit for identification in an ARUM: it is possible to identify the distribution of U up to an additive univariate random variable, and no more.
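The invariance behind this identification limit can be written in one line: for any univariate random variable $\eta$ added to every component of $U$,

```latex
P\bigl(i \mid U + \eta\mathbf{1}\bigr)
  = \Pr\bigl(U_i + \eta \ge U_j + \eta \ \ \forall j\bigr)
  = \Pr\bigl(U_i \ge U_j \ \ \forall j\bigr)
  = P\bigl(i \mid U\bigr),
```

so observed choice probabilities pin down the distribution of $U$ only up to such an additive term.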
Calfee and Winston’s main finding is that the value of congested travel time (or, alternatively, the willingness to pay to reduce the time spent travelling in congested traffic conditions) is estimated to be between 14 and 26 percent of the gross hourly wage, which is considerably lower than the SVOT derived from mode-choice studies (Small, 1992a, finds in a review of mode choice models that a reasonable average is 50% of the gross wage). Furthermore, the value of time is found to be insensitive to alternative uses of the revenues arising from the toll. This finding contrasts with Small (1983, 1992b) and Mohring and Anderson (1994), who suggest that the key to political acceptance of a congestion charge lies in how the revenues from the toll are spent. Calfee and Winston conclude that their findings help explain why there has been little public support for tolls in the US and elsewhere, since it is doubtful that the net benefits from a toll are high. In spite of this, however, the authors support the widely held claim that other measures directed towards reducing congestion (expanding public transportation, implementing intelligent vehicle road systems that guide motorists onto the least congested routes) may be even less desirable in the long run, since “commuters who have previously avoided congested roads by, for example, driving during off-peak hours, will be lured back onto the roads by the promise of uncongested travel”.
angle of repose in the DEM simulation as that measured for the real pebbles during the execution of a simple slump test in the laboratory. This assumption was made because, when using spheres to represent rocks, it is very difficult to achieve an angle of repose similar to that of ballast aggregate; the reader is referred to McDowell et al. (2011) for details of the modelling of the flow of ellipsoidal rocks. Energy dissipation at contacts was modelled by the viscous damping model (Itasca, 2008), characterized by the critical damping ratio (Ginsberg & Genin, 1984). The critical damping ratio was calibrated by comparing the results of model simulations of drop tests with corresponding laboratory drop tests; that is, individual pebbles were dropped onto the bottom wall. The reader is referred to the work of McDowell et al. (2011) for details of the determination of the particle-particle friction coefficient and critical damping ratio. The minimum size of the breakable spheres was set at a diameter of 4 mm. The minimum particle size was defined to control the time step of the DEM calculation, the maximum number of particles, and thus the computational time.
these approaches provides concrete guidance for implementers on which combination of characteristics will achieve the greatest uptake. Some studies have used discrete choice experiments (DCEs) to inform programme design and identify potential barriers and facilitators of uptake [19–21]. DCEs are a survey-based approach to eliciting user preferences. They allow the estimation of user values in the absence of observable markets, where services are provided for free or have not yet been introduced. They can measure the strength of preferences between service attributes, for example valuing waiting times, prices and provider gender independently. Lastly, they can identify where preferences differ between individuals, which is particularly useful when complex interventions include targeting specific user groups.
The analysis was performed using STATA version 11 (StataCorp, College Station, Texas). The weights of the attributes were determined with a logistic regression model for panel data, with the respondents’ choices as the dependent variable and the attributes as independent variables. We used effect coding, with the levels of the attributes set to −1 or 1 and the constant set to zero. As the sum of the attributes is zero for every scenario, one attribute has to be left out because of collinearity and serves as the reference. The exponent of the coefficient of a specific attribute in the model can then be interpreted as its influence on the decision relative to the influence of the omitted reference attribute.
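A minimal sketch of this interpretation step (attribute names and coefficient values are hypothetical, not taken from the study):

```python
import math

# Effect coding: each two-level attribute enters as +1/-1, the constant is
# fixed at zero, and one attribute is omitted as the reference.
# Hypothetical panel-logit coefficient estimates for the included attributes:
coefs = {"short_wait": 0.42, "female_provider": 0.18}

# exp(b) is then read as the attribute's influence on the choice relative
# to the influence of the omitted reference attribute.
relative_influence = {k: round(math.exp(b), 3) for k, b in coefs.items()}
print(relative_influence)  # {'short_wait': 1.522, 'female_provider': 1.197}
```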
The remainder of the paper is structured as follows. Section 2 begins with an overview of information-effect studies in stated preferences. We then conduct a brief theoretical modelling exercise which shows that the variance in preferences for a good can vary with informative signals about the good if consumers are Bayesian updaters. Section 3 offers a precise discussion of how scale and preference heterogeneity have been modelled in discrete choice studies. We then set out a new approach to representing differences in unobserved preference and scale heterogeneity in combined datasets, namely differences in mean preference and scale coefficients, as well as differences in their variances. The design and implementation of a choice experiment with two information treatments is then described. Results from applying this framework to our study follow, and we conclude with some observations on implications for future work.
ABSTRACT The paper considers the use of artificial regression in calculating different types of score test when the log-likelihood is based on probabilities rather than densities. The calculation of the information matrix test is also considered. Results are specialised to deal with binary choice (logit and probit) models.