Revisiting Random Utility Models

presenting a general approach for RUMs, including parametric and non-parametric models where observations can be full rankings or any form of partial ranking on the choice set. An estimator based on the Monte-Carlo-EM (MC-EM) algorithm is developed for general RUMs. Moreover, three different model specifications are studied. The first specification is a RUM with exponential family distributions. The second specification is a mixture of general RUMs, and the third specification adopts a non-parametric joint utility distribution through kernel density estimators on latent utility scores. For each model, theoretical properties such as identifiability and log-concavity of the likelihood functions are studied. Empirical results establish scalability and efficiency on different datasets. Flexible exponential family distributions, such as the Normal distribution with a variance parameter, perform better than classic models such as Luce's. Moreover, mixture models provide interpretable groups of agents, and non-parametric models introduce a higher predictive power for applications such as rank completion.
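The generative mechanism these models share can be sketched in a few lines: each agent draws a latent utility for every alternative (a fixed mean plus Normal noise, echoing the Normal-with-variance specification mentioned above) and reports the ranking those utilities induce. The means, noise scale and sample size below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rankings(means, sigma, n_agents):
    """Sample full rankings from a Normal random utility model."""
    # Latent utility = deterministic mean + Normal noise with scale sigma.
    utilities = np.asarray(means) + rng.normal(0.0, sigma, size=(n_agents, len(means)))
    # argsort on negated utilities: column 0 holds each agent's top choice.
    return np.argsort(-utilities, axis=1)

rankings = sample_rankings(means=[2.0, 1.0, 0.0], sigma=1.0, n_agents=10_000)
top_share = np.bincount(rankings[:, 0], minlength=3) / len(rankings)
# Alternatives with higher mean utility are ranked first more often.
```

An MC-EM estimator in the spirit of the abstract would invert this generative story: sample latent utilities consistent with each observed (partial) ranking in the E-step, then re-fit the utility distribution's parameters in the M-step.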

Multivariate utility maximization with proportional transaction costs and random endowment

The subject of utility-based pricing of contingent claims has been an active (and quite natural) area of research since the introduction and development of incomplete market models, in which the replication paradigm is no longer sufficient to find a unique price (hence utility comes in as an additional criterion of choice). The idea of utility indifference pricing was first introduced in a dynamic hedging framework by [HN89] and has since been extended by other authors in different settings, possibly under different names; see for example [Mu99] and [OZ09] (which is our main reference). In fact, the underlying concept of certainty equivalent is quite pervasive in the whole economics literature, because of its natural and intuitive interpretation. We refer to [HH09] for a more detailed overview of this subject.

Fuzzy Random Utility Choice Models: The Case of Telecommuting Suitability

Lien and Chen (2002) propose a fuzzy logit model combined with the Fuzzy Linguistic Scale (FLS) to estimate the probability of housing location choice. They assume that location choice has a multinomial structure that leads to a fuzzy multinomial logit (FMNL) model, and therefore utilize this idea to deal with the problem of qualitative variables in subjective perception. The authors believe that fuzzy concepts are more capable of dealing with the problem of qualitative variables than conventional methods are. Thus, qualitative aspects are represented as linguistic labels using linguistic variables, namely variables whose values are not numbers but words or sentences in a natural language. Indicating that there has been very little literature on the application of fuzzy concepts to discrete choice models, like logit, these authors claim that the improved interpretative capacity of FMNL can overcome most of the shortcomings of multinomial logit (MNL). Furthermore, they claim that their model is closer to actual human cognition and decision making behavior for housing consumption.

Revisiting the Case for Explicit Syntactic Information in Language Models

While intuition suggests syntax is important, the continued dominance of n-gram models could indicate otherwise. While no one would dispute that syntax informs word choice, perhaps sufficient information aggregated across a large corpus is available in the local context for n-gram models to perform well even without syntax. To clearly demonstrate the utility of syntactic information and the deficiency of n-gram models, we empirically show that n-gram LMs lose significant predictive power in positions where the syntactic relation spans beyond the n-gram context. This clearly shows a performance gap in n-gram LMs that could be bridged by syntax. As a candidate syntactic LM we consider the Structured Language Model (SLM) (Chelba and Jelinek, 2000), one of the first successful attempts to build a statistical language model based on syntactic information. The SLM assigns a joint probability P(W, T) to every word sequence W and every possible binary parse tree T, where T’s terminals are words W with part-of-speech (POS) tags, and its internal nodes comprise non-terminal labels and lexical “heads” of phrases. Other approaches include using the exposed headwords in a maximum-entropy based LM (Khudanpur and Wu, 2000), …
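The limitation being tested shows up already in a toy trigram model (illustrative corpus and code, not the paper's setup): the prediction conditions only on the two preceding words, so a subject-verb agreement relation that spans further back is invisible to it.

```python
from collections import Counter, defaultdict

# Tiny toy corpus with long-distance subject-verb agreement.
corpus = ("the dog that chased the cats barks . "
          "the dogs that chased the cat bark .").split()

# Count trigrams: next-word counts conditioned on the two preceding words.
counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def p(next_word, context):
    """Maximum-likelihood trigram probability P(next_word | context)."""
    c = counts[context]
    return c[next_word] / sum(c.values()) if c else 0.0
```

Here the verb form ("barks" vs. "bark") is determined by a subject four words back, outside any trigram context; the model can only memorize whole local sequences, which is precisely the gap the authors quantify.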

Dual Random Utility Maximisation

This result is of interest first because, like all ‘revealed preference’ style axiomatisations, it provides us with a way to test the theory directly through the observation of choice data, without the need for any parametric specification or for non-behaviour data. An empirical test using our axioms would of course need to treat statistically the assertions that probabilities of choice are equal or different. Exactly the same need arises in any other characterisation of stochastic choice models: for example, in the case of the classical and popular Luce/logit model [23], which is characterised by a single axiom asserting the equality between two probability ratios.
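For concreteness, the probability-ratio axiom characterising the Luce/logit model (a standard statement, supplied here for reference rather than quoted from this paper) asserts that the ratio of two alternatives' choice probabilities does not depend on the menu they are drawn from:

```latex
\frac{P(a \mid \{a,b\})}{P(b \mid \{a,b\})}
  \;=\;
\frac{P(a \mid A)}{P(b \mid A)}
\qquad \text{for all menus } A \ni a, b .
```

Testing such an equality on finite choice data is exactly the statistical problem the passage raises.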

Using Elicited Choice Probabilities to Estimate Random Utility Models: Preferences for Electricity Reliability

Manski (1999) reasoned that stated choices may differ from actual ones because researchers provide respondents with different information than they have when facing actual choice problems. The norm has been to pose incomplete scenarios, ones in which respondents are given only a subset of the information they would have in actual choice settings. When scenarios are incomplete, stated choices cannot be more than point predictions of actual choices. Elicitation of choice probabilities overcomes the inadequacy of stated-choice analysis by permitting respondents to express uncertainty about their behavior in incomplete scenarios. Manski (1999) sketched how elicited choice probabilities may be used to estimate random utility models with random coefficients. This paper further develops the approach and reports the first empirical implementation.

Investigating the potential of the combination of random utility models (CoRUM) for discrete choice modelling and travel demand analysis

formulation is proposed in Chapter 3. The proposed formulation avoids the identification issues and the computational burdens of the Mixed Logit with joint error component / random coefficient formulation, which currently represents the theoretically most general formulation available (McFadden and Train, 2000). Such a Combination of Mixed RUMs is estimated on a stated-preference survey of 1688 observations from 211 respondents (8 choice tasks per person). The Combination of Mixed RUMs, especially when combining Mixed Nested Logit models, outperforms all the other tested mixed models (Mixed Logit, Mixed NL, Mixed CNL) in terms of goodness of fit. In particular, the Cross Nested Logit with random parameters seems very hard to estimate, and the Nested Logit with random parameters only partially reproduces inter-alternative correlations, apart from the share due to the random parameters. The Mixed Logit with joint random coefficient and error component, by contrast, despite its theoretical generality, is very hard to specify in a way that ensures identification of the parameters (Walker et al., 2007). In fact, such a formulation requires a high awareness of its theoretical background and involves very complex simulations (exploding with the dimension of the choice set) and preliminary mathematical evaluations (see the rank condition, order condition and equality condition described in Section 2.2.1.3 for ensuring the identification of the parameters). Thus, it seems that this general and powerful model has advantages that are more theoretical than practical. In real-world applications, this means that to make it operational, several strong constraints have to be introduced (for instance, parametrized covariance matrices or non-full covariance matrices). Therefore, the Combination of Mixed Nested Logit is a compromise between the limited generality …
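The combination idea itself is simple: a CoRUM's choice probability is a convex combination of the probabilities produced by its component models. A minimal sketch (component probabilities and weights below are made up for illustration, not estimated values):

```python
import numpy as np

def corum_probs(component_probs, weights):
    """Choice probabilities of a combination of RUMs (weights sum to one)."""
    # np.average normalises the weights and mixes component probability vectors.
    return np.average(component_probs, axis=0, weights=weights)

mnl_p = [0.5, 0.3, 0.2]   # e.g. probabilities from a Multinomial Logit
nl_p  = [0.6, 0.3, 0.1]   # e.g. probabilities from a Nested Logit
combined = corum_probs([mnl_p, nl_p], weights=[0.4, 0.6])
# The mixture is again a proper probability vector over the three alternatives.
```

Because each component is itself a valid choice model, the mixture stays a valid choice model while inheriting correlation structure from whichever components (e.g. Nested Logits) are combined.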

The core with random utility and interdependent preferences: Theory and experimental evidence

The main hypotheses are that the bias follows from either interdependence of preferences (i.e. fairness concerns, as Eavey and Miller, 1984, suggest) or a limited grasp (or heavy discount) of outcomes in alternative matches (as, for example, Selten, 1972, suggests). The latter hypothesis loosely relates to the level-k model in non-cooperative games (Stahl and Wilson, 1995; Camerer et al., 2004; Costa-Gomes et al., 2009), according to which level-1 players believe the opponents are non-strategic, level-2 players believe the opponents are level 1, and so on. A cooperative solution corresponding with level-1 reasoning is the level-1 equal division core (or level-1 core, for short): players do not consider payoffs in alternative matches at all, and they consider an outcome to be satisfactory (“stable”) if all players’ payoffs are sufficiently close to the equal split. The next step, the cooperative solution at level 2, is the level-2 equal-division core (which Selten, 1972, called “equal-division core”): level-2 players believe that the level-1 solution (the equal split) would result in any alternative match and consider an outcome to be stable if no pair of players can benefit by forming such an alternative match. In the core, finally, all players have full understanding of all alternative matches. In an initial analysis of these concepts (Section 3), we find that the limitation of the level of reasoning explains most of the aforementioned systematic deviations from the core. Quantitatively, the level-1 core fits best amongst the considered models, while interdependent preferences appear to be of minor relevance only.

Revisiting Nash wages negotiations in matching models

In numerous papers dealing with matching models, the Symmetric Nash Bargaining solution is usually applied. However, this kind of solution may not be appropriate in some cases and can move the analysis away from labour-market reality; consequently, it can skew the analysis and the policy decisions. Experiments by Siegal and Fouraker (1960) and Nydegger and Owen (1974) also suggest that the Nash solution is an unreasonable model of pairwise negotiations. The reason is that players make interpersonal comparisons of utility gains, as would be the case with, for example, the equal-gain model of Myerson (1977); this cannot occur with the Nash solution because of the independence of irrelevant alternatives axiom.
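For reference, the symmetric Nash solution the passage criticises selects the feasible payoff pair maximising the product of the players' gains over their disagreement payoffs (standard formulation, not taken from this paper):

```latex
(u_1^*, u_2^*) \;\in\; \arg\max_{\substack{(u_1, u_2) \in S \\ u_i \ge d_i}} \;(u_1 - d_1)(u_2 - d_2),
```

where \(S\) is the feasible set and \(d_i\) the disagreement payoffs. Because the maximiser of this product is invariant to removing unchosen feasible points, the solution satisfies independence of irrelevant alternatives, which is precisely why it cannot accommodate the interpersonal comparisons of gains observed in the cited experiments.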

A spatial choice model based on random utility

The MNL, because of the IIA property, implies that the new school C will draw proportionately from both schools A and B offering profile 3. In contrast, the mixed logits predict that school C will draw proportionally more from school A than from school B. Using the estimates of the MNL, the ratio of the choice probabilities of A and B is 2.746 both for the base situation and for the situation with three schools. Using MMNL1 with 10 draws we get a ratio of 1.378 for the base situation and 1.030 for the situation with three schools. Computing the relative deviation between the two situations for both models yields .523 for both A and B using the MNL, and .458 for A and .275 for B using the MMNL. Since we want to map substitution patterns in future school network scenarios, the MNL is not appropriate here, even though it has a better goodness-of-fit. All three MMNLs differ significantly (see table 2). MMNL2 will be used for the simulation of scenarios in the next section, because it has the best explanatory quality.
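The constant-ratio behaviour attributed to the MNL here is the IIA property, which is easy to verify directly (the utility values below are illustrative, not the paper's estimates):

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities (softmax of utilities)."""
    e = [math.exp(u) for u in utilities]
    s = sum(e)
    return [x / s for x in e]

two   = mnl_probs([1.0, 0.0])        # schools A and B only
three = mnl_probs([1.0, 0.0, 0.5])   # new school C added
ratio_two   = two[0] / two[1]
ratio_three = three[0] / three[1]
# Both ratios equal exp(1.0 - 0.0): the added school C draws
# proportionately from A and B, leaving their ratio untouched.
```

Mixed logit breaks this invariance by integrating the softmax over random coefficients, which is why the MMNL ratios in the passage change when school C enters.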

On ordinal utility, cardinal utility, and random utility  

deterministic to probabilistic choice under certainty, and indeed to RUM (section 3). Such a representation of RUM is not particularly amenable to implementation, however, precluding conventional parametric methods of econometric modelling. The Fechner model is, by contrast, readily amenable to implementation, hence Marschak et al.’s (1963) interest in establishing an analogy between the RUM and Fechner models (section 4). An implication of this analogy is that utility, though fundamentally ordinal, adopts working properties of cardinal utility. The latter does not in itself constitute a problem; provided utility is interpreted only in ordinal terms, it is perfectly reasonable to employ cardinal utility as a working operationalisation of ordinal utility. Finally, it might be remarked that although a Fechner model with particular distributional assumptions is a RUM, a RUM is not in general a Fechner model. That the relation is unidirectional is perfectly intuitive; cardinal utility can always be interpreted in ordinal terms, but ordinal utility can never be interpreted as cardinal.
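For reference, the Fechner ('strong utility') form discussed here writes the binary choice probability as a fixed distribution function of the utility difference (standard formulation, supplied for context rather than quoted from this paper):

```latex
P(a \succ b) \;=\; F\big(u(a) - u(b)\big),
```

which coincides with a RUM \(P(a \succ b) = \Pr\big(u(a) + \varepsilon_a > u(b) + \varepsilon_b\big)\) whenever \(F\) is the cdf of \(\varepsilon_b - \varepsilon_a\). This makes the one-way relation in the passage concrete: suitable distributional assumptions on the RUM errors yield a Fechner model, but a general RUM need not admit such a difference representation.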

Revisiting consistency with random utility maximisation: theory and implications for practical work

The most interesting RUMs are of course those in which utility can be represented as a function of the attributes of the alternatives and conditioned by the characteristics of the agent. As noted above, by focusing on indirect utility, these may include the price of each alternative and the income of the agent. However, in an analogous manner to the more conventional economic context of continuous consumption, the implementation of indirect utility within discrete choice contexts encounters the classic Marshallian problem of heterogeneity in the marginal utility of income and the associated issue of path dependence (e.g. Batley and Ibanez 2013). Maintaining the analogy to continuous consumption, recognition of this problem has prompted some RUM researchers to adopt the standard Hicksian solution to path dependence (e.g. Hau 1985; Karlström and Morey 2001; Dagsvik and Karlström 2005), which is essentially to convert the numéraire from utility to money. Unfortunately, this literature has been slow to develop, and contemporary random utility modelling would seem committed to a Marshallian framework.

On the Existence of Quality Measures in Random Utility Models

We will discuss the conditions sufficient for the quality measure to exist in two families of RUMs that are quite popular in the economic literature: additive random utility models and what we call the ‘pure vertical differentiation model’. It can be demonstrated that in both cases, the assumption that makes the existence of a quality measure possible eliminates the non-convexity of the bearing sets of the kind that generate the counterexample in the previous section.

Independence, homoskedasticity and existence in random utility models

Thus behaviour is represented as being determined exclusively by the maximisation of utility, but the utility is not known exactly. It is perhaps useful to note that the model does not necessarily represent random behaviour. More plausible is to think of the individual’s behaviour as deterministic, but the analyst does not have full information on the variables motivating behaviour, or even accurate measurement of those that are known; nor are the individual’s preferences over the motivating variables known. For this reason an error term ε is introduced into the model. Equivalently, and perhaps even more plausibly, an individual consumer can be considered as a random sample from a population, each member of which has his or her own value of ε. Then, even if the analyst had perfect information about the distribution of preferences in the population, we would still not know the preferences of a randomly sampled individual. Either way, when we speak of random utility models, it is the model that is random, not necessarily the utility.
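The decomposition described in this passage is conventionally written as (standard discrete-choice notation, supplied for reference):

```latex
U_{in} \;=\; V_{in} + \varepsilon_{in},
\qquad
P_n(i) \;=\; \Pr\big(U_{in} \ge U_{jn} \;\; \text{for all } j\big),
```

where \(V_{in}\) is the systematic utility built from the variables the analyst observes and \(\varepsilon_{in}\) absorbs everything the analyst cannot observe or measure — exactly the two readings (analyst's ignorance, or random sampling from a heterogeneous population) distinguished above.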

Empirical Welfare Analysis in Random Utility Models of Labour Supply

Of course, normative analysis stricto sensu, exemplified by e.g. the optimal tax literature, has taken the progress in labour supply modelling on board (see among others Saez 2001 and 2002; Choné and Laroque 2005, 2009; Aaberge and Colombino 2008; and Blundell and Shephard 2009). In applied work, however, many users of the models either completely eschew normative interpretations or report conventional measures of ’welfare’ which are not necessarily consistent with the underlying model. Indeed, many applied papers report only aggregate labour supply changes or, when unable to avoid distributional analysis, present changes in labour supply and/or changes in disposable income for deciles of the gross wage distribution. There is, of course, nothing wrong in neglecting leisure and focussing on disposable income (or consumption) alone when constructing an individual welfare measure. The impression prevails, however, that the predominant use of disposable income as a welfare measure in applied work is more inspired by relative neglect than based on a conscious and deliberate conceptual and normative choice. The aim of this paper is to provide empirical evidence that the choice of the normative framework within which policy reforms affecting the labour-leisure choice are evaluated strongly affects the welfare analysis of the reform.

Advances in Random Utility Models

Abstract: In recent years, major advances have taken place in three areas of random utility modeling: (1) semiparametric estimation, (2) computational methods for multinomial probit models, …

REVISITING RANDOM WALK BEHAVIOR OF STOCK MARKET: EVIDENCE FROM INDIAN MARKET

In time series analysis, the unit root test is one of the fundamental tests for the stationarity of data. A series is said to be stationary if its mean and variance are time-invariant, i.e. constant over the time period. Stationary data are free of the unit root problem. The unit root test also serves as a test for a random walk in the stock market. The unit root equation can be written as:
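The excerpt breaks off before the equation itself; the standard unit-root formulation it refers to (a textbook reconstruction, not recovered from this paper) is:

```latex
Y_t \;=\; \rho\, Y_{t-1} + u_t ,
```

where the series has a unit root, and hence follows a random walk, when \(\rho = 1\). In practice the test is run on the differenced form \(\Delta Y_t = \delta\, Y_{t-1} + u_t\) with \(\delta = \rho - 1\), testing \(H_0 : \delta = 0\) (unit root / random walk) against \(\delta < 0\) (stationarity).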

Random Coefficient Panel Data Models

The variable intercept and/or error components models attribute the heterogeneity across individuals and/or through time to the effects of omitted variables that are individual time-invariant, like sex, ability and socio-economic background variables that stay constant for a given individual but vary across individuals, and/or period individual-invariant, like prices, interest rates and widespread optimism or pessimism that are the same for all cross-sectional units at a given point in time but vary through time. It does not allow the interaction of the individual-specific and/or time-varying differences with the included explanatory variables, x. A more general formulation would be to let the variable y of the individual i at time t be denoted as
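The excerpt is cut before the formulation it announces; a standard random-coefficients representation matching the description (a plausible reconstruction, not recovered from this text) is:

```latex
y_{it} \;=\; \sum_{k=1}^{K} \beta_{kit}\, x_{kit} + u_{it},
\qquad
\beta_{kit} \;=\; \bar{\beta}_k + \alpha_{ki} + \lambda_{kt},
```

where \(\bar{\beta}_k\) is a common mean coefficient, \(\alpha_{ki}\) an individual-specific component and \(\lambda_{kt}\) a time-specific component, so the omitted individual and time effects now interact with the included explanatory variables \(x\) rather than shifting only the intercept.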

Threshold Models with Correlated Random Components

takes into account possible changes of the threshold parameters over time. The estimation and inference approaches have been applied to two data sets. The results (Tables 2 and 3) show that observations are highly correlated within clusters in both applications. The threshold parameters change significantly over the four tooth surfaces in the first application; they are constant over the time period of study in the second application. Although the threshold parameters change over the four tooth surfaces, the conclusions do not change from model (2.1) to model (2.2). However, in another application (submitted for publication) significant changes of the threshold parameters yielded different conclusions from models (2.1) and (2.2). The results of a limited simulation (Table 1) show that the REML estimates of the fixed effect and threshold parameters are very good; the REML estimates are even unbiased for some of those parameters. The REML estimator of the variance parameter ϕ is negatively biased, whereas the bias of the REML estimator for the correlation parameter ρ is very small. The method provides very good estimates of the standard error of the estimator of the correlation parameter ρ, but it overestimates the standard error of the estimator of ϕ.

Electric Utility Business Models of the Future

Peter Fox-Penner’s Smart Power: Climate Change, the Smart Grid, and the Future of Electric Utilities (Island Press, 2010) examines the future of the power industry.
