To my knowledge, this is the first paper to address competition for delegation in a multidimensional action and state space. Ambrus, Baranovskyi, and Kolb (2015), the first study of competition for delegation, uses a one-dimensional setup. Battaglini (2002) studies a multidimensional setting, but communication takes the form of cheap talk. Neither studies the role of vagueness in information disclosure. My paper is connected to the disclosure literature (see Dranove and Jin (2010) for a survey). Here I briefly discuss the connection between my paper and Grossman (1981). He studies an environment in which a seller chooses whether to credibly disclose his quality and shows that all types of sellers fully disclose. The idea is as follows: suppose two sellers of different qualities are pooled together. Then the buyer is only willing to pay a price reflecting the average quality of the two. The higher-quality seller can therefore profit by disclosing his quality and receiving a higher price, since the buyer now has a higher willingness to pay. The argument rests on the assumption that the seller could always have disclosed his exact quality. The counterpart of exact disclosure in my model is vagueness: an agent can always be vague and let the DM know the payoff from choosing him. The difference is that vagueness serves an additional purpose, masking the state of the world, which helps sustain vagueness in equilibrium.
This dissertation is composed of three essays considering the role of private information in economic environments. The first essay considers efficient investments in technologies, such as auditing and enforcement systems, that are designed to mitigate the information and enforcement frictions impeding the provision of first-best insurance against income risk. In the model, the principal can choose a level of enforceability that inhibits an agent's ability to renege on the contract and a level of auditing that inhibits his ability to conceal income. The dynamics of the optimal contract imply an endogenous lower bound on the lifetime utility of the agent, strictly positive auditing at all points in the contract, and positive enforcement only when the agent's utility is sufficiently low. Furthermore, the two technologies operate as complements and substitutes at alternative points in the state space. The second essay considers a planning problem with hidden actions and hidden states in which the component of utility affected by the unobservable state is separable from the component governed by the hidden action. I show how this problem can be written recursively with a one-dimensional state variable representing a modified version of the continuation utility promise. I apply the framework to a model in which an agent takes an unobservable decision to invest in human capital using resources allocated to him by the planner. Unlike in similar environments without physical investment, it is shown numerically that immiseration does not necessarily hold. In the third essay, with Kyungmin Kim, I examine the effects of commitment on information transmission. We study and compare the behavioral consequences of honesty and white lies in communication. An honest agent is committed to telling the truth, while a white liar may manipulate information, but only for the sake of the principal.
We identify the effects of honesty and white lies on communication and show that the principal is often better off with a possibly honest agent than with a potential white liar. This result provides a fundamental rationale for why honesty is considered an important virtue in many contexts.
The unit of analysis and the unemployment measure seem more important in the models that add Medicaid eligibility and the unemployment*Medicaid interaction on the right-hand side (lower panels of Tables 2, 4, and 5). This is an appealing finding, since the benefits of using county-level data should be largest in models that already have a state-level variable - Medicaid eligibility - on the right-hand side. It is noteworthy, however, that the differences in results across the three model specifications are again not large. In particular, among black women, the interacted results do not reach statistical significance in any of the model specifications, but the coefficients mostly have the same sign. Among white women, unemployment per se decreases prenatal care use and the Medicaid ‘safety net’ increases it across all three specifications. Not surprisingly, the results are most significant when county-level cells are used (lower panels of Tables 2 and 4). In fact, the Medicaid ‘safety net’ is associated with significant benefits to infant and maternal health only when county-level cells are used (lower panels of Tables 2 and 4), and the results are most consistent across the outcomes studied when county-level unemployment is also employed (lower panel of Table 2).
strategy of bidding when the signal is high and waiting when the signal is low, while bidder U randomizes his decision on whether to bid and how much to bid. This creates an interesting situation in which we may observe bidding from the uninformed bidder instead of the informed one in the first round. In fact, this phenomenon is observed in many economic environments. For example, we often observe a new firm, rather than the incumbent, entering a new product market. The firm that enters the market later bears the risk of being the follower and earning a smaller market share, but gains more accurate information about the new market. This result provides an understanding of why such situations exist and suggests when we may observe them as well.
The red line in Figure 3.13b again illustrates the toxic arbitrage-based information share (TAIS) across days. The green line shows the midpoint of the bounds of the Hasbrouck information share (IS). The green area illustrates the range between the upper and the lower estimate from the Hasbrouck procedure. The upper bound remains close to 100% most of the time. For the first 70 observations there is only one day where it drops below 75%. On this day, the TAIS also drops. On the other days where the TAIS drops, we also see a drop in the lower bound of the IS and hence in its midpoint. The lower bound drops more often, resulting in large spreads for the Hasbrouck information share. Similarly to what we observe with the CS, drops in the upper bound of the IS where the TAIS remains high cannot be explained by the imprecision of the latter. Figures 3.13a and b illustrate that different measures lead to different results on individual days. Overall, only the downward spikes of the TAIS at the beginning of the sample fail to find similar evidence in either of the other measures. Several times we observe spikes in the standard measures without a similar reaction in the TAIS. However, in these instances the spread in the Hasbrouck information share is usually wide, indicating less precision in the measure. On those days, we generally observe some overlap between the TAIS confidence interval and the IS interval. There are five days where we see a downward spike in the IS with a narrow spread without an equivalent reaction of the TAIS. While these days see downward spikes in the CS as well, the difference between the two measures is substantial.
Decision makers are characterized by limited information processing systems. Not only is their computational and cognitive capacity limited, but their memory is also confined in both capacity and duration. Accordingly, they frequently employ heuristics in order to reduce the information processing requirements of the tasks they face, which differ in terms of complexity (Payne, 1976). In any decision there is an underlying trade-off between the precision of the choice and the efficiency of the decision process. When using heuristics, individuals give up the former in favor of the efficiency of the process. According to Payne (1976), we use heuristics in contexts of decision making under risk, when we face some probabilistic information processing. Examples of heuristics include representativeness, whereby individuals anticipate the outcome most representative of the existing evidence, as well as anchoring and adjustment, according to which we tend to base decisions on familiar positions, with an adjustment relative to this anchor. As the task to be solved increases in complexity, “individuals will employ decision strategies resulting in a restricted pattern of information search when the choice task becomes sufficiently complex. These decision strategies can be characterized as heuristic processes similar to those found in studies of human problem solving” (Payne, 1976, p. 324).
In the static context, Biais and Casamatta (1999) study an agency model with risk-taking. The project generates three outcomes: high, medium, and low. Risk-taking is hidden, destroys value, and alters the returns distribution in the sense of second-order stochastic dominance. The incentive constraints they obtain are similar to mine. They further show that the optimal contract can be implemented by a mixture of debt and equity, and that when the moral hazard problem on effort is severe enough, stock options help provide incentives. Palomino and Prat (2003) study risk-taking in a delegated portfolio choice model in which the underlying technology displays a high-risk-high-return relation. Their optimal contract also has the bonus feature. While I focus more on downside risk, my model is more suitable for discussing the high-water mark contract because this contract is dynamic in nature: the bonus threshold changes over time depending on the fund's performance.
Another important result in Table 3.7 shows that the 2012 average monthly income of treated families is $44.48 higher than that of families in the control group – a finding that we attribute largely to increased wages for mothers. The median wage of workers in the neighborhood is the minimum wage ($292 per month in 2012) and, therefore, the income gain from the program is large. In 2013 we collected data on mothers’ wages. Using this information we estimate that family income is up mainly because of an increase in mothers’ wages: treated mothers earn $13.33 (se=4.87) more per week than control mothers, i.e. more than $57 extra per month (Table 3.8, panel A), while fathers’ wages are not significantly different in the two groups. Our result of a significant impact on mothers’ wages without any similar effect on the spouse contrasts with the findings reported in Rosero and Oosterbeek (2011), who study the effect of child care centers on labor market outcomes of mothers. They report an estimated increase in the likelihood that a mother is working of 22 percent and in household income of $80, but unlike in the PelCa program the latter is not driven by a rise in mothers’ earnings, but by the partners’ income. In fact, Rosero and Oosterbeek (2011) even find that home visits reduce the proportion of mothers working by 17 percent, while leaving mothers’ income unaffected; however, we note that our study is based on a longer-term follow-up and that the long-term effect of the home visits program might differ from the short-term effect reported in Rosero and Oosterbeek (2011).
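The weekly-to-monthly conversion implied by these figures can be verified with a line of arithmetic (a sanity check on the reported numbers, not code from the paper):

```python
# Convert the reported weekly wage gain to a monthly figure,
# using 52 weeks per 12 months (about 4.33 weeks per month).
weekly_gain = 13.33  # dollars per week (Table 3.8, panel A)
monthly_gain = weekly_gain * 52 / 12
# monthly_gain is roughly 57.76, consistent with
# "more than $57 extra per month"
```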
Jurisdictions that publicize school-level results typically update this information annually, raising concerns that parents may respond to year-to-year fluctuations that are largely noise (Kane and Staiger 2002, Mizala, Romaguera and Urquiola 2007). Our results show that English-speaking parents in low-income neighborhoods respond immediately to the first release of information, and continue to respond to subsequent releases in later years. Our data provide no way to determine whether these ongoing responses are a series of reactions to noisy information updates, or whether they simply reflect the time it takes for information to reach all members of the community. Likewise, the delayed response of non-English-speaking parents suggests substantial heterogeneity in parents’ access to public information. Consequently, annual releases of school achievement information that elicit ongoing media coverage may play an important role in communicating that information to all segments of the community, including recent immigrants.
This table reports abnormal returns for equal- and value-weighted Democracy-Dictatorship hedge portfolios using the Carhart four-factor model. Both Democracy and Dictatorship portfolios are divided into three terciles based on the three transparency proxies – forecast dispersion, forecast error, and revision volatility – deflated by either lagged share price, lagged assets per share, or the absolute value of the forecast mean. We then form a Democracy-Dictatorship hedge portfolio for each transparency tercile every month and regress the monthly excess returns to each hedge portfolio on the market factor (RMRF), the size factor (SMB), the book-to-market factor (HML), and the momentum factor (UMD). The estimated intercept α is interpreted as the abnormal return of the trading strategy. Forecast error is defined as the absolute value of the difference between the actual annual earnings per share (EPS) and the mean of analyst forecasts. Forecast dispersion is defined as the forecast standard deviation across all analysts following the same firm in the same year. Revision volatility is computed as the standard deviation of the changes over the fiscal year in the median forecast from the preceding month. Panel A reports the α when transparency proxies are scaled by lagged share price. Panel B and Panel C show the results when transparency proxies are scaled by lagged assets per share and the absolute value of the forecast mean, respectively. The last two columns show the value-weighted abnormal returns to the Democracy (Long) and Dictatorship (Short) portfolios for the high-transparency groups (i.e., lowest terciles). The sample period is from September 1990 to December 1999. t-statistics are reported in parentheses under the estimated coefficients. The significance levels 1%, 5%, and 10% are denoted by ***, **, and *, respectively.
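The estimation step just described – regressing monthly hedge-portfolio excess returns on the four Carhart factors and reading the intercept as the abnormal return – can be sketched as follows. The data below are synthetic and the function name is mine; this illustrates the regression, not the paper's actual code or sample.

```python
import numpy as np

def carhart_alpha(excess_ret, rmrf, smb, hml, umd):
    """Estimate the Carhart four-factor alpha via OLS.

    Regresses portfolio excess returns on the market (RMRF),
    size (SMB), value (HML), and momentum (UMD) factors;
    the intercept is the abnormal return (alpha)."""
    X = np.column_stack([np.ones_like(rmrf), rmrf, smb, hml, umd])
    coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coefs[0]  # intercept = alpha

# Illustrative synthetic data (not the paper's sample).
rng = np.random.default_rng(0)
n = 112  # months, e.g. Sep 1990 - Dec 1999
rmrf, smb, hml, umd = rng.normal(0, 0.04, (4, n))
true_alpha = 0.005  # 50 bps per month, chosen arbitrarily
ret = (true_alpha + 1.0 * rmrf + 0.3 * smb - 0.2 * hml + 0.1 * umd
       + rng.normal(0, 0.01, n))
alpha = carhart_alpha(ret, rmrf, smb, hml, umd)
```

In practice one would also report the t-statistic on the intercept, which is what the significance stars in the table are based on.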
(4.13) where λ captures the premium of information received by home buyers between the day when FCSC announced that redistricting was to be considered and the approval date of the plan. The term θ captures the “net” impact of approval of the redistricting plan. In the absence of an information effect, that is, no anticipation of redistricting changes, I expect λ to equal zero. Column (1) of Table 4.10 reports the results of estimating (4.13) using all observations. In general, with this sample the information premium does not have an important impact. However, the net benefit of redistricting, the coefficient on Treat·Post2, is large (3.1 percent) compared to the estimate from the base model (2.4 percent), though it is statistically insignificant. Columns (2)-(8) report the results of separate regressions performed using a sample from a single catchment area. Similar to the previous result, houses redistricted out of Bryan Station increased in value. Sale prices increased, on average, by 7.1 and 3.9 percent for transactions that took place in the catchment areas for the proposed school and Paul Dunbar High School in the post-approval period, larger than the DD estimates of 6.6 and 2.2 percent reported in Table 4.6. For the Henry Clay High School catchment area, I also observe a 4.2 percent increase in the value of houses redistricted to the proposed school and an insignificant but positive effect for houses that will be in Tates Creek. Only redistricting from Paul Dunbar to Lafayette and from Tates Creek to Henry Clay shows a negative net impact in the post-approval period. Overall, I find that the results reported in Table 4.10 are consistent with what I found when estimating the boundary fixed effect models with a single post-approval period (Table 4.6) – that there is no significant negative effect associated with boundary changes.
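A specification consistent with this description would take roughly the following form (this is my reconstruction for illustration only; the paper's exact equation (4.13) is not reproduced here, and the covariate vector $X_{it}$ is an assumption):

```latex
\ln P_{it} = X_{it}\beta
           + \lambda \,(\mathit{Treat}_i \times \mathit{Post1}_t)
           + \theta \,(\mathit{Treat}_i \times \mathit{Post2}_t)
           + \mu_t + \varepsilon_{it}
```

where $\mathit{Post1}_t$ indicates the window between the FCSC announcement and plan approval, so that $\lambda$ picks up the anticipation premium, and $\mathit{Post2}_t$ indicates the post-approval period, so that $\theta$ picks up the net impact of approval.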
es his stake to the level that is just enough for him to camouflage among liquidity traders and gain from informed trading. Interestingly, a unique equilibrium with informed sales arises naturally, in which the underwriter liquidates his holdings if his private information indicates that the security will subsequently underperform, and refrains from trading otherwise. Our results speak to the issues associated with the rise of the originate-to-distribute (OTD) lending model in debt markets (Bord and Santos, 2012). Because of the development of active secondary markets, banks' incentives to screen and monitor loans have diminished (Keys, Mukherjee, Seru and Vig, 2010). Moreover, they tend to sell loans that are of excessively poor quality (Purnanandam, 2010) and that underperform their peers by about 9% per year subsequent to the initial sales (Berndt and Gupta, 2009). To this end, our model fully captures the resulting adverse selection problem from OTD.
recognizes; even in the absence of conflicts of interest or other distortions resulting from players' behavior, a CRA might have an adverse effect on critical economic variables. I develop a model of investment financing which, like capital markets, is characterized by information asymmetry and lack of commitment. In the benchmark setting, the CRA is capable of perfect monitoring and reveals its private information truthfully and at no cost. I explore the impact of such an “ideal” CRA on the interest rate and on the probabilities of project financing and default. I find that introducing such a CRA may lead to under-financing of projects with a positive net present value (NPV) that would otherwise be financed; a higher expected interest rate; and a higher expected probability of default. These findings relate to the feedback effect, which is inherent in capital markets, and its asymmetric impact on firms of different quality. I evaluate the policy of restricting CRAs to providing hard evidence with their ratings, and suggest that it might have an unfavorable effect on the probabilities of project financing and default.
Additionally, the validity of the DID method requires that the relative change in asset transfer behavior between the two groups cannot be explained by factors other than the policy change. One potential concern is that the treatment group may develop worse health conditions over time relative to the comparison group. If so, the drop in asset transfer behavior of the treatment group could be the result of increasing medical costs. To test this possibility, I examine the trends in health conditions and actual nursing home utilization of the two groups. Figure 3 shows the time trends for three different measurements of interest. First, self-reported health condition is shown in panel A, ranging from one to five, with one denoting perfect health and five denoting poor health. Both groups overall show a worsening health
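The identifying logic here – comparing the change in the treated group's behavior to the change in the comparison group's – reduces to a simple calculation on group means. The numbers below are hypothetical, chosen purely to illustrate the estimator:

```python
# Difference-in-differences on group means:
# the DID estimate is the treated group's pre-to-post change
# net of the comparison group's pre-to-post change.
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical asset-transfer rates (fractions of households)
# before and after the policy change -- illustrative only.
effect = did_estimate(treat_pre=0.30, treat_post=0.20,
                      ctrl_pre=0.28, ctrl_post=0.26)
# effect = (0.20 - 0.30) - (0.26 - 0.28) = -0.08
```

The health-trend check described above probes the key identifying assumption: if the treated group's health deteriorates faster, the comparison group's change is no longer a valid counterfactual for the treated group.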
Elimination riders are limits on coverage written into a health insurance policy that carve out coverage related to a pre-existing condition (Kaiser Family Foundation 2012). Prohibitions on elimination riders prevent insurers from placing such limits on coverage. Anecdotal evidence suggests they play an important role in determining coverage accessibility for people with pre-existing conditions (Kaiser Family Foundation 2001). If elimination riders are permitted, insurers can exclude coverage related to the pre-existing condition either temporarily or permanently. With such exclusions, people with pre-existing health conditions can at least purchase coverage for other types of care unrelated to the condition. Recent work by Hendren (2013) suggests that insurers deny coverage to people with pre-existing conditions because they have more private information about their own health risks than people without such conditions. Therefore, insurers choose not to offer people with pre-existing conditions coverage, even at an actuarially fair premium. Elimination riders may allow insurers to sell insurance to people with pre-existing conditions while limiting the probability of large losses. Indeed, elimination riders are commonly used among the nation’s four largest insurers (Waxman and Stupak 2010).
For 2003 and 2004, the development of test items for the centralized exams was carried out by the Bavarian State Institute of School Quality and Education Research, an organization with more than 50 years of experience in the field of educational consulting. In 2005 and 2006, this responsibility was transferred to Saarland’s standing conferences on language and mathematics (Landesfachkonferenzen). Since the aim of the SOE was to safeguard quality assurance, test items were created such that they could assess students’ competences in relation to the education standards set by the Standing Conference of the Ministers of Education and Cultural Affairs of the Länder (Kultusministerkonferenz). The subject matter of the tests was the material from grades 2 and 3. In German, this related to the two domains of “Reading” and “Writing / Language and Use of Language.” In reading, reference was made to the cognitive model of van Dijk and Kintsch (1983) that is also used in the international PIRLS studies. Questions were multiple choice and required extracting pieces of information from short texts. The most difficult questions further entailed meta-cognitive abilities, for example in the sense of relating texts to the author’s likely intentions in writing them. In the domain of writing and use of language, spelling and grammar competences were specifically tested. To this end, students had to complete words and reformulate sentences. The mathematics test was not further subdivided into different domains. However, all questions pertained to one or more of the following general mathematical competences: modelling, problem solving, argumentation, illustration, and communication. These competences had to be applied to specific mathematical content that students were supposed to be familiar with (Paulus and Leidinger, 2009).
While our argument would be much stronger if we could support our hypothesis with a consistent and continuous data set of visible and invisible trade costs, such a data set is rare. Alternatively, we turn to case studies with the OECD’s aid-for-trade dataset to see whether efforts to boost bilateral trade are strengthened when debt renegotiation happens. For the purpose of the paper, we restrict our attention to the categories of aid that are directly related to trade policy adjustment (see Table 7 for details). Figure 3.A.1 plots the change in aid (only for trade policy purposes) around the default period for the following three cases: Honduras in 2004, Congo in 2008, and Burundi in 2009. In the years of sovereign defaults, creditors double or triple their spending on trade-related aid to help defaulters out. They are generous with trade benefits rather than strict with harsh trade punishments. The case studies serve as indirect evidence for our hypothesis that creditors lower their trade costs with debtors.
The London School of Economics and Political Science Three Essays on Macro Labour Economics Jiajia Gu A thesis submitted to the Department of Economics of the London School of Economics for the degree
The data characteristics collected from each primary study include information on the type of data used and the level of wage in the estimations. The marginal effects reported in Table 2.3 show that, keeping yearly data as the reference category, studies estimating the wage impact of immigration using decadal data seem to confirm an insignificant wage impact more often than those estimated using cross-section data. The marginal effects for decadal data are -0.375 for positive significance, 0.131 for negative significance, and 0.243 for insignificance. Longhi et al. (2008) highlight that the impact estimated using cross-section data might underestimate the impact of immigration and that first-differences should be used to capture the short-run effects of immigration, since they would be less affected by city-specific unobserved characteristics that might influence immigrant density or natives’ outcomes. However, most studies use census data and therefore compute first-differences over rather long periods, so the assumption of time-invariant location effects becomes unreasonable. The wage level used in each study as a dependent variable is also categorised under the data characteristics. Our sample includes primary studies that use annual, monthly, weekly, daily, or hourly wages and/or give no detail on the level of the wages used. For the ordered probit model estimations, the weekly wage level is used as the reference category. Table 2.3 shows that the marginal effect for the monthly wage dummy is 0.999 for positive significance. It is large and rather questionable. The definition of the wage across the primary studies is quite heterogeneous, so it is suspected that the dummy variable on the level of the wage might capture effects different from the ones it intends to measure. Secondly, the definition of the wage, reported or imputed, is often problematic, and such a problem may be responsible for the implausible results obtained for the monthly wage.