Regrettably, it should be stressed that while the state of knowledge about the functioning of economies is incomplete, the state of ignorance about African economies is even greater. However, given the rate of progress in the development and application of information and communication technologies (ICTs) as tools for policy analysis, economic projections, and the overall management of the economy in both the short and the longer run, it is difficult to envisage any country in the near future preferring primitive rule-of-thumb techniques to formal modeling of the economy. This paper therefore recommends policy modeling as a unifying framework within which to study a wide range of economic problems that have usually been treated in isolation from one another. Yet conventional macro models, at their present level of development, are essentially no more than an analytical skeleton that still requires the flesh and blood of detail and common sense to become a useful instrument of policy. As an alternative modeling framework, Agent-Based Computational Economics (ACE) has the modeler specify the initial configuration of a computational economic system comprising multiple interacting agents, and then step back to observe the development of the system over time without further intervention.
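The ACE workflow described above can be sketched in a few lines: the modeler fixes an initial configuration and an interaction rule, then lets the system evolve with no further intervention. The sketch below is purely illustrative (the exchange rule, agent count, and parameters are assumptions, not taken from any particular ACE model).

```python
import random

def run_ace_simulation(n_agents=100, periods=50, seed=42):
    """Minimal ACE-style loop: the modeler sets the initial configuration,
    then agents interact repeatedly without modeler intervention."""
    rng = random.Random(seed)
    wealth = [100.0] * n_agents  # initial configuration chosen by the modeler
    for _ in range(periods):
        # two randomly matched agents exchange a random fraction of wealth
        i, j = rng.sample(range(n_agents), 2)
        transfer = rng.uniform(0, 0.1) * wealth[i]
        wealth[i] -= transfer
        wealth[j] += transfer
    return wealth

wealth = run_ace_simulation()
print(min(wealth), max(wealth))  # dispersion emerges from interaction alone
```

Even this toy system displays the characteristic ACE feature: aggregate patterns (here, a wealth distribution) emerge from decentralized pairwise interaction rather than from a centrally imposed equilibrium condition.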
As the methodological basis, chapter 2 reviews RL in the economic literature and develops a general learning framework combining reinforcement and rule learning. The motivation is to provide an alternative, generic way of representing agent decision mechanisms in a unified framework for several classes of models. It tries to go beyond simplistic formalisations of adaptive capabilities such as simple RL, while keeping computational complexity within bounds. Chapter 3 applies this approach to a model of statistical discrimination. It is shown that the framework is capable of reproducing patterns of actual human behaviour in game-theoretic experiments. Chapter 4 is an application of RL to network formation. Results of the learning process are compared with axiomatic results for perfectly rational players. A modified version of the model is then used to reproduce an experiment and to compare its behaviour with observed human behaviour. A very different model is presented in chapter 5. While the purpose of the first chapters is to apply and analyse learning in rather simple settings, the purpose of this chapter is to use it in a complex setting with many influencing variables. The requirements for adaptation in this application are very different from those discussed before: in the model, doctors decide about treatment patterns, quality, and their own workload. Patients choose doctors based on their own experience and the recommendations of other consumers. Several simulations using different learning and choice scenarios are compared.
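As a point of reference for what "simple RL" means in the economic literature, the sketch below implements a Roth–Erev-style propensity update, one of the standard simple reinforcement schemes in economic modelling. It is not the framework developed in chapter 2; the payoffs, forgetting parameter, and two-action setting are illustrative assumptions.

```python
import random

def choose(propensities, rng):
    """Pick an action with probability proportional to its propensity."""
    total = sum(propensities)
    r = rng.uniform(0, total)
    cum = 0.0
    for action, p in enumerate(propensities):
        cum += p
        if r <= cum:
            return action
    return len(propensities) - 1

def roth_erev_update(propensities, action, payoff, forgetting=0.1):
    """Decay all propensities, then reinforce the chosen action by its payoff."""
    return [(1 - forgetting) * p + (payoff if a == action else 0.0)
            for a, p in enumerate(propensities)]

# Toy bandit: action 1 always pays 1.0, action 0 pays nothing.
rng = random.Random(0)
props = [1.0, 1.0]
for _ in range(500):
    a = choose(props, rng)
    props = roth_erev_update(props, a, 1.0 if a == 1 else 0.0)
print(props)  # the propensity for the rewarding action dominates
```

The scheme is "simplistic" in exactly the sense criticised above: agents track only action propensities, with no representation of rules or state, which is what motivates combining reinforcement with rule learning.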
The only two methods that seem to perform slightly better than the steminess-based classification algorithms are LDA and SVM. However, this performance comes at a much higher computational cost: 8h32 and 95.8 gigabytes on average for LDA, 9h26 and 14.8 gigabytes for SVM. By contrast, our preferred methods take on average less than 6 minutes and less than 5 gigabytes in each replication of the experiment. We relied on the RTextTools package by Boydstun et al. for the implementation of models (3) to (10). Although this package employs a set of optimized algorithms, in particular those developed by Koenker and Ng and contained in the SparseM package, there could certainly be more efficient ways of implementing the standard machine learning algorithms considered here, whether in R or in other programming environments. Nonetheless, the computational complexity of these methods is well studied and documented (cf. Manning et al. and Friedman et al., as well as references therein). The problems become especially acute when the input space is high-dimensional and sparse, which is precisely our case: as both the number of distinct keywords and the number of vacancies grow, the "vacancy-keyword" matrix becomes increasingly sparse, because even the median keyword appears in very few postings (less than 0.002% in the sample of vacancies with explicit discipline requirements and keywords, which is the sample on which the final classification method is trained). Note that regularization (e.g. Lasso, Ridge) does not help here, for two reasons: first, the optimally selected penalty (chosen through cross-validation) is close to zero; second, and more importantly, even if we remove the 50% least frequently posted keywords, we are still left with a very sparse matrix.
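The sparsity pattern described above is easy to reproduce on synthetic data: when each posting mentions only a handful of keywords, the incidence matrix is overwhelmingly zero and the median keyword appears in a tiny share of postings. The figures below are toy numbers for illustration only, not the paper's data.

```python
import random

# Toy "vacancy-keyword" incidence data: each vacancy mentions 2-8 keywords.
random.seed(1)
n_vacancies, n_keywords = 1000, 500
rows = [random.sample(range(n_keywords), k=random.randint(2, 8))
        for _ in range(n_vacancies)]

# Sparsity: share of zero entries in the full incidence matrix.
nonzeros = sum(len(r) for r in rows)
sparsity = 1 - nonzeros / (n_vacancies * n_keywords)

# Share of postings in which the median keyword appears.
counts = [0] * n_keywords
for r in rows:
    for k in r:
        counts[k] += 1
median_share = sorted(counts)[n_keywords // 2] / n_vacancies
print(f"sparsity: {sparsity:.3f}, median keyword share: {median_share:.4f}")
```

Even in this small example the matrix is around 99% zeros; in the real data, where the vocabulary is far larger relative to keyword usage per posting, the effect is correspondingly more severe.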
This paper presents an alternative to the sequential optimizing agent, which is crucial for the reliability of computational economics research. In particular, it is a version of an online classifier: a machine learning method that performs classification on a data stream. The sequential implementation makes it an efficient, fast, and practical tool for data-flow processing. Among classifiers of this type, the informatics (divergence-based) approach stands out for its solid foundation in mathematical statistics and information theory. It is instructive to see the difference between the two approaches: the standard approach targets performance in terms of an objective function, while the informatics approach works with statistical measures, e.g. the Kullback-Leibler and Rényi divergences [CoDrRo07]. The informatics agent can thus be an effective alternative to standard sequential optimizers.
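To make the contrast concrete, the sketch below shows one minimal way a divergence-based online classifier could work: maintain running feature distributions per class and assign each new item to the class whose distribution it diverges from least. This is an illustrative construction under assumed discrete-histogram features, not the method of [CoDrRo07]; the class name and parameters are hypothetical.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

class OnlineDivergenceClassifier:
    """Streaming classifier sketch: per-class feature counts are updated
    sequentially; prediction picks the class minimizing KL divergence."""

    def __init__(self, n_features, n_classes):
        # Laplace prior of 1 per cell avoids zero-probability issues.
        self.counts = [[1.0] * n_features for _ in range(n_classes)]

    def _dist(self, c):
        total = sum(self.counts[c])
        return [x / total for x in self.counts[c]]

    def predict(self, feature_hist):
        total = sum(feature_hist)
        p = [x / total for x in feature_hist]
        return min(range(len(self.counts)),
                   key=lambda c: kl_divergence(p, self._dist(c)))

    def update(self, feature_hist, label):
        for i, x in enumerate(feature_hist):
            self.counts[label][i] += x

clf = OnlineDivergenceClassifier(n_features=2, n_classes=2)
clf.update([10, 1], 0)  # one observed class-0 example from the stream
clf.update([1, 10], 1)  # one observed class-1 example
print(clf.predict([8, 2]))  # → 0
```

Note how no objective function is optimized: the decision rule is stated entirely in terms of a statistical divergence, which is the distinguishing feature of the informatics approach described above.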
This paper contributes to a burgeoning literature on green preferences and consumer behaviour. Kahn and Kok (2014) look at the capitalization of green labels in the California housing market. Sexton and Sexton (2014) attribute consumers' enthusiasm for the Prius to "conspicuous conservation", a costly signalling of one's concern for the environment. Bollinger and Gillingham (2012) underscore peer effects as a motive for people to install solar panels. I instead explore the importance of green preferences in steering investors' behaviour, and it is somewhat surprising that green preferences remain significant in this setting, where agents are perceived to be more "rational" and profit-oriented. A major distinction of this paper from previous research is that I explicitly document and quantify the loss in efficiency due to this special "conspicuous generation" motive of green investors and examine the effects of financial incentives in partially offsetting it. The paper also relates to the literature on intrinsic-incentive crowding out in psychology and public economics, again from a very different angle. I show that extrinsic incentives such as renewable energy subsidies, although they crowd out intrinsic motivation in green investments, encourage investors to adopt more "profit-maximizing" thinking, which could be desirable from the policy makers' perspective.
This programme was not, as behavioural economics is today, a self-consciously distinct branch of the discipline: it was a central component of neoclassical economics. Neoclassical economics and experimental psychology were both relatively young enterprises, and the boundary between them was not sharply defined. According to what was then the dominant interpretation, neoclassical theory was based on assumptions about the nature of pleasure and pain. Those assumptions were broadly compatible with what were then recent findings in psychophysics. Neoclassical economists could and did claim that their theory was scientific by virtue of its being grounded in empirically verified psychological laws. … Viewed in historical perspective, behavioural economists are trying to reverse a fundamental shift in economics which took place from the beginning of the twentieth century: the ‘Paretian turn’. This shift, initiated by Vilfredo Pareto
What I say does not mean that historians should uncritically accept the Nobel history. It is legitimate to offer an informed judgment about someone who should have received the prize but did not, for instance. But more discussion is clearly needed about the laureates, possible past candidates, and, perhaps more importantly, possible future winners. Historians of economics are uniquely positioned to take a lead in this discussion, yet they are seldom part of it at all, often because they are discussing the life of some obscure nineteenth-century figure. That is their role too, surely, but no less so than looking at the present in historical perspective. Should a Nobel Prize really be something unexpected? I am writing here and now, London, July 2007, and I say Paul Romer is going to win the Nobel Prize in economics. Personally, I think it will be well deserved, and I could explain why. But it could easily turn out otherwise. The point is, when someone wins, how often is there a historical account of that person's work available, even as part of a broader historical discussion? Should the laureate win first, with the first draft of his intellectual history written only afterwards, if at all? When someone wins, it is always the non-historian colleagues who write reviews for journals. I envisage a world where historians of economics engage in a realistically impartial discussion about the merits of different candidates, particularly in historical perspective, but we are certainly very far from that world today. In fact, for many of the laureates since 1969 no historical account of their intellectual development even exists.
Economics, Kyklos etc.). Additionally, publishing houses (Routledge and Edward Elgar) are increasingly publishing peer-reviewed articles and books on philosophical topics within economics. A number of graduate programs, mainly in Europe, such as those at Erasmus University, the London School of Economics, the University of Bayreuth, the University of Helsinki, and Kingston University, have been launched, granting graduate degrees in economics and philosophy. Scholars in economic philosophy are prominent figures not only in that field itself but also in economic history, the history of economic thought, and the philosophy of science. From a scholarly point of view, economic philosophy has become an established field of research in the social sciences.
The London School of Economics and Political Science Essays on Labour Economics Attakrit Leckcivilize A thesis submitted to the Department of Economics of the London School of Economics and Political[.]
The London School of Economics and Political Science Essays in information economics Clement Minaudier A thesis submitted to the Department of Economics of the London School of Economics and Political[.]
The London School of Economics and Political Science Essays in Applied Economics Stephan Ernst Maurer A thesis submitted to the Department of Economics of the London School of Economics for the degree[.]
The London School of Economics and Political Science Essays in Economics of Education Marta De Philippis A thesis submitted to the Department of Economics of the London School of Economics for the deg[.]
THE LONDON SCHOOL OF ECONOMICS AND POLITICAL SCIENCE Essays in Political Economics of Development Yu Hsiang Lei A thesis submitted to the Department of Economics of the London School of Economics for[.]
The London School of Economics and Political Science Essays in Organisational Economics Melania Nica A thesis submitted to the Department of Economics of the London School of Economics for the degre[.]
The London School of Economics and Political Science Essays in Public Economics Mohammad Vesal A thesis submitted to the Department of Economics of the London School of Economics and Political Science[.]
London School of Economics and Political Science Essays in Information Economics and Political Economy Weihan Ding A thesis submitted to the Department of Economics of the London School of Economics a[.]
Computation as a service and computational agency. As highlighted by scholars (Berry 2014; Jordan 2015; Mosco 2014) and activists (Stallman 2015), computation as a service is a disruptive reconfiguration of information politics and computational agency. On one hand, from the point of view of companies (from small startups to large corporations) providing services over the Internet to end users, the ability to outsource computation and to manage computational resources efficiently allows them to focus on the company's core business without having to deal with the financial, legal, and information-management issues stemming from having an internal team dedicated to managing the company's computational infrastructure. This is especially advantageous for startups, which can rely on computational capacity that can be easily adjusted to the organization's evolution. Critiques of computation as a service, however, highlight that the premium paid for outsourcing computational capacity is not only economic (computation as a service is normally more expensive than equivalent infrastructure that could be developed internally) but, most importantly, involves surrendering control over computation: companies that rely on it can control computation only through the configuration options allowed by their service providers, and end users who rely on cloud services (for example, for personal storage or email) cannot control how their data is processed and are often subject to very restrictive terms of service (Jordan 2015, pp. 91–92).
Resorting to its characteristic strategy of creating new discursive spaces (Berry 2004), the Free Software Foundation introduced the label ‘Service as a Software Substitute’ (SaaSS, a reference to the widely known label ‘Software as a Service’ or SaaS, one of the possible configurations of cloud computing) to highlight that, by relying on computation as a service, users relinquish the ability to inspect, control, and manage the software code and computational capacity used to provide services, in exchange for a black-boxed service whose computation is controlled by an untrusted third party. On the other hand,