in terms of their direct impacts on individual sectors. In attempting to describe their economies, economists in many African countries have applied certain models that are by now widely known. Thus, with the rapid progress in formal modeling techniques and the growing accessibility of electronic computers, it would not be out of place to expect every African country, within the next decade, to have an operational formal model of its economy to use as an instrument for economic policy formulation and implementation. In general, these African economies are complicated systems encompassing micro behaviors, interaction patterns and global regularities. Whether partial or general in scope, studies of economic systems must consider how to handle difficult real-world aspects such as asymmetric information, imperfect competition, strategic interaction, collective learning and the possibility of multiple equilibria. Fortunately, recent advances in analytical and computational tools are permitting new approaches to the quantitative study of these aspects. One such approach is agent-based computational economics (ACE), the computational study of economic processes modeled as dynamic systems of interacting agents. This paper therefore explores the potential advantages and disadvantages of ACE models for the study of economic systems. Various approaches used in agent-based computational economics to model endogenously determined interactions between agents are discussed; this concerns models in which agents not only (learn how to) play some (market or other) game, but also learn to decide with whom to do so (or not).
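To make the notion of endogenously determined interactions concrete, the following minimal sketch shows agents that not only receive payoffs from a game but also learn with whom to play it. It is illustrative only: the payoff rule, parameter values and all names are our assumptions, not those of any specific ACE model.

```python
import random

class Agent:
    """An agent that learns, by reinforcement, which partners to trade with."""

    def __init__(self, idx, n_agents):
        self.idx = idx
        # Attraction ("propensity") toward each potential partner.
        self.attraction = {j: 1.0 for j in range(n_agents) if j != idx}

    def choose_partner(self, rng):
        partners = list(self.attraction)
        weights = [self.attraction[j] for j in partners]
        return rng.choices(partners, weights=weights, k=1)[0]

    def reinforce(self, partner, payoff):
        self.attraction[partner] += payoff

def simulate(n_agents=10, rounds=500, seed=0):
    rng = random.Random(seed)
    agents = [Agent(i, n_agents) for i in range(n_agents)]
    for _ in range(rounds):
        for a in agents:
            j = a.choose_partner(rng)
            # Stylized trade payoff: some pairs are simply better matches.
            payoff = 1.0 if (a.idx + j) % 2 == 0 else 0.1
            a.reinforce(j, payoff)
    return agents
```

Over repeated rounds the interaction structure becomes endogenous: agents drift toward the partners who have paid off in the past, rather than interacting with everyone uniformly.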
As the methodological basis, chapter 2 reviews RL in the economic literature and develops a general learning framework combining reinforcement and rule learning. The motivation is to provide an alternative, generic way of representing agent decision mechanisms in a unified framework for several classes of models. It tries to go beyond simplistic formalisations of adaptive capabilities such as simple RL, while keeping computational complexity within bounds. Chapter 3 applies this approach to a model of statistical discrimination. It is shown that the framework is capable of reproducing patterns of actual human behaviour in game-theoretic experiments. Chapter 4 is an application of RL to network formation. Results of the learning process are compared with axiomatic results for perfectly rational players. A modified version of the model is then used to reproduce an experiment and to compare its behaviour with observed human behaviour. A very different model is presented in chapter 5. While the purpose of the first chapters is to apply and analyse learning in rather simple settings, the purpose of this chapter is to use it in a complex setting with many influencing variables. The requirements for adaptation in this application are very different from those discussed before: in the model, doctors decide about treatment patterns, quality and their own workload. Patients choose doctors based on their own experience and on recommendations of other consumers. Several simulations using different learning and choice scenarios are compared.
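The reinforcement-learning mechanism most common in this literature can be sketched in a few lines. The following is a generic Roth-Erev-style learner for a two-action game; the payoffs and the parameter values (forgetting and experimentation rates) are illustrative assumptions, not those of the framework described above.

```python
import random

def roth_erev(payoffs, rounds=1000, forgetting=0.05, experimentation=0.1, seed=1):
    """Roth-Erev style reinforcement learning over a fixed payoff vector.

    Each round: pick an action with probability proportional to its
    propensity, then update all propensities with forgetting and a
    small experimentation spillover to unchosen actions.
    Returns the final choice probabilities.
    """
    rng = random.Random(seed)
    propensity = [1.0] * len(payoffs)
    for _ in range(rounds):
        # Propensity-proportional choice.
        r = rng.random() * sum(propensity)
        k, acc = 0, propensity[0]
        while r > acc:
            k += 1
            acc += propensity[k]
        reward = payoffs[k]
        for j in range(len(propensity)):
            propensity[j] *= (1.0 - forgetting)  # recency / forgetting
            if j == k:
                propensity[j] += reward * (1.0 - experimentation)
            else:
                propensity[j] += reward * experimentation / (len(propensity) - 1)
    total = sum(propensity)
    return [p / total for p in propensity]
```

With payoffs of 1.0 and 0.2, for example, the learner concentrates its choice probability on the first action, while experimentation keeps the second action from dying out entirely.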
Much use of ACE models was motivated by the wish to take the underlying social structure of an economy explicitly into account, rather than following the implicit practice of neoclassical economics of assuming complete networks. By the assumption of the representative agent, or at least of many identical agents, one implicitly assumes a complete network, because all agents have to be the same, and the complete network is the only network structure in which each agent has exactly the same neighbourhood, namely all other agents. But recent studies of networks have shown that small-world or scale-free networks are much more likely to exist in reality. Small-world networks are characterized by small average path lengths between the nodes and a comparatively high degree of clustering. The constituent feature of scale-free networks is that there are very few nodes with many connections to other nodes, and very many nodes with very few connections; more precisely, the distribution of the number of neighbours (i.e. the degree of the nodes) follows a power law. How these networks influence the distributional properties of an economic system can be studied via simulations, and even if simulations cannot provide a complete explanation, they are necessary to deal with abstract structures such as networks. Furthermore, the existence of certain network structures (e.g. particular scale-free networks) has been proposed to give rise to self-organised criticality, a concept that should be of interest for institutionalists (see Section 3.1.2), and the identification of scale-free networks (and their role in the models) should therefore be taken into account. Albin and Foley (1992) and Gintis (2007) simulated the distributional effects of changing network structures in the general equilibrium framework and showed how a shift from central to decentral organization has severe distributional effects.
Their models remained very abstract, and one would not classify them as institutionalist models. They exemplify, however, how big the consequences of a small change in the underlying network structure can be. This insight is important for institutionalists when describing the stratification of real-world economies, because networks probably play an important role in the observed stratification (and unequal distribution of wealth). In order to figure out what this role looks like, one must build on simulations of these network structures.
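A scale-free network of the kind discussed above can be generated with the standard preferential-attachment mechanism: new nodes attach to existing nodes with probability proportional to their degree, producing a few highly connected hubs and many weakly connected nodes. The sketch below is a plain-Python illustration of this mechanism (a Barabasi-Albert-style construction); the parameter values are arbitrary.

```python
import random

def preferential_attachment(n_nodes=200, m=2, seed=42):
    """Grow a network where each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Each node appears in this list once per incident edge, so uniform
    # sampling from it is degree-proportional sampling.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg
```

Inspecting the resulting degree distribution shows the characteristic asymmetry: a handful of hubs with large degree alongside a majority of nodes at or near the minimum degree m.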
the “exogenous risk that hits the system” (such as an aggregate exogenous shock or an idiosyncratic shock to one of the nodes in the network), but also the “endogenous risk generated by the system itself” (Zigrand). The system here is the network, and the aim is to find indicators that take into account how the structure of the links shapes and propagates an initial exogenous shock. We also extend the model to analyse how the same feedback mechanisms that generate endogenous risk could set off a process of positive contagion, reversing negative market sentiments (Section 4.4). This paper can be related to the existing vast literature on financial contagion. At the two extremes, contagion is classified as pure - when a herd of investors drives apparently healthy and unrelated economies towards sunspot equilibria - or fundamentals-based (Masson). In reality, however, spillovers are complex and encompass both features: they spread through real or financial channels, while still retaining some randomness driven by market sentiments. We try to capture this duality by, on the one hand, considering a financial network of cross-country investment flows and, on the other hand, making downgrades happen stochastically. Indeed, the probability that an agent transmits the “default disease” to its network neighbours depends on both its actual credit rating (an agent with a lower credit rating, i.e. in a more advanced infection phase, transmits the negative contagion at a higher rate) and the interaction intensities (a stronger mutual relation increases the transmission probability).
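The transmission mechanism just described can be sketched as an epidemic process on a weighted network: the probability that node i passes a downgrade to neighbour j grows with i's rating deterioration and with the intensity of the i-j link. The network topology, rating scale and parameter values below are illustrative assumptions, not the paper's calibration.

```python
import random

def contagion_step(ratings, links, rng, max_rating=5):
    """One round: each node below the top rating may pass a one-notch
    downgrade to a neighbour, with probability severity * link intensity."""
    new = dict(ratings)
    for (i, j), intensity in links.items():
        for src, dst in ((i, j), (j, i)):          # links are mutual
            severity = (max_rating - ratings[src]) / max_rating
            if severity > 0 and rng.random() < severity * intensity:
                new[dst] = min(new[dst], max(0, ratings[dst] - 1))
    return new

def simulate_contagion(n=6, intensity=0.9, steps=20, seed=7):
    rng = random.Random(seed)
    ratings = {k: 5 for k in range(n)}             # 5 = best rating
    ratings[0] = 0                                 # one initially distressed node
    links = {(k, (k + 1) % n): intensity for k in range(n)}   # ring network
    for _ in range(steps):
        ratings = contagion_step(ratings, links, rng)
    return ratings
```

Because severity is zero for fully healthy nodes, the process only propagates outward from distressed nodes, and stronger links speed up the spread, mirroring the two determinants of transmission described above.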
Hamiltonian Method of Swarm Design (HMOSD). This method is a principled approach to swarm design consisting of two main phases. In the first phase, the global goal(s) is (are) written in terms of properties that can be sensed and affected by the agents. The resulting equation(s) can then be used to develop requirements for the behaviors of the agents that lead to the global goal. Though swarm engineering has typically been applied to robotic design and computational design, we broaden the scope here by applying it to an economic system. So why economics? Economies are complex systems which encompass micro and macro behaviors, individual interaction, equilibria and, in most cases, some sense of self-regulation. Because of this overwhelming complexity, a quantitative form of economics has been difficult to observe. However, with greater computational power and the development of efficient control algorithms, it is now possible to approach economics from a more quantitative, rather than theoretical, perspective. One such control method is swarm engineering. Just as a real economy is decentralized, automated swarms require no outside control. An accurate simulation can run on its own, with basic economic laws and theories governing the physics of interaction of the agents. An advantage of this method is the absence of the ceteris paribus (Latin for “all other things unchanged”) aspect of traditional economics. Observations qualified by ceteris paribus require that all other variables in a causal relationship be held fixed in order to simplify studies. A swarm-controlled simulation, on the other hand, allows all factors to be included in the relationship between antecedent and consequent. Another salient advantage is an observer’s ability to control the basic structure of interaction. Before a run, the simulation allows one to tinker with basic parameters of the system, such as the sizes of budgets, the rate of utility increase, and the magnitude of competition.
By allowing such control, a user can predict the results of economies in several types of real-life situations, which is key to understanding the scope of economic systems and the realistic range of our control.
The investigation of methods for the validation of MASs was addressed by the EU FP7 research project GRACE. One of the project outputs is a methodology for qualitative analysis based on the Petri net modeling notation. The behavior of each agent is represented as a single Petri net, which can be verified by the regular methods to find out whether the model is bounded (the resource can only execute one operation at a time), reversible (the agent can return to its initial state by itself) or deadlock-free (the agent can take an action in any state). Extending these models with the concept of time provides methods for the quantitative analysis of multi-agent systems. The transitions are extended with time parameters to capture the times of transition activations. Such a simulation shows the evolution of the tokens over the places and over time. The complete information about the progress of the agent behavior is summarized in a Gantt chart. Unfortunately, the development of Petri net models is a time-consuming process which requires specialized skills.
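The kind of check described above can be illustrated with an exhaustive reachability search over a small Petri net. The sketch below tests deadlock-freedom and boundedness for a made-up two-place "agent" net (idle/busy), not the GRACE models themselves; a full reversibility check (every reachable marking can return to the initial one) is omitted for brevity.

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition: consume pre tokens, produce post tokens."""
    m = list(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return tuple(m)

def analyse(initial, transitions, bound=10000):
    """Exhaustive reachability search; returns (deadlock_free, max_tokens)."""
    seen = {initial}
    queue = deque([initial])
    deadlock_free = True
    while queue:
        marking = queue.popleft()
        enabled = [(pre, post) for pre, post in transitions
                   if all(marking[p] >= n for p, n in pre.items())]
        if not enabled:
            deadlock_free = False       # a reachable marking enables nothing
        for pre, post in enabled:
            nxt = fire(marking, pre, post)
            if nxt not in seen:
                if len(seen) >= bound:
                    raise RuntimeError("state space too large; net may be unbounded")
                seen.add(nxt)
                queue.append(nxt)
    return deadlock_free, max(max(m) for m in seen)

# Toy agent net with places (0: idle, 1: busy).
idle_busy = [({0: 1}, {1: 1}),   # "start operation": consume idle, produce busy
             ({1: 1}, {0: 1})]   # "finish operation": consume busy, produce idle
```

For the idle/busy net the search finds exactly two markings, no deadlocks, and at most one token per place (the 1-boundedness that encodes "one operation at a time").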
Our motivation behind the choice of a MAS is to lighten the Web server's tasks in an Enterprise 2.0 platform, and to provide more security for information exchange while respecting intra- and inter-enterprise confidentiality. The objective of our work is to propose a helpful approach utilizing a new coordination protocol (CordiNet) and computational collective intelligence for Enterprise 2.0 design. Firstly, we implement a collaborative environment (Pro Social Network) that allows employees to share diagnosis and fault-repair procedures. Secondly, we propose a coordination environment that is based on a multi-agent system and an interaction protocol.
The main aim of multi-agent models is to explore complex spatio-temporal phenomena, in particular the emergence of macro patterns of behaviour from the micro-level activity of large numbers of actors (O’Sullivan & Haklay, 2000; Batty, 2005), often within intricately defined spaces (Puusepp & Coates, 2007). It is this very level of geographical complexity and emergent behaviour that raises methodological challenges in validating multi-agent models (Amblard et al., 2005; Manson, 2007). Aspects that need to be considered are the stability or robustness of the emergent patterns, the calibration of parameters, the setting of initial state(s) and boundary conditions, and the propagation of error (Ginot & Monod, 2005). The problem of equifinality is also present. Ways of testing these rely heavily (though not exclusively) on sensitivity analyses (Saltelli et al., 2000) in order to calibrate parameters governing micro behaviour against available empirical data, model parameter errors, and assess model sensitivity to the parameter phase space and initial state(s). However, complex systems can be nonlinear in their response to parameter changes, and may exhibit amplified effects as well as tipping points and thresholds (Crosetto & Tarantola, 2001; Phillips, 2003; Manson, 2007).
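A minimal version of such a sensitivity analysis is a one-at-a-time parameter sweep with replicated runs: vary one micro-level parameter over its range, run several stochastic replicates per value, and summarise the macro outcome. The "model" below is a deliberately trivial stand-in (a random-mixing adoption toy), chosen only to illustrate the sweep, replication and summary machinery; all names and values are our assumptions.

```python
import random
import statistics

def toy_model(p_adopt, n_agents=100, steps=50, seed=0):
    """Toy ABM: agents adopt a behaviour when they meet an adopter,
    with per-contact probability p_adopt. Returns final adoption share."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    adopted[0] = True                      # one initial adopter
    for _ in range(steps):
        for i in range(n_agents):
            if not adopted[i]:
                neighbour = rng.randrange(n_agents)
                if adopted[neighbour] and rng.random() < p_adopt:
                    adopted[i] = True
    return sum(adopted) / n_agents         # macro outcome

def sweep(values, replicates=10):
    """One-at-a-time sweep: mean and stdev of the outcome per parameter value."""
    results = {}
    for v in values:
        runs = [toy_model(v, seed=r) for r in range(replicates)]
        results[v] = (statistics.mean(runs), statistics.stdev(runs))
    return results
```

Plotting the mean outcome (and its spread across replicates) against the parameter value is then the simplest way to spot the nonlinear responses, thresholds and amplified effects mentioned above.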
in principle enabling more flexible IT deployment and more efficient use of computing resources (Information Age Partnership, 2004). According to BAE Systems (Gould et al., 2003), while the technology is already in a state in which it can realise these benefits in a single organisational domain, the real value comes from cross-organisational use, through virtual organisations, which require ownership, management and accounting to be handled within trusted partnerships. In economic terms, such virtual organisations provide an appropriate way to develop new products and services in high-value markets; this facilitates the notion of service-centric software, which is only now emerging because of the constraints imposed by traditional organisations. As the Information Age Partnership (2004) suggests, the future of the Grid is not in the provision of computing power, but in the provision of information and knowledge in a service-oriented economy. Ultimately, the Internet has enabled computational resources to be accessed remotely. Networked resources such as digital information, specialised laboratory equipment and computer processing power may now be shared between users in multiple organisations, located at multiple sites. For example, the emerging Grid networks of scientific communities enable shared and remote access to advanced equipment such as supercomputers, telescopes and electron microscopes. Similarly, in the commercial IT arena, shared access to computer processing resources has recently drawn the attention of major IT vendors, with companies such as HP (“utility computing”), IBM (“on-demand computing”) and Sun (“N1 Strategy”) announcing initiatives in this area. Sharing resources across multiple users, whether commercial or scientific, allows scientists and IT managers to access resources on a more cost-effective basis, and achieves a closer match between demand and supply of resources.
Ensuring efficient use of shared resources in this way will require the design, implementation and management of resource-allocation mechanisms in a computational setting.
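One canonical candidate for such a resource-allocation mechanism is a sealed-bid second-price (Vickrey) auction, under which bidding one's true valuation is a dominant strategy. The sketch below allocates a single compute slot; it is a generic illustration of the mechanism, not a description of any deployed Grid system, and the bidder names are invented.

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction for one resource slot.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins but pays the
    second-highest bid, which makes truthful bidding a dominant strategy.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    _, price = ranked[1]          # winner pays the second-highest bid
    return winner, price
```

For example, with bids of 10.0, 7.5 and 9.0, the highest bidder wins the slot at a price of 9.0, the second-highest bid.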
“The most salient structural characteristic of Walrasian equilibrium is its strong de- pendence on the Walrasian Auctioneer pricing mechanism, a coordination device that eliminates the possibility of strategic behavior. All agent interactions are passively me- diated through payment systems; ‘face–to–face’ interactions are not permitted. [...] The equilibrium values for the linking price [...] variables are determined by market clearing conditions imposed through the Walrasian Auctioneer pricing mechanism; they are not determined by the actions of consumers, firms, or any other agency supposed to actually reside in the economy. Walrasian equilibrium is an elegant affirmative answer to a log- ically posed issue: can efficient allocations be supported through decentralized market prices? It does not address, and was not meant to address, how production, pricing, and trade actually take place in real–world economies through various forms of procurement processes. [...] What happens in a standard Walrasian equilibrium if the Walrasian Auctioneer pricing mechanism is removed and if prices and quantities are instead re- quired to be set entirely through the actions of firms and consumers themselves? Not surprisingly, this ‘small’ perturbation of the Walrasian model turns out to be anything but small. [...] As elaborated by numerous commentators, the modeler must now come to grips with challenging issues such as asymmetric information, strategic interaction, expectation formation on the basis of limited information, mutual learning, social norms, transaction costs, externalities, market power, predation, collusion, and the possibility of coordination failure.” (Tesfatsion, 2006, p. 833–835)
In addition, the authors analyse the performance of the MA family before and after the 1997 Asian Financial Crisis, and find that the MA family works well in both sub-periods, as well as in different market conditions of bull runs, bear markets and mixed markets. The empirical observation that technical analysis can forecast the directions of these markets implies that the three China stock markets are not efficient. Lam, Chong, and Wong (2007) examine whether a day’s surge or plummet in stock price serves as a market entry or exit signal. Returns of five trading rules based on 1-day and intraday momentum are estimated for several major world stock indices. It is found that the trading rules perform well for the Asian indices, but not for those of Europe and the USA.
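As a concrete illustration of the MA family of rules, a simple crossover variant goes long when a short moving average rises above a long one and exits otherwise. This is our own minimal sketch: the window lengths and the crossover rule are generic textbook choices, not the specific rules tested in the cited studies.

```python
def moving_average(prices, window):
    """Simple moving average; defined from day window-1 onward."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def ma_crossover_signals(prices, short=5, long=20):
    """1 = long, 0 = out of the market, for each day where both MAs exist."""
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    offset = long - short          # align the two MA series by calendar day
    return [1 if s[i + offset] > l[i] else 0 for i in range(len(l))]
```

On a steadily rising price series the short MA stays above the long MA and the rule is always long; on a steadily falling series it stays out, which is the directional behaviour the forecasting tests above evaluate.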
Beyond "proof of concept": While the potential of ABM for addressing a wide range of research questions in the social sciences is undoubted, there is a growing appreciation of the need to address problems more relevant to the real world (Matthews et al. 2007). Janssen and Ostrom (2006) claim that ABM has mostly been applied to the modeling of theoretical issues, whereas its application to empirically measurable phenomena is quite rare, and models therefore often do not go beyond a "proof of concept". These authors distinguish four ways (stylized facts, laboratory experiments, role games and case studies) in which empirical data can be included in ABM, depending on the number of subjects and the degree of contextualization or generalization. In addition, Boero and Squazzoni (2005) highlight the importance of ABM's empirical embeddedness. They argue that empirical knowledge needs to be integrated into modeling practice and used for micro specification as well as macro validation, by integrating ABM with qualitative, quantitative, experimental and participatory methods. Although these studies make a significant contribution to the development and classification of empirically-based ABM, they conclude that new approaches are still needed, in particular regarding the empirical validation of ABM and the formalization of empirical knowledge integration into ABM.
In this study, optimization of the structures of the seven compounds in Figure 1 was performed with a quantum chemical method using density functional theory (DFT). The functional employed was B3LYP, a three-parameter hybrid combining Becke's gradient-corrected exchange with the Lee-Yang-Parr correlation functional. The accuracy of density functional theory calculations also depends on the chosen basis set; hence, 6-31G* was the basis set used for the optimization of the seven molecular compounds and for the calculation of the parameters that describe the cytotoxicity of the compounds under study. The software used in this work was Spartan '14 by Wavefunction, Inc. In addition, the optimized structures of the studied molecular compounds were used for a docking study to calculate the binding affinity of the molecular compounds to the S. aureus target (PDB ID: 4b19). The inhibition constant was also calculated using equation 1:
To ensure that the transportation network remains within the desired safety and economical constraints, it is equipped with a sophisticated data acquisition system (SCADA) and several conventional application programs that help the operator (a control engineer) analyze it (these programs are primarily designed for normal operating conditions). The network’s operation is monitored from a dispatching control room (DCR), and whenever an unexpected event occurs, hundreds of alarms are automatically sent to it by the SCADA system. Under these circumstances, the operator must rely on experiential knowledge to analyze the information, diagnose the situation, and take appropriate remedial actions to return the network to a safe state. To reduce the operators’ cognitive load in such circumstances and to help them make better decisions faster, Iberdrola first developed several (stand-alone) decision support systems (e.g., a real-time database that stores information about the state of the network and an alarm analysis expert system that diagnoses faults produced in the network based on the alarm messages received at the DCR). To improve this support, Iberdrola decided that these systems should interoperate to produce a coherent view and that new functionality should be added (to enable the control engineer to actually perform and dynamically monitor the service restoration process and also to exploit the new data sources, such as chronological information and faster-rate snapshots, which became available as the SCADA system was im-
The model farms assume a carefully selected parcel of land purchased specifically for developing a minimalist pasture-based dairy. Careful farm selection is critical to the amount of investment needed and to enable future low operating costs. To avoid investments in livestock housing, the farm site must have well-drained soils with some timber or brush for cover during the worst winter conditions. To keep feed costs low, the dairy needs mostly open ground with productive soils that can be managed for high-producing pastures that can be replanted with annual forage and improved perennial forage varieties.
disabilities. This is especially true with children, who are often poor informants of their capabilities. Vernon and Brown (1964) report the case of a young girl admitted to a hospital for the mentally retarded for five years because she scored a 29 on the Stanford-Binet test. The child was released when hospital attendants finally realized she was deaf and had an IQ of 113 when measured on a performance-based instead of an oral test. Children who blink, squint, or lose their place could be visually impaired, which also could affect results on these tests. Similarly, there is a whole literature on how race, experience, ethnicity and other characteristics of the examiner can affect test results (e.g., Terrell et al., 1984). The same can be said for examinee background and motivation, including problems regarding test anxiety. These imperfections in psychological tests and the interpretation of their results could imply a need for a completely different
In Chapter 4, the ability of different DFT exchange-correlation functionals to model compound I (Cpd I) in coral allene oxide synthase (cAOS) has been investigated in conjunction with an ONIOM QM/MM approach. A small basis set assessment indicates that the use of B3LYP/6-31G to optimize Cpd I in catalase-type active sites (as done in some previous computational studies) may not be stringent enough to adequately model its structural and electronic properties. Our assessment indicates BLYP does not predict the doublet and quartet spin states to be degenerate, as typically seen in Cpd I/P450. While B3LYP models doublet and quartet Cpd I/cAOS as a square-pyramidal complex, other functionals (i.e. BLYP, B3LYP*, M06 and M06-L) model it as an octahedral complex. M06 favors the sextet spin state by 13.3 kJ mol⁻¹, unlike all other considered functionals, which prefer the doublet and quartet spin states. The structural and electronic properties predicted by B3LYP and M06-L differ from one another; while B3LYP predicts a five-coordinate complex with an electron radical localized on the side-chain of Tyr353, M06-L predicts a six-coordinate complex with an electron radical delocalized between the porphyrin moiety and the Tyr353 side-chain.
This paper describes the computational properties of one such architecture, embedded within a system for giving various kinds of conditional instructions and behavioral constraints to virtual human agents in a 3-D simulated environment (Bindiganavale et al., 2000). In one application of this system, users direct simulated maintenance personnel to repair a jet engine, in order to ensure that the maintenance procedures do not risk the safety of the people performing them. Since it is expected to process a broad range of maintenance instructions, the parser is run on a large subset of the Xtag English grammar (XTAG Research Group, 1998), which has been annotated with lexical semantic classes (Kipper et al., 2000) associated with the objects, states, and processes in the maintenance simulation. Since the grammar has several thousand lexical entries, the parser is exposed to considerable lexical and structural ambiguity as a matter of course.