
In this chapter, we considered a class of ABMs from a Markovian perspective and derived explicit statements about the possibility of linking a microscopic agent model to the dynamical processes of macroscopic observables that are useful for a precise understanding of the model dynamics. We showed that the class of single-step ABMs may be described as random walks on regular graphs and that the symmetries of the corresponding transition structure are related to lumpable macro partitions. In this way the dynamics of collective variables may be studied, and a description of macro dynamics as emergent properties of micro dynamics, in particular during transient times, becomes possible.

Using Markov chain computations, we obtained a very detailed understanding of the VM with homogeneous mixing. On the one hand, the computation of the fundamental matrix of the macro chain provides us with precise knowledge about the mean transient behavior; on the other hand, it also tells us that some care must be taken in order to relate those mean quantities to single realizations of the model. Regarding convergence times, full information (the probability distribution of convergence times) is provided by numerical integration over the transient states, which gives a better idea of the transient behavior that single realizations may exhibit. The analysis is extended to the general (multi-state) VM, which is reducible to the binary case in the absence of interaction constraints. Similarity constraints such as bounded confidence or assortative mating, on the other hand, lead to additional absorbing states in the macro chain. This shows that opinion polarization is a direct consequence of bounded confidence (see Banisch and Araújo, 2012, for a biological interpretation in terms of sympatric speciation).
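As a minimal illustration of these computations (a sketch in Python, not the notation used in the text), the snippet below builds the macro chain of the binary VM under homogeneous mixing as a birth-death chain on k = 0, ..., N, computes the fundamental matrix of its transient part, and iterates the transient submatrix to obtain the distribution of convergence times. The function names and the assumed rates P(k, k ± 1) = k(N − k)/(N(N − 1)) are illustrative assumptions consistent with pairwise imitation under homogeneous mixing.

```python
import numpy as np

def vm_macro_chain(N):
    """Macro chain of the binary VM under homogeneous mixing: state k is the
    number of agents holding one of the two opinions; k = 0 and k = N absorb.
    Assumed rates: P(k, k +/- 1) = k (N - k) / (N (N - 1))."""
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        p = k * (N - k) / (N * (N - 1))
        if 0 < k < N:
            P[k, k - 1] = p
            P[k, k + 1] = p
        P[k, k] = 1.0 - P[k].sum()
    return P

def transient_analysis(P, t_max=500):
    """Fundamental matrix F = (I - Q)^{-1} of the transient part and the
    survival probabilities Pr(T > t) obtained by iterating Q."""
    idx = list(range(1, P.shape[0] - 1))          # transient states k = 1..N-1
    Q = P[np.ix_(idx, idx)]
    F = np.linalg.inv(np.eye(len(idx)) - Q)
    mean_times = F.sum(axis=1)                    # mean time to absorption
    survival = np.array([np.linalg.matrix_power(Q, t).sum(axis=1)
                         for t in range(t_max)])
    return F, mean_times, survival

P = vm_macro_chain(10)
F, mean_times, survival = transient_analysis(P)
print(mean_times)                                 # mean convergence times
print(1.0 - survival[:, 4])                       # Pr(T <= t) starting from k = 5
```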

In our context, the random map representation (RMR) of a Markov process helps to understand the respective roles of the collection of (deterministic) dynamical rules used in the model on one side, and of the probability distribution ω governing the sequential choice of the dynamical rule used to update the system at each time step on the other side. The importance of this probability distribution, often neglected, is that it encodes the social structure of the exchange actions at the time of the analysis. Features of this probability distribution are therefore not only concerned with the social context the model aims to describe; they are also crucial in predicting the properties of the macro dynamics. If we decide to remain at a Markovian level, then the partition, or equivalently the collective variables, used to build the model should be compatible with the symmetries of the probability distribution ω.
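The following sketch makes the random map representation concrete for the VM (the helper names u_ij and rmr_step are illustrative, not taken from the text): at each step a deterministic update map, indexed by an ordered agent pair, is drawn according to ω and applied to the current configuration.

```python
import random

def u_ij(i, j):
    """Deterministic update map u_(i,j): agent i copies the state of agent j."""
    def apply(x):
        y = list(x)
        y[i] = x[j]
        return tuple(y)
    return apply

def rmr_step(x, omega):
    """One step of the random map representation: draw an ordered pair (i, j)
    according to omega and apply the corresponding deterministic map."""
    pairs, probs = zip(*omega.items())
    i, j = random.choices(pairs, weights=probs, k=1)[0]
    return u_ij(i, j)(x)

# homogeneous mixing with three agents: every ordered pair equally likely
omega = {(i, j): 1 / 6 for i in range(3) for j in range(3) if i != j}
x = (1, 0, 0)
for _ in range(10):
    x = rmr_step(x, omega)
print(x)
```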

This is what makes homogeneous mixing (and, respectively, the complete graph) so special: the full permutation invariance is realized ($\mathrm{Aut}(K_N) = S_N$). On the other hand, an important hallmark of ABMs is their ability to include arbitrary levels of heterogeneity and stochasticity (or uncertainty) in the description of a system of interacting agents. In a sense, the partition of the configuration space defining the macro level of the description has to be refined in order to account for an increased level of heterogeneity or a falloff in the symmetry of the probability distribution. It is, however, clear that, in the absence of any symmetry, there is no other choice for this partition than to stay at the micro level, and in this sense no Markovian description of a macro level is possible in this case. This will be spelled out in detail in the next chapter.

Chapter IV

From Network Symmetries to Markov Projections

In the previous chapter, we have seen that an ABM defines a process of change at the individual level – a micro process – by which in each time step one configuration of individuals is transformed into another configuration. For a class of models we have shown this micro process to be a Markov chain on the space of all possible agent configurations. Moreover, we have shown that the full aggregation – that is, the re-formulation of the model by mere aggregation over the individual attributes of all agents – may give rise to a new process that is again a Markov chain, however, only under the rather restrictive assumption of homogeneous mixing. Heterogeneities in the micro description, in general, destroy the Markov property of the macro process obtained by such a full aggregation.

The question addressed in this chapter is how to derive Markovian coarse-grainings (Markov projections) if the assumption of homogeneous mixing is relaxed. In other words, how must the micro model and the projection construction be designed so that the projected system is still a Markov chain? We develop a tool which relates symmetries in the interaction topology to partitions of the configuration space with respect to which the micro process is lumpable. In effect, this leads to a refinement of the full aggregation which exploits all the dynamical redundancies that have their source in the agent network on which the model is implemented. Notably, the result is stated in terms of the symmetries of the agent network, which is a much simpler object than the micro chain on the configuration space on which the aggregation (lumping) is actually performed.
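For small systems the network symmetries in question can be found by brute force. The sketch below (an illustration with a hypothetical helper name, not the construction developed later in the chapter) enumerates all agent permutations that leave a given adjacency matrix invariant; the example graph is the three-agent network used as a running example in the next section.

```python
import itertools
import numpy as np

def automorphisms(A):
    """Enumerate all permutations sigma of the agents with
    A[sigma(i), sigma(j)] = A[i, j], i.e. the automorphism group of the graph."""
    N = A.shape[0]
    return [perm for perm in itertools.permutations(range(N))
            if np.array_equal(A[np.ix_(perm, perm)], A)]

# three agents where only agent 1 is linked to agents 2 and 3 (0-based indices)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
print(automorphisms(A))   # the identity and the swap of agents 2 and 3
```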

The theoretical ideas presented here have been made available in Banisch and Lima (2012). A more detailed version containing the results for the example studied in Sec. 4.4 is under review (Banisch and Lima, 2013). The results have been presented at a series of conferences (Banisch et al., 2013; Banisch, 2013a,b).


4.1 Interaction Heterogeneity and Projection Refinement

Let us begin this chapter with the simple example that runs through this thesis. Consider the VM with three agents on different networks defined by a 3 × 3 adjacency matrix A with $a_{ij} = 1$ whenever i and j are connected. As before, in the iteration process, an agent pair (i, j) is chosen at random out of the set of all agent pairs with $a_{ij} = 1$ and the first adopts the state of the second. Notice that an alternative way of realizing the agent update is to first choose an agent i and then choose another agent j out of all agents connected to i. The former is called link update and the latter node update dynamics, and we shall see that this can lead to different probability distributions ω. We mainly consider link update in this chapter, but comment on the differences between the two variants in Sec. 4.2.
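A small sketch may help to see how ω is derived from the adjacency matrix under the two variants (omega_link and omega_node are hypothetical helper names; 0-based indices are used): under link update an ordered connected pair is drawn uniformly, under node update an agent is drawn first and then one of its neighbours.

```python
import numpy as np

def omega_link(A):
    """Link update: an ordered pair (i, j) with A[i, j] = 1 is drawn
    uniformly among all connected ordered pairs."""
    return A / A.sum()

def omega_node(A):
    """Node update: agent i is drawn uniformly, then j uniformly among
    the neighbours of i."""
    return A / (A.shape[0] * A.sum(axis=1, keepdims=True))

K3 = np.ones((3, 3)) - np.eye(3)     # complete graph: both variants give 1/6
print(omega_link(K3), omega_node(K3))

A = K3.copy()
A[1, 2] = A[2, 1] = 0                # remove the link between agents 2 and 3
print(omega_link(A))                 # 1/4 on the four remaining ordered pairs
print(omega_node(A))                 # no longer uniform: the two variants differ
```

On the complete graph the two variants coincide; once a link is removed they do not, which is the situation considered in the remainder of this section.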

We first consider the complete graph defined by $a_{ij} = 1$ whenever $i \neq j$ and $a_{ii} = 0$. Notice that in that case the two update variants lead to the same ω(i, j): the probability of any pair (i, j) to be chosen is ω = 1/6. That is, except for the exclusion of self-choice (with ω(i, i) = 0), this is the case dealt with in the previous chapter. Fig. 4.1 briefly recalls the respective Markov chain formulation and projection by illustrating (i.) the connectivity structure ω(i, j) = ω, ∀i ≠ j, (ii.) the micro chain this leads to along with the transition rates, and (iii.) the resulting macro chain.

In order to go beyond complete homogeneity, let us consider what happens to that picture if one link is removed. Therefore, let us assume that $a_{23} = a_{32} = 0$. Under link update this leads to the following interaction probabilities: ω(1, 2) = ω(2, 1) = ω(1, 3) = ω(3, 1) = ω = 1/4, and ω(2, 3) = ω(3, 2) = 0. This topology, the resulting micro chain and the probabilistic effects on the macro level are shown in Fig. 4.2.

It becomes clear that the introduction of interaction heterogeneity translates into irregularities in the probabilistic structure of the micro chain such that the symmetry condition of Theorem 3.2.2, $\hat{P}(x, y) = \hat{P}(\hat{\sigma}(x), \hat{\sigma}(y))$, is violated for the macro partition $X = (X_0, X_1, X_2, X_3)$. In other words, it leads to the non-lumpability of the partition $X = (X_0, X_1, X_2, X_3)$. As shown in Fig. 4.2, the transition probabilities at the macro level are not uniquely defined and depend upon the respective micro configuration. Consider, as an example, the transitions from $X_2$ to $X_3$. The probability (3.7) of a transition into the consensus configuration is ω(1, 2) + ω(1, 3) = 2ω for the configuration in which agent 1 deviates from the other two, whereas it is ω(2, 1) + ω(2, 3) = ω for the configuration in which agent 2 deviates, and ω(3, 1) + ω(3, 2) = ω for the configuration in which agent 3 deviates. While all these probabilities are equal for the complete graph (as ω(i, j) = ω, ∀i ≠ j), they are not all equal if one or two connections are absent, which violates the lumpability condition.
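The violation can also be checked mechanically. The sketch below (illustrative helper names; the condition is tested in its standard strong-lumpability form rather than through the symmetry condition of Theorem 3.2.2) builds the micro chain on {0, 1}^3 induced by ω and tests whether aggregation by k yields equal transition probabilities into each macro atom.

```python
import itertools
import numpy as np

def micro_chain(omega):
    """Micro transition matrix on {0,1}^N: with probability omega[i, j]
    agent i copies the state of agent j (cf. the transition probability (3.7))."""
    N = omega.shape[0]
    states = list(itertools.product([0, 1], repeat=N))
    idx = {x: s for s, x in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for x in states:
        for i in range(N):
            for j in range(N):
                if omega[i, j] > 0:
                    y = list(x)
                    y[i] = x[j]
                    P[idx[x], idx[tuple(y)]] += omega[i, j]
    return P, states

def is_lumpable(P, states, partition):
    """Strong lumpability: all states of an atom must have the same total
    transition probability into every atom."""
    for Xa in partition:
        for Xb in partition:
            probs = [sum(P[states.index(x), states.index(y)] for y in Xb)
                     for x in Xa]
            if not np.allclose(probs, probs[0]):
                return False
    return True

# link update with the 2-3 link removed (agents numbered 1, 2, 3 in the text)
omega = np.array([[0, 0.25, 0.25],
                  [0.25, 0, 0],
                  [0.25, 0, 0]])
P, states = micro_chain(omega)
by_k = [[x for x in states if sum(x) == k] for k in range(4)]
print(is_lumpable(P, states, by_k))   # False: aggregation over k alone fails
```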



Figure 4.1: Probabilistic structure of the model with three agents on the complete graph.


Figure 4.2: Probabilistic structure of the model with three agents if the connection between 2 and 3 is absent.


Deriving a partition such that the micro process projected onto it is a Markov chain requires a refinement of the aggregation procedure. For the example considered here the respective refined partition is shown in Fig. 4.3. The main purpose of this chapter is to develop a systematic approach to this projection refinement by exploiting all the dynamical redundancies resulting from the symmetries of the agent network. Network symmetries can be used to identify bundles of micro configurations that can be interchanged without changing the hypercubic micro chain. Our example may provide a first intuition. The interaction graph in our example has a symmetry such that agents 2 and 3 can be permuted without affecting the connectivity structure (i.e., the permutation (1)(23) belongs to $\mathrm{Aut}_\omega$). This symmetry induces a symmetry in the hypercube graph associated to the micro process: the two configurations with k = 2 in which agent 2, respectively agent 3, deviates from the other two agents can be permuted without affecting the transition structure, and likewise the two corresponding configurations with k = 1. See also Fig. 4.4. In this simple example, therefore, the previous macro atoms $X_2$ (and $X_1$) must be refined such that these two-element sets of configurations on the one hand, and the single configuration in which agent 1 deviates on the other, form different sets in a Markovian partition.

Figure 4.4: The 3 different configurations of length 3 with one agent in one state and two agents in the other (k = 2). The first two configurations, related by permuting agents 2 and 3, are what we will call macroscopically equivalent.
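The refined partition can be obtained from the orbits of the configurations under the network symmetry. The following sketch (with the hypothetical helper orbits) groups the 2^3 configurations of the example into the atoms of the refined partition by letting the permutation (1)(23) act on the agent positions.

```python
import itertools

def orbits(states, group):
    """Group configurations into orbits under a permutation group acting on
    the agent positions; each orbit is one atom of the refined partition."""
    remaining, atoms = set(states), []
    while remaining:
        x = remaining.pop()
        orbit = {tuple(x[p] for p in perm) for perm in group}
        atoms.append(sorted(orbit))
        remaining -= orbit
    return atoms

states = list(itertools.product([0, 1], repeat=3))
group = [(0, 1, 2), (0, 2, 1)]        # identity and the swap of agents 2 and 3
for atom in orbits(states, group):
    print(atom)                        # six atoms instead of the four X_k
```

For the three-agent example this reproduces the refinement described above: $X_0$ and $X_3$ remain single atoms, while $X_1$ and $X_2$ each split into a two-element atom and a singleton.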
