Real-time strategy games share many aspects with real situations in domains such as battle planning, air traffic control, and emergency response team management, which makes them appealing test-beds for Artificial Intelligence (AI) and machine learning. End-user annotations could help to provide supplemental information for learning algorithms, especially when training data is sparse. This paper presents a formative study to uncover how experienced users explain game play in real-time strategy games. We report the results of our analysis of explanations and discuss their characteristics that could support the design of systems for use by experienced real-time strategy game users in specifying or annotating strategy-oriented behavior.
Because of their generally faster-paced nature (and in some cases a shallower learning curve), real-time strategy games have surpassed the popularity of turn-based strategy computer games. In the past, a common criticism was to regard real-time strategy games as “cheap imitations” of turn-based strategy games, arguing that real-time strategy games had a tendency to devolve into “click-fests” in which the player who was faster with the mouse generally won, because they could give orders to their units at a faster rate. The common retort is that success involves not just fast clicking, but also the ability to make sound decisions under time pressure. The “click-fests” argument is also often voiced alongside a “button babysitting” criticism, which points out that a great deal of game time is spent either waiting and watching for the next time a production button can be clicked, or rapidly alternating between different units and buildings, clicking their respective buttons.
Pathfinding has become a prominent and challenging problem in the game industry as the industry's importance has grown. Games such as role-playing games and real-time strategy games usually have characters routed on a mission from their current location to a predetermined or player-determined destination. Agent movement is among the greatest challenges in the design of realistic AI in computer games, and pathfinding strategies are generally utilized as the core of any AI movement system. The most common challenge of pathfinding in video games is how to avoid obstacles intelligently and to seek the most beneficial path across different areas. The modern computer game industry is becoming much bigger and more complex every year, both with regard to map size and to the number of units present in a game.
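As a concrete reference point for the pathfinding core described above, here is a minimal, generic A* sketch on a 4-connected grid with a Manhattan-distance heuristic; it illustrates the textbook algorithm, not any particular engine's implementation:

```python
import heapq

def astar(grid, start, goal):
    """Textbook A* on a 4-connected grid; grid[y][x] == 1 marks an obstacle.
    Returns the shortest path as a list of (x, y) cells, or None."""
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]     # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                    # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                         # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                                 # goal unreachable
```

With an admissible heuristic such as Manhattan distance on a unit-cost grid, the first time the goal is popped the path is guaranteed optimal.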
The style of analysis in this section shifts toward a more qualitative nature in the hope that it will provide insights which the quantitative models presented above cannot. After every 25 rounds, subjects were asked to state their decision strategy in their own words, to provide clear examples of human strategizing. This question was open-ended to avoid leading subjects to specific strategies and to allow subjects to express any strategy without restriction to prespecified answers. The benefit of this is that it allows researchers to inspect individual thought processes; on the other hand, it is desirable to be able to group answers into more generalized categories in order to infer trends across all subjects. Hence, each answer was coded as falling into one of the following categories: fictitious play, pattern-detecting fictitious play, reinforcement learning, pattern-detecting reinforcement learning, other types of reasoning, the win-stay/lose-shift (WS/LS) heuristic, and random. The results of this categorization are displayed in Table 19. The difference between fictitious play (and reinforcement learning) and pattern-detecting fictitious play (and pattern-detecting reinforcement learning) is that in the latter categories subjects displayed some type of multi-period or sequential thought instead of simply looking at single-period actions. Examples are given below of actual answers of subjects and how they were classified.
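To make the single-period versus sequential distinction concrete, the two styles of reasoning can be sketched in code. The 2x2 payoff matrix below and both response rules are illustrative assumptions, not the actual game or coding scheme used in the study:

```python
from collections import Counter

# Hypothetical payoff matrix for the focal player in a 2x2 game
# (matching-pennies-style numbers, chosen purely for illustration).
PAYOFF = [[1, -1],
          [-1, 1]]

def fictitious_play_response(opponent_history):
    """Best respond to the empirical frequency of the opponent's past actions:
    the single-period reasoning coded as plain 'fictitious play'."""
    counts = Counter(opponent_history)
    n = len(opponent_history)
    # Expected payoff of each own action against the empirical mixture.
    expected = [sum(PAYOFF[a][b] * counts[b] / n for b in (0, 1))
                for a in (0, 1)]
    return max((0, 1), key=lambda a: expected[a])

def pattern_detecting_response(opponent_history):
    """Condition on the opponent's last action (a first-order Markov forecast):
    the multi-period, sequential thought of the 'pattern-detecting' categories."""
    last = opponent_history[-1]
    followers = [opponent_history[i + 1]
                 for i in range(len(opponent_history) - 1)
                 if opponent_history[i] == last]
    forecast = Counter(followers or opponent_history).most_common(1)[0][0]
    return max((0, 1), key=lambda a: PAYOFF[a][forecast])
```

Against an alternating opponent history such as 0, 1, 0, 1, 0, the two rules diverge: fictitious play best-responds to the overall frequencies, while the pattern detector forecasts the next step of the alternation.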
Having defined the first hierarchical level of taxa, within-subjects heterogeneity can be further subdivided into two further taxa, coined extrinsic and intrinsic within-subjects heterogeneity. The latter refers to heterogeneity that is conditioned on variables pertaining to game play, essentially referring to whether subjects adapt their strategy to the opportunities for exploitation presented by their opponent. Extrinsic within-subjects heterogeneity is essentially heterogeneity that cannot be attributed to the game-play variables that a researcher would include in an econometric model, and would therefore be captured in the error term. Note that this taxon includes both the cases where subjects may be conditioning on extrinsic but theoretically observable random variables (e.g., sunspots) and the effects of extrinsic but non-observable variables, such as the levels of various neurotransmitters in the brain and how they affect behavior.
“Game mechanics” always stimulate players to explore and alter environments (Sicart 2008), and to engage in activities of probing, transforming and pushing the limits. This offers great possibilities to foreground the mutated and impure inheritance of colonialism of which Nash speaks. Historical strategy games employ this inherent potency by asking players to explicitly engage with colonial history. By walking, shooting, harvesting, building, digging, stabbing, sailing, navigating, mapping and giving orders, players explore and rework historical relations between identity and space. Following game theorist Järvinen's assertion that the player's agency can best be described by using verbs (2008, p. 254), but adding to it that game mechanics are always cultural mechanisms as well, one could state that the player's basic activity consists of probing, transforming and pushing the limits of colonial spatial transformations.
Presaging Carl von Clausewitz's later discussions of the art of war (Clausewitz 1993), Hellwig relates the strategy game to the political sphere, affirming the dictum that war is merely a continuation of politics by other means: “The most natural way to end the war even against the enemy's will is rather to deprive him of those means without which he cannot continue the war. […] Therefore, conquering the antagonistic territory has to end the war naturally.” (Hellwig 1803, §8, own translation). It is not the most brutal fighter who wins the war, but the most political (or, even better, scientific) actor. After Clausewitz, war could no longer be conceptualized in terms of natural law. It became part of a rational and scientific, economic, and political system. Understandably, then, Hellwig's game forces players to counter the enemy tactically and strategically. In fact, it is far less about battling men in the field than about the skilful control of space and the planning of menacing situations that force the enemy to withdraw. As with chess, the winner is the player who is able to best plan and anticipate in spatial terms, as well as look into the future of the unfolding complex constellations on the board. Two functions thus meet within Hellwig's game that seem to have little to do with one another but are frequently coupled in strategy games: the idea of space as a dominant plane of action, and the idea of a specific didactic that aims at negotiating abstract ideas by offering sensual, playful reproduction and reenactment in an educational and enlightening way.
In this project, a strategy is explicitly a vector of genes. For the sake of simplicity in demonstration and evaluation, several strategies come along with the TuringLearner platform. We have IntVectorStrategy, RealVectorStrategy and PctVectorStrategy for the ING, RNG and PNG games as described in Chapter 3. A strategy of type IntVectorStrategy is a vector of lists of Boolean values, each evaluated to an integer number. The other two consist of genes with real values. Reproduction can be asexual or bisexual, and the reproducing process is managed by the corresponding player at the end of each generation. Two other strategy types that come with the platform are ElmanStrategy and EpuckStrategy, which correspond to the Elman neural network and the specification of an e-puck robot, respectively. It is worth noting that each of these genes is associated with a mutation value (Figure 5.1). Table 5.1 gives a summary of these strategies, their players, and the corresponding games.
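A rough sketch of what a real-valued gene vector with per-gene mutation values might look like; the class and method names below are illustrative and do not reproduce the TuringLearner API:

```python
import random

class GeneVectorStrategy:
    """Illustrative real-valued gene-vector strategy: each gene carries its
    own mutation step size, as in the per-gene mutation values of Figure 5.1."""

    def __init__(self, genes, steps):
        self.genes = list(genes)        # real-valued genes
        self.steps = list(steps)        # per-gene mutation magnitudes

    def mutate(self, rng=random):
        """Asexual reproduction: perturb each gene by its own step size."""
        child_genes = [g + rng.gauss(0.0, s)
                       for g, s in zip(self.genes, self.steps)]
        return GeneVectorStrategy(child_genes, self.steps)

    def crossover(self, other, rng=random):
        """Bisexual reproduction: uniform crossover of two parents' genes."""
        child_genes = [a if rng.random() < 0.5 else b
                       for a, b in zip(self.genes, other.genes)]
        return GeneVectorStrategy(child_genes, self.steps)
```

A player managing reproduction at the end of a generation would simply call `mutate` or `crossover` on the selected parents to populate the next generation.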
(c) Now we want to construct a game based on the extensive form of part (b). For this we need Mr T’s preferences. There are two types of criminals in Mr T’s line of work: the professionals and the one-timers. Professionals are in the business for the long term and thus, besides being greedy, worry about reputation; they want it to be known that (1) every time they were paid they honored their promise to free the hostage and (2) their threats are to be taken seriously: every time they were not paid, the hostage was killed. The one-timers hit once and then they disappear; they don’t try to establish a reputation and the only thing they worry about, besides money, is not to be caught: whether or not they get paid, they prefer to kill the hostage in order to eliminate any kind of evidence (DNA traces, fingerprints, etc.). Construct two games based on the extensive form of part (b) representing the two possible types of Mr T.
A strategy profile s′ is called a predecessor of a profile s if s is obtained from s′ via a unilateral deviation of a player from its zero strategy to some positive strategy. We construct a weighted directed graph from the game, whose vertices are the strategy profiles. The profile z with all zero strategies acts as the origin. For any predecessor-successor pair (s′, s), a directed edge is drawn from s′ to s, and the weight assigned to this edge is the gain in payoff of the corresponding deviating player. The length of any directed path in this graph is the sum of the weights of all edges that appear in the path. For any profile s, the path independence property holds if all paths from the origin z to s have the same length. We show that a game admits a potential if and only if the path independence property holds for all vertices of its associated graph (Theorem 1).
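The construction can be made concrete for a small two-player example. The payoffs below are hypothetical, chosen so that the game admits a potential; the sketch enumerates all deviation paths from the origin to a profile and compares their lengths:

```python
# Hypothetical 2x2 example: each player chooses strategy 0 (the "zero"
# strategy) or 1 (a positive strategy). Payoffs chosen to admit a potential.
payoffs = {  # profile -> (payoff of player 1, payoff of player 2)
    (0, 0): (0, 0),
    (1, 0): (2, 1),
    (0, 1): (1, 2),
    (1, 1): (3, 3),
}

def path_lengths(target, profile=(0, 0), length=0, acc=None):
    """Collect the lengths of all directed paths from the origin to `target`,
    where each edge is a unilateral deviation from the zero strategy and its
    weight is the deviating player's payoff gain."""
    if acc is None:
        acc = []
    if profile == target:
        acc.append(length)
        return acc
    for i in range(2):                      # try letting player i deviate
        if profile[i] == 0 and target[i] == 1:
            nxt = tuple(1 if j == i else profile[j] for j in range(2))
            gain = payoffs[nxt][i] - payoffs[profile][i]
            path_lengths(target, nxt, length + gain, acc)
    return acc

# Both deviation orders reaching (1, 1) give the same length, so the path
# independence property holds at every vertex of this small graph.
print(path_lengths((1, 1)))
```

Here the two paths to (1, 1), via (1, 0) and via (0, 1), both have length 4, consistent with the exact potential P(0,0)=0, P(1,0)=P(0,1)=2, P(1,1)=4.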
The LoL players reported that they look at the minimap mostly to locate the enemy, particularly the jungler for its role as gank initiator (see Appendix F) (e.g., “To just be aware of the situation, if there is an enemy missing, for example, you can still with a quick look on the minimap, or if you actually have some wards up- actually if you have some wards up, it is nice information to get from the minimap. If you have them up, you can check if you have some vision of the enemy jungler, for example, or if they have some heavy roamers it's important to keep track of that.” (03LO)); they also look for where their teammates are and whether they are in trouble (e.g., “I'm looking at, erh, or where my hero is, erh, moving. And erh, I don't know if my teammate is in trouble, or if the enemies has shown up” (25DO)). Players pay particular attention to off-set situations; for instance, players who appear ‘absent’ on the map where there should be someone might indicate the deployment of a certain strategy by the enemy team. Some players use the minimap to gather other types of information, such as the positions of their own or enemy minions; this is also referred to by the players as map awareness (see Appendix F). According to the implicit content analysis, few of the LoL players mentioned the minimap as a pivotal point for strategy creation; on the contrary, players rely more on patterns of interaction that emerge during the gameplay (game awareness). (See ‘map awareness’ and ‘game awareness’ in Table 17.)
In modern games, these algorithms are used to make characters move within a given area as realistically as a real-life character. The movement of characters has become smoother than before and closer to that of a real human. An optimal path must satisfy two criteria: validity (it is collision-free) and path length, or the processing time required to complete the search. The main problem of pathfinding in game development, especially in role-playing and online strategy games, is to find a way past the obstacles on a given map with the least use of the computer's resources. To date, A* has been applied widely in many games, such as the Age of Empires series, the World of Warcraft series, and the Dota series. Besides A*, many games use variants of A* such as Hierarchical Pathfinding A*, navigation meshes, and IDA*. In 2007, Theta* was derived from A*. The main purpose of this paper is to examine what Theta*'s path looks like when computed on the same maps on which A* is applied in some games, and how the movement of characters in games is thereby improved.
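The qualitative difference between the two planners can be illustrated with a small sketch: an any-angle post-smoothing pass over a grid path, using a Bresenham-style line-of-sight test. This mimics the shorter, more natural-looking paths Theta* produces, although true Theta* performs the line-of-sight check during the search itself; the code is an illustration, not taken from any of the games mentioned:

```python
def line_of_sight(grid, a, b):
    """Bresenham-style walk from a to b; True if no obstacle cell is crossed.
    grid[y][x] == 1 marks an obstacle."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while True:
        if grid[y0][x0]:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy

def smooth(grid, path):
    """Any-angle post-smoothing of a grid path: keep a waypoint only when the
    previously kept waypoint loses line of sight to the next one."""
    if len(path) < 2:
        return path
    out = [path[0]]
    for i in range(1, len(path) - 1):
        if not line_of_sight(grid, out[-1], path[i + 1]):
            out.append(path[i])
    out.append(path[-1])
    return out
```

On an open map, an L-shaped grid path collapses to a straight segment between its endpoints; near obstacles, only the waypoints needed to keep line of sight survive.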
This paper provides a sufficient condition for existence and uniqueness of equilibrium, which is in monotone pure strategies, in a broad class of Bayesian games. The argument requires that the incremental interim payoff—the expected payoff difference between any two actions, conditional on a player’s realised type—satisfies two conditions. The first is uniform strict single-crossing with respect to own type. The second condition is Lipschitz continuity with respect to opponents’ strategies. Our main result shows that, if these two conditions are satisfied, and the bounding parameters satisfy a particular inequality, then the best response correspondence is a contraction, and hence there is a unique equilibrium of the Bayesian game. Furthermore, this equilibrium is in monotone pure strategies. We characterize the uniform monotonicity and Lipschitz continuity conditions in terms of the model primitives. We also consider a number of examples to illustrate how the approach can be used in applications.
It constituted the driving force for a substantial body of research on the effect of the teaching strategy of inducing a conflict, on a cognitive level (Limón, 2001; Snyder & Feldman, 1977; Tsai & Chang, 2005), namely a cognitive conflict. However, in order to effectively induce cognitive conflict, knowledge of student preconceptions is considered to be a prerequisite (Limón, 2001; Millar, 1989; Scott et al., 1991). Based on this knowledge, the teacher is enabled to develop those learning activities that will lead students to the recognition of a contradiction, a problematic situation to which they fail to provide a solution based on their preconceptions (Hewson & Hewson, 1984; Limón, 2001; Scott et al., 1991). When students recognize a cognitive conflict, this recognition itself motivates them to resolve the conflict, either by trying to reorganize existing conceptions or by seeking new information (Berlyne, 1965; Biggs, 1990; Keller, 1987; Piaget, 1980; Posner, Strike, Hewson, & Gertzog, 1982). In spite of its extensive use in science and the recognition of its effectiveness (Cakir, 2008; Vosniadou & Mason, 2012), no remarkable dissemination of the strategy has been recorded in PE teaching.
In this paper, we extend Schmeidler’s result to large generalized games with a finite number of atomic players. In our framework, both objective functions and admissible strategies may depend on the strategies of atomic players and on messages which aggregate information about strategies chosen by non-atomic players (i.e., not necessarily on the average of these actions). By extending the proof given by Rath (1992, Theorem 2) of Schmeidler’s (1973) classical result, we provide a short and direct proof of the existence of pure Nash equilibria in large generalized games, without purifying a mixed strategy equilibrium. Our theorem is related to the equilibrium existence theorems in Balder (1999, Theorem 2.1) and Balder (2002, Theorem 2.2.1). However, one of the merits of our proof is its simplicity, as it is based only on standard fixed-point arguments in compact metric spaces.
Thus, in summary, a game with a lower number of edges and a linear pathway may require less time than games involving a larger number of edges and non-linear pathways. The findings from this study can be used as a baseline for comparison with the performance of stroke patients. However, further research needs to be done to verify these observations.
The game-theoretic literature on externalities, for example Sandholm (2001), has the potential to be useful in our context. However, the strong symmetry assumptions used, which yield strong and interesting conclusions, exclude almost all of the games of interest to us. For example, they exclude the simple special case of our model where there are two nodes, called home and work, with one link between them but two departure times. Hu (2010) considers Nash equilibrium with continuous departures for a single commuting corridor for one morning rush hour. It is shown that with a specific dynamic, the equilibrium exists and is unique. As we shall illustrate in the next subsection, multiple equilibria are quite natural in models of commuting. Ross and Yinger (2000) show that the only equilibrium in a general urban equilibrium version of a commuting model with continuous departure times and flow congestion but no bottlenecks is an unreasonable one with a never-ending rush hour. As we shall explain below, by allowing a large but finite number of departure times and randomizing departures over small intervals between these discrete departure times, with some effort we can overcome these difficulties. Konishi (2004) considers existence, uniqueness and efficiency of Nash equilibrium primarily in a static model but also in a dynamic model with queues, employing Schmeidler’s (1973) theorem 1 as we do. He uses bottlenecks whereas we use speed
In the previous section, we showed that whenever negative derivatives occur, they are with respect to the highest and lowest payoffs of a given player. In this section, we answer the following question: assuming that players will play according to the mixed equilibrium, and there are two negative derivatives for a given player, if this player has the opportunity to burn x units of utility payoff, what is the best utility-burning strategy he can adopt? We now prove that he should burn utility in the case that he uses a strategy that is a best response to the strategy of the other player that is strongly collaboratively dominant for him. However, as we show next, in some cases the player should burn utility only if the other player indeed chooses the strongly collaboratively dominant strategy for him (this situation corresponds to burning utility on his highest utility payoff in the game), while in other cases the opposite should happen (this situation corresponds to burning utility on his lowest utility payoff in the game).
As the strategy set of each player is no longer required to be a complete lattice, our results prove to be crucial in providing the existence of equilibrium for games in which at least one of the players has a multidimensional strategy set and faces a form of budget constraint, capacity constraint, or legal regulation that makes some of her strategies infeasible or unavailable. For instance, if such constraints are introduced into multi-stage R&D models (Amir), or into Bertrand competition with pricing and advertising (Vives, Calciano), or into generalized contest games (Acemoglu and Jensen), the strategy set would no longer be a lattice, but a CPO. In particular, in a generalized contest game, the players make two types of costly effort, each corresponding to a separate contest. One contest can correspond to an educational competition while the other can represent competition in sports. Since the total amount of effort that can be made by the players is bounded from above, the maximum amount cannot be exerted in both contests. Thus, the strategy set ceases to be a lattice. For such cases, the existence of equilibrium cannot be verified by the existing results in the context of games with strategic complementarities. However, utilizing games with general complementarities, we show that the set of equilibria is indeed a nonempty CPO.
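For concreteness, a standard one-line illustration (in our own notation, not taken from the cited models) of why such a budget constraint destroys the lattice property: with two effort dimensions and a total-effort bound, the componentwise join of two feasible strategies can violate the bound.

```latex
S=\{(x,y)\in[0,1]^2 : x+y\le 1\},\qquad
(1,0),\,(0,1)\in S,\qquad
(1,0)\vee(0,1)=(1,1)\notin S .
```

The join (componentwise maximum) of two feasible strategies leaves S, so S is not a lattice, although it remains a CPO.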
A natural but naïve approach would be to assume that if you increase the number of players and look at “determinacy axioms for three-player games”, the number of interesting features for set theory should also increase. It is well-known that the opposite is true: determinacy axioms of the above form for n-player games are only interesting if n = 2. In all other cases, they are trivial: if n = 1 they are all true, regardless of Γ; if n > 2 they are all false for almost all Γ (cf. Proposition 1). It turns out that if we want to give solution concepts for infinite many-player games, determinacy in the classical sense is not the right concept. Solutions that have been offered in the literature include giving up the notion of a pure strategy and moving to mixed strategies (cf. [Ga53] and [Br00]), and understanding many-player games as coalitional games. In this paper, we want to work with pure strategies and stay within the realm of non-cooperative perfect-information games.