The overall survey outcome shows that common themes emerge from the different research groups on how to design a successful agent architecture. These include common software engineering quality criteria such as modularity, low coupling, and separation of concerns, in addition to more problem-specific approaches such as coordinating sales and procurement through internal models of inventory and prices, and assigning current and future value to inventory and production resources. There are also some strong differences, such as how to organize communication between the different modules and which modules should own the data for specific tasks. These findings, and the fact that after several years of competition there is still much to be learned, suggest that the recipe for a fully competent supply-chain trading agent remains an open problem, even in an abstract, constrained environment like TAC SCM.
When using strings as the main datatype, we have to find ways to represent structured data types or even pure binary data such as images. For binary data we propose Base64 encoding. For serializing structured datatypes, several solutions are possible. One option is the protobuf library, which provides programming-language-neutral binary serialization; its encoding and decoding are more efficient than non-binary encodings in both time and space. Other options are more verbose encodings such as JSON, YAML, or even XML. Our framework does not enforce a specific way of serializing domain-specific data, because no available solution seems to fit all needs well. Protobuf is already quite efficient, but it introduces large dependencies and could be outperformed by manual binary serialization techniques.
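As a minimal illustration of the string-only transport described above (the payload contents are our own, not the framework's), structured data can travel as JSON while raw bytes are wrapped in Base64:

```python
import base64
import json

# Hypothetical sensor reading an agent wants to send over a string-only channel.
reading = {"sensor": "cam0", "values": [0.12, 0.57, 0.98]}
raw_bytes = bytes([0x89, 0x50, 0x4E, 0x47])  # e.g. the first bytes of a PNG image

# Structured data: serialize to a JSON string (verbose but dependency-free).
payload = json.dumps(reading)

# Binary data: wrap in Base64 so it survives any text-based transport.
encoded = base64.b64encode(raw_bytes).decode("ascii")

# The receiving agent reverses both steps losslessly.
assert json.loads(payload) == reading
assert base64.b64decode(encoded) == raw_bytes
```

Swapping JSON for protobuf would shrink the structured payload at the cost of an extra dependency, which is exactly the trade-off the framework leaves open.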
Mobile agents are programs that move between the computers or nodes of a network, autonomously trying to fulfill specific goals given by users. Agents differ from other applications in that they are goal-oriented: they represent users and act on their behalf to achieve set goals in an autonomous manner, i.e., they control themselves, including deciding where and when to move to the next computer or node. Mobile agents provide a viable means of performing network security assessment and analysis efficiently and effectively. A mobile agent neither brings a new detection method to an IDS nor increases detection speed for particular kinds of attack. Nevertheless, it clearly improves the design, construction, and execution of an IDS.
We have also investigated the use of other theories to allow BDI agents to reason about the ethical acceptability of their actions. In  we considered a situation where ethical reasoning is only invoked when none of the system's existing plans apply, or a plan is being applied but is not achieving the robot's goal – this follows from the agent having some self-awareness of the effectiveness of its actions and of the options it has available. In this situation we considered an architecture where a route-planning system is invoked to produce a wider range of options, each annotated with the ethical consequences of selecting it. We considered examples from the domain of unmanned air systems and an ethical theory based on prima facie duties, in which the system has a preference order over its ethical duties (e.g., its duty to minimize casualties takes precedence over its duty to obey the laws of the air). In this system we were able to prove not only properties such as those in the Python-based system (i.e., that the implementation correctly captured the ethical theory) but also “sanity-checking” properties – so, for instance, in specific scenarios we could verify that the aircraft, if forced into an emergency landing, would always land in a field rather than on a road.
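A preference order over prima facie duties can be sketched as a lexicographic comparison of the duties each option would violate. The duty names, options, and scoring rule below are illustrative assumptions of ours, not the actual system:

```python
# Duties ranked by precedence: earlier entries always dominate later ones.
# These names are hypothetical stand-ins for the system's real duties.
DUTY_ORDER = ["minimize_casualties", "obey_laws_of_the_air", "protect_aircraft"]

def severity(violated):
    # Lexicographic key: violating a higher-ranked duty is always worse
    # than violating any number of lower-ranked ones.
    return tuple(int(d in violated) for d in DUTY_ORDER)

# Each option is annotated with the duties it would violate.
options = {
    "land_on_road": {"minimize_casualties"},
    "land_in_field": {"obey_laws_of_the_air"},  # off-airfield landing
}

best = min(options, key=lambda o: severity(options[o]))
# The field wins: breaking an aviation rule is preferred to risking casualties.
```

The "sanity-checking" verification mentioned above corresponds to proving that, for every reachable scenario, this selection never prefers the road.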
wherein a werewolf makes himself/herself seem to be a leader of the villagers, such as a seer or a medium. The second strategy is called “stealth werewolf”, wherein a werewolf hides himself/herself as one of the villagers. The swindle werewolf can take the initiative in misleading the villagers, but easily becomes a target of divination or execution. The stealth werewolf cannot take the initiative, but is less likely to arouse suspicion of being a werewolf, since he/she does not act directly on the subject of execution. We attempt to implement the stealth werewolf as an agent, and would like to clarify the factors important for the stealth werewolf. An agent that can talk and mislead villagers without attracting the attention of other players is a strong stealth werewolf. Although previous studies have investigated conversations in the Werewolf game (Hirata et al., 2016), they were not done from the standpoint of the stealth werewolf. Therefore, in this paper, we investigate the influence of the number of utterances, of appearances in other players' utterances, and of whispers, on the victory or defeat of werewolves.
agents are best suited to track which targets given their sensor configurations, current pose, and the prior target estimates provided by the GMs (Section B.3.2). Once the targets are assigned to the respective vehicles, the motion planning algorithm designs information-rich, kinodynamically feasible trajectories that traverse this continuous environment while satisfying all state and input constraints (Section B.3.2). The vehicles are assumed to have known dynamics and sensor/detection models (though they may be nonlinear), such that predicted trajectories can be generated deterministically. Reliable pose estimates and environmental obstacle maps are assumed to be available to each agent for convenience, although extensions to uncertain pose and maps are also possible and will be studied in future work. Furthermore, all trajectory planning is decentralized and performed by each vehicle independently; the paths of other agents are assumed unknown, although this information could be shared among the agents. While more efficient sensor fusion can be achieved in such extended search problems using GM representations, there has been little prior work on how to effectively embed GMs into the planning framework. The algorithms proposed by this paper incorporate the GM target representations at each level of planning, including task allocation, trajectory planning, and the human operator interface. By using computationally efficient algorithms in each of these phases, large teams can develop real-time plans that explicitly account for the nature of the target uncertainty at every level.
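A toy sketch of the task-allocation step, under the simplifying assumption (ours, not the paper's) that each target's GM is summarized by a single dominant component and that agents greedily claim the highest-scoring unassigned target:

```python
import math

# Agent positions and GM target summaries (component weight, mean position).
# All names and numbers are illustrative.
agents = {"uav1": (0.0, 0.0), "uav2": (10.0, 0.0)}
targets = {"t1": (0.9, (1.0, 1.0)), "t2": (0.6, (9.0, 2.0))}

def score(agent_pos, target):
    # Favor likely (high-weight) targets that are close to the agent;
    # a real allocator would use the full mixture and sensor models.
    weight, mean = target
    return weight / (1.0 + math.dist(agent_pos, mean))

assignment = {}
free_targets = set(targets)
for name, pos in agents.items():
    best = max(free_targets, key=lambda t: score(pos, targets[t]))
    assignment[name] = best
    free_targets.remove(best)
```

In this configuration uav1 claims t1 and uav2 claims t2; the subsequent trajectory-planning stage would then refine each pairing against the full GM and the vehicle constraints.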
IDA (Intelligent Distribution Agent) is a “conscious” software agent that was developed for the US Navy (Franklin et al. 1998). At the end of each sailor's tour of duty, the sailor is assigned to a new billet. This assignment process is called distribution. The Navy employs some 280 people, called detailers, to effect these new assignments. IDA's task is to facilitate this process by completely automating the role of detailer. IDA must communicate with sailors via email in natural language, understanding the content and producing life-like responses. Sometimes she will initiate conversations. She must access several, quite different, databases, again understanding the content. She must see that the Navy's needs are satisfied by adhering to some sixty policies and seeing that job requirements are fulfilled. She must hold down moving costs, but also cater to the needs and desires of the sailor as well as is possible. This includes negotiating with the sailor via an email correspondence in natural language. Finally, she must reach agreement with the sailor as to the new assignment, if at all possible. If not, she assigns.
Efforts to model the entire system and its interaction with the real world with any degree of accuracy necessarily involve complex abstractions together with a number of assumptions. These abstractions and assumptions are embedded deep within an executable model and may not be explicit to end users, or even to the modellers. Therefore, if we provide a guarantee, for example, that the autonomous system can definitely achieve or avoid something, there will be a number of pre-conditions (that the real world will behave in some particular way) to that guarantee that may be hard to extract. One of the aims of our approach is that the assumptions embedded in the modelling of the real world should be as explicit as possible to the end users of a verification attempt. Obviously, some parts of an agent’s reasoning are triggered by the arrival of information from the real world and we must deal with this appropriately. So, we first analyse the agent’s program to assess what these incoming perceptions can be, and then explore, via the model checker, all possible combinations of these. This allows us to be agnostic about how the real world might actually behave and simply verify how the agent behaves no matter what information it receives. Furthermore, this allows us to use hypotheses that explicitly describe how patterns of perceptions might occur. Taking such an approach clearly gives rise to a large state space because we explore all possible combinations of inputs to a particular agent. However, it also allows us to investigate a multi-agent system in a compositional way. Using standard assume-guarantee (or rely-guarantee) approaches (Misra and Chandy 1981; Jones 1983, 1986; Manna and Pnueli 1992; Lamport 2003), we need only check the internal operation of a single agent at a time and can then combine the results from the model checking using deductive methods to prove theorems about the system as a whole.
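The exhaustive exploration of incoming perceptions can be sketched as follows. The perception names and the toy decision rule are hypothetical, and a real tool would feed the combinations into a model checker rather than plain assertions, but the enumeration idea is the same:

```python
from itertools import product

# Perceptions the (hypothetical) agent program can receive, found by
# analysing the program rather than by modelling the environment.
PERCEPTIONS = ["obstacle_ahead", "low_battery", "goal_visible"]

def decide(p):
    # Toy agent decision rule standing in for the verified program.
    if p["obstacle_ahead"]:
        return "brake"
    if p["low_battery"]:
        return "return_home"
    return "advance" if p["goal_visible"] else "explore"

# Exhaustively check all 2**3 perception valuations, staying agnostic
# about which of them the real world would actually produce.
for values in product([False, True], repeat=len(PERCEPTIONS)):
    p = dict(zip(PERCEPTIONS, values))
    # Safety property: an obstacle always leads to braking.
    assert not p["obstacle_ahead"] or decide(p) == "brake"
```

The state-space cost is visible even here (exponential in the number of perceptions), which is why the compositional, one-agent-at-a-time treatment matters.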
Abstracting away from the continuous parts of the system allows us to use model checking in a compositional fashion. It should be noted that, in many ways, our approach is the complement of the typical approach employed in the verification of hybrid automata and hybrid programs. We are primarily concerned with the correctness of the discrete algorithms and are happy to abstract away from the underlying continuous system, while the other approaches are more concerned with the verification of the continuous control and are happy to abstract away from the discrete decision-making algorithms.
action itself. We characterise the action of a narrative event by the parameters representing the basic dramaturgical principles of cameras as introduced in Table 1. Objects such as characters are described only by their respective geometrical information. This way we keep application-specific knowledge out of the camera agent. The coherence between subsequent events is computed by the agent itself, based on the similarity of objects. This knowledge enables the camera agent to detect connected sequences of events.
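Object-based coherence of this kind might be computed as a simple set overlap between the objects of consecutive events; the Jaccard measure, the threshold, and the example objects below are assumptions of ours, not the paper's definition:

```python
def coherence(objects_a, objects_b):
    # Jaccard similarity of the two events' object sets, in [0, 1].
    a, b = set(objects_a), set(objects_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Two hypothetical narrative events sharing some scene objects.
event1 = ["hero", "villain", "castle"]
event2 = ["hero", "castle", "horse"]

# Treat the events as connected when overlap exceeds a chosen threshold.
connected = coherence(event1, event2) >= 0.5
```

Chaining this test over a stream of events yields the connected sequences the camera agent needs, without it knowing what a "hero" or "castle" means to the application.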
In many domains, action costs depend on the context (state) in which the action is applied, and the specific conditions of the state that give an action one cost or another may be difficult to predict when modelling the domain. To address this challenge, we present a learning approach to be included in the PELEA architecture. We focus on modelling action durations (cost equal to execution time) on unknown terrains, like those that could exist on other planets such as Mars. Even if we have details on how the robot performs on Earth, the actual execution of actions on Mars (or in any unknown environment) might be quite different. Thus, we propose the use of ML to automatically acquire that knowledge from actual action executions. Although we focus on the Rovers domain, the studied approach could be useful in other mobile-robotics domains and a wide range of tasks. Action durations (or similar metrics) usually depend on unpredictable factors such as weather conditions, the presence of other interacting agents, the navigation terrain, the materials to be handled, etc.
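As a minimal sketch of the idea (the terrain feature, the logged data, and the closed-form fit are illustrative, not the actual PELEA learner), a linear model of duration versus terrain slope could be fit from logged executions and then queried by the planner as a context-dependent cost:

```python
# Logged executions of a navigate action: (terrain slope, observed seconds).
# These numbers are invented for illustration.
runs = [(0.0, 10.2), (0.1, 11.1), (0.2, 12.0), (0.3, 13.1)]

# Ordinary least-squares fit of duration = a * slope + b.
n = len(runs)
mean_x = sum(x for x, _ in runs) / n
mean_y = sum(y for _, y in runs) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in runs) / sum(
    (x - mean_x) ** 2 for x, _ in runs)
b = mean_y - a * mean_x

def predicted_duration(slope):
    # Plugged back into the planner as the state-dependent action cost.
    return a * slope + b
```

As new executions arrive from the unknown terrain, refitting keeps the cost model aligned with the environment actually encountered, which is the point of learning rather than hand-modelling the costs.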
between the Editorial Team (Director and other Editorial Staff) and the MULTIDRONE system, during both the pre-production and the production phases. The Dashboard will manage and display maps of the areas where the shooting will take place, annotated with relevant information such as no-flight zones, flight corridors, points-of-interest, landing sites, etc. During Pre-Production, the Director can create and manage the Shooting Mission, which consists of a list of Shooting Actions triggered by events. During Production, the Director uses the Dashboard to i) control the execution of the mission, by triggering events that start or stop the execution of Shooting Actions; ii) graphically monitor the execution of the Mission through the map display and the video streams from the drones’ A/V cameras; and iii) introduce changes to the Shooting Mission or to specific Shooting Actions. The Dashboard will also allow manual control of the cameras and their respective gimbals.
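The event-triggered mission structure might be modelled as below; the class and field names are ours for illustration, not the MULTIDRONE API:

```python
from dataclasses import dataclass

@dataclass
class ShootingAction:
    # One camera action assigned to a drone; names are hypothetical.
    name: str
    drone: str
    running: bool = False

@dataclass
class ShootingMission:
    # Maps a Director-triggered event name to the action it starts.
    actions: dict

    def trigger(self, event):
        # Fired from the Dashboard during Production; returns the
        # started action, or None if the event is not in the mission.
        action = self.actions.get(event)
        if action:
            action.running = True
        return action

mission = ShootingMission(actions={
    "race_start": ShootingAction("chase_shot", "drone1"),
    "sprint": ShootingAction("orbit_shot", "drone2"),
})
started = mission.trigger("race_start")
```

Editing the `actions` mapping mid-mission corresponds to the Director's ability to change the Shooting Mission during Production.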
This attempt gave the agent the ability to increase and decrease velocity along the longitude and latitude coordinates, producing movement in the four cardinal directions (north, east, south, and west), in addition to increasing and decreasing altitude. However, this approach had severe constraints. Since the camera of the UAV always faced the same direction while the vehicle moved in others, the agent could not ’see’ and recognize the object it was colliding with, except in a frontal collision. This temporary solution brought the system nothing but drawbacks, moving the agent away from being intelligent. The only way to address the issue would be to place multiple cameras, one on each side of the quadcopter, which is not a scalable option. Having to mount at least four cameras, or six (to also see above and below), and process their images in real time is clearly too computationally expensive to be affordable. Additionally, UAV hardware suppliers are unlikely to focus on such features, concentrating instead on weight efficiency and flight aerodynamics.
The other problems associated with autonomous farm equipment can probably be overcome with technology. Better sensors and controls would allow the equipment to deal with plugging and malfunctions on its own. In addition to operating equipment, drivers are also collecting information (e.g., weed, disease and insect problems, soil issues, stand establishment). If they are no longer going across the field regularly, other ways need to be found to collect non-standard data. Better sensors would help. Improved scouting programs would be essential. Nevertheless, we will never have a sensor for every possible problem; a periodic human presence in the field is likely to be necessary for the near future. Autonomous farm equipment may be in our future, but there are important reasons for thinking that it may not be just replacing the human driver with a computer. It may mean a rethinking of how crop production is done. In particular, once the driver is not needed, bigger is no longer better. Crop production may be done better and cheaper with a swarm of small machines than with a few large ones. One of the advantages of the smaller machines is that they may be more acceptable to the non-farm community.
The query agents are responsible for analyzing the received requests, constructing queries, and choosing the appropriate base stations that can disseminate them to their nearby nodes. They can also manage the storage nodes used as cache memory for the captured data. These tasks are managed by a planning-agent which controls the other query-agents. A query-agent can be a simple update-agent that updates a specific base station or storage node with a list of queries, or a mission-agent that is disseminated to the base stations, which decide according to their states whether they are able to perform the mission or to collect some data from the storage nodes.
• Proactivity. To help DIM agents be adaptive to new situations, they need to exhibit proactivity, that is, the ability to take the initiative in effecting actions that achieve their goals. Where appropriate, another characteristic that we feel our DIM agents should possess is mobility, since not all of the resources an agent needs to access will be within its local environment. Client-server architectures are not sufficient in themselves to yield efficient use of bandwidth, a problem from which the Internet suffers greatly. Consider the case where an agent (client) wishes to retrieve some data from a remote server; if the server does not provide the exact service that the client requires, for example because it only provides low-level services, then the client must make a series of remote calls to obtain the end service. This may result in an overall increase in latency and in intermediate information being transmitted across the network, which is wasteful and inefficient, especially where large amounts of data are involved. Moreover, if servers attempt to address this problem by introducing more specialised services, then as the number of clients grows, the number of services required per server becomes infeasible to support.
Abstract — Evaluation tools are significant from the Agent-Oriented Software Engineering (AOSE) point of view. Defective designs of communications in Multi-agent Systems (MAS) may overload one or several agents, causing a bullying effect on them. Bullying communications have avoidable consequences, such as high response times and low quality of service (QoS). Architectures that perform evaluation functionality must include features to measure bullying activity and QoS, but it is also recommended that they have reusability and scalability features. Evaluation tools with these features can be applied to a wide range of MAS while minimizing the designer’s effort. This work describes the design of an architecture for communication analysis, and its evolution to a modular version that can be applied to different types of MAS. Experimentation with both versions shows the differences between their executions.
IDA is an Intelligent Distribution Agent for the U.S. Navy. Like CMattie, she implements global workspace theory (Baars 1988, 1997). At the end of each sailor’s tour of duty, he or she is assigned to a new billet. This assignment process is called distribution. The Navy employs some 200 people, called detailers, full time to effect these new assignments. IDA’s task is to facilitate this process, by playing the role of one detailer as best she can. Designing IDA presents both communication problems and constraint satisfaction problems. She must communicate with sailors via email in natural language, understanding the content. She must access a number of databases, again understanding the content. She must see that the Navy’s needs are satisfied, for example, that the required number of sonar technicians are on a destroyer with the required types of training. She must adhere to Navy policies, for example, holding down moving costs. And, she must cater to the needs and desires of the sailor as well as is possible.
Last but not least, the matchmaking agents have two responsibilities. The first is to store semantic descriptions of the domain ontology and of services, along with WSDL descriptions of services. This makes the service database available to the system while alleviating the burden of dealing with storage mechanisms. The second is to offer goal-service matching functionality. Mainly, the execution agent can directly ask for a specific service, or can ask for a suitable service to be found by semantic matching between the service and goal descriptions. In the proposed architecture we have focused on the components relevant to proving the concepts involved in dynamic support for solving non-linear equation systems in a semantic services environment. We assumed the existence of the needed content in the Semantic and Syntactic Repository describing web services. The creation and maintenance of this repository is a complex task that will be considered in future work.
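Goal-service matching of this kind can be caricatured as overlap between concept sets; real matchmakers reason over ontology-based descriptions, and the service names and concepts below are invented for illustration:

```python
# Hypothetical service registry: service name -> semantic concepts it covers.
services = {
    "newton_solver": {"nonlinear", "equations", "root-finding"},
    "matrix_inverter": {"linear", "algebra", "inverse"},
}

def match(goal_concepts):
    # Rank services by how many of the goal's concepts they cover;
    # return None when no service covers any concept at all.
    best = max(services, key=lambda s: len(services[s] & goal_concepts))
    return best if services[best] & goal_concepts else None
```

A query such as `match({"nonlinear", "equations"})` would select the solver service, which the execution agent could then invoke through its WSDL description.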