A Temporal Ontology for Reasoning about Actions
Abstract - This paper presents a systematic study of theories of action using a logical formalism based on a first-order language augmented with operators whose main purpose is to facilitate the representation of causal and temporal relationships between actions and their effects, as well as between actions and events. In Allen's and McDermott's formalisms, we notice that the notions of past, present and future do not appear in the predicate Ecause. How, then, can one guarantee that effects do not precede causes? To use the concept of temporality without being limited to intervals, we enrich our language with an operator defined on time-elements. Our formalism thus avoids ambiguities in which an effect precedes its cause. The originality of this work lies in a formalism based on equivalence classes. We also define an operator that allows us to represent the evolution of the universe across various futures and pasts. These operators make it possible to represent the types of reasoning known as prediction, explanation and planning. We propose a new ontology for the causal and temporal representation of actions and events; the ontology used in our formalism consists of facts, events, processes, causality, actions and planning.
Our previous work on practical reasoning using value-based argumentation has required assumptions about the values and preferences of other agents intended to justify particular choices of actions on their part. These choices can affect the outcome of an action performed by the reasoning agent. Justification of these assumptions is always difficult, particularly when several other agents are involved, multiplying the alternative actions needing consideration. In this paper we have described an approach in which no assumptions need be made about the values and preferences of others: all that is required is that the agent concerned can identify the values it recognises and indicate their relative worth to itself. In some cases success may still depend on what the other does, but this can be assessed using bounds on the probabilities of the alternatives available to the other. Note that when assessing probabilities we consider the whole range of actions available to the other, rather than the probability of the other performing a specified action. In this way we are able to achieve our objectives of allowing arguments which consider the actions of others, but which do not require assumptions about the beliefs and preferences of the others, while remaining consistent with multi-criteria utility theories, and the dominant actions of game theory. Thus we have shown how to:
[Table 2: Test results for action classification; the +conseq. configuration reaches 0.98-0.99 accuracy across all test sets.] A clear trend towards improved classification accuracy emerges with increasing amounts of grounding, across all test sets. Notably, classifying actions in isolation proves to be challenging once lexical biases have been controlled for. Improvements in accuracy observed for models with access to relevant norms, meanwhile, demonstrate the classifier's ability to relate actions to behavioral rules. We also find that contextual grounding facilitates moral reasoning in the absence of shortcuts. Lastly, the near-perfect performance achieved by including consequences in the classifiers' input (in addition to norms and context) can be attributed to workers' tendency to associate moral actions with positive consequences and immoral actions with negative ones, allowing the model to 'solve' the task by predicting consequence sentiment. Indeed, accuracy remains at 98-99% even when consequences are used as the sole grounding source.
gistics [18, 49]) feature an agent, or set of agents, dedicated to coordinating the tasks performed by the different working components or agents. In argumentation, the role of the mediator agent is usually associated with negotiation dialogues [13, 50, 38], where the main objective of the mediator agent is to help reconcile the competing agents' positions, for which the mediator agent usually relies on mental models of the participants. More relevant here is the conceptual framework for negotiation dialogues SANA, which proposes a number of so-called artifacts, such as a social Dialogue Artifact, which acts as a mediator regulating the agents' dialogue interaction, and a social Argumentation Artifact, which can be regarded as a sophisticated commitment store where the agents' submitted arguments, as well as other arguments that may be publicly available, are organised and their social acceptability status can be evaluated following different algorithms. Similar to our approach, the SANA framework defines, as part of the social Dialogue Artifact, an Argumentation Store (AS) that stores a collection of socially acceptable arguments. The main difference is that, while a central part of ProCLAIM is the definition of the structure and relation of the schemes in ASR, tailored for the specific purpose of deliberating over safety-critical actions, SANA's AS is presented as a placeholder for any argument scheme; that is, developers are given little guidance on which argument schemes the AS should encode. In a broader view, the SANA approach is similar to that proposed in the FP6 European project ASPIC, where a set of generic components was developed (Inference Engine, Dialogue Manager, Learning Component, Decision-Making Component) that can be plugged into an agent in order to add argumentation capabilities. This is the approach undertaken in  to implement ProCLAIM in the transplant scenario.
Many deliberation strategies are possible and it is impossible to consider them all in detail. Instead we characterise some typical deliberation strategies in terms of the execution paths they admit. We focus on the non-interleaved strategy (which we denote by (ni)) and two simple 'alternating' strategies: one which first applies a planning goal rule and then executes a single basic action of one of the agent's current plans (which we denote (as)); and another which first applies a planning goal rule and then executes a single basic action from each of the agent's current plans (which we denote (am)). These strategies were chosen as representative of deliberation strategies found in the literature and in current implementations of BDI-based agent programming languages. However, none of these strategies (or any other single strategy) is clearly "best" for all agent task environments. For example, the (ni) strategy is appropriate in situations where a sequence of actions must be executed 'atomically' in order to ensure the success of a plan. However, it means that the agent is unable to respond to new goals until the plan for the current goal has been executed. Conversely, the (as) and (am) strategies allow an agent to pursue multiple goals at the same time, e.g., allowing an agent to respond to an urgent, short-duration task while engaged in a long-term task. However, they can increase the risk that actions in different plans will interfere with each other. It is therefore important that the agent developer has the freedom to choose the strategy which is most appropriate to a particular problem.
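As an illustration only, the execution paths admitted by the three strategies can be sketched as below. The function names and the round-robin scheduling for (as) are our own simplifications, not the operational semantics of any particular BDI language; plans are reduced to plain sequences of named basic actions.

```python
# Toy traces for the (ni), (as) and (am) deliberation strategies.
# Each plan is a list of basic-action names; goal adoption is left implicit.

def ni(plans):
    """(ni) non-interleaved: run each plan to completion before the next."""
    trace = []
    for plan in plans:
        trace.extend(plan)
    return trace

def alternate_single(plans):
    """(as): each cycle executes one basic action of ONE current plan
    (here chosen round-robin, which is our own assumption)."""
    trace, queues = [], [list(p) for p in plans]
    i = 0
    while any(queues):
        if queues[i % len(queues)]:
            trace.append(queues[i % len(queues)].pop(0))
        i += 1
    return trace

def alternate_multi(plans):
    """(am): each cycle executes one basic action from EVERY current plan."""
    trace, queues = [], [list(p) for p in plans]
    while any(queues):
        for q in queues:
            if q:
                trace.append(q.pop(0))
    return trace

plans = [["a1", "a2"], ["b1", "b2"]]
print(ni(plans))              # ['a1', 'a2', 'b1', 'b2']
print(alternate_multi(plans)) # ['a1', 'b1', 'a2', 'b2']
```

The contrast between the first and second trace shows why (ni) blocks response to new goals while (am) interleaves them, at the cost of possible interference between plans.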
Allen and Ferguson specify temporal structure before introducing events (or, for that matter, properties and actions), reversing the conceptual priority events enjoy over time in the Russell construction mentioned by Kamp. Without embracing this reversal, the present paper builds on elements of Allen and Ferguson (1994) and other works to construct time from not only events, but also properties and actions. The aim is to find a temporal ontology of finite strings that is not too big (which R or any infinite linear order arguably is) and not too small (which the linear order from Russell's construction can be, depending on the event structure it is fed as input). Insisting on temporal structure that is just right is reminiscent of Goldilocks, and perhaps more germanely, the Goldilocks effect observed in Kidd et al. (2012) as the tendency of human infants to look away from events that are overly simple or overly complex. Whether or not any useful link can be forged between that work and the present paper, I am not able to say. But I do claim that the notions of projections brought out below provide helpful handles on granularity, especially when granularity is varied.
The CNTRO system was able to order the event sequences for 95% of the narratives. The reasoner failed due to different interpretations of time intervals and background assumptions in the manual annotation. Computing the order of two events is difficult when using 'start' or 'finish' temporal relations when both the start and end times cannot be annotated. For example, a narrative might specify that antiplatelet therapy began at the time of stent implantation, and specify that it occurred for a period of 2 months. The temporal relation of event1 (antiplatelet therapy) and event2 (stent implantation) depends on whether the start and end times of the events can be compared. When considering the start time, the two events start at the same time (event1 starts event2). The system cannot infer the relationship by the end time since the duration of "stent implantation" is not specified, given that it occurs at a single point in time. Given the assumption that the stent implantation procedure cannot last for 2 months, we can infer that event1 ends after event2. This kind of background knowledge needs to be further specified in the domain ontology so that the CNTRO system can infer the correct order. Additionally, "patient death" is inherently known to be the last event in a patient-care timeline. This kind of inherited order needs to be incorporated in the domain ontology so that the sequence of events can be correctly inferred.
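The end-point inference above can be sketched as follows. This is our own toy helper, not the CNTRO reasoner; the point is only that the end-point comparison is undefined until background knowledge supplies a duration for the stent implantation (here: effectively instantaneous, i.e. duration 0).

```python
# Compare the end points of two events given start times and durations
# (in days). A missing duration makes the end point, and hence the
# comparison, unknown.

def compare_ends(start1, dur1, start2, dur2):
    """Return '<', '>', '=' for end(e1) vs end(e2), or None if unknown."""
    if dur1 is None or dur2 is None:
        return None  # an end point cannot be computed
    end1, end2 = start1 + dur1, start2 + dur2
    return "<" if end1 < end2 else ">" if end1 > end2 else "="

# Both events start at time 0 (event1 'starts' event2 on start points).
# Without a duration for stent implantation, the end order is unknown:
print(compare_ends(0, 60, 0, None))  # None

# Background-knowledge assumption: the procedure is instantaneous
# relative to a 2-month (~60-day) therapy, so its duration is taken as 0:
print(compare_ends(0, 60, 0, 0))     # '>'  (event1 ends after event2)
```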
from the graph database, 2D Euclidean/Cartesian geometries can thus be processed through distance and other spatial math calculations. Lucene manages only WKT-serialised data, which is supported by GeoSPARQL, and improves performance on shape searches. Like many triple stores, GraphDB provides both the predicate-object-subject (POS) index and the predicate-subject-object (PSO) index for triple management. Therefore, through the use of connectors to these external operations, topological reasoning is achievable. The disadvantage of the method is that it requires a complete framework (SPARQL endpoint, connectors and functions) such as GraphDB provides. Moreover, the processed elements need to be expressed in a common, well-described spatial reference system to guarantee consistent results.
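A minimal sketch of the kind of spatial calculation involved: parsing WKT-serialised points and computing a 2D Euclidean distance. These are our own helper functions for illustration, not GraphDB's or Lucene's API, and they handle only the simple `POINT` case.

```python
# Parse 'POINT (x y)' WKT strings and compute planar Euclidean distance.
import math
import re

def parse_wkt_point(wkt):
    """Parse a WKT point like 'POINT (3 4)' into an (x, y) pair."""
    m = re.match(r"POINT\s*\(\s*([-\d.]+)\s+([-\d.]+)\s*\)", wkt.strip())
    if not m:
        raise ValueError(f"unsupported WKT: {wkt!r}")
    return float(m.group(1)), float(m.group(2))

def distance(wkt_a, wkt_b):
    """Euclidean distance between two WKT points (same reference system)."""
    (x1, y1), (x2, y2) = parse_wkt_point(wkt_a), parse_wkt_point(wkt_b)
    return math.hypot(x2 - x1, y2 - y1)

print(distance("POINT (0 0)", "POINT (3 4)"))  # 5.0
```

Note that the result is meaningful only if both points use the same spatial reference system, which is exactly the consistency requirement stated above.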
One reason, I submit, rests with McTaggart's interpretation of R-relations as order relations and therefore as static and not dynamic. Such a line of reasoning, explicit in the second characterization of the debate, assumes at the outset that temporal passage qua succession is real in the sense of mind-independent only if it is grounded in events changing their A-properties or undergoing absolute becoming, or in some other A-theoretic manner. In so doing, it packs into the commonsense phenomenon of time's dynamism an A-theoretic ontological analysis of succession, and infers that if one denies that analysis, then one is thereby denying the commonsense fact of transition (the timelike character) and thereby the reality of time. However, one can deny the ontology implicit in common sense (if it has an ontology, which I doubt) without denying the phenomenological fact upon which it is based. All that needs to be done is to provide an alternative ontology that comports with the phenomena in question.
In order to illustrate our discussion, think of a surgical action like an Endoscopic Removal of Foreign Body from Stomach (e.g. in a child who swallowed a marble). For the sake of brevity, we will refer to this procedure type as X. (Thus in what follows, 'X' is a constant. At the same time, many of the following formulae can be used as schematic guides for how to deal with the modifiers in question generally.) In a simplified form we can describe X as follows: every instance of X begins with an endoscopy preparation (a), followed by the introduction of the endoscope (b), endoscopic exploration (c), grasping of the foreign body (d), and extraction of the endoscope with the foreign body (e). A description of this is outlined in the information object X_Plan. This plan is realized only by actions that correspond to the sequence abcde. If any of these sub-actions is missing, the action is no longer of the type X. X_Plan is therefore only realized when X is fully accomplished. As X necessarily has all temporal parts a-e, X_implemented is not a subclass of X, because it may still be in phase a or b, lacking the remaining sequential processes c-e. The same applies to X_abandoned (e.g., the action is incomplete because no foreign body was found, i.e. no d is performed). Rooted in realist philosophy, and in accordance with common sense, BioTop makes a clear distinction between information objects like plans and real processes. It is therefore not compatible with the Action ODP as depicted in Figure 1, which obfuscates the ontological distinction between real and hypothetical entities. In what follows, we set out alternative models to account for the different 'flavors' of actions contained in the Action ODP in a way compatible with both realism and common sense.
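The distinction between X, X_implemented and X_abandoned can be sketched operationally as follows. This is a simplified illustration of ours: the encoding of sub-actions as the letters a-e follows the text, while the classification function and its `finished` flag are assumptions made for the sketch.

```python
# Classify a performed sub-action sequence against the plan 'abcde'.
X_PLAN = "abcde"

def classify(executed, finished):
    """Classify an action instance relative to X_Plan.

    'X'             : the full sequence abcde was performed (plan realized)
    'X_implemented' : a prefix of the plan, action still ongoing
    'X_abandoned'   : stopped before completion (e.g. no foreign body found)
    'not-X'         : the performed sequence deviates from the plan
    """
    if executed == X_PLAN:
        return "X"
    if X_PLAN.startswith(executed):
        return "X_abandoned" if finished else "X_implemented"
    return "not-X"

print(classify("abcde", True))  # 'X'
print(classify("ab", False))    # 'X_implemented' (still in phase a-b)
print(classify("abc", True))    # 'X_abandoned'   (no d was performed)
print(classify("abd", False))   # 'not-X'         (sequence deviates)
```

The sketch makes the subclass point concrete: an instance classified as X_implemented or X_abandoned is not thereby an instance of X, since it lacks some of the temporal parts a-e.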
poral concepts can be represented in OWL. Concepts in time are represented as 4-dimensional objects with the 4th dimension being the time (timeslices). Time instances and time intervals are represented as instances of a TimeInterval class, which in turn is related with concepts varying in time as shown in Figure 4. Changes occur on the properties of the temporal part of the ontology, keeping the entities of the static part unchanged. The 4D-fluents approach still suffers from proliferation of objects, since it introduces two additional objects for each temporal relation (instead of one in the case of N-ary relations). The N-ary relations approach referred to above is considered to be an alternative to the 4D-fluents approach, also considered in this work. Proliferation of objects can be avoided if quintuples are used instead of triples, as proposed in . A detailed comparison of design patterns for representing temporal information for the Semantic Web is presented in .
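A rough sketch of why the 4D-fluents pattern proliferates objects, written in plain Python rather than OWL (the class and function names are ours): asserting a single temporal relation requires creating a timeslice object for each participating entity, in addition to the shared TimeInterval.

```python
# Minimal model of the 4D-fluents pattern: properties hold between
# timeslices of entities, not between the entities themselves.
from dataclasses import dataclass

@dataclass
class TimeInterval:
    start: int
    end: int

@dataclass
class TimeSlice:
    of_entity: str
    interval: TimeInterval

def relate_4d(entity_a, entity_b, prop, interval, store):
    """Assert prop(a, b) during interval.

    Creates TWO new timeslice objects per temporal relation; this is the
    object proliferation the text contrasts with N-ary reification,
    which needs only one extra object per relation."""
    ts_a = TimeSlice(entity_a, interval)
    ts_b = TimeSlice(entity_b, interval)
    store.append((ts_a, prop, ts_b))
    return ts_a, ts_b

store = []
relate_4d("Company1", "Product1", "produces", TimeInterval(1999, 2003), store)
print(len(store))  # 1 relation asserted, backed by 2 timeslice objects
```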
However, as illustrated by Examples 1-2, temporal and causal relations are closely related, as assumed by many existing works (Bethard and Martin, 2008; Rink et al., 2010). Here we argue that being able to capture both aspects in a joint framework provides a more complete understanding of events in natural language documents. Researchers have started paying attention to this direction recently. For example, Mostafazadeh et al. (2016b) proposed an annotation framework, CaTeRs, which captured both temporal and causal aspects of event relations in common sense stories. CATENA (Mirza and Tonelli, 2016) extended the multi-sieve framework of CAEVO to extracting both temporal and causal relations and exploited their interaction through post-editing temporal relations based on causal predictions. In this paper, we push this idea forward and tackle the problem in a joint and more principled way, as shown next.
machines, are modeled with Intentional Elements (IEs) originating from the PQR formula. Key is that an activity (P - What) is executed in order to achieve a goal (R - Goal) or to produce an outcome. The relation between an activity and a goal is "contributes", including a range of values (++, +, +/−, −, −−) to indicate a (strong) positive, a (strong) negative or a neutral contribution. The semantics of this relation are deliberately left imprecise so as to pose no restriction on its application. As usual in ontology engineering, a property such as "contributes" can be refined to match the application at hand. The relations "produces" and "consumes" link an activity with an outcome in the sense that an activity produces or consumes an outcome. Activities, goals and outcomes, and all the other IEs for that matter, can be decomposed into sub-activities, sub-goals and sub-outcomes, respectively. The relation that establishes a hierarchy between IEs is the "part of" relation.
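A small sketch of one possible encoding of these relations (the class, the numeric mapping of the value range, and all names are our own illustrative assumptions, not a defined serialisation of the model):

```python
# Intentional Elements with "contributes" (valued) and "part of" relations.
# The value range (++, +, +/-, -, --) is mapped to integers for illustration.
CONTRIBUTION = {"++": 2, "+": 1, "+/-": 0, "-": -1, "--": -2}

class IE:
    """An Intentional Element: activity, goal or outcome."""
    def __init__(self, name):
        self.name = name
        self.parts = []        # "part of" hierarchy (sub-elements)
        self.contributes = []  # (goal, value) pairs

    def add_part(self, sub_ie):
        """Decompose this IE into a sub-activity/sub-goal/sub-outcome."""
        self.parts.append(sub_ie)

    def contributes_to(self, goal, value):
        """Record a valued contribution of this activity to a goal."""
        self.contributes.append((goal, CONTRIBUTION[value]))

activity = IE("ship order")          # P - What
goal = IE("customer satisfied")      # R - Goal
activity.contributes_to(goal, "++")  # strong positive contribution
print(activity.contributes[0][1])    # 2
```

As the text notes, the semantics are deliberately loose, so a refinement for a concrete application would replace the integer mapping with whatever interpretation fits.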
(1998) propose to model atomic actions such as left-leg-back by an HMM and to manually define a context-free grammar that is used to assemble atomic actions into higher-level actions. The authors evaluate their approach on hand movements of a music conductor and can successfully find bars, i.e. temporal boundaries of musical phrases, from the sequence of conducted gestures. A similar approach is proposed in Ikizler and Forsyth (2008), who model atomic movements of body parts with HMMs and use regular expressions to model sequences of such atomic movements. Pirsiavash and Ramanan (2014) also use a context-free grammar to model hierarchical decompositions of actions. A parsing algorithm using finite state machines allows such hierarchical temporal structures to be inferred efficiently. In a related approach, Vo and Bobick (2014) propose to use a grammar that is restricted to AND and OR rules. The grammar allows high-level activities to be decomposed into sequences of lower-level actions, and a message passing algorithm enables efficient inference over all possible decompositions. Hou et al. (2017) attempt to model context without an explicit grammar. To this end, the input sequence is clustered into temporally connected and spatially similar subactions. Similar clusters are merged to avoid finding the same subaction class twice. The best sequence of subactions is then found by a shortest-path algorithm over the mined subaction instances. Another clustering-based approach to incorporating context has been proposed by Cheng et al. (2014). In their work, a video is represented as a sequence of visual words obtained by k-means clustering. A sequence memoizer (Wood et al., 2011) is then used to learn temporal dependencies between the visual words. In terms of context modeling, Kuehne et al. (2014, 2016) are closest to our work. They approach temporal action segmentation with a classical speech recognition system.
Video-frame probabilities are represented with a Gaussian mixture model, and a hidden Markov model is used for the temporal progression of these features throughout a low-level action, which is the equivalent of a word in speech recognition. A context-free grammar models interdependencies between action classes and can be seen as the equivalent of a language model in speech recognition.
All subjects were provided with one of three different search topics and then asked to evaluate a set of 48 newspaper articles with respect to the topic description. The three search topics, 385 (hybrid auto engines), 396 (sick building syndrome) and 415 (drug trafficking), were selected from a subset of the TREC-7 and TREC-8 collections developed by Sormunen (2002). The topic statements are included in Appendix A. Topics were chosen based on several factors. First, articles on the topic had to represent a sufficient mix of relevance levels. Second, topics that were unlikely to be familiar to study participants were chosen to minimize variation in existing knowledge participants might have about the topic. Finally, topics were chosen for their potential interest to study participants. Articles were presented one at a time and participants were asked to rate them using a 4-point categorical scale: not relevant, marginally relevant, relevant, and highly relevant. Definitions for each relevance category were provided by the system.
Covariation is recognised as an important aspect of statistical thinking and reasoning and is used to explore the relationship between two attributes. Often, covariation is determined from the interpretation of scatterplots that display the correspondence of two numerical attributes and is described as a trend in the data. Scatterplots are utilised when conducting exploratory data analysis (EDA). EDA strategies are useful for interpreting the data as they allow the data to be manipulated in order to construct graphical representations that facilitate making sense of the data. The translation of EDA strategies into innovative software packages, such as TinkerPlots: Dynamic Data Exploration, has placed student learning about data analysis in technological environments and there is a need to investigate the way in which students learn in these contexts.
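As a concrete illustration (ours, not from the thesis or TinkerPlots), the trend a scatterplot displays can be quantified as Pearson's correlation between the two numerical attributes:

```python
# Pearson's correlation coefficient: a numerical summary of the
# covariation (trend) between two attributes shown in a scatterplot.
import math

def pearson_r(xs, ys):
    """Return Pearson's r for two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfect positive trend)
```

Exploratory tools like TinkerPlots let students see such a trend before (or instead of) computing any coefficient, which is the point of the EDA strategies discussed above.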
This approach is implemented in the DogmaModeler ontology engineering tool, and tested in building the CCFORM ontology (a medium-size legal ontology about customer complaints), which has been built by many lawyers. We shall illustrate (in section 4) how DogmaModeler guides ontology modelers to quickly detect unsatisfiability in early phases and does not require these modelers to have any background knowledge about logic or reasoning. One of the interesting lessons we have learned in the CCFORM project is that the lawyers were able to intuitively understand their modeling mistakes and how to avoid them next time. Some of them even admitted that they came to understand some logic from their experience in using DogmaModeler.
Qualitative research in education is about understanding what teachers and students do in educational settings. The goal is to study the participants' view of the situation (Creswell, 2003). Educational design research methodologies provide the opportunity to interrogate educational settings from this perspective. The methods utilised are selected on their ability "to fit the specifics of the problem and situation, and may consist of any one or a combination of explanatory, interpretive, experimental, computational, mathematical or exploratory methods" (Sloane, 2006, p. 44). To ensure educational design studies are considered credible, rigorous research propositions are needed to address the criticism that qualitative education research often fails to bridge the gap between theory and practice (Cobb, Confrey et al., 2003; DBRC, 2003; Sloane, 2006). The methodological objective of this thesis is to create a learning environment "to investigate the possibilities for educational improvement by bringing about new forms of learning in order to study them" (Kelly, 2006, p. 108) with the aim of linking theory and practice in situ to provide evidence that has the potential to inform teaching practices, curriculum development, and knowledge of student learning (McKenney, Nieveen & Akker, 2006).