Email: b.knight, j.ma, email@example.com
This paper presents a discrete formalism for temporal reasoning about actions and change, which enjoys an explicit representation of time and of action/event occurrences. The formalism allows the expression of truth values for given fluents over various times, including non-decomposable points/moments and decomposable intervals. Two major problems that beset most existing interval-based theories of action and change, i.e., the so-called dividing-instant problem and the intermingling problem, are absent from this new formalism. The dividing-instant problem is overcome by excluding the concept of ending points of intervals, and the intermingling problem is bypassed by characterising the fundamental time structure as a well-ordered discrete set of non-decomposable times (points and moments), from which decomposable intervals are constructed. A comprehensive characterisation of the relationship between the negation of fluents and the negation of the involved sentences is formally provided. The formalism provides a flexible expression of temporal relationships between effects and their causal events, including delayed effects of events, which remain a problematic question in most existing theories of action and change.
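The construction of decomposable intervals from a well-ordered discrete set of non-decomposable times can be sketched as follows (an illustrative reconstruction with our own naming, not the paper's formal notation):

```python
# A minimal sketch: the fundamental time structure is a well-ordered
# discrete sequence of non-decomposable times, indexed 0, 1, 2, ...;
# a decomposable interval is simply a contiguous run of at least two
# such times, so no separate "ending point" ever has to be assigned a
# truth value.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: int  # index of the first non-decomposable time (inclusive)
    end: int    # index of the last non-decomposable time (inclusive)

    def __post_init__(self):
        assert self.end > self.start, "an interval must be decomposable"

    def times(self):
        return range(self.start, self.end + 1)

def holds_over(fluent_at, interval):
    """A fluent holds over an interval iff it holds at every
    non-decomposable time the interval is constructed from."""
    return all(fluent_at(t) for t in interval.times())

# Example: a fluent that becomes true at time 3 and stays true.
light_on = lambda t: t >= 3
print(holds_over(light_on, Interval(3, 7)))  # True
print(holds_over(light_on, Interval(2, 7)))  # False
```

Because an interval is identified with the discrete times it is built from, rather than by a pair of endpoints, the question of which of two adjacent intervals "owns" their dividing instant never arises.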
easily ignored. At the most extreme active end of the spectrum, the user cannot proceed with the primary task until the user has taken an action related to the communication. For example, the Firefox anti-phishing tool prevents Firefox from loading suspected phishing web sites unless a user clicks on a link to override the tool’s recommendation. Other active indicators might play sounds or animations to get a user’s attention, without blocking the primary task. Passive communications, on the other hand, may simply change the color of an icon without doing anything to attract the user’s attention. Secure systems designers should consider which type of communication will be most effective in a particular system, as well as where to place it on the active-passive spectrum. They should consider the severity of the hazards that the system is attempting to avoid, the frequency with which the hazard is encountered, and the extent to which appropriate user action is necessary to avoid the hazard. For example, frequent, active warnings about relatively low-risk hazards or hazards that ordinary users are unable to take action to avoid may lead users to start ignoring not only these warnings, but also similar warnings about more severe hazards. A more passive notice or status indicator might be a better choice than a warning in such situations, as it will provide information that may be of use to expert users without interrupting ordinary users for whom it is of minimal use. On the other hand, when hazards are severe and user action is critical, active warnings may be most appropriate, perhaps with links to relevant training.
Common Sense”. McCarthy was primarily concerned with cases where an agent has complete knowledge about its domain of discourse. However, since having complete knowledge about a domain is a very strong assumption, researchers also investigated the epistemic case of incomplete knowledge and tried to formalize sensing actions. The first logical formalization which considers incomplete knowledge is due to Moore (1985), who applied the concept of possible worlds from modal logic to action theory. One way to formalize action and change is to use a first-order logical theory, possibly with second-order extensions. Examples are the Situation Calculus (McCarthy, 1963), the Fluent Calculus (Hölldobler and Schneeberger, 1990; Thielscher, 1998), the Event Calculus (Kowalski and Sergot, 1986) and Temporal Action Logic (Doherty, 1994). Another possibility to formalize action and change is the syntactic definition of a high-level action language which is then grounded in a set-theoretic operational semantics. This approach is commonly used, e.g., in action planning. The planning language PDDL (McDermott et al., 1998) is based on STRIPS (Fikes and Nilsson, 1972) and the Action Description Language (ADL) (Pednault, 1994), which are both formalized in an operational semantics.
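The set-theoretic operational semantics underlying STRIPS-style action languages can be illustrated as follows (a toy reconstruction of the general idea, not the formalization given in any of the cited papers): a state is a set of ground atoms, and applying an action removes its delete list and adds its add list.

```python
# Illustrative STRIPS-style operational semantics (toy example, our own
# naming): a state is a set of ground atoms; an action is applicable when
# its preconditions hold in the state, and applying it removes the delete
# list and adds the add list.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset      # atoms that must hold before the action
    add: frozenset      # atoms made true by the action
    delete: frozenset   # atoms made false by the action

def applicable(state, action):
    return action.pre <= state

def apply_action(state, action):
    assert applicable(state, action), f"{action.name} not applicable"
    return (state - action.delete) | action.add

# A toy blocks-world-style action.
pickup = Action("pickup(a)",
                pre=frozenset({"ontable(a)", "handempty"}),
                add=frozenset({"holding(a)"}),
                delete=frozenset({"ontable(a)", "handempty"}))

s0 = frozenset({"ontable(a)", "handempty"})
s1 = apply_action(s0, pickup)
print(sorted(s1))  # ['holding(a)']
```

The planning problem is then a purely set-theoretic search for an action sequence transforming the initial state into one satisfying the goal atoms.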
The second class of problems we study is the selection of optimal strategies for limited memory influence diagrams. Influence diagrams are intuitive and concise representations of decision-making situations [Howard and Matheson, 1984]. A decision-making problem usually involves both controllable and non-controllable quantities, which in the formalism of influence diagrams are represented, respectively, by action and state variables. A strategy is a mapping from state variables into action variables, which completely determines the behavior of an agent (or a team of cooperative agents) acting under the model. The specification of an influence diagram and a suitable strategy uniquely determines a joint probability distribution over the state variables, and a rational agent tries to select a strategy that maximizes expected utility over these probability distributions. In principle, an optimal action at a given decision step might depend on all previous actions and observations, which leads to an exponentially large strategy. To avoid such exponential complexity, Lauritzen and Nilsson proposed using limited memory influence diagrams, in which the information available to each local decision is explicitly determined as part of the input, and hence under the control of the model builder. They showed the existence of a class of limited memory influence diagrams for which the optimal strategy remains the same if we relax the constraint on the size of admissible strategies, that is, allowing each local decision to be made based on the full history of actions and observations does not increase expected utility. They named those diagrams
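How a strategy, together with the chance distribution, determines an expected utility can be sketched with a one-decision toy model (the numbers and names below are invented for illustration, not taken from the cited work):

```python
# Toy single-stage model: one state variable, one decision that observes
# it, and a utility table. A strategy maps the observed state to an
# action; its expected utility is the utility averaged over the state
# distribution.

p_state = {"intact": 0.8, "faulty": 0.2}          # P over the state variable
utility = {("intact", "run"): 10, ("intact", "repair"): -2,
           ("faulty", "run"): -50, ("faulty", "repair"): 5}

def expected_utility(strategy):
    """strategy: a function from the observed state to an action
    (the full-memory case, where the decision sees the state)."""
    return sum(p * utility[(s, strategy(s))] for s, p in p_state.items())

# Repair exactly when the observed state is 'faulty'.
delta = lambda s: "repair" if s == "faulty" else "run"
print(expected_utility(delta))                    # 0.8*10 + 0.2*5 = 9.0

# A memoryless strategy that ignores the observation does worse here.
print(expected_utility(lambda s: "run"))          # 0.8*10 + 0.2*(-50) = -2.0
```

With several decision stages the argument of `strategy` grows to the full history of actions and observations, which is exactly the exponential blow-up that limiting memory is designed to avoid.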
Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems, we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.
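The flavour of recognizing and normalizing textual quantities can be conveyed with a deliberately naive sketch (hand-written heuristic patterns of our own, far simpler than the trained approach the paper develops):

```python
# A rough illustration of quantity normalization: mapping a short
# quantity phrase to a (value, unit) pair. Real systems need a trained
# segmenter and far richer patterns than these two.

import re

WORD_VALUES = {"one": 1, "two": 2, "three": 3}

def normalize_quantity(text):
    """Return (value, unit) for a simple quantity phrase, or None."""
    m = re.match(r"([\d,]+(?:\.\d+)?)\s+(\w+)", text)
    if m:  # digit form, e.g. "8,000 people"
        return float(m.group(1).replace(",", "")), m.group(2)
    for phrase, value in WORD_VALUES.items():  # word form, e.g. "three eggs"
        if text.startswith(phrase):
            unit = text[len(phrase):].strip() or None
            return float(value), unit
    return None

print(normalize_quantity("8,000 people"))  # (8000.0, 'people')
print(normalize_quantity("three eggs"))    # (3.0, 'eggs')
```

Once quantities are in this normalized form, entailment questions reduce to comparisons over values and unit compatibility checks.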
The segmentation module made mistakes in detecting exact boundaries for uncommon phrases, e.g., “hundreds of thousands of people” and “mid-1970’s”. Detection of missing units is problematic in cases like “Three eggs are better than two”. The SRL returns “Three eggs” as a candidate unit, which needs to be pruned appropriately to obtain the correct unit. The primary limitation of the reasoning system in both tasks is the lack of an extensive knowledge base. Wordnet-based synsets prove to be insufficient to infer whether units are compatible. Also, there are certain reasoning patterns and various implicit relations between quantities which are not currently handled by the system, for example, inferring from the sentence “Militants in Rwanda killed an [average of 8,000 people per day] for [100 days]” that “around 800,000 people were killed”. The implication of ratios can also be involved: the sentence “[One out of 100 participating students] will get the award” implies that there were “100 participating students”, whereas “[9 out of 10 dentists] recommend brushing” does not imply there were 10 dentists.
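The rate-times-duration pattern mentioned above is straightforward once the quantities are normalized; the sketch below hand-codes this single inference rule (our own illustration, the system itself would need a library of such rules):

```python
# One hand-coded inference rule: a rate "R <unit> per <time-unit>"
# sustained over a duration in the same time unit implies a total of
# R * duration <unit>.

def infer_total(rate, per, duration, duration_unit):
    """E.g. 8000 people per 'day' over 100 'days' -> 800000 people."""
    # naive singular/plural matching; a real system needs morphology
    assert per == duration_unit.rstrip("s"), "incompatible time units"
    return rate * duration

print(infer_total(8000, "day", 100, "days"))  # 800000
```

The ratio examples are harder precisely because no such local arithmetic rule applies: whether “9 out of 10” licenses an existence claim about the 10 depends on context, not on the numbers.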
2.2.3. Reasoning about Intrusion Evidence
The Bayesian networks constructed in this way offer an excellent opportunity to reason about the uncertain intrusion evidence, particularly the IDS alerts. We call those attributes with a confidence value of 1 the verified attributes. The reports of such verified attributes are observations of facts. When new verified attributes are reported by system monitoring/scanning tools, we can use these observations to re-compute the confidence values in the related previous objects in the network with Bayesian inference. For each node in the Bayesian network, its final probability value is the combined result of all the evidence and knowledge. Take the Bayesian network shown in Figure 2 as an example. We may be uncertain about an IDS alert reporting a buffer overflow attack against sshd, since the IDS has reported the same type of alerts incorrectly in the past. However, if by scanning the system we find that sshd is not running properly after the IDS reports this alert, we can then update the confidence in ¬sshd running to be 1. Thus, we are more certain about the alert, which caused the attribute alteration. Though human users could do the same reasoning, placing this evidence into Bayesian networks offers additional benefits, since such a reasoning process can then be performed automatically and systematically. Moreover, such reasoning could become too difficult for human users when dealing with very complicated scenarios.
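The sshd example can be worked through with a single application of Bayes' rule (the probabilities below are invented for illustration, not taken from the paper):

```python
# How much more should we trust the buffer-overflow alert once a scan
# verifies that sshd is no longer running? A one-step Bayes' rule
# computation with illustrative numbers.

def posterior(prior_attack, p_down_given_attack, p_down_given_no_attack):
    """P(attack | sshd not running), by Bayes' rule."""
    p_down = (p_down_given_attack * prior_attack
              + p_down_given_no_attack * (1 - prior_attack))
    return p_down_given_attack * prior_attack / p_down

# The IDS has cried wolf before, so prior confidence in the alert is low.
prior = 0.3
# A successful sshd overflow usually kills the daemon; spontaneous
# failures are rare.
post = posterior(prior, p_down_given_attack=0.9, p_down_given_no_attack=0.05)
print(round(post, 3))  # 0.885
```

A full Bayesian network performs exactly this kind of update, but propagated automatically through every related node rather than for one attribute at a time.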
When agents need to be involved in a dependency, they should trust the dependee. This trust reflects the depender’s estimation of the willingness of the dependee to personally fulfil the dependum. The previous section has presented the different elements that could help to determine this value. At the end of the estimation of a dependee’s willingness about a service, the depender may decide that the willingness value is high enough for the dependency to remain unchanged, or that the value is low. In the second case, the depender should try to improve the willingness value. One solution consists in positively influencing, through specific measures, the determinants of this value: criticality, pressure or reciprocity. To sustain the presentation of such measures and to demonstrate the applicability of our work, we present three scenarios, based on the running example case study, which emphasize different dependency settings with variations on the dependees’ side.
into a single proposition or program δ. The fragment of the calculus restricted to programs is the Lambek calculus, which can be modelled by a quantale Q. The interaction between programs and states is modelled by the action of Q on a Q-right module. This fragment of our structure has been used to study concurrency in computer science [1, 21] and the dynamics and interaction of physical systems. The crucial additional epistemic features are captured by (lax) endomorphisms of the above structure, one endomorphism for each agent.
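For readers unfamiliar with the algebra, the standard axioms of a quantale acting on a right module can be recalled as follows (a generic textbook formulation, not this paper's specific presentation):

```latex
% A quantale Q is a complete lattice equipped with an associative
% multiplication that distributes over arbitrary joins. A right Q-module
% M is a complete lattice with an action  \cdot : M \times Q \to M
% satisfying, for all m, m_i \in M and q, q_1, q_2, q_i \in Q:
m \cdot (q_1 \, q_2) = (m \cdot q_1) \cdot q_2,
\qquad
m \cdot \Big(\bigvee_i q_i\Big) = \bigvee_i \,(m \cdot q_i),
\qquad
\Big(\bigvee_i m_i\Big) \cdot q = \bigvee_i \,(m_i \cdot q).
```

The first equation is what lets sequential composition of programs act coherently on states; the join-preservation equations encode non-deterministic choice.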
Abstract
In the most popular logics combining knowledge and awareness, it is not possible to express statements about knowledge of unawareness such as “Ann knows that Bill is aware of something Ann is not aware of” – without using a stronger statement such as “Ann knows that Bill is aware of p and Ann is not aware of p”, for some particular p. In Halpern and Rêgo (2006, 2009b) (revisited in Halpern and Rêgo (2009a, 2013)), Halpern and Rêgo introduced a logic in which such statements about knowledge of unawareness can be expressed. The logic extends the traditional framework with quantification over formulae, and is thus very expressive. As a consequence, it is not decidable. In this paper we introduce a decidable logic which can be used to reason about certain types of unawareness. Our logic extends the traditional framework with an operator expressing full awareness, i.e., the fact that an agent is aware of everything, and another operator expressing relative awareness, i.e., the fact that one agent is aware of everything another agent is aware of. The logic is less expressive than Halpern and Rêgo’s logic. It is, however, expressive enough to express all of the motivating examples in Halpern and Rêgo (2006, 2009b). In addition to proving that the logic is decidable and that its satisfiability problem is PSPACE-complete, we present an axiomatisation which we show is sound and complete.
callMethod
The callMethod method (listing 9) is used when an active object method is called. It forwards the method call to the correct MPI node, followed by the start of the receiving process for the result value. These two steps are also shown in the future process: first a send action, followed by an asynchronous receive (irecv) action. The precise call identifier r is not relevant, any unique identifier will do, hence the use of the sum. This method also requires the validCall predicate discussed in section 5.4, indicating, among other things, that the pre-condition is met.
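The send-then-asynchronous-receive pattern can be sketched in a self-contained way (plain queues stand in for the MPI transport here, and names such as `call_method` and `Future` are our own illustrative choices, not identifiers from listing 9):

```python
# Sketch of the callMethod pattern: forward the call to the target node
# (the 'send' step), then immediately post an asynchronous receive for
# the result (the 'irecv' step), handing the caller a future.

import itertools
import queue
import threading

_call_ids = itertools.count()   # any source of unique identifiers will do

class Future:
    def __init__(self, inbox):
        self._inbox = inbox
    def get(self):              # blocks until the result message arrives
        return self._inbox.get()

def call_method(node_inbox, method, args):
    r = next(_call_ids)                         # the call identifier r
    reply = queue.Queue()
    node_inbox.put((r, method, args, reply))    # send to the target node
    return Future(reply)                        # asynchronous receive

# A trivial 'node' servicing calls on another thread.
inbox = queue.Queue()
def node():
    while True:
        r, method, args, reply = inbox.get()
        reply.put((r, method(*args)))
threading.Thread(target=node, daemon=True).start()

fut = call_method(inbox, lambda x: x * 2, (21,))
print(fut.get())  # (0, 42)
```

The caller never blocks on the send itself; only forcing the future (`get`) synchronizes with the remote result, mirroring the send/irecv pair in the future process.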
Russell’s contradiction prevented Frege from completing his program of showing how all of arithmetic and analysis is logical in nature. The foundations of analysis were discussed in Part III, “The Real Numbers”, of Grundgesetze der Arithmetik (Basic Laws of Arithmetic), volume II, published in 1903. Russell’s Antinomy overshadows this second volume, and prevented the formal continuation, but before Frege introduced his own theory of real numbers he criticised in prose other extant theories, as he had done other theories of natural numbers in Grundlagen. The earlier book’s wit and light touch are here replaced by protracted, sarcastic and tedious schoolmasterly lecturing of others, most particularly Thomae. Cutting away the redundant verbiage, Frege’s criticisms come down to three further points. Firstly, the formalists are excessively cavalier about the distinction between signs and what they signify, ascribing properties of the one to the other and vice versa. Since they identify numbers with signs, this is to be expected. Secondly, for this reason, they are unable to distinguish between statements made within a formal context and statements made about a formal context. For example, when we say that a king and two knights cannot force checkmate, we have stated a well-known theorem of chess. But we have made a statement about chess, not a statement within chess. Chess positions and chess pieces do not have meanings: they are what they are, but do not state or say anything. (Frege 1903, Section 91.) By contrast, a mathematical statement has a meaning and states something. To suppose that a theory about the signs of arithmetic is a theory about numbers is to confuse statements within the language of arithmetic,
Notice how the reasoning required here contrasts with that involved in the analysis of section 2.3. Here the expert believes there is probability mass at the origin, and so is expressing a belief about perfection. He might plausibly reason something like this: “This system has very simple functionality, it has been designed very simply, and I have evidence of certain kinds of formal verification of its correctness, so I think there is a chance that they got it completely right.” This is very different from reasoning that “I know this system is too complex to be correct, so I know it will eventually fail in operation, but I am reasonably confident that the pfd is extremely small.” The two statements are very different in kind, and support for them comes from very different evidence. We think that real experts would be more comfortable with the former than with the latter.
As will be clear from the informal discussion above, human judgment inevitably forms an important element of any assessment of confidence (or its complement, doubt) when this arises from epistemic uncertainty. If, as we believe, confidence should be expressed probabilistically, the appropriate calculus of probability is a subjective Bayesian one. The first problem in any Bayesian analysis is to obtain the prior beliefs of the expert. Consider an example in which a pfd is the subject of the dependability claim. This pfd can be regarded as an unknown number that characterizes the aleatory uncertainty discussed above. In principle, we could estimate this number to any degree of accuracy if we were in the fortunate position of being able to generate unlimited numbers of statistically representative test cases, and we had a perfect oracle to decide whether each test case had been executed correctly. In practice, of course, we are never in this position: instead, there is uncertainty about the value of the pfd. This is the epistemic uncertainty discussed above, arising from imperfect knowledge, etc. This uncertainty about the true value of this pfd requires it to be treated as a random variable, P, so that confidence is expressed as a probability. Thus the expert may believe a priori that
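A toy numerical version of such a Bayesian treatment (all numbers invented for illustration, not an actual expert elicitation) represents the epistemic uncertainty about the pfd as a discrete prior, here including a point mass at zero for the belief in perfection discussed earlier; observing n statistically representative, failure-free demands multiplies the weight on each candidate pfd p by the likelihood (1 - p)^n:

```python
# Toy Bayesian update of epistemic uncertainty about a pfd. The prior
# places 10% mass on perfection (pfd = 0) and spreads the rest over a
# grid of small positive values; all numbers are illustrative.

pfds  = [0.0, 1e-5, 1e-4, 1e-3, 1e-2]
prior = [0.10, 0.30, 0.30, 0.20, 0.10]

def update(prior, n):
    """Posterior after n failure-free demands: likelihood of each
    candidate pfd p is (1 - p)**n."""
    unnorm = [w * (1 - p) ** n for w, p in zip(prior, pfds)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

post = update(prior, n=1000)
print(f"P(perfect | 1000 failure-free demands) = {post[0]:.3f}")
print(f"P(pfd <= 1e-4 | same evidence)         = {sum(post[:3]):.3f}")
```

Failure-free operation shifts mass towards the small-pfd end of the grid, including the atom at zero, which is exactly why the two kinds of prior belief contrasted above lead to different posterior claims.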
processes must first, of course, have the wherewithal to undergo the first-order processes in question. Then to this must be added whatever is necessary for the creature to represent, and come to believe, that it is undergoing those events. Put differently, a creature that is capable of thinking about its own thought that P must be capable of representing thoughts, in addition to representing whatever is represented by P. The second point is that in the decades that have elapsed since Premack and Woodruff (1978) first raised the question whether chimpanzees have a ‘theory of mind’, a general (but admittedly not universal) consensus has emerged that metacognitive processes concerning the thoughts, goals, and likely behavior of others are cognitively extremely demanding (Wellman, 1990; Baron-Cohen, 1995; Gopnik and Meltzoff, 1997; Nichols and Stich, 2003), and some maintain that such metacognition may even be confined to human beings (Povinelli, 2000). For what it requires is a theory (either explicitly formulated, or implicit in the rules and inferential procedures of a domain-specific mental faculty) of the nature, genesis, and characteristic modes of causal interaction of the various different kinds of mental state. There is no reason at all to think that this theory should be easy to come by, evolutionarily speaking. And then on the assumption that the same or a similar theory is implicated in meta-cognition about one’s own mental states, we surely shouldn’t expect meta-cognitive processes to be very widely distributed in the animal kingdom. Nor should we expect to find meta-cognition in animals that are incapable of mind-reading.