formed concept sets. However, those algorithms have no guarantee of completeness for DL fragments that allow full negation and disjunction [12, 14]. Nevertheless, structural algorithms still serve well as optimizations in certain settings.
Structural DL reasoning methodology deals with deduction on KBs with well-formed syntactic structures. Sophisticated normalization pre-processing was normally required in structural algorithms [12, 67]. Structural methods (in PTime) were employed by primitive DL reasoners that did not need full negation and disjunction expressivity. Checking concept subsumptions for such simple DL languages may use structural algorithms directly, where whether a concept is subsumed by another is inferred by comparing their syntactic definitions. When processing DLs with full negation and disjunction expressivity, the structural methods lose completeness [12, 14]. Nevertheless, as an optimization technique, especially in those DL reasoning methods that utilize told information, such as top-search and bottom-search classification, structural algorithms may be used to generate some told information efficiently. In any case, structural algorithms are not complete for reasoning in more expressive DL fragments. More powerful reasoning capability comes from logical approaches; among them, tableau-based DL reasoning algorithms have been shown to be both sound and complete.
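As an illustration of the idea, a structural subsumption check for a toy language with only atomic concepts, conjunction and existential restrictions can be sketched as follows; the `Concept` representation and function names are our own, not taken from any particular reasoner:

```python
# Minimal structural-subsumption sketch for a toy DL with atomic
# concepts, conjunction (a set of atoms), and existential restrictions.
# Illustrative only: real structural algorithms first normalize the KB.
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    atoms: frozenset          # atomic concept names in the conjunction
    exists: tuple = ()        # pairs (role, Concept) for ∃role.Concept

def subsumed_by(c: Concept, d: Concept) -> bool:
    """Check C ⊑ D by comparing the syntactic definitions."""
    if not d.atoms <= c.atoms:          # every atom of D must occur in C
        return False
    for role, filler in d.exists:       # every ∃r.D' in D needs a witness in C
        if not any(r == role and subsumed_by(f, filler)
                   for r, f in c.exists):
            return False
    return True
```

For example, Human ⊓ Male ⊓ ∃hasChild.Human is found to be subsumed by Human ⊓ ∃hasChild.⊤ purely by comparing the definitions, with no logical search involved.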
EL, DLP and DL-Lite, which also correspond to the language fragments OWL EL, OWL RL and OWL QL of the Web Ontology Language.
The EL family of description logics is characterised by allowing unlimited use of existential quantifiers and concept intersection. The original description logic EL allows only those features and ⊤, but no unions, complements or universal quantifiers, and no RBox axioms. Further extensions of this language are known as EL+ and EL++. The largest such extension allows the constructors ⊓, ⊤, ⊥, ∃, Self, nominals and the universal role, and it supports all types of axioms other than role symmetry, asymmetry and irreflexivity. Interestingly, all standard reasoning tasks for this DL can still be solved in worst-case polynomial time. One can even drop the structural restriction of regularity that is important for SROIQ. EL has been used to model large but lightweight ontologies that consist mainly of terminological data, in particular in the life sciences. A number of reasoners are specifically optimised for handling EL-type ontologies, the most recent of which is the ELK reasoner for OWL EL.
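The polynomial-time behaviour of EL can be made concrete with a sketch of the well-known completion-rule approach to EL classification. The encoding of the normalized axiom forms and all names below are illustrative choices of ours; ⊤/⊥ handling and the normalization phase are omitted for brevity:

```python
# Sketch of the EL completion (saturation) algorithm over normalized axioms.
# Axiom encodings (illustrative):
#   ax_sub:    (B, C)         for  B ⊑ C
#   ax_conj:   ((B1, B2), C)  for  B1 ⊓ B2 ⊑ C
#   ax_ex_rhs: (B, (r, C))    for  B ⊑ ∃r.C
#   ax_ex_lhs: ((r, B), C)    for  ∃r.B ⊑ C

def el_classify(concepts, ax_sub, ax_conj, ax_ex_rhs, ax_ex_lhs):
    S = {c: {c} for c in concepts}   # S[A] = known subsumers of A
    R = set()                        # (r, A, B): derived A ⊑ ∃r.B edges
    changed = True
    while changed:                   # saturate until fixpoint
        changed = False
        for A in concepts:
            for B, C in ax_sub:
                if B in S[A] and C not in S[A]:
                    S[A].add(C); changed = True
            for (B1, B2), C in ax_conj:
                if B1 in S[A] and B2 in S[A] and C not in S[A]:
                    S[A].add(C); changed = True
            for B, (r, C) in ax_ex_rhs:
                if B in S[A] and (r, A, C) not in R:
                    R.add((r, A, C)); changed = True
        for r, A, B in list(R):
            for (r2, C), D in ax_ex_lhs:
                if r2 == r and C in S[B] and D not in S[A]:
                    S[A].add(D); changed = True
    return S
```

For instance, from A ⊑ ∃r.B and ∃r.B ⊑ C the saturation derives C ∈ S[A], i.e. A ⊑ C. Since each rule only ever adds pairs drawn from a polynomial-size universe, the number of rule applications is polynomial in the size of the input.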
in both a satisfiable and an unsatisfiable variant, which results in 18 files. The letter ending a file name indicates the variant of the file contents: n stands for satisfiable concepts while p denotes unsatisfiable ones. Every file contains 21 numbered concept examples of increasing complexity, and the computational time is expected to grow exponentially with each subsequent concept example. The testing method consists in finding the number of the most complex example that can be evaluated in no more than 100 seconds. We use this method, running the reasoning system on one machine, in order to compare the A, T and DB strategies in terms of absolute values. All problems were initially transformed to NNF. The results of the tests are collected in Table 2; the number 0 means that no example of the concept can be evaluated within the given time limit.
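The testing method just described can be sketched as a small harness; `evaluate` here is a hypothetical stand-in for the actual reasoner call:

```python
# Sketch of the benchmark procedure: run the numbered examples in order
# of increasing complexity and report the number of the hardest one that
# still finishes within the time limit (0 if even the first one fails).
import time

def hardest_within_limit(examples, evaluate, limit_seconds=100.0):
    hardest = 0
    for number, example in enumerate(examples, start=1):
        start = time.perf_counter()
        evaluate(example)                 # the reasoner call (placeholder)
        if time.perf_counter() - start > limit_seconds:
            break                         # too slow: stop at this number
        hardest = number
    return hardest
```

Note that this measures wall-clock time and only stops after an example has completed; a production harness would enforce the limit with a hard timeout instead.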
the associated challenges. We pursue, instead, an inconsistency-tolerant approach, where the aim is to draw meaningful conclusions from inconsistent knowledge.
Existing work in this area includes repair semantics [LLR+10] and [EL16], in which inconsistency is tolerated within the logic DL-Lite_A [CDL+07]. However, these approaches have not been extended to more expressive logics. A number of paraconsistent logics (for example [HW09], [MMH13] and [ZXLV14]) have been developed for more expressive logics, but the resulting logics are very weak. This situation was improved by Quasi-Classical Paraconsistent logics (for example [ZXLV14]). However, in the logics developed thus far, there is no control over the arbitration of inconsistencies, that is, over deciding which axioms to favour when conflict occurs. This was addressed by the incorporation of Possibilistic logic (for example [ZQS13] and [QZ13]), which allows axioms to be labelled with a measure of necessity (or possibility) of the truth of the axiom. The arbitration mechanism relies on establishing a threshold, capturing the overall confidence level in a knowledge base, that determines which axioms may be defeated. This leads to rather coarse control over the arbitration. The absence of a solution that permits inconsistency-tolerant reasoning in expressive logics, and that also offers precise control over the arbitration of inconsistency, has led to the work in this thesis.
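The coarseness of threshold-based arbitration can be illustrated with a minimal sketch (not any specific system's API): every axiom whose necessity degree strictly exceeds the inconsistency threshold survives, and everything at or below it is defeated wholesale:

```python
# Illustrative sketch of possibilistic-style arbitration. An axiom is
# kept iff its necessity degree strictly exceeds the inconsistency
# degree of the knowledge base; all axioms at or below the threshold
# are discarded together, regardless of whether they actually
# participate in a conflict -- hence the coarseness.

def arbitrate(weighted_axioms, inconsistency_degree):
    """weighted_axioms: list of (axiom, necessity) pairs."""
    return [axiom for axiom, necessity in weighted_axioms
            if necessity > inconsistency_degree]
```

For example, with inconsistency degree 0.4, an innocuous axiom weighted 0.4 is defeated along with the conflicting one, which is precisely the lack of precision the thesis targets.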
For fuzzy rule induction the FRL algorithm [1, 12] was used as a basis. The algorithm constructs fuzzy classification rules and can use nominal as well as numerical attributes. For the latter, it automatically extracts fuzzy intervals for selected attributes. One of the convenient features of this algorithm is that it only uses a subset of the available attributes for each rule, resulting in so-called free fuzzy rules. The KNIME implementation follows the published algorithm closely, allowing various algorithmic options to be set as well as different fuzzy norms. After execution, the output is a model description in a KNIME internal format and a table holding the rules as fuzzy interval constraints on each attribute, plus some additional statistics (number of covered patterns, spread, volume, etc.). These KNIME repre-
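As a rough illustration of what a fuzzy interval constraint looks like, the sketch below uses a trapezoidal membership function and applies a t-norm over only the constrained attributes, mirroring the idea of free fuzzy rules; the representation is our own, not KNIME's internal format:

```python
# Sketch: a fuzzy interval as a trapezoid <a, b, c, d> with full
# membership on [b, c] and linear slopes on the edges (assumes
# a < b <= c < d). A free fuzzy rule constrains only a subset of
# the attributes; its firing degree is a t-norm over those.

def trapezoid(a, b, c, d):
    def membership(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        return (d - x) / (d - c)       # falling edge
    return membership

def rule_degree(constraints, pattern, tnorm=min):
    """constraints: dict attribute -> membership function."""
    return tnorm(mu(pattern[attr]) for attr, mu in constraints.items())
```

Attributes absent from `constraints` simply do not influence the degree, which is exactly what makes the rule "free".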
The layered architecture (cf. Figure 3.1 on page 45), which was envisioned for the Semantic Web by its inventor Tim Berners-Lee (Berners-Lee, 2000b), will guide us through the chapter. Section 3.1 sketches the fundamental ideas behind the Semantic Web and introduces the layered architecture. Section 3.2 on page 46 recapitulates the syntax layer, briefly introduces the Extensible Markup Language (XML) and the accompanying languages for defining schemas, and discusses the inadequacy of XML as a semantic foundation of the Semantic Web. Section 3.3 on page 49 presents the data layer and describes the Resource Description Framework (RDF), which is intended to be the unifying data model for all Semantic Web data purposes. We discuss the proposed semantics for RDF data and the associated vocabulary definition language RDF Schema (RDFS) and present an (incomplete) axiomatic formalization of RDF in Datalog. Section 3.4 on page 60 presents the ontology vocabulary layer and introduces the Web Ontology Language (OWL) and its subsets OWL Lite and OWL DL. We relate the language to the Description Logics SHIF(D) and SHOIN(D) presented in Section 2.4 on page 31 and discuss limitations of the language. Section 3.5 on page 70 presents current proposals for rules on the Semantic Web and discusses their relation to the other layers presented before.
The standard approach to information flow in a multi-agent system has been presented in , but it does not present a formal description of epistemic programs and their updates. The first attempts to formalize such programs and updates were made by Plaza , Gerbrandy and Groeneveld , and Gerbrandy [10, 11]. However, they only studied a restricted class of epistemic programs. A general notion of epistemic programs and updates for DEL was introduced in . In our papers [2, 3], we introduced an algebraic semantics based on the notion of epistemic systems and a sequent calculus for a version of DEL, but the completeness of the sequent calculus was still an open problem. In this paper, we summarize the material in [2, 3] and present an updated version of the sequent calculus for which we have proved the completeness theorem with respect to the algebraic semantics.
Obviously, although we do accept much of what we are told, we should not accept all of it. As Woods points out, in a situation when we are told something, there may be a trigger which indicates that before acceptance there should be a due diligence search. Woods sees such situations as rare. He does enumerate some triggers for the due diligence exercise. That some interlocutor’s word is an evaluation, not a description, ordinarily triggers that it should not merely be accepted, but accepted only when properly defended. A perceptual report which involves misperception and is immediately corrected, an interpretation, e.g., “Mother Theresa had a generous disposition”, or recognition of some unreliability about the subject matter, are all triggers that one should not simply accept what one has been told. However, Woods’ greater concern in this discussion is not with these triggers but with the fact that in mechanisms that generate beliefs which may become premises of our reasoning, there may be many conditions which may produce error but which are not accompanied by triggers. Woods explains that when one acquires a belief, whether through perception, say-so, or inference, one is experiencing belief change. With perception or say-so, there may be a long chain leading to the eventual production of one’s belief. By contrast, an inferential chain may be short. Woods proposes talking about the number of steps leading to a belief in a particular case as “the surface of a medium of belief-change” (330, italics in original). “The larger the surface size of a medium of belief change, the greater the likelihood of error” (331). Woods proposes this hypothesis as intuitive and worthy of empirical test.
We have proposed tightly coupled rough description logic programs (rough dl-programs) under the answer set semantics, which generalize tightly coupled description logic programs by rough set theory in both the logic program and the description logic component. In this paper, we first provide the syntax and semantics of rough dl-programs; then we present some reasoning problems for rough dl-programs; finally we show that the answer set of a rough dl-program is closely related to the minimal model, and that rough dl-programs faithfully extend both rough disjunctive logic programs and rough description logics. In a word, rough dl-programs can represent, and reason about, a great many real-world problems.
The interface conforms to a standard “tell and ask” format: facts are asserted to the knowledge base (KB) and queries are answered without the user specifying when or how reasoning should be performed. In order to improve efficiency, and to support the (future) possibility of multi-user access to a KB, the interface has a simple transaction control mechanism. This mechanism could also be augmented with partial (or complete) roll-back: the ability to undo the last transaction (or an arbitrary number of transactions).
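A minimal sketch of such a tell-and-ask interface with transaction control might look as follows; the names and the set-based KB are illustrative assumptions, and a real system would interleave reasoning transparently behind `ask`:

```python
# Sketch of a "tell and ask" KB interface with transaction control and
# the roll-back extension described above. `ask` is a plain membership
# test here; in a real reasoner it would trigger inference as needed.

class KnowledgeBase:
    def __init__(self):
        self._facts = set()
        self._pending = []
        self._log = []            # committed transactions, newest last

    def begin(self):
        self._pending = []

    def tell(self, fact):
        self._facts.add(fact)
        self._pending.append(fact)

    def commit(self):
        self._log.append(self._pending)
        self._pending = []

    def ask(self, fact):
        return fact in self._facts

    def roll_back(self, n=1):
        """Partial roll-back: undo the last n committed transactions.
        Complete roll-back is n = len of the transaction log."""
        for _ in range(n):
            for fact in self._log.pop():
                self._facts.discard(fact)
```

Keeping the per-transaction fact lists in a log is what makes undoing an arbitrary number of transactions cheap, at the cost of storing the assertion history.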
In order to illustrate the privacy scenario motivating this problem, assume that you are asked to perform a survey regarding the satisfaction of employees with the management of a company. Since the boss of the company is known not to respond well to criticism, the employees insist that you perform the survey such that the identity of persons voicing criticism cannot be deduced by the boss. Thus, you let the employees use a pseudonym when answering the survey. However, the survey does ask for some personal data from the participants, and you are concerned that the boss can use the provided answers, in combination with the employee database and general knowledge about how things work in the company, to deduce that a certain pseudonym corresponds to a specific employee. For example, assume that in the survey the anonymous individual x states that she is female and has expertise in logic and privacy. The boss knows that all employees with expertise in logic belong to the formal verification task force and all employees with expertise in privacy belong to the security task force. In addition, the employee database contains the information that the members of the first task force are John, Linda, Paul, Pattie and of the second Jim, John, Linda, Pamela. Since Linda is the only female employee belonging to both task forces, the boss can deduce that Linda hides behind the pseudonym x. The question is now whether you can use an automated system to check whether such a breach of privacy can occur in your survey.
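The boss's deduction in this example reduces to a simple set computation. The task-force memberships are those given above; which employees besides Linda are female is not stated in the scenario, so the `female` set below is an assumption for illustration:

```python
# The boss's deduction as set intersection. Task-force memberships come
# from the employee database in the scenario; the female set is an
# illustrative assumption (the scenario only guarantees that Linda is
# the sole woman in both task forces).

formal_verification = {"John", "Linda", "Paul", "Pattie"}   # logic expertise
security = {"Jim", "John", "Linda", "Pamela"}               # privacy expertise
female = {"Linda", "Pattie", "Pamela"}                      # assumed

# x is female with expertise in logic and privacy:
candidates = formal_verification & security & female
```

Because `candidates` is a singleton, the pseudonym is broken: x must be Linda. An automated privacy check would search for exactly such singleton answer sets.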
The Web Ontology Language (OWL), as part of the Semantic Web , is a widely used knowledge representation language for describing knowledge in application domains. A major topic of knowledge representation focuses on representing information in a form that computer systems can utilize to solve complex problems. The selected knowledge representation formalism is description logics (DLs) , a family of formal knowledge representation languages. It is used to describe and reason about relevant concepts (terminological knowledge - TBox) and individuals (assertional knowledge - ABox) of a particular application domain. The widely used Web Ontology Language (OWL) is based on DLs. One of the reasoning components in DL systems is an engine known as a classifier, which infers entailed subsumption relations from knowledge bases. Research for most DL reasoners has focused on optimizing classification using a single processing core [7, 24, 16]. Despite the ubiquitous availability of multi-processor and multi-core processing units, not many OWL reasoners can perform inference services concurrently or in parallel.
“Toleration of inconsistency can only be done by fuzzy systems. We need a semantic web which will provide guarantees and about which one can reason with logic” [1], such are the words of Tim Berners-Lee, founder and Director of the World Wide Web Consortium. With these words he points out that all this metadata is created by humans, and so it may contain many uncertainties and inaccuracies, which will affect the construction of ontologies. Because fuzzy logic was conceived to find solutions to the problems of inaccuracies and uncertainties in a flexible way, researchers have had the idea of integrating this logic into the field of the Semantic Web in
In this paper we argue that the framework of parameterized complexity has a lot to offer for the complexity analysis of description logic reasoning problems, when one takes a progressive and forward-looking view of parameterized complexity tools. We substantiate our argument by means of three case studies. The first case study is about the problem of concept satisfiability for the logic ALC with respect to nearly acyclic TBoxes. The second case study concerns concept satisfiability for ALC concepts parameterized by the number of occurrences of union operators and the number of occurrences of full existential quantification. The third case study offers a critical look at data complexity results from a parameterized complexity point of view. These three case studies are representative of the wide range of uses for parameterized complexity methods for description logic problems.
4 CRIL, Université d'Artois & CNRS, France
Abstract. In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.
Getting more help: If you are confused about the rules of logic, or are having trouble applying them to your own work, please make an appointment to see Professor Markert or Professor McKinney in the Writing and Learning Resources Center. The Honors Writing Scholars may also be able to assist you, and the UNC Writing Center on main campus has a website with additional examples and explanations of logical fallacies. Go to:
for an inductive predicate P are merely sequent versions of the productions defining P , while the left-introduction rule for P embodies the natural principle of rule induction over the definition of P . However, there is also a natural notion of cyclic proof for the logic, for which we introduce a second proof system in Section 4. In this system, the induction rules of the first system are replaced by simple case-split rules. Pre-proofs in the system are “unfinished” derivation trees in which every node to which no proof rule has been applied is identified with a syntactically identical interior node; pre-proofs can thus straightforwardly be understood as cyclic graphs. In general, pre-proofs are not sound, so to ensure soundness we impose a global trace condition stipulating, essentially, that for each infinite path in the pre-proof, some inductive definition is unfolded infinitely often along the path. By appealing to the well-foundedness of our inductive definitions, all such paths can be disregarded, whereby the remaining portion of the proof is finite and hence sound for standard reasons. Finally, in Section 5, we identify the main directions for future work.
Other proposals to include default-style rules into description logics include the work of Baader and Hollunder (1995) and Padgham and Zhang (1993).
Closely related to our work is that of Giordano et al. (2009b), who use preferential orderings on ∆^I to define a typicality operator T for ALC such that the expression T(C) ⊑ D corresponds to our C ⊑∼ D. They provide a version of a representation result for preferential orderings in terms of properties of selection functions (functions on the power set of the domain of interpretations), and present a tableaux calculus for computing preferential entailment that relies on KLM-style rules. Recently (Giordano et al., 2013b), they extended this work by considering modular orderings on ∆^I (i.e., ranked interpretations) and then augmenting the inferential power of their system with a version of a minimal-model semantics, in which some ranked interpretations are preferred over others. This is similar in intuition to minimal rank entailment, but their approach also has a circumscriptive flavour to it (see below) since it relies on the specification of a set of concepts for which atypical instances must be minimized. As mentioned in Section 5, minimal rank entailment for ALC is based on the definition of minimal rank entailment for the propositional case, first presented by Giordano et al. (2012). In two recent papers (Giordano, Gliozzi, Olivetti, & Pozzato, 2013c; Giordano et al., 2013b) they extended this to the case of ALC.