Knowledge based Grid Using Semantic Web for the University System

Grid computing was introduced in the mid-1990s to denote a distributed computing environment for advanced science and technology. The concept of the Computational Grid was inspired by the electric power grid, in which a user can obtain electric power from any power station on the grid, irrespective of its location, easily and reliably. Just as we plug into the power grid to obtain additional electricity on demand, we can plug into a Computational Grid to access additional computing power on demand. Internet users need large amounts of knowledge for their future needs, but the problem is the irrelevancy of retrieved data. The present web and its search engines retrieve data from the distributed environment purely on the basis of syntax, which does not fulfil…

Ontology Based Knowledge Grid in Semantic Web to Discover Knowledge in Distributed Environment

A knowledge-based grid is a Semantic Web based environment that enables machines and humans to communicate, coordinate, publish, share and manage different knowledge resources in a distributed environment, which enhances scalability and stability. It supports on-demand, annotative and robust services that are useful for innovation and collaborative work in a distributed heterogeneous environment. The Knowledge Grid architecture is designed on top of grid tools and services, i.e. it uses basic grid services to build specific knowledge extraction services [11]. Such services can be designed in different ways using the various available grid tools and services; a grid architecture built on these tools is well suited to knowledge discovery, communication, sharing and integration. The Knowledge Grid is an environment for providing grid-based knowledge discovery services. These services allow professionals and scientists to create and manage complex knowledge discovery applications, composed as workflows that integrate data sets, mining tools, and computing and storage resources provided as distributed services on a grid. Its facilities allow users to compose, store, share and execute these knowledge discovery workflows, as well as publish them as new components and services on the grid. The Knowledge Grid can be used to perform data mining on very large data sets available over grids, to make scientific discoveries, improve industrial processes and organisation models, and uncover business-valuable information. It provides a higher level of abstraction and a set of services based on the use of grid resources to support all phases of the knowledge discovery process. Therefore, it allows end users to concentrate on the knowledge discovery process they must develop without worrying about grid infrastructure details.
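
To make the workflow notion concrete, the sketch below models a knowledge discovery workflow as an ordered list of steps, each bound to a distributed service, and executes them as a pipeline. The service URLs, step names and execution logic are illustrative assumptions, not part of any actual Knowledge Grid implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class WorkflowStep:
    """One step of a knowledge discovery workflow: an operation bound to a grid service."""
    name: str
    service_url: str                   # assumed endpoint of a distributed grid service
    run: Callable[[object], object]    # local stand-in for invoking that service

@dataclass
class KnowledgeDiscoveryWorkflow:
    steps: List[WorkflowStep] = field(default_factory=list)

    def add(self, step: WorkflowStep) -> "KnowledgeDiscoveryWorkflow":
        self.steps.append(step)
        return self

    def execute(self, data: object) -> object:
        # Pass the intermediate result from one service to the next, as a pipeline.
        for step in self.steps:
            print(f"invoking {step.name} at {step.service_url}")
            data = step.run(data)
        return data

# Illustrative composition: load a data set, mine it, store the model.
workflow = (KnowledgeDiscoveryWorkflow()
            .add(WorkflowStep("load", "grid://storage.example/ds1", lambda _: [1, 2, 2, 3]))
            .add(WorkflowStep("mine", "grid://mining.example/kmeans", lambda d: {"clusters": sorted(set(d))}))
            .add(WorkflowStep("store", "grid://storage.example/out", lambda m: m)))
print(workflow.execute(None))
```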

Towards Knowledge based Systems for GDPR Compliance

Abstract. Legal compliance is traditionally seen to be sufficiently demonstrable using legal documents that describe how various operations and activities follow a given set of obligations. The General Data Protection Regulation (GDPR) enforces larger responsibilities upon organisations and provides motivation for the use of technological measures that can ease its compliance. While there is no legal requirement to collaborate on compliance technologies or to use a common mechanism for defining knowledge, doing so has several benefits to the larger community. Through this paper, we describe how open and shared technologies targeted towards GDPR and its compliance can be used to create knowledge-based systems. Our approach uses semantic web technologies due to their open and flexible nature towards describing concepts and relationships. We present a model for such a knowledge-based system along with work published to date.
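
As one concrete reading of such a knowledge-based system, the sketch below uses the rdflib library to record processing activities and their legal bases as RDF triples and then asks, via SPARQL, which activities still lack a legal basis. The vocabulary (`ex:ProcessingActivity`, `ex:hasLegalBasis`) is a made-up stand-in, not the vocabulary published by the authors.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/gdpr#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Two processing activities; only one declares a legal basis.
g.add((EX.EmailMarketing, RDF.type, EX.ProcessingActivity))
g.add((EX.EmailMarketing, EX.hasLegalBasis, EX.Consent))
g.add((EX.Profiling, RDF.type, EX.ProcessingActivity))

# Compliance-style query: activities with no recorded legal basis.
query = """
SELECT ?activity WHERE {
    ?activity a ex:ProcessingActivity .
    FILTER NOT EXISTS { ?activity ex:hasLegalBasis ?basis }
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print("missing legal basis:", row.activity)
```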

Research on construction of natural language processing system based on semantic web ontology

Several studies have addressed the scalability of RDF data, but none of them fully solves the scale problem. One study proposed path-based RDF data storage; however, this path-based method is ultimately implemented on top of a relational database: sub-graphs are stored in separate relational tables. Such systems therefore cannot support queries over massive RDF data at scale. Other work focuses on measuring semantic similarity within the network and uses estimation methods to selectively optimise RDF queries; these methods rely on in-memory graph implementations and are therefore limited in the scale they can handle. Language definition is currently a very active research field in web-based knowledge representation, with many proposals and emerging standards. The most important are RDF Schema and DAML+OIL (recently redefined as OWL), the latter defined on top of the former. In addition, XML Schema and Topic Maps are sometimes regarded as knowledge representation languages.
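
The "sub-graphs stored in different relational tables" idea can be illustrated with a minimal relational triple store; the sketch below (standard-library sqlite3, with a made-up table layout) shows why every multi-step graph pattern turns into SQL self-joins, which is where the scalability concern comes from.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
conn.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [("ex:Alice", "ex:worksFor", "ex:UnivA"),
     ("ex:UnivA", "ex:locatedIn", "ex:CityX")],
)

# A two-step path query (?person worksFor ?org . ?org locatedIn ?city)
# already needs a self-join; longer paths need one join per step.
rows = conn.execute("""
    SELECT t1.subject, t2.object
    FROM triples t1 JOIN triples t2 ON t1.object = t2.subject
    WHERE t1.predicate = 'ex:worksFor' AND t2.predicate = 'ex:locatedIn'
""").fetchall()
print(rows)   # [('ex:Alice', 'ex:CityX')]
```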

Semantic Web and Knowledge Management in eHealth Care System

We simulated our system in Java; it was implemented and tested on an Intel Dual Core processor running Windows XP, using NetBeans 7.0. The implementation consists of five modules: the PHR Owner module, the Attribute-based Access Policy module, the Data Confidentiality module, the Search Engine, and the Verify Files module. We describe each module in detail below. In the first module, the main goal of our framework is to provide secure patient-centric PHR access and efficient key management at the same time. The key idea is to divide the system into multiple security domains (namely, public domains (PUDs) and personal domains (PSDs)) according to the different users' data access requirements. The PUDs consist of users who make access based on their professional roles, such as doctors, nurses and medical researchers. In practice, a PUD can be mapped to an independent sector of society, such as the health care, government or insurance sector. For each PSD, its users are personally associated with a data owner (such as family members or close friends), and they access PHRs based on access rights assigned by the owner. Each data owner (e.g., a patient) is a trusted authority for her own PSD and uses a KP-ABE system to manage the secret keys and access rights of users in her PSD. Since the users are personally known to the PHR owner, the owner is in the best position to grant user access privileges on a case-by-case basis, which realises patient-centric access. For the PSD, data attributes are defined that refer to the intrinsic properties of the PHR data, such as the category of a PHR file. For PSD access, each PHR file is labelled with its data attributes, while the key size is only linear in the number of file categories a user can access. Since the number of users in a PSD is often small, this reduces the burden on the owner. When encrypting data for the PSD, all the owner needs to know are the intrinsic data properties. In the second module, we provide security using an attribute-based encryption technique. In our framework, there are multiple SDs, multiple owners, multiple AAs, and multiple users, and two ABE systems are involved. We term the users having read and write access data readers and contributors, respectively. In the third module,…
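
The sketch below captures only the access-control logic of the personal domain described above, not the KP-ABE cryptography: files are labelled with data attributes, the owner grants each PSD user an attribute set, and access is allowed when the user's attributes cover the file's label. The names and the covering rule are simplifying assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class PHRFile:
    name: str
    attributes: Set[str]          # intrinsic data attributes, e.g. the file category

@dataclass
class PersonalDomain:
    """A data owner's PSD: the owner assigns attribute-based access rights case by case."""
    owner: str
    user_keys: Dict[str, Set[str]] = field(default_factory=dict)

    def grant(self, user: str, attributes: Set[str]) -> None:
        # Stand-in for issuing a KP-ABE key whose policy covers these attributes.
        self.user_keys[user] = attributes

    def can_read(self, user: str, phr_file: PHRFile) -> bool:
        # Access allowed only if the file's attributes are covered by the user's key.
        return phr_file.attributes <= self.user_keys.get(user, set())

psd = PersonalDomain(owner="patient1")
psd.grant("family_member", {"allergy", "medication"})
print(psd.can_read("family_member", PHRFile("notes.txt", {"allergy"})))      # True
print(psd.can_read("family_member", PHRFile("mental.txt", {"psychiatry"})))  # False
```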

Mining severe drug-drug interaction adverse events using Semantic Web technologies: a case study

While recognizing, explaining and ultimately predicting DDIs constitute a huge challenge for medicine and public health, informatics-based approaches are increasingly used in dealing with the challenge [5]. Semantic Web technologies provide a scalable framework for data standardization and data integration from heterogeneous resources. For instance, Samwald et al. [6] developed a Semantic Web-based knowledge base for query answering and decision support in clinical pharmacogenetics, in which three dataset components are integrated. In our previous and ongoing study, we developed a standardized knowledge base of ADEs known as ADEpedia (http://adepedia.org) leveraging Semantic Web technologies [7]. ADEpedia is intended to integrate existing known ADE knowledge for drug safety surveillance from disparate resources such as Food and Drug Administration (FDA) Structured Product Labeling (SPL) [7], the FDA Adverse Event Reporting System (AERS) [8], and the Unified Medical Language System (UMLS) [9].
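
A toy version of the kind of query such an integrated knowledge base supports might look like the rdflib/SPARQL sketch below; the predicates (`ex:interactsWith`, `ex:causesADE`, `ex:severity`) are invented for illustration and do not reflect the actual ADEpedia schema.

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/ade#")   # hypothetical schema, not ADEpedia's
g = Graph()
g.add((EX.warfarin, EX.interactsWith, EX.aspirin))
g.add((EX.warfarin, EX.causesADE, EX.bleeding))
g.add((EX.bleeding, EX.severity, Literal("severe")))

# Find drug pairs whose interaction is linked to a severe adverse event.
q = """
SELECT ?drug ?other ?event WHERE {
    ?drug  ex:interactsWith ?other .
    ?drug  ex:causesADE     ?event .
    ?event ex:severity      "severe" .
}
"""
for drug, other, event in g.query(q, initNs={"ex": EX}):
    print(drug, other, event)
```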

Knowledge based Learning Experience Management on the Semantic Web

Learners engaged in informal learning tasks often find sharing their learning experience helpful. Informally, a learning experience describes what a learner has done, and with what learning resources. We explore technological issues in a semantic web driven learning experience management architecture that supports collaborative learning, i.e., learning by reusing and sharing experience and building knowledge together within a particular domain. We demonstrate that by exploiting semantic web technology, it is possible to search and reuse learning resources more intelligently through the use of semantically annotated learning experiences. Work has been carried out along the knowledge life cycle, from building an ontology of learning experiences and creating semantic annotations, to reusing the semantics to manage learning experience. Functionalities for ontology management, semantic annotation and reuse are provided in a Service Oriented Architecture (SOA). We give a scenario in which the system can be used to assist collaborative learning in different domains through sharing learning experience and constructing knowledge together.

A web service based architecture for authorization of unknown entities in a Grid environment.

In this section we give a detailed description of our system, the Augmented Authorization System Using Reputation (AASUR). At the most general level, the main contribution of this architecture is the notion of providing authorization on the basis of points which are collected in a standardized way, and which in turn may be used by individual resources to determine reputation. The only guarantee that this architecture makes is that points will be collected consistently for accepted actions. Any decision that results in a breach at a resource site is entirely the responsibility of the resource site that misinterpreted the points presented at the time of the resource request. There is no requirement to obtain or analyze any assertions from any third party, though in the future this capability could be exploited to enhance the system further. From an administrative point of view, there is no need to establish anything prior to a job request/allowance. Though these features may exist in other systems individually, this combination, and consequently the overall ability of the system, does not exist anywhere else to the best of our knowledge.
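
The core idea, points collected in a standardised way but interpreted locally by each resource, can be sketched as follows; the point values, the threshold rule and the class names are assumptions of this illustration rather than details of AASUR itself.

```python
from collections import defaultdict

class PointLedger:
    """Collects points consistently for accepted actions, per requesting entity."""
    def __init__(self):
        self.points = defaultdict(int)

    def record_accepted_action(self, entity: str, value: int = 1) -> None:
        self.points[entity] += value

class Resource:
    """Each resource interprets the presented points with its own reputation threshold."""
    def __init__(self, name: str, threshold: int):
        self.name, self.threshold = name, threshold

    def authorize(self, entity: str, ledger: PointLedger) -> bool:
        # Misjudging the threshold is the resource site's own responsibility.
        return ledger.points[entity] >= self.threshold

ledger = PointLedger()
for _ in range(3):
    ledger.record_accepted_action("unknown_user")

print(Resource("cluster-A", threshold=2).authorize("unknown_user", ledger))  # True
print(Resource("cluster-B", threshold=5).authorize("unknown_user", ledger))  # False
```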

Performance Analysis Of Ontology Processing System Using Information Filter and Knowledge Repository for Semantic Web

The suggested system makes dynamic query updates based on knowledge source modifications: as the source or its knowledge base changes, the way the ontology handles the query also changes. The user's request is passed to the Semantic Web network, which separates the query according to URIs. The separation mainly involves partitioning the data to form triples, which contain information about the subject, the relationships, and the objects (the data values). The segregated information is stored in the knowledge repository, from where it is further processed. If the information passed for comparison and mapping with the ontologies does not match the required formats, the processing system rejects the request. This removes the problems associated with uncertain information and legacy ontology processing. The metadata model and the logic data are defined in hierarchies given by the RDF and XML schemas; they are further used to hold the various user-oriented models. The rules defined for ontology formation are based on domain knowledge and logic. The rules can be dynamically updated by the ontology editor and are later guided by the previously assigned logic. The knowledge repository is a relational database organized in a way that enables efficient storage of, and access to, RDF metadata; it can be seen as an RDF repository. The knowledge processing component enables efficient manipulation of the stored knowledge, especially graph-based processing of knowledge represented in the form of rules, e.g. deriving a dependency graph or consistency checking. Knowledge sharing is realized by searching for rules that satisfy the query conditions. In the RDF repository, rules are represented as reified RDF statements, and since in RDF any statement is considered an assertion, we can view an RDF repository as a set of ground assertions of the form (subject, predicate, object). Rules are also related to the domain ontology, which contains domain axioms used for deriving new assertions. Therefore the searching is realized as an inferencing process [15].
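
The closing point, that knowledge sharing is realized as an inferencing process over ground assertions and domain rules, can be illustrated with a minimal forward-chaining sketch; the rule format and the example facts are assumptions made for this illustration only.

```python
# Ground assertions in (subject, predicate, object) form, as in the RDF repository.
facts = {
    ("course101", "taughtBy", "profA"),
    ("profA", "memberOf", "csDept"),
}

# One domain rule: if X taughtBy Y and Y memberOf Z, derive X offeredBy Z.
def rule(known):
    derived = set()
    for s, p, o in known:
        if p == "taughtBy":
            for s2, p2, o2 in known:
                if s2 == o and p2 == "memberOf":
                    derived.add((s, "offeredBy", o2))
    return derived

# Forward chaining: apply the rule until no new assertions appear.
while True:
    new = rule(facts) - facts
    if not new:
        break
    facts |= new

# "Searching for assertions that satisfy the query conditions."
print([t for t in facts if t[1] == "offeredBy"])   # [('course101', 'offeredBy', 'csDept')]
```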

Knowledge Extraction for Semantic Web using Web Mining with Ontology

This paper [5] proposes a method for making the K-Means algorithm more effective and efficient, so as to obtain better clustering with reduced complexity when discovering content from web pages using web content mining. A clustering algorithm partitions a data set into several groups such that the similarity within a group is larger than that among groups; usually multidimensional data are classified into groups (clusters) such that members of one group are similar according to a predefined criterion. The proposed algorithm uses the standard deviation, which reduces the time needed to build the clusters compared with simple k-means. Tatyana Ivanova (Technical University of Sofia, College of Energy and Electronics, Botevgrad, Bulgaria) [6] proposed and discussed the architecture of an ontology learning module that extends an integrated development environment for learning objects, also known as a learning resource management and development system, by integrating semantic technologies. Sivakumar and Ravichandran [7] present a review of semantic-based web mining and its applications. They survey semantic-based web mining as a combination of two fast-developing domains, the Semantic Web and web mining, motivated by the current challenges of the World Wide Web (WWW). The idea is to improve the results of web mining by making use of the new semantic structure of the Web, and to make use of web mining for creating the Semantic Web.
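
[5] does not spell out here how the standard deviation is used; one plausible reading is that the initial centroids are spread around the data's mean by multiples of the standard deviation instead of being picked at random, which tends to cut the number of iterations. The sketch below implements that reading for one-dimensional data; it is an interpretation, not the authors' algorithm.

```python
import statistics

def kmeans_1d(values, k=3, max_iter=100):
    mean, sd = statistics.fmean(values), statistics.pstdev(values)
    # Std-dev-based seeding: centroids spaced around the mean instead of random picks.
    centroids = [mean + sd * (i - (k - 1) / 2) for i in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        new_centroids = [statistics.fmean(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:   # converged
            break
        centroids = new_centroids
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.1, 9.8, 10.1], k=3)
print(centroids)
```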

A Prototype of a Semantic Platform with a Speech Recognition System for Visual Impaired People

For these reasons, in this paper we present a prototype of a platform based on the semantic web with speech recognition using natural language, allowing blind students to access knowledge resources and learn independently of their tutor. The present work is organised as follows: in Section 2 we review related papers; Section 3 discusses the current problems; in Section 4 we present our proposal; in Section 5 we present the conceptual scheme of the architecture; and finally, in Section 6 the expected contribution is presented.

A Novel Ontology Processing System Using Information Filter and Knowledge Repository for Semantic Web

The suggested system makes dynamic query updates based on knowledge source modifications: as the source or its knowledge base changes, the way the ontology handles the query also changes. The user's request is passed to the Semantic Web network, which separates the query according to URIs. The separation mainly involves partitioning the data to form triples, which contain information about the subject, the relationships, and the objects (the data values). The segregated information is stored in the knowledge repository, from where it is further processed. If the information passed for comparison and mapping with the ontologies does not match the required formats, the processing system rejects the request. This removes the problems associated with uncertain information and legacy ontology processing. The metadata model and the logic data are defined in hierarchies given by the RDF and XML schemas; they are further used to hold the various user-oriented models. The rules defined for ontology formation are based on domain knowledge and logic. The rules can be dynamically updated by the ontology editor and are later guided by the previously assigned logic. The knowledge repository is a relational database organized in a way that enables efficient storage of, and access to, RDF metadata; it can be seen as an RDF repository. The knowledge processing component enables efficient manipulation of the stored knowledge, especially graph-based processing of knowledge represented in the form of rules, e.g. deriving a dependency graph or consistency checking. Knowledge sharing is realized by searching for rules that satisfy the query conditions. In the RDF repository, rules are…

A Novel Approach for Improving the Recommendation System by Knowledge of Semantic Web in Web Usage Mining

Web sites are of great use to users: they are built, deployed and maintained to serve users with various functions. The Web is increasing in importance in every possible aspect and is becoming an expected part of one's routine and regular resources, so there is ample opportunity, wide scope and a real need to study this field in depth. Systems that incorporate knowledge from navigational history alone (WUM) often produce incomplete, inefficient results, because they rely on a single parameter: newly added content (a web page) will not appear in the recommendation list, even if it best matches the user's interest, simply because it has not yet been visited or has been visited very few times. A system can therefore be improved if it considers the semantic knowledge of each page and dynamically combines this factor with the knowledge obtained from WUM for every candidate element (page) in the recommendation set.
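
A minimal version of the proposed combination, usage-based evidence from WUM weighted together with a page's semantic match to the user's interests, might look like the sketch below; the weighting scheme and the scores are illustrative assumptions, not the paper's exact model.

```python
def recommend(pages, usage_score, semantic_score, alpha=0.6, top_n=3):
    """Rank pages by a blend of navigational (WUM) and semantic evidence.

    usage_score: visit-derived score in [0, 1]; 0 for a newly added page.
    semantic_score: similarity of the page's concepts to the user's interest profile.
    """
    ranked = sorted(
        pages,
        key=lambda p: alpha * usage_score.get(p, 0.0) + (1 - alpha) * semantic_score.get(p, 0.0),
        reverse=True,
    )
    return ranked[:top_n]

pages = ["old_popular.html", "new_relevant.html", "old_offtopic.html"]
usage = {"old_popular.html": 0.9, "old_offtopic.html": 0.7}          # the new page has no history
semantic = {"old_popular.html": 0.4, "new_relevant.html": 0.95, "old_offtopic.html": 0.1}

# The semantically relevant new page now appears even though it was never visited.
print(recommend(pages, usage, semantic))
```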

DEVELOPING SEMANTIC WEB MODEL OF SCHOOL INFORMATION SYSTEM USING SEMANTIC WEB TECHNOLOGIES

Information technology (IT) has a remarkable influence on the education sector. The number of schools and colleges has increased, and schools now provide advanced infrastructure, transportation and learning facilities through the latest technology. Every parent dreams of giving their children a better education. Traditional web pages, however, are limited in their ability to return proper results when searching for a school based on criteria. This paper proposes a Semantic Web framework to enhance the school information system. The semantic model of the school information system provides knowledge-based information about schools. An ontology [1] of the user profile, including questionnaires related to schools, can be useful in searching for the proper information.

The New Approach To Internet Marketing Based On Knowledge Bases And The Semantic Web

Marketing on social networks has become more serious than ever; it is no longer something one can do in one's free time according to whatever principles one likes. It is hardly the case that somebody with little knowledge and experience runs the Coca-Cola web page or manages its marketing activities, and the same should hold for large as well as small business systems: if a business wants to become a big business system, it must think big. We must bear in mind that "correct marketing does not work by personal feeling." Tools, techniques and skills related to digital marketing have been developed through a large number of research and case studies. In order to succeed, one must be better than the competition, and then better still.

Managing Semantic Metadata for Web Grid Services

The way that current service-oriented infrastructure handles and manages services' metadata is neither adequate nor effective in helping service discovery and knowledge sharing. First, there is not enough metadata about Web/Grid services. Services, in particular legacy resources, are developed by service providers for their own use, without realising the role and importance of metadata; this naturally leads to a lack of descriptive information for services. Second, metadata are unstructured. Web/Grid services are diverse; the types of metadata required for describing services in e-Science (Hey & Trefethen, 2003) vary greatly between individuals, organisations and scientific communities. The use of different terminologies and the adoption of various metadata models, such as using comments or annotations as metadata, are inevitable. Unsurprisingly, this causes problems of mutual understanding and service interoperability. Third, metadata lack semantics. XML-based (www.w3.org/XML) metadata modelling and representation, as in WSDL and UDDI, is incapable of capturing genuine semantics, relationships or constraints. Humans have no problem understanding XML-based metadata, as in the photo example described above, because we know the meaning of the English words. The question is: "can…

Applying the Semantic Web to Manage Knowledge on the Grid

The ontologies are represented in a machine-understandable language with formal semantics and reasoning capability, namely DAML+OIL. This language is based on Description Logics. Ontologies in this language can be elaborate and expressive, and the temptation is to overcomplicate the interface to them, rendering them daunting and incomprehensible to the user. Instead we adopted a simplified presentation interface that loses little of the expressivity of the language but hides it from the user. We call this OntoView; it provides a "domain expert-sympathetic" view over the ontology, configurable by the expert knowledge engineer in collaboration with the domain specialists. The view consists of a set of relatively simple "view entities" that map to more complex constructs in the underlying ontology. As these entities are manipulated in the view, corresponding modifications are produced in the ontology. The manner in which the entities in a particular ontology view map to the constructs in the underlying ontology is determined by a "view configuration" (Figure 2), specifically created for that ontology and stored in an XML-based format.
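
As a rough illustration of the view-configuration idea, the sketch below maps a simple "view entity" to the more complex ontology constructs it stands for and turns an edit in the view into the corresponding ontology statements; the mapping format and all names are invented for the example and do not reflect the DAML+OIL constructs used in the actual system.

```python
# Hypothetical view configuration: one simple view entity expands to several
# underlying ontology statements, with {name} filled in from the view.
VIEW_CONFIG = {
    "Sample": [
        ("{name}", "rdf:type", "lab:BiologicalSample"),
        ("{name}", "lab:collectedBy", "lab:UnknownAgent"),
    ],
}

def apply_view_edit(entity_type, name):
    """Translate a manipulation of a view entity into ontology-level modifications."""
    template = VIEW_CONFIG[entity_type]
    return [(s.format(name=name), p, o.format(name=name)) for s, p, o in template]

# Creating a 'Sample' called ex:blood42 in the view produces these ontology triples.
for triple in apply_view_edit("Sample", "ex:blood42"):
    print(triple)
```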

Knowledge based Recommendation System in Semantic Web: A Survey

The World Wide Web has become a major source of information acquisition, as it contains millions of documents on any topic of interest to users. Users often find it difficult to extract relevant information from the documents returned in response to a query posted on the Web; the reason is that the WWW contains documents that can be interpreted only by humans and not by machines [1]. Recommendation systems are used to solve this problem by generating personalized recommendations for Web users. Personalized recommendation on the Web is no longer considered an option but has become a necessity because of the movement from traditional physical stores of products or information to virtual stores of products and information [2]. As a result of this movement, customers have a wide variety of options to choose from. Users can switch from one Website to another in a virtual store, as many Websites offer the same type of services and products, so it becomes difficult to retain customers in a virtual store. Personalized recommendations help solve this customer retention problem. Recommendation systems improve customers' trust in a business by building customer loyalty and a one-to-one relationship through understanding the needs of each customer.

Head Movement Based Feeder System for the Physically Challenged Using PSoC

Proximity detection is performed by a proximity antenna acting as a capacitive sensor. The proximity antenna consists of a wire connected to the proximity connector on the board. Upon power up, the board establishes a baseline capacitance value of the board along with the antenna attached to it. This is used as a reference value of capacitance and is called the parasitic capacitance of the board. When a conductive object such as a human finger is brought close to the antenna, the overall capacitance of the board changes. This change in capacitance determines the proximity of the finger to the antenna. An increase in capacitance corresponds to the finger being closer to the antenna. This is used to light up the LEDs based on the proximity of the finger to the antenna. The number of LEDs turned on increases as the proximity of the finger increases. To establish the parasitic capacitance, the antenna must be connected to the board before power up. The baseline for capacitive sensors is updated continuously by the firmware. This accounts for any changes in environmental conditions during the operation.
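
The baseline-and-delta logic described here can be prototyped in a few lines; the sketch below (plain Python rather than PSoC firmware, with invented threshold values) shows how a raw capacitance reading is turned into a number of lit LEDs.

```python
def leds_to_light(raw_capacitance, baseline, num_leds=5, full_scale=50.0):
    """Map the capacitance increase over the power-up baseline to an LED count.

    baseline: parasitic capacitance measured at power-up with the antenna attached.
    full_scale: assumed delta (same raw units) at which all LEDs are on.
    """
    delta = max(0.0, raw_capacitance - baseline)        # finger closer -> larger delta
    return min(num_leds, int(num_leds * delta / full_scale))

baseline = 200.0   # established once, before any finger approaches
for reading in (200.0, 210.0, 230.0, 255.0):
    print(reading, "->", leds_to_light(reading, baseline), "LEDs")
```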

Incremental knowledge based system for schema mapping

In order to determine the similarity between instances, all the instances of a schema element are combined into a virtual document. In this way, many virtual documents are created from all the schema elements. The documents are then compared with each other using the document similarity measure TF/IDF to complete the matching. This approach has been implemented in several systems, including RiMOM (Li et al., 2009) and COMA++ (Massmann and Rahm, 2008). COMA++ uses website names and descriptions for determining the similarity between documents. Instance-overlapping methods are also used to determine element similarity. In COMA++, URLs are used to identify the overlap between web directories such as Yahoo and Google, considering URL usage. Four similarity measures (base-k similarity, dice, minimum and maximum) are used to determine URL-based similarity. URL matching alone achieves an average F-measure of 60%, and 79% when combined with name and description matching. Instance-overlapping methods are also used to match large life-science ontologies (Kirsten et al., 2007) and product catalogs (Thor et al., 2007). In order to match product catalogs, the similarity between associated instances is used to derive the similarity between elements. Hyperlinks between data sources and general object matching are also used for performing instance matching. Hoshiai et al. (2004) compared feature vectors between a pair of elements using keywords found in the instances, and the similarity between the feature vectors is determined by a structural matcher. There are other instance-based matching systems such as GLUE (Doan et al., 2002), SAMBO (Lambrix and Tan, 2006) and SEMINT (Li and Clifton, 2000).
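
The virtual-document comparison step can be sketched with scikit-learn's TF/IDF vectoriser: each schema element's instances are concatenated into one document and element similarity is the cosine between the resulting vectors. The element names and instance strings below are invented; this is not the RiMOM or COMA++ code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One "virtual document" per schema element: all of its instances joined together.
virtual_docs = {
    "S1.author": "J. Smith A. Jones M. Lee",
    "S2.writer": "A. Jones J. Smith",
    "S2.price":  "12.50 9.99 30.00",
}

names = list(virtual_docs)
tfidf = TfidfVectorizer().fit_transform(virtual_docs[n] for n in names)
sim = cosine_similarity(tfidf)

# Element pairs with high cosine similarity become matching candidates.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} ~ {names[j]}: {sim[i, j]:.2f}")
```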
