Even after applying the Semantic Web in the e-learning field, e-learning remains limited by a well-known obstacle: the lack of communication between tutors and students, and between students and their advisors. This obstacle arises because all information and material is uploaded and accessed over the web without the face-to-face contact of the traditional learning system. The limitation causes problems in tracking students' situations, giving proper instructions to improve their performance, and so on. To narrow this gap between the two learning systems, Semantic Web Mining was proposed to investigate students' log data on distance-learning portals and to provide administrators and advisors with indications of students' conditions and of what could motivate and help them, so that they can decide how best to guide their students toward more successful study, and to personalize the e-learning content and services according to each student's preferred study strategies. From that work it appears that representing the semantic data, collected by questionnaire, in a relational database is not the best choice, since more suitable formats such as XML, RDF, and OWL capture genuinely semantic data representation. Because an ordinary relational database was used, the approach seems inappropriate for Semantic Web Mining.
In this part, a state-of-the-art survey is carried out to discuss the different approaches and solutions that researchers have proposed to make in-vehicle communication more secure. Researchers have worked at different CAN-Bus layers to introduce security solutions. Cho and Shin proposed a clock-skew-based framework for ECU fingerprinting and used it to develop a clock-based Intrusion Detection System (IDS). The proposed fingerprinting method exploits a clock characteristic that exists in all digital systems: the "tiny timing error known as clock skew". Clock-skew identification exploits the uniqueness of the clock skew and clock offset, which are used to identify a given ECU from the clock attributes of the sending ECU. The method measures and leverages the periodic behavior of CAN-Bus messages to fingerprint each ECU in the network and then constructs a reference clock behavior for each ECU using the Recursive Least Squares (RLS) algorithm. Deviation from this baseline clock behavior is considered abnormal (the ECU is compromised), with a low false-positive rate of 0.055%. Cho and Shin built a prototype of the proposed IDS and demonstrated the effectiveness of their CIDS on three different vehicles: a Honda Accord, a Toyota Camry, and a Dodge Ram.
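A minimal sketch of the clock-skew idea above: estimate an ECU's skew as the slope of its accumulated clock offset over elapsed time, using scalar Recursive Least Squares. The timestamps are synthetic, and the forgetting factor and initialization are illustrative choices, not values from Cho and Shin's paper.

```python
# Sketch of RLS-based clock-skew estimation for a periodic CAN sender.
# Synthetic data; parameter choices here are illustrative assumptions.

def estimate_skew(arrival_times, period, lam=0.9995):
    """Estimate the clock skew of a periodic sender from its message
    arrival times, via RLS on the accumulated clock offset."""
    skew = 0.0        # current slope (skew) estimate
    p = 1.0           # scalar RLS "covariance"
    acc_offset = 0.0  # accumulated clock offset
    t0 = arrival_times[0]
    for k in range(1, len(arrival_times)):
        # offset = deviation of the observed gap from the nominal period
        acc_offset += (arrival_times[k] - arrival_times[k - 1]) - period
        t = arrival_times[k] - t0            # elapsed time (regressor)
        g = p * t / (lam + t * p * t)        # RLS gain
        skew += g * (acc_offset - skew * t)  # correct by identification error
        p = (p - g * t * p) / lam
    return skew

# Synthetic ECU whose clock runs 100 ppm fast, sending every 10 ms.
times = [k * 0.01 * (1 + 100e-6) for k in range(500)]
print(f"estimated skew: {estimate_skew(times, 0.01) / 1e-6:.1f} ppm")
```

An intrusion detector built on this would flag a message stream whose estimated skew deviates from the ECU's learned baseline.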
Miao et al. presented a two-layer behavior abstraction strategy based on semantic analysis of dynamic API sequences. Operations on sensitive system resources and complex behaviors are abstracted in an interpretable way at different semantic layers. At the lower layer, raw API calls are combined to extract low-layer behaviors via data-dependency analysis. At the higher layer, low-layer behaviors are further combined to build more complex high-layer behaviors with good interpretability. The extracted low-layer and high-layer behaviors are finally embedded into a high-dimensional vector space, so that the abstracted behaviors can be used directly by many popular machine-learning algorithms. In addition, to handle the problem that benign programs are not adequately sampled, or that malware and benign programs are severely imbalanced, an enhanced one-class support vector machine (OC-SVM) named OC-SVM-Neg is proposed which makes use of the available negative examples. The experimental results demonstrate that the proposed feature-extraction method with OC-SVM-Neg outperforms binary classifiers in false-alarm rate and generalization ability.
list of servers and could accept requests. In addition, the improved WLC was compared with the existing WLC and the round-robin algorithm, and the results indicated that the improved WLC provides better performance than the other algorithms in terms of load distribution and average execution time. In , A. A. Abdelltif et al. presented a load-balancing application module based on SDN in a cloud environment to improve response time and resource usage. The module consisted of a system observer, a dynamic load-balancing algorithm, and a service categorization module, deployed on an SDN controller and the server cluster, with an OpenFlow switch connecting the server pools to the controller. The system observer gathered feedback from the servers and sent it to the dynamic load balancer, while incoming requests were accepted by the service categorization module, which identified the requested services and forwarded them to the balancer module. The load of each server was computed from resource utilization (CPU and memory) and bandwidth. The proposed system was deployed on different VMs using OpenStack, and the httperf tool was used to generate load toward the servers. Requests per second, response time, and reply time were used to measure the performance of the dynamic load-balancing module, and the results showed better performance on all of these metrics compared with HAProxy. In , Mustafa Elgili examined the efficiency of three different load-balancing algorithms: round robin, least connection, and least loaded. The study used the OPNET tool to simulate the proposed system structure, with a load balancer placed between 8 HTTP servers and 112 clients. It found that the CPU usage of the web servers is higher when using the least-loaded algorithm, whereas the other two algorithms showed the same rate of CPU consumption.
Moreover, with respect to balancing the load among servers, the least-loaded method distributed the requests with large differences among servers. Although Round
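The weighted least-connection idea discussed above can be sketched as a simple selection rule: pick the server with the smallest ratio of active connections to its capacity weight. The server pool below is hypothetical.

```python
# Sketch of Weighted Least Connection (WLC) server selection.
# Pool data are invented for illustration.

def pick_server(servers):
    """servers: list of dicts with 'name', 'weight' (capacity weight)
    and 'active' (current connection count). Returns the server with
    the lowest connections-per-weight ratio."""
    return min(servers, key=lambda s: s["active"] / s["weight"])

pool = [
    {"name": "srv-a", "weight": 4, "active": 12},  # ratio 3.0
    {"name": "srv-b", "weight": 2, "active": 5},   # ratio 2.5
    {"name": "srv-c", "weight": 1, "active": 3},   # ratio 3.0
]
print(pick_server(pool)["name"])  # srv-b: lowest load-to-weight ratio
```

An "improved" WLC along the lines surveyed above would extend this ratio with further signals such as CPU, memory, or bandwidth utilization.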
Abstract: The aim of this paper is to investigate and present the subject of building ontologies using Semantic Web Mining, defined as the combination of two fast-developing research areas: the Semantic Web and Web Mining. Web mining is the application of data-mining techniques to the content, structure, and usage of Web resources, and the Semantic Web is the second-generation WWW, enriched by machine-processable information that supports the user in his tasks. This can help to discover global as well as local structures ("models" or "patterns") within and between Web pages, and supports ontology extraction, which is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between those concepts, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. This paper gives an overview of where the two areas meet today and discusses ways in which a closer integration could be profitable.
A study was carried out presenting comparative statements of various page-ranking algorithms together with link editing, General Utility Mining, and Topological Frequency Utility Mining. The researchers also provided a model built on constraints such as Web Mining activity, topology, process, weighting factor, time complexity, and limitations. This also helped in comparing the WPs-Tree and WPs-I tree structures. They concluded that page-ranking algorithms play a major role in making search navigation easier in the results of a search engine, which helps in the best utilization of web resources by providing the required information to the navigator. They also concluded that WPs-Tree and WPs-I tree provide better storage representations, with which the associations between web pages can be found easily and efficiently. This survey is helpful for understanding various page-ranking algorithms along with the different storage representations used to correlate web pages. As a future direction, a new metric could be developed that improves on these, so that users get quicker responses and network resources are used efficiently, thus promoting green computing (Prasad Reddy, Shashikumar G. Totad, Geeta R. Bharamagoudar, Sept-Oct 2012).
As Figure 1 describes, at the bottom is XML, which is suitable for sending documents across the web. XML allows writing structured web documents with a user-defined vocabulary. RDF is a basic data model, similar to the entity-relationship model, for writing simple statements about Web objects (resources). The RDF data model does not rely on XML, but RDF has an XML-based syntax. RDF Schema provides modeling primitives for organizing Web objects into hierarchies; its key primitives are classes, properties, subclass and sub-property relationships, and domain restrictions. It can be viewed as a primitive language for writing ontologies. The Logic layer is used to enhance the ontology language and to allow the writing of application-specific declarative knowledge. The Proof layer involves the actual deductive process, the representation of proofs in Web languages (from lower levels), and proof validation. Finally, the Trust layer will emerge through the use of digital signatures and other kinds of knowledge, based on recommendations by trusted clients.
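To make the XML-based RDF syntax mentioned above concrete, the following is a minimal RDF/XML document expressing a single statement about a Web resource. The `ex` namespace, the URIs, and the `creator` property are illustrative examples, not taken from the surveyed work.

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms#">
  <!-- One statement: the resource (subject) has a creator (predicate)
       whose value is "Jane Doe" (object). URIs are illustrative. -->
  <rdf:Description rdf:about="http://example.org/pages/index.html">
    <ex:creator>Jane Doe</ex:creator>
  </rdf:Description>
</rdf:RDF>
```

The same statement can be written in other RDF serializations; the data model itself is the subject-predicate-object triple, independent of the XML encoding.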
A program that wants to compare or combine information across two databases has to know when two terms are being used to mean the same thing. Ideally, the program should have a way to discover such common meanings for whatever databases it encounters. A solution to this problem is provided by the third basic component of the Semantic Web: collections of information called ontologies. In philosophy, an ontology is a theory about the nature of existence, about what types of things exist; ontology as a discipline studies such theories. Artificial-intelligence and Web researchers have co-opted the term: for them, an ontology is a document or file that formally defines the relations among terms. The most typical kind of ontology for the Web has a taxonomy and a set of inference rules. The taxonomy defines classes of objects and relations among them. For example, an address may be defined as a type of location, and city codes may be defined to apply only to locations, and so on. Classes, subclasses, and relations among entities are a very powerful tool for Web use. We can express a large number of relations among entities by assigning properties to classes and allowing subclasses to inherit those properties. If city codes must be of type city, and cities generally have Web sites, we can discuss the Web site associated with a city code even if no database links a city code directly to a Web site. Inference rules in an ontology supply further power. An ontology may express the rule "If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code." A program could then readily deduce, for instance, that a Cornell University …
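The city-code/state-code rule quoted above can be sketched as a toy lookup-based inference. The data structures, the city code 14850, and the mapping to NY are invented here purely for illustration.

```python
# Toy rendering of the inference rule: if an address uses a city code,
# and that city code is associated with a state code, then the address
# has that state code. All data below are invented for illustration.

city_to_state = {"14850": "NY"}                 # city code -> state code
addresses = {"addr-1": {"city_code": "14850"}}  # address -> its city code

def infer_state(addr_id):
    """Apply the rule: address uses a city code, the city code has an
    associated state code => the address has that state code."""
    code = addresses[addr_id]["city_code"]
    return city_to_state.get(code)

print(infer_state("addr-1"))  # NY
```

A real Semantic Web agent would derive the same conclusion from RDF triples and ontology axioms rather than hard-coded dictionaries, but the deduction pattern is the same.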
The authors' aim was to focus on two important issues: improving search-engine performance through static caching of search results, and helping users find interesting web pages by recommending news articles and blog posts. For the static caching of search results, they presented the query-covering approach. For the recommendation of web pages, they presented a graph-based approach that helps to identify user logs. The paper is concerned with approaches to web content mining and the different uses of web mining. The Web contains collections of hyperlinks, texts, and images, and web-mining methods provide a powerful framework for information extraction. The authors offered an organized and extensive outline of the literature in the area of Web data extraction methods and applications.
A different approach to SRT is taken by Vector Space Models (VSM), which eschew the use of symbolic structures, instead modeling all linguistic elements as vectors, from the level of words to phrases and sentences. Proponents of this approach generally invoke neural network methods, obtaining impressive results on a variety of tasks including lexical tasks such as cross-linguistic word similarity (Ammar et al., 2016), machine translation (Bahdanau et al., 2015), and dependency parsing (Andor et al., 2016). VSMs are also attractive in being flexible enough to model non-local and gradient phenomena (e.g., Socher et al., 2013). However, more research is needed to clarify the scope of semantic phenomena that such models are able to reliably capture. We therefore only lightly touch on VSMs in this survey.
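The vector-space view above can be illustrated with the standard cosine-similarity measure over word vectors. The 3-dimensional toy vectors below are invented; real systems learn hundreds of dimensions from corpora.

```python
# Words as vectors, with similarity measured by cosine of the angle
# between them. Toy 3-d vectors, invented for illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}
# Semantically close words should have higher cosine similarity.
print(cosine(vectors["cat"], vectors["dog"]) >
      cosine(vectors["cat"], vectors["car"]))  # True
```

The gradient nature of the measure (a continuous score rather than a yes/no relation) is what makes VSMs attractive for the graded phenomena mentioned above.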
In , the authors discussed IE tools for the semantic web and compared them along many dimensions, such as the task domain, the techniques used, and the degree of automation, and also described why IE systems fail along these dimensions. In , the author discussed how the semantic web differs from Web 2.0, along with the advantages and disadvantages of both approaches; the author argues that the semantic web is different but needs only some basic structure of the web to increase reliability and flexibility. In , the authors first discussed the semantic web and then proposed a model that gives the build process and a complete description of the functions of each module. The authors also outlined how to increase scalability and provide better semantic-web results.
This paper proposes a method for making the K-Means algorithm more effective and efficient, so as to obtain better clustering with reduced complexity when discovering content from web pages using web content mining. A clustering algorithm partitions a data set into several groups such that the similarity within a group is larger than that among groups; usually multidimensional data are classified into groups (clusters) such that the members of one group are similar according to a predefined criterion. The proposed algorithm uses the standard deviation to reduce the time needed to build the clusters in simple k-means. Tatyana Ivanova (Technical University of Sofia, College of Energy and Electronics, Botevgrad, Bulgaria) proposed and discussed the architecture of an ontology-learning module that extends an integrated development environment for learning objects, also known as a learning-resource management and development system, by integrating semantic technologies. Sivakumar and Ravichandran, in "A Review on Semantic-Based Web Mining and its Applications", survey semantic-based web mining as a combination of two fast-developing domains, the Semantic Web and web mining, motivated by the current challenges of the World Wide Web (WWW). The idea is to improve the results of web mining by making use of the new semantic structure of the Web, and to use web mining to help create the Semantic Web.
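The k-means step discussed above can be made concrete with a bare-bones sketch. The points and initial centroids are toy values, and the paper's standard-deviation-based improvement is not reproduced here; this shows only the standard assign-then-update loop.

```python
# Bare-bones 1-d k-means: alternate between assigning points to their
# nearest centroid and moving each centroid to its cluster mean.
# Data and initial centroids are invented for illustration.

def kmeans(points, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [abs(p - c) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

pts = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centers, groups = kmeans(pts, centroids=[0.0, 5.0])
print(centers)  # roughly [1.0, 9.53]
```

Within-group similarity exceeding between-group similarity, as the text defines clustering, corresponds here to each point lying closer to its own centroid than to the other.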
Google stated that PageRank works by calculating the number and quality of links to a web page to form a rough estimate of how important the page is. Pages are ranked depending on the number of links pointing to them: the algorithm assigns each page a total PageRank based on the PageRanks of the pages that link to it. The links of a page can be arranged into the following types: inbound links, which are links to the given page from external source pages; outbound links, which are links from the given page to pages in the same site or other sites; and dangling links, which point to a page that has no outgoing links.
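The ranking idea above can be sketched with the classic power-iteration form of PageRank. The 4-page graph is invented, 0.85 is the commonly cited damping factor, and the dangling page simply spreads its rank uniformly; this is an illustrative sketch, not Google's production algorithm.

```python
# Minimal PageRank by power iteration over an invented 4-page graph.
# Handles dangling pages (no outlinks) by spreading their rank evenly.

def pagerank(outlinks, damping=0.85, iterations=50):
    pages = list(outlinks)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, targets in outlinks.items():
            if targets:  # ordinary outbound links share rank equally
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new[t] += share
            else:        # dangling page: distribute rank uniformly
                for t in pages:
                    new[t] += damping * rank[p] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": []}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # C, which gathers the most inbound rank
```

Here "C" ranks highest because it receives links from both "A" and "B", matching the text's point that inbound links drive a page's rank.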
Peter F. Patel-Schneider et al. presented the Semantic Web Rule Language (SWRL), which combines sublanguages of the Web Ontology Language (OWL) and the Rule Markup Language (RuleML). It combines OWL DL and OWL Lite, which are sublanguages of OWL, with the unary/binary Datalog sublanguages of RuleML. The paper extends the set of OWL axioms to include Horn-like rules, thereby enabling rules to be combined with an OWL knowledge base. A complete description of the OWL model-theoretic semantics is presented in order to give a formal meaning to OWL ontologies that include such rules. The proposed rules take the form of an implication between an antecedent (body) and a consequent (head): whenever the conditions specified in the antecedent hold, the conditions specified in the consequent must hold as well. For example, the rule hasParent(?x,?y) ∧ hasBrother(?y,?z) → hasUncle(?x,?z) asserts that whenever someone's parent has a brother, that brother is the person's uncle.
How do we relate low-level image features to high-level semantics? Our survey shows that the state-of-the-art techniques for reducing the 'semantic gap' fall mainly into five categories: (1) using object ontology to define high-level concepts, (2) using machine-learning tools to associate low-level features with query concepts, (3) introducing relevance feedback (RF) into the retrieval loop for continuous learning of users' intention, (4) generating semantic templates (ST) to support high-level image retrieval, and (5) making use of both the visual content of images and the textual information obtained from the Web for WWW image retrieval. Retrieval at Level 3 is difficult and less common; possible Level 3 retrieval can be found in domain-specific areas such as art museums or newspaper libraries. Current systems mostly perform retrieval at Level 2. There are three fundamental
real world it is a software engineering model. The design goal of a semantic data system is to represent the real world as accurately as possible within some data set. The data are organized linearly and hierarchically to convey certain meanings, as in the example below. Semantic data represent the real world within data sets, allowing machines to interact with worldly information without human interpretation. The data are organized in binary models of objects, mostly in groups of three parts consisting of two objects and their relationship. For example, to represent that a pen is on a letter book, the organization of the data might look like: PEN LETTER BOOK. The objects (pen and letter book) are interpreted with regard to their relationship (resting on). The data are organized linearly, telling the software that since PEN comes first in the line, it is the object that acts; i.e., the position of the word makes the software understand that the pen is on the letter book and not that the letter book is sitting on the pen. Databases designed on this concept have greater applicability and are easily integrated into other databases.
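The linear PEN LETTER BOOK organization above can be sketched as explicit subject-predicate-object triples. The predicate name `restingOn` and the query helper are invented for illustration; the point is that position in the triple fixes which object acts.

```python
# The pen-on-letter-book example as subject-predicate-object triples.
# Relation and helper names are invented for illustration.

triples = [
    ("PEN", "restingOn", "LETTER_BOOK"),
]

def objects_resting_on(thing):
    """Return every subject recorded as resting on `thing`."""
    return [s for (s, p, o) in triples if p == "restingOn" and o == thing]

print(objects_resting_on("LETTER_BOOK"))  # ['PEN']
print(objects_resting_on("PEN"))          # [] -- the book is not on the pen
```

Because the subject and object slots are distinct, the asymmetry the text describes (the pen acts on the book, not vice versa) is preserved mechanically.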
The discovery of user access patterns from user access logs, referrer logs, user registration logs, etc. is the main purpose of Web Usage Mining. Pattern discovery is performed only after cleaning the data and after identifying user transactions and sessions from the access logs. The analysis of the pre-processed data is very beneficial to all organizations doing business over the web. The tools used for this process employ techniques based on AI, data-mining algorithms, psychology, and information theory. The different systems developed for Web Usage Mining have introduced different algorithms for finding the maximal forward reference and large reference sequences, which can be used to analyze the traversal path of a user. The kinds of mining that can be performed on the preprocessed data include path analysis, association rules, sequential patterns, clustering, and classification; which mining techniques to use depends entirely on the analyst's requirements. Association rules: this technique is generally applied to a database of transactions, each consisting of a set of items, and the rules imply some kind of association between the transactions in the database. It is important to discover the associations and correlations between these sets of transactions. In a web data set, a transaction consists of the URLs visited by a client on the web site. It is very important to define the support parameter when applying the association-rule technique to the transactions. This helps in reducing the unnecessary
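The support parameter mentioned above can be sketched for URL-visit transactions. The transactions below are invented for illustration; support(X) is the fraction of transactions containing itemset X, and confidence is included only to show the companion measure.

```python
# Support and confidence over invented URL-visit transactions.

transactions = [
    {"/home", "/products", "/cart"},
    {"/home", "/products"},
    {"/home", "/about"},
    {"/home", "/products", "/cart", "/checkout"},
]

def support(itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """confidence(X -> Y) = support(X union Y) / support(X)."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"/products", "/cart"}))       # 0.5
print(confidence({"/products"}, {"/cart"}))  # 2/3
```

A minimum-support threshold then prunes itemsets that occur too rarely, which is exactly how the parameter cuts down unnecessary rules.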
Nowadays the internet has become an essential part of our daily life. In the past decade there has been outstanding growth in the number of websites and visitors, and because of this a large amount of data has been generated. Data mining involves the analysis of data sets to find unsuspected relationships and to summarize the data. Web mining is one of the most noteworthy fields in the area of data mining: it is the application of data-mining techniques to Web data, which can be Web document content, hyperlink structure, or Web log files, to discover and mine undiscovered knowledge and useful patterns. Web mining is divided into the following three categories. Web content mining is the extraction, mining, and integration of useful data, information, and knowledge from Web page content; it is differentiated from two different points of view, the information-retrieval view and the database view. Web structure mining focuses on the underlying structure of websites; it can be used to categorize web pages and is useful for generating information such as the similarity and relationships between different web sites. Web usage mining is the process of finding out what users are
environment tools use their own built-in software for graphical representation. Programs developed in Java can interface with the OntoStudio, TopBraid, and SWOOP tools, and Protégé has its own application programming interface (API) to integrate with any given application. Multiple-language support is available in OntoStudio. An inference engine is used to resolve any ambiguity about the meaning of the terminology used in the ontology. The Pellet reasoner is supported by all the IDEs, and many reasoners use first-order predicate logic to perform reasoning. One of the difficulties of these IDEs is that an OWL file created in one editor tool is not compatible with other editor tools, even though the file types are the same. Among the four tools discussed in this paper, TopBraid is user-friendly, and SWOOP and Protégé are open-source editors. OntoStudio is a commercially available tool with additional features such as multi-language support, a database interface, report and chart generation, and more flexibility. The Protégé tool is also flexible, easy to use, an open editor, and widely used by the semantic community.
Measuring blog participants' authority is becoming increasingly important, particularly when professional application areas are involved. Authoritative bloggers might be experts on a particular subject or persuasive forces that influence others. Weighing blog posts with respect to the author's authority is a significant challenge for knowledge-discovery algorithms, as is verifying the authenticity of data shared on blogs. Information frequently flows from blog to blog, making it difficult to track the information's origin, provenance, or credibility. Discovery algorithms that incorporate techniques such as pattern mining to discover information flows