Ontologies only answer part of our questions about how to represent the characteristics of services. Exploring and discovering large information spaces is still a difficult task, especially if the user is not familiar with the terminology used to describe information and the query language used to search for specific information. Work on enhancing Web service discovery frameworks such as UDDI with semantic features cannot solve the problem, since the user still needs to write the query in a dedicated ontology language, or to fill a template with terms defined in an ontology, to find services. To cross this hurdle, we develop the OPUCE Visual Semantic Service Browser (OPUCE Browser in short), which provides intuitive visual interfaces for users to easily carry out the tasks of service exploration and discovery. Most significantly, the browser does not require users to have any prior knowledge of ontology languages or domain terms, which underscores the central theme of the OPUCE project: user-centricity.
Abstract. The successful application of Grid and Web Service technologies to real-world problems, such as e-Science, requires not only the development of a common vocabulary and meta-data framework as the basis for inter-agent communication and service integration, but also access to and use of a rich repository of domain-specific knowledge for problem solving. Both requirements are met by the respective outcomes of ontological and knowledge engineering initiatives. In this paper we discuss a novel, knowledge-based approach to resource synthesis (service composition), which draws on the functionality of semantic web services to represent and expose available resources. The approach we use exploits domain knowledge to guide the service composition process and provide advice on service selection and instantiation. The approach has been implemented in a prototype workflow construction environment that supports the runtime recommendation of a service solution, service discovery via semantic service descriptions, and knowledge-based configuration of selected services. The use of knowledge provides a basis for full automation of service composition via conventional planning algorithms. Workflows produced by this system can be executed through a domain-specific direct mapping mechanism or via a more fluid approach such as WSDL-based service grounding. The approach and prototype have been used to demonstrate practical benefits in the context of the Geodise initiative.
If we take the example of the BLASTn service presented in the requirements section, we can demonstrate how the semantic find service can support a semantic query over such a resource description. The user presents a discovery query in terms of a DAML+OIL description of the kind of service they require; in the example case it could be a service which accepts Expressed Sequence Tags. The find service uses the ontology server to determine which services accept Expressed Sequence Tags or a more general semantic data type. The find service allows users to resolve queries of the “domain specific” category in Section 2. The separation of semantic service discovery from registration stems from several key requirements. Firstly, it enables the UDDI registration process and semantic service advertisement to be provided by different people, i.e. third-party metadata. Secondly, it allows substantial reuse of the semantic find service for discovery of entities other than services, such as workflows or static data.
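The matching described above can be sketched as follows. This is an illustrative toy, not the actual find service: a small subclass hierarchy stands in for the DAML+OIL ontology server, and the service names and types are assumptions drawn loosely from the example.

```python
# Toy stand-in for the ontology server: child -> parent concept links.
SUPERCLASS = {
    "ExpressedSequenceTag": "NucleotideSequence",
    "NucleotideSequence": "Sequence",
}

def ancestors(concept):
    """Yield the concept and all of its superclasses."""
    while concept is not None:
        yield concept
        concept = SUPERCLASS.get(concept)

# Hypothetical service advertisements: service name -> accepted input type.
SERVICES = {
    "BLASTn": "NucleotideSequence",   # accepts any nucleotide sequence
    "SignalP": "ProteinSequence",
}

def find(query_input):
    """Services accepting query_input or a more general semantic type."""
    generalisations = set(ancestors(query_input))
    return [name for name, accepted in SERVICES.items()
            if accepted in generalisations]

print(find("ExpressedSequenceTag"))   # BLASTn matches via subsumption
```

A query for a service accepting Expressed Sequence Tags matches BLASTn even though BLASTn advertises the more general NucleotideSequence type, which is exactly the subsumption behaviour the text describes.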
3.4 Semantic Service Registry Implementation
The FUSION semantic service registry combines SAWSDL-based service descriptions with an extended UDDI registry. With service capability profiling based on OWL-DL, the semantic matchmaking of service annotations is solved through description logic (DL) reasoning. For semantic discovery, service providers must extend their WSDL interfaces to SAWSDL to construct Advertisement Functional Profiles (AFPs), while service requestors prepare their queries as Request Functional Profiles (RFPs). FUSION then captures the semantic annotations defined on SAWSDL input and output data: it reads the modelReference attributes annotated on <xs:element> entities under <wsdl:types>, and it derives service functionality from the semantic categorization defined in <wsdl:portType>. Figure 5 shows the FUSION Semantic Registry architecture, comprising a knowledge base in an OWL ontology, service publication and discovery components, the Pellet DL reasoner, a Java library, and a UDDI server.
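Extracting modelReference annotations of the kind FUSION reads can be sketched with standard XML tooling. The WSDL fragment below is hypothetical, not from the FUSION paper; only the sawsdl:modelReference attribute and its namespace follow the SAWSDL recommendation.

```python
# Sketch: pull SAWSDL modelReference annotations off xs:element entities
# under wsdl:types, the step FUSION performs when building an AFP.
import xml.etree.ElementTree as ET

SAWSDL = "http://www.w3.org/ns/sawsdl"
XS = "http://www.w3.org/2001/XMLSchema"

wsdl = """<wsdl:definitions
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:sawsdl="http://www.w3.org/ns/sawsdl">
  <wsdl:types>
    <xs:schema>
      <xs:element name="OrderRequest"
          sawsdl:modelReference="http://example.org/onto#PurchaseOrder"/>
      <xs:element name="OrderResponse"
          sawsdl:modelReference="http://example.org/onto#Confirmation"/>
    </xs:schema>
  </wsdl:types>
</wsdl:definitions>"""

def model_references(wsdl_text):
    """Map each annotated xs:element name to its ontology concept URI."""
    root = ET.fromstring(wsdl_text)
    refs = {}
    for elem in root.iter("{%s}element" % XS):
        uri = elem.get("{%s}modelReference" % SAWSDL)
        if uri is not None:
            refs[elem.get("name")] = uri
    return refs

print(model_references(wsdl))
```

The resulting name-to-URI map is the raw material a registry would hand to a DL reasoner for matchmaking against an RFP.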
semantic service descriptions on the DAML-S profile schema with specific extensions for bioinformatics. However, we have decided not to force service publishers and third parties to describe business details, workflow or binding using the schema provided by the DAML-S upper-level ontology. Instead, industry standards and associated tools can be used to author and discover such information. In myGrid these include the UDDI model for specifying business details, Web Services Flow Language (WSFL) for workflow, and WSDL for binding information. This lowers the entry cost for publishing or annotating a service. The DAML-S based approach is only used for semantic discovery, where domain ontologies (such as bioinformatics ontologies) and associated reasoning are essential.
provides a framework for semantic descriptions of Web Services and acts as a meta-model for such services based on the Meta Object Facility (MOF). Semantic service descriptions, according to the WSMO meta-model, can be defined using one of several formal languages defined by WSML (Web Service Modelling Language), and consist of four core elements deemed necessary to support Semantic Web services: Ontologies, Goals, Web Services and Mediators. Ontologies are described in WSMO at a meta-level: a meta-ontology supports the description of all the aspects of the ontologies that provide the terminology for the other WSMO elements. Goals are defined in WSMO as the objectives that a client may have when consulting a Web service. Web Services provide a semantic description of services on the web, including their functional and non-functional properties, as well as other aspects relevant to their interoperation. Mediators in WSMO are special elements used to link heterogeneous components involved in the modelling of a Web service. They define the necessary mappings, transformations and reductions between linked elements.
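The relationships between the four core elements can be made concrete with a toy rendering, which is not the WSML formalism and uses invented field names purely for illustration:

```python
# Illustrative-only model of WSMO's four top-level elements, showing how a
# Mediator links a Goal (client objective) to a Web Service description.
from dataclasses import dataclass, field

@dataclass
class Ontology:
    concepts: list = field(default_factory=list)   # shared terminology

@dataclass
class Goal:
    desired_capability: str                        # what the client wants

@dataclass
class WebService:
    capability: str                                # functional description
    nonfunctional: dict = field(default_factory=dict)

@dataclass
class Mediator:
    source: object                                 # heterogeneous element
    target: object                                 # linked element
    mapping: str = ""                              # transformation rule

onto = Ontology(concepts=["Flight", "Ticket"])
goal = Goal(desired_capability="book-flight")
svc = WebService(capability="book-flight", nonfunctional={"price": "low"})
link = Mediator(source=goal, target=svc, mapping="identity")
print(link.source.desired_capability == link.target.capability)
```

The point of the sketch is structural: Goals and Web Services are described independently against shared ontology terminology, and Mediators carry whatever mapping is needed to connect them.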
Fredj et al.  propose a hierarchical-based approach of semantic gateways to improve the discovery of IoT semantic Web services in a dynamic context. In such an approach, the IoT environment is modeled as a tree hierarchy of smart spaces. Each smart space is controlled by a semantic gateway that maintains information about the IoT services in its scope (located within the space) and processes discovery requests. To minimize the discovery cost, the approach proposes creating clusters of similar services that can be optimized over time in terms of number of clusters and number of services per cluster. The discovery cost is measured in terms of the number of service request matching operations performed in a gateway to discover services matching an incoming request. Similarly to QoDisco, this approach is based on distributing data structures to support the discovery process. However, the goal is to minimize the discovery cost instead of dealing with scalability issues as is the case in QoDisco. Moreover, Fredj et al.  use only service location as context information and QoC is not considered in the scope of their proposal.
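The cost-saving idea behind clustering can be sketched in a few lines. This is a toy under assumed data, not the authors' algorithm: services sharing a capability form a cluster, and a gateway matches a request once per cluster rather than once per service.

```python
# Toy clustering of IoT services by capability: discovery performs one
# matching operation per cluster, not per service. Names are illustrative.
from collections import defaultdict

services = [
    ("tempSensor1", "temperature"),
    ("tempSensor2", "temperature"),
    ("lightSensor1", "luminosity"),
]

clusters = defaultdict(list)
for name, capability in services:
    clusters[capability].append(name)   # capability acts as the cluster key

def discover(requested_capability):
    """Match against one representative per cluster; return all members."""
    matches = []
    for capability, members in clusters.items():
        if capability == requested_capability:   # one matching operation
            matches.extend(members)
    return matches

print(discover("temperature"))
```

With three services in two clusters, a request costs two matching operations instead of three; the gap widens as clusters grow, which is the discovery-cost metric the text describes.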
In order to illustrate the above, we return to the printer agent example described earlier. First, a semantic mapping is undertaken between the CIM model and the SNMP model, one such mapping being the equivalence between the CIM Printer and the MIB Printer classes. This mapping is then injected into the Knowledge Discovery Service, resulting in each node with a MIB-based printer attached having a mapping and associated bridge registered. A context connector is introduced so that the application does not need to be altered to take advantage of the KDN. Thus, when the application seeks to discover the output capacity of all the printers being managed, the connector poses the query locally via CIM and also passes the query to the KDSEN. The KDN then distributes this query to all nodes which either directly support CIM-based printers or have been mapped (e.g. via the MIB mapping in our case). Any node which has a MIB-based printer attached then receives the query via the external query interface, applies the bridge to transform the query from CIM-based to SNMP-based, and transforms the responses if necessary into CIM format. The Query Resolver of the KDSEN that originally received the query then collects all the responses and returns them.
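The bridge step can be sketched as a simple translate-query-translate adapter. The property and OID names below are illustrative stand-ins, not the actual CIM schema or Printer MIB bindings used by the system.

```python
# Hedged sketch of a CIM-to-SNMP bridge: translate the CIM-phrased query
# into its MIB equivalent, query the device, and map the answer back.
CIM_TO_MIB = {"CIM_Printer.OutputCapacity": "prtOutputMaxCapacity"}
MIB_TO_CIM = {v: k for k, v in CIM_TO_MIB.items()}

def bridge_query(cim_property, snmp_get):
    """Resolve a CIM property against an SNMP device via the mapping."""
    oid = CIM_TO_MIB[cim_property]           # apply the registered mapping
    value = snmp_get(oid)                    # query the MIB-based device
    return {MIB_TO_CIM[oid]: value}          # transform response to CIM

# A fake SNMP layer stands in for a real device here.
result = bridge_query("CIM_Printer.OutputCapacity",
                      lambda oid: 500 if oid == "prtOutputMaxCapacity" else None)
print(result)
```

The application only ever sees CIM-shaped answers, which is the point of the context connector: the translation happens entirely at the bridged node.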
Web Service computing is enabled by an architecture that provides interoperability between disparate and diverse applications. One goal of Web Services is to facilitate inter-organisational distributed computing using traditional protocols such as the Hyper Text Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). However, these Web service technologies do not provide standards for dynamically discovering, selecting, composing and invoking Web Services based on their capabilities. In effect, a human must interpret the functionality or applicability of a Web service and either write software capable of using the service or configure a generic client to invoke it.
Schema mapping occurs in scenarios such as data warehousing, when entries from multiple sources that use different database schemas are merged into a single database. It is a practice that has obvious similarities to merging and mapping ontologies in the Semantic Web. However, there are key differences that make it more straightforward. Within a data warehouse, the application of the different schemas is generally known beforehand and, in most cases, the schemas to be merged represent the same information. It is rare that schemas representing very diverse information need to be combined, as the aim is to combine large quantities of homogeneous information in order to extract useful patterns rather than to correlate diverse information to infer new knowledge. Schema matching systems therefore only have to map classes within discrete sets. There is also a much higher probability of two similar classes being a match, as the subtle semantic differences present in the Semantic Web are absent. This allows the schemas to be matched using relatively straightforward algorithms, known as match operators, which include techniques such as structural graph matching, element-level text comparisons and entry pattern identification. The choice between these algorithms depends upon the nature of the schemas and the amount of information available, such as whether instance data is present. It is also worth noting that schema matching is usually an offline process that can be performed by systems with significant processing power; there is no need to develop lightweight approaches that can be executed on the fly.
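A minimal element-level text-comparison match operator, of the straightforward kind the passage refers to, might look as follows. The schema element names and the 0.7 threshold are illustrative choices, not taken from any particular system.

```python
# Element-level match operator: pair schema elements whose names are
# sufficiently similar under a string-similarity ratio.
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Case-insensitive similarity ratio between two element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_schemas(schema_a, schema_b, threshold=0.7):
    """Greedily pair each element of schema_a with its best candidate
    in schema_b, keeping only pairs that clear the threshold."""
    pairs = []
    for a in schema_a:
        best = max(schema_b, key=lambda b: name_similarity(a, b))
        if name_similarity(a, best) >= threshold:
            pairs.append((a, best))
    return pairs

print(match_schemas(["CustomerName", "ZipCode"],
                    ["cust_name", "postal_code", "zip"]))
```

Real match operators layer structural graph matching and instance-data analysis on top of such name comparisons, but this captures why homogeneous warehouse schemas can get by with simple algorithms.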
 proposed a graph-based approach with a refined service-relationship graph generation algorithm based on a Service-MessagePart Matrix. It uses semantically extended WSDL to reduce the search space but does not consider the equality semantics of input and output parameters to identify the services.  presented an approach that extracts the semantic similarities between I/O parameters of the services and represents them as a directed graph; it also recognizes and deals with cyclic dependencies among I/O parameters.  suggested an approach for service composition which considers the dependencies of web services using a directed graph.  proposed composition-oriented service discovery with a Service Aggregation Matchmaking algorithm which identifies the set of composable services based on I/O parameters only.  proposed a framework for web service composition by checking the composability of services. The precondition and effect parameters were left as future work in  and , and they did not identify substitutable services, which is essential if services are not available on the fly.  suggested an approach that identifies the substitutable and composable set of services based on input and output parameters only; however, this approach does not identify services that are related through output and precondition parameters.  proposed a method for service composition that uses a hash table to maintain the relationships between input and output parameters of services. This paper identifies the substitutable and composable services based on the degree of match of IOPE parameters, along with the data type match between the parameters, using an Indirect Backward Chaining approach.
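The notion of a degree of match between parameters, central to the IOPE-based comparison above, is commonly graded as exact / plug-in / subsume / fail. The sketch below is a generic illustration of that grading, not the Indirect Backward Chaining method of this paper; the concept hierarchy is a toy stand-in for an ontology.

```python
# Degree-of-match scoring between a provided (output) and required (input)
# parameter concept, in the usual exact > plug-in > subsume > fail order.
SUBCLASS_OF = {"CityAddress": "Address", "Address": "Location"}

def ancestors(c):
    """The concept and all of its superclasses, nearest first."""
    chain = []
    while c:
        chain.append(c)
        c = SUBCLASS_OF.get(c)
    return chain

def degree_of_match(provided, required):
    if provided == required:
        return "exact"
    if required in ancestors(provided):
        return "plug-in"     # provided is more specific than required
    if provided in ancestors(required):
        return "subsume"     # provided is more general than required
    return "fail"

print(degree_of_match("CityAddress", "Address"))   # plug-in
```

Composition and substitution decisions then rank candidate services by the best degree achieved across their parameter pairings.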
The Service Oriented Architecture (SOA) is a distributed computing paradigm that allows interaction between software components regardless of their platform, implementation, and location. The building blocks of SOA are services: pieces of functionality exposed as software components to be reused by other parties. Service providers offer these components by publishing them in some service registry or repository. Service consumers, which may be either human users or software agents, request a capability without any prior knowledge of existing services and their locations. Thus, for a consumer to use services, appropriate ones must first be discovered.
The popularity of service-oriented computing and web services attracts organizations to use the web to sell their own services. Web services are advertised in a central repository, from which they can later be invoked and used by consumers. In the central repository, web services are described by a description language called WSDL. A web service based on the Web Service Description Language (WSDL) is termed a syntactic web service. WSDL-based description allows only keyword-based processing. This limitation prevents fully automatic discovery, composition, invocation, and monitoring of web services. The reason for this shortcoming is the lack of semantic understanding. To overcome this problem, web services require a method to incorporate semantics. Just as the Semantic Web is an extension of the current World Wide Web, a semantic web service is an extension of web services. It overcomes web service limitations by using knowledge representation technology from the Semantic Web. Specifically, it uses an ontology to describe its service instead of WSDL. Such an ontology can be understood by machines. This allows fully automatic discovery, composition, invocation, and monitoring of web services.
2. Once translated, the request specification is sent to the Generator. The Generator will try to provide the needed functionalities by composing the available service technologies, and hence composing their functionalities. It tries to generate one or several composition plans with the same or different technology services available in the environment. It is quite common to have several ways to meet the same requirement, as the number of available functionalities in pervasive environments is expanding. Composing services is technically performed by chaining interfaces using a syntactic or semantic matching method. The interface chaining is usually represented as a graph or described with a specific language. Graph-based approaches represent the semantic matching between the inputs and outputs of service operations. This is a powerful technique, as many algorithms can be applied to graphs and hence optimize the service composition. A number of languages have been proposed in the literature to describe data structures in general and functionalities offered by devices in particular. While some languages, such as XML, are widely used and generic for multiple uses, others are more specific to certain
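The interface-chaining idea can be sketched as a search over a graph whose edges connect an operation's outputs to another's inputs. The services, their I/O types, and the goal below are all illustrative assumptions:

```python
# Graph-based composition sketch: BFS over chains of services until the
# requested datum is producible from the initially available data.
from collections import deque

# service name -> (required inputs, produced outputs)
SERVICES = {
    "geocode": ({"address"}, {"coordinates"}),
    "weather": ({"coordinates"}, {"forecast"}),
}

def compose(available, goal):
    """Return an ordered plan of services producing `goal`, or None."""
    queue = deque([(frozenset(available), [])])
    seen = {frozenset(available)}
    while queue:
        data, plan = queue.popleft()
        if goal in data:
            return plan
        for name, (inputs, outputs) in SERVICES.items():
            if inputs <= data:                 # all inputs satisfied
                new = frozenset(data | outputs)
                if new not in seen:
                    seen.add(new)
                    queue.append((new, plan + [name]))
    return None

print(compose({"address"}, "forecast"))   # ['geocode', 'weather']
```

Because the search is breadth-first, the first plan found uses the fewest services, one of the optimizations graph representations make easy.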
Web search engines provide a capable interface to this huge body of information. Page counts and snippets are two useful information sources provided by most web search engines. The page count of a query is the number of pages that contain the query words. We propose a semantic similarity measure that uses both page counts and snippets retrieved from a web search engine for two words. Four word co-occurrence measures are computed using page counts. We propose a lexical pattern extraction algorithm to extract the various semantic relations that exist between two words. Moreover, a progressive pattern clustering algorithm is proposed to identify the distinct lexical patterns that describe the same semantic relation. Both the page-count-based co-occurrence measures and the lexical pattern clusters are used to define features for a word pair.
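Two of the standard page-count-based co-occurrence measures can be sketched as follows. The counts and the assumed Web size N are made-up numbers standing in for actual search-engine results, and these two measures are generic examples rather than the paper's exact four.

```python
# Co-occurrence measures from page counts: Jaccard and pointwise mutual
# information (PMI), computed from counts of P, Q, and the query "P AND Q".
import math

N = 10**10   # assumed number of indexed pages (an illustrative convention)

def jaccard(p, q, pq):
    """Jaccard coefficient over page counts."""
    return pq / (p + q - pq)

def pmi(p, q, pq):
    """Pointwise mutual information over page counts."""
    return math.log2((pq / N) / ((p / N) * (q / N)))

# Hypothetical counts for "apple", "fruit", and "apple AND fruit"
print(jaccard(50_000_000, 120_000_000, 9_000_000))
print(pmi(50_000_000, 120_000_000, 9_000_000))
```

A positive PMI indicates the two words co-occur more often than independence would predict, which is the signal these features feed into the similarity measure.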
CWRV/SIR reached a final state after five iterations, and the following (single and in-between) concepts were induced: d, o_d, f_d, f_o, and d_f. The final state fed the C5 system, and the learning outcome was executed over the original set of cases. Inspecting the classification results, the following was observed: most (over 80%) of the ’follow_up’ (f) and ’operate’ (o) cases were misclassified as ’follow_up OR discharge’ (f_d) and ’operate OR discharge’ (o_d), respectively. The result is not disappointing. The combined therapeutic decisions, such as ’o_d’, could be utilized to support medical decision-making in the early phases of the diagnostic process. For instance, ’f_d’ excludes operation, and ’o_d’ may indicate an acute status that may be real or not: if real, the patient should be immediately taken to surgery; if not, the patient should be sent back home. Pediatric Surgery Clinic personnel validated, from a semantic point of view, the learned, in-between set of concepts. Accuracy improved slightly, e.g. 94% over an original 92% estimate. Accuracy assessment was not based on randomized testing: to assess accuracy we used the same cases that were used to derive the learning output. Because in-between concepts essentially exclude one class, it may seem that binary classification would be appropriate. Indeed, concept ’f_d’ excludes operation. So, from a classification point of view, results should be identical, and indeed they were (binary classification was examined during randomized testing with V-fold validation using an 80%/20% split between training and test sets, respectively). However, SIR does not aim at improving accuracy; rather, it focuses on identifying new classes, hidden and implied by the original class definitions.
The problem-solving approach of a multi-agent system is used to decompose a complex task into sub-tasks. This paper presents a model which shows how the power of agents can be used to provide an aggregate web service that gives full information to a company, or selectors, about a college: its zone, students, their backlogs, average percentage, college reputation and other factors, at a single point. This will also reduce the cost of recruitment and will provide more suitable candidates which
In this work, we have focused on some of the problems that hinder the applicability of automated composition in practical scenarios. To circumvent the lack of semantic information that is common for real-life services, we opted for using a publicly available repository of scientific workflows, from which we extracted the information necessary for the planning algorithms. We measured the accuracy of the automated compositions compared to ones written manually in order to estimate how helpful these tools would be to composition developers. The results showed that the overall quality of compositions obtained by both standard planning tools and our algorithms is acceptable (around or above 75%), but there was room for improvement for specific cases. We identified that the cases in which not all input information was used in the solutions had lower average quality, and that, by applying an algorithm designed to address this problem, the quality was increased from 45 to 71%.
Abstract: - The advancement of internet technology and the vast growth of information and web services over the web demand efficient classification approaches for information and service discovery. Web services are playing an active role in providing information and services in today's information retrieval and management of a variety of information sources and combination
Adoption techniques are widely applied in and for cloud service usage to improve the slow acceptance rate of cloud services by SMEs. In this context, a well-understood problem is that finding a suitable service, from the vast number of services offering similar packages, to satisfy user requirements such as security, cost, trust and operating-system compatibility has become a big challenge. However, a major drawback of existing techniques such as frameworks, web search, decision support tools, management models, ontology models and agent technology is that they are restricted to a specific task or they replicate service provider offerings. In this paper, we present Cloudysme, a cloud service adoption solution: a middleware that is capable of aiding the decision-making process for SME adoption of cloud services. Using a case study of SaaS storage service offerings by cloud providers, we introduce a new formalism for judging the superiority of one service attribute over another, and we propose an extended version of pairwise comparison and the Analytical Hierarchy Process (AHP), a traditional multi-criteria decision method (MCDM), for solving complex comparisons. We solve the issue of service recommendation by introducing an acceptable standard for each service attribute and propose a protocol using rational relationships for aiding the cloud service ranking process. We tackle the issue of specific tasking by using a set of concepts and associated semantic rules to rank and retrieve user requirements. We promote a knowledge engineering approach to natural language processing by using terms and conditions in translating human sentences to machine-readable language. Finally, we implement our system using 30 SMEs as a pivotal study. We show that the use of semantic rules within an ontology can tackle the issue of specific tasking.
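The AHP step the abstract extends can be illustrated with the standard geometric-mean approximation of the priority vector. The attribute names and pairwise judgments below are illustrative, not taken from the Cloudysme study:

```python
# AHP sketch: derive priority weights for service attributes from a
# pairwise comparison matrix via the geometric-mean approximation.
import math

attributes = ["security", "cost", "trust"]
# pairwise[i][j] states how strongly attribute i is preferred over j
# (Saaty-style 1-9 judgments; reciprocals below the diagonal).
pairwise = [
    [1.0, 3.0, 5.0],    # security strongly preferred overall
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(matrix):
    """Geometric mean of each row, normalised so the weights sum to 1."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(pairwise)
for name, w in zip(attributes, weights):
    print(f"{name}: {w:.3f}")
```

The resulting weights then score each candidate service, with higher-weighted attributes dominating the ranking; a full AHP implementation would also check the matrix's consistency ratio before trusting the judgments.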