Web/Grid services’ metadata and semantics are becoming increasingly important for service sharing and effective reuse. In this paper we present a generic framework for engineering and managing services’ Semantic Metadata (SMD), with the ultimate purpose of facilitating interoperability, automation, and knowledgeable reuse of services for problem solving. The framework addresses fundamental issues, approaches, and tools for the whole lifecycle of SMD management, in other words, those of acquiring, modeling, representing, publishing, and reusing services’ SMD. It adopts ontologies and Semantic Web technologies as the enabling technologies by which services’ metadata are semantically enriched and made interoperable, understandable, and accessible on the Web/Grid for both humans and machines. In particular, mechanisms are proposed to make use of service SMD for service discovery and composition. The paper also describes a service SMD management system in the context of the UK e-Science project GEODISE. A suite of tools has been developed, which forms the core of the SMD management infrastructure. We demonstrate the added value of SMD through the integration of SMD management with GEODISE application systems.
In this paper, we describe a framework that is motivated by the above concerns and aims to develop a community-centric platform of tools and services that integrates the major existing annotation tools, academic search tools, and scientific databases into Cyberinfrastructure-based scholarly research. These tools and services, collectively called the Semantic Research Grid (SRG), are backed by databases which store user- and community-specific data and metadata, and have been configured into three applications: (1) a model for scientific research which links both traditional simulations and observational analysis to the data mining of existing scientific documents; (2) a model for a journal web site supporting both readers and the editorial function; (3) a model for a natural collection of related documents, such as those of a research group or those of a conference.
Extensive metadata requirements of both the worldwide Grid and smaller sessions, or “gaggles of Grid services”, that support local dynamic action may be investigated in a diverse set of application domains such as sensor and collaboration grids. For example, workflow-style Geographical Information Systems (GIS) Grids such as the Pattern Informatics (PI) application  require information systems for storing both semi-static, stateless metadata and transitory metadata needed to describe distributed session state information. The PI application is an earthquake simulation and modeling code integrated with streaming data services as well as streaming map imagery services for earthquake forecasting. As another example, collaborative streaming systems such as the Global Multimedia Collaboration System (GlobalMMCS)  involve both large, mostly static information systems and much smaller, dynamic information systems. GlobalMMCS is a service-oriented collaboration system which integrates various services including videoconferencing, instant messaging and streaming, and is interoperable with multiple videoconferencing technologies. Zhuge defines the Knowledge Grid in [30-31] as “an intelligent and sustainable interconnection environment that enables people and machines to effectively capture, publish, share and manage knowledge resources and that provides appropriate on-demand services to support scientific research, technological innovation, cooperative teamwork, problem solving, and decision making”. To this end, Gaggles may also be thought of as dynamic sub-components of the Knowledge Grid. Each Gaggle might be created in a dynamic fashion to support science and engineering applications of the Knowledge Grid.
The emerging technologies of Grid computing, Web services, and service-oriented workflows will enable scientific projects to be conducted on a larger scale than ever before. Scientific workflows are often highly dynamic in both structure and persistence, complex, may involve a great deal of interaction and very large data flows, and may change on very short notice. Given modern information technology tools, they can be constructed by combining dispersed network-accessible services into virtual organizations. Within a scientific workflow environment, metadata, or Grid service data, is necessary for service consumer applications to discover services and for services to publish their properties. Software reliability engineering within service-oriented workflow systems is still an open research challenge.
A framework, IRS-III (Internet Reasoning Service), is used for the creation and running of Semantic Web Services; it takes a semantic broker-based approach to mediation between service requesters and service providers . A Web service composition system can help automate the business process, from specifying functionalities, to developing executable workflows that capture non-functional requirements, to deploying them on a runtime infrastructure . Aviv Segev et al.  proposed a context-based semantic approach to the problem of matching and ranking Web services for possible service composition, providing the designer with a numeric estimate of the possible composition. An OWL-S service profile ontology-based framework is used for the retrieval of Web services based on the subsumption relation and structural case-based reasoning, which performs domain-dependent discovery . Tamer Ahmed Farrag et al.  proposed a mapping algorithm that helps facilitate the integration of current conventional Web services into the new environment of the Semantic Web; this is achieved by extracting information from WSDL files and using it to create new semantic description files in OWL-S. Hai Dong et al.  proposed a conceptual framework for a semantic focused crawler, which combines the ontology-based metadata classification of ontology-based focused crawlers with the metadata abstraction of metadata abstraction crawlers, in order to achieve automatic service discovery, annotation, and classification in the Digital Ecosystems environment. Antonio Brogi  emphasized the importance of behavioral information in service contracts, without which service interactions cannot be guaranteed, and highlighted the limitations of currently available service registries, which do not take behavioral information into account.
In research on Web services, many initiatives have been conducted with the intention of providing platforms and languages for Web Service Composition (WSC), such as the Business Process Execution Language for Web Services (BPEL4WS) . Some languages now support semantic representation of the Web services available on the Internet, such as the Web Ontology Language for Web Services (OWL-S)  and the Web Service Modeling Ontology (WSMO) .
This study illustrates the notion of the Semantic Web through the implementation of Web services. Web services make the Semantic Web more significant and powerful. It is a fast-growing technology, especially in the e-commerce area. Web services have a lot to offer when it comes to creating web-based applications for selling things over the Internet. They are a good way for applications to communicate with each other over the Internet, allowing applications deployed in different areas to work together seamlessly in a larger system. This makes Web services a good option for the Mimesis task: a game engine on a user’s computer needs to request a plan from a centrally located Advisable Planner to decide what actions the engine will take. These two systems are written in different languages and must communicate over the Internet; the Web service provides the connection between the game engine and the Advisable Planner. The Web service acts as distributed middleware to facilitate the interoperability of the whole system with the support of distributed technologies. This study proposes a base platform to coordinate the various services coming from heterogeneous environments. Resource management tools are used to support the deployment and administration of Web services, and grids are used for distribution across the entire network. An architecture for heterogeneous systems based on Web services is put forward, in which a virtual data warehouse module realizes data mapping and interoperability. The implementation of the architecture is limited to the product data of the heterogeneous systems.
The study shows that this is a feasible approach to supporting the sharing and interoperability of multi-source data distributed across heterogeneous platforms. By applying Web services, the Semantic Web becomes more powerful and meaningful for users, who can obtain all the applicable data with one click from a single point and share the resources available on the Web and the Internet. The Web also allows the same mix-and-match approach as real-world services.
To provide advanced knowledge services, we need efficient ways to access and extract knowledge from Web documents. Although Web page annotations could facilitate semantic knowledge gathering, annotations are hard to find and will probably never be rich or detailed enough to cover all the knowledge these documents contain. Manual annotation is infeasible and unscalable, and automatic annotation tools are still largely unreliable. Specialized knowledge services therefore require tools that can search and extract specific knowledge directly from unstructured or semi-structured text on the Web, guided by an ontology that details what type of knowledge to harvest . As a complement to knowledge extraction tools, we use a technique that allows users and publishers to specify their own set of metadata to best describe the content, referred to in this document as attributes. An attribute is a name–value pair. The novel tool we use here for publishing Semantic Web (SWeb) documents, converting conventional Web documents into SWeb documents, and classifying and managing them accordingly is a Semantic Web browser (SWeb), developed in the “KnowDive” project at the University of Trento . Our paper covers the publishing methodology of Semantic Web documents using our existing tools and framework.
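The attribute mechanism described above can be illustrated with a minimal sketch; the `Document` class, attribute names, and URI below are invented for illustration and are not the actual KnowDive/SWeb data model:

```python
# Minimal sketch of user-defined metadata attributes: each attribute is a
# (name, value) pair attached to a document. The class and names here are
# illustrative, not the actual KnowDive/SWeb implementation.

class Document:
    def __init__(self, uri):
        self.uri = uri
        self.attributes = {}          # attribute name -> value

    def set_attribute(self, name, value):
        self.attributes[name] = value

    def matches(self, **query):
        """True if every queried attribute name has the queried value."""
        return all(self.attributes.get(k) == v for k, v in query.items())

doc = Document("http://example.org/papers/42")
doc.set_attribute("topic", "semantic annotation")
doc.set_attribute("year", 2005)

print(doc.matches(topic="semantic annotation"))  # True
print(doc.matches(topic="ontologies"))           # False
```

Because attributes are free-form pairs, publishers can describe content with whatever vocabulary fits their domain, at the cost of the shared semantics that a fixed ontology would provide.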
The last category of tools underlying the SRG system is the Content Management Systems (CMS) used by journals and conferences. Manuscript Central is a journal management system that is a popular choice of many publishers. Likewise, CMT (developed by Microsoft Research) is a popular conference management system. We cannot and should not replace these tools. Instead, we plan to wrap them with Web services and then create new tools which aggregate information from various sources, such as annotation tools and academic search tools, to provide added value to the editors and readers of publication venues. For example, one could download all the papers submitted to a venue and analyze them with CiteSeer-like algorithms to extract front- and back-end metadata, or with tools like Oscar3 to extract domain-specific metadata. This metadata could then be fed to a community-building tool which generates a list of referees that are not in conflict of interest with the authors of submitted papers (using methods similar to those used in ). Another useful service would be to enable journals to build communities of authors, especially in association with “special issues” of papers on a single topic.
The Semantic Web framework can be summarized as providing a metadata layer in content-interoperable languages, mainly RDF, on which intelligent or automated services can be built by machines. The typical architecture for managing the metadata layer, for example KA2  and Sesame , consists of an ontology-based knowledge warehouse and inference engine as the knowledge-based system providing intelligent services at the front end, such as conceptual search and semantic navigation for users to access the content, as summarized in Fig. 2. At the back end is the content provision component, consisting of various tools for creating metadata from unstructured, semi-structured and structured documents. In this paper, the information extraction system we developed is used as a component of the back end.
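As a rough sketch of such a metadata layer, the example below stores RDF-style subject–predicate–object triples in plain Python and answers a simple conceptual query over them; the URIs and predicates are invented placeholders, and a real system would use an RDF store and SPARQL instead:

```python
# RDF-style metadata layer: a set of (subject, predicate, object) triples.
# All identifiers are illustrative placeholders, not real vocabularies.
triples = {
    ("ex:paper1", "rdf:type", "ex:Publication"),
    ("ex:paper1", "ex:topic", "ex:SemanticWeb"),
    ("ex:paper2", "rdf:type", "ex:Publication"),
    ("ex:paper2", "ex:topic", "ex:GridComputing"),
}

def query(pattern):
    """Match a triple pattern; None acts as a wildcard, like a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Conceptual search: find every resource whose topic is ex:SemanticWeb.
hits = [s for (s, _, _) in query((None, "ex:topic", "ex:SemanticWeb"))]
print(hits)  # ['ex:paper1']
```

The front-end services the text describes (conceptual search, semantic navigation) reduce to pattern queries of this kind, with the inference engine adding triples that are entailed by the ontology rather than stated explicitly.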
programming model illustrated in Fig. 1. The lowest level is the traditional code (SQL, Fortran, C++ or Java) implementing a service; the next level is agent technology using metadata to choose which services to use; the third, workflow, level links the chosen services to solve the distributed bioinformatics problem. Each level has a distinct “programming model”, with the metadata of relevance here needing tools to add (annotate) the metadata and some logic (artificial intelligence) to reason about it. These tools were initially developed by the Semantic Web community and are now being applied to the Grid. MyGrid and DiscoveryNet are two UK e-Science projects exploring these ideas for bioinformatics. Note that this discussion has shown how the Grid requires and can use both distributed artificial intelligence (AI) and agent technologies, and that the three-level programming model integrates “conventional programming”, AI, agents, and coarse-grain application integration or workflow: four often disparate and rival threads of computer science research.
Research on the Semantic Web and on Web/Grid resource description, discovery and composition is booming, but there is currently little effort on a systematic and integrated approach to the management of resources’ Semantic Metadata (SMD), nor on key tools that add, store and reuse SMD. In this paper we propose a generic framework for managing resource SMD, in which ontologies are used for metadata modeling and the Web Ontology Language (OWL) for semantic representation. Generated resource SMD are archived in a knowledge repository enhanced with Description Logic (DL) based reasoning capability. A raft of tools, mechanisms and APIs are developed to support the SMD management lifecycle, including metadata generation, semantic annotation, knowledge storage and semantic reuse. Both the framework and its supporting technologies have been applied to a large existing e-Science project, which has produced a working resource management prototype. While SMD can be exploited in many ways with regard to resource discovery, provenance and trust, we illustrate their usage through a knowledge advisor that assists resource assembly and configuration in the context of engineering design search and optimisation.
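The DL-based subsumption reasoning that such a repository provides to the knowledge advisor can be approximated by a transitive walk over a class hierarchy. The hierarchy, class names, and service registry below are invented for illustration and do not reflect the project's actual ontology:

```python
# Toy class hierarchy (subclass -> superclass), standing in for DL subsumption.
# Names are illustrative only.
superclass = {
    "GradientOptimiser": "Optimiser",
    "GeneticAlgorithm":  "Optimiser",
    "Optimiser":         "Resource",
}

def subsumes(general, specific):
    """True if 'general' equals 'specific' or is a (transitive) superclass of it."""
    while specific is not None:
        if specific == general:
            return True
        specific = superclass.get(specific)
    return False

# Advisor-style query: which registered resources can serve as an Optimiser?
registry = {"svc1": "GradientOptimiser", "svc2": "GeneticAlgorithm", "svc3": "Resource"}
matches = [s for s, cls in registry.items() if subsumes("Optimiser", cls)]
print(matches)  # ['svc1', 'svc2']
```

A real DL reasoner also handles multiple inheritance, property restrictions, and inferred (not just asserted) subclass relations, but the matching principle — return every resource whose class is subsumed by the requested class — is the same.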
In this paper, we have introduced various tools addressing three different stages of the Semantic Web-based knowledge management lifecycle (knowledge creation, knowledge capture and knowledge reuse) for assisting engineers using the Geodise toolkit. Accessing and reasoning with the ontology and instances is facilitated via the OntoView mechanism, on top of which function annotation and reuse services are built. The reuse includes ontology-driven queries over instances and the semantic-matching-based knowledge advisor for function configuration and assembly. We are currently extending this system to incorporate the semantic annotation and retrieval of the configured workflows.
In particular, the DCMI RDA Task Group  is developing namespaces for metadata structure and content terminologies from Resource Description and Access (RDA) , the successor to the Anglo-American Cataloguing Rules. The RDA metadata element vocabulary is being declared in RDFS, while several sets of controlled terms for the content of specific elements are being made available in SKOS. This will result in the standard labels for metadata elements and attributes, for example “Title” and “Content type”, each having its own URI. The terms which are allowed as values for specified elements, for example “spoken word” (a value for content type) and “microform” (an instance of media type), will also have URIs. This will help various metadata encoding formats, such as MARC21 and Dublin Core (DC), to make machine-processable declarations of which RDA elements and values they use, which in turn will improve interoperability between metadata stored in different encoding formats.
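The interoperability benefit of giving each RDA term its own URI can be sketched as follows; the namespace URI is a placeholder, not the actual RDA namespace, and the label mappings are invented for illustration:

```python
# Two metadata encoding formats mapping their local value labels onto shared
# term URIs. The namespace is a placeholder, not the real RDA namespace.
RDA = "http://example.org/rda/terms/"

# Each format keeps its own labels but resolves them to the same URIs.
marc21_values = {"spoken word": RDA + "spokenWord", "microform": RDA + "microform"}
dublin_core_values = {"Spoken Word": RDA + "spokenWord"}

# Because both formats resolve to the same URI, a machine can recognise that
# a MARC21 record and a Dublin Core record declare the same content type,
# even though their human-readable labels differ.
same = marc21_values["spoken word"] == dublin_core_values["Spoken Word"]
print(same)  # True
```

This is the machine-processable declaration the paragraph describes: interoperability comes from agreeing on URIs for elements and values, not on surface labels or encoding formats.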
Another semantic annotation system was produced by Dehors et al. (2005), who developed a methodology for semi-automatically extracting annotations from existing pedagogical documents. Their QBLS (Question-Based Learning) system does not require a specific annotation tool; instead, it uses MS Word templates that rely on pre-defined layouts linked to ontologies to produce semantic annotation. The resulting semantic annotation is then used with the Corese semantic search engine to perform semantic queries. They evaluated their system in a two-hour exercise session attended by 49 students. The students rated the usability of the conceptual navigation provided by the system highly (4.2 out of 5). The system also appealed to most teachers who used it for authoring their pedagogical materials, because it relied on well-known software (i.e. MS Word) to produce its output.
In Sri Lanka, most libraries use the Windows version of CDS/ISIS for content organization. In this digital environment we use keyword indexing widely for knowledge management. It is a very fruitful way of managing knowledge: it makes it easy to locate a document without wasting the precious time of its users, and it helps to select the right documents for the right user. In the Regional Centre for Strategic Studies library, where I am the Librarian, we do this widely. It is the most practical way to provide better services to the user.
• Metadata-rich, message-linked Web services as the permeating paradigm
• “User” component model such as Enterprise JavaBeans (EJB) or .NET
• Service management framework, including a possible Factory mechanism
• High-level invocation framework describing how you interact with the system