Abstract: Educational institutions continue to search for opportunities to rationalize their resources because of the negative impact of financial crises, compounded by a lack of government support for meeting their requirements. Cloud computing has changed the way applications are developed and accessed and the way infrastructure is provisioned, covering the full range of cloud services. In this article, we argue that cloud computing is one of the opportunities that can help educational institutions face these problems. Consequently, educational institutions can take advantage of cloud applications and services as free or cost-effective alternatives. The use of semantic indications in cloud computing has helped to create a platform independent of any single institution. In this article, we analyze how cloud computing can extend the limits of educational resources in the context of the Semantic Web through the use of experimental and semantic platforms, and we offer a technical solution using digital cloud services.
The term Web 3.0, also known as the Semantic Web, describes sites wherein computers generate raw data on their own without direct user interaction. Web 3.0 is considered the next logical step in the evolution of the Internet and web technologies. For Web 1.0 and Web 2.0, the Internet is confined within the physical walls of the computer, but as more and more devices such as smartphones, cars, and other household appliances become connected to the web, the Internet will be omnipresent and can be utilized in the most efficient manner. In this case, various devices will be able to exchange data among one another and will even generate new information from raw data (e.g., a music site, Last.fm, will be able to anticipate the type of music a user likes depending on his previous song selections). Hence, the Internet will be able to perform user tasks in a faster and more efficient way, such as search engines being able to search for the actual interests of individual users and not just for the keywords typed into the search box. Web 3.0 embeds intelligence in the entire web domain. It deploys web robots that are smart enough to take decisions in the absence of any user interference. If Web 2.0 can be called a read/write web, Web 3.0 will surely be called a read/write/execute web. The two major components forming the basis of Web 3.0 are the following:
Entity recognition and standardization: recognizing and standardizing entities in queries and documents, e.g., companies, titles, and skills, then constructing various entity-aware capabilities on top of those entities. Semantic search: techniques for retrieving data from structured sources such as ontologies and XML documents, as found on the Semantic Web. Such technologies allow the formal articulation of domain information at a high level of quality and let users specify their intent; semantic search based on word sense disambiguation helps the engine perceive what a user is actually searching for. The Resource Description Framework (RDF) is a standard framework for describing any web resource, such as a web site and its content. An RDF description (such descriptions are often referred to as metadata) can include the authors of the resource, the date of creation or change, the organization of the pages on a site (the sitemap), data that describes content in terms of audience or content rating, keywords for search-engine data collection, subject classes, and so on. Keyword-to-concept mapping: a concept map can help you brainstorm your subject matter and see which concepts or keywords to use as you search for information. It also helps you reflect on what you already understand about your topic, gives you an opportunity to consider your subject matter in new ways, and identifies gaps in your knowledge.
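To make the RDF and keyword-to-concept ideas concrete, the following minimal sketch (using only the Python standard library; the resource URL, vocabulary choice, and concept map are illustrative assumptions, not from the text) represents RDF-style metadata as subject–predicate–object triples and maps surface keywords to concepts:

```python
# Minimal sketch: RDF-style metadata as subject-predicate-object triples,
# plus a toy keyword-to-concept mapping. The resource URL and the concept
# map below are hypothetical illustrations.
DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core, a common RDF vocabulary

triples = [
    ("http://example.org/page", DC + "creator", "A. Author"),
    ("http://example.org/page", DC + "date", "2020-01-15"),
    ("http://example.org/page", DC + "subject", "cloud computing"),
]

def describe(resource, triples):
    """Collect all metadata (predicate -> object) for one resource."""
    return {p: o for s, p, o in triples if s == resource}

# Toy keyword-to-concept map: several surface keywords point to one concept.
concept_map = {
    "iaas": "cloud computing", "paas": "cloud computing",
    "rdf": "semantic web", "ontology": "semantic web",
}

def to_concepts(keywords):
    """Map raw search keywords to the broader concepts they belong to."""
    return sorted({concept_map[k.lower()] for k in keywords if k.lower() in concept_map})

meta = describe("http://example.org/page", triples)
print(meta[DC + "creator"])                      # -> A. Author
print(to_concepts(["RDF", "PaaS", "ontology"]))  # -> ['cloud computing', 'semantic web']
```

A real system would use an RDF library and an ontology rather than a hand-written dictionary, but the shape of the data is the same.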
Integration of advanced graphics (Scalable Vector Graphics, or SVG) and semantic data. 3-D social networking systems and immersive 3-D Internet environments have been given another focus that will take the best of virtual worlds (such as Second Life) and gaming environments and merge them with the Web. In the last few years, the acquisition of knowledge through learning has benefited from the technological evolution of the web. The explosion of the web has permitted the introduction of new educational processes, which are more flexible in accessing resources for learning. Nowadays the Internet is a powerhouse of information and a very good source of knowledge. Advanced search engines and the Semantic Web have come into the picture in order to deal effectively with the huge amount of information on the web. This gives users a very good opportunity to retrieve useful and relevant information in audio and video form. A traditional web engine is not empowered to really understand what is being searched for and how efficiently the search should be done. It simply looks for web pages that contain the keywords found in the search terms, and therefore does not effectively retrieve the content that is being searched for.
Enterprises often don’t have the required expertise to build cloud-based solutions. The average medium-to-large company that has been in business for more than a few years typically has a collection of applications and services spanning multiple eras of application architecture, from mainframe to client-server to commercial off-the-shelf and more. The majority of the skills internally are specialized around these different architectures. Often the system administrators and security experts have spent a lifetime working on physical hardware or on-premises virtualization. Cloud architectures are loosely coupled and stateless, which is not how most legacy applications have been built over the years. Many cloud initiatives require integrating with multiple cloud-based solutions from other vendors, partners, and customers. The methods used to test and deploy cloud-based solutions may be radically different and more agile than what companies are accustomed to in their legacy environments. Companies making a move to the cloud should realize that there is more to it than simply deploying or paying for software from a cloud vendor. There are significant changes from an architectural, business process, and people perspective. Often, the skills required to do it right do not exist within the enterprise.
Virtualization has been used successfully since the late 1950s. A virtual memory based on paging was first implemented on the Atlas computer at the University of Manchester in the United Kingdom in 1959. In a cloud computing environment a VMM runs on the physical hardware and exports hardware-level abstractions to one or more guest operating systems. A guest OS interacts with the virtual hardware in the same way it would interact with the physical hardware, but under the watchful eye of the VMM, which traps all privileged operations and mediates the interactions of the guest OS with the hardware. For example, a VMM can control I/O operations to two virtual disks implemented as two different sets of tracks on a physical disk. New services can be added without the need to modify an operating system. User convenience is a necessary condition for the success of the utility computing paradigm. One of the multiple facets of user convenience is the ability to run remotely using the system software and libraries required by the application. User convenience is a major advantage of a VM architecture over a traditional operating system. For example, a user of Amazon Web Services (AWS) can submit an Amazon Machine Image (AMI) containing the applications, libraries, data, and associated configuration settings. The user can choose the operating system for the application, then start, terminate, and monitor as many instances of the AMI as needed, using the Web Service APIs and the performance monitoring and management tools provided by AWS.
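The virtual-disk example can be sketched as a toy model: the VMM maps two virtual disks onto disjoint track ranges of one physical disk and traps every guest access to enforce the mapping. The class and method names below are illustrative, not any real hypervisor API:

```python
# Toy model of a VMM mediating guest disk I/O: two virtual disks are
# implemented as disjoint track ranges on one physical disk, and the VMM
# traps each access to keep guests isolated. All names are illustrative.
class PhysicalDisk:
    def __init__(self, tracks):
        self.tracks = [None] * tracks

class VMM:
    def __init__(self, disk):
        self.disk = disk
        self.vdisks = {}  # virtual disk id -> (first physical track, length)

    def create_vdisk(self, vid, first, length):
        self.vdisks[vid] = (first, length)

    def write(self, vid, track, data):
        """Trap a guest write and translate it to a physical track."""
        first, length = self.vdisks[vid]
        if not 0 <= track < length:
            raise ValueError("guest access outside its virtual disk")
        self.disk.tracks[first + track] = data  # mediated, isolated access

disk = PhysicalDisk(tracks=100)
vmm = VMM(disk)
vmm.create_vdisk("guestA", first=0, length=50)
vmm.create_vdisk("guestB", first=50, length=50)
vmm.write("guestA", 10, "a-data")
vmm.write("guestB", 10, "b-data")  # same virtual track, different physical track
print(disk.tracks[10], disk.tracks[60])  # -> a-data b-data
```

The point of the sketch is the mediation step: both guests address "track 10", but the VMM translates each access so neither guest can touch the other's tracks.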
Resource allocation in a cloud environment is an important and challenging research topic. Verma et al. formulate the problem of dynamic placement of applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al. study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming. The proposed algorithm minimizes the total cost of resource provision in infrastructure-as-a-service (IaaS) clouds. Wang et al. present a virtual appliance-based automatic resource provisioning framework for large virtualized data centers. Their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al., Chaisiri et al., and Wang et al. study cloud resource allocation from the VM placement perspective. Bacigalupo et al. quantitatively compare the effectiveness of different techniques for response time prediction. They study different cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that need to adapt to frequent changes in the workload. Based on these cloud services, the layered queuing network and historical performance models are quantitatively compared in terms of prediction accuracy. Song et al. present a resource allocation approach according to application priorities in a multi-application virtualized cluster. This approach requires machine learning to obtain the utility functions for applications and defines the application priorities in advance. Lin and Qi develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al. present optimal cloud resource allocation in a priority service scheme to minimize resource cost. Appleby et al. present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al. propose a two-level resource management system with local controllers at the VM level and a global controller at the server level. However, they focus only on resource allocation among VMs within a cloud server [19,20].
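To give a feel for the reservation-versus-on-demand trade-off mentioned above, the following simplified sketch (the prices and demand distribution are invented for illustration, not taken from any of the cited papers) chooses the number of reserved VMs that minimizes expected cost when overflow demand is served on demand:

```python
# Simplified sketch of the advance-reservation vs. on-demand trade-off:
# reserved VMs are cheaper per unit but paid for even when idle; demand
# above the reservation is served at the higher on-demand price. The
# prices and the demand distribution below are invented for illustration.
RESERVED_PRICE = 1.0    # cost per reserved VM (paid regardless of use)
ON_DEMAND_PRICE = 3.0   # cost per on-demand VM (paid only when needed)

# Probability distribution over demand (number of VMs actually needed).
demand_dist = {5: 0.2, 10: 0.5, 20: 0.3}

def expected_cost(reserved):
    """Expected total cost when `reserved` VMs are booked in advance."""
    cost = 0.0
    for demand, prob in demand_dist.items():
        overflow = max(0, demand - reserved)
        cost += prob * (reserved * RESERVED_PRICE + overflow * ON_DEMAND_PRICE)
    return cost

best = min(range(0, 21), key=expected_cost)
print(best, round(expected_cost(best), 2))  # -> 10 19.0
```

Reserving nothing costs 3.0 per expected VM-unit of demand, reserving for the worst case pays for idle capacity; the optimum sits in between, which is exactly the trade-off the stochastic-programming formulations capture at scale.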
may be legally problematic and dangerous. Uncertified code provided by the customer could harm the system health of the Cloud infrastructure or contain vulnerabilities of any kind. Thus a Business Process designed and used in the Cloud may only consist of predefined building blocks certified by the Cloud operator. Designing a Business Process goes through several development steps, well known from software development techniques (cf. Fig. 1). In the beginning a Process is typically designed in a graphical notation. This step is performed iteratively and can be accompanied by internal simulation phases, which help the designer evaluate the newly created or altered Process model. The design phase is usually followed by a test phase, during which a Process model can be executed. If necessary, the Process model and its dependent artifacts first have to be deployed. In most execution environments the deployment of new Process models is simply based on XML files. More complex is the preparation of all services referenced by a Process model: if a service hasn’t been used by a Process model before, it must be installed and configured before use. Often the Process model has to be updated to reflect the current IP address or URL of the newly deployed service. An execution of a Process in the test phase must be clearly marked, preventing misinterpretation of Process data and of incoming or outgoing signals to external systems. Any error found while testing may lead to another design phase. After a successful test phase the Process changes to the productive phase and can be used as desired. Any errors found in the productive phase will also lead to a new design phase. At the end of its lifetime a Process model is undeployed, saving computing resources and preventing the creation of new Process instances.
Fig. 1 Business Process
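The phase transitions described above (design, test, productive, undeployed, with any error leading back to design) form a small state machine, sketched below; the state and event names are illustrative, not taken from any workflow standard:

```python
# Toy state machine for the Business Process lifecycle described above:
# design -> test -> productive -> undeployed, with any error found in the
# test or productive phase leading back to a new design phase.
# State and event names are illustrative.
TRANSITIONS = {
    ("design", "design_done"): "test",
    ("test", "error"): "design",
    ("test", "tests_passed"): "productive",
    ("productive", "error"): "design",
    ("productive", "end_of_life"): "undeployed",
}

def step(state, event):
    """Advance the lifecycle, rejecting transitions the model does not allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

# One possible history: a test failure forces a second design iteration.
state = "design"
for event in ["design_done", "error", "design_done", "tests_passed", "end_of_life"]:
    state = step(state, event)
print(state)  # -> undeployed
```

Encoding the lifecycle as an explicit transition table is one way an execution environment can refuse illegal moves, such as deploying a Process to production that never passed a test phase.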
Thus providing Infrastructure as a Service essentially means that the cloud provider assembles the building blocks for providing these services, including the computing hardware, networking hardware, and storage hardware. These resources are exposed to the consumers through a request management system, which in turn is integrated with an automated provisioning layer. The cloud system also needs to meter and bill the customer under various chargeback models. The concept of virtualization enables the provider to leverage and pool resources in a multi-tenant model. Thus, the resource pooling provided by virtualization, combined with modern clustering infrastructure, enables efficient use of IT resources to provide high availability and scalability, increase agility, optimize utilization, and support a multi-tenancy model.
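A pay-per-use chargeback model of the kind mentioned above reduces to simple metering arithmetic; the rates, resource names, and usage records in this sketch are invented for illustration:

```python
# Minimal sketch of IaaS metering and chargeback: each tenant's resource
# usage is metered and billed at a per-unit rate. Rates, resource names,
# and usage records are invented for illustration.
RATES = {"vm_hours": 0.10, "gb_storage": 0.02, "gb_transfer": 0.05}

usage = {
    "tenant-a": {"vm_hours": 100, "gb_storage": 50, "gb_transfer": 20},
    "tenant-b": {"vm_hours": 10, "gb_storage": 500, "gb_transfer": 0},
}

def bill(tenant_usage):
    """Charge each metered resource at its per-unit rate."""
    return round(sum(RATES[res] * qty for res, qty in tenant_usage.items()), 2)

invoices = {tenant: bill(u) for tenant, u in usage.items()}
print(invoices)  # -> {'tenant-a': 12.0, 'tenant-b': 11.0}
```

Real chargeback systems add tiered pricing, reservations, and discounts, but the core loop is the same: meter per tenant, multiply by rate, aggregate into an invoice.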
This paper has introduced Internet-based cloud computing and its characteristics, service models, and deployment models in use today. We have also discussed the benefits and challenges of cloud computing and the significance of flexibility and scalability in a cloud-based environment. We have further focused on issues and advantages of web- and cloud-based applications, pointed out the various difficulties associated with dynamic updates for such applications, and laid out directions for future work.
Replacing an in-house hardware server is the most obvious use. It is usually thought of as useful for large businesses, but the benefit will mostly be found by small companies without on-staff technologists. Let Microsoft do the maintenance! Because business owners are slow to accept radical change, they are finding the HotServer, as a backup, worth the cost. Azure, with FTP updating, can be a low-cost alternative to the many cloud backup products on the market.
Amazon EC2 is an IaaS model and may be considered the central part of Amazon’s cloud platform. It was designed to make web scaling easier for users. Interaction with the user is done through a web interface that permits obtaining and configuring any desired computing capacity with little difficulty. Amazon EC2 does not use regular configurations for the central processing unit (CPU) of the instances available. Instead, it uses an abstraction called elastic compute units (ECUs). According to Amazon, each ECU provides the equivalent CPU capacity of a 1.0- to 1.2-GHz 2007 Opteron or 2007 Xeon processor. Amazon S3 is also an IaaS model and consists of a storage solution for the Internet. It provides storage through web service interfaces, such as REST and SOAP. There is no particular defined format for the stored objects; they are simple files. Inside the provider, the stored objects are organized into buckets, which are an Amazon proprietary method. The names of these buckets are chosen by the user, and they are accessible using a hypertext transfer protocol (HTTP) uniform resource locator (URL) with a regular web browser. This means that Amazon S3 can easily be used to replace static web hosting infrastructure. One example of an Amazon S3 user is the Dropbox service, provided as SaaS for the final user, with the user having a certain amount of storage in the cloud to store any desired file.
1.6.2 Rackspace
Abstract The surging demand for inexpensive and scalable IT infrastructures has led to the widespread adoption of Cloud computing architectures. These architectures have gained momentum due to their inherent capacity to simplify IT infrastructure building and maintenance, by making the related costs easily accountable and paid on a pay-per-use basis. Cloud providers strive to host as many service providers as possible to increase their economic income and, toward that goal, exploit virtualization techniques to enable the provisioning of multiple virtual machines (VMs), possibly belonging to different service providers, on the same host. At the same time, virtualization technologies enable runtime VM migration, which is very useful for dynamically managing Cloud resources. Leveraging these features, data center management infrastructures can allocate running VMs on as few hosts as possible, so as to reduce total power consumption by switching off servers that are not required. This chapter presents and discusses management infrastructures for power-efficient Cloud architectures. Power efficiency relates to the amount of power required to run a particular workload on the Cloud and pushes toward greedy consolidation of VMs. However, because Cloud providers offer Service-Level Agreements (SLAs) that need to be enforced to prevent unacceptable runtime performance, the design and implementation of a management infrastructure for power-efficient Cloud architectures are extremely complex tasks and have to deal with heterogeneous aspects, e.g., SLA representation and enforcement, runtime reconfigurations, and workload prediction. This chapter aims at presenting the current state of the art of power-efficient management infrastructures for the Cloud, by carefully considering the main realization issues, design guidelines, and design choices. In addition, after an in-depth presentation of related works in this area, it presents some novel experimental results to better stress the complexities introduced by power-efficient management infrastructures for the Cloud.
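The greedy consolidation idea (pack running VMs onto as few hosts as possible so the remaining servers can be switched off) can be sketched as first-fit-decreasing bin packing; the host capacity and VM demands below are invented for illustration, and a real consolidator would also honor SLA and migration constraints:

```python
# Greedy VM consolidation as first-fit-decreasing bin packing: place each
# VM (largest demand first) on the first host with enough spare capacity;
# hosts that receive no VM can be switched off to save power.
# Capacities and demands are invented for illustration.
HOST_CAPACITY = 1.0  # normalized capacity per host

def consolidate(vm_demands, capacity=HOST_CAPACITY):
    hosts = []  # each entry is the list of VM demands placed on that host
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # nothing fits: power on one more host
    return hosts

vms = [0.5, 0.7, 0.3, 0.2, 0.4, 0.1]
hosts = consolidate(vms)
print(len(hosts))  # active hosts needed -> 3
```

Six VMs with total demand 2.2 fit on three hosts here; any hosts beyond those three stay powered off, which is precisely the power saving greedy consolidation aims for, and also why it tensions against SLA enforcement when hosts run close to full.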
The Internet of things (IoT) is an upcoming technology that permits interaction between real-world physical elements such as sensors, actuators, personal electronic devices, and so on, over the Internet, to facilitate various applications in the fields of e-health, intelligent transportation, and others. IoT is the convergence of different visions: things-oriented, Internet-oriented, and semantic-oriented. Radio frequency identification (RFID) and sensing components are associated with everything used in daily life, and information is uploaded into the computer, which monitors everything. RFID is the thing that connects the real world to the digital world. The basic idea of IoT is the pervasive utilization of things or objects, such as RFID tags, sensors, actuators, mobile phones, and so on, which, through unique addressing schemes, are able to interact with each other and cooperate with their neighbors to reach common goals. Wireless sensor networks, RFID systems, and RFID sensor networks are used to collect data opportunistically. Many challenges face this upcoming technology, in which technology and social networks must be united for unique addressing, storing, and exchange of collected information. A remarkable point of contact for both sensing environments and the cloud is IoT, where the underlying physical items can be further abstracted according to thing-like semantics. With the emerging IoT technology, a new framework is introduced to converge utility-driven, cloud-based computing. IoT provides several advantages. They are as follows:
Above all, this book emphasizes problem solving through cloud computing. At times you might face a simple problem and need to know only a simple trick. Other times you might be on the wrong track and need some background information to get oriented. Still other times, you might face a bigger problem and need direction and a plan. You will find all of these in this book. We provide a short description of the overall structure of a cloud here, to give the reader an intuitive feel for what a cloud is. Most readers will have some experience with virtualization. Using virtualization tools, you can create a virtual machine from operating system installation software, make your own customizations to the virtual machine, use it to do some work, save a snapshot to a CD, and then shut down the virtual machine. An Infrastructure as a Service (IaaS) cloud takes this to another level and offers additional convenience and capability.
The “Reference Model for Collaborative Networks” (ARCON) is a modeling framework for capturing collaborative networks. Its goal is to provide a generic abstract representation of collaborative networks (a) to better understand their involved entities and the relations among them and (b) to provide a basis for more specific models of manifestations of collaborative networks. While ARCON provides a very complete reference model, it does not specifically focus on opportunistic collaborations. In the field of ad-hoc networks, modeling structures have been presented for workflows as well as for opportunistic service compositions. Such solutions usually propose decentralized strategies, which in our case could not fulfill our requirements, as explained in the previous section. Other approaches focus more specifically on modeling collaboration in the context of collaboration models for the Internet of Things. It has been proposed to use agent models to capture how sensors in a network can collaborate. The model includes various types of software agents that realize sensor collaboration. Other approaches model collaboration between Internet of Things (IoT) entities. For collaborations, devices are abstracted as device-oriented Web services, which are composed in process models. Further, this approach does not include aspects like temporal or local validity, which we address. The “pervasive computing supported collaborative work” model (PCSCW) aims to seamlessly integrate smart devices to enable the collaboration of users. A task model defines collaboration processes that make use of resources defined in a resource model, under consideration of device collaboration rules. These rules define the behavior of resources within a collaboration, for example, to switch the means of data communication when a certain threshold is reached. Despite not targeting opportunistic collaborations specifically, PCSCW’s approach is very similar to our perception of collaboration modeling and has influenced, and will continue to influence, our work.
There’s growing sentiment among many cloud experts that ultimately hybrid adoption will be most advantageous for many organizations. Warrilow says “for some time Gartner has advised that hybrid is the most likely scenario for most organizations.” Staten agrees with the notion for two reasons. First, “some applications and data sets simply aren’t a good fit with the cloud,” he says. This might be due to application architecture, degree of business risk (real or perceived), and cost, he says. Second, rather than making a cloud-or-no-cloud decision, “it’s more practical and effective to leverage the cloud for what makes the most sense and other deployment options where they make the most sense,” he says. In terms of strategy, Staten recommends regularly analyzing deployment decisions. “As cloud services mature, their applicability increases,” he says.
The next layer within ITaaS is Platform as a Service, or PaaS. At the PaaS level, what the service providers offer is packaged IT capability, or some logical resources, such as databases, file systems, and an application operating environment. Current cases in the industry include IBM’s Rational Developer Cloud, Microsoft’s Azure, and Google’s AppEngine. At this level, two core technologies are involved. The first is cloud-based software development, testing, and running. PaaS service is software-developer-oriented. It used to be a huge difficulty for developers to write programs over a network in a distributed computing environment; now, thanks to the improvement of network bandwidth, two technologies can solve this problem. The first is online development tools: developers can complete remote development and deployment directly through browser and remote-console technologies (the development tools run in the console), without local installation of development tools. The second is the integration of local development tools with cloud computing, which means deploying the developed application directly into the cloud computing environment through local development tools. The second core technology is the large-scale distributed application operating environment: scalable application middleware, databases, and file systems built with a large number of servers. This application operating environment enables an application to make full use of the abundant computing and storage resources in the cloud computing center to scale fully, go beyond the resource limitations of a single piece of physical hardware, and meet the access requirements of millions of Internet users.
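As a minimal illustration of the "application operating environment" idea, the sketch below is a plain WSGI application (Python's standard web gateway interface) of the kind a PaaS runtime can host and scale behind its middleware; the handler logic and the request path are hypothetical:

```python
# Minimal WSGI application: a portable handler that a PaaS platform can
# run inside its managed application operating environment, scaling
# instances behind the scenes. The greeting logic is hypothetical.
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """Respond to any request with a plain-text greeting."""
    body = f"Hello from {environ.get('PATH_INFO', '/')}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app directly, without a real server, the way a platform's
# test harness might.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/students"
captured = {}
def start_response(status, headers):
    captured["status"] = status
response = b"".join(application(environ, start_response))
print(captured["status"], response.decode())  # -> 200 OK Hello from /students
```

Because the handler only depends on the standard gateway contract, the platform, not the developer, decides how many server processes run it, which is exactly the separation of concerns PaaS offers.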