A framework for QoS driven user-side cloud service management

Aceto et al. [89] presented a comprehensive survey of cloud QoS monitoring. They argue that cloud monitoring is of paramount importance for effective and efficient cloud management, for both providers and users. Fatema et al. [84] presented a definition of monitoring and classified monitoring techniques into general-purpose and cloud-specific categories. They derived a list of capabilities relevant to efficient cloud operational management, identified the areas with unique functions that can be managed separately, and investigated the role of monitoring in supporting them from both providers' and consumers' perspectives. Furthermore, they defined a taxonomy by grouping the capabilities of the different cloud operational areas for analyzing monitoring tools. They found that general-purpose monitoring tools have a client–server architecture in which the client resides on the monitored object and communicates information to the server. These tools were designed for monitoring fixed-resource environments rather than environments with dynamic scaling of resources and, as a result, lack capabilities such as scalability. They point out that in designing future monitoring tools, especially for clouds, these challenges must be addressed, since issues such as scalability are important for cloud monitoring. Additionally, they highlight the role of monitoring in trust assurance and service selection in cloud computing. They also identified the need for an ontology of cloud metrics to classify the various cloud metrics and help support cloud service monitoring.

Model Driven Platform for Service Security and Framework for Data Security and Privacy using Key Management in Cloud Computing

The cloud service provider should enforce the same or even higher levels of security controls as expected by the cloud customer, or as is best practice in the industry. There are logical risks of information disclosure or loss of data integrity arising from insecure applications or permission-handling functionality. The application or the underlying infrastructure could be open to exploits by hackers. The user permission and role model could also be exploited by external hackers or by internal employees who have too many access rights. In general, the same security measures need to be applied as in any IT system. The complexity arises from the cloud technology model, which is based on virtualization and distributed responsibilities across the infrastructure layers. The cloud service provider must take care of the physical and logical security that is under its sole responsibility. For example, the cloud service provider may offer encryption, but it is up to the customer to activate and use it. Clear responsibilities for network, operating system, and application security measures are key to achieving such a secure cloud solution.

QoS Based Framework for Effective Web Services in Cloud Computing

To remove the vulnerabilities of single-processor, single-database and single-node architectures, cloud computing research is gaining momentum. Not only industry but also academia is actively participating in finding appropriate solutions. In [3], IBM raised the issue that a single cloud provider has limited resources to offer, and that the lack of interoperability among cloud providers prevents deployment across different clouds. Aneka [4] is a .NET-based cloud computing software platform, and in the Reservoir architecture the computational resources within a site are partitioned by a virtualization layer into virtual execution environments (VEEs) used for clouds. In [5], Huang and a team from IBM described a service-oriented cloud computing platform that enables web delivery of application-based services with a set of common business and operational services. Abicloud, Eucalyptus, Nimbus and OpenNebula are among the available clouds. Similarly, different types of clouds, with varying degrees of interoperability, have been developed.

Mobility Aware QoS Framework for Mobile Cloud Computing

Some existing work focuses on QoS-aware web services. Lodi et al. [4] proposed a middleware architecture for enabling service level agreement (SLA)-driven clustering of QoS-aware application servers. Other work focuses on QoS architecture design for cloud computing: Wang et al. [5] proposed an adaptive QoS management framework for VoD (Video on Demand) cloud service centres, and Ye et al. [6] proposed a framework for QoS and power management in a service cloud environment with mobile devices. Further work focuses on mechanisms for QoS management in cloud computing: Li [7] proposed adaptive management of virtualized resources in cloud computing using feedback control, and Xiao [8] proposed reputation-based QoS provisioning in cloud computing via a Dirichlet multinomial model.

A FRAMEWORK FOR QOS-AWARE EXECUTION OF WORKFLOWS OVER THE CLOUD

variety of workloads. A layered Cloud architecture taking into account different stakeholders is presented in (Litoiu et al., 2010). The architecture supports self-management based on adaptive feedback control loops, present at each layer, and on a coordination activity between the different loops. Mistral (Jung et al., 2010) is a resource management framework with a multi-level resource allocation algorithm that considers reallocation actions based mainly on adding, removing and/or migrating virtual machines, and on shutting down or restarting hosts. This approach is based on a Layered Queuing Network (LQN) performance model. It tries to maximize overall utility, taking into account several aspects such as power consumption, performance and transient costs in its reconfiguration process. In (Huber et al., 2011) the authors present an approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. The online performance prediction allows estimating the effects of changes in user workloads and of possible reconfiguration actions. Yazir et al. (Yazir et al., 2010) introduce a distributed approach for dynamic autonomous resource management in computing Clouds, performing resource configuration through Multiple Criteria Decision Analysis.

Application of Cloud Rank Framework to Achieve the Better Quality of Service (QoS) Ranking Prediction of Web Services

the many user-item subgroups, each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than via the entire user-item matrix. To find meaningful subgroups, they formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. They then propose a unified framework to extend traditional CF algorithms by utilizing the subgroup information to improve their top-N recommendation performance. Their approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real-world data sets demonstrated the effectiveness of the proposed approach. Their experiments were performed on three real data sets: MovieLens-100K, MovieLens-1M and Last.fm. They ran many state-of-the-art recommendation methods and checked whether their top-N recommendation performance improved after using the framework. The experimental results showed that using subgroups is a promising way to further improve the top-N recommendation performance of many popular CF methods. Gediminas Adomavicius et al. [3] introduced and explored a number of item ranking techniques that can generate recommendations with substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. A comprehensive empirical evaluation consistently showed the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms. They conducted experiments on three datasets: MovieLens (data file available at grouplens.org), Netflix (data file available at netflixprize.com), and Yahoo! Movies (individual ratings collected from movie pages
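As a rough illustration of the subgroup idea summarized above, the following sketch makes top-N recommendations for a user from the ratings of the other members of one user-item subgroup rather than from the full matrix. The tiny rating matrix, the fixed subgroup membership and the simple mean-rating score are all illustrative assumptions; MCoC itself is not implemented here.

```python
# Minimal sketch: top-N recommendation restricted to one user-item subgroup.
import numpy as np

# rows = users, cols = items; 0 means "not yet rated"
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 0, 0, 1],
    [0, 1, 5, 4, 0],
    [1, 0, 4, 5, 0],
], dtype=float)

# One hypothetical discovered subgroup: like-minded users 0 and 1 on items 0, 1, 4.
subgroup_users, subgroup_items = [0, 1], [0, 1, 4]

def top_n_for(user, users, items, n=2):
    """Score the user's unrated subgroup items by the mean rating that the
    other subgroup members gave them, and return the n best item indices."""
    peers = [u for u in users if u != user]
    scores = {}
    for item in items:
        if R[user, item] == 0:                    # only recommend unrated items
            peer_ratings = [R[u, item] for u in peers if R[u, item] > 0]
            if peer_ratings:
                scores[item] = np.mean(peer_ratings)
    return sorted(scores, key=scores.get, reverse=True)[:n]

print("top-N for user 0 within its subgroup:",
      top_n_for(0, subgroup_users, subgroup_items))
```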

An enhanced QoS Architecture based Framework for Ranking of Cloud Services

Federated cloud concepts were addressed in [5], [6], where scalability is increased by inter-cloud negotiation, i.e., any number of customer requests can be satisfied at a time. The mapping function in [6] is implemented by a continuous double auction, and a sensor unit is used to predict the geographic distribution of users. Advance reservation strategies were considered in [14], [15], which allow customers to reserve resources in advance, but no co-reservation is allowed in [14]. The Overlapped Advance Reservation Strategy (OARS) mitigates the negative effects brought about by advance reservation, achieving a lower rejection rate at the price of slightly more reservation violations [15]. An alternate-offers protocol and a broker's negotiation strategy were used for advance reservation in [7], but they support only negotiation over timeslots and the number of resources. An algorithm that minimizes the total cost of resource provisioning and avoids over-provisioning and under-provisioning has been proposed using reserved instances, which guarantee resources to reserved users; the underlying assumption, however, is that there are always providers willing to sell call options [10]. An SLA-oriented dynamic provisioning algorithm supports the integration of market-based provisioning policies and virtualization technologies for flexible allocation of resources to applications. However, it does not include customer-driven service management, computational risk management or autonomic management of Clouds, which would improve system efficiency, minimize SLA violations and increase the profitability of service providers [11]. Comparison of different cloud services can be obtained through the Service Measurement Index (SMI) and the Analytic Hierarchy Process (AHP), but the ranking algorithm proposed there cannot cope with variation in QoS attributes such as performance by adopting fuzzy sets [8]. The Singular Value Decomposition (SVD) technique determines the best service provider for a user application with a specific set of requirements; it provides an automatic best-fit procedure that does not require a formal knowledge model. However, there is no standard way to allow a universal description format and semantics [12].
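To make the SMI/AHP ranking idea mentioned above concrete, here is a minimal sketch of AHP-style weighting and weighted-sum ranking of cloud services. The attribute set, the pairwise importance judgements and the per-service scores are invented for illustration and are not taken from [8].

```python
# Minimal AHP-style ranking sketch over SMI-like QoS attributes.
import numpy as np

# Pairwise comparison of attribute importance (Saaty scale):
# rows/cols = [performance, cost, availability]  (illustrative judgements)
pairwise = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Attribute weights = normalized principal eigenvector of the pairwise matrix.
eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Normalized per-attribute scores of three hypothetical services
# (higher is better for every attribute after normalization).
scores = np.array([
    [0.9, 0.4, 0.95],   # service A
    [0.7, 0.8, 0.90],   # service B
    [0.5, 0.9, 0.99],   # service C
])

ranking = scores @ weights            # weighted aggregate SMI-style score
order = np.argsort(-ranking)          # best service first
print("weights:", np.round(weights, 3))
print("ranking (best first):", order, np.round(ranking[order], 3))
```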

User Service Assistant: An End-to-End Reactive QoS Architecture

The rapid deployment of interactive and multimedia applications, and the increased mobility of computers, lead to the need for new technical solutions in computing systems. Today, the Internet comprises a heterogeneous set of networks with very different characteristics, especially considering the increased usage of wireless networks. Even the end systems are architecturally very different, and these factors combined lead to unreliable and unpredictable performance of networked applications. One of the problems today is how to manage resources and thus provide users with control over the behavior of applications, known as Quality of Service (QoS) management. Much work has previously been done in this area, but all proposed schemes have one thing in common: a high level of complexity, which has so far prevented any of them from being fully implemented. In this paper we propose a new approach to QoS management, in a framework we call USA, the User Service Assistant. We believe that a QoS management framework should work independently of applications and available resources, and that it should react to user input rather than try to predict the user's perception of quality. We have focused on the feasibility of implementing USA, and we have thus been able to realize it with an experimental application. This paper first lays out the framework and describes the differences between USA and the schemes proposed today, and then describes the implementation of the framework. After that we describe the extensions to the implementation that we propose to evaluate next, in order to fully assess the model. Finally, we discuss the framework and its implications, and conclude our work.

Security ensured multicopy data management framework for cloud service providers

different servers — to prevent simultaneous failure of all copies — in exchange for pre-specified fees metered in GB/month. The number of copies depends on the nature of the data; more copies are needed for critical data that cannot easily be reproduced and to achieve a higher level of scalability. Such critical data should be replicated on multiple servers across multiple data centers. On the other hand, non-critical, reproducible data are stored at reduced levels of redundancy. The CSP pricing model is related to the number of data copies. For data confidentiality, the owner encrypts the data before outsourcing them to the CSP. After outsourcing all n copies of the file, the owner may interact with the CSP to perform block-level operations on all copies. These operations include modifying, inserting, appending and deleting specific blocks of the outsourced data copies. An authorized user of the outsourced data sends a data access request to the CSP and receives a file copy in encrypted form that can be decrypted using a secret key shared with the owner. Because of the load balancing mechanism used by the CSP to organize the work of the servers, the data access request is directed to the server with the lowest congestion, so the user is not aware of which copy has been received. We assume that the interaction between the owner and the authorized users to authenticate their identities and share the secret key has already been completed; it is not considered in this work.
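As a rough sketch of the block-level operations described above, the following code applies a modify/insert/append/delete uniformly to all n outsourced copies so that the replicas stay consistent. The encrypt() stand-in and the in-memory replica store are illustrative assumptions; a real system would use proper authenticated encryption and the CSP's remote interface rather than local lists.

```python
# Minimal sketch: uniform block-level operations on n encrypted copies.
import copy
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Placeholder only: real deployments would use AES-GCM or similar.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

class MultiCopyStore:
    """n replicas of the same sequence of encrypted blocks."""
    def __init__(self, blocks, n):
        self.copies = [copy.deepcopy(blocks) for _ in range(n)]

    def modify(self, index, new_block):
        for c in self.copies:
            c[index] = new_block

    def insert(self, index, new_block):
        for c in self.copies:
            c.insert(index, new_block)

    def append(self, new_block):
        for c in self.copies:
            c.append(new_block)

    def delete(self, index):
        for c in self.copies:
            del c[index]

key = secrets.token_bytes(16)
blocks = [encrypt(b"block-%d" % i, key) for i in range(4)]
store = MultiCopyStore(blocks, n=3)           # outsource 3 copies
store.modify(2, encrypt(b"block-2-v2", key))  # owner updates block 2 everywhere
store.delete(0)
assert all(c == store.copies[0] for c in store.copies)  # copies stay consistent
```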

A Study of Different QoS Management Techniques in Cloud Computing

As shown in Fig. 1, in the service-oriented environment, complex distributed systems are dynamically composed by discovering and integrating distributed Cloud services, which are provided by different organizations. The distributed Cloud services are usually employed by more than one service user (i.e., the service-oriented systems). The performance of the service-oriented systems relies heavily on the performance of the employed Cloud services. Quality-of-Service (QoS) is usually used to describe the non-functional characteristics of Cloud services. QoS management of Cloud services refers to the activities of QoS specification, evaluation, prediction, aggregation, and control of resources to meet end-to-end user and application requirements. With the prevalence of Cloud services on the Internet, investigating Cloud service QoS is becoming more difficult, and in recent years a number of QoS-aware approaches have been comprehensively studied for Cloud services. However, there is still a lack of real-world Cloud service QoS datasets for validating new QoS-driven techniques and models. Without convincing and sufficient real-world Cloud service QoS datasets, the characteristics of real-world Cloud service QoS cannot be fully mined and the performance of various recently developed QoS-based approaches cannot be justified. To collect sufficient Cloud service QoS data, evaluations from different geographic locations under various network conditions are usually required. However, it is not an easy task to conduct large-scale distributed Cloud service evaluations in reality. An effective and efficient distributed Cloud service evaluation mechanism is consequently required. The Cloud service evaluation approaches attempt to obtain the Cloud service QoS values by monitoring the target

Cloud Service Framework for Multimedia Applications

serves. The applications of cloud computing can be classified as broadcasting applications (High Definition Television, radio, online news, etc.) and interactive applications (video conferencing, video chat, e-health monitoring, e-banking, etc.). Most of these applications include multimedia processing to improve the QoS and, in turn, the QoE. When a cloud service provider is not in a position to manage the request load of the network, part of the load is transferred to another cloud network without any involvement of the user or the network. This process of load sharing is managed by the cloud service provider and is called video delivery services. Load sharing improves the QoS and QoE. End-user devices serve more than the single purpose they were intended for; other facilities are made available in the form of applications. This capability of the network is due to the collaboration of network providers and cloud service providers.

Video Streaming

Service Level Management (SLM) in Cloud Computing Third party SLM framework

Abstract: The key issue in cloud computing in enterprises is the management of Quality of Service (QoS) through appropriate Service Level Management (SLM). Cloud service providers offer SLM to their users. We believe that third-party SLM can be a better way to ensure robust and impartial SLM. This paper presents the elements of third-party SLM, namely the Cloud Service Registration Agent, Negotiation Agent, Compensation Agent, Comment Agent, Billing Agent and Service Monitoring Engine. The framework has been tested in a real-life case. Results show that third-party SLM is not only impartial, but also dependable and reasonably easy to implement.

Cloud Service Selection for Dynamic QoS and Fuzzy Entropy Weight TOPSIS

User preferences are used to describe the user's preference tendencies and historical behavior. There have been several cloud service selection methods based on QoS and user needs. For example, [2, 3] put forward a service selection model that uses fuzzy clustering methods based on agents and trust domains. [4] introduced a method for computing QoS under dynamic conditions and established an open, fair algorithmic framework to evaluate the QoS of candidate services. [5] proposed a fuzzy web service selection algorithm driven by QoS and user needs; the algorithm can choose the best service set to meet the user's requirements after defuzzifying the linguistic descriptions of the user's QoS needs and preferences. A web service quality model based on APIHook is established in [6]; the weighted mean method is used to calculate the weights of web service quality attributes. In [7, 8], customers' QoS constraints are described using fuzzy logic in order to obtain better service; the services are evaluated comprehensively based on fuzzy control rules, and the fuzzy rules are updated through a genetic algorithm.
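In the spirit of the entropy-weight TOPSIS named in the title above, here is a minimal sketch of ranking candidate services by entropy-derived criterion weights and TOPSIS closeness. The fuzzy pre-processing step is omitted, and the criteria and QoS values below are invented for illustration.

```python
# Minimal entropy-weighted TOPSIS ranking over a QoS decision matrix.
import numpy as np

# rows = candidate services, cols = [throughput (benefit), response time (cost)]
X = np.array([
    [120.0, 0.35],
    [ 95.0, 0.20],
    [140.0, 0.60],
])
benefit = np.array([True, False])          # which criteria are "higher is better"

# 1. Entropy weights: criteria with more dispersion get higher weight.
P = X / X.sum(axis=0)
k = 1.0 / np.log(len(X))
entropy = -k * (P * np.log(P)).sum(axis=0)
weights = (1 - entropy) / (1 - entropy).sum()

# 2. TOPSIS: weighted normalized matrix, ideal/anti-ideal points, closeness.
V = weights * X / np.linalg.norm(X, axis=0)
ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best  = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_worst / (d_best + d_worst)   # higher = closer to the ideal service

print("weights:", np.round(weights, 3))
print("ranking (best first):", np.argsort(-closeness))
```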

A Survey on QoS Modelling techniques used in IoT Cloud Service Providers

This model is used for assessing the functioning and behaviour of a cloud computing environment over deployment time and execution time. The model provides predictions of service quality metrics such as the response time for handling requests, the authenticity of the service provided and the availability of resources in the cloud. Applications of such models mostly arise in problems that require decision-making capability in the system management process. Different techniques such as simple heuristics, nonlinear programming and meta-heuristics are used to determine optimized decisions. The model also provides mechanisms for decision making in resource allocation, load balancing at the server side when too many requests arrive, and admission control for incoming requests from the cloud service provider's point of view, as well as resource management strategies from the user's point of view.

Service workload patterns for QoS-driven cloud resource management

An execution log records the input data size and execution QoS; a monitoring log records the network status and Web server status. We reorganize these two logs to find the SWPs under which QoS remains steady. Our SWP mining algorithm is based on a generic clustering algorithm, DBSCAN (density-based spatial clustering of applications with noise). DBSCAN [10] analyses the density of the data and allocates data points to a cluster if the spatial density is greater than a threshold. The DBSCAN algorithm has two parameters: the threshold ε and the minimum number of points MinPts. Two points can be in the same cluster if their distance is less than ε; the minimum number of points is also given. We also need a parameter MaxTimeRange, the maximum time range of a cluster. We expect the time range of a cluster to be steady and to have a size limit: when the cluster becomes too large, e.g., if its range exceeds the threshold, cluster construction is stopped. The main steps are given in the following Algorithm 1:
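The referenced Algorithm 1 is not reproduced in this excerpt. The following is a minimal, simplified sketch of the idea it describes: grouping time-stamped QoS log records by density, with parameters eps (ε), min_pts (MinPts) and max_time_range (MaxTimeRange). It is a time-ordered approximation under invented log data, not the paper's algorithm.

```python
# Minimal sketch: density-style grouping of QoS log records with a time-range cap.
from dataclasses import dataclass

@dataclass
class LogRecord:
    timestamp: float   # seconds
    qos: float         # e.g. response time

def mine_steady_windows(records, eps, min_pts, max_time_range):
    """Group consecutive records whose time gaps stay below eps and whose
    total time span stays below max_time_range; keep groups with >= min_pts."""
    records = sorted(records, key=lambda r: r.timestamp)
    clusters, current = [], []
    for rec in records:
        if (current
                and rec.timestamp - current[-1].timestamp <= eps
                and rec.timestamp - current[0].timestamp <= max_time_range):
            current.append(rec)
        else:
            if len(current) >= min_pts:
                clusters.append(current)
            current = [rec]
    if len(current) >= min_pts:
        clusters.append(current)
    return clusters

logs = [LogRecord(t, 0.2 + 0.01 * (t % 3)) for t in range(0, 60, 2)]
for c in mine_steady_windows(logs, eps=5, min_pts=4, max_time_range=30):
    print(f"window {c[0].timestamp}-{c[-1].timestamp}s, {len(c)} records")
```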

Service workload patterns for QoS-driven cloud resource management

processing feature to determine the quality metrics for different service and infrastructure configuration types. For the first two, we used the Azure Diagnostics CSF to collect monitoring data (Figure 10). We also created an additional simulation environment to gather a reliable dataset without interference from uncontrollable cloud factors such as the network. The concrete application systems we investigated are the following: a single-cloud storage solution for online shopping applications [16] and a multi-cloud HADR (high-availability disaster recovery) solution [23]. This work has resulted in a record of configuration/workload data combined with performance data, as shown for instance in [16], where Figs. 3-6 capture response time for different service types and Figs. 7-8 show infrastructure concerns such as CPU and storage aspects. In that particular case, 4100 test runs were conducted, using a 3-service application with between 25 and 200 clients requesting services. Azure CSF telemetry was used to obtain the monitoring data. This data was then analysed to identify our workload patterns (CPU, Storage, Network) → Performance.

7.2 Cloud Application Workload Patterns
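A minimal sketch of what a "(CPU, Storage, Network) → Performance" pattern record could look like, assuming coarse bucketing of utilization figures. The thresholds and the sample monitoring rows are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: bucketed workload keys mapped to observed performance.
from collections import defaultdict
from statistics import mean

def bucket(value, low, high):
    """Discretize a utilization figure into a coarse load level."""
    return "low" if value < low else "med" if value < high else "high"

# (cpu %, storage IOPS, network Mbps, response time ms) from monitoring runs
samples = [
    (35, 120, 40, 180), (38, 130, 45, 190),
    (80, 400, 90, 520), (85, 420, 95, 560),
]

patterns = defaultdict(list)
for cpu, storage, net, resp_ms in samples:
    key = (bucket(cpu, 50, 75), bucket(storage, 200, 350), bucket(net, 60, 85))
    patterns[key].append(resp_ms)

for key, observed in patterns.items():
    print(key, "-> mean response time", round(mean(observed), 1), "ms")
```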

A QoS Oriented Framework for Adaptive Management of Web Service based Workflows

In the following discussion we describe the role of each component in the various phases. The workflow, i.e. the process under execution, is a collection of tasks that can be accomplished either by in-house services or through third-party services. The DAML-S [8] specification is used to specify the workflow. The Web Service Mediator queries the web for services that would accomplish a particular task; essentially it uses information stored in UDDIs across the web to retrieve listings of task-specific Web Services. The Monitor is responsible for monitoring, measuring and asserting facts about newly calculated or observed values of general, task-specific and Internet-service-specific QoS parameters. It calculates these values on the basis of the mathematical underpinnings of our proposed QoS model. The Intelligent Task Execution Engine manages and coordinates the execution of tasks that are part of the underlying workflow. The engine utilizes DAML-S [8] encoded information about the workflow to coordinate the execution of tasks, provide input to a task, bind and execute the Web Service associated with the task, and route the request to the appropriate task depending on the output obtained. The Expert System allows assertion of facts in the knowledge base depending on the rules fired. WebQ uses JESS [27], which is based on Rete [28], a low-complexity algorithm, and thereby reduces the overhead associated with dynamic selection and binding of a Web Service to a task at runtime. The Knowledge Base is a repository of facts about Web Service related parameters: reliability, latency, execution time, performance and other QoS parameters. The Rule Repository collects the rules used to specify user-specific QoS requirements, the available workflow QoS, and the steps to be taken to achieve a specified QoS requirement. We use a multi-level approach wherein the firing of a set of atomic rules leads to a composite rule being executed [26].
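As a very rough illustration of rule-driven selection over QoS facts, the following sketch checks atomic rules against service facts and fires a composite selection only when all atomic rules hold. It is not JESS/Rete; the fact fields, thresholds and rule bodies are illustrative assumptions.

```python
# Minimal sketch: atomic QoS rules composed into a service-selection rule.
from dataclasses import dataclass

@dataclass
class ServiceFact:
    name: str
    latency_ms: float
    reliability: float   # fraction of successful invocations

# Atomic rules: each maps a fact to True/False.
atomic_rules = {
    "low_latency": lambda f: f.latency_ms <= 200,
    "reliable":    lambda f: f.reliability >= 0.98,
}

def qualifies(fact):
    # Composite rule fires only when all atomic rules fire (multi-level firing).
    return all(rule(fact) for rule in atomic_rules.values())

facts = [
    ServiceFact("svcA", 150, 0.995),
    ServiceFact("svcB", 320, 0.999),
    ServiceFact("svcC", 180, 0.950),
]
candidates = [f for f in facts if qualifies(f)]
best = min(candidates, key=lambda f: f.latency_ms)   # bind the fastest qualifier
print("selected:", best.name)
```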

A New Framework for the Evaluation of QoS in Cloud Federation

Thus, given the diversity of clouds, the plurality of providers and the importance of observing the obligations in the service level agreement (SLA), one of the challenges posed by cloud computing is to evaluate the Quality of Service (QoS) of commercial applications in such systems. Evaluating QoS parameters, measuring service quality levels and identifying threshold levels can support SLA management processes. Availability, processing time, the way tasks are distributed across different clouds, accuracy of results, safety and cost are among the effective parameters in measuring and assessing the quality of services [2]. Different models have been studied for evaluating Quality of Service in clouds and cloud federations. Due to the extensive variety of services, clouds and IaaS offerings, each strategy has a particular range of assessment and does not fully cover all the components affecting Quality of Service [3, 4]. Some of the models are evaluated based on a tree structure, where a hybrid service tree is obtained after generating sub-trees, performing the related processes and combining them [5]. Another study examined five computational redundancy strategies and their effect on availability, processing time, task distribution among different clouds, accuracy of results, safety and cost; the quality level of the cloud was then measured, and the quality level in the cloud federation was analyzed for each strategy. The different trade-offs and the parameters affecting Quality of Service among these strategies are analyzed and explained in [6]. Another approach evaluates Quality of Service in cloud environments using a System of Systems (SoS) perspective; an SoS has a hierarchical structure in which oversight and accountability sit at higher levels and smaller components of the system sit at lower levels [7]. To assess the efficiency of services in cloud computing systems, various models based on queuing theory have been studied [8]. Some studies have proposed a semantic-based model with a broker [9]. Others have used a fuzzy logic approach to secure QoS and guarantee the SLA level of services as size and scale change with demand in the cloud IaaS layer [10].

Policy-Driven Governance in Cloud Service Ecosystems

When Apptitude creates a new application for the CloudDev platform it needs to proceed through the steps of a formal lifecycle management process imposed by CloudDev on all ISVs. The process has been put in place to prevent the adverse effects of introducing a problematic application into the ecosystem, and to ensure that the environment remains healthy and competitive. During the application development phases, unit and integration testing take place in an isolated development environment – a development sandbox. Once this stage is completed ISVs can choose to launch a private beta testing programme with a limited number of invited users. This is carried out in an isolated trial environment – a beta sandbox. When a new application is finally ready for release Apptitude submits the final version of its codebase and the application description to CloudDev. The artefacts that comprise the application need to observe a number of policies set out by CloudDev. These concern both technical aspects such as restrictions on application coding standards or how applications use platform resources, as well as business aspects such as restrictions on an application’s pricing model. Before the application codebase is deployed onto the CloudDev production environment and the application description is added to the CloudDev application store, a quality review step takes place. The CloudDev quality assurance staff examines the code and metadata submitted by Apptitude and employs a combination of manual and automated methods to ensure that all relevant policies are observed. In case of policy violations these are reported and the release is blocked. Alternatively, the application is allowed into the main production environment and into the application store. Consumers can thereafter select the application and subscribe to use it. When Apptitude wishes to retire an application there is another set of conditions to be checked. CloudDev’s main objective here is to ensure that decommissioning an application does not have any adverse effect on consumers who are still using it and on other applications that are interfacing with the application to be retired.

Measurement-Based Policy-Driven QoS Management in Converged Networks

In the CNQF architecture, each RB communicates with one or more Resource Controllers (RCs), which are the logical management and control entities responsible for low-level (re)configuration at the Policy Enforcement Points (PEPs) in the transport plane. The PEPs are at network entities such as gateway nodes, access routers and edge routers, where the PDP (i.e. RB) policy decisions are enforced. Thus each RB (WARB, FARB, CNRB) is interfaced with one or more corresponding RCs (FARC, WARC, CNRC), which perform different configuration and control functions depending on where the PEPs are located on the transport plane. For instance, in the CN, an RC located in an edge router (PEP) may be responsible for packet marking (e.g. DiffServ Code Point, DSCP, marking in a DiffServ domain) in response to CNRB policy decisions, while in the wireless access network an RC may be responsible for configuring gateway nodes to map layer-2 QoS parameters (e.g. WiMAX QoS classes) to layer-3 IP QoS parameters (e.g. DiffServ DSCPs).
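To make the layer-2-to-layer-3 mapping above concrete, here is a minimal sketch of the kind of table an RC might apply at a gateway PEP. The WiMAX-class-to-DSCP assignments are an illustrative operator policy, not one prescribed by CNQF; the DSCP values themselves (EF = 46, AF41 = 34, AF21 = 18) are standard DiffServ code points.

```python
# Minimal sketch: mapping WiMAX service classes to DiffServ DSCPs at a gateway.
WIMAX_TO_DSCP = {
    "UGS":   46,   # Unsolicited Grant Service   -> EF (Expedited Forwarding)
    "ertPS": 46,   # extended real-time Polling  -> EF
    "rtPS":  34,   # real-time Polling Service   -> AF41
    "nrtPS": 18,   # non-real-time Polling       -> AF21
    "BE":     0,   # Best Effort                 -> default forwarding
}

def mark_packet(dscp: int) -> int:
    """Return the IPv4 TOS / IPv6 Traffic Class byte carrying the given DSCP."""
    return dscp << 2   # DSCP occupies the upper 6 bits; ECN bits left at 0

for wimax_class, dscp in WIMAX_TO_DSCP.items():
    print(f"{wimax_class:5s} -> DSCP {dscp:2d}, TOS byte 0x{mark_packet(dscp):02x}")
```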
