

Service Level Management (SLM) in Cloud Computing

Third party SLM framework

Gianmario Motta, Linlin You, Nicola Sfondrini, Daniele Sacco and Tianyi Ma

1 Dept. of Industrial and Information Engineering, University of Pavia, Pavia, Italy
2 Bip. - Business Integration Partners, Milano, Italy

motta05@unipv.it, {linlin.you01, daniele.sacco01, tianyi.ma01}@ateneopv.it, nicola.sfondrini@mail-bip.com

Keywords: Cloud Computing, Service Level Management, Third-party SLM Framework, Service Level Agreement, Quality of Service.

Abstract: The key issue for cloud computing in enterprises is the management of Quality of Service (QoS) through appropriate Service Level Management (SLM). Cloud Service Providers offer SLM to their own users; we argue that a third-party SLM is a better way to assure a robust and impartial SLM. This paper presents the elements of third-party SLM, namely the Cloud Service Registration Agent, Negotiation Agent, Compensation Agent, Comment Agent, Billing Agent and Service Monitoring Engine. The framework has been tested in a real-life case. Results show that third-party SLM is not only impartial, but also dependable and reasonably easy to implement.


Cloud Computing is emerging as a standard technology for both IT vendors and users [1] and, according to industry analysts, is transforming the whole IT landscape [2] [21] [22] [25]. Such "commoditisation" enables enterprises to re-use services and gain a time-to-market advantage [3] [4] [24]. Therefore, the overall performance of an enterprise IT service portfolio is determined by the collective Quality of Service (QoS) of the cloud services delivered by different Cloud Service Providers (CSPs), internal and external.

In traditional on-premises deployments, widely used frameworks such as ITIL and COBIT are a reference for SLM [9] [10]. However, cloud deployments require a specific and appropriate Service Level Management (SLM) that monitors, measures and optimizes QoS [5] [6] [7] [8]. Such a framework should guide an enterprise at an early stage of cloud maturity in establishing a cost-effective SLM [11]. This is precisely the purpose of this paper.

In section 2 we summarize the emerging issues for SLM in cloud deployments. Section 3 proposes a third-party framework to solve cloud SLM issues. In section 4 we illustrate a case study that assesses the effectiveness and efficiency of the third-party framework. Finally, in section 5, we summarize our results and future improvements.



First of all, let us illustrate the relationship between the Service Level Agreement (SLA) and SLM.

An SLA is a statement of service promises to customers, measured by service metrics or Service Level Objectives (SLOs) [13] and enforced by payments for fulfilled promises and penalties for unfulfilled ones.

SLM is the process through which an SLA is negotiated and service levels are controlled. Specifically, IT Service Management processes, Operational Level Agreements and Underpinning Contracts are managed to support the agreed Service Level Targets [9]. SLM monitors and reports Service Levels through regular customer reviews. Of course, a robust SLM needs dedicated resources [12], i.e. specific organizational roles and management processes, various data repositories and systems to probe, manage and report performance.

SLM and SLA are to be used jointly (and in practice they are): SLM provides the way to create the SLA, to provision system resources and to manage system performance.
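The enforcement side of this relationship can be made concrete with a small sketch. The data structures below are illustrative assumptions, not the paper's design: a hypothetical SLA bundles SLOs with a fee and a per-violation penalty rate.

```python
from dataclasses import dataclass, field

@dataclass
class SLO:
    """A single Service Level Objective: a named metric with a target."""
    metric: str          # e.g. "availability"
    target: float        # contracted value, e.g. 99.9 (percent)

@dataclass
class SLA:
    """A minimal SLA: a set of SLOs plus the payment/penalty terms."""
    service: str
    slos: list = field(default_factory=list)
    monthly_fee: float = 0.0
    penalty_rate: float = 0.0   # fraction of the fee refunded per violated SLO

    def penalty(self, measured: dict) -> float:
        """Penalty owed when measured values miss their SLO targets."""
        violated = sum(1 for slo in self.slos
                       if measured.get(slo.metric, 0.0) < slo.target)
        return violated * self.penalty_rate * self.monthly_fee

sla = SLA("storage", [SLO("availability", 99.9)], monthly_fee=100.0,
          penalty_rate=0.1)
print(sla.penalty({"availability": 99.5}))  # one violated SLO: 10% of the fee
```

The point of the sketch is that SLM supplies the `measured` values, while the SLA fixes the targets and the financial consequences.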

However, a traditional super-structured SLM is rather inconsistent with the dynamics of the cloud, where customers have to select, even on demand, the right service among competing CSPs. So a mutual trust has to be established between customers and CSPs [15], and new SLM issues emerge, as listed below:

(1) Variable price and performance [16]

CSPs deliver services in various forms: for example, services with similar functions may have different billing schemas at different CSPs. A service may also be offered at various performance levels to maximize revenues. Moreover, the same service provided by different CSPs may perform differently, and CSPs normally overcommit their services in terms of capability and performance. In these conditions, customers may struggle to select a service that complies with their needs.

(2) Untrusted collaboration [16] [17]

Cloud Computing provides a way to establish an IT facility without physical investments. In some cases, however, CSPs may deliver a service of lower quality than stated in the SLA. As a consequence, users not only lose control of their IT resources but may also be stuck in a situation where the service they pay for is not the service they receive [12] [13].

(3) SLA deviation [17]

Generally, cloud service performance is measured by SLM. Ideally, the results should not depend on which side performs the measurement. In practice, however, the calculation method affects the reported QoS: for instance, annual performance is normally higher than monthly performance, since a single outage is diluted over a longer window. Users therefore need a fair-and-square performance measurement.
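A small numeric sketch illustrates the deviation. The figures are hypothetical: one service suffers a single 7-hour outage in March, and a month is approximated as 730 hours. The same downtime data yields very different availability figures depending on the measurement window:

```python
# Hypothetical downtime log: hours of downtime per month for one service.
downtime = [0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one 7-hour outage in March

HOURS = 730  # approximate hours in a month

monthly = [(HOURS - d) / HOURS * 100 for d in downtime]
annual = (12 * HOURS - sum(downtime)) / (12 * HOURS) * 100

print(f"March availability:  {monthly[2]:.2f}%")  # about 99.04%, misses a 99.5% SLO
print(f"Annual availability: {annual:.2f}%")      # about 99.92%, appears compliant
```

Measured monthly, the service violates a 99.5% SLO; measured annually, the same service appears compliant, which is exactly the deviation the text describes.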

(4) Negotiation [8] [18]

SLAs are widely used to define QoS. However, if SLAs are neglected in SaaS, IaaS and PaaS offerings, users will select cloud services by cost alone. As a result, the negotiation process for each trade will be either slow or unfair (controlled by the CSPs).

(5) Comments [19] [23]

The service portfolio is controlled by CSPs, so users cannot know the real capability and performance of services. Also, although users can comment on services, the comments are not published. In the current market, CSPs can therefore sell immature services with unrepeatable or low performance.


With the rapid growth of the cloud, these emerging issues need urgent solutions. Several scholars are addressing them [20]. However, they split SLM into two parts, respectively for CSPs and for users. On the CSP side, a generalized and extensible simulation framework to deliver high-performance, reliable, fault-tolerant and sustainable cloud services has been presented [15]. On the user side, a generic QoS framework that enhances cloud workflow systems to manage different cloud components with efficient QoS support has been proposed [14]; specifically, it is a guideline for users to develop composed services. These methods are partial, since they do not address the interaction between CSPs and users.

Moreover, in order to overcome specific challenges, some work has been done, as follows:

(1) As for monitoring, the Lattice Monitoring Framework was designed and built for monitoring the performance of cloud environments [26]. It coordinates measurement probes to collect data efficiently, and it has been integrated with RESERVOIR (a service cloud that orchestrates services and computing resources) to monitor cloud services in a distributed environment. However, the usage of this framework is limited, since it is flexible only for service consumers: cloud federation is weakly supported and the definition of SLAs is not provided.

(2) Regarding accounting and billing, the sub-processes of the Cloud Service Provisioning Information Model have been discussed to show how they proceed in the Cloud Supply Chain [27]. The accounting process relies heavily on monitoring data, which must be measured consistently over different durations; the bill is then generated from the accounting data and the pricing functions. However, SLA deviation is not discussed in these processes, and neither is how penalties are carried out.

(3) Concerning the disclosure of the performance of different cloud services, Dwarf benchmarks have been used to identify application models [28]. The proposal is that a set of Dwarf benchmarks can ultimately become a standardized measurement of the performance of cloud resources. However, only IaaS (Infrastructure as a Service) is in scope, and it is doubtful whether the method can be applied to other service models, namely Software as a Service (SaaS) and Platform as a Service (PaaS).

In short, SLM issues in the cloud are not addressed comprehensively. Current SLM frameworks are proficient in a conventional context, but unable to ensure QoS in the cloud. In order to solve the emerging SLM issues, we propose a third-party SLM framework.




As shown in Figure 1, the third-party SLM framework consists of six components:

1. Cloud Service Registration agent
2. Negotiation agent
3. Compensation agent
4. Comment agent
5. Billing agent
6. Service Monitoring engine

Fig. 1. Third party SLM framework overall architecture.

4.1 Cloud Service Registration Agent (CSRA)


Through the CSRA, CSPs can publish, modify or update their cloud services. Using a defined XML notation, a cloud service description is first analysed by the Service Analyser component, a pre-processing component for the incoming cloud service description file. After that, the Service Verification component categorizes the service according to its properties and updates the Service Repository. If this process finishes successfully, the new cloud service is available in the repository and can be used by the Negotiation agent (NEA) to make a contract with service users. Finally, feedback is sent back to the CSP to indicate whether the service has been registered or not. The overall architecture of the CSRA is shown in Figure 2.

Fig. 2. Cloud Service Registration agent overall architecture.
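The registration flow can be sketched as below. The XML element names, the category set and the in-memory repository are illustrative assumptions, since the paper does not publish its actual XML notation:

```python
import xml.etree.ElementTree as ET

# Hypothetical service description submitted by a CSP.
DESCRIPTION = """
<service>
  <name>block-storage</name>
  <category>IaaS</category>
  <slo metric="availability" target="99.9"/>
  <price unit="GB-month">0.05</price>
</service>
"""

repository = {}  # name -> service record; stands in for the Service Repository

def register(xml_text: str) -> str:
    """Analyse, verify and store a service description; return CSP feedback."""
    try:
        root = ET.fromstring(xml_text)                 # Service Analyser step
    except ET.ParseError:
        return "rejected: malformed description"
    name, category = root.findtext("name"), root.findtext("category")
    if not name or category not in {"IaaS", "PaaS", "SaaS"}:  # Verification step
        return "rejected: missing name or unknown category"
    repository[name] = {
        "category": category,
        "slos": {s.get("metric"): float(s.get("target")) for s in root.iter("slo")},
        "price": float(root.findtext("price", default="0")),
    }
    return f"registered: {name}"

print(register(DESCRIPTION))  # registered: block-storage
```

Once a service is in the repository, the NEA can draw on its SLOs and price when building a contract proposal.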

4.2 Negotiation Agent (NEA)

Fig. 3. Negotiation agent overall architecture.

In the framework, the NEA defines the way CSPs and service users make a contract. According to the service user's requirements, the User Requirement Analyser (URA) automatically selects a suitable service based on the following criteria: 1) required service capability, 2) service performance and 3) ideal price. The agent then uses the Contracted SLA Template Generator (CSTG) to generate a contract proposal and sends it to both the CSP and the service user. According to the feedback, the agent revises the contract and, after several iterations, a final contract is verified by the Contracted SLA Confirmer (CSC) and stored in the local database. Simultaneously, the SLA metric related to the new contract is put under the control of the Service Monitoring Engine (SME). The overall architecture of the NEA is shown in Figure 3.
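The URA selection step can be sketched as follows. The candidate services and the tie-breaking rule (cheapest eligible service) are assumptions for illustration, since the paper does not detail the ranking:

```python
# Candidate services as the Service Repository might hold them; entries hypothetical.
services = [
    {"name": "fast-vm",  "capability": {"vm"},       "availability": 99.95, "price": 0.12},
    {"name": "cheap-vm", "capability": {"vm"},       "availability": 99.50, "price": 0.04},
    {"name": "db",       "capability": {"database"}, "availability": 99.90, "price": 0.20},
]

def select(required: set, min_availability: float, max_price: float):
    """Apply the paper's three criteria: capability, performance, price."""
    eligible = [s for s in services
                if required <= s["capability"]              # 1) required capability
                and s["availability"] >= min_availability   # 2) service performance
                and s["price"] <= max_price]                # 3) ideal price
    # Among eligible services prefer the cheapest, a simple stand-in for the
    # URA ranking, which the paper leaves unspecified.
    return min(eligible, key=lambda s: s["price"])["name"] if eligible else None

print(select({"vm"}, 99.9, 0.15))  # fast-vm
```

The selected service would then seed the CSTG's contract proposal, which both parties iterate on before the CSC verifies the final contract.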

4.3 Compensation Agent (COA)

Periodically, the Compensation agent (COA) runs automatically to check whether the service user has received the service with the contracted QoS, by comparing actual and contracted SLAs. This work is mainly done by the Service Performance Analyser (SPA), which uses actual service running data to calculate the SLA and then compares it with the contracted SLA. If there is any discrepancy, a compensation report is generated by the Compensation Reporter (COR), which stores the report in the compensation database and sends it to the CSP and the service user. The overall architecture of the COA is shown in Figure 4.

Fig. 4. Compensation agent overall architecture.
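The SPA comparison can be sketched as below, assuming "higher is better" metrics and a hypothetical refund of 10% of the fee per violated metric; the paper leaves the actual compensation rule to the contract:

```python
def compensation_report(contracted: dict, measured: dict, fee: float) -> dict:
    """Compare actual vs. contracted SLAs and compute a refund per violation.

    Both the metric direction (higher is better) and the 10%-per-violation
    refund are illustrative assumptions.
    """
    violations = {m: (measured[m], target)
                  for m, target in contracted.items()
                  if measured.get(m, 0.0) < target}
    return {"violations": violations,
            "refund": round(0.10 * fee * len(violations), 2)}

report = compensation_report(
    contracted={"availability": 99.9, "throughput_mbps": 100.0},
    measured={"availability": 99.5, "throughput_mbps": 120.0},
    fee=200.0,
)
print(report)  # one violation (availability), refund of 10% of the fee
```

The resulting report is what the COR would persist and forward to both the CSP and the service user.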

4.4 Comment Agent (CMA)

Fig. 5. Comment agent overall architecture.

In order to 1) rank services, 2) give better service solutions to service users and 3) motivate CSPs to enhance their services, the Comment agent (CMA) uses performance data and compensation data to generate up-to-date comments for each used cloud service. It is also a portal through which service users can submit their own comments. Periodically, feedback with new comments is sent to the CSPs, who can then improve the capability and performance of their services, since CSPs want to gain more users through highly rated services. The overall architecture of the CMA is shown in Figure 5.
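A comment score blending the three data sources might be computed as below; the weights, the 0-5 scale and the per-compensation deduction are purely illustrative assumptions, as the paper does not specify a scoring formula:

```python
def comment_score(sla_compliance: float, compensations: int,
                  user_ratings: list) -> float:
    """Blend monitoring data, compensation history and user ratings into a
    0-5 score (weights illustrative)."""
    monitored = 5.0 * sla_compliance            # e.g. 0.98 compliance -> 4.9
    penalty = min(compensations * 0.5, 2.0)     # each past compensation costs 0.5
    users = sum(user_ratings) / len(user_ratings) if user_ratings else monitored
    return round(0.5 * monitored + 0.5 * users - penalty, 2)

print(comment_score(0.98, compensations=1, user_ratings=[5, 4, 4]))
```

Publishing such a score is what lets users see real capability and performance, countering the information asymmetry described in issue (5).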

4.5 Billing Agent (BAT)

Basically, the billing schema is defined by the contract, but the framework uses other sources to make the bill fair. For instance, if a compensation is due for a cloud service, the system automatically calculates it according to the compensation rule. Services with good comments on capacity and performance also become more competitive, so service providers can serve more users and earn more profit under the pay-as-you-go model. The overall architecture of the BAT is shown in Figure 6.

Fig. 6. Billing agent overall architecture.
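The Bill generator's job can be sketched as a pay-as-you-go bill with any compensation deducted; the rate and figures are hypothetical:

```python
def bill(usage_hours: float, rate_per_hour: float, compensation: float) -> dict:
    """Pay-as-you-go bill that combines contract pricing, metered usage and
    the compensation computed by the COA (rule illustrative)."""
    gross = usage_hours * rate_per_hour
    return {"gross": round(gross, 2),
            "compensation": round(compensation, 2),
            "due": round(max(gross - compensation, 0.0), 2)}

print(bill(usage_hours=720, rate_per_hour=0.10, compensation=20.0))
```

Deducting the compensation directly on the bill is what makes the billing "fair" in the sense of the text: the user never has to claim the refund separately.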

4.6 Service Monitoring engine (SME)

The framework provides a Service Monitoring engine (SME) to control, monitor and manage the services used by different service users, based on the defined SLA metrics. The SLA metric is generated from the contract containing the contracted SLAs. During the process, the SLA Data Collector (SLADC) collects and aggregates service status information from CSPs and service users, generates a universal SLA data record and stores it in the database. Using the generated SLA data and the SLA metric, the SLA Analyser (SLAA) can provide on-time SLA reports to both stakeholders. Additional functions, such as capability and performance trends, forecasting and a dashboard, give an overall view of the cloud service. The overall architecture is shown in Figure 7.


Fig. 7. Service Monitoring engine overall architecture.
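The SLADC/SLAA pipeline can be sketched as below. The probe samples are hypothetical, and taking the per-interval minimum of the two sides as the "universal" value is our illustrative choice, not the paper's:

```python
# Hypothetical probe samples: availability (%) per hour reported by probes on
# the CSP side and on the service-user side.
csp_side  = [100.0, 100.0, 99.0, 100.0]
user_side = [100.0,  98.0, 99.0, 100.0]

def collect(*sources):
    """SLADC step: merge per-interval samples from both stakeholders into one
    universal series; here, the per-interval minimum (the worst view)."""
    return [min(samples) for samples in zip(*sources)]

def analyse(series, slo_target: float) -> dict:
    """SLAA step: compare the aggregated series against the SLA metric."""
    measured = sum(series) / len(series)
    return {"measured": round(measured, 2), "target": slo_target,
            "compliant": measured >= slo_target}

universal = collect(csp_side, user_side)
print(analyse(universal, slo_target=99.5))
```

Because the aggregation uses data from both sides, neither the CSP nor the user can unilaterally bias the measurement, which is the core of the third-party argument.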



We have validated our SLM framework with a global telco operator. Briefly, the company benefited from our framework through the ability to speed up the time to market, exploit public cloud capabilities such as those offered by Amazon and Rackspace, and reduce the number of incidents related to IT resource saturation.

The whole framework can be deployed using different programming languages, depending on the kind of systems involved. The main requirement for a correct implementation is standard APIs that simplify the dialogue between the service users and the selected CSPs.

Our case study concerns 112 services deployed on virtual instances running Red Hat 5.X and Windows Server 2008. In order to avoid intrusiveness, we created a Java-based program for each building block of the framework, exploiting the flexibility of XML to speed up communication among them and providing the cloud orchestrator with real-time information on the status of the services.

First of all, we define and provide SLA templates in the NEA. Each SLA template, which represents a specific application domain, has to be published into the registry directly managed by the NEA. In a preliminary step, CSPs assign their services to a particular template and then define the SLA mappings. The user then searches for appropriate services and maps its requirements onto the associated template. Through this mapping, the negotiation process starts. In the meanwhile, the trusted third party collects performance data to help the user evaluate the selected services. To specify SLAs, we use the WS-Agreement language, a specification of the Open Grid Forum that is currently supported by major vendors.

Then we implemented the SME to monitor the contracted cloud services. Supported by a structured forecasting model, the SME is extended to forecast IT capability, ensuring QoS through an optimised pre-provisioning of the required IT resources over a defined period, e.g. one quarter. By doing this, the company greatly enhances the stability of the orchestrated cloud services, strongly shortens the service delivery time and sharply reduces operating costs.
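The forecasting step can be sketched with a simple moving average; this is a stand-in for the structured forecasting model, whose details the paper does not give, and the 20% headroom is an illustrative choice:

```python
def forecast(history, window=3):
    """Moving-average forecast of next-quarter demand (model illustrative)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def pre_provision(history, headroom=0.2):
    """Provision the forecast plus a safety headroom, so QoS holds even if
    demand keeps growing (20% headroom is an assumption)."""
    return forecast(history) * (1 + headroom)

vm_demand = [80, 90, 100, 110]   # hypothetical VMs used in past quarters
print(pre_provision(vm_demand))  # forecast 100 VMs, provision 120
```

Pre-provisioning against the forecast is what turns reactive capacity incidents into planned quarterly allocations.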

Table 1. Comparison results.

                                         Without framework   With SLM framework
# of monthly incidents (112 services)            9                    0
SLA compliance                                  85%                 100%
Delivery time (hours)                           24                    6
% of cloud budget used (Test&Dev)              100%                  70%

After testing our framework, we summarize the comparison results in Table 1. Briefly, we used the prototype framework to monitor 112 services, measured by the monthly incident count and the SLA compliance status. We also measured the service delivery time (in hours) and the cost (for test and development) of a new service. As shown in Table 1, the results are higher service performance (availability), a shorter service delivery time and, finally, an impressive cost reduction, which discloses the benefits and capabilities of cloud technology.



This paper has presented a third-party SLM framework that covers the whole service life cycle, meets the requirements typical of SLM in the cloud, such as the strong dialectics between customers and providers, and overcomes the limits of popular frameworks such as ITIL and COBIT, which address traditional on-premises contexts. The proposed SLM, by covering the SLM life cycle, guides user companies, which can follow a simple and straightforward approach. In the future, the framework will be further enhanced by developing:

(1) A strategy to increase the security of the authority channel and to identify the legal entity: a secure transaction channel and a robust identification mechanism are needed.

(2) Security of stored data: data on billing, service portfolio and contracts are sensitive for both service providers and service users, so a data security mechanism should be developed.


[1] Lachal, L. (2011). Trends to watch: Cloud computing technology. Ovum Trends Brief.

[2] Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., & Brandic, I. (2009). Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6), 599-616.

[3] Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., & Ghalsasi, A. (2011). Cloud computing: The business perspective. Decision Support Systems, 51(1), 176-189.

[4] Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58.

[5] Motta, G., You, L., Sacco, D., & Sfondrini, N. (2013, May). Cloud computing: the issue of service quality. An overview of cloud service level management architectures. In Service Science and Innovation (ICSSI), 2013 Fifth International Conference on (pp. 230-233). IEEE.

[6] Garg, S. K., Versteeg, S., & Buyya, R. (2013). A framework for ranking of cloud computing services. Future Generation Computer Systems, 29(4), 1012-1023.

[7] Lee, S. Y., Tang, D., Chen, T., & Chu, W. C. (2012, July). A QoS assurance middleware model for enterprise cloud computing. In Computer Software and Applications Conference Workshops (COMPSACW), 2012 IEEE 36th Annual (pp. 322-327). IEEE.

[8] Zhu, F., Li, H., & Lu, J. (2012, May). A service level agreement framework of cloud computing based on the Cloud Bank model. In Computer Science and Automation Engineering (CSAE), 2012 IEEE International Conference on (Vol. 1, pp. 255-259). IEEE.

[9] Motta, G., & Sfondrini, N. (2012). Cloud computing and enterprises: a survey. In The 2nd International Conference on Computer and Management (CAMAN).

[10] Firdhous, M., Ghazali, O., & Hassan, S. (2011, August). A trust computing mechanism for cloud computing with multilevel thresholding. In Industrial and Information Systems (ICIIS), 2011 6th IEEE International Conference on (pp. 457-461). IEEE.

[11] Durkee, D. (2010). Why cloud computing will never be free. Queue, 8(4), 20.

[12] Proehl, T., Erek, K., Limbach, F., & Zarnekow, R. (2013, January). Topics and applied theories in IT service management. In System Sciences (HICSS), 2013 46th Hawaii International Conference on (pp. 1367-1375). IEEE.

[13] Iden, J., & Eikebrokk, T. R. (2013). Implementing IT service management: A systematic literature review. International Journal of Information Management, 33(3), 512-523.

[14] Liu, X., Yang, Y., Yuan, D., Zhang, G., Li, W., & Cao, D. (2011, December). A generic QoS framework for cloud workflow systems. In Dependable, Autonomic and Secure Computing (DASC), 2011 IEEE Ninth International Conference on (pp. 713-720). IEEE.

[15] Calheiros, R. N., Ranjan, R., De Rose, C. A., & Buyya, R. (2009). CloudSim: A novel framework for modeling and simulation of cloud computing infrastructures and services. arXiv preprint arXiv:0903.2525.

[16] Ferretti, S., Ghini, V., Panzieri, F., Pellegrini, M., & Turrini, E. (2010, July). QoS-aware clouds. In Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on (pp. 321-328). IEEE.

[17] Qian, H., Medhi, D., & Trivedi, T. (2011, May). A hierarchical model to evaluate quality of experience of online services hosted by cloud computing. In Integrated Network Management (IM), 2011 IFIP/IEEE International Symposium on (pp. 105-112). IEEE.

[18] Kafetzakis, E., Koumaras, H., Kourtis, M. A., & Koumaras, V. (2012, July). QoE4CLOUD: A QoE-driven multidimensional framework for cloud environments. In Telecommunications and Multimedia (TEMU), 2012 International Conference on (pp. 77-82). IEEE.

[19] Patel, P., Ranabahu, A. H., & Sheth, A. P. (2009). Service level agreement in cloud computing.

[20] Weinhardt, C., Anandasivam, D. I. W. A., Blau, B., Borissov, D. I. N., Meinl, D. M. T., Michalk, D. I. W. W., & Stößer, J. (2009). Cloud computing: a classification, business models, and research directions. Business & Information Systems Engineering, 1(5), 391-399.

[21] Yuan, W. H., Wang, H., & Fan, Z. Y. (2014). Research on optimization of resources allocation in cloud computing based on structure supportiveness. In Frontier and Future Development of Information Technology in Medicine and Education (pp. 849-858). Springer Netherlands.

[22] Lian, J. W., Yen, D. C., & Wang, Y. T. (2014). An exploratory study to understand the critical factors affecting the decision to adopt cloud computing in Taiwan hospital. International Journal of Information Management, 34(1), 28-36.

[23] Buyya, R., Yeo, C. S., & Venugopal, S. (2008, September). Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. In High Performance Computing and Communications, 2008. HPCC'08. 10th IEEE International Conference on (pp. 5-13). IEEE.

[24] Mell, P., & Grance, T. (2011). The NIST definition of cloud computing (draft). NIST Special Publication, 800(145), 7.

[25] Motta, G., & Sfondrini, N. (2011). Research studies on cloud computing: a systematic literature review. In 17th International Business Information Management Association Conference (IBIMA).

[26] Clayman, S., Galis, A., Chapman, C., Toffetti, G., Rodero-Merino, L., Vaquero, L. M., ... & Rochwerger, B. (2010, April). Monitoring service clouds in the future Internet. In Future Internet Assembly (pp. 115-126).

[27] Lindner, M., Galán, F., Chapman, C., Clayman, S., Henriksson, D., & Elmroth, E. (2010). The cloud supply chain: A framework for information, monitoring, accounting and billing. In 2nd International ICST Conference on Cloud Computing (CloudComp 2010).

[28] Phillips, S. C., Engen, V., & Papay, J. (2011, November). Snow white clouds and the seven dwarfs. In Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third International Conference on (pp. 738-745). IEEE.

