
19 Knowledge-Intensive Cloud Services: Transforming the Cloud Delivery Stack

Michael P. Papazoglou and Luis M. Vaquero

Contents

19.1 Introduction
19.2 Cloud Computing Overview
19.3 Cloud APIs
19.4 Cloud Interoperability
    19.4.1 Open Cloud Standards Incubator
    19.4.2 Open Cloud Computing Interface
    19.4.3 Cloud Data Management Interface
19.5 Cloud Service Life Cycle
19.6 Analysis of the Cloud Delivery Model Landscape
19.7 Knowledge-Intensive Cloud Services
19.8 Analysis of Cloud Service Domain Languages
    19.8.1 Overview of Cloud Service Definition Languages
        19.8.1.1 IaaS Definition Languages
        19.8.1.2 PaaS Definition Languages
        19.8.1.3 SaaS Definition Languages
    19.8.2 Overview of Cloud Service Constraint Languages
        19.8.2.1 Formal Representation of Constraints
        19.8.2.2 Overview of Candidates for a Cloud Service Constraint Language
    19.8.3 Overview of Cloud Service Manipulation Languages
19.9 Need for a More Flexible Cloud Delivery Model
    19.9.1 Inflexible Monolithic Cloud Solutions
    19.9.2 High Diffusion of Cloud Domain Languages
19.10 Scaffolding the Cloud Monolith
    19.10.1 Blueprint Definition Language
    19.10.3 Blueprint Manipulation Language
    19.10.4 Life Cycle Support
19.11 Sample Cloud Application Using Blueprinting
19.12 Summary

19.1 Introduction

Enterprises currently face a continuing demand for business expansion and, as a result, often need to invest in stand-alone servers or software that traditionally demand heavy capital investment yet remain frequently underutilized. IT infrastructures have become too complex and brittle, and enterprise IT is unable to keep pace with business needs, as almost 70% of IT investment concentrates on maintenance, leaving little time to support strategic development projects. Many enterprises are left with a plethora of technologies and systems, with a suboptimal level of integration, creating significant barriers to developing the innovative solutions required to provide a competitive edge. For such enterprises in search of a cost-effective use of computing resources, cloud computing technology is rapidly becoming a commercially viable computing service delivery model.

Much like the historical client-server era, the advent of cloud computing is fundamentally changing the way we consume computing resources. Cloud computing allows enterprises to share resources, software, and information across a rapidly growing multitude of connected devices, creating new opportunities for business speed and efficiency. It reduces IT complexity by leveraging the efficient pooling of on-demand, self-managed virtual infrastructure, consumed as a service. Cloud computing eliminates the need for large capital outlays to launch new applications, moving the decision out of the investment realm and into the operational one. With cloud computing practices, service providers can expand their offerings to include the entire traditional IT stack, ranging from foundational hardware and platforms to application components, software services, and entire software applications.

Cloud computing is an evolving term that describes a broad movement toward the deployment of network-based applications in a highly flexible, virtualized IT environment that enables interaction between IT service providers of many types and their clients. Virtualization means a form of abstraction between the user and the physical resource that preserves the user's impression of interacting directly with that resource. Cloud resources and software services are delivered virtually: although they may appear to be physical (servers, disks, network segments, web or RESTful services, etc.), they are actually virtual implementations of those on an underlying physical infrastructure, which the subscriber never sees. The essence is to decouple the delivery of computing services from their underlying technology. Beyond the user interface, the technology behind the cloud is opaque to the user, abstracting away from technology in order to make cloud computing user-friendly.

This virtual delivery of resources and services enables service providers to move toward a fully virtualized service-oriented infrastructure, eliminating silo-based complexity, acquiring and moving resources to where they are most needed, improving performance, reducing costs, and speeding up the delivery of services. The objective is to create a virtualized service-oriented infrastructure for multiple integrated services of all types. Services here include infrastructure resources (i.e., computing power, storage, and machine provisioning), software resources (i.e., middleware and platform-based development resources), and application components (which include full-fledged applications as well as stand-alone business processes).

Over the past few years, we have experienced significant technological developments that enable cloud computing. These include far more reliable Internet services, with higher throughput and resilience, coupled with virtualization techniques that enable computing facilities to be replicated and reproduced easily. One consequence of this move to virtualization is that it is no longer necessary to have in-house computing infrastructures. Instead, it is possible to shift computing and storage capabilities "into the cloud," where they offer economies of scale in terms of IT support, speed, and resource sharing. At the same time, cloud computing faces a number of challenges, which include improved security and privacy models and associated regulatory compliance considerations, interoperability, latency, performance, and reliability concerns, as well as managing the very flexibility that cloud provides.

One of the greatest challenges facing longer-term adoption of cloud computing services is the ability to automatically provision services, effectively manage workload segmentation and portability (i.e., the seamless movement of workloads across many platforms and clouds), and manage virtual service instances, while optimizing use of the resources and accelerating the deployment of new services. Within such a cloud environment, it is also important to be able to work with both cloud-based and enterprise-based applications using a unified approach that can function across existing applications and multiple cloud providers. This includes the ability for opportunistic and scalable provisioning of application services, consistently achieving quality of service (QoS) targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (virtual machines [VMs], application services, storage, and database) for handling sudden variations in service demands. To meet these pressing demands, it is essential to transform the fabric of the current inflexible service delivery models. Transforming the cloud delivery stack requires reliance on what we call knowledge-intensive cloud services. As we explain later in this chapter, knowledge-intensive cloud services need to rely on a framework combining definition and manipulation languages.

This chapter aims to discuss the various cloud delivery model options and provide a perspective on how to improve cloud service delivery and guarantee efficient workload segmentation and resource utilization by relying on knowledge-intensive cloud services. The chapter presents the vision, challenges, and architectural elements for achieving interoperability and supporting dynamic expansion or contraction of capabilities and partitioning of workloads in cloud environments. We begin our discussion by providing a brief overview of cloud computing to improve readability and understanding.


19.2 Cloud Computing Overview

In order to understand the technological direction of cloud computing and evaluate differing cloud service delivery options, it is necessary to understand its operating principles as well as the distinct dimensions of the various cloud offerings.

Defining an evolving term such as cloud computing is a hard task. There are currently several definitions of the term, the most comprehensive of which is the one given by the National Institute of Standards and Technology (NIST). NIST defines cloud computing as [Mell 2009]:

A consumption and on-demand delivery computing paradigm that enables convenient network access to a shared pool of configurable and often virtualized computing resources (e.g., networks, servers, storage, middleware, and applications as services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The aim of offering every element of the IT stack as a service implies that cloud providers can also offer platforms to deliver specialized services while enabling an industrialized and pervasive delivery of these highly specialized services. Users of IT-related services can follow an SOA-like approach that lets them focus on what the services provide rather than how the services are implemented or hosted. SOA is an architecture pattern, while the cloud can be viewed as a target deployment platform for that architecture pattern. In particular, cloud computing provides the ability to leverage on demand new computing resources and platforms that SOA applications require but an organization may not own. The cloud-SOA merger offers unprecedented control in allocating resources dynamically to meet the changing needs of service-based applications, which is only effective when the service level objectives at the application level guide the cloud's infrastructure/platform management layer [Rodero-Merino 2010, Papazoglou 2012].

Cloud computing can be viewed as a holistic ecosystem: a collection of services, applications, information, and infrastructure comprising pools of computing, network, information, and storage resources. These components can be rapidly orchestrated, provisioned, implemented, decommissioned, and scaled up or down, thereby providing an on-demand, utility-like model of allocation and consumption.

The NIST definition states that cloud computing is composed of five essential characteristics, three cloud service delivery models, and four deployment models.

The most commonly accepted characteristics of cloud computing are briefly summarized as follows [Mell 2009]:

• On-demand self-service: Service providers offer clients the ability to provision computing capabilities, such as cloud servers, dedicated storage, hardware load balancers, an online authorization service, some geolocation data, a testing platform, etc., on demand. This way, applications can automatically build and scale across their whole life cycle without requiring human interaction with each provider of all the services the application is using.

• Broad network access: The cloud provider’s resources are available over the network and can be accessed through standard mechanisms.


• Location independent resource pooling: Provider’s resources are pooled and shared among several clients (multitenancy). The client generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, region, or datacenter).

• Rapid elasticity and provisioning: Elasticity implies the ability to scale resources as needed by applications. To the client, the cloud appears to be infinite, and the client can purchase or lease capabilities available for provisioning in any quantity at any time.

• Pay-per-use measured service: Clients consume resources and pay only for resources that they use. To this effect, cloud computing platforms usually employ consumption-based billing mechanisms to charge for cloud service use.

Cloud computing is typically divided into three models of service delivery (commonly referred to as cloud stack layers). These three categories are software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS):

• IaaS is at the lowest layer and is a means of delivering very basic computing capability: machines with operating systems and storage as standardized services over the network. The IaaS layer allows the infrastructure provider to abstract away infrastructure-specific details such as which exact hardware an application is using and where the application is running. IaaS incorporates the capability to abstract resources as well as deliver physical and logical connectivity to those resources, and it provides a set of application programming interfaces (APIs) that allow clients to interact with the infrastructure (a brief API sketch follows this list). Currently, the most prominent example of IaaS is Amazon Web Services (AWS), whose Elastic Compute Cloud (EC2) and Simple Storage Service (S3) products offer bare-bone computing and storage services, respectively.

• Platform as a service (PaaS) is interposed between IaaS and SaaS and refers to an environment where developers build and run an application by using whatever prebuilt components and interfaces that particular platform provides as a service to developers over the web. The PaaS provider facilitates deployment of cloud applications by entirely managing the cloud software development platform (often an application hosting, middleware environment), providing all of the facilities required to support the complete life cycle of building and delivering networked services. The PaaS consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly the application hosting environment configurations. One serious drawback with conventional PaaS offerings is that they are constrained by the capabilities that are available through the PaaS provider and do not allow easy extensibility or customization options. Currently, there exist several examples of PaaS, the most well known being Google App Engine (GAE) and Force.com.


• SaaS is an "on-demand" software application that is owned, delivered, and managed remotely by one or more software providers. This layer builds upon the underlying IaaS and PaaS and provides clients with integrated access to software applications. The provider delivers software based on a single set of common code and data definitions that are consumed in a one-to-many model. This layer provides a self-contained operating environment used to deliver the entire user experience, including the content, its presentation, the application(s), and management capabilities. SaaS provides the most integrated functionality built directly into the offering, with no option for consumer extensibility. Typical examples at the SaaS level are SalesForce.com, which offers CRM applications accessible by subscription over the web, and the Amazon Fulfillment Web Service (Amazon FWS). The most important overlap between SOA and cloud computing occurs at this level of the cloud stack.
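To make the API-driven nature of IaaS concrete, the following minimal sketch provisions a single virtual machine through the AWS EC2 API mentioned above. It assumes the boto3 library is installed and AWS credentials are configured; the region, image ID, and instance type are illustrative placeholders rather than recommendations.

```python
# Minimal sketch: requesting a bare-bones virtual machine from an IaaS
# provider (AWS EC2 via boto3). Assumes boto3 is installed and AWS
# credentials are configured; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)
```

Comparable calls exist for the storage side (e.g., S3), which is what makes it possible to script the provisioning and release of resources without human interaction.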

The cloud computing layers do not simply encapsulate on-demand resources at differing operational levels; they also help define a new application development model. Each layer of abstraction provides numerous possibilities for defining services that can be offered on a pay-per-use basis. These three levels support virtualization and management of differing levels of the computing solution stack.

The cloud model also gives the opportunity to mash up services from a variety of cloud providers to create what is known as a cloud syndication. Cloud syndications are essentially federations of cloud providers whose services are aggregated in a single pool supporting three basic interoperability features: resource migration, resource redundancy, and combination of complementary cloud resources and services [Kurz 2011]. Cloud syndications at the SaaS level are termed "BPaaS" (business process as a service). These comprise a distinct integrative layer above the SaaS layer that reflects the focus on enterprise-specific services. BPaaS allows creating unique end-to-end business processes that are usually syndicated with other external services (possibly provided by diverse SaaS providers). BPaaS requires effective management of cloud resources. It is important to understand that when integrating end-to-end business processes from diverse providers, the resultant ecosystem is limited by the service quality of its weakest component in the stack.

The continuous pressure on IT departments and infrastructure providers to offer computing infrastructure at the lowest possible cost has made them realize that by effectively leveraging the concepts of resource pooling, virtualization, dynamic provisioning, utility, and commodity computing, they can create cost-effective cloud solutions. This has resulted in four major cloud deployment models [Armbrust 2009]:

• The first cloud deployment model where developers leverage applications that are available from and run on public facilities is collectively referred to as the “public cloud” model.

• The second cloud deployment model where developers develop their own “on-premise” or “internal” cloud solutions is collectively referred to as the “private cloud” model.


• The third cloud deployment model concerns developers who integrate and federate services across both the public and private cloud. This model is collectively referred to as the "hybrid cloud" model.

• The fourth and final cloud deployment model is the "community cloud." Here, the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

To ensure guarantees from cloud service providers for service delivery, businesses using cloud services typically enter into service level agreements (SLAs) with the cloud service providers. Although SLAs vary between businesses and cloud service providers, they typically include the required/agreed service level expressed through QoS parameters, the level of service availability, an indication of the security measures adopted by the cloud service provider, and the rates of the services.

19.3 Cloud APIs

In a cloud architecture, modules communicate with each other over APIs, usually by means of web services or REST technologies. A cloud architecture model exposes a set of software interfaces or APIs that developers and customers use to manage and interact with cloud services. Provisioning, management, orchestration, and monitoring are all performed using these interfaces. Cloud architecture offers at least three broad kinds of APIs:

• Cloud functionality interfaces: These are programming interfaces through which developers and consumers interact with providers to request, deploy, reconfigure, administer, and use services. These types of interfaces include managing the security-related aspects of a cloud as well as managing and modifying instances of deployed services [DMTF 2009]. In addition, they include interfaces for image and infrastructure management. These APIs control details such as firewalls, node management, VM and network management, as well as load balancing.

• Data functionality interfaces: These are the channels through which data flow in and out of the cloud. These APIs encompass various forms of cloud data sources, documents, web portals, maps, etc., and are used for reading, writing, updating, and querying data. For instance, most SaaS providers offer API calls to read (and thereby "export") data records (see the sketch after this list). Data functionality APIs also include simple primitives for file storage services, document storage services, and so on.

• Application cloud interfaces: Application cloud APIs provide methods to interface and extend applications on the web. These APIs provide functionality beyond data access. They are hallmarks of most SaaS providers, easing integration of on-premise vendor and home-grown applications. Application cloud APIs connect to applications such as CRM, ERP, financial, billing, desktop productivity, and social media/network applications, and so on. These applications are delivered as SaaS.
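As an illustration of the data functionality interfaces described above, the sketch below exports records from a hypothetical SaaS provider over a RESTful API. The endpoint, token, query parameters, and JSON layout are invented for the example and do not correspond to any particular vendor's API.

```python
# Hedged sketch of a "data functionality" call: exporting records from a
# hypothetical SaaS REST API. Endpoint, token, and response layout are
# illustrative assumptions, not a real provider's interface.
import requests

BASE_URL = "https://api.example-saas.invalid/v1"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"

resp = requests.get(
    f"{BASE_URL}/customers",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"updated_since": "2021-01-01", "page_size": 100},
    timeout=30,
)
resp.raise_for_status()

for record in resp.json().get("items", []):
    print(record.get("id"), record.get("name"))
```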


19.4 Cloud Interoperability

Providers each have their own way of implementing how a user or cloud application interacts with their cloud. This leads to a proliferation of cloud APIs, as rarely, if ever, do two providers implement their clouds in exactly the same way using the same components. The resulting vendor lock-in limits cloud choice, portability, the ability to use cloud services provided by multiple vendors, and the ability to use an organization's own existing datacenter resources seamlessly.

We are already seeing different vertical PaaS stacks popping up everywhere, each incompatible with the others. Almost every cloud has a unique infrastructure for providing network services between servers and applications and between servers and storage. Differences are likely in network addressing, directory services, firewalls, routers, switches, identity services, naming services, security policies, and other resources. This exacerbates the need for interoperability between clouds so that complex business applications developed on clouds are interoperable.

Cloud interoperability refers to the situation where two distinctly separate and independent providers (who, for instance, provide PaaS services) work and orchestrate seamlessly from a customer and integration perspective. This also includes the possibility of "wiring up" PaaS offerings not only with current and modern provider offerings but also with legacy resources and services. Another related dimension of cloud interoperability is intercloud interoperability, which leads to forming cloud federations of private and public clouds using some form of cloud orchestration services [Parameswaran 2009].

An important concept in the cloud interoperation space is that of metadata. When managing large amounts of data with differing requirements, metadata are a convenient mechanism to express those requirements in such a way that underlying data services can differentiate their treatment of the data to meet those requirements.

In the following, we provide a brief overview of the most promising cloud interoperability approaches.

19.4.1 Open Cloud Standards Incubator

DMTF’s cloud efforts are focused on standardizing interactions between cloud environments by developing specifications that deliver architectural semantics and implementation details to achieve interoperable cloud management between service providers and their consumers and developers. For this, DMTF is using the recom-mendations developed by its Open Cloud Standards Incubator initiative. DMTF’s Open Cloud Standards Incubator aim is to aid the industry in addressing challenges that affect the interoperability, portability, and security of cloud computing environ-ments [DMTF 2009].

The environment under consideration by the Open Cloud Standards Incubator includes all of these deployment models. The main focus of the incubator is the management aspects of IaaS, with some work involving PaaS. These aspects include SLAs, QoS, workload portability, automated provisioning, and accounting and billing.

Examples of standardization areas include resource management protocols, data artifacts, packaging formats, and security mechanisms to enable interoperability.


19.4.2 Open Cloud Computing Interface

The Open Cloud Computing Interface (OCCI) comprises a set of open, community-led specifications (http://occi-wg.org/). OCCI is a RESTful protocol and API for all kinds of management tasks, providing commonly understood semantics and syntax in the domain of customer-to-provider infrastructure management. OCCI was originally initiated to create a remote management API for IaaS model-based services, allowing for the development of interoperable tools for common tasks including deployment, autonomic scaling, and monitoring. It has since evolved into a flexible API with a strong focus on interoperability while still offering a high degree of extensibility. The current release of OCCI is suitable to serve many other models in addition to IaaS, including PaaS and SaaS.

The OCCI specification is divided into three parts:

• OCCI core model defines a representation of instance types, which can be manipulated through an OCCI rendering implementation. It is an abstraction of real-world resources, including the means to identify, classify, associate, and extend those resources. A fundamental feature of the OCCI core model is that it can be extended in such a way that any extension will be discoverable and visible to an OCCI client at run-time. An OCCI client can connect to an OCCI implementation using an extended OCCI core model, without knowing anything in advance, and still be able to discover and understand, at run-time, the various resource and link subtypes supported by that implementation.

• OCCI infrastructure defines how an OCCI implementation can model and implement an IaaS API offering by utilizing the OCCI core model. This API allows for the creation and management of typical resources associated with an IaaS service, for example, creating a "compute" instance and a "storage" instance and then linking them with a StorageLink. The StorageLink type represents a link from a resource to a target storage instance. This enables a storage instance to be attached to a compute instance, with all the prerequisite low-level operations handled by the OCCI implementation.

• OCCI HTTP rendering defines how to interact with the OCCI core model using the RESTful OCCI API. The document defines how the OCCI core model can be communicated, and thus serialized, using the HTTP protocol (a request sketch is given at the end of this subsection).

OCCI can be used in conjunction with other standards to provide for extended interoperability. It can, for instance, connect to the Storage Networking Industry Association's (SNIA) cloud storage standard, the Cloud Data Management Interface (CDMI), to provide enhanced management of cloud computing storage and data [SNIA 2010].
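To give a feel for the RESTful style of the OCCI HTTP rendering, the sketch below creates a compute resource by POSTing OCCI category and attribute headers to a provider's entry point. The endpoint is hypothetical, and the header layout paraphrases the text/occi rendering; the OCCI specifications remain the authoritative reference for the exact syntax.

```python
# Hedged sketch of an OCCI-style request that asks a provider to create
# a compute resource. The endpoint is hypothetical; header names and
# attribute syntax paraphrase the OCCI HTTP rendering specification.
import requests

OCCI_ENDPOINT = "https://occi.example-cloud.invalid/compute/"  # hypothetical

headers = {
    "Content-Type": "text/occi",
    "Category": ('compute; '
                 'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                 'class="kind"'),
    "X-OCCI-Attribute": ('occi.compute.hostname="demo-vm", '
                         'occi.compute.cores=2, '
                         'occi.compute.memory=4.0'),
}

resp = requests.post(OCCI_ENDPOINT, headers=headers, timeout=30)
resp.raise_for_status()

# A conforming implementation typically returns the URI of the newly
# created compute instance in the Location header.
print("Created:", resp.headers.get("Location"))
```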

19.4.3 Cloud Data Management Interface

The SNIA has completed a standard aimed at making it easier for enterprises to move data around various clouds on the basis of common data storage interfaces. A data storage interface is an interface interposed between an application (or service) and the


underlying set of services that enable reading and writing data, among other functions. SNIA categorizes and describes the cloud services that live behind this interface as well as those that consume it.

SNIA’s Cloud Storage Reference Model (CSRM) uses multiple types of cloud data storage interfaces that are able to support both legacy and new applications [SNIA 2010]. All of the interfaces allow storage to be provided on demand, drawn from a pool of resources. The capacity is drawn from a pool of storage capac-ity provided by storage services. The data services are applied to individual data elements, as determined by the data system metadata. Metadata specify the data requirements on the basis of individual data elements or on groups of data elements (containers). Storage system metadata are produced and interpreted by the cloud offering or basic storage functions, such as modification and access statistics, and for governing access control.

The CSRM suggests employing CDMI as the functional interface that applications may use to create, retrieve, update, and delete data elements from the cloud. As part of this interface, the client will be able to discover the capabilities of the cloud storage offering and to use this interface to manage containers and the data that are placed in them. This interface may also be used by administrative and management applications to manage containers, domains, security access, and monitoring/billing information, even for storage that is functionally accessible by legacy or proprietary protocols. The capabilities of the underlying storage and data services are exposed so that clients can understand the offering. The CDMI specification uses RESTful principles in the interface design where possible.

In the SNIA model, data system metadata offer guidelines to the cloud storage system on how to provide storage data services for data managed through the CDMI interface. Such metadata contain information about the number of bytes stored in the data object, the time when a data object was created or last modified, geopolitical identifiers specifying a region where the object is permitted to or not permitted to be stored, data retention periods, desired encryption algorithms, desired maximum data rate on retrieve, and so on.
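The sketch below illustrates the style of the CDMI functional interface discussed above: a data object is stored with data system metadata that guide placement and retention. The endpoint and credentials are hypothetical, and the headers, metadata names, and JSON body paraphrase the SNIA CDMI specification, which should be consulted for the exact required fields.

```python
# Hedged sketch of storing a data object through a CDMI-style interface.
# Endpoint and credentials are hypothetical; content types, version
# header, and metadata names paraphrase the SNIA CDMI specification.
import requests

CDMI_ROOT = "https://cdmi.example-storage.invalid"  # hypothetical endpoint

resp = requests.put(
    f"{CDMI_ROOT}/reports/q1-summary.txt",
    headers={
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
        "X-CDMI-Specification-Version": "1.1.1",
    },
    json={
        "mimetype": "text/plain",
        # Data system metadata guiding the storage offering, as above.
        "metadata": {
            "cdmi_geographic_placement": ["EU"],
            "cdmi_retention_period": "2021-01-01T00:00:00/2026-01-01T00:00:00",
        },
        "value": "Quarterly usage summary ...",
    },
    auth=("user", "secret"),
    timeout=30,
)
resp.raise_for_status()
print("Stored object ID:", resp.json().get("objectID"))
```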

Cloud service life cycle focuses on exposing the actual cloud service functionality, which is one or more aggregated resources exposed as a single unit of management, and on managing the life cycle of a service in a distributed multiple-provider environment in a way that satisfies service level agreements with its customers [Breiter 2009].

19.5 Cloud Service Life Cycle

The idea of managing applications throughout their complete life cycle is not new. The IT Infrastructure Library (ITIL) is a set of best practices that are widely accepted to manage IT services. These include operation and improvement of IT services [Cartlidge 2007]. Applications deployed on top of the cloud stack are also associated with their own life cycle, migrating from development to testing, staging, and production environments, with frequent rollbacks. One of the purposes of


the Open Cloud Standards Incubator is to address the characteristics of the cloud service life cycle. The DMTF identified the following aspects of the life cycle of a cloud service:

• Description of the cloud service in a template
• Deployment of the cloud service into the cloud
• Offering of the service to its consumers
• Consumer entrance into contracts for the offering
• Provider operation and management of instances of the service
• Decommissioning of the service offering

The DMTF's life cycle can be seen as a subset of a more complete software life cycle, which is presented in Figure 19.1. In [Stankov 2010], the authors present an analysis of how ITIL and global software development and delivery (which refers to the emergence of Internet-based collaborative environments and the expansion of the global delivery of IT services) can align the cloud application life cycle with the environment (platform, infrastructure, and support services) required at every phase of the life cycle.

[Figure 19.1 Mapping of a typical software life cycle to elements of the cloud stack. The figure relates the software life cycle phases (analysis, design, resource selection, implementation, tests and documentation, operation and maintenance) and the DMTF cloud service life cycle (1, template description by the developer; 2, offering on some SLA terms by the provider; 3, subscription and contract by the consumer; 4, deployment by the provider; 5, operation and maintenance; 6, decommissioning by the provider) to the SaaS, PaaS, and IaaS layers, covering application conception, design, development, test, patching, maintenance, runtime adaptation, security, third-party integration, and virtual and/or physical machine management (provisioning, scheduling, configuration, and deployment).]

In the DMTF's description phase, all information required to create a certain type of cloud service is captured in a service template. A cloud service template contains all the knowledge that is needed to instantiate a cloud service and to manage the resulting cloud service instances. After a developer has created the components of a service, the developer begins the process of making it available to cloud consumers by creating a template that defines the content of and interface to the service. This cloud service template describes the service in a generic manner, completely independently of how the service is exposed to potential service subscribers. For example, it does not include any pricing information or information about datacenter-specific resources required for hosting instances of the service. This template can be registered in a marketplace for other developers or providers to integrate it in their own applications or offers. This, in turn, implies the need for templates to be searched and processed so that a user could, for instance, just "plug" two of them together to get a running service or perform basic set-theoretic operations on the templates such as union, intersection, and difference.
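The set-theoretic view of templates suggested above can be illustrated with a toy example. The sketch below represents two service templates as plain Python structures and "plugs" them together via a union-style combination; the field names are hypothetical and do not follow any DMTF template format.

```python
# Illustrative sketch only: toy templates combined with set operations.
# Field names ("capabilities", "requires") are invented for the example
# and are not part of any standardized service template format.
web_tier = {
    "capabilities": {"http-frontend", "session-cache"},
    "requires": {"java-runtime", "load-balancer"},
}
data_tier = {
    "capabilities": {"relational-db", "backup"},
    "requires": {"block-storage", "java-runtime"},
}

def combine(a, b):
    """Union of two templates: 'plugging' them together."""
    capabilities = a["capabilities"] | b["capabilities"]
    # Requirements already met by the other template's capabilities
    # drop out of the combined template.
    requires = (a["requires"] | b["requires"]) - capabilities
    return {"capabilities": capabilities, "requires": requires}

shared = web_tier["requires"] & data_tier["requires"]  # intersection
print(combine(web_tier, data_tier))
print("Common requirements:", shared)
```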

In the offering phase, a provider who creates a service offering for consumption by one or more consumers customizes the cloud service template. This may include aspects such as the prices associated with the respective offering and specific technical information, such as VMs, Internet protocol address ranges, and storage requirements for hosting instances of this specific offering. An offering is the unit that a consumer requests by establishing a contract with the provider for that offering.

When a cloud service offering is published in the service catalog, the subscription and contracting phase begins. From an offering catalog, a requester can select an offering and then specify the necessary parameters for the cloud service instance, such as capacity, availability, performance, and duration. The provider and customer then agree on the SLA terms that stipulate the use of this particular service offering. Subsequently, the provider provisions a service instance in the service provision phase that satisfies the constraints defined in the SLA offering, and the consumer uses the instance as defined in the contract.

The provision phase incorporates run-time monitoring and maintenance functionality where a provider manages a deployed service and all its resources, including monitoring resources and notifying the consumer of critical situations. After the contract is terminated, the provider decommissions the offering by reclaiming the service instance and its supporting resources.

For instance, providers of PaaS services can offer their services, which can be integrated with services from some PaaS library. This approach frees developers from the burden of reimplementing frequently needed pieces, such as authentication or data storage. The presence of a marketplace with appropriate selection tools is, therefore, essential in the cloud ecosystem.

Having PaaS online libraries integrated in a PaaS application is complemented by the presence of online tools assisting developers in this integration and in developing and debugging new pieces of code. In this implementation phase, IaaS resources can be instantiated by the developers to host online development environments or unit testing tools. The tests can also make use of SaaS test suites or software for enabling regression tests.

State-of-the-art infrastructure management tools offer a large set of automation packages and a platform for programming workflows for higher-level provisioning operations. Indeed, a cloud workflow system can be regarded as a type of PaaS that facilitates the automation of distributed large-scale applications in the cloud. Being a PaaS, a cloud workflow system can get access to other cloud services. This way, the workflow service can control the underlying IaaS resources or control the workflow of the PaaS and SaaS components integrated in the cloud application.


19.6 Analysis of the Cloud Delivery Model Landscape

The three conventional cloud delivery models (SaaS, PaaS, and IaaS) are constrained by the capabilities available at their delivery level and do not allow for easy extensibility or customization options or any level of transparency on their underlying layers in order to endow developers and providers with more powerful and configurable service delivery options.

Arguably, IaaS clouds are the most explored layer in the cloud stack. IaaS allows for a certain reconfiguration potential for the VMs and their (virtual) networks. Application components deployed on an IaaS cloud may exhibit complex dependencies on the configuration of multiple datacenter network, middleware (e.g., PaaS services used for the development and maintenance of the applications embedded in the VMs), and related application resources (external SaaS pieces integrated with the application). IaaS-level cross-configurations of the VMs comprising a specific cloud service are currently not possible, and cross-machine configurations need to be inferred at deployment time or changed once the VMs are running, instead of remaining statically allocated as in today's IaaS solutions. Moreover, VM images, VM appliance descriptions, or even the APIs for exposing IaaS capabilities are hardly interoperable and require a lot of adaptation work to interoperate.

PaaS clouds are supposed to ease programmers' and administrators' lives along the entire software life cycle. In general, the PaaS is supposed to assist in all these processes. One serious drawback with contemporary PaaS offerings is that they are constrained by the capabilities that are available through the PaaS provider and do not allow easy extensibility or customization options. To address some of these problems, Pandey et al. propose a PaaS service in charge of coordinating some of the tasks in the VM life cycle: selection, integration, and deployment [Pandey 2011]. They propose a "cloud market maker" service in charge of negotiation with multiple service providers to match VM requirements to VM capabilities. Once a match is found, the required resources are allocated. A market directory service is needed to maintain a catalog of services. Unfortunately, this proposal leaves a few other unattended stages in the entire cloud application life cycle (e.g., operations, redeployment, development, maintenance, etc.), is statically defined (inflexible to cope with changes in long-lived applications), and heavily depends on the visibility that other service providers endow their applications with. For instance, the market maker cannot change the IaaS resources assigned to a Google Application Instance service.

In summary, current IaaS/PaaS offerings are constrained by their underlying VM capabilities and do not allow for easy extensibility, mashup, or customization options at the PaaS consumer (or developer) level. They also expose little (if any) configuration capabilities of the underlying IaaS layer, thereby lacking configurable automated elasticity. PaaS middleware and the applications built on it are not portable across different public and private clouds and cloud providers. Furthermore, many hosted middleware solutions are not integrated with any IaaS management.

Table 19.1 provides an overview of some of the most well-known IaaS and PaaS approaches along the dimensions of flexibility, life cycle management, scalability, security, and data management. As shown in Table 19.1, advanced IaaS solutions include features for custom control of their automated scalability features.


Table 19.1 Comparison of IaaS and PaaS Approaches

Amazon SimpleDB, SQS, SNS, RDS, and AutoScale (IaaS-PaaS)
  Flexibility: modular aggregation of separate services
  App life cycle management: none
  Scalability: horizontal and vertical scaling plus load balancing
  Security: key-based authentication to access the different modules
  Data management: key-value and relational DBs
  Technology specification: web, REST, and SOA interfaces

Azure (PaaS)
  Flexibility: modular aggregation of separate services
  App life cycle management: app and data marketplace
  Scalability: horizontal and vertical scaling plus load balancing
  Security: federated identity and access control through rules
  Data management: BLOB, table, queues, caching, and relational DB
  Technology specification: web, REST, and SOA interfaces

Amazon Elastic Beanstalk (Tomcat app server-based) (PaaS)
  Flexibility: transparency into the underlying services (VM, SNS, etc.)
  App life cycle management: Tomcat-derived development support (concurrency, logging, etc.)
  Scalability: horizontal and vertical scaling plus load balancing
  Security: key-based authentication plus Tomcat features
  Data management: not native; pluggable DB modules and transaction managers
  Technology specification: WAR file

GAE (PaaS)
  Flexibility: some modules can be integrated by invoking GAE APIs (e-mail, memcache, user authentication, sessions, or image manipulation)
  App life cycle management: none
  Scalability: none
  Security: Google account-based authentication and sandboxing
  Data management: high replication and master/slave datastores (easier to use: integrated with JDO/JPA or with GQL) and memcache
  Technology specification: servlet technology

AppScale (PaaS)
  Flexibility: some modules can be integrated by invoking GAE APIs (user authentication, mail, etc.)
  App life cycle management: "workflow" and some IaaS constraints
  Scalability: none
  Security: Google authentication
  Data management: datastore
  Technology specification: Neptune: servlet, map-reduce, MPI technology


Several modules can be activated and used, but they leave little room for defining custom security mechanisms, data management options, or support for a complete cloud application life cycle.

Among the PaaS solutions, more security, life cycle, and data management capabilities are exposed (some of them directly derived from the application containers underlying most PaaS solutions). A salient example with regard to its configurability and its transparency on underlying layers is Amazon Elastic Beanstalk. Elastic Beanstalk lets users keep some visibility over the underlying IaaS resources related to the container: they can select the most appropriate Amazon VM instance type, the database, and storage options; choose the Availability Zone(s); enable the HTTPS protocol on the load balancer; run other application components, such as a memory caching service, side by side in Amazon's VMs; and control the container scalability by using Amazon's scaling and monitoring tools for VMs (AutoScale and CloudWatch). In addition, Neptune offers the option of specifying the number of nodes supporting a given service [Bunch 2011]. These are detailed in the related work section in the following.

SaaS is not an exception to the interoperability and lack-of-configurability problems. The SaaS client is presented with a complete application, for example, CRM, which cannot be customized easily at the process level and cannot be simply combined with external services from other providers to offer improved functionality to the client or developer.

The cloud life cycle and the cloud-enabled marketplace are, then, far from being realized, since cloud vendors expose monolithic services with little (if any) configuration/customization options, preventing services from diverse cloud service tiers from being linked together in a synergistic manner to address application needs (e.g., "scale my PaaS for authentication when my IaaS-embedded application receives too many users").

There is clearly a need to mash up services from a variety of cloud providers at the different delivery levels to create a cloud ecosystem. However, this kind of tailored integration of cloud services needs to fit specific business needs, using a mixture of SaaS, PaaS, and IaaS, and must also satisfy QoS requirements. This integrated ecosystem can be realized by leveraging and weaving together all aspects of infrastructure, platform, and application services to provide a wide variety of vertically aligned business solutions, and it falls under the purview of BPaaS [Papazoglou 2011a]. An example of how the cloud application life cycle will influence the current mobile application market is presented in the following.

In current mobile application markets, developers register, upload their applications, and publish them, making them visible and available to mobile users. Mobile users connect to the marketplace, search for applications, and get them downloaded to their devices. While this model may be simple and may work very well for this segment, corporate applications are often more demanding. They are typically composed of several complex and distributed pieces that need to be configured so that they can work together. These composing pieces often belong to separate vendors and present heterogeneous packaging and distribution mechanisms.

Also, these applications are often server elements (mobile market applications can be seen either as clients or as stand-alone applications), which require a powerful underlying infrastructure (IaaS) that scales with the application load under


predefined and varying conditions. Mobile operating systems expose APIs that reveal services assisting developers of client applications with common tasks. These APIs are typically a service in the device itself. In a corporate market, developers may need to tie client application installation on the device to server instantiation in the available IaaS service that is closest to the user. Similarly, general applications may require interaction with "online" APIs exposing some frequently used services (e.g., GAE's data access object or authentication PaaS services) or other online services developed by third parties (SaaS).

The complex cloud ecosystem enabled by these new possibilities introduces complex interservice dependencies, a huge space of solutions, and the need for holistic cloud application solutions. These call for additional knowledge from all the actors (and supporting mechanisms exposed by the cloud platform). Here, the concepts of reuse, refinement, and dynamic (often automated) adaptation for dealing with cloud applications and their physical/platform configurations can be fused into the need for expressing, storing, querying, and manipulating metadata (knowledge) in a formalized manner. Knowledge-based cloud application engineering thus becomes a first-class citizen for constructing cloud ecosystems. This topic and its relation to traditional knowledge-based engineering are dealt with in the next section.

19.7 Knowledge-Intensive Cloud Services

There is indeed a need for structuring the information exchanged by service components (across all the layers of the cloud stack) to make proper use of a cloud service during its life span. This information is essential and spans an enormous breadth of subject areas. It may originate from analysis and design tools (e.g., data from a marketplace for programmers to reuse previously developed code or integrate online APIs in their development process) all the way to the manner in which a given component consumes the monitoring data exposed by the IaaS provider. This range of information (knowledge) is currently widespread and compartmentalized per layer; for example, operational knowledge about a particular layer (e.g., PaaS), if it exists, stays confined strictly within this layer. It is by no means cross-correlated with knowledge originating from other layers, nor even exposed to components in other layers, which could otherwise make more informed decisions and thus increase the overall utility of the cloud application. There is therefore a pressing need not only for exposing operational knowledge at all levels of the cloud stack but also for organizing and determining the visibility of the available information. This is due to the fact that many service providers require part of this information to remain private to avoid security or privacy vulnerabilities [Vaquero 2011].

The idea of knowledge-based cloud services finds its roots in knowledge-based engineering, and it concerns structuring metadata about cloud services. There is a very real need for interoperability and portability specifications to include all the relevant application, platform, and network infrastructure metadata necessary to move parts of an application from one cloud to another and to allow invoking services from diverse clouds. The truth is that a comprehensive solution to the cloud metadata problem, apart from the practical efforts of DMTF, is currently missing.


Cloud-intensive knowledge may relate to service applications, technical capabilities, or operational-level details and may serve to provide answers to questions such as the following:

• Where and how is a service implemented?

• What kind of customer service and response time does a service guarantee?
• What type of SLAs or security policies does a service come endowed with?
• What are the technical specifications that a platform supports?
• Is a platform an open-source one or a proprietary, closed one?
• Does a platform bill a fixed or variable cost for capacity?

Such knowledge can be captured, abstracted, and stored in some medium, such as DMTF's service template, and can be ultimately used to instantiate a cloud service and to manage it during its life cycle. Furthermore, this knowledge must be organized and appropriately utilized and combined with other service knowledge to fulfill the vision of a holistic and efficient cloud application life cycle management.

The need to provide a holistic and abstracted view of all cloud service and resource-specific knowledge, which possibly spans disparate cloud environments, is critical in order to realize the BPaaS model, which we presented in Section 19.2. Such efforts call for defining an appropriate mechanism for breaking the cloud monolith and building a more flexible system that preserves the essential properties of the cloud: abstraction, scalability, the pay-as-you-go model, and the illusion of infinite resources underneath [Papazoglou 2011a]. This topic will concern us throughout this chapter. We start by identifying the requirements and efforts for describing, organizing, and manipulating cloud-intensive knowledge.

19.8 Analysis of Cloud Service Domain Languages

Development of cloud applications that utilize knowledge-intensive cloud services requires an effective description of cloud service-related metadata, an effective organization of the required knowledge, and an associated support environment that supports the description and manipulation of service-related metadata. We collectively refer to the language approaches that contribute to this environment as cloud service domain languages.

Cloud service domain languages interlace three interrelated components (a hypothetical sketch of how the three roles fit together follows this list):

1. A cloud service declarative definition language (CSDL), which provides the necessary abstraction constructs, called blueprints, to describe the operational, performance, and capacity requirements of infrastructure, platform, and application third-party services

2. A cloud service constraint language (CSCL), which specifies any explicitly stated rule or regulation that prescribes any aspect of a cloud service defined in CSDL and verifies the validity of rule combinations

3. A cloud service manipulation language (CSML), which provides a set of operators for manipulating, comparing, and redefining CSDL blueprints, without compromising backward compatibility
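The sketch below is purely hypothetical and is not the Blueprint notation introduced later in this chapter; it only illustrates how the three roles could divide responsibilities, with a CSDL-style blueprint modeled as a dictionary, a CSCL-style constraint as a predicate over blueprints, and a CSML-style operator as a function from blueprints to blueprints.

```python
# Hypothetical sketch of the three language roles; field names and
# values are invented and do not reproduce any real CSDL/CSCL/CSML.
blueprint = {                      # CSDL role: declarative description
    "service": "order-processing",
    "layer": "PaaS",
    "capacity": {"max_requests_per_s": 500},
    "performance": {"p95_latency_ms": 200},
}

def latency_constraint(bp):        # CSCL role: explicitly stated rule
    return bp["performance"]["p95_latency_ms"] <= 250

def scale_capacity(bp, factor):    # CSML role: manipulation operator
    updated = dict(bp, capacity=dict(bp["capacity"]))
    updated["capacity"]["max_requests_per_s"] *= factor
    return updated                 # the original blueprint is untouched

bigger = scale_capacity(blueprint, 2)
assert latency_constraint(bigger)
print(bigger["capacity"])
```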


In the following, we shall briefly review R&D activities related to cloud service domain languages. We shall commence by reviewing research work on cloud service definition languages.

19.8.1 Overview of Cloud Service Definition Languages

As currently no definition language exists that combines IaaS, PaaS, and SaaS service elements, we shall divide the work related to cloud template definition languages into three sections, each of which reflects activities in the corresponding cloud stack layer. Cloud service template languages target the definition of useful knowledge that extends from lower-level cloud interoperability challenges, centered around network addressing, multicast enablement, and VM techniques, to the higher-level interoperability needs of software services, for example, web or RESTful services.

19.8.1.1 IaaS Definition Languages

The description mechanisms for IaaS clouds can be categorized as "template-based" and "model-driven" [Papazoglou 2011a].

19.8.1.1.1 Template-Based Approaches

IaaS users who employ template-driven approaches, for example, either end users or PaaS providers, typically package the software stack (operating systems, middleware, and service components) in one or more VMs. However, each IaaS provider exposes its own interface and lets users exploit cloud capabilities and define their own services [Youseff 2008, Galán 2009]. This fact certainly complicates interoperability between clouds at the same layer of the cloud stack and, more so, across layers.

In order to solve this interoperability problem, DMTF uses templates that help manage cloud resources and support the cloud service life cycle by, for instance, describing the way in which a cloud offering is presented and consumed [DMTF 2009]. The service offering is abstracted from the specific type of cloud resources offered, and service templates are used to describe in a general form what a cloud provider can offer (see also Section 19.5). The cloud service template is then mapped onto the specifics of the provider's technology and constraints.

One of the most prominent initiatives in DMTF, the Open Virtualization Format (OVF) [OVF 2010], describes a specification for software packaging and distribution to be run in VMs. OVF is a platform-independent, efficient, extensible, and open packaging and distribution format for VMs. OVF is virtualization-platform neutral, while also enabling platform-specific enhancements to be captured.

There are several extensions of OVF that address low-level interoperability problems. Galán et al. consider configuration parameters (e.g., hostnames, IP addresses, and other application-specific parameters) for software components included in the VMs as one of the capabilities that should be exposed by an IaaS provider [Galán 2009]. Horizontal VM scalability relying on noninfrastructure metrics is also briefly dealt with in this publication. In a continuation of this work, the authors describe a system for more complete application life cycle management on top of an IaaS cloud [Rodero-Merino 2010]. However, all OVF-based approaches are very restrictive as they focus on specific points of the application life cycle: definition and deployment


time configuration. The fact that the application horizontally scales and load balances its VMs cannot be considered beyond the configuration step.

Previous approaches focused on the computing part of the infrastructure being exposed. One of the important missing bits in most IaaS representations is the network. Bernstein et al. [Bernstein 2009] mention a series of protocols that could help to enable the exposure of communications as another IaaS service. A wealth of network description languages (e.g., the Network Description Language) exist that could fit in here. However, the lack of maturity in the cloud networking arena places the analysis of network description languages beyond the scope of this chapter.

The work of Bernstein and Vij concentrates on proposing a layered set of protocols and formats (called intercloud protocols) as well as a set of common mechanisms that are needed both inside the clouds and in between the clouds to resolve the IaaS interoperability challenge [Bernstein 2010]. The authors propose the use of intercloud directories as mediators for enabling connectivity and collaboration among disparate cloud providers. To facilitate collaboration among disparate heterogeneous cloud environments, they use the Extensible Messaging and Presence Protocol (XMPP), which forms a sound basis for asynchronous communication. To ensure that the requirements of an intercloud-enabled cloud provider are correctly matched to infrastructure capabilities in an automated fashion, the publication proposes the use of a declarative semantic model to capture both the requirements and constraints of IaaS computing resources. For this purpose, they use the Resource Description Framework (RDF) language to specify IaaS resources and SPARQL, an SQL-like language, for querying and manipulating semantic information [Bernstein 2010].
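A small example may help make the RDF/SPARQL idea tangible. The sketch below uses the rdflib library with an invented vocabulary (ex:Compute, ex:cores, ex:memoryGiB); it is not the intercloud ontology of Bernstein and Vij, only an illustration of declaring resources in RDF and matching requirements with a SPARQL query.

```python
# Hedged sketch of describing IaaS resources in RDF and querying them
# with SPARQL via rdflib. The ex: vocabulary is invented for this
# example and is not the Bernstein and Vij intercloud model.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/cloud#> .
ex:node1 a ex:Compute ; ex:cores 4 ; ex:memoryGiB 16 .
ex:node2 a ex:Compute ; ex:cores 2 ; ex:memoryGiB 4 .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Find compute resources that satisfy a requester's minimum needs.
QUERY = """
PREFIX ex: <http://example.org/cloud#>
SELECT ?node WHERE {
    ?node a ex:Compute ;
          ex:cores ?c ;
          ex:memoryGiB ?m .
    FILTER (?c >= 4 && ?m >= 8)
}
"""

for row in g.query(QUERY):
    print("Matching resource:", row.node)
```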

In concluding this subsection, it is probably worth mentioning the first holistic integration approach: the Virtual private eXecution infrastructure Description Language (VXDL). VXDL allowed for an organized aggregation of heterogeneous computing and communication resources [Koslovski 2008].

19.8.1.1.2 Model-Driven Approaches

Model-driven approaches treat models of the system as the primary artifacts throughout all steps of the software life cycle (design, implementation, deployment, test, and maintenance).

Model-driven approaches have been employed for automating deployment on top of IaaS clouds. Goldsack et al. [Goldsack 2009] propose a declarative configuration framework (SmartFrog) in which component descriptions are specified as ordered hash tables. The provided description is checked against a data model representing the life cycle status of the component, and inferences are made as to what changes are required to take the component from its current status into the one that matches the hash table declaration. Configurations and descriptions may contain data that come from different stages of the processing of a description. There may be data that are known in advance, provided by the writer as part of the description, or data that may only be known much later, either during the processing of the description or at run-time. This enables a full delegation approach with dynamic reference resolution.

A model-based approach in which domain experts construct virtual appliances (virtual images with prebuilt configuration points) is proposed in [Konstantinou 2009]. The virtual appliance model in [Konstantinou 2009] consists of two parts: packaging and composition. Packaging includes configuration attributes of the different products to be installed on the image, while composition determines how the appliance can be connected to other appliances. The proposed model declaratively exposes configuration points that allow appliance models to be composed into solution models (using simple aggregation and cross-configuration of the appliances). This can be further customized by including infrastructure-specific requirements, which may affect and drive VM placement and network configuration; the result is referred to as a cloud-specific virtual solution deployment plan. In a similar manner, [Chieu 2010] describes a solution-based provisioning mechanism using composite appliances to automate the deployment of complex application services on an IaaS cloud. A key element for the composition of virtual appliances in these two publications is the virtual port. This is a typed object encapsulating all the properties required to effect a specific type of communication (it captures how images can be linked).

Virtual ports are very similar to the service endpoints presented by [Collazo-Mojica 2010]. This work exposes a visual model for service specification that resembles traditional web mashups but relies on an ad hoc XML format.
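To make the composition idea more tangible, the ad hoc XML below sketches how a virtual appliance might expose a configuration point and typed virtual ports. The element and attribute names are invented for illustration only and do not correspond to the actual formats used in [Konstantinou 2009] or [Collazo-Mojica 2010].

<!-- Purely illustrative ad hoc XML; not the schema of the cited works -->
<appliance id="web-tier" image="tomcat-appliance.ovf">
  <configurationPoints>
    <!-- A value to be filled in at composition or deployment time -->
    <parameter name="db.connection.url" required="true"/>
  </configurationPoints>
  <virtualPorts>
    <!-- Typed endpoints capturing how this image can be linked to other appliances -->
    <port name="database" type="jdbc" direction="requires"/>
    <port name="http" type="http" direction="provides" number="8080"/>
  </virtualPorts>
</appliance>

Composing two appliances then reduces to wiring a "requires" port of one to a compatible "provides" port of another and resolving the remaining configuration points.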

Table 19.2 summarizes characteristics of the most important works identified relating to cloud service definition languages.

19.8.1.2  PaaS Definition Languages

The variety of services that can be exposed a la PaaS makes it quite complicated to get a unified view of the models and templates employed in describing these capabilities. The reader should take into account that a workflow engine and a service marketplace can both be PaaS services, yet they are inherently different in the data they require and present. It is also worth mentioning that, while the literature is rich in attempts at describing IaaS systems, PaaS-related work is much scarcer.

In this subsection, we shall concentrate on the approaches relevant to PaaS systems, which are compared and summarized in Table 19.3. We shall examine commercial approaches (Azure and AWS Elastic Beanstalk, together with the underlying CloudFormation service) as well as an experimental system (AppScale).

The Windows Azure Platform (http://www.microsoft.com/windowsazure/) is a Microsoft cloud platform used to build, host, and scale web applications through Microsoft datacenters. Azure offers an imperative SOA- and REST-style procedure for issuing basic CRUD operations using a series of predefined payloads in the HTTP requests. It defines a set of HTTP headers in its requests and responses that vary widely across the services offered by this PaaS. Information on how these requests are represented internally remains proprietary.

Table 19.2  Summary of Research Work in Service Definition Languages

Approach                       Operational Service    Performance-Oriented     Resource       Policies
                               Description            Service Capabilities     Utilization
Rodero-Merino et al. (2010)    OVF                    Ad hoc XML               Ad hoc XML     N/A
Bernstein and Vij (2010)       RDF                    RDF                      RDF            RDF
Koslovski et al. (2008)        VXDL                   VXDL                     N/A            N/A
Konstantinou et al. (2009)     OVF                    Ad hoc XML               N/A            Ad hoc XML
Collazo-Mojica et al. (2010)   Ad hoc XML             Ad hoc XML               N/A            Ad hoc XML
Goldsack et al. (2009)         Ordered hash tables    Ordered hash tables

AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) is a PaaS solution to quickly deploy and manage applications in the AWS cloud. Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, autoscaling, and application health monitoring. Elastic Beanstalk leverages AWS services such as Amazon EC2, Amazon Simple Storage Service (S3), Amazon Simple Notification Service, Elastic Load Balancing, and Autoscaling to deliver the same highly reliable, scalable, and cost-effective infrastructure.

https://elasticbeanstalk.us-east-1.amazon.com/?ApplicationName=SampleApp
&Description=Sample%20Description
&Operation=CreateApplication
&AuthParams

Listing 19.1 Sample Beanstalk request

<CreateEnvironmentResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <CreateEnvironmentResult>
    <VersionLabel>Version1</VersionLabel>
    <Status>Deploying</Status>
    <ApplicationName>SampleApp</ApplicationName>
    <Health>Grey</Health>
    <EnvironmentId>e-icsgecu3wf</EnvironmentId>
    <DateUpdated>2010-11-17T03:59:33.520Z</DateUpdated>
    <SolutionStackName>32bit Amazon Linux running Tomcat 7</SolutionStackName>
    <Description>EnvDescrip</Description>
    <EnvironmentName>SampleApp</EnvironmentName>
    <DateCreated>2010-11-17T03:59:33.520Z</DateCreated>
  </CreateEnvironmentResult>
  <ResponseMetadata>
    <RequestId>15db925e-f1ff-11df-8a78-9f77047e0d0c</RequestId>
  </ResponseMetadata>
</CreateEnvironmentResponse>

Source: aws.amazon.com

Listing 19.2 Sample Beanstalk response to a request to create an environment for an application

Table 19.3  Comparison of PaaS Definition Approaches

Platform         API Type     Language/Payload
Azure            Imperative   Ad hoc XML
Beanstalk        Imperative   Ad hoc XML
CloudFormation   Imperative   Ad hoc JSON
AppScale         Imperative   Ad hoc XML, Ruby metalanguage, and key/value pairs

We shall use representative examples based on AWS Elastic Beanstalk for illustration purposes. The code in Listing 19.1 shows a sample request to create an application on top of Beanstalk. As can be observed, the user can configure it specifically and independently of all other applications. The specific behavior of the application depends on several elements underneath this top level, such as the environment or the underlying instances and how they scale and are load balanced. A sample environment is shown in Listing 19.2, while Listing 19.3 illustrates how users can "attach" S3 buckets to applications running on top of one of Amazon's exposed Tomcat application servers.

<CreateStorageLocationResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <CreateStorageLocationResult>
    <S3Bucket>elasticbeanstalk-us-east-1-780612358023</S3Bucket>
  </CreateStorageLocationResult>
  <ResponseMetadata>
    <RequestId>ef51b94a-f1d6-11df-8a78-9f77047e0d0c</RequestId>
  </ResponseMetadata>
</CreateStorageLocationResponse>

Source: aws.amazon.com

Listing 19.3 Sample specification of an S3 bucket to be used by the application

Beanstalk relies on Amazon's CloudFormation (http://aws.amazon.com/de/cloudformation/). CloudFormation introduces the concept of a template, a JSON-format file that describes all the AWS resources needed by the application, and of a stack, the set of AWS resources that are created and managed as a single unit when a template is instantiated. It also allows these resources to be provisioned in an ordered manner, letting users specify resource dependencies, deployment constraints, or the application environment (e.g., use queues to access a web server).

Finally, AppScale is an open source extension to the Google App Engine (GAE) PaaS cloud technology (see Table 19.3). AppScale has been built upon the GAE software development kit, which helps to distribute execution of GAE applications over virtualized cluster resources. This also includes IaaS cloud systems such as Amazon's AWS/EC2 and Eucalyptus. Through the framework offered by AppScale, developers can investigate the interaction between PaaS and IaaS systems. AppScale exposes a series of underlying PaaS services in a unified manner [Chohan 2009]. This means that AppScale data are handled as a combination of XML and key/value pairs. Very recently, AppScale included Neptune, a domain-specific language for AppScale cloud configuration. Neptune automates configuration and deployment of existing high-performance computing software applications via cloud computing platforms. Neptune is a metaprogramming extension of the Ruby programming language, which allows users to specify workflows for certain languages, frameworks, and scientific applications [Bunch 2011].

Each Neptune program consists of several jobs to run in a given cloud. Each job specifies its name and the parameters that are necessary for it; these can be extended to enable each job to include any set of keywords. Neptune can specify what parameters are needed to optimize a job and what data types are required for each.

19.8.1.3  SaaS Definition Languages

There is a great deal of work related to composing services based on the concept of service mashups [Benlismane 2008, Mietzner 2008]. In this chapter, we shall restrict ourselves to cloud-specific initiatives in an attempt to distill what is new to the cloud that was not there in previous service composition models.

The work in [Hamdaqa 2011] follows a model-driven approach and proposes a metamodel that allows cloud users to design applications independently of any platform and build inexpensive elastic applications. This metamodel, however, describes only the capabilities and technical interfaces of cloud application services.

The authors in [Mietzner 2008] investigate how an SOA could be extended with variability descriptors and SaaS multitenancy patterns to package and deploy multitenant-aware, configurable, composite SaaS applications. As an extension of this work, an application metamodel has been proposed that enables the modeling of the components of an application and of component dependencies at both the SaaS and the IaaS level [Fehling 2011]. The authors of this work also define an application descriptor, independent of the underlying technologies, that packages all the application components.

Overall, SaaS template refinement models lack nonfunctional and constraint descriptions for arbitrary cloud services. In addition, variability metamodels at the SaaS level are inherently vague and leave actual enactment unspecified. Although additional work is available at the SaaS level, for example [Cai 2009, Thrash 2010], it lacks a formal structure and detailed technical definitions, thus reducing its usability.

Finally, Table 19.4 summarizes activities in the SaaS definition space.

19.8.2  Overview of Cloud Service Constraint Languages

The cloud paradigm offers the chance for applications to implement their own policies in a dynamic manner, depending on the actual environment they are running on. In particular, cloud computing enables customers to acquire and release resources for client applications adaptively in response to load surges and other dynamic behaviors.

One pending challenge for cloud applications has to do with the means to express such control policies and how to automate dynamic cloud adaptation and, in general, take advantage of the natural elasticity of shared resources in cloud computing systems. The nature of this challenge is multipronged since it now implies delivering policy specification and adaptation across different abstraction layers in the cloud stack, which were black boxes in previous paradigms.

Existing solutions are often restricted to event-condition-action rules, where conditions are matched against metrics of the observed system, for example, incoming traffic flows. Endowing systems with dynamic adaptation and conflict resolution of policies in response to changes within the managed services has been a constant concern in policy-based frameworks (e.g., [Lupu 1999, Lymberopoulos 2003]).
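For illustration, a rule of this kind could look as follows in ad hoc XML; the schema is invented for this sketch and does not correspond to any of the cited frameworks.

<!-- Illustrative event-condition-action scaling rule; schema invented for this example -->
<rule name="scale-out-web-tier">
  <!-- Event: a monitoring sample of average CPU utilization over a 5-minute window -->
  <event type="monitoring-sample" metric="avg-cpu-utilization" window="300s"/>
  <condition>
    <!-- Condition: the averaged metric exceeds 80% -->
    <greaterThan metric="avg-cpu-utilization" value="80"/>
  </condition>
  <action>
    <!-- Action: add one more instance to the web tier, up to a ceiling -->
    <addInstances tier="web-tier" count="1" maxInstances="10"/>
  </action>
</rule>

A matching scale-in rule would typically watch the same metric against a lower threshold, so that the pair of rules realizes the elastic behavior described above.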

In the cloud domain, we are dealing with a constraint satisfaction problem applied to cloud resources and services that need to meet specific requirements, such as QoS, or the need for a specific infrastructure. This problem can be represented by a set of variables, which represent cloud resources or services, and a set of constraints imposed on those variables; the goal is to find an assignment of values to the variables such that all the constraints are satisfied. To improve readability, before delving into this subject, we shall provide a brief introduction to the formal representation of constraints and how to express and reason about them.
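Formally, such a problem can be stated in standard constraint-programming notation (the notation here is generic and is not drawn from any of the cited works):

\[
P = \langle X, D, C \rangle, \qquad X = \{x_1, \ldots, x_n\}, \qquad D = D_1 \times \cdots \times D_n,
\]

where each variable \(x_i\) stands for a cloud resource or service decision (e.g., the instance type hosting a component or its placement), \(D_i\) is the set of values \(x_i\) may take, and each constraint \(c \in C\) is a relation over a subset of the variables (e.g., an end-to-end response time bound or the colocation of two components in the same datacenter). A solution is an assignment \((v_1, \ldots, v_n) \in D\) that satisfies every constraint in \(C\).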

19.8.2.1  Formal Representation of Constraints

Policies are typically based on some form of logic to express the relations among the objects under study. In general, the computational power and complexity of the constraint satisfaction problem depend on the constraint language that we are allowed to use in the instances of the constraint satisfaction problem.
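As a generic example of this dependence (not tied to a specific cited language): if the language admits only conjunctions of simple bounds on individual resources, such as

\[
x_{\mathrm{cpu}} \ge 4 \;\wedge\; x_{\mathrm{mem}} \ge 8\,\mathrm{GB} \;\wedge\; x_{\mathrm{latency}} \le 200\,\mathrm{ms},
\]

then checking whether a candidate offer satisfies a request is a straightforward comparison of each bound. If, instead, arbitrary disjunctions and negations over such atoms are allowed, deciding satisfiability in general becomes as hard as Boolean satisfiability, which explains why constraint languages for cloud services deliberately restrict their expressiveness.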


Table 19.4  Comparison of SaaS Definition Approaches

Approach                 Scope          Aim                           Format                  Across-Layer Transparency
Hamdaqa et al. (2011)    IT resources   Engineering and portability   None: reference model   Some IaaS/PaaS transparency
Mietzner et al. (2008)   IT resources   Engineering and portability


References
