In , the authors described a framework for service migration in cloud computing environments that uses a genetic algorithm to search for and select possible migrations. The algorithm utilizes a cost model covering various service migration costs, including the costs of consistency maintenance and communication during migrations, a service table with information about all migrated and replicated services, and a general computing platform registry with information about hosted services. Another approach to the selection of migratable services in a cloud was proposed in . The selection process considers pre-defined criteria related to the QoS of the migratable services in the cloud, namely response time, throughput, availability, reliability, and cost, and it utilizes the Analytic Hierarchy Process (AHP) method with comparison matrices defined by a consumer's judgments on the QoS criteria. Although the AHP method is also utilized in our approach presented in this paper, the comparison matrices are, in our case, defined by the ontology of dynamic properties and preferences of automatically discovered providers and their services, which supports a multi-criteria migration decision-making process.
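To illustrate how AHP derives criteria weights from pairwise judgments, the following sketch computes an approximate priority vector from a small comparison matrix. The three criteria and all judgment values are hypothetical, chosen only for illustration:

```python
# Illustrative AHP weight computation for three hypothetical QoS criteria
# (response time, availability, cost). The pairwise judgments below are
# made-up values, not taken from any cited work.
A = [
    [1.0, 3.0, 5.0],   # response time vs. the other criteria
    [1/3, 1.0, 3.0],   # availability vs. the other criteria
    [1/5, 1/3, 1.0],   # cost vs. the other criteria
]

def ahp_priorities(matrix):
    """Approximate the AHP priority vector by averaging normalized columns."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    # Normalize each column to sum to 1, then average across each row.
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

weights = ahp_priorities(A)
print([round(w, 3) for w in weights])  # weights sum to 1
```

The column-normalization average used here is a common approximation of the principal-eigenvector weights that the full AHP method prescribes.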
Our investigation relates to identifying potential shortcomings in flow-based anomaly detection techniques which manifest due to their deployment; in this case, wide-area virtual service migration in large cloud data centres. This line of enquiry is, in part, motivated by previous research conducted by Brauckhoff et al., which examined the impact that sampling of network flow data has on anomaly detection. Their study showed that statistical techniques that identify anomalies in traffic volumes perform less effectively under sampling conditions. Furthermore, they suggest that spectral-based analysis, using entropy measures of traffic feature distributions (e.g., source and destination IP addresses and port numbers), is more robust to sampling. As we will discuss in Section III, wide-area virtual service migration manifests as a change in network traffic volume observable at a data centre; this is similar to the effect sampling has. This observation was one of the motivations for the choice of a PCA-based approach for the study we present in Section V. To the best of our knowledge, our investigation is the first to examine the impact virtual service migration has on network flow-based anomaly detection techniques.
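A minimal sketch of the entropy measure mentioned above, applied to a hypothetical distribution of destination IP addresses; it shows how a bulk traffic shift toward one destination (as a migration might cause) concentrates the distribution and lowers its entropy. All addresses and counts are made up:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical flow records: destination IPs observed in one time bin.
baseline = ["10.0.0.1"] * 40 + ["10.0.0.2"] * 35 + ["10.0.0.3"] * 25
# A bulk migration adds a large volume of flows toward a single destination,
# concentrating the distribution and lowering its entropy.
migration = baseline + ["10.0.9.9"] * 300

print(entropy(baseline), entropy(migration))
```

A detector tracking such feature-distribution entropies would register the migration as an entropy drop even if raw volume counters were already noisy.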
This paper attempts to address these issues by providing a new security protocol for secure service migration: the Resource Allocation Security Protocol (RASP) for secure service migration over Cloud infrastructure. The protocol is developed and tested through a simulation study using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, which is used for the analysis of large-scale Internet security protocols and applications. In the second approach, the ProVerif tool, which verifies cryptographic protocols and associated security goals, is employed. Using both techniques, this paper shows that the RASP protocol can be used to securely migrate services to different Cloud environments. The rest of the paper is organised as follows: Section II presents the related work; Section III details the solution approach; Sections IV and V describe our first and second versions of RASP, their evaluation, and the results; Section VI details the application of RASP to a Vehicular Testbed. The paper concludes with Section VII.
This work started by analyzing existing research about data center architecture and traffic. Based on state-of-the-art models, we decided on the typical cloud infrastructure, namely a 3-tier architecture consisting of core, aggregation and ToR switches. We also established that internal traffic can be very complex and thus focused on external cloud traffic. Using SWITCH incoming and outgoing traffic, 200 services were selected to represent cloud services in our model. Next, we researched anomaly detection and decided that traffic analysis needs to be done at the edge of the architecture, namely at the core switches. We continue by analyzing the traffic from all of the said switches together in order to also catch anomalies whose destinations are spread throughout the data center. Since the goal of this work is related to migration's effect on traffic, we also researched how virtual service migration is done within the data center or over a wide area. The reasons for migration, such as load balancing, maintenance, and power failure, are also an important part of the migration modeling. Once the system description was established, we continued by constructing methodology steps to process traffic data and analyze it for injected anomalies. By extending an existing flow-reading framework, we filtered the SWITCH data for the selected services, split it into 5-minute bins and stored it. We also generated anomalies in the same format. Using all this data, various scenarios were created by constructing time series based on a subset of services and anomalies. In addition, migration was incorporated into constructing these time series, which allowed for simulating virtual service migration of various sizes. Finally, we ran a number of experiments to determine how virtual service migration affects anomaly detection.
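The 5-minute binning step can be sketched as follows; the record layout (timestamp in seconds, byte count) is an assumption for illustration, not the actual SWITCH flow format:

```python
from collections import defaultdict

BIN_SECONDS = 300  # 5-minute bins, as in the methodology above

def bin_flows(flows):
    """Group (timestamp_seconds, byte_count) flow records into 5-minute bins
    and sum the traffic volume per bin. Field layout is illustrative."""
    bins = defaultdict(int)
    for ts, nbytes in flows:
        # Floor the timestamp to the start of its bin.
        bins[ts // BIN_SECONDS * BIN_SECONDS] += nbytes
    return dict(sorted(bins.items()))

# Hypothetical records: three flows within the first bin, one in the next.
flows = [(10, 500), (120, 700), (299, 300), (310, 900)]
print(bin_flows(flows))  # {0: 1500, 300: 900}
```

Concatenating the per-bin sums over time yields the per-service time series from which the scenarios above are composed.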
LTE's evolved Node Bs (eNBs), henceforth referred to as fog-enabled cellular networks, can be a promising solution to enhance real-time services for connected vehicles. One significant challenge in fog-enabled cellular networks is mobility, as the vehicles traverse different cells at high speeds. Therefore, in addition to the conventional cellular handover transferring users' connectivity between cells, service migration is needed to maintain service continuity. This means that mobility must be properly handled to guarantee both low latency and high reliability, particularly for real-time services. To minimize the negative impact on Quality of Service (QoS), both computation (e.g., computing architecture and virtualization techniques) and communication (not only between the vehicles and the access points but also among the access points) have to be taken into account. In the context of MEC or fog computing, active service applications are encapsulated in Virtual Machines (VMs) or containers. There have been several studies that deal with the mobility problem in MEC or fog computing. In , general architectural components supporting VM migration and the interactions among such components are defined and discussed. In , a general layered framework is proposed, which allows the migrated applications to be decomposed into multiple layers. In such a framework, only the layers missing at the destination need to be transferred, greatly reducing the amount of data to be handled during the service migration. A VM handoff mechanism for service migration is proposed in , in which the migration files are compressed before being migrated in order to adaptively reduce the total migration time. Huang et al.  and Wu et al.  focused on the mobility pattern of edge computation devices and developed a cost model for service migration using a Markov decision process based approach.
In , a time-window-based service migration approach is proposed to search for the optimal service placement sequence. However, most of the existing studies are based on abstract models and do not reflect the real situation, in which many parameters need to be optimized. Furthermore, since during service migration data needs to be transferred via the communication infrastructure, communication protocols (such as the ones for handover) and strategies to handle the service migration have to be considered.
Converged mobile devices such as smartphones and Internet tablets are now capable of providing IP multimedia services, such as Voice over IP (VoIP) services and video streaming, but they are still limited in terms of display size, processing power and bandwidth. On the other hand, fixed devices such as PCs and hardware-based IP phones can offer better usability and bandwidth, but they cannot be accessed by users on the go. Therefore, the integration of heterogeneous devices is necessary in order to combine their advantages. This requires that devices in the vicinity can borrow capabilities from each other, so that multimedia services can be moved between mobile and fixed devices to either gain better quality, performance and usability, or enable mobility. In particular, service migration should be seamless, with minimal disruption of the running sessions. From the user's perspective, the benefit is that they can use multimedia services at their best convenience, and maintain the continuity of their service experience while changing devices, without terminating or interrupting the remote participant.
The execution of an application on a local mobile device results in high CPU utilization for a longer period of time compared to accessing the application processing services of a cloud server node. We found that the average CPU utilization for executing the sorting service on the local mobile device is 48.67 % of the total CPU utilization for a duration of 17,427 ms. The average CPU utilization for executing the matrix multiplication service on the local mobile device is 45.46 % for a duration of 31,190 ms, and for the power compute service it is 48 %. Since in computational offloading the application processing load is outsourced to remote server nodes, the energy consumption cost of application processing on the local device is reduced by up to 95 %. We also found that the RAM allocation on the mobile device is reduced by up to 72 % and the duration of CPU utilization by up to 99 % through computational offloading to the cloud server node. Hence, computational offloading reduces the application processing load on the local device, which minimizes resource utilization (RAM, CPU) and decreases energy consumption on the SMD. However, the execution cost of runtime computational offloading remains high when offloading smaller computational loads to the cloud server node, because of the additional delays incurred during the configuration of the distributed application processing platform at runtime. Furthermore, for all instances of active service offloading using ASM, the CPU utilization of the operating system increases by up to 3 % on the Android virtual device, which shows additional load on the mobile device during component offloading. However, for the physical mobile device the increase in CPU utilization is found to be 0 % during service migration at runtime.
As a case study, we describe a system (e.g., a gaming application) where network virtualization is used to support thin-client applications for mobile devices in order to improve their QoS. By decoupling the service from the underlying resource infrastructure, it can be migrated closer to the current client locations while taking into account the migration cost. This paper identifies the major cost factors in such a system and formalizes the corresponding optimization problem. Both randomized and deterministic gravity-center-based online algorithms are presented which achieve a good worst-case tradeoff between improved QoS and migration cost, both for service migration within an infrastructure provider and for networks supporting cross-provider migration. The paper reports on our simulation results and also presents an explicit construction of an optimal offline algorithm, which allows us, e.g., to evaluate the competitive ratio empirically.
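A gravity-center placement heuristic of the kind named above can be sketched as a demand-weighted centroid of client positions; the coordinates and demands below are hypothetical, and the paper's actual algorithms are more involved than this sketch:

```python
def gravity_center(clients):
    """Demand-weighted centroid of client positions -- a simple stand-in for
    a gravity-center migration heuristic. `clients` holds (x, y, demand)
    tuples; all coordinates and demands here are hypothetical."""
    total = sum(d for _, _, d in clients)
    x = sum(xi * d for xi, _, d in clients) / total
    y = sum(yi * d for _, yi, d in clients) / total
    return x, y

# Three clients: the heavy one on the right pulls the service toward it.
clients = [(0.0, 0.0, 1.0), (0.0, 10.0, 1.0), (10.0, 5.0, 8.0)]
print(gravity_center(clients))  # (8.0, 5.0)
```

An online algorithm would recompute this target as clients move and migrate the service only when the QoS gain outweighs the migration cost.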
Abstract— In many ways cloud computing is an extension of the service-oriented computing (SOC) approach to creating resilient and elastic hosting environments and applications. Service-oriented architectures (SOA), thus, share many architectural properties with cloud environments and cloud applications, such as the distribution of application functionality among multiple application components (services) and their loosely coupled integration to form a distributed application. Existing service-based applications are, therefore, ideal candidates to be moved to cloud environments in order to benefit from cloud properties such as elasticity or pay-per-use pricing models. For such an application migration and the overall restructuring of an IT application landscape to be successful, decisions have to be made regarding (i) the portion of the application stack to be migrated and (ii) the process to follow during the migration in order to guarantee an acceptable service level to application users. In this paper, we present best practices for addressing these challenges in the form of service migration patterns, as well as a methodology for applying these patterns during the migration of one or more service-based applications. We also present an implementation of the approach, which has been used to migrate a web-application stack from Amazon Web Services to the T-Systems cloud offering Dynamic Services for Infrastructure (DSI).
Quality of Service is considered one of the most significant issues in CC and VANET networks. A major obstacle in CC is performance unpredictability, because providers are unable to foresee temporal variations in service demands and the geographical distribution of their clients. This problem resembles that of a patchy cloud, which alone is not capable of providing all the computing resources needed. Therefore, to remove these limitations, the services provided by a large number of patchy clouds can be used to meet the vehicles' needs. Numerous QoS attributes exist, such as response time, scalability, performance, reusability and availability, and such properties are part of Service Level Agreements (SLAs). In the proposed method, the parameters affecting the QoS of a service provider (or cloudlet) are first identified; then the QoS delivered to the vehicles is measured continuously and recorded in a catalog system. If the delivered QoS decreases under any circumstances, service migration should be performed to provide better services. In this stage, the catalog system, using ontology-based queries in the form of an abstract layer over the infrastructure layer, finds the best service and proposes it as a substitute. As a result, service migration is performed from one service provider to another.
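A minimal sketch of the threshold rule described above, assuming a hypothetical catalog of measured QoS scores per cloudlet (the names, scores, and threshold are all made up, and the ontology-based query is reduced to a simple lookup):

```python
# Hedged sketch of the QoS-triggered migration rule: measure delivered QoS,
# record it in a catalog, and migrate when it drops below a threshold.
QOS_THRESHOLD = 0.8  # hypothetical acceptable-QoS floor

# Hypothetical catalog of continuously measured QoS per provider.
catalog = {"cloudlet-A": 0.92, "cloudlet-B": 0.85, "cloudlet-C": 0.70}

def select_substitute(catalog, current):
    """Pick the best alternative provider from the catalog (stand-in for
    the ontology-based query described in the text)."""
    candidates = {p: q for p, q in catalog.items() if p != current}
    return max(candidates, key=candidates.get)

def maybe_migrate(current, measured_qos):
    """Return the provider to use after applying the migration rule."""
    if measured_qos < QOS_THRESHOLD:
        return select_substitute(catalog, current)
    return current

print(maybe_migrate("cloudlet-C", catalog["cloudlet-C"]))
```

In the paper's design the catalog would be refreshed continuously and the substitute chosen via ontology queries rather than a plain maximum.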
The technique conducts a three-step migration. The first step is meant to identify use cases and candidate services that cover those use cases. The description of this step does not provide its own way of identifying services; instead it refers to the works of Sneed  or SMART . The identified components are wrapped into services during the second step. The last step is deployment and validation: it establishes the infrastructure and deploys the already created services. The services are further tested in order to assure that the migrated system fulfills its requirements. SMART is a family of five approaches for migration toward SOA . The basic SMART approach is SMART-Migration Pilot (SMART-MP). SMART-MP identifies services and their components. This technique estimates potential risks and tries to provide a migration pilot with strategies for migration of whole systems. The four remaining approaches are tailored versions of the basic case. SMART Service Migration Feasibility (SMART-SMF) focuses mainly on the feasibility of migration and its risks. SMART Enterprise Service Portfolio (SMART-ESP) is dedicated to companies that have decided to migrate their system but have not identified all the services. SMART Environment (SMART-ENV) is meant for companies that have not selected the target platform for the migrated system; this approach aims at the selection of this platform with an analysis of its implications, such as risks and cost. The last family member is SMART System. This technique supports migration from initial estimations and analysis, through implementation and selection of the environment, until the end of the migration. The SMART family provides guidelines for migration, but the guidelines are not complete . They neglect the impact of the architecture of migrated systems on the migration process and on the target architecture.
At Level 2 there are some unresolved challenges of migration when it comes to the process industry. Distributed monitoring and control enables plant supervisory control. The distributed control system (DCS) of a large process plant is usually highly integrated compared with a SCADA solution, which is standard in factory automation. SCADA is a supervisory system for HMI and data acquisition, and the system communicates through open standard protocols with subordinated PLCs. The PLCs in the SCADA solution are autonomous compared to their counterparts in the DCS, which are sometimes referred to as controllers. In this paper the process control system is defined as a DCS including HMI workstations, controllers, an engineering station and servers, all linked by a network infrastructure. A DCS is truly "distributed," with various tasks being carried out in widely dispersed devices. Migration of Level 2 functionality in the form of a DCS exhibits challenges when it comes to cohabitation between legacy and SOA as well as the migration of the control execution , . In this paper the DCS is exemplified by a server/client based system as depicted in Fig. 2, which is a common topology.
A migration of an IT system formation to Cloud infrastructures is complex and demands the choice of adequate Cloud infrastructure services and Cloud VM images for every component within the formation. We propose CloudGenius, a framework that guides users through a Cloud migration process and provides methods that support multi-criteria decisions on selecting Cloud VM images and Cloud infrastructure services component-wise. In the following subsections we present the process and give details on the formal model of the selection problem, the required user input and flexibilities, and the selection and combination steps that choose an image and a service from the abundance of offerings and find the best combination. Finally, an alternative evaluation variant is addressed.
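As a rough illustration of component-wise multi-criteria selection, the sketch below scores hypothetical VM images and infrastructure services with a weighted sum and picks the best (image, service) combination; the criteria, weights, and scores are invented and do not reproduce the actual CloudGenius model:

```python
from itertools import product

# Hypothetical per-criterion scores (higher is better) and user weights.
images = {"img-A": {"cost": 0.6, "performance": 0.9},
          "img-B": {"cost": 0.9, "performance": 0.5}}
services = {"svc-X": {"cost": 0.7, "performance": 0.8},
            "svc-Y": {"cost": 0.9, "performance": 0.4}}
weights = {"cost": 0.4, "performance": 0.6}

def score(entity):
    """Weighted-sum utility of one image or service."""
    return sum(weights[c] * v for c, v in entity.items())

def best_combination(images, services):
    """Evaluate every (image, service) pair and return the best-scoring one."""
    return max(product(images, services),
               key=lambda pair: score(images[pair[0]]) + score(services[pair[1]]))

print(best_combination(images, services))
```

In a full framework the combination step would also check compatibility constraints between an image and a service before scoring the pair.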
Our next experiment evaluates XenFlow live migration. Two physical machines forward packets, while two additional machines generate and receive packets; each of the latter communicates with both physical routers. A virtual machine performs as a router. The experiment consists of the virtual router forwarding a 6 Mb/s UDP flow with 1472 B packets, which corresponds to the most common Ethernet MTU (Maximum Transmission Unit). While the virtual router is forwarding packets, after 30 s it is migrated from the source physical machine to the destination one, using both Xen native migration and our proposed XenFlow migration. The results of this experiment indicate that the migration performed by the native Xen mechanism, referenced in the figure as Xen, implies an interruption of data forwarding for approximately 50 s. This interruption occurs due to the deactivation of the virtual machine while its memory is transferred from the source physical server to the destination physical server. Thus, as native Xen necessarily forwards packets through the virtual machine, stopping the virtual machine results in the interruption of packet forwarding. Migration is only possible without packet loss when applying the plane separation paradigm, because packets are forwarded by Domain 0, the data plane. Therefore, while the virtual machine is migrated, the data plane remains active in the source physical server, forwarding packets for previously established connections. XenFlow employs the plane separation paradigm, and Figure 3(c) shows that the migration of a virtual router, referenced in the figure as XenFlow, occurs with no packet loss.
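A back-of-the-envelope estimate of what the reported interruption implies: at 6 Mb/s with 1472 B packets, the flow carries roughly 510 packets per second, so a 50 s forwarding outage discards on the order of 25,000 packets:

```python
# Rough loss estimate for the experiment above: a 6 Mb/s UDP flow of
# 1472-byte packets interrupted for ~50 s under native Xen migration.
RATE_BPS = 6_000_000        # 6 Mb/s offered load
PACKET_BITS = 1472 * 8      # 1472 B per packet
DOWNTIME_S = 50             # observed forwarding interruption

pps = RATE_BPS / PACKET_BITS       # packets per second in the flow
lost = pps * DOWNTIME_S            # packets discarded during the outage
print(round(pps, 1), int(lost))    # ≈509.5 pkt/s, ≈25475 packets
```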
A nice feature of the model is that the distribution of skills over the agents in the economy determines the supply of each service and thus the price of that service (because production requires only labor/skills). Another feature is that the agents' incomes also depend on the price vector, which in turn depends on the skills distribution and on preferences.
development. Hoon Lee et al.  underlined, however, that reuse has limitations when applied within the boundary of an organization, but can be increased if an organization offers software functionality as services to external parties or uses external services. Natis et al.  stated that the success of a service in SOA can be measured partly by the degree to which it is reused by outside applications. Natis et al.  suggested that the IT environment must develop a culture where reuse of external solutions is considered a characteristic of excellence in software engineering and preferable to custom programming. They also admitted that service reuse does not happen by chance: it requires governance, incentives, discipline and tools. The same result can be seen in Frazen . She outlined the importance of other factors, such as governance and leadership, among others, in order to achieve reusability.
All services are orchestrated using the Business Process Execution Language (BPEL). The most important is the schema analyzer service. This service analyzes the schema of the given relational database, finding all the tables and relational schema mappings. The data retrieval service collects all the data from the tables selected for migration. The NoSQL conversion service is a composite service, composed of a MongoDB insertion service, a Cassandra insertion service, a Neo4j insertion service and an Amazon DynamoDB insertion service.
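The schema-analysis step can be sketched against an in-memory SQLite database; SQLite stands in for the relational source only for illustration, and the BPEL orchestration itself is not shown:

```python
import sqlite3

def analyze_schema(conn):
    """Sketch of a schema-analyzer step: list the tables and their
    foreign-key mappings in a relational database."""
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    mappings = {}
    for t in tables:
        # PRAGMA rows: (id, seq, ref_table, from_col, to_col, ...)
        fks = cur.execute(f"PRAGMA foreign_key_list({t})").fetchall()
        mappings[t] = [(fk[3], fk[2], fk[4]) for fk in fks]
    return tables, mappings

# Hypothetical two-table schema with one foreign-key relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, cust_id INTEGER,
                         FOREIGN KEY (cust_id) REFERENCES customer(id));
""")
tables, mappings = analyze_schema(conn)
print(tables)                 # ['customer', 'orders']
print(mappings["orders"])     # [('cust_id', 'customer', 'id')]
```

The discovered tables and mappings would then drive the data retrieval service and the per-store insertion services of the composite NoSQL conversion service.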