In this paper we investigate how guaranteeing a real-time, enterprise-wide flow of assembly process information, from assembly station sensors directly to company decision makers, can improve productivity, reduce bottleneck movements, optimize the supply chain network, identify missing operations on the floor, and ultimately reduce losses and increase profits. Ideal production depends on the real capability of machines and assembly stations to work nonstop at maximum speed, without downtime, idle periods, or the threat of rejected goods. Assembly lines stand still and produce defective pieces when machines cannot work to their full capability or meet the demands placed on them; this is often the result of factory management being misinformed about real-time floor performance. Even when equipped with the original equipment manufacturer's indicator knowledge about their systems, managers still cannot reach the efficiency needed to improve yield.

Transformation is necessary to ride the expected tide of change in the current manufacturing environment, particularly in the information technology and automation landscape. Multinational companies strive to reduce computing costs, improve plant floor visibility, and make more efficient use of energy and of their IT hardware and software investments. A cloud computing infrastructure accelerates and supports these objectives by providing unparalleled flexible and dynamic pooling of IT resources, virtualization, floor visibility, and high accessibility. This paper establishes the value of realizing cloud connectivity and usage scenarios in a cloud manufacturing environment, especially for automotive assembly stations, which typically run large numbers of mixed applications on varied hardware and generate huge amounts of data from sensors and devices during real-time, event-based exploration and assembly operations.
The purpose of this paper is to conduct an analysis of information technology systems in the automotive assembly environment, in the case of multinational corporations (MNCs). To pursue this objective, the article is divided into two parts: monitoring, vision, and control; and a case study built around the manufacturing execution assembly system. The theoretical part of the study first introduces the concept of cloud connectivity in the field of manufacturing execution assembly systems, and then discusses the substance of service management in information technology.
Operating a web site that requires database access, supports considerable traffic, and possibly connects to enterprise systems requires complete control of one or more servers to guarantee responsiveness to user requests. Servers supporting the web site must be hosted in a data center with access from the public Internet. Traditionally, this has been achieved by renting space for physical servers in a hosting center operated by a network provider, far from the enterprise's internal systems. With cloud computing, this can now be done by renting a virtual machine in a cloud hosting center. The web site can make use of open source software, such as Apache HTTP Server, MySQL, and PHP (the so-called LAMP stack) or a Java™ stack, all of which is readily available. Alternatively, enterprises might prefer to use commercially supported software, such as WebSphere® Application Server and DB2®, on either Linux® or Windows operating systems.
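As a minimal sketch of the dynamic-page idea behind such a stack, the following Python example uses the standard library's sqlite3 module as a stand-in for MySQL and renders query results as HTML, the role PHP plays in a LAMP deployment. The schema and product names are purely illustrative:

```python
import sqlite3

def render_products(db_path: str = ":memory:") -> str:
    """Render a product table as HTML, a toy stand-in for the
    PHP/MySQL layer of a LAMP stack (schema is illustrative)."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?)",
                     [("widget", 9.99), ("gadget", 19.99)])
    rows = conn.execute("SELECT name, price FROM products ORDER BY name").fetchall()
    conn.close()
    # Turn each row into an HTML list item, as a template engine would.
    items = "".join(f"<li>{name}: ${price:.2f}</li>" for name, price in rows)
    return f"<ul>{items}</ul>"
```

In a real deployment the same logic would run behind Apache (or a WSGI server) against a MySQL instance; only the driver and connection string change.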
This technique provides an approach by which the document itself preserves its privacy and security even when it is exchanged over unsecured networks. Security components such as storage, access, and usage control, which companies would otherwise deploy a separate information system to enforce, are encapsulated (in the object-oriented sense) within the document, yielding an autonomic document architecture for Enterprise Digital Rights Management (E-DRM). This applies not only to files exchanged over uncontrollable networks such as cloud computing systems, but also to USB flash drives.
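A toy sketch of this encapsulation idea follows, assuming a simple keyed stream cipher and a user-list policy, both invented for illustration; a real E-DRM system would use vetted cryptography and far richer policies:

```python
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 counter-mode keystream.
    # Illustration only, NOT production cryptography.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class SelfProtectingDocument:
    """Encapsulates content, access policy, and a usage log in one
    object, so the document enforces its own controls wherever it
    travels (class and method names are hypothetical)."""
    def __init__(self, content: str, key: bytes, allowed_users: set):
        self._cipher = _keystream_xor(key, content.encode())
        self._policy = set(allowed_users)
        self.usage_log = []

    def read(self, user: str, key: bytes) -> str:
        # Access control travels with the document itself.
        if user not in self._policy:
            self.usage_log.append(f"DENY {user}")
            raise PermissionError(f"{user} is not authorised")
        self.usage_log.append(f"READ {user}")
        return _keystream_xor(key, self._cipher).decode()
```

The point of the sketch is structural: storage (the ciphertext), access control (the policy set), and usage control (the log) live inside the document object rather than in an external system.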
One proposal discusses a trust management system for cloud computing based on the issue of trust between users and CSPs. SLAs differ from one service provider to another, hence the need for trust. The paper proposes a trust management system with metrics for identifying trustworthy CSPs and a trusted cloud service with secure data and resource provisioning. Another study focuses on virtualization, privacy, and data integrity as means of ensuring trust, proposing a data-centric trust model between consumers and providers on the cloud. A critical review examines trust management in cloud computing, considering security a vital component of trust management; it proposes a model for a trust management system and compares existing trust systems. Work on establishing trust in cloud services presents the issues raised when customer data are processed in remote locations by a cloud service provider, discussing various aspects of trust, including the diminishing control of the user. A further study of trust in the cloud argues that organisations will not host their software applications on the cloud without a guarantee of trust; it examines the TClouds model, which benefits all parties utilizing the cloud. Finally, one approach uses third-party agent checking to control and manage trust in the cloud, addressing the presence of many unknown users whose intentions may be good or bad regardless of the cloud provider; it proposes a model with appropriate preferences to allow a user to decide on a suitable cloud provider.
In this study we observe that cloud computing has been widely recognized as a rapidly growing computing infrastructure. It offers many advantages by allowing users to consume infrastructure such as servers, networks, and data storage without burdening the owner's organization. In this paper we introduce Database-as-a-Service (DBaaS), which promises to relieve users of much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, and privacy of their data. DBaaS architectures offer organizations new and unique ways to offer, use, and manage database services. The fundamental differences introduced by service orientation and discrete consumer-provider roles challenge conventional models, yet offer the potential for significant cost savings, improved service levels, and greater leverage of information across the business. As discussed in this paper, there is a variety of issues and considerations that must be understood in order to use DBaaS effectively in an organization. We introduce Relational Cloud, a scalable relational database-as-a-service for cloud computing environments. Database systems deployed on a cloud computing infrastructure face many new challenges, such as large-scale operation, elasticity, autonomic control to minimize operating cost, continuous availability, and dataset privacy, in addition to being made fault-tolerant and highly available. Relational Cloud addresses three significant challenges: efficient multi-tenancy, elastic scalability, and database privacy.
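One common DBaaS isolation pattern, database-per-tenant multi-tenancy, can be sketched as follows. The key-value schema and class names are illustrative and are not Relational Cloud's actual design:

```python
import sqlite3

class MultiTenantDB:
    """Toy database-per-tenant router: each tenant's data lives in
    its own (here, in-memory) database, giving strong isolation at
    the cost of packing density."""
    def __init__(self):
        self._conns = {}

    def _conn(self, tenant: str) -> sqlite3.Connection:
        # Lazily provision a private database per tenant.
        if tenant not in self._conns:
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
            self._conns[tenant] = conn
        return self._conns[tenant]

    def put(self, tenant: str, k: str, v: str) -> None:
        self._conn(tenant).execute(
            "INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))

    def get(self, tenant: str, k: str):
        row = self._conn(tenant).execute(
            "SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
        return row[0] if row else None
```

Efficient multi-tenancy in a production DBaaS instead consolidates many tenants onto shared servers; the sketch only shows why routing by tenant identity is the core of the service layer.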
The evolution of networking technology to support large-scale data centers is most evident at the access layer, due to the rapid increase in the number of servers in a data center. Some research work (Greenberg, Hamilton, Maltz, & Patel, 2009; Kim, Caesar, & Rexford, 2008) calls for a large Layer-2 domain with a flatter data center network architecture (2 layers vs. 3 layers). While this approach may fit a homogeneous, single-purpose data center environment, a more prevalent approach is based on the concept of switch virtualization, which allows the function of the logical Layer-2 access layer to span multiple physical devices. There are several architectural variations in implementing switch virtualization at the access layer, including Virtual Blade Switch (VBS), Fabric Extender, and Virtual Ethernet Switch technologies. The VBS approach allows multiple physical blade switches to share a common management and control plane by appearing as a single switching node (Cisco Systems, 2009d). The Fabric Extender approach allows a high-density, high-throughput, multi-interface access switch to work in conjunction with a set of fabric extenders serving as “remote I/O modules”, extending the internal fabric of the access switches to a larger number of low-throughput server access ports (Cisco Systems, 2008). The Virtual Ethernet Switch is typically a software-based access switch integrated into a hypervisor on the server side. These switch virtualization technologies allow the data center to support multi-tenant cloud services and provide flexible configurations that scale deployment capacities up and down according to the level of workload (Cisco Systems, 2009a, 2009c).
Virtualization has been used successfully since the late 1950s. A virtual memory based on paging was first implemented on the Atlas computer at the University of Manchester in the United Kingdom in 1959. In a cloud computing environment, a VMM runs on the physical hardware and exports hardware-level abstractions to one or more guest operating systems. A guest OS interacts with the virtual hardware in the same way it would interact with the physical hardware, but under the watchful eye of the VMM, which traps all privileged operations and mediates the interactions of the guest OS with the hardware. For example, a VMM can control I/O operations to two virtual disks implemented as two different sets of tracks on a physical disk. New services can be added without the need to modify an operating system. User convenience is a necessary condition for the success of the utility computing paradigm. One of its multiple facets is the ability to run remotely using the system software and libraries required by the application; this is a major advantage of a VM architecture over a traditional operating system. For example, a user of Amazon Web Services (AWS) could submit an Amazon Machine Image (AMI) containing the applications, libraries, data, and associated configuration settings, choose the operating system for the application, and then start, terminate, and monitor as many instances of the AMI as needed, using the Web Service APIs and the performance monitoring and management tools provided by AWS.
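The virtual-disk example can be sketched directly: the VMM maps each guest's virtual disk onto a disjoint range of tracks of one shared physical disk. The class names and the flat track model are illustrative only:

```python
class PhysicalDisk:
    """A physical disk modelled as a flat array of tracks."""
    def __init__(self, tracks: int):
        self.data = [None] * tracks

class VirtualDisk:
    """VMM-style mapping of a guest's virtual disk onto a contiguous,
    private range of tracks of a shared physical disk."""
    def __init__(self, disk: PhysicalDisk, start: int, length: int):
        self.disk, self.start, self.length = disk, start, length

    def _check(self, track: int) -> None:
        # The VMM confines each guest to its own track range.
        if not 0 <= track < self.length:
            raise IndexError("track outside virtual disk")

    def write(self, track: int, value) -> None:
        self._check(track)
        self.disk.data[self.start + track] = value

    def read(self, track: int):
        self._check(track)
        return self.disk.data[self.start + track]
```

Each guest sees tracks numbered from zero; the translation (and the bounds check that enforces isolation) happens in the mediation layer, which is exactly the VMM's role.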
Resource allocation in a cloud environment is an important and challenging research topic. Verma et al.  formulate the problem of dynamic placement of applications in virtualized heterogeneous systems as a continuous optimization: the placement of VMs at each time frame is optimized to minimize resource consumption under certain performance requirements. Chaisiri et al.  study the trade-off between advance reservation and on-demand resource allocation, and propose a VM placement algorithm based on stochastic integer programming that minimizes the total cost of resource provisioning in infrastructure-as-a-service (IaaS) clouds. Wang et al.  present a virtual-appliance-based automatic resource provisioning framework for large virtualized data centers; their framework can dynamically allocate resources to applications by adding or removing VMs on physical servers. Verma et al. , Chaisiri et al. , and Wang et al.  thus study cloud resource allocation from the VM placement perspective. Bacigalupo et al.  quantitatively compare the effectiveness of different techniques for response time prediction. They study cloud services with different priorities, including urgent cloud services that demand cloud resources at short notice and dynamic enterprise systems that must adapt to frequent changes in workload; based on these services, the layered queuing network and historical performance models are compared in terms of prediction accuracy. Song et al.  present a resource allocation approach driven by application priorities in a multiapplication virtualized cluster. This approach requires machine learning to obtain the utility functions for applications and defines the application priorities in advance. Lin and Qi  develop a self-organizing model to manage cloud resources in the absence of centralized management control. Nan et al.  present optimal cloud resource allocation under a priority service scheme to minimize resource cost. Appleby et al.  present a prototype infrastructure that can dynamically allocate cloud resources for an e-business computing utility. Xu et al.  propose a two-level resource management system with local controllers at the VM level and a global controller at the server level; however, they focus only on resource allocation among VMs within a single cloud server [19,20].
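As a concrete, if much simpler, illustration of the VM placement problem these works address, a first-fit-decreasing heuristic packs VMs onto identical servers. This is a generic bin-packing sketch, not any of the cited authors' methods:

```python
def place_vms(vm_demands: dict, server_capacity: int):
    """First-fit-decreasing placement of VMs onto identical servers,
    a heuristic that tends to reduce the number of servers used.
    Returns (vm -> server index, number of servers opened)."""
    servers = []      # remaining capacity of each opened server
    placement = {}
    # Place the largest VMs first: large items are hardest to fit.
    for vm, demand in sorted(vm_demands.items(), key=lambda x: -x[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[vm] = i
                break
        else:
            # No existing server fits: open (power on) a new one.
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)
```

The cited approaches replace this greedy rule with optimization under uncertainty (e.g. stochastic integer programming) and with performance constraints, but the decision variable, which VM goes on which server, is the same.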
All current systems support at least two different execution modes: supervisor mode and user mode. The first denotes an execution mode in which all instructions (privileged and nonprivileged) can be executed without restriction. This mode, also called master mode or kernel mode, is generally used by the operating system (or the hypervisor) to perform sensitive operations on hardware-level resources. In user mode, there are restrictions on access to machine-level resources. If code running in user mode invokes a privileged instruction, a hardware interrupt occurs and traps the potentially harmful execution of the instruction. Despite this, there may be instructions that can be invoked as privileged under some conditions and as nonprivileged under others. The distinction between user and supervisor mode explains the role of the hypervisor and why it is so called. Conceptually, the hypervisor runs above the supervisor mode, hence the prefix hyper-. In reality, hypervisors run in supervisor mode, and the division between privileged and nonprivileged instructions has posed challenges in designing virtual machine managers. All sensitive instructions are expected to be executed in privileged mode, which requires supervisor mode, in order to avoid traps; without this assumption it is impossible to fully emulate and manage the status of the CPU for guest operating systems. Unfortunately, this is not true for the original x86 ISA, which allows 17 sensitive instructions to be called in user mode. This prevents multiple operating systems managed by a single hypervisor from being isolated from each other, since they are able to access and change the privileged state of the processor. More recent implementations of the ISA (Intel VT and AMD Pacifica) have solved this problem by redesigning such instructions as privileged ones.
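The trap-and-emulate interplay between user mode, privileged instructions, and the hypervisor can be sketched as a toy simulation. The instruction names and the two-mode model are illustrative, not any real ISA:

```python
class TrapError(Exception):
    """Raised when a privileged instruction runs in user mode."""
    pass

class CPU:
    """Toy CPU with two modes; privileged instructions executed in
    user mode raise a trap instead of running."""
    PRIVILEGED = {"HLT", "OUT"}   # hypothetical privileged opcodes

    def __init__(self):
        self.mode = "user"
        self.log = []

    def execute(self, instr: str) -> None:
        if instr in self.PRIVILEGED and self.mode == "user":
            raise TrapError(instr)
        self.log.append(f"{self.mode}:{instr}")

class Hypervisor:
    """Trap-and-emulate: run guest code in user mode, and perform any
    trapped privileged instruction on the guest's behalf."""
    def __init__(self, cpu: CPU):
        self.cpu = cpu

    def run_guest(self, program) -> None:
        for instr in program:
            try:
                self.cpu.execute(instr)
            except TrapError:
                self.cpu.mode = "supervisor"   # VMM mediates the operation
                self.cpu.execute(instr)
                self.cpu.mode = "user"         # return control to the guest
```

The 17 problematic x86 instructions broke exactly this scheme: they were sensitive but did not trap in user mode, so the hypervisor never got the chance to mediate them.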
Network isolation in the cloud can be achieved with various techniques, such as VLAN, VXLAN, VCDNI, or STT. Applications are deployed in a multi-tenant environment and consist of components that must be kept private, such as a database server that should be accessed only from selected web servers, with traffic from any other source denied. This is enabled using network isolation, port filtering, and security groups. These services help segment and protect the various layers of an application deployment architecture and also isolate tenants from each other. The provider can use security domains and Layer-3 isolation techniques to group virtual machines, and access to these domains can be controlled through the provider's port filtering capabilities or through more stateful packet filtering implemented with context switches or firewall appliances. Network isolation techniques such as VLAN tagging and security groups allow such configurations, and various levels of virtual switches can be configured in the cloud to isolate the different networks in the cloud environment.
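A security-group rule check of the kind described, default deny with traffic allowed only when a rule's source CIDR and port range match, might be sketched as follows. The rule format is an assumption for illustration, not any provider's actual API:

```python
from ipaddress import ip_address, ip_network

def allowed(rules, src_ip: str, port: int) -> bool:
    """Evaluate simplified security-group rules.
    Each rule is (source_cidr, port_from, port_to); traffic is
    permitted only if some rule matches (default deny)."""
    for cidr, port_from, port_to in rules:
        if ip_address(src_ip) in ip_network(cidr) and port_from <= port <= port_to:
            return True
    return False
```

For the database-server example, a single rule such as `("10.0.1.0/24", 3306, 3306)` would admit the web-tier subnet on the database port and reject everything else, including SSH from the same subnet.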
Apart from the vendor-specific migration methodologies and guidelines, there are also proposals independent of a specific cloud provider. Reddy and Kumar proposed a methodology for data migration that consists of the following phases: design, extraction, cleansing, import, and verification. Moreover, they categorized data migration into storage migration, database migration, application migration, business process migration, and digital data retention (Reddy and Kumar, 2011). In our proposal, we focus on storage and database migration, as we address the database layer. Morris specifies four golden rules of data migration, concluding that IT staff often do not know the semantics of the data to be migrated, which causes considerable overhead effort (Morris, 2012). With our step-by-step methodology, we provide detailed guidance and recommendations on both data migration and the required application refactoring to minimize this overhead. Tran et al. adapted the function point method to estimate the costs of cloud migration projects and classified the applications potentially migrated to the cloud (Tran et al., 2011). As we assume that the decision to migrate to the cloud has already been taken, we do not consider aspects such as cost. We abstract from the classification of applications in order to define the cloud data migration scenarios, and we reuse distinctions, such as complete or partial migration, to refine a chosen migration scenario.
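The five phases of Reddy and Kumar's methodology can be sketched as a toy pipeline; the records, cleansing rules, and target store here are invented for illustration only:

```python
def migrate(records):
    """Toy walk through the five data-migration phases:
    design, extraction, cleansing, import, verification."""
    # Design: the (hypothetical) target schema expects non-empty,
    # lower-case, trimmed names.
    target = []
    # Extraction: pull rows from the source (here, the list passed in).
    extracted = list(records)
    # Cleansing: drop empty/missing entries, normalise case and whitespace.
    cleansed = [r.strip().lower() for r in extracted if r and r.strip()]
    # Import: load the cleansed rows into the target store.
    target.extend(cleansed)
    # Verification: the target satisfies the designed constraints.
    assert all(t and t == t.lower() for t in target)
    return target
```

In a real migration each phase is of course far heavier (schema mapping, bulk export, rule-driven cleansing, bulk load, and count/constraint reconciliation), but the phase boundaries are the same.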
Online systems are nothing new. What makes Azure special is that you can use the same operating system and programs that you use in-house, with no customization. Since the Internet and World Wide Web became ubiquitous, hosted programs have generally been written for the Linux operating system, so you could not run the most popular in-house ones. Amazon and
OAGIS (Open Applications Group Integration Specification) from the OAGi is an international cross-domain transaction standard for B2B and A2A that has existed since 1996 (only its first versions were not XML-based). It is used by over 38 industries in 89 countries (as of May 2011); its main stakeholders are IBM, Oracle, DHL, SAP, and Microsoft. OAGIS 9.5.1 consists of 84 business objects (BOs) used in over 530 Business Object Documents (BODs), including master data exchange; the BODs are used in 64 sample scenarios, and OAGi provides Web service definitions. One of its explicit objectives is to provide a canonical business object model. It integrates many other standards (UN/CEFACT, ISO, OASIS, CCTS/CCL, and more), can be used together with ebXML, and is EDIFACT-compatible. The OAGi quickly adopts modern trends: for example, the JSON exchange format will soon be supported in addition to XML to better support mobile devices, and there are cloud and BPMN initiatives. Most important for us is its openness and schema extensibility via XSD overlays, in addition to instance extensions through freely usable so-called user areas.
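A minimal, OAGIS-flavoured BOD with a UserArea extension can be built with the standard library's ElementTree. The element names here are deliberately simplified and do not follow the actual OAGIS 9.5.1 schemas, namespaces, or BOD structure:

```python
import xml.etree.ElementTree as ET

def build_bod(noun: str, verb: str, fields: dict, user_area: dict) -> str:
    """Build a toy BOD: <VerbNoun><DataArea><Noun>...<UserArea>...
    Element names are illustrative, not the real OAGIS schema."""
    bod = ET.Element(f"{verb}{noun}")          # e.g. ProcessPurchaseOrder
    data = ET.SubElement(bod, "DataArea")
    noun_el = ET.SubElement(data, noun)
    for name, value in fields.items():          # standardised fields
        ET.SubElement(noun_el, name).text = str(value)
    ua = ET.SubElement(noun_el, "UserArea")     # free extension point
    for name, value in user_area.items():
        ET.SubElement(ua, name).text = str(value)
    return ET.tostring(bod, encoding="unicode")
```

The UserArea is the instance-level extension mechanism mentioned above: partners can add fields there without breaking validation of the standardised part of the document.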
Enterprises often don’t have the required expertise to build cloud-based solutions. The average medium-to-large company that has been in business for more than a few years typically has a collection of applications and services spanning multiple eras of application architecture, from mainframe to client-server to commercial off-the-shelf and more. The majority of the internal skills are specialized around these different architectures. Often the system administrators and security experts have spent a lifetime working on physical hardware or on-premises virtualization. Cloud architectures are loosely coupled and stateless, which is not how most legacy applications have been built over the years. Many cloud initiatives require integrating with multiple cloud-based solutions from other vendors, partners, and customers. The methods used to test and deploy cloud-based solutions may be radically different and more agile than what companies are accustomed to in their legacy environments. Companies making a move to the cloud should realize that there is more to it than simply deploying or paying for software from a cloud vendor. There are significant changes from an architectural, business process, and people perspective. Often, the skills required to do it right do not exist within the enterprise.
As cloud service providers proliferate, it may be difficult for the service consumer to keep track of the latest cloud services offered and to find the most suitable providers based on their criteria. In such cases, the service broker performs the cost calculation of the service(s), carrying out the analysis on behalf of the consumer and offering the most competitive service from the palette of available services. This may lead to consuming the service from a new provider offering better conditions (based on matching criteria such as SLA, cost, fit, security, and energy consumption). Thus, the service broker may be able to move system components from one cloud to another based on user-defined criteria such as cost, availability, performance, or quality of service. Cloud service brokers will be able to automatically route data, applications, and infrastructure needs based on key criteria such as price, location (including the many legislative and regulatory jurisdictional data storage location requirements), latency needs, SLA level, supported operating systems, scalability, backup/disaster recovery capabilities, and regulatory requirements. A number of frameworks and solutions provide examples of this functionality: RESERVOIR [27], a framework that allows efficient migration of resources across geographies and administrative domains, maximizing resource exploitation and minimizing utilization costs; the Intercloud [28] environment, which supports scaling of applications among multiple vendor clouds; and the Just in Time [29] broker, which adds value by offering cloud computing without the need for capacity planning, simply discovering, recovering, and reselling resources that are already amortized and idle.
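A broker's criteria-based selection can be sketched as a weighted scoring function over provider offers. The metric names, offer format, and weights are invented for illustration; a real broker would also match hard constraints such as jurisdiction and SLA level:

```python
def choose_provider(offers: dict, weights: dict) -> str:
    """Pick the best provider: lower cost and latency are better,
    higher availability is better; weights express user priorities."""
    def score(m: dict) -> float:
        return (-weights.get("cost", 0) * m["cost"]
                - weights.get("latency", 0) * m["latency_ms"]
                + weights.get("availability", 0) * m["availability"])
    return max(offers, key=lambda name: score(offers[name]))
```

With this sketch, a cost-driven consumer and a latency-driven consumer can receive different providers from the same offer palette, which is exactly the brokering behaviour described above.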
Another approach is provided by FCM [30], a meta-brokering component that provides transparent service execution for users by allowing the system to interconnect various cloud broker solutions, based on the number and location of the virtual machines utilized for the received service requests.
As you are reading this chapter, you may have already noted the significance of business agility and the roadmap presented using the balanced scorecard in our previous chapters. We have attempted to lay a sound foundation to define various business stakeholders and how these technology forces have helped align chief information officers (CIOs), chief marketing officers (CMOs), and others in the current turbulent business environment. We have also put our best efforts into defining the Business Agility Readiness Roadmap with clear guideposts to help you make rapid progress in your business process optimization. We have made an effort to highlight cases of success and failure that would help you to identify clearly the stage you are at in your business and to transform your business with the impact of these technology forces. You have noticed that it is imperative to transform your current business model and processes to create business agility, to survive and thrive in business irrespective of what business sector you are in today. It was okay to run a business independently just a few years ago, but no longer. We introduced and talked about business ecosystems, a concept that brings all stakeholders such as employees, customers, and partners (distributors, value-added resellers, systems integrators, independent consultants, investors, shareholders) to participate in your business to create business agility and to help your business succeed. Considering this carefully, you would realize that all play a major role in your business, and none should be ignored or taken lightly.
In simple language, mobile commerce is the mobile version of e-commerce: every e-commerce capability becomes available through mobile devices using computation and storage in the cloud. According to Wu and Wang , mobile commerce is “the delivery of electronic commerce capabilities directly into the consumer’s hand, anywhere, via wireless technology.” There are plenty of examples of mobile commerce, such as mobile transactions and payments, mobile messaging and ticketing, mobile advertising and shopping, and so on. Wu and Wang  further report that 29% of mobile users had purchased through their mobile devices, that 40% of Walmart products were bought via mobile in 2013, and that $67.1 billion of purchases would be made from mobile devices in the United States and Europe in 2015. These statistics demonstrate the massive growth of m-commerce. In m-commerce, the user’s privacy and data integrity are vital issues: hackers constantly try to obtain sensitive information such as credit card and bank account details. To protect users from these threats, a public key infrastructure (PKI) can be used. In PKI, encryption-based access control and over-encryption are used to secure the privacy of users’ access to the outsourced data. To enhance customer satisfaction, customer intimacy, and cost competitiveness in a secure environment, an MCC-based 4PL-AVE trading platform is proposed in Dinh et al. .
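The over-encryption idea, an inner layer applied by the data owner plus an outer layer applied by the provider, can be illustrated with a toy stream cipher. This is a didactic sketch only, not real PKI, and must not be used for actual security:

```python
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Toy keystream from SHA-256 in counter mode. Illustration only,
    # not a substitute for PKI or authenticated encryption.
    out = bytearray()
    i = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, out))

def over_encrypt(plaintext: bytes, owner_key: bytes, provider_key: bytes) -> bytes:
    # Inner layer by the data owner, outer layer by the provider.
    return stream_xor(provider_key, stream_xor(owner_key, plaintext))

def decrypt(ciphertext: bytes, owner_key: bytes, provider_key: bytes) -> bytes:
    # A reader needs BOTH keys; the provider can revoke a user by
    # rotating only the outer key, without touching the owner layer.
    return stream_xor(owner_key, stream_xor(provider_key, ciphertext))
```

The design point is that revocation only requires re-encrypting the outer layer at the provider, so the owner never has to re-upload or re-encrypt the underlying data.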
A cloud OS should provide the APIs that enable data and service interoperability across distributed cloud environments. Mature OSs provide a rich set of services to applications so that each application does not have to reinvent important functions such as VM monitoring, scheduling, security, power management, and memory management. In addition, if the APIs are built on open standards, they help organizations avoid vendor lock-in, creating a more flexible environment. For example, linkages will be required to bridge traditional data centers and public or private cloud environments. The flexibility of moving data or information across these systems demands that the OS provide a secure and consistent foundation to reap the real advantages offered by cloud computing environments. The OS also needs to make sure the right resources are allocated to the requesting applications, a requirement that is even more important in hybrid cloud environments. Therefore, any well-designed cloud environment must have well-defined APIs that allow an application or a service to be plugged into the cloud easily. These interfaces need to be based on open standards to protect customers from being locked into one vendor’s cloud environment.
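The lock-in-avoidance argument can be sketched with an abstract, provider-neutral API that concrete providers implement; applications code against the interface, so a provider can be swapped without changing application code. The interface and the in-process provider below are invented for illustration:

```python
from abc import ABC, abstractmethod

class CloudAPI(ABC):
    """Vendor-neutral interface (hypothetical): applications depend
    only on this contract, never on a specific provider's SDK."""
    @abstractmethod
    def start_vm(self, image: str) -> str: ...
    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

class LocalCloud(CloudAPI):
    """Toy in-process 'provider' implementing the interface."""
    def __init__(self):
        self._vms, self._next = {}, 0

    def start_vm(self, image: str) -> str:
        vm_id = f"vm-{self._next}"
        self._next += 1
        self._vms[vm_id] = image
        return vm_id

    def stop_vm(self, vm_id: str) -> None:
        del self._vms[vm_id]
```

Open-standard efforts play the role of `CloudAPI` here: any provider implementing the agreed interface becomes substitutable behind unchanged application code.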
The SR55 series features 24 VDC, 110 VAC, or 230 VAC selectable control voltage, as well as easily and separately adjustable motor start and stop times. These fully programmable units are equipped with a touchscreen with an easy-to-navigate menu structure, a quick Automatic Application Setup feature, built-in SCR failure protection, and full data logging (fault records, motor current, operational status, etc.). SR55 soft starters also feature integrated Modbus RTU, or optional Modbus TCP or EtherNet/IP communication, as well as programmable analog I/O, digital inputs, and relay outputs for remote control.
Before the emergence of use and technology uncertainty, a closed system perspective, also referred to as technological optimism, was an appropriate viewpoint on technologies. When taking a closed system perspective, things are assumed to go right because systems are well designed and maintained, procedures are complete and correct, people behave as expected and as they were taught, and designers can foresee and anticipate every contingency. Overall, people are seen as a liability and a threat to the system, and their flexibility is therefore minimised to achieve efficiency (Hollnagel et al. 2011). Use and technology uncertainty and the concept of technological discontinuities made clear, however, that organisational changes triggered by new technologies are not easy to foresee. Technologies (machines and automation in particular) are very good at tackling problems in predictable environments, because there the risks can be clearly identified, assessed, and controlled, and high flexibility is not needed. For uncertain environments and the use of complex systems, however, one does not know what the risks are: flexibility and adaptability are needed, skills normally associated with people rather than with technology. The argument that technology is too brittle was made by Dreyfus several decades ago but still seems to apply today (Dreyfus 1987; Dreyfus 1992). It is necessary to adopt an open system perspective, also referred to as technological realism. In an open system perspective, things go right because people learn to overcome design flaws, adapt their performance to meet demands, interpret and apply procedures to match conditions, and detect and correct things that go wrong. Overall, people are seen as an asset that enables systems to function properly (Hollnagel et al. 2011).