ENERGY EFFICIENT SECURE STORAGE MANAGEMENT FOR APPLICATIONS USING VIRTUAL MACHINES IN PRIVATE CLOUD

SHIVKUMAR ANADUR

Professor, Department of Computer Science & Engineering, BKIT, Bhalki, India

ABSTRACT

Cloud computing is a subscription-based service that provides networked storage space and computing resources. Technologies such as cluster, grid, and now cloud computing allow consumers to access large amounts of computing power in a fully or para-virtualized manner by pooling resources and offering them as virtual machines. Energy efficiency in cloud computing concerns all components of the computing system: hardware, software, and the local area network. Energy-aware computing must meet several objectives: reducing energy consumption and improving utilization for paradigms that are not pay-per-use, such as clusters and grids, while treating revenue maximization as an additional metric for cloud architectures. In this paper we propose an Energy-Efficient Scheme (EES) for virtual machines that distributes the maximum workload onto the minimum number of virtual machines.

KEYWORDS: Virtual Machines, MAID, RAID Clusters

INTRODUCTION

The amount of digital data produced by humans is increasing every day. Based on the IDC report [Susane Albers, 2010], the amount of information and content created and stored digitally will grow to over 7 zettabytes by 2015. This explosion of digital data must be managed and used by data-intensive applications such as sensor data archives, search engines, and customer management systems (OLTP). The increased data must also be stored somewhere, which implies that storage capacity must increase as well. The report [Jiandun Li, 2011] says that storage capacity shipments in 2014 will be seven times as large as those in 2009.

Today, the energy consumption of IT equipment in data centers has increased greatly [Jiandun Li, 2011]. A report [Saurabh Kumar Garg, 2011] notes that storage energy consumption for all IT equipment will keep increasing in the short term because the amount of digital data is growing quickly. For example, the paper [Akshat Verma, 2008] reports that the storage power consumed by large online transaction processing systems (OLTP, data-intensive applications) accounts for more than 70% of the total power of all IT equipment. Thus, reducing storage power is strongly required in large datacenters.

Many applications run in datacenters today, and their I/O behaviors differ widely. For example, TPC-C issues random I/O to master data tables, while TPC-H issues mainly sequential read commands to large transaction tables. These I/O behaviors of data-intensive applications provide good hints for storage power saving.

However, previous storage power-saving methods use only storage-level I/O behavior and do not exploit application-level I/O behavior effectively. These methods may therefore apply the power-saving function at run time even when the idle interval is not long enough to yield any saving, causing run-time performance degradation for applications. By considering application I/O behavior, we can avoid applying the power-saving function at improper times and are likely to achieve high power savings without large performance degradation compared with previous work.
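As a concrete illustration of this argument, the sketch below contrasts a storage-level idle-threshold policy with one that uses an application-supplied hint about the length of the upcoming idle interval. It is a minimal, hypothetical Java example: the class name, the break-even constant, and the hint parameter are assumptions for illustration, not part of any system described in this paper.

```java
// Hypothetical sketch: deciding whether a disk power-down is worthwhile, given an
// application-level hint about the next idle interval. The break-even value below
// is an assumed placeholder, not a measured drive characteristic.
public class SpinDownPolicy {

    // Assumed time needed to recover the energy/latency cost of a spin-down/spin-up cycle.
    private static final double BREAK_EVEN_SECONDS = 15.0;

    /**
     * Storage-level policies only see how long the disk has been idle so far;
     * an application-level hint tells us how long the idle period will last.
     */
    public boolean shouldSpinDown(double observedIdleSeconds,
                                  Double predictedIdleSecondsFromApp) {
        if (predictedIdleSecondsFromApp != null) {
            // Application hint available: spin down only if the whole idle
            // interval is long enough to recover the transition cost.
            return predictedIdleSecondsFromApp > BREAK_EVEN_SECONDS;
        }
        // Fallback: classic storage-level threshold, which may fire too early.
        return observedIdleSeconds > BREAK_EVEN_SECONDS;
    }

    public static void main(String[] args) {
        SpinDownPolicy policy = new SpinDownPolicy();
        System.out.println(policy.shouldSpinDown(5.0, 60.0));  // true: app predicts a long idle period
        System.out.println(policy.shouldSpinDown(5.0, 8.0));   // false: interval too short to pay off
        System.out.println(policy.shouldSpinDown(20.0, null)); // true: threshold-only fallback
    }
}
```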

LITERATURE SURVEY

IEEE published in 2011: A Scheduling Algorithm for Private Clouds

Abstract

In contrast with public clouds, private clouds have some unique features, especially with respect to workflow scheduling. The trade-off between power and performance remains one of the key concerns. Building on previous research, the authors propose a hybrid energy-efficient scheduling algorithm using dynamic migration. Specifically, there are at least two challenging problems in VM scheduling: a) how to reduce the response time of incoming requests, and b) how to balance the workload when the datacenter is running in low-power mode. These problems arise because a) powering up a sleeping node via remote access is time-consuming, and b) traditional energy-efficient algorithms often fall short of sharing workloads across hosts. To address these two problems, the authors propose a hybrid energy-efficient scheduling approach comprising a pre-power technique and a least-load-first algorithm.

To reduce response time, they apply a pre-power technique. They first set a desired band for the left capacity (i.e. the available capacity of awake hosts) and then keep the left capacity within that band. In this manner, powering a node up or down is not passively controlled by an idle threshold, which is difficult to set, but fully controlled by the cloud scheduler. When the left capacity of the private cloud is running low (not running out), a power-up command is sent to one or more sleeping nodes to wake them; the left capacity scales up and incoming requests no longer have to wait as long. When the left capacity exceeds the desired band, a power-down command is sent to one or more awake nodes to put them to sleep; the left capacity scales down and more energy is saved than with idle-threshold control. In a word, the scheduler does not rely on an idle threshold to save energy: it keeps the left capacity a little above the current workload and maintains this margin by forcibly powering nodes up or down.
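The following Java sketch illustrates the pre-power idea summarized above: a controller keeps the left capacity inside a desired band by proactively waking or sleeping hosts. The class names, watermarks, and host model are assumptions made for illustration, not the authors' implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of a pre-power controller: keep the spare ("left") capacity
// of awake hosts inside a desired band by proactively waking or sleeping nodes.
public class PrePowerController {

    private final double lowWatermark;   // below this, wake a node
    private final double highWatermark;  // above this, put a node to sleep
    private final Deque<Host> asleep = new ArrayDeque<>();
    private final Deque<Host> awake = new ArrayDeque<>();

    public PrePowerController(double lowWatermark, double highWatermark) {
        this.lowWatermark = lowWatermark;
        this.highWatermark = highWatermark;
    }

    /** Called by the cloud scheduler after every placement decision. */
    public void rebalance(double currentWorkload) {
        double leftCapacity = awakeCapacity() - currentWorkload;
        if (leftCapacity < lowWatermark && !asleep.isEmpty()) {
            Host h = asleep.pop();      // proactively power up before requests start queuing
            h.powerUp();
            awake.push(h);
        } else if (leftCapacity > highWatermark && awake.size() > 1) {
            Host h = awake.pop();       // too much headroom: save energy
            h.powerDown();
            asleep.push(h);
        }
    }

    private double awakeCapacity() {
        return awake.stream().mapToDouble(Host::capacity).sum();
    }

    /** Minimal host model used only for this sketch. */
    static class Host {
        private final double capacity;
        Host(double capacity) { this.capacity = capacity; }
        double capacity() { return capacity; }
        void powerUp()   { /* a real system would send a wake-up command here */ }
        void powerDown() { /* a real system would suspend the node here */ }
    }
}
```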

The trade-off between power and performance remains one of the key concerns. Experiments with the proposed hybrid energy-efficient scheduling algorithm using dynamic migration show that it not only reduces response time and conserves more energy, but also achieves a higher level of load balancing.

IEEE published in 2009: vGreen: A System for Energy Efficient Computing in Virtualized Environments

Abstract

In this paper, the authors present vGreen, a multi-tiered software system for energy-efficient computing in virtualized environments. It comprises novel hierarchical metrics that capture the power and performance characteristics of virtual and physical machines, together with policies that use these metrics for energy-efficient virtual machine scheduling across the whole deployment. Through a real-life implementation on a state-of-the-art testbed of server machines, they show that vGreen can improve both performance and system-level energy savings by 20% and 15% respectively across benchmarks with varying characteristics.

vGreen manages VM scheduling across different physical machines (PMs) with the objective of improving overall energy efficiency and performance. Its basic premise is to understand and exploit the relationship between the architectural characteristics of a VM (e.g. instructions per cycle, memory accesses) and its performance and power consumption. vGreen is based on a client-server model in which a central server (the 'vgserv') schedules VMs across the PMs (the 'vgnodes'). The vgnodes perform online characterization of the VMs running on them and regularly update the vgserv with this information. These updates allow the vgserv to understand the performance and power profile of the different VMs and help it place them intelligently across the vgnodes to improve overall performance and energy efficiency.
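A rough Java sketch of this client/server flow is given below: vgnodes report per-VM architectural metrics and the central server uses them to place a new VM. The metric names, the memory-bound threshold, and the placement heuristic are illustrative assumptions, not the authors' actual policy.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a vGreen-style central scheduler: nodes report per-VM metrics and the
// server places a new VM where it is expected to contend least with resident VMs.
public class VgServSketch {

    static class VmProfile {
        final double instructionsPerCycle;
        final double memoryAccessesPerSec;
        VmProfile(double ipc, double mps) {
            this.instructionsPerCycle = ipc;
            this.memoryAccessesPerSec = mps;
        }
    }

    /** Latest characterization reported by each node, keyed by node name. */
    private final Map<String, List<VmProfile>> reports = new HashMap<>();

    /** Called periodically by a node with its online VM characterization. */
    public void updateNode(String node, List<VmProfile> profiles) {
        reports.put(node, profiles);
    }

    /** Place a memory-bound VM where aggregate memory traffic is lowest,
     *  otherwise where aggregate CPU demand (IPC) is lowest. */
    public String placeVm(VmProfile newVm) {
        boolean memoryBound = newVm.memoryAccessesPerSec > 1_000_000; // assumed threshold
        String best = null;
        double bestScore = Double.MAX_VALUE;
        for (Map.Entry<String, List<VmProfile>> e : reports.entrySet()) {
            double score = e.getValue().stream()
                    .mapToDouble(p -> memoryBound ? p.memoryAccessesPerSec
                                                  : p.instructionsPerCycle)
                    .sum();
            if (score < bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best; // null until at least one node has reported
    }
}
```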

IEEE published in 2002: Massive Arrays of Idle Disks for Storage Archives

Abstract

The declining cost of commodity disk drives is rapidly changing the economics of deploying large amounts of online or near-line storage. Conventional mass storage systems use high-performance RAID clusters, automated tape libraries, or a combination of tape and disk. In this paper, the authors analyze an alternative design using massive arrays of idle disks (MAID). The design of MAID arrays requires accurate models of both drive performance and drive power. They use a combination of existing performance models and measurements from sample IDE drives to develop a unified performance and power model. The simulator model is an extension of an analytic model designed for SCSI drives; however, they verify that the important aspects of the model are also appropriate for IDE drives. To make the model and analysis concrete, they measured various characteristics of several samples of a 20 GB IDE drive, specifically the IBM 60GXP Model IC35L020AVER07, because copious information is available and they had several of the drives. The drive supports the ATA-5 standard and Ultra DMA Mode 5 (100 MB/s) and uses a load/unload mechanism to avoid head-to-disk contact on startup.

The authors argue that this storage organization provides storage densities matching or exceeding those of tape libraries with performance similar to disk arrays. Moreover, they show that with effective power management of individual drives, this performance can be achieved within a very small power budget. In particular, their power management strategy can deliver performance comparable to an always-on RAID system while using 1/15th of the power of such a system.
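As a toy illustration of per-drive power management in a MAID-style array, the following Java sketch spins a drive down after an idle timeout and back up on access. The timeout and class structure are assumptions for illustration; they are not the drive model or measurements used in the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Toy MAID-style array: each drive spins down after an idle timeout and
// is spun back up on access. All timings are assumed placeholders.
public class MaidArraySketch {

    private static final long IDLE_TIMEOUT_MS = 60_000; // assumed spin-down threshold

    private final Map<Integer, Long> lastAccess = new HashMap<>();
    private final Map<Integer, Boolean> spinning = new HashMap<>();

    public MaidArraySketch(int driveCount) {
        long now = System.currentTimeMillis();
        for (int d = 0; d < driveCount; d++) {
            lastAccess.put(d, now);
            spinning.put(d, true);
        }
    }

    /** Serve a block read/write on one drive, spinning it up if needed. */
    public void access(int drive) {
        if (!spinning.get(drive)) {
            spinning.put(drive, true); // would incur spin-up latency on a real drive
        }
        lastAccess.put(drive, System.currentTimeMillis());
    }

    /** Periodic housekeeping: spin down drives idle past the timeout. */
    public void tick() {
        long now = System.currentTimeMillis();
        for (int drive : lastAccess.keySet()) {
            if (spinning.get(drive) && now - lastAccess.get(drive) > IDLE_TIMEOUT_MS) {
                spinning.put(drive, false);
            }
        }
    }

    /** Number of drives currently consuming spindle power. */
    public long spinningDrives() {
        return spinning.values().stream().filter(Boolean::booleanValue).count();
    }
}
```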

IEEE published in 2002: Application transformations for power and performance-aware device management

Abstract:

Energy conservation without performance degradation is an important goal for battery-operated computers such as laptops and handheld assistants. In this paper the authors determine the potential benefits of application-supported device management for optimizing energy and performance. In particular, they consider application transformations that increase device idle times and inform the operating system about the length of each upcoming period of idleness. They assess the potential energy and performance benefits of this type of application support for a laptop disk. Furthermore, they propose and evaluate a compiler framework for performing the transformations automatically for a disk device. The experimental results demonstrate that unless applications are transformed, they cannot accrue any of the predicted benefits. In addition, the compiler produces almost the same performance and energy results as hand-modified applications. Overall, the proposed transformations reduce disk energy consumption by 55% to 89% with only a small degradation in performance.

The disadvantage of this approach is that the transformations are limited to battery-operated computers such as laptops and handheld assistants.

IEEE published in 2004: EERAID: Power-Efficient Redundant and Inexpensive Disk Arrays

Abstract

Recent research on conserving energy in multi-disk systems has operated either at the single-disk-drive level or at the storage-system level, and therefore has certain limitations. This paper studies several new redundancy-based, power-aware I/O request scheduling and cache management policies at the RAID controller level to build energy-efficient RAID systems, exploiting the redundancy and destage behavior of the array for two popular RAID levels, RAID 1 and RAID 5.

For RAID 1, they develop a Windowed Round Robin (WRR) request scheduling policy; for RAID 5, they introduce an N-chance Power-Aware (NPA) cache replacement algorithm for writes and a Power-Directed, Transformable (PDT) request scheduling policy for reads. Trace-driven simulation shows that EERAID saves much more energy than legacy RAIDs and existing solutions.

The main idea of Windowed Round Robin (WRR) is to dispatch every N successive requests alternately to the primary group and then to the mirror group, back and forth. The traditional round-robin policy is a special instance of WRR with N equal to one. When N requests arrive at the RAID 1 controller and are delivered to one group of disks, the disks in the other group have more opportunity to ramp down and stay longer in low-power states. Therefore, the overall energy consumed by the entire disk array is reduced.
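The dispatch rule can be captured in a few lines. The following Java sketch illustrates WRR as described above (N consecutive requests go to one mirror group, then the window switches to the other); group identifiers and the demo values are illustrative.

```java
// Sketch of the Windowed Round Robin (WRR) dispatch rule for RAID 1:
// every window of N consecutive requests goes to one mirror group,
// then the window switches. N = 1 reduces to plain round robin.
public class WindowedRoundRobin {

    private final int windowSize;       // N in the description above
    private int servedInWindow = 0;
    private int currentGroup = 0;       // 0 = primary group, 1 = mirror group

    public WindowedRoundRobin(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Returns the disk group that should serve the next request. */
    public int dispatch() {
        if (servedInWindow == windowSize) {
            currentGroup = 1 - currentGroup; // the idle group can stay in a low-power state
            servedInWindow = 0;
        }
        servedInWindow++;
        return currentGroup;
    }

    public static void main(String[] args) {
        WindowedRoundRobin wrr = new WindowedRoundRobin(3);
        for (int i = 0; i < 8; i++) {
            System.out.print(wrr.dispatch() + " "); // prints: 0 0 0 1 1 1 0 0
        }
    }
}
```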

EXISTING SYSTEM

Power consumption, especially for storing data, and cooling costs for datacenters have increased rapidly. The main applications running at datacenters are data-intensive applications such as large file servers or database systems. Recently, power management of data-intensive applications has been emphasized in the literature, and such reports discuss the importance of power savings.

The existing energy-efficient scheduling approach for private clouds aims to reduce the response time of incoming requests and to balance the workload when the datacenter is running in low-power mode; the algorithm is designed on the basis of a pre-power technique and a least-load-first algorithm, and experimental results show that it saves more energy and achieves a higher level of load balancing. However, in the existing system a new job is assigned to a VM only after the previous job completes; although this is intended to save energy, the number of VMs increases and the energy consumption increases with it.

PROPOSED SYSTEM

Cloud computing is a subscription-based service that provides networked storage space and computing resources. Technologies such as cluster, grid, and now cloud computing allow consumers to access large amounts of computing power in a fully or para-virtualized manner by pooling resources and offering them as virtual machines. Energy efficiency in cloud computing concerns all components of the computing system: hardware, software, and the local area network. Energy-aware computing must meet several objectives: reducing energy consumption and improving utilization for paradigms that are not pay-per-use, such as clusters and grids, while treating revenue maximization as an additional metric for cloud architectures. In this work we propose an Energy-Efficient Scheme (EES) for virtual machines that distributes the maximum workload onto the minimum number of virtual machines.

System Requirement Specification:

Hardware Requirements:

• 10 GB HDD (minimum)
• 128 MB RAM (minimum)
• Pentium P3 processor (minimum)

Software Requirements:

• Java 1.4 or higher
• Java Swing (front end)
• Windows 98 or higher

METHODOLOGY

The proposed Energy-Efficient Scheme (EES) for virtual machines distributes the maximum workload onto the minimum number of virtual machines (VMs). The scheme introduces the basic operations of migration, clone, pause, and resume, combined with minimum-load distribution, first-come-first-serve, and the hybrid energy-efficient scheduling algorithm. Incoming VM requests (jobs) are first served by the minimum number of started virtual machines; if the requests grow beyond the available capacity, migration is applied to redistribute the workload. If a VM request is for a platform or software, cloning of a VM is applied. Consumers submit requests as leases through a lease management system (LMS); these leases are collected in a queue, the virtual machines are likewise kept in a queue, and EES schedules each VM request onto a virtual machine. The working procedure of EES is shown in Fig. 3, and a code sketch of the core packing idea follows.
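The Java sketch below illustrates one simple way to realize "maximum workload on the minimum number of VMs": leases are packed onto the most-loaded VM that can still host them, and a new VM is started only when no running VM has enough spare capacity. The Lease/VM abstractions, the fixed capacity, and the best-fit rule are illustrative assumptions, not the exact EES algorithm.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a best-fit packing scheduler: concentrate lease workload on as few
// VMs as possible, starting a new VM only when the running ones are full.
public class EnergyEfficientScheduler {

    static class Vm {
        final double capacity;
        double load = 0;
        Vm(double capacity) { this.capacity = capacity; }
        boolean canHost(double demand) { return load + demand <= capacity; }
    }

    private final List<Vm> runningVms = new ArrayList<>();
    private final double vmCapacity;

    public EnergyEfficientScheduler(double vmCapacity) {
        this.vmCapacity = vmCapacity;
    }

    /** Place a lease on the most-loaded VM that can still host it (best fit). */
    public Vm schedule(double leaseDemand) {
        Vm best = null;
        for (Vm vm : runningVms) {
            if (vm.canHost(leaseDemand) && (best == null || vm.load > best.load)) {
                best = vm;
            }
        }
        if (best == null) {            // all running VMs are full: start one more
            best = new Vm(vmCapacity);
            runningVms.add(best);
        }
        best.load += leaseDemand;
        return best;
    }

    /** Number of VMs currently powered on (and consuming energy). */
    public int activeVmCount() { return runningVms.size(); }
}
```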

CONCLUSIONS

Cloud computing is a new computing paradigm that offers a large quantity of compute and storage resources to the masses. Scientists and start-up companies can access these resources by paying a small amount of money, just for what they really need. In their various forms and flavors, clouds aim at offering compute, storage, network, and software, or a combination of these, "as a service".

IaaS, PaaS, and SaaS are the three most common terms for the levels of abstraction of cloud services, ranging from "raw" virtual servers to sophisticated hosted applications. Virtualization enables highly reliable and agile deployment and management of services, providing on-demand cloning and live migration, which improve reliability. Great popularity and apparent success have been seen in this area.

Providing an energy-efficient scheduling method for the cloud platform has become a challenging problem. Our work is essential support for such publicly running applications: we propose an energy-efficient scheduling scheme, called EESS, that eases the burden on power generation plants and the problems they face today, and moves us towards the Green Computing we really need.

RESULT SNAPSHOTS

REFERENCES

Susane Albers (2010), "Energy efficient algorithms", Communications of the ACM, vol. 53, no. 5, 86-96.

Jiandun Li, Junjie Peng, Wu Zhang (2011), "An Energy-efficient Scheduling Approach Based on Private Clouds", Journal of Information & Computational Science, vol. 8, no. 4, 716-724.

Jiandun Li, Junjie Peng, Wu Zhang (2011), "A Scheduling Algorithm for Private Clouds", Journal of Convergence Information Technology, vol. 6, no. 7, 1-9.

Saurabh Kumar Garg, Chee Shin Yeo, Arun Anandasivam, Rajkumar Buyya (2011), "Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers", Journal of Parallel and Distributed Computing, vol. 71, no. 6, 732-749.

Akshat Verma, Puneet Ahuja, Anindya Neogi (2008), "Power-aware dynamic placement of HPC applications", Proceedings of the 22nd International Conference on Supercomputing (ICS '08), Island of Kos, 175-184.

Chuling Weng, Zhigang Wang, Minglu Li, Xinda Lu (2009), "The Hybrid Scheduling Framework for Virtual Machine Systems", Proceedings of VEE '09, 113-120.

Gaurav Dhiman, Giacomo Marchetti, Tajana Rosing (2009), "vGreen: A System for Energy Efficient Computing in Virtualized Environments", 2009, San Francisco, California, USA, 19-21.

Gregor von Laszewski, Lizhe Wang, Andrew J. Younge, Xi He (2009), "Power-Aware Scheduling of Virtual Machines in DVFS-enabled Clusters", IEEE International Conference on Cluster Computing (Cluster '09), 1-11.

Bo Li, Jianxin Li, Jinpeng Huai, Tianyu Wo, Qin Li, Liang Zhong (2009), "EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments", IEEE International Conference on Cloud Computing 2009, 17-24.

Aman Kansal, Feng Zhao, Jie Liu, Nupur Kothari, Arka A. Bhattacharya (2010), "Virtual machine power metering and provisioning", ACM, 2010.

E. Pinheiro and R. Bianchini (2004), "Energy conservation techniques for disk array based servers", Proceedings of the 18th Annual International Conference on Supercomputing, ACM, 68-78.

C. Weddle, M. Oldham, J. Qian, and A. Wang (2007), "PARAID: A gear-shifting power-aware RAID", 5th USENIX Conference on File and Storage Technologies, USENIX Association, 245-267.

D. Narayanan, A. Donnelly, and A. Rowstron (2008), "Write off-loading: Practical power management for enterprise storage", 6th USENIX Conference on File and Storage Technologies, USENIX Association, 253-267.

A. Verma, R. Koller, L. Useche, and R. Rangaswami (2010), "SRCMap: Energy proportional storage using dynamic consolidation", 8th USENIX Conference on File and Storage Technologies, USENIX Association, 267-280.

E. Otoo, D. Rotem, and S.-C. Tsao (2010), "Dynamic Data Reorganization for Energy Savings", Proceedings of the 22nd International Conference on Scientific and Statistical Database Management, 322-341.

J. Guerra, H. Pucha, J. Glider, W. Belluomini, and R. Rangaswami (2011), "Cost effective storage using extent based dynamic tiering", 9th USENIX Conference on File and Storage Technologies, USENIX Association.

"TPC-C, an online transaction processing benchmark" (2010), Transaction Processing Performance Council. [Online]. Available: http://www.tpc.org/tpcc/

"TPC Benchmark H standard specification, revision 2.14.2" (2011), Transaction Processing Performance Council. [Online]. Available: http://www.tpc.org/tpch/

A. Brunelle (2010), "btrecord and btreplay user guide". [Online]. Available: http://www.cse.unsw.edu.au/aaronc/iosched/doc/btreplay.html

"Hitachi Adaptive Modular Storage 2500 datasheet" (2011), Hitachi Data Systems. [Online]. Available: http://www.hds.com/assets/pdf/hitachidatasheet-ams2500.pdf

S. Harizopoulos, M. Shah, J. Meza, and P. Ranganathan (2009), "Energy efficiency: The new holy grail of data management systems research", 4th Biennial Conference on Innovative Data Systems Research, 112-123.

M. Poess and R. Nambiar (2010), "Tuning servers, storage and database for power efficient data warehouse", 26th IEEE International Conference on Data Engineering, IEEE Computer Society, 1006-1017.

S. M. Snyder, S. Chen, P. K. Chrysanthis, and A. Labrinidis (2011), "QMD: exploiting flash for energy efficient disk arrays", DaMoN (S. Harizopoulos and Q. Luo, Eds.), ACM, 41-49.
