Academic year: 2020

AN EFFICIENT MERKLE HASH TREE FOR VERIFIABLE KEY IN CLOUD

STORAGE

Aneesh T V (PG Scholar) and L. Mubarali (Head of Department)
Department of Computer Science and Engineering, Maharaja Engineering College, Avinashi.

Abstract

Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of dynamic data stored in the cloud. Introducing a TPA relieves the client of auditing whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for cloud computing. The key-exposure problem in the setting of cloud storage auditing has been formulated and studied. To address this challenge, existing solutions all require the client to update his secret keys in every time period, which inevitably brings new local burdens to the client, especially clients with limited computation resources such as mobile phones. Previous work focuses on making key updates as transparent as possible for the client and proposes a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, so the key-update burden on the client is kept minimal. In particular, we leverage the third party auditor (TPA) present in many existing public auditing designs, let it play the role of the authorized party in our case, and put it in charge of both storage auditing and secure key updates for key-exposure resistance. In this work we propose a Merkle hash tree extended with an encrypted version of the client's secret key, improving existing models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multiuser setting, where the TPA can perform multiple auditing tasks simultaneously.


1. INTRODUCTION

Cloud computing is an Internet-based process for the development and use of computer technology. It enables users to store their data remotely in the cloud so as to enjoy services on demand. Migrating data from the user side to the cloud offers great convenience, as users can access their data anytime, anywhere, from any device, without worrying about the capital investment needed to deploy hardware infrastructure. Small and medium-sized enterprises with limited budgets, in particular, can achieve cost savings and scale their investments on demand by using cloud-based services to manage projects and enterprise-wide contacts and schedules. However, allowing a profit-driven cloud service provider (CSP) to take care of confidential corporate data raises fundamental security and privacy issues. For instance, an unreliable CSP may sell an enterprise's confidential information to its closest business competitors for profit. A natural way to keep sensitive data confidential against an untrusted CSP is therefore to store only encrypted data in the cloud. Cloud storage enables users to access their data anywhere and at any time. It achieves the dream of obtaining computing, storage and communication resources as easily as water and electricity: all resources can be obtained in a plug-and-play way. It offers high scalability, ease of use, cost-effectiveness, and simplified infrastructure planning.

However, the emerging use of cloud storage has led to the problem of verifying that the storage server indeed stores the data. When users store their data in cloud storage, their main concern is whether the data remain intact. Remote data possession checking focuses on how to frequently, efficiently and securely verify that a storage server faithfully stores its client's (potentially very large) original data without retrieving it. The storage server is assumed to be untrusted in terms of both security and reliability. There are two types of schemes, namely provable data possession (PDP) [3] and proof of retrievability (POR) [1]. The difference is that a POR not only checks possession of the data but can also recover the data in case of a failure; usually, a PDP can be transformed into a POR by adding erasure or error-correcting codes. An existing system concerned with public auditability utilizes RSA-based homomorphic tags for auditing outsourced data. In subsequent work, a dynamic version of the prior PDP scheme was proposed, in which the system imposes an a priori bound on the number of queries and allows only very basic block operations with limited functionality; block insertions cannot be supported [2].

Another existing system considers dynamic data storage in a distributed scenario, and its challenge-response protocol can both determine data correctness and locate possible errors. Yet another system describes a "proof of retrievability" model, in which spot-checking and error-correcting codes are used to ensure both "possession" and "retrievability" of data files on archive service systems. Specifically, special blocks called "sentinels" are randomly embedded into the data file F for detection purposes, and F is further encrypted to protect the positions of these special blocks. However, the number of queries a client can perform is again fixed a priori, and the introduction of precomputed "sentinels" prevents the realization of dynamic data updates. In addition, public auditability is not supported in that scheme. Cloud storage is universally viewed as one of the most important services of cloud computing, and one important security problem is how to efficiently check the integrity of the data stored in the cloud. In recent years, many auditing protocols for cloud storage have been proposed to deal with this problem. The key-exposure problem is another important problem in cloud storage auditing [4].

In a cloud storage auditing protocol with verifiable outsourcing of key updates, as in our design, the third party auditor (TPA) plays the role of the authorized party in charge of key updates. In addition, as in traditional public auditing protocols, another important task of the TPA is to check the integrity of the client's files stored in the cloud. The TPA does not know the client's real secret key for cloud storage auditing; it holds only an encrypted version. In the detailed protocol, we use a blinding technique with a homomorphic property to form the encryption algorithm that encrypts the secret keys held by the TPA [5]. This makes the protocol secure and the decryption operation efficient. Meanwhile, the TPA can complete key updates under the encrypted state, and the client can verify the validity of the encrypted secret key when he retrieves it from the TPA.

The proposed system improves existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. We propose a technique for the traversal of Merkle trees which is structurally very simple and allows for various trade-offs between storage and computation. For one choice of parameters, the total space required is bounded by 1.5 log² N / log(log N) hash values, and the worst-case computational effort is 2 log N / log(log N) hash function evaluations per round. Motivated by the possibility of using Merkle authentication or digital signatures in slow or space-constrained environments, we focus on improving the efficiency of the authentication/signature operation, building on earlier work. Although hash functions are very efficient, too many secret leaf values would need to be authenticated for each digital signature. By reducing the time or space cost, we find that for medium-size trees the computational cost can be made low enough for practical use. This reinforces the belief that practical, secure signature/authentication protocols are realizable.
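The Merkle Hash Tree construction referred to above can be sketched as follows. This is a minimal illustration, assuming SHA-256 as the hash function and duplicating the last node on odd levels; the paper fixes neither detail.

```python
# A minimal sketch of the classic Merkle Hash Tree used for block tag
# authentication. The hash choice (SHA-256), block contents, and odd-level
# padding rule are illustrative assumptions, not details fixed by the paper.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return all levels of the tree, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def root(blocks):
    return build_tree(blocks)[-1][0]

blocks = [b"block-%d" % i for i in range(8)]
r = root(blocks)
# Tampering with any single block changes the root, which is what the
# verifier ultimately checks.
tampered = blocks[:]
tampered[3] = b"corrupted"
assert root(tampered) != r
```

Because every internal node depends on the hashes below it, the verifier only needs the single authentic root value to detect modification of any block.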

Fig1. Network model

2. PROBLEM FORMATION

A central security problem is how to efficiently check the integrity of the data stored in the cloud. The key-exposure problem itself is non-trivial by nature: once the client's secret key for storage auditing is exposed to the cloud, the cloud can easily hide data-loss incidents to maintain its reputation, and may even discard rarely accessed client data to save storage space. What is needed is a cloud storage auditing protocol with key-exposure resilience. In such a protocol, the secret keys for cloud storage auditing are updated periodically. As a result, any dishonest behavior, such as deleting or modifying the client's data previously stored in the cloud, can be detected, even if the cloud obtains the client's current secret key for cloud storage auditing.

3. PROPOSED METHODOLOGY

i) Construction of a network model

Initially, the basic network model for cloud data storage is developed in this module. Three different network entities can be identified: the Client, an entity that has large data files to be stored in the cloud and relies on the cloud for data maintenance and computation, and can be either an individual consumer or an organization; the Cloud Storage Server (CSS), an entity managed by a Cloud Service Provider (CSP) that has significant storage space and computation resources to maintain the clients' data; and the Third Party Auditor, an entity with expertise and capabilities that clients do not have, which is trusted to assess and expose the risk of cloud storage services on behalf of the clients upon request. In the cloud paradigm, by putting their large data files on the remote servers, clients can be relieved of the burden of storage and computation.
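The three-entity model above can be sketched in code. The class and method names are illustrative assumptions, and the single file digest stands in for the paper's per-block homomorphic authenticators; it only shows how the roles interact.

```python
# Sketch of the three-entity network model: Client, Cloud Storage Server
# (managed by the CSP), and Third Party Auditor. The hash-based integrity
# check is a stand-in assumption for the protocol's real authenticators.
import hashlib
from dataclasses import dataclass, field

@dataclass
class CloudStorageServer:                      # managed by the CSP
    store: dict = field(default_factory=dict)
    def put(self, name: str, data: bytes): self.store[name] = data
    def get(self, name: str) -> bytes: return self.store[name]

@dataclass
class Client:
    def upload(self, css: CloudStorageServer, name: str, data: bytes) -> str:
        css.put(name, data)
        # Metadata the client keeps (or hands to the TPA) for later auditing.
        return hashlib.sha256(data).hexdigest()

@dataclass
class ThirdPartyAuditor:
    def audit(self, css: CloudStorageServer, name: str, digest: str) -> bool:
        # Audits on behalf of the client, without the client's involvement.
        return hashlib.sha256(css.get(name)).hexdigest() == digest

css, client, tpa = CloudStorageServer(), Client(), ThirdPartyAuditor()
digest = client.upload(css, "report.bin", b"outsourced file contents")
assert tpa.audit(css, "report.bin", digest)        # data intact
css.store["report.bin"] = b"silently modified"
assert not tpa.audit(css, "report.bin", digest)    # corruption detected
```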

ii) Third Party Auditor

Our design is based on the structure of the previously proposed protocol and uses the same binary tree structure to evolve keys, a structure that has been used to design several cryptographic schemes. This tree structure lets the protocol achieve fast key updates and a short key size. One important difference is that the proposed protocol uses the binary tree to update the encrypted secret keys rather than the real secret keys. One problem we need to resolve is that the TPA should perform the outsourced computations for key updates without knowing the client's real secret key. A traditional encryption technique is not suitable because it makes the key update difficult to complete under the encrypted condition; moreover, it would be even harder to give the client the capability to verify the validity of the encrypted secret keys. To address these challenges, we propose to explore a blinding technique with a homomorphic property to efficiently "encrypt" the secret keys. It allows key updates to be performed smoothly on the blinded version, and it further makes verifying the validity of the encrypted secret keys possible. In the designed SysSetup algorithm, the TPA holds only an initial encrypted secret key, and the client holds a decryption key used to decrypt the encrypted secret key. In the designed KeyUpdate algorithm, the homomorphic property allows the secret key to be updated in its encrypted state and makes the encrypted secret key verifiable. The VerESK algorithm lets the client check the validity of the encrypted secret keys immediately.
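The blinding idea can be illustrated with a toy example, under heavy assumptions: this is not the paper's concrete protocol, the modulus and public update factor are made up, and the key space is a simple prime field. It only shows the homomorphic property at work, i.e. that a public per-period update commutes with the blinding, so the TPA can run KeyUpdate without ever seeing the real key.

```python
# Toy sketch (assumption-laden, not the paper's protocol) of updating a
# blinded secret key. The TPA only ever holds sk * r, yet applying the
# public update factor d to the blinded value updates the real key too.
import secrets

q = (1 << 61) - 1                      # toy prime modulus (assumption)
d = 7                                  # public per-period update factor (assumption)

sk0 = secrets.randbelow(q - 1) + 1     # client's real initial secret key
r = secrets.randbelow(q - 1) + 1       # client's blinding factor
blinded = (sk0 * r) % q                # the only value the TPA ever holds

for _ in range(3):                     # TPA runs KeyUpdate for three periods
    blinded = (blinded * d) % q        # update commutes with the blinding

sk3 = (blinded * pow(r, -1, q)) % q    # client retrieves and unblinds
assert sk3 == (sk0 * pow(d, 3, q)) % q # same result as updating the real key
```

In the actual protocol the client additionally verifies the retrieved key's validity (the VerESK step) against public values, which this arithmetic-only toy omits.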

iii) Default Integrity Verification

In this module we present the default integrity verification process. The client or the TPA can verify the integrity of the outsourced data by challenging the server. Before challenging, the TPA first uses spk to verify the signature on t; if the verification fails, it rejects by emitting FALSE, otherwise it recovers u.
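The challenge-response pattern behind this verification can be sketched as follows. The HMAC-based per-block tags are a stand-in assumption for the paper's homomorphic authenticators (which additionally let the server aggregate its response); the spot-checking structure is the same.

```python
# Minimal challenge-response sketch of integrity verification: the verifier
# spot-checks randomly chosen blocks instead of downloading the whole file.
# HMAC tags here are an assumption standing in for homomorphic authenticators.
import hashlib, hmac, secrets

key = secrets.token_bytes(32)                    # client's tag key (assumption)

def tag(i: int, block: bytes) -> bytes:
    # Per-block tag bound to the block index, so blocks cannot be swapped.
    return hmac.new(key, i.to_bytes(8, "big") + block, hashlib.sha256).digest()

blocks = [b"data-%d" % i for i in range(100)]
tags = [tag(i, b) for i, b in enumerate(blocks)]  # stored alongside the file

# Verifier challenges c = 10 random indices; server responds with the
# challenged blocks and their tags; verifier re-derives and compares.
challenge = [secrets.randbelow(len(blocks)) for _ in range(10)]
response = [(i, blocks[i], tags[i]) for i in challenge]
assert all(hmac.compare_digest(t, tag(i, b)) for i, b, t in response)
```

Note this toy check is privately verifiable (it needs the tag key); the paper's scheme is publicly verifiable, which is what lets the TPA audit on the client's behalf.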

iv) Implementation of the basic protocol based on the Merkle Hash Tree

In this module we present the basic protocol based on the Merkle Hash Tree, which supports public auditability and data dynamics. A Merkle Hash Tree (MHT) is a well-studied authentication structure intended to efficiently and securely prove that a set of elements is undamaged and unaltered. It is constructed as a binary tree in which the leaves are the hashes of the authentic data values. The verifier, who holds the authentic root hash hr, requests blocks and authenticates the received blocks using the auxiliary authentication information; the verifier then recomputes the root and checks whether the calculated hr matches the authentic one. The MHT is commonly used to authenticate the values of data blocks.
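The verification step just described can be sketched concretely: the auxiliary authentication information (AAI) for a block is the list of sibling hashes on its path to the root. SHA-256 and the odd-level padding rule are assumptions.

```python
# Sketch of authenticating a received block against the authentic root hash
# hr using the auxiliary authentication information (sibling hashes on the
# leaf's root path). Hash choice and padding rule are assumptions.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def levels_of(blocks):
    level, out = [h(b) for b in blocks], []
    out.append(level)
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        out.append(level)
    return out

def aai(blocks, idx):
    """Sibling hashes with side flags, from leaf idx up to the root."""
    path = []
    for level in levels_of(blocks)[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = idx ^ 1
        path.append((level[sib], sib < idx))   # True: sibling is on the left
        idx //= 2
    return path

def verify(block, path, hr):
    node = h(block)
    for sib, left in path:
        node = h(sib + node) if left else h(node + sib)
    return node == hr            # compare recomputed root with authentic hr

blocks = [b"block-%d" % i for i in range(8)]
hr = levels_of(blocks)[-1][0]
assert verify(blocks[5], aai(blocks, 5), hr)
assert not verify(b"corrupted", aai(blocks, 5), hr)
```

The AAI has only log N entries, so the server proves possession of one block while transferring far less than the whole file.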

4. DYNAMIC DATA OPERATION WITH INTEGRITY AND BATCH AUDITING FOR MULTI-CLIENT DATA

We now show how our scheme can explicitly and efficiently handle fully dynamic data operations, including data modification (M), data insertion (I), and data deletion (D), for cloud data storage. In the following descriptions, we assume that the file F and the signature Ф have already been generated and properly stored at the server. The root metadata R has been signed by the client and stored at the cloud server, so that anyone who holds the client's public key can challenge the correctness of the data storage. In this module we also present batch auditing for multi-client data: our scheme allows provable data updates and verification in a multi-client system. As in the BLS-based construction, the aggregate signature scheme allows the creation of signatures on arbitrary distinct messages. Moreover, it supports the aggregation of multiple signatures by distinct signers on distinct messages into a single short signature, and thus greatly reduces the communication cost while providing efficient verification of the authenticity of all messages. Finally, we compare the performance of the proposed model with existing approaches using the following metrics: server computation cost, verifier computation time, and communication cost. This analysis shows that the proposed system performs better than the existing system.
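The cost of a data modification (M) can be made concrete with a sketch: only the hashes on the modified leaf's path to the root change, after which the client re-signs the new root metadata R. The power-of-two tree layout and SHA-256 are assumptions; insertion and deletion additionally restructure the tree, which this sketch omits.

```python
# Sketch of a dynamic data operation: after modifying block idx, only the
# O(log n) hashes on its root path are recomputed, yielding the new root R
# that the client re-signs. Tree layout and hash choice are assumptions.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(blocks):
    """Full tree for a power-of-two number of blocks, leaves first."""
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def modify(levels, idx, new_block):
    """Apply modification (M) in place; return the new root."""
    levels[0][idx] = h(new_block)
    for depth in range(1, len(levels)):
        idx //= 2
        below = levels[depth - 1]
        levels[depth][idx] = h(below[2 * idx] + below[2 * idx + 1])
    return levels[-1][0]

blocks = [b"block-%d" % i for i in range(8)]
levels = build(blocks)
old_root = levels[-1][0]
new_root = modify(levels, 3, b"block-3-v2")
# Incremental update matches a full rebuild over the modified file.
rebuilt = build([b if i != 3 else b"block-3-v2" for i, b in enumerate(blocks)])
assert new_root == rebuilt[-1][0]
assert new_root != old_root
```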

Fig2. Comparison of Verification Time

CONCLUSION

To ensure cloud data storage security, it is critical to enable a TPA to evaluate the service quality from an objective and independent perspective. Public auditability also allows clients to delegate integrity verification tasks to the TPA when they are themselves unreliable or unable to commit the computation resources needed for continuous verification. Another major concern is how to construct verification protocols that can accommodate dynamic data files. In this paper, we explored the problem of providing simultaneous public auditability and data dynamics for remote data integrity checking in cloud computing. Our construction is deliberately designed to meet these two important goals while keeping efficiency closely in mind. To achieve efficient data dynamics, we improve existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multiuser setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.

Fig3. Comparison of Computation Time

REFERENCES

1. A. Juels and B. S. Kaliski Jr., "PORs: Proofs of Retrievability for Large Files," Proc. ACM Conf. Computer and Comm. Security, P. Ning, S. D. C. di Vimercati, and P. F. Syverson, eds., pp. 584-597, 2007.

2. H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. 14th Int'l Conf. Theory and Application of Cryptology and Information Security: Advances in Cryptology, J. Pieprzyk, ed., pp. 90-107, 2008.

3. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. ACM Conf. Computer and Comm. Security, pp. 598-609, 2007.

4. C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Storage Security in Cloud Computing," IEEE Trans. Computers, vol. 62, no. 2, pp. 362-375, Feb. 2013.

5. Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing," Illinois Institute of Technology, Chicago, IL 60616, USA.

6. B. Wang, B. Li, and H. Li, "Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud," IEEE Trans. Cloud Computing, vol. 2, no. 1, pp. 43-56, Jan./Mar. 2014.

7. E.-C. Chang and J. Xu, "Remote Integrity Check with Dishonest Storage Server," Proc. 13th European Symp. Research in Computer Security (ESORICS '08), pp. 223-237, 2008.

8. M. A. Shah, R. Swaminathan, and M. Baker, "Privacy-Preserving Audit and Extraction of Digital Contents," Report 2008/186, Cryptology ePrint Archive, 2008.

9. Q. Wang, K. Ren, W. Lou, and Y. Zhang, "Dependable and Secure Sensor Data Storage with Dynamic Integrity Assurance," Proc. IEEE INFOCOM, pp. 954-962, Apr. 2009.

10. T. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. IEEE Int'l Conf. Distributed Computing Systems (ICDCS '06), 2006.
