ABSTRACT: Cloud computing services can be private, public or hybrid. Private cloud services are delivered from a business's data center to internal users. This model offers versatility and convenience while preserving the management, control and security common to local data centers. Internal users may or may not be billed for services through IT chargeback. In the public cloud model, a third-party provider delivers the cloud service over the internet. Public cloud services are sold on demand, typically by the minute or hour, and customers pay only for the CPU cycles, storage or bandwidth they consume. Leading public cloud providers include Amazon Web Services (AWS), Microsoft Azure, IBM SoftLayer and Google Compute Engine. Hybrid cloud is a combination of public cloud services and an on-premises private cloud, with orchestration and automation between the two. Companies can run mission-critical workloads or sensitive applications on the private cloud while using the public cloud for bursting workloads that must scale on demand. The goal of hybrid cloud is to create a unified, automated, scalable environment that takes advantage of everything a public cloud infrastructure can provide while still maintaining control over mission-critical data. Existing schemes for encrypted-data deduplication suffer from security weaknesses and cannot flexibly support data access control and revocation; as a result, few of them can be readily deployed in practice. In this paper, we propose a scheme to deduplicate encrypted data stored in the cloud based on ownership challenge and proxy re-encryption.
In this paper, we present an attribute-based storage system that employs ciphertext-policy attribute-based encryption (CP-ABE) and supports secure deduplication, enabling deduplication and distributed storage of data across HDFS. Our storage system is built on a hybrid cloud architecture in which a private cloud handles the computation and a public cloud manages the storage. The private cloud is provided with a trapdoor key associated with the corresponding ciphertext, with which it can transform a ciphertext under one access policy into ciphertexts of the same plaintext under any other access policies without learning the underlying plaintext. After receiving a storage request, the private cloud first checks the validity of the uploaded item through the attached proof. If the proof is valid, the private cloud runs a tag-matching algorithm to see whether the same data underlying the ciphertext has already been stored. If so, whenever necessary, it regenerates the ciphertext into a ciphertext of the same plaintext under an access policy that is the union of both access policies. We have demonstrated the concept of deduplication effectively, and security is achieved by means of proof of ownership of the file.
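To illustrate the duplicate-detection step, the following minimal sketch (the class name, the pointer scheme, and the SHA-256-based tag are our own illustrative assumptions, not the paper's construction, which derives tags without exposing plaintext to the private cloud) shows how a private-cloud index can map data tags to storage pointers and report a duplicate on a second upload of the same data:

```python
import hashlib

class PrivateCloudIndex:
    """Toy duplicate-detection index kept by the private cloud (illustration only)."""

    def __init__(self):
        self._tags = {}   # tag -> storage pointer in the public cloud

    def tag(self, plaintext: bytes) -> str:
        # modeled here as a plain SHA-256 digest of the data
        return hashlib.sha256(plaintext).hexdigest()

    def store_request(self, plaintext: bytes, pointer: str) -> str:
        t = self.tag(plaintext)
        if t in self._tags:            # same data already stored: dedup hit
            return self._tags[t]
        self._tags[t] = pointer        # first copy: record the new pointer
        return pointer

idx = PrivateCloudIndex()
p1 = idx.store_request(b"report.pdf contents", "blob-001")
p2 = idx.store_request(b"report.pdf contents", "blob-002")  # duplicate upload
assert p1 == p2 == "blob-001"          # only one copy is ever referenced
```

The second request never allocates a new blob: the tag match returns the existing pointer, which is the storage saving deduplication is after.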
The proposed cryptographic technique is a symmetric-key encryption scheme for data retrieval, so that end-user access to secure data transmission is initiated in a dispersed environment where a single store provides authenticity for storage and hosting services. In addition, to prevent unauthorized access to the system, a strong user-authentication technique using normal credentials is applied, and multiple replicas of each uploaded data file are stored on the cloud server and updated in place. Multi-replica data blocks ensure that data integrity between the two parties is secure in such a way that public cloud data remain accessible throughout the session. Furthermore, to secure data in storage and over the untrusted network, 3DES is used to perform encryption and decryption, and document indexing is performed for hash-tree generation. The proposed solution reduces storage complexity and increases the availability of data.
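The hash-tree generation used for document indexing can be sketched as follows. This is a minimal illustration, assuming SHA-256 as the hash and duplication of the last node on odd levels (the text does not fix these details); two parties holding replicas can compare root digests cheaply to check that the replicas are intact:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a hash tree bottom-up over the document's blocks and return its root."""
    if not blocks:
        return sha256(b"")
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"blk-%d" % i for i in range(4)]
root = merkle_root(blocks)
# any change to a replica's blocks changes the root, so integrity between
# two parties reduces to comparing one short digest
assert merkle_root(blocks) == root
assert merkle_root([b"tampered"] + blocks[1:]) != root
```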
Access policies: Cisco Secure ACS 5.5 supports a rules-based, attribute-driven policy model that provides greatly increased power and flexibility for access control policies, which may include authentication protocol requirements, device restrictions, time-of-day restrictions, and other access requirements. Cisco Secure ACS may apply downloadable access control lists (dACLs), VLAN assignments, and other authorization parameters. Version 5.5 can also disable user accounts within the internal database upon expiration of a user or group. Furthermore, it allows comparison between the values of any two attributes available to Cisco Secure ACS for use in identity, group-mapping, and authorization policy rules.
based on the pay-as-you-go model. A tremendously growing number of users inflates the volume of stored data, much of it duplicate copies of identical data. Data deduplication, a data compression technique that eliminates duplicate copies of the same data, is used to reduce the amount of storage space required. To protect the confidentiality of sensitive data, convergent encryption is used, and a secure proof of ownership (PoW) blocks unauthorized access to the data. Data are encrypted with a convergent key derived from the content of the data copy via a cryptographic hash function, so identical copies yield the same ciphertext. After encryption, users send the ciphertext to the cloud. To provide a higher level of confidentiality and security, a key-management technique manages the convergent keys using a cryptographic hash function and sends them to the cloud.
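The convergent-encryption idea can be sketched as follows. The key genuinely is the hash of the content, as described above; the XOR keystream cipher below is purely illustrative (a real deployment would use a block cipher such as AES keyed with the convergent key), but it preserves the property that matters for deduplication, namely that identical plaintexts deterministically produce identical ciphertexts:

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # convergent key: derived from the content itself
    return hashlib.sha256(data).digest()

def _keystream(key: bytes, length: int) -> bytes:
    # deterministic keystream (illustration only, not a vetted cipher)
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(data: bytes) -> bytes:
    key = convergent_key(data)
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def convergent_decrypt(ct: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

msg = b"identical file contents"
c1 = convergent_encrypt(msg)
c2 = convergent_encrypt(msg)
assert c1 == c2                        # same plaintext -> same ciphertext
assert convergent_decrypt(c1, convergent_key(msg)) == msg
```

Because two users holding the same file derive the same key and ciphertext independently, the server can deduplicate without ever seeing the plaintext.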
Abstract: Many systems are used to remove replica copies of repeated data; among these approaches, one of the most important data compression techniques is data deduplication. Data deduplication offers many advantages: it reduces the amount of storage space and saves bandwidth when used in cloud storage. To protect the confidentiality of sensitive data while supporting deduplication, data is encrypted with the proposed convergent encryption scheme before outsourcing. This paper makes the first formal attempt to address authorized data deduplication for better protection of data security. It differs from traditional deduplication techniques in that the differential privileges of users are considered in the duplicate check in addition to the data itself. Several new deduplication constructions supporting authorized duplicate checks in a hybrid cloud architecture are presented. Our scheme is secure according to the definitions specified in the proposed security model. As a proof of concept, the scheme was implemented and evaluated in test-bed experiments.
Auditor: The auditor is a TPA with the proficiency and capabilities to assess the reliability of the cloud storage service on behalf of users upon request, so that cloud users do not have to trust the service themselves. The set of permissions and the symmetric key for each privilege are allocated and stored in the private cloud. When a user registers with the system, permissions are assigned according to the identity the user provides at registration time, i.e., on the basis of what the user is entitled to access. A data owner with permission can upload and share a file with users: the data owner first performs identification and sends the file tag to the private server. The private cloud server checks the data owner, computes the file token and sends the token back to the data owner. The data owner then sends this file token, along with an upload request, to the storage provider. If a duplicate file is found, the user must run the PoW protocol with the storage provider to prove ownership of the file; if the proof of ownership is approved, the user is given a pointer to that file. If no duplicate is found, the storage provider returns a signature over the result of the proof for that file. To upload the file, the user sends the privilege set together with the proof to the private cloud server as a request; on receiving the upload request, the private cloud server first verifies the signature.
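The tag/token/PoW flow above can be sketched as follows. All names, the HMAC-based token derivation, and the trivial PoW callback are our own illustrative assumptions; the point is only the control flow: the private cloud derives a per-privilege token from the file tag, and the storage provider deduplicates on the token, demanding a proof of ownership before handing out a pointer to an existing file:

```python
import hashlib
import hmac

# per-privilege symmetric keys held by the private cloud (hypothetical values)
PRIVILEGE_KEYS = {"finance": b"k-finance", "hr": b"k-hr"}

def file_tag(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def file_token(tag: str, privilege: str) -> str:
    # private cloud derives a token bound to both the file and the privilege
    return hmac.new(PRIVILEGE_KEYS[privilege], tag.encode(), hashlib.sha256).hexdigest()

class StorageProvider:
    def __init__(self):
        self.tokens = {}   # token -> pointer

    def upload(self, token: str, data: bytes, prove_ownership) -> str:
        if token in self.tokens:
            # duplicate found: the user must pass PoW to get the pointer
            if not prove_ownership(data):
                raise PermissionError("PoW failed")
            return self.tokens[token]
        pointer = "ptr-%d" % len(self.tokens)   # first copy: store it
        self.tokens[token] = pointer
        return pointer

sp = StorageProvider()
data = b"quarterly report"
tok = file_token(file_tag(data), "finance")
p1 = sp.upload(tok, data, lambda d: True)
p2 = sp.upload(tok, data, lambda d: True)   # dedup path, PoW then pointer
assert p1 == p2
```

Binding the token to the privilege means two users with different privileges produce different tokens for the same file, which is exactly the "authorized duplicate check" behaviour described above.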
Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file; hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks was recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyse their security. We implemented one variant of the scheme; our performance measurements indicate that it incurs only a small overhead compared to naive client-side deduplication.
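A minimal sketch of the Merkle-tree idea behind such PoWs (our own simplified construction, not the paper's exact scheme, which adds encodings and per-challenge randomness): the server keeps only the tree root; to prove possession, a client challenged on a leaf must return that block together with its authentication path, and an attacker who knows only a short hash cannot answer:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(blocks):
    """Return all tree levels: levels[0] are leaf hashes, levels[-1][0] the root."""
    levels = [[H(b) for b in blocks]]
    while len(levels[-1]) > 1:
        cur = list(levels[-1])
        if len(cur) % 2:
            cur.append(cur[-1])          # pad odd levels
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes from the challenged leaf up to the root."""
    path = []
    for level in levels[:-1]:
        nodes = list(level)
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        sib = index ^ 1
        path.append((nodes[sib], sib < index))   # (hash, sibling-is-left)
        index //= 2
    return path

def verify(root, block, path):
    """Server-side check: recompute the root from the claimed block."""
    h = H(block)
    for sibling, is_left in path:
        h = H(sibling + h) if is_left else H(h + sibling)
    return h == root

blocks = [b"b0", b"b1", b"b2", b"b3"]
levels = build_tree(blocks)
root = levels[-1][0]                     # the server stores only this
path = auth_path(levels, 2)              # server challenges leaf 2
assert verify(root, blocks[2], path)     # honest client passes
assert not verify(root, b"forged", path) # guessing the block fails
```

Challenging a few random leaves per request makes it overwhelmingly likely that a client who lacks most of the file fails at least one challenge.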
Abstract: As cloud computing has developed over the last decade, outsourcing data to cloud storage services has become an attractive pattern: it amounts to remote access that relieves users of heavy data maintenance and management. Content is shared by users with different privileges, which arguably raises security issues around protecting authorized data while supporting data deduplication. Data deduplication is one of the principal data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. As more corporate and private users outsource their data to cloud storage providers, data breach incidents motivate end-to-end encryption. A convergent encryption technique is proposed to encrypt the data before outsourcing, together with a protocol that decrypts the ciphertext to deliver data from the private to the public cloud. The Advanced Encryption Standard (AES) enhances token generation for each user, and a hybrid cloud approach is introduced.
Storer et al. have developed two models for secure deduplicated storage, one authenticated and one anonymous. These two models demonstrate that security can be combined with deduplication in a way that provides a broad range of security characteristics. In the models they presented, security is provided through the use of convergent encryption, and in both the authenticated and anonymous models a map is created for each file that describes how to reconstruct the file from blocks. To prevent information leakage, several solutions have been proposed; however, these solutions rest on the strong assumption that all individual files are independent of each other. Shin et al. proposed a storage-GW-based secure client-side deduplication protocol. A storage GW is a network appliance that provides access to the remote cloud server and simplifies interactions with cloud storage services, and is used in various cloud service delivery models such as public, private, and hybrid cloud computing. By utilizing the storage GW as a key component in the system design, the proposed solution achieves greater network efficiency and architectural flexibility while also reducing the risk of information leakage.
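The per-file reconstruction map used in these models can be sketched with a content-addressed block store (names and block size are our own illustrative assumptions): each unique block is stored once under its hash, and a file is just an ordered list of block hashes, so files sharing blocks share storage:

```python
import hashlib

class BlockStore:
    """Toy content-addressed block store with per-file reconstruction maps."""

    def __init__(self):
        self.blocks = {}   # block hash -> block bytes (stored once)
        self.maps = {}     # file name -> ordered list of block hashes

    def put(self, name: str, data: bytes, block_size: int = 8):
        hashes = []
        for i in range(0, len(data), block_size):
            blk = data[i:i + block_size]
            h = hashlib.sha256(blk).hexdigest()
            self.blocks.setdefault(h, blk)   # dedup: store each block once
            hashes.append(h)
        self.maps[name] = hashes

    def get(self, name: str) -> bytes:
        # reconstruct the file from its map
        return b"".join(self.blocks[h] for h in self.maps[name])

bs = BlockStore()
bs.put("a.txt", b"AAAAAAAABBBBBBBB")
bs.put("b.txt", b"AAAAAAAACCCCCCCC")   # shares its first block with a.txt
assert bs.get("a.txt") == b"AAAAAAAABBBBBBBB"
assert len(bs.blocks) == 3             # 4 logical blocks, only 3 stored
```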
The authors Prakash Gapat, Snehal Khillare, Akshay Khiste, and Rohini Pise have shown the difference between a traditional encryption algorithm and the convergent encryption algorithm. Convergent encryption is used to preserve sensitive data before it is stored in cloud storage, and to make the system more secure the different privileges of users are again considered while checking for duplicate content. The problem with this approach is that even if only some contents of two files differ, they are stored as two entirely different files, which wastes cloud storage space. The solution is to apply block-level deduplication: the file to be stored in cloud storage is divided into a number of blocks based on content, and deduplication is performed on these blocks.
on authorization to secure data. In 2016, Shuai Wang et al. proposed the RRMFS file system to support data deduplication. In the same year, Zheng Yan et al. presented a scheme based on ownership challenge and re-encryption to deduplicate encrypted data stored in the cloud, and Naresh Kumar et al. performed a comparative analysis of various deduplication techniques using the destor tool; data deduplication uses several chunking algorithms, with both fixed-length and variable-length chunking. Also in 2016, Jun Ren et al. proposed a method based on differential privacy for secure data deduplication; Saurabh Singh et al. provided a cloud security survey discussing security issues and challenges; and Feilong Tang et al. introduced a Load Balanced Flow Scheduling approach for dynamic load balancing and maximizing network throughput. In 2017, Danoing Li et al. proposed a method called CSPD using a modified DCT-based Perceptual Image Hash (D-phash) to improve the accuracy of the duplicate check. In the same year, Hui Cui et al. implemented an ABE encryption system for cloud storage based on attributes; Rayan Dasoriya et al. presented a dynamic load balancing algorithm to distribute load across multiple connected network links; and Shunrong Jiang et al. proposed a data confidentiality and ownership management system for data deduplication based on the Proof of Ownership (PoW) technique. Also in 2017, Himshai Kambo et al. implemented a secure deduplication mechanism based on CDC and the MD5 algorithm: CDC breaks the data streams at content-defined boundaries, and MD5 creates hash values for the segments or chunks produced by CDC, improving network bandwidth usage.
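The CDC-plus-MD5 combination mentioned last can be sketched as follows. The boundary condition, window size, and min/max chunk sizes are our own illustrative assumptions (real systems typically use a Rabin rolling fingerprint); the essential property is that cut points depend on content, not on byte offsets, and each chunk's MD5 digest serves as its deduplication key:

```python
import hashlib

def cdc_chunks(data: bytes, window=4, divisor=16, min_size=8, max_size=64):
    """Content-defined chunking: cut where a fingerprint of the last
    `window` bytes hits a boundary condition (fingerprint % divisor == 0),
    with min/max chunk sizes as guard rails."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        win = data[max(start, i - window + 1):i + 1]
        fp = int.from_bytes(hashlib.md5(win).digest()[:4], "big")
        if (size >= min_size and fp % divisor == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def chunk_digests(data: bytes):
    # MD5 digest per chunk serves as its deduplication key
    return [hashlib.md5(c).hexdigest() for c in cdc_chunks(data)]

stream = b"the quick brown fox jumps over the lazy dog " * 8
chunks = cdc_chunks(stream)
assert b"".join(chunks) == stream      # chunking is lossless
assert cdc_chunks(stream) == chunks    # and deterministic
assert all(len(c) <= 64 for c in chunks)
```

Because boundaries follow content, an insertion near the start of a stream shifts byte offsets but lets later chunks resynchronize, which is why CDC deduplicates better than fixed-length chunking on edited data.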
Fine-grained access control systems facilitate granting differential access rights to a set of users and allow flexibility in specifying the access rights of individual users. Several techniques are known for implementing fine-grained access control. Common to the existing techniques (and the references therein) is the fact that they employ a trusted server that stores the data in the clear. Access control relies on software checks to ensure that a user can access a piece of data only if he is authorized to do so. This situation is not particularly attractive from a security viewpoint: in the event of a server compromise, for instance due to a software vulnerability exploit, the potential for data theft is immense. Furthermore, there is always a risk of "insider attacks" in which someone with access to the server steals and leaks the data, for example for financial gain. Some techniques create user hierarchies and require the users to share a common secret key if they are in a common set in the hierarchy. The data is then classified according to the hierarchy and encrypted under the public key of the set it is intended for. Clearly, such methods have several limitations. If a third party must access the data for a set, a user of that set either needs to act as an intermediary and decrypt all relevant entries for the party, or must hand the party its private decryption key, thereby letting it have access to everything intended for that set.
On the other hand, to protect data in public cloud servers from unauthorized entities, the client has to ensure that only authorized users are able to obtain the decryption keys. To this end, the data owner encrypts the data-deciphering key using the public key of the recipient user. This key is then embedded by the data owner in the user metadata, ensuring data confidentiality against malicious users as well as flexible access control policies. To illustrate our solution for improving data security and efficiency, we first present the prerequisites and assumptions, then introduce three use cases for storing, retrieving and sharing data among a group of users.
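The key-wrapping step described above (encrypting the data key under the recipient's public key) can be sketched with toy textbook RSA. The tiny parameters (p=61, q=53) and the absence of padding make this an illustration only; real systems use 2048-bit keys with OAEP, and the data key must be smaller than the modulus:

```python
# Toy RSA parameters: n = 61 * 53 = 3233, e = 17, d = 2753 (17*2753 ≡ 1 mod 780).
N, E, D = 3233, 17, 2753

def wrap_key(sym_key: int, pub=(N, E)) -> int:
    # owner encrypts the symmetric data key under the recipient's public key
    n, e = pub
    return pow(sym_key, e, n)

def unwrap_key(wrapped: int, priv=(N, D)) -> int:
    # recipient recovers the data key with the private key
    n, d = priv
    return pow(wrapped, d, n)

data_key = 1234                        # stand-in for a symmetric content key (< N)
wrapped = wrap_key(data_key)
assert wrapped != data_key             # ciphertext differs from the key
assert unwrap_key(wrapped) == data_key # only the private-key holder recovers it
```

The wrapped key is what the owner embeds in the recipient's metadata; the bulk data itself stays under the fast symmetric key.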
To store deduplicated data securely in the cloud, we use encryption and decryption. Encryption converts a plaintext into a ciphertext; decryption converts the ciphertext back into plaintext. In symmetric-key encryption the same key is used for both operations, so the key must be sent securely over the network. To stop unauthorized access, a secure proof-of-possession protocol is additionally required, providing proof that the user indeed owns the file when a duplicate is found. Once the proof succeeds, subsequent users holding the same file are given a pointer by the server without having to upload the file again.
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. Cloud computing now enjoys rapid adoption and wide scope. Cloud storage services provide a huge virtual infrastructure environment, abstracting the underlying platform and other technical details from the user, and cloud users pay according to the resources allocated to them. Distributed computing offers massive scope for data sharing in the current period, providing a virtual environment that conceals the platform and operational details.
In this paper, we first proposed a new RBE scheme that achieves efficient user revocation. We then presented an RBAC-based cloud storage architecture that allows an organisation to store data securely in a public cloud while maintaining the sensitive information related to the organisation's structure in a private cloud. We then developed a secure cloud storage system architecture and showed that the system has several superior characteristics, such as constant-size ciphertext and decryption key. From our experiments, we observe that both encryption and decryption computations are efficient on the client side, and that decryption time at the cloud can be reduced by using multiple processors, which is common in a cloud environment. We believe that the proposed system has the potential to be useful in commercial situations, as it captures practical access policies based on roles in a flexible manner and provides secure data storage in the cloud enforcing these access policies.
and deduplication of data. SecCloud audits the integrity of data stored in racks of storage devices at datacenters. It grants clients the privilege of accessing their files through a Proof of Ownership protocol, so clients feel they retain control over files stored at unknown datacenters across the globe. The SecCloud system reduces the computational overhead for both the user and the auditor. The overall computation increases somewhat, especially while uploading a file, because SecCloud generates data tags before uploading; this, in turn, enables efficient auditing phases. SecCloud+ applies the techniques of integrity auditing and deduplication to encrypted data for additional security. The implementation of OTP generation helps protect against security threats.
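The pre-upload tag generation can be sketched as follows. This is a simplified HMAC-based stand-in (SecCloud's actual tags use homomorphic authenticators that allow aggregated challenges); binding each tag to its block index prevents a dishonest datacenter from answering an audit with a block moved from elsewhere:

```python
import hashlib
import hmac
import secrets

BLOCK = 16

def gen_tags(data: bytes, key: bytes):
    """Per-block tags computed by the client before upload."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).hexdigest()
            for i, blk in enumerate(blocks)]
    return blocks, tags

def audit(blocks, tags, key, index: int) -> bool:
    # auditor challenges one block and checks it against its stored tag
    expected = hmac.new(key, index.to_bytes(8, "big") + blocks[index],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tags[index])

key = secrets.token_bytes(32)
blocks, tags = gen_tags(b"data kept at an untrusted datacenter", key)
assert audit(blocks, tags, key, 0)     # intact block passes the audit
blocks[1] = b"corrupted block!"
assert not audit(blocks, tags, key, 1) # corruption is detected
```

Generating the tags up front is the one-time upload cost mentioned above; afterwards each audit touches only the challenged blocks.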
The goal of this project is to save storage space for cloud storage services while providing secure deduplication; several prior systems have used the same deduplication concept, but this project follows somewhat different modules. If two users upload the same file, the cloud server discerns the equal ciphertexts, stores only one copy, and blocks the other uploader from uploading the duplicate file. Some authentication schemes are available in this process for security purposes; for security establishment, a new security algorithm called "Advanced Cryptographic Standards (ACS)" is implemented. Through this process, data owners and users can rely on secure deduplication policies. An owner who wants to outsource data to the cloud can share it with users possessing certain credentials. The proposed system has several advantages: (a) it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys, and (b) it achieves the standard notion of semantic security for data confidentiality, while existing systems achieve only a weaker security notion.
Attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources his or her encrypted data to a cloud service provider and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is vital for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is in charge of duplicate detection and a public cloud manages the storage. Compared with prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys.