Nowadays the number of users of cloud storage has grown, so the volume of stored data is increasing at an exponential rate. The data must be secured and the storage used efficiently, yet much duplicate data is present because two or more users may upload the same content. To use cloud storage efficiently, redundant data must be reduced, which conserves cloud vendors' resources such as storage space and disk I/O operations. Data deduplication is the process of removing redundant data and storing only one instance of duplicated content. The objective of the proposed system is efficient comparison of the hash values of different data blocks together with data security. This paper presents a method for data deduplication using SHA (Secure Hash Algorithm) and AES. SHA is used because it is more secure than other hashing algorithms. The data is encrypted with AES on the owner's machine itself, and the redundant data is eliminated using SHA.
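The comparison step described above can be sketched in Python (a minimal illustration with hypothetical names, not the paper's implementation): the digest of each uploaded block serves as a key, so a block whose digest is already present is stored only once.

```python
import hashlib

def dedup_store(blocks):
    """Store each unique block once, keyed by its SHA-256 digest."""
    store = {}
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:     # new content: keep one instance
            store[digest] = block   # duplicate uploads are dropped
    return store

# Three uploads, two of them identical: only two instances are kept.
blocks = [b"report.pdf contents", b"holiday photo", b"report.pdf contents"]
store = dedup_store(blocks)
```

In a real system only the digest comparison happens server-side; here the whole store is in memory purely to show the idea.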

Secure Hash Algorithm-2 (SHA-2) is a set of secure hash functions standardized by NIST as part of the Secure Hash Standard in FIPS 180-4 [6]. Although there is a newer version of the standard called SHA-3 [7], NIST does not currently intend to remove SHA-2 from the revised Secure Hash Standard, as no significant attack on SHA-2 has been demonstrated. Rather, SHA-3 can be used in information security applications that need to improve the robustness of NIST's overall hash algorithm toolkit. There are six hash functions in the SHA-2 family: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.

When we talk about networks, network-based applications, and web-based applications or web services such as SOAP (Simple Object Access Protocol), a user is authenticated to learn his or her identity. The identity information may be used to decide whether a person should have access to the web services, and the identity may also be used to track the user's activities and analyse user behaviour for many purposes. In this paper we implement SHA (Secure Hash Algorithm), which is well suited to this job of secure user authentication.

Abstract— Hash functions are the most widespread of all cryptographic primitives and are currently used in numerous cryptographic schemes and security protocols. A well-known family of secure hash algorithms is published by the National Institute of Standards and Technology (NIST). SHA stands for "Secure Hash Algorithm". The four SHA algorithms are structured differently and are named SHA-0, SHA-1, SHA-2, and SHA-3. SHA-1 is very similar to SHA-0 but corrects an error in the original SHA specification that led to significant weaknesses; the SHA-0 algorithm was not adopted by many applications. SHA-2, on the other hand, differs significantly from the SHA-1 hash function. SHA-1 is the most widely used of the existing SHA hash functions and is employed in several widely used applications and protocols.

A hash function takes a variable-sized input message and produces a fixed-sized output. The output is usually referred to as the hash code, the hash value, or the message digest (Kak, 2014). Hash functions play a significant role in today's cryptographic applications. SHA (Secure Hash Algorithm) is a well-known message digest standard used in computer cryptography; it can compress a long message into a short message digest (Iyer & Mandal, 2013). In this paper, SHA-1 is implemented using LabVIEW.
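The fixed-size property is easy to check with Python's standard `hashlib` (an illustration only; the paper's implementation is in LabVIEW):

```python
import hashlib

# SHA-1 always produces a 160-bit (20-byte) digest, whatever the input size.
short_digest = hashlib.sha1(b"a").hexdigest()
long_digest = hashlib.sha1(b"a" * 1_000_000).hexdigest()

print(len(short_digest), len(long_digest))  # both are 40 hex characters
```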

SHA, the Secure Hash Algorithm, is a hash function used in cryptography; SHA-1 produces a 160-bit hash value, typically written as a hexadecimal number. There are four varieties of SHA algorithms with different structures, namely SHA-0, SHA-1, SHA-2, and SHA-3. The SHA-1 algorithm is similar to SHA-0 except that it corrects an error in SHA-0; the error corrected was a major weakness of SHA-0, and because of that weakness the algorithm is not used by many applications. SHA-2 is also very popular and differs from SHA-1. Among all the SHA algorithms, SHA-1 is the most popular hash function and is widely used in many applications.

Hash functions were introduced in cryptology as a tool to protect the integrity of information. Secure Hash Algorithm-1 (SHA-1) and Message Digest-5 (MD5) are among the most commonly used message digest algorithms. Researchers have found collision attacks on the SHA-1 and MD5 hash functions, so the natural response to this threat was to assess the weak points of the protocols that depend on collision resistance for their security. To increase security, this paper introduces a modified SHA-192 with a message digest of length 192 bits and a larger bit difference; to generate the larger bit difference, the best properties of MD5 and SHA-1 are combined, so the new solution is no longer vulnerable to the known collision attacks. SHA-192 targets security applications and protocols including Transport Layer Security (TLS), Secure Sockets Layer (SSL), Internet Protocol Security (IPSec), and digital signatures. The technique is designed in Verilog HDL with the Xilinx ISE Design Suite 12.4 tool, and the designs are implemented on a Xilinx Spartan-3E XC3S500EFG320 FPGA board.

Brintha Rajakumari S., Mohamed Badruddin M., and Qasim Uddin introduce secure login. Data distribution can be manipulated for better data mining, yielding better conclusions and defences [4]. Data integration means that no sensitive data can be disclosed during data mining. The Secure Hash Algorithm (SHA) provides a more secure and private data mining model and addresses the problem of relinquishing private data, in which two parties hold different datasets composed of the same set of users. For data storage and encryption, once data is stored in the web server, the cloud server partitions it into many portions and stores each portion on a separate server. Data and source code are stored in the data server, and an admin is the person who integrates the data in the web server.

This proposal uses a k-ary tree as an Authenticated Data Structure (ADS) for the management of certificate revocation in VANETs. With this ADS, queries on the validity of certificates become more efficient because OBUs send their queries to RSUs, which answer them on behalf of the CA. In this way the CA is no longer a bottleneck, and OBUs do not have to download the entire CRL. In particular, the perfect k-ary trees used are based on a duplex construction of the Secure Hash Algorithm SHA-3, recently chosen as a standard, because the combination of both structures improves the efficiency of updating and querying revoked certificates.
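As a rough sketch of the idea (a plain k-ary Merkle-style tree over SHA3-256 via Python's `hashlib`, not the duplex construction the proposal actually uses; all names and the revocation list are illustrative), the CA can publish a single root that commits to the whole set of revoked certificates:

```python
import hashlib

def hash_tree_root(leaves, k=3):
    """Root of a k-ary hash tree over the given leaf values (a sketch)."""
    level = [hashlib.sha3_256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        # Hash each group of up to k children into one parent node.
        level = [hashlib.sha3_256(b"".join(level[i:i + k])).digest()
                 for i in range(0, len(level), k)]
    return level[0]

revoked = [b"cert-17", b"cert-42", b"cert-99", b"cert-123"]
root = hash_tree_root(revoked)
```

Any change to the revocation set changes the root, so an RSU answering queries cannot misreport a certificate's status without being detected against the CA's published root.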

SHA-1 (Secure Hash Algorithm) is an iterative, one-way hash function that can process a message to produce a condensed representation called a message digest. The algorithm enables the determination of a message's integrity: any change to the message will, with very high probability, result in a different message digest. This property is useful in the generation and verification of digital signatures and message authentication codes, and in the generation of random numbers (bits). SHA-1 can be described in two stages: pre-processing and hash computation.
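The integrity property is easy to demonstrate with `hashlib` (an illustration, not part of the cited work): changing a single character of the message yields an entirely different digest.

```python
import hashlib

d1 = hashlib.sha1(b"transfer $100 to Alice").hexdigest()
d2 = hashlib.sha1(b"transfer $900 to Alice").hexdigest()

# A one-character change produces an unrelated-looking digest, so any
# tampering with the message is detected when the digest is re-checked.
print(d1)
print(d2)
```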

In the table, the password is the same while the salts differ; the string given for hashing is the combination password+salt, which is fed to any desired hashing algorithm (here MD5 and SHA-1) to generate the salted hash. Another defence against the various attacks faced by authentication schemes is using PBKDF2 with the HmacSHA1 algorithm. Password salting and secure hashing can protect a password, but because hardware technology is also growing rapidly, any password hash can eventually be cracked by brute-force, rainbow-table, or dictionary attacks. The common remedy is to make all of these attack types slower, which can be done with CPU-intensive algorithms such as PBKDF2, bcrypt, or scrypt. These algorithms take the security factor as an iteration count: if hardware capability increases next year, the iteration argument can be increased as a counter-measure, and this iteration count is what makes the algorithm slow [10].
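The PBKDF2-with-HMAC-SHA1 scheme mentioned above is available in Python's standard library; a minimal sketch (the password, salt size, and iteration count are illustrative, not taken from the paper):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # unique per user, stored alongside the hash

# 100,000 iterations of HMAC-SHA1; raise the count as hardware improves.
stored = hashlib.pbkdf2_hmac("sha1", password, salt, 100_000)

# Verification: re-derive with the same salt and iteration count.
candidate = hashlib.pbkdf2_hmac("sha1", password, salt, 100_000)
assert candidate == stored
```

The iteration count is exactly the tunable "security factor" the paragraph describes: each login costs one derivation, but a brute-force attacker pays it for every guess.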

The AES algorithm is implemented in VHDL to realize the confidentiality property supporting the security of the system. The original message, processed first by the SHA-1 algorithm, represents the plaintext. The hash code produced by SHA-1 serves as the key (or password) of the AES algorithm, which generates another code that is very difficult to break. Both SHA-1 and AES are implemented in VHDL: first the original message is processed by the SHA-1 algorithm, then the code generated by SHA-1 is processed by the AES algorithm to give a very secure code that cannot easily be broken.
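The key-derivation step of this design can be sketched in Python (the paper's implementation is in VHDL; truncating the 20-byte SHA-1 digest to a 16-byte AES-128 key is one possible convention assumed here, and the AES core itself is omitted):

```python
import hashlib

message = b"original message"

# SHA-1 yields a 20-byte digest; an AES-128 core expects a 16-byte key,
# so the digest is truncated here (an assumed convention, not from the paper).
digest = hashlib.sha1(message).digest()
aes_key = digest[:16]

# aes_key would then drive the AES encryption stage of the hardware design.
```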

Fridrich and Goljan [8] found that DCT coefficients can characterize image content and, based on this observation, proposed a robust hashing technique for use in digital watermarking. Venkatesan et al. [4] used statistics of wavelet coefficients to construct an image hash; this method is robust to JPEG compression, median filtering, and rotation within 2°, but fragile under gamma correction and contrast alteration. Lin and Chang [9] designed an image authentication system with robust hashing based on invariant relations among DCT coefficients at the same position in separate blocks. The Radon Transform (RT) [15], [12], [16], [17], [18] has received considerable attention because of its behaviour under geometric transforms. For instance, Lefebvre et al. [15] pioneered the use of RT to build robust hashes. Seo et al. [16] exploited the auto-correlation of each projection in the RT domain to devise image hashing. De Roover et al. [17] proposed a scheme known as the RASH method. Ou and Rhee [12] applied RT to the input image, randomly selected 40 projections on which to perform a 1-D DCT, and took the first coefficient of each projection to assemble the hash. Wu et al. [18] combined RT with DWT and DFT to build image hashing. The DFT (Discrete Fourier Transform) [19], feature points [20], [21], and matrix factorization [7], [11], [22] have also been used in image hashing. Swaminathan et al. [19] used DFT coefficients to generate image hashes. Monga and Evans [20] exploited the end-stopped wavelet transform to detect visually significant feature points, and to obtain a short hash Monga et al. [21] proposed a polynomial-time heuristic clustering algorithm for feature point compression. Kozat et al. [11] viewed images and attacks as sequences of linear operators and proposed computing hashes using SVDs (Singular Value Decompositions). Monga and Mihcak [22] were the first to apply non-negative matrix factorization (NMF) to image hashing, and obtained a high-performance algorithm.

To process arbitrarily long input data, hash functions are generally designed by reusing small, fixed-length input functions under some composition method. The composition method is an arbitrary-length domain extender over underlying building blocks with a fixed domain size; such building blocks are known as compression functions, and a compression function can be either keyed or keyless. The construction of a hash function thus consists of two components. The first is a compression function that maps a fixed-length input to a fixed-length output. The second is a domain extender that uses the compression function to produce a function with arbitrary-length input and fixed-length output. The design of the compression function is the key design component of a hash function, and the various hash function design philosophies approach it from different angles. Although most existing hash functions can be described as based on a block cipher, these block-cipher-based hash functions fall into two categories. The first category consists of hash functions built on an existing block cipher designed for encryption/decryption, such as DES or AES. The second category consists of hash functions that use block ciphers designed specifically for use in hash functions; such hash functions are referred to as dedicated hash functions. A point about these block ciphers, designed exclusively for use in hash functions, is that they are not necessarily secure as ciphers and hence may be unsuitable for encryption/decryption purposes. Another approach to constructing hash functions relies on the difficulty of solving well-known computational problems.
It may be pointed out that people have also used a stream cipher such as RC4 in designing a hash function, instead of the traditional approach of using block ciphers [7]. Compared to block-cipher-based hashes, stream-cipher-based hashes have a smaller block size and a larger number of rounds.
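The two-component construction (compression function plus domain extender) can be sketched as a toy Merkle-Damgard scheme in Python; SHA-256 stands in for the fixed-length compression function, and the block size and padding rule are illustrative only:

```python
import hashlib

BLOCK = 16  # bytes consumed per compression-function call (toy size)

def compress(state, block):
    """Toy keyless compression function: fixed-length in, fixed-length out."""
    return hashlib.sha256(state + block).digest()

def md_hash(message, iv=b"\x00" * 32):
    """Domain extender: pad, split into blocks, chain compress() calls."""
    # Pad with zeros to a block boundary, then append the message length
    # as a final block (Merkle-Damgard strengthening).
    padded = message + b"\x00" * (-len(message) % BLOCK)
    padded += len(message).to_bytes(BLOCK, "big")
    state = iv
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

tag = md_hash(b"hello world")
```

The domain extender never sees more than `BLOCK` bytes at a time, which is exactly how a fixed-domain compression function is lifted to arbitrary-length input.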

Comparisons of the various SHA-256-based pipelined designs are somewhat different. The plot lines of Figure 12(b) and Figure 13(b) clearly indicate that a better throughput/area ratio is achieved when more pipeline stages are used. Because the same non-linear function is used in all 64 iterations of the algorithm, the plot lines of Figure 12(b) are smoother than those of the SHA-1-based graph and do not exhibit extremes; this conclusion is independent of the FPGA family adopted. In contrast to the SHA-1-based hash function design, in the SHA-256-based design it is the application of four pipeline stages that yields the higher throughput/area ratio. This highlights the fact that the design decision is not straightforward but depends on the particular hash function being implemented and its design architecture. The conclusion thus verifies the correctness of the four-stage pipeline implementations widely adopted in the designs presented in the literature, which in the case of SHA-256 previously had no clear motivation.

The future cryptographic hash standard SHA-2 should be practical and versatile for a broad range of uses while offering ideal security strength. In this work we presented a complete hardware description of SHA-2. A round-rescheduling technique and a special-purpose memory design are also proposed. According to post-synthesis results, a low-power, compact implementation of SHA-2 has been achieved. We believe that a similar approach to compact VLSI implementations of cryptographic protocols is a productive way to reduce the area and power consumption of the designed circuit.

SHA-256 and MD5 were wrapped together as a single block; they are instantiated and wired through a top-level unit. A UVM framework was developed to test this block, which consists of two independent IPs, and the testbench was modified to drive the two different IPs. Interestingly, the two IPs were driven, sampled, and verified in parallel, independently of each other; the SystemVerilog fork construct comes in handy for performing such parallel tasks.

216 Read more

DLIME finds the guilty entity by adding fake records, computing digital signatures, and encrypting records so that no third party can read them. This makes it easy to identify the leaker once data has been leaked to a public cloud, by matching the digital hash on the public server against the hash in the auditor's database, which the owner provided after adding the fake record and computing the hash. Although LIME does not actively prevent data leakage, it allows us to find the guilty entity who leaked the data to the public cloud.
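The matching step can be sketched in Python (agent names and records are hypothetical; LIME's actual protocol also involves signatures and encryption, omitted here):

```python
import hashlib

def record_digest(record):
    """Digest of a record copy, as stored in the auditor's database (sketch)."""
    return hashlib.sha256(record).hexdigest()

# The owner plants a distinct fake record per recipient before distribution
# and hands the resulting digests to the auditor.
auditor_db = {
    "agent_A": record_digest(b"real rows... fake row #A"),
    "agent_B": record_digest(b"real rows... fake row #B"),
}

# A copy surfaces on a public cloud; the matching digest names the leaker.
leaked = b"real rows... fake row #B"
guilty = [agent for agent, d in auditor_db.items()
          if d == record_digest(leaked)]
```

Because each recipient's copy differs only by its planted fake record, exactly one stored digest can match the leaked copy.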

In the era of advanced technology, one encounters applications in daily life where cryptography plays a crucial role. With the rapid growth of internet and web-based facilities, the gap between the traditional marketplace and the global electronic marketplace has narrowed remarkably. Buying, selling, and other business transactions in which the parties interact electronically through a computer-mediated network rather than through traditional physical exchanges (popularly known as e-commerce) are increasingly popular around the world. Along with e-commerce, online banking, bank and credit cards at ATMs, mobile communications, e-voting, and the like have also become everyday necessities. Since information is an organization's, and each individual's, most important asset irrespective of the computing device being used, efforts have been made to provide information security, data integrity, and confidence in data origin. The fundamental cryptographic algorithm that addresses information security notions such as data integrity, authentication, password verification, and pseudorandom generation is known as a hash function.