Additionally, with the rapid development of identity-based cryptography [21,22,23,24], researchers have proposed many identity-based signature schemes in the random oracle model or the standard model [23,25,26,27]. Building on these identity-based signature (IBS) schemes, many variants, such as identity-based ring signature schemes [28,29,30] and identity-based group signature schemes [19,20], have also been proposed. In 2011, Ibraimi et al. proposed an identity-based group signature with membership revocation in the standard model. However, their security model is not complete enough for identity-based group signatures, and some notions are conflated. Moreover, their scheme is not a fully identity-based group signature scheme, since the master key of the system is still constructed on public-key cryptography. In 2014, Emura et al. proposed a γ-hiding revocable group signature scheme in the random oracle model. Because their scheme introduces the notion of attributes, it is rather complex and inefficient.
Several Kerberos-based authentication techniques using public-key cryptography have been proposed. Public-key cryptography can be used to eliminate the single point of failure in the Key Distribution Center (KDC) and achieve better scalability. Public Key Cryptography for Cross-Realm Authentication in Kerberos (PKCROSS) and Public Key Utilizing Tickets for Application Servers (PKTAPP, a.k.a. KX.509/KCA) are two notable techniques. The latter was proposed to improve the former, but their actual computational and communication costs have been poorly understood. This paper first presents a thorough performance evaluation of the two protocols based on analytical modeling and queueing network models. As shown, PKTAPP does not scale better than PKCROSS. This paper then presents a new public-key-cryptography-based group authentication technique. We show that the new technique achieves better scalability than PKCROSS and PKTAPP, and that our performance methodology is effective.
In 2007, Vaudenay proved that public-key cryptography can ensure the highest feasible level of privacy in RFID applications. To date, the major classes of public-key cryptosystems are all based on a mathematical problem that is hard to solve: RSA is based on the Integer Factorization Problem (IFP), Diffie-Hellman and ElGamal are based on the Discrete Logarithm Problem (DLP), and the Elliptic Curve Cryptosystem (ECC) is based on the Elliptic Curve Discrete Logarithm Problem (ECDLP). Among these hard mathematical problems, subexponential algorithms exist for the IFP and the DLP. In the late 1980s, Koblitz  and Miller  independently proposed using the group of points on an elliptic curve defined over a finite field in discrete-logarithm cryptosystems. The advantage of the ECDLP is that no subexponential algorithm  is known that can find discrete logarithms in these groups, provided that the curve and the finite field are suitably chosen. Hence, the ECDLP can be regarded as one of the hardest of the mathematical problems underlying these public-key cryptosystems. Therefore, the key length needed for a similar level of security in ECC is far less than in public-key cryptosystems based on the IFP and the DLP. Consequently, ECC has become one of the most popular public-key cryptosystems and is widely used in constrained environments.
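The asymmetry described above (scalar multiplication is fast, while recovering the scalar requires search) can be illustrated with a minimal sketch. The curve below, y² = x³ + 2x + 2 over F₁₇ with generator G = (5, 1) of order 19, is a standard textbook toy example; real ECC uses curves over fields of roughly 256 bits, and all function names here are my own.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 (a common textbook example);
# G = (5, 1) generates a cyclic group of order 19. Illustration only.
P, A = 17, 2
G = (5, 1)

def inv(x):
    return pow(x, P - 2, P)  # modular inverse via Fermat's little theorem

def add(p, q):
    """Elliptic-curve point addition; None plays the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                               # p + (-p) = infinity
    if p == q:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope (doubling)
    else:
        m = (y2 - y1) * inv((x2 - x1) % P) % P    # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add scalar multiplication: the 'easy' direction."""
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def ecdlp_brute(target):
    """Recovering k from k*G: in well-chosen groups nothing substantially
    better than generic search is known, hence the hardness of the ECDLP."""
    k, acc = 0, None
    while acc != target:
        acc = add(acc, G)
        k += 1
    return k
```

On this 19-element group the brute-force loop finishes instantly; on a 256-bit curve group the same search is infeasible, which is exactly the gap the paragraph above describes.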
Provable security is an important goal in the design of public-key cryptosystems. For most security properties, practical security against adversaries with bounded resources must be considered: an attack scenario describes how adversaries may interact with the cryptosystem in order to attack it; the system can then be called secure if adversaries with practicable computational effort can achieve only negligible chances of success. Since there are no computational problems that are provably hard in a suitable sense, there is little hope for absolute security proofs in the sense of such practical security. Instead, reduction-based security proofs must be used: the practical security of a complex cryptographic system is related to the security of simpler underlying cryptographic primitives (each with suitable security notions). The basic idea is to show that the complex system can be insecure only if one of the primitives is insecure. Security can be described quantitatively, as "concrete security", as a function of the resources available to adversaries. The DHAES scheme of Abdalla, Bellare, and Rogaway constructs a public-key encryption scheme from a key encapsulation mechanism (KEM), a one-time MAC (message authentication code), and a pseudorandom bit-string generator. A reduction-based security proof shows that DHAES achieves security against (adaptive) chosen-ciphertext attacks if the underlying primitives are secure. (Such chosen-ciphertext attacks are the most general attack scenario for public-key encryption.)
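The KEM-plus-DEM composition described above can be sketched as follows. This is my own minimal illustration, not the actual DHAES/DHIES instantiation: the KEM is toy Diffie-Hellman over Z_p* with the Mersenne prime p = 2⁶¹ − 1 and generator g = 3 (far too small for real use), the pseudorandom generator is counter-mode SHA-256, and the one-time MAC role is played by HMAC-SHA-256.

```python
import hashlib
import hmac
import secrets

# Toy DH-based KEM parameters (illustration only; real schemes use
# cryptographically sized groups).
p, g = 2**61 - 1, 3

def prg(seed: bytes, n: int) -> bytes:
    """Counter-mode SHA-256 as a stand-in pseudorandom bit-string generator."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def kem_keygen():
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)            # (private key, public key)

def encrypt(pub: int, msg: bytes):
    # KEM step: fresh DH share, shared secret hashed into key material.
    r = secrets.randbelow(p - 2) + 1
    c0 = pow(g, r, p)
    keys = hashlib.sha512(str(pow(pub, r, p)).encode()).digest()
    enc_key, mac_key = keys[:32], keys[32:]
    # DEM step: stream-encrypt, then MAC the ciphertext (one-time MAC role).
    ct = bytes(a ^ b for a, b in zip(msg, prg(enc_key, len(msg))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return c0, ct, tag

def decrypt(priv: int, c0: int, ct: bytes, tag: bytes) -> bytes:
    keys = hashlib.sha512(str(pow(c0, priv, p)).encode()).digest()
    enc_key, mac_key = keys[:32], keys[32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC verification failed")
    return bytes(a ^ b for a, b in zip(ct, prg(enc_key, len(ct))))
```

The MAC check before decryption is what the reduction argument leans on for chosen-ciphertext security: a tampered ciphertext is rejected rather than decrypted.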
application protocols such as HTTP, Telnet, Network News Transfer Protocol (NNTP), or File Transfer Protocol (FTP), by establishing an RSA key exchange during the SSL handshake between client and server to authenticate each end of the connection . Moreover, RSA is used for employee verification in many organizations. Chip-based smart cards use cryptographic algorithms to ensure security by checking the PIN code . Some of these cards use RSA in combination with other algorithms. For instance, the IDPrime MD 3811 smart card, a single-chip dual-interface (contact and contactless ISO 14443 interface) smart card developed by Gemalto, is also compatible with the NFC standard already widely used by mobile devices today and is secured with both RSA and elliptic-curve algorithms . Another example of Gemalto's cards is the IDPrime PIV (personal identity verification) card v1.55, which targets improved identification and authentication of US Federal employees and contractors for access to Federal facilities and was published as FIPS PUB 201 . Another product that uses RSA is RSA SecurID, a two-factor authentication technology used in high-security environments to protect network resources; the authenticator can be hardware (a USB token, smart card, or key fob), a software application residing on a smartphone, or a software token managed by the RSA Authentication Manager . The authenticator generates passcodes that reset every 60 seconds, making the previous passcode worthless. When trying to access a protected resource, users enter the passcode together with their username; these are intercepted by the RSA Authentication Agent and presented to the RSA SecurID system on the RSA Authentication Server, which validates the passcode by running the same algorithm used to generate it and checking that the 8-digit output matches the one entered by the user before granting access to the remote server .
With the development of wireless technology, daily social life has an increasing relationship with wireless networks, and the issue of wireless network security has attracted more and more attention. In this thesis, two new protocols for ZigBee security are proposed. For the first time, public-key technology has been used to enhance the security of ZigBee master key establishment. The proposed protocols strengthen ZigBee master key establishment, which in turn secures the establishment of the network key and link key, both derived from the master key. By integrating unbalanced RSA into the key establishment protocols, the new methods can distribute different amounts of computation to the ZigBee devices in a communication according to their computational capacities.
Abstract: Today, one of the biggest concerns about using the Internet for business-critical data is security. This paper concentrates on software security based on public-key cryptographic technology. Public-key systems make it possible for two parties to communicate securely without either having to know or trust the other party. This is possible because a third party that both parties trust, called the Certification Authority, identifies them and certifies that their keys are genuine; this third party guarantees that they are who they claim to be. A public-key infrastructure (PKI) is a set of technologies and security policies that a company can use to issue, revoke, and manage digital certificates within its organizational structure. The paper analyses some of the major deployment aspects of an organizational PKI and the main design issues for a Public Key Infrastructure (PKI) needed to secure network applications.
What is the point of Kerckhoffs' Principle? After all, life must certainly be more difficult for Trudy if she doesn't know how a cipher works. While this may be true, it's also true that the details of cryptosystems seldom remain secret for long. Reverse engineering efforts can easily recover algorithms from software, and algorithms embedded in tamper-resistant hardware are susceptible to similar attacks. And even more to the point, secret crypto-algorithms have a long history of failing to be secure once the algorithm has been exposed to public scrutiny; see  for a timely example. For these reasons, the cryptographic community will not accept an algorithm as secure until it has withstood extensive analyses by many cryptographers over an extended period of time. The bottom line is that any cryptosystem that does not satisfy Kerckhoffs' Principle must be assumed flawed. That is, a cipher is "guilty until proven innocent."
Mobile agent systems provide great flexibility and customizability to distributed applications such as e-business and information retrieval. Security is a crucial concern for such systems, especially when they handle money transactions. Mobile agents moving around the network are not safe, since the remote hosts that accommodate the agents can launch all kinds of attacks and attempt to analyze the agents' decision logic and accumulated data. Hence, mobile agent security is one of the most challenging unsolved problems. This paper analyzes the attacks on mobile agents by malicious hosts and proposes solutions based on public-key authentication techniques and cryptography to address some of these problems. An experimental application is developed, and the security and performance of the proposed solutions are evaluated. A performance model is developed in order to tune the parameters of the execution environment to meet the desired level of performance and security.
something was not retrieved at all, and so forth. The works of [26, 17] achieve this property as well, with the same poly-logarithmic cost per query both for the database-client communication and for the actual database work. We stress that both the constructions of [26, 17] and the later work of [10, 28, 16] apply only to the private-key setting, for clients who own their data and wish to upload it to a third-party database that they do not trust. Public databases: here the database data is public (for example, stock quotes), but the client is unaware of its contents and wishes to retrieve or search for some data item without revealing to the database administrator which item it is. The naive solution is for the client to download the entire database. Private Information Retrieval (PIR) protocols enable a client to retrieve data from a public database with far smaller communication than simply downloading the entire database. PIR was first shown to be possible only in the setting where there are many copies of the same database and none of the copies can communicate with each other . PIR was then shown to be possible for a single database by Kushilevitz and Ostrovsky  (using the homomorphic encryption scheme of ). The communication complexity of their solution (i.e., the number of bits transmitted between the client and the database) is O(n^ε), where n is the size of the database and ε > 0. This was reduced to poly-logarithmic overhead by Cachin, Micali, and Stadler . (The poly-logarithmic constructions of [26, 17] require very large constants, which makes them impractical; however, their basic O(√n) solution was recently shown to be applicable for some practical applications .) As pointed out in , the model of PIR
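The multi-copy setting mentioned above, where PIR was first shown possible, has a classic and very short two-server construction, sketched below under my own naming. Each server receives a uniformly random subset of indices and so learns nothing about which bit the client wants; note that this toy form sends O(n) query bits, so it illustrates only the privacy property, not the sublinear communication of real schemes.

```python
import secrets

def pir_two_server(database: list[int], i: int) -> int:
    """Information-theoretic 2-server PIR over a bit database (toy form).
    Server 1 gets a random subset q1; server 2 gets q1 with index i flipped.
    Each returns the XOR of its selected bits; XORing the two answers
    cancels everything except database[i]. Neither server alone learns i,
    provided the two servers do not communicate."""
    n = len(database)
    q1 = [secrets.randbelow(2) for _ in range(n)]  # client's random query
    q2 = q1.copy()
    q2[i] ^= 1                                     # differs only at index i

    # Each server computes the XOR of the bits its query selects.
    a1, a2 = 0, 0
    for j in range(n):
        if q1[j]:
            a1 ^= database[j]
        if q2[j]:
            a2 ^= database[j]
    return a1 ^ a2                                 # equals database[i]
```

The single-database schemes of Kushilevitz-Ostrovsky and Cachin-Micali-Stadler replace this information-theoretic trick with computational assumptions (homomorphic encryption) to drop the second server.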
In 1984, Shamir  proposed the concept of identity-based cryptography. In this new paradigm, a user's identifying information, such as an email address, IP address, social security number, photo, phone number, or postal address, can be used as the public key for encryption or signature verification instead of a digital certificate. As a result, identity-based cryptography significantly reduces the system complexity and the cost of establishing and managing the public-key authentication framework known as the Public Key Infrastructure (PKI). Although Shamir  easily constructed an identity-based signature (IBS) scheme using the existing RSA  function, he was unable to construct an identity-based encryption (IBE) scheme, which became a long-standing open problem. Only in 2001 was Shamir's open problem independently solved by Boneh and Franklin  and by Cocks . Thanks to their successful realization of identity-based encryption, identity-based cryptography is now a hot area within the research community.
The simplest attack for recovering a DES key is the brute-force attack. A brute-force attack on the DES algorithm is feasible because of its relatively small key length (56 bits) and the ever-increasing computational power of computers. Until the mid-1990s, brute-force attacks were beyond the capabilities of hackers because computers capable of mounting them were extremely expensive. With the tremendous advancement in computing, high-performance computers are now relatively cheap and therefore affordable; in fact, general-purpose PCs today can be used successfully for brute-force attacks. Many attackers use even more powerful techniques, such as Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) technology, which provide faster and cheaper means of key search. Any cipher can be broken by trying every possible key; in a brute-force attack, the time taken to break a cipher grows exponentially with the length of the key. Candidate keys are generated and applied to the ciphertext until the legitimate key is found; this key decrypts the data into its original form. Therefore, key length is a major factor to consider: the longer the encryption key, the stronger the security. For example, for a 32-bit key, the number of steps required to break the cipher is about 2^32, roughly 4 × 10^9. Similarly, a 40-bit key requires about 2^40 steps, which can be carried out in about a week by anyone with a personal computer. A 56-bit key is known to have been broken by professionals and governments using special hardware in a few months' time. Today, 128-bit encryption is considered the safest and most reliable means of encrypting messages.
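The key-length arithmetic above can be made concrete with a toy known-plaintext search. The cipher below is a stand-in (single-byte XOR, far weaker than DES) and every name is my own; the point is only that exhaustive search visits at most 2^key_bits candidates, so each extra key bit doubles the worst-case work.

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    """Stand-in cipher: XOR every byte with the low byte of the key.
    Illustrative only -- vastly weaker than DES."""
    return bytes(b ^ (key & 0xFF) for b in data)

def brute_force(ciphertext: bytes, known_plaintext: bytes, key_bits: int):
    """Exhaustive key search: at most 2**key_bits trials, so the worst-case
    cost doubles with every additional key bit. Returns (key, trials)."""
    trials = 0
    for key in range(2 ** key_bits):
        trials += 1
        if toy_encrypt(ciphertext, key) == known_plaintext:
            return key, trials
    return None, trials
```

The same counting gives the figures in the text: moving from a 32-bit to a 40-bit key multiplies the search space by 2^8 = 256, and a 128-bit key space (2^128 keys) is far beyond any exhaustive search.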
The rapid increase in the Internet's connectivity has led to a proportional increase in the development of Web-based applications. Downloadable content has proved effective in a number of emerging applications, including electronic commerce, software components on demand, and collaborative systems. In all these cases, Internet user agents (such as browsers and tuners) are widely used by clients to execute such downloadable content. With this new technology comes the problem of downloaded content obtaining unauthorized access to the client's resources. In effect, granting a hostile remote principal the requested access to the client's resources may lead to undesirable consequences. Hence it is important for browsers to provide a framework in which the user can fine-tune the system according to his trust relationship with the content authors. Currently available systems either do not allow the downloaded content to access any local resources or give all content the same privileges. In this paper, we present the design and implementation of a model that provides finer-grained resource access control for a user agent. Using our model, the client can selectively grant access to resources based on a trust relationship with the principal who has certified the authenticity of the content.
ABSTRACT
There are a huge number of symmetric-key block cipher algorithms available. All of these algorithms use either complicated keys or complicated procedures to produce ciphertext from plaintext, and their security level depends on either the number of iterations or the length of the keys. In this paper, a symmetric-key block cipher algorithm is proposed to encrypt plaintext into ciphertext, and vice versa, using a frame set. A comparative study has been made with RSA, DES, IDEA, BAM, and other algorithms, using the chi-square value, frequency distribution, and bit ratio to check the security level of the proposed algorithm. Finally, a comparison of the time complexity of encrypting plaintext and decrypting ciphertext has been made with well-known existing algorithms.
Here we propose a hybrid security concept that combines the traditional approach with a new approach (DNA cryptography). The extra advantage of this method is that it provides two levels of security: the first is achieved by encrypting the data symmetrically, and the second by asymmetrically encrypting the key that was used for the symmetric encryption. In this new algorithm we use an improved version of RSA intended to make it much harder to break, by reducing the feasibility of factoring the modulus (the main avenue for attacking RSA). The asymmetric-key cryptography begins from a password/passphrase/key (derived from the DNA sequence and used to encrypt the data), and this mechanism is introduced to enhance security, leading to greater robustness than the traditional one-time-pad DNA, PCR, index-based symmetric encryption and asymmetric (digital signature) methods (Cherian, Raj & Abraham, 2013).
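The two-level structure (symmetric encryption of the data, asymmetric encryption of the symmetric key) can be sketched generically. This is not the paper's DNA-based construction: the symmetric layer below is a counter-mode SHA-256 keystream, the asymmetric layer is textbook RSA with deliberately tiny primes, and all names are my own illustrations.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Counter-mode SHA-256 keystream as a simple symmetric layer."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def sym_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Toy RSA key pair (tiny primes for illustration; real RSA needs >= 2048-bit moduli).
p, q, e = 61, 53, 17
n = p * q                           # 3233
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def hybrid_encrypt(msg: bytes):
    session_key = secrets.token_bytes(16)          # level 1: fresh symmetric key
    ct = sym_crypt(session_key, msg)
    wrapped = [pow(b, e, n) for b in session_key]  # level 2: RSA-wrap the key (byte-wise, toy)
    return wrapped, ct

def hybrid_decrypt(wrapped, ct):
    session_key = bytes(pow(c, d, n) for c in wrapped)
    return sym_crypt(session_key, ct)
```

The design point is the one the paragraph makes: the bulk data never touches the slow asymmetric primitive, and only the short session key does.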
Here we formulate and discuss the algorithm in five sections. The first section deals with the pre-processing of data and includes the first stage of the algorithm, i.e., the traversal stage. In the second section the mathematical foundations of circulant matrices are established, and the fundamental ideas regarding solutions of systems of non-homogeneous equations are explored. In the third section the system of equations is used to define the key agreement between the communicating parties, key generation, and the encryption and decryption processes. In the fourth section the security of public-key cryptography using systems of non-homogeneous linear equations is analysed. In the fifth section an example is given to illustrate the working of the entire algorithm.
Section 1: Pre-Processing of Data
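The circulant-matrix machinery outlined for the second section can be sketched minimally: build a circulant matrix from one row and solve a non-homogeneous system Ax = b. This is my own Gaussian-elimination illustration of the underlying algebra, not the paper's actual key-agreement construction.

```python
from fractions import Fraction

def circulant(row):
    """Circulant matrix: each row is the previous row cyclically shifted right."""
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def solve(A, b):
    """Gaussian elimination over the rationals for the non-homogeneous
    system A x = b (A assumed nonsingular)."""
    n = len(A)
    M = [[Fraction(v) for v in A[i]] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]          # partial pivoting
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]
```

A circulant matrix is determined by a single row, which is what makes such matrices attractive for compact key material.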
As is typically the case in cryptography, we are currently very far from establishing the security of public-key cryptography unconditionally. Rather, to establish security, we rely on certain computational intractability assumptions. Despite four decades of extensive research, we currently only know constructions of public-key encryption from a handful of assumptions, most notably assumptions related to the hardness of factoring, finding discrete logarithms, and computational problems related to lattices (as well as a few more exotic assumptions). One of the central open problems in cryptography is to place public-key encryption on firmer complexity-theoretic grounding, ideally by constructing public-key encryption from the minimal assumption that one-way functions exist. Such a result seems well beyond current techniques and, by the celebrated result of Impagliazzo and Rudich [IR89], requires a non-blackbox approach. Given that, a basic question that we would like to resolve is the following:
In RSA, the execution time primarily depends on the file size: as the file size increases, the execution time increases accordingly. In RSA1, the execution time depends chiefly on the repetition of characters in the test data. If the number of occurrences of a character increases (since repeated characters are represented by the same number), the execution time is reduced, which in turn enhances the performance of the system. Even for an increased file size, the execution time may remain the same because of character duplication.
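The RSA1 behavior described above, where cost tracks the number of *distinct* characters rather than the file size, can be sketched as memoized per-character encryption. The names and parameters below are my own (the classic toy modulus n = 3233, e = 17); note that deterministic per-character encryption leaks patterns and is insecure, so this illustrates only the performance claim.

```python
def rsa1_encrypt(text: str, e: int, n: int):
    """Per-character RSA with a cache: the modular exponentiation runs once
    per *distinct* character, so highly repetitive input costs little extra.
    Returns (ciphertext list, number of pow() calls actually performed)."""
    cache: dict[str, int] = {}
    calls = 0
    out = []
    for ch in text:
        if ch not in cache:
            cache[ch] = pow(ord(ch), e, n)  # expensive step, done once per symbol
            calls += 1
        out.append(cache[ch])
    return out, calls
```

For a file of one million characters drawn from a 100-symbol alphabet, only 100 exponentiations are performed, which is why the execution time can stay flat as the file grows.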
Johann van der Merwe, J. Dawoud and Stephen McDonald: Key management schemes based on the key pre-distribution techniques proposed for sensor networks may be another avenue for solving the key management problem in authority-based MANETs. Key management schemes are designed either for an "open" (self-organized) or a "closed" (authority-based) network and are consequently aimed at different applications. "Open", or fully self-organized, MANETs have some inherent security implications and must be analyzed accordingly. It is therefore not always possible to compare schemes that assume the existence of a trusted authority with those that are fully self-organized. This study confirms that key management mechanisms proposed to guarantee the security of conventional networks are not necessarily suitable for, or adaptable to, MANETs.
Jeroen Hoebeke, Ingrid Moerman, Bart Dhoedt and Piet Demeester: Current devices, their applications, and protocols are solely focused on cellular or wireless local area networks (WLANs), not taking into account the great potential offered by mobile ad hoc networking. This type of network, operating as a stand-alone network or with one or multiple points of attachment to cellular networks or the Internet, paves the way for numerous new and exciting applications. Application scenarios include, but are not limited to, emergency and rescue operations, conference or campus settings, car networks, personal networking, etc. This paper provides insight into the potential applications of ad hoc networks and discusses the technological challenges that protocol designers and network developers face.