Authentication systems motivate much of the research on zero-knowledge proofs, in which a prover wants to prove its identity to a verifier using some secret information (such as a password) but never wants the other party to learn anything about this secret. This is known as a "zero-knowledge proof". Zero-knowledge protocols enable identification, key exchange, and other basic cryptographic operations without revealing any sensitive information during the conversation. This makes ZKP very useful and attractive for resource-constrained devices. A ZKP is an interactive proof system involving two nodes, P and V, where P plays the role of the prover and V that of the verifier. Over a series of communications, the prover convinces the verifier that it holds some secret. In each communication, a challenge, or question, comes from the verifier, and the prover responds. ZKP-based protocols normally require little bandwidth, little computational power, and little memory.
Passwords are the most frequently used authentication method across security systems. Ease of implementation is the main factor behind the adoption of password-based systems, and users are already accustomed to such systems, so the adjustment period can be minimized. On the other hand, the widespread use of insecure networks (for example, the HTTP protocol) remains a threat to users, given that the password is often the only mechanism in use. Network eavesdropping attacks (both active and passive), such as a man-in-the-middle attack combined with session hijacking, thereby become very practical. Firesheep and sslstrip are concrete examples of how cookies and passwords can be stolen over wireless networks and the web platform (the HTTP protocol); in the case of Firesheep, the user does not even need to do anything other than connect to the same public network as the victim. Since the exploits enabled by Firesheep, many individuals and companies have begun to realize the importance of the HTTPS protocol. In 2013, Facebook announced that it had enabled HTTPS by default for all of its users, even though it had introduced this feature
4. Proposed Scheme. The main concern of the proposed scheme is to protect the controller device in the SDN network from being accessed by malicious users. Such users try to take control of the whole SDN network, disable some services, or shut the controller down by sending malicious requests to it. To protect the controller from this kind of malicious user, we prefer to use an identification scheme, so the user has to identify himself to the controller, but not in the traditional way of sending his password on every login attempt. Instead, the user convinces the controller that he knows the password, and is therefore the real user, without revealing the password itself or any partial information about it. We utilize the ZKP identification scheme proposed by Feige, Fiat and Shamir in  to improve the security of the SDN controller against fake users. In this scheme, the user does not need to send his password on every login attempt; rather, he proves that he knows the secret held by the controller without revealing the actual secret or any information related to it. Three significant elements are used in this scheme:
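As a rough illustration of the commit-challenge-response flow of the Feige-Fiat-Shamir identification scheme, the following Python sketch runs a single round. The modulus, the parameter k, and all names are illustrative assumptions only; the numbers are far too small for real security.

```python
import random

# Toy parameters (hypothetical; real deployments use an RSA-size modulus).
p, q = 1000003, 1000033
n = p * q
k = 4  # number of secrets, i.e. challenge bits per round

# Key generation: secrets s_i stay with the prover; v_i = s_i^2 mod n are public.
secrets = [random.randrange(2, n) for _ in range(k)]
public = [pow(s, 2, n) for s in secrets]

def prover_commit():
    """Prover picks random r and sends the commitment x = r^2 mod n."""
    r = random.randrange(2, n)
    return r, pow(r, 2, n)

def verifier_challenge():
    """Verifier replies with k random challenge bits."""
    return [random.randrange(2) for _ in range(k)]

def prover_respond(r, e):
    """Prover sends y = r * prod(s_i for e_i = 1) mod n."""
    y = r
    for s_i, e_i in zip(secrets, e):
        if e_i:
            y = (y * s_i) % n
    return y

def verifier_check(x, e, y):
    """Verifier accepts iff y^2 = x * prod(v_i for e_i = 1) mod n."""
    rhs = x
    for v_i, e_i in zip(public, e):
        if e_i:
            rhs = (rhs * v_i) % n
    return pow(y, 2, n) == rhs

# One round; a cheating prover passes with probability 2^-k, so rounds are repeated.
r, x = prover_commit()
e = verifier_challenge()
y = prover_respond(r, e)
assert verifier_check(x, e, y)
```

Note that the password (the secrets s_i) never crosses the network; only x, e, and y do, matching the login flow described above.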
of the witness vector and the norm of the vector computed by the knowledge extractor: the latter is only guaranteed to be Õ(n) larger than the former in the case of the infinity norm, where n denotes the dimension of the corresponding worst-case lattice problem. As a consequence, cryptographic schemes using these proof systems as building blocks rely on a security assumption stronger than the assumed hardness of finding a witness for the ISIS instance, by an Õ(n) factor. This hints that the existing ZKPoK for the ISIS∞ problem are sub-optimal: is it possible to design an efficient ZKPoK for ISIS∞ whose security provably relies on a weaker assumption than the existing ones? In this work, we answer positively and describe such a ZKPoK, for which there is only a constant gap between the norm of the witness vector and the norm of the vector computed by the extractor. We also briefly describe a scheme with no gap (i.e., constant factor 1), but that is less efficient.
The idea of using distance and special relativity (a theory of motion under which the speed of light acts as a sort of asymptote for displacement) to prevent communication between participants in multi-prover proof systems can be traced back to Kilian. Probably the original authors of  (Ben-Or, Goldwasser, Kilian and Wigderson) had that in mind already, but it is not explicitly written anywhere. Kent was the first author to venture into sustainable relativistic commitments  and introduced the idea of arbitrarily prolonging their life span by playing a ping-pong protocol between the provers (near the speed of light). This idea was made considerably more practical by Lunghi et al. in , who made commitment sustainability much more efficient. This culminated in an actual implementation by Verbanis et al. in , where commitments were sustained for more than a day!
Furthermore, knowledge assumptions present challenges with regard to auxiliary inputs, as was also pointed out in the early work of Hada and Tanaka [HT98]. Intuitively, the problem arises if we consider what happens when an adversary is given an obfuscated program as auxiliary input. The adversary simply compiles and executes the obfuscated program to obtain the commitment message. Then a knowledge assumption, which is expected to hold for all auxiliary inputs, would imply efficient extraction of the committed value. This would in turn imply efficient deobfuscation, which seems problematic. It was recently suggested by Bitansky et al. [BCCT12] that it is more reasonable to assume that knowledge assumptions hold only with respect to "benign" auxiliary inputs. One of our contributions is to put forward a framework for formulating knowledge assumptions with respect to Admissible Adversaries. This allows us to specify a set of auxiliary inputs with respect to which the knowledge-of-exponent assumption would hold. For applications in cryptography we want this class to be as large as possible. Despite these drawbacks, the study of knowledge assumptions in cryptography has been thriving recently, as is evident from the long list of interesting research papers cited above. (See Section 8 for more details.)
The authenticity of photos is crucial for journalism and investigations but is difficult to ensure due to powerful digital editing tools. One approach is to rely on special cameras that sign photos via secret keys embedded in them, so that anyone can verify the signature accompanying an image. (Some such cameras already exist.) However, it is often not useful or acceptable to release the original photograph because, e.g., some information needs to be redacted or blurred. These operations, however, cause the problem that the signature will no longer verify relative to the edited photo. A recent paper proposes an approach, called PhotoProof [NT16], that relies on zkSNARKs to prove, in zero-knowledge, that the edited image was obtained from a signed (and thus valid) input image only according to a set of permissible transformations. (More precisely, the camera actually signs a commitment to the input image, and this commitment and signature also accompany the edited image, and thus can be verified separately.)
Trusted/untrusted setup. All the above constructions, and the one we provide here, are in the trusted-setup model, i.e., the party that originally generates the scheme parameters holds some trapdoor information that is not revealed to the adversary. E.g., for the RSA-based constructions, any adversary that knows the factorization of the modulus can trivially cheat. An alternative body of constructions aims to build trapdoorless accumulators (also referred to as strong accumulators) [Nyb96a,Nyb96b,San99,BLL00,CHKO08,Lip12], where the owner is entirely untrusted (effectively, the owner and the server are the same entity). Unfortunately, the earlier of these works are quite inefficient for all practical purposes, while the more recent ones either yield witnesses that grow logarithmically with the size of X or rely on algebraic groups whose use is not yet common in cryptography. Alternatively, trapdoorless accumulators can be trivially constructed from zero-knowledge sets [MRK03], a much stronger primitive. While a scheme without the need for a trusted setup is clearly more attractive in terms of security, it is safe to say that we do not yet have a practical scheme with constant-size proofs based on standard security assumptions.
It may seem strange to consider the simulator's output distribution on no instances, since the zero-knowledge condition does not provide any guarantees about the quality of simulation on no instances. Indeed, Claim 3.9 is not derived from the zero-knowledge property of the proof system. Rather, it is based on the soundness of the proof system and the fact that the simulator always produces accepting transcripts (by our modification above). Intuitively, it says that the simulation captures at most an s(|x|) fraction of the probability space of the verifier's messages. Indeed, it is shown in [AH, PT, GV] that if this were not the case, then the simulator could be used to construct a prover strategy that convinces the verifier to accept with probability greater than s(|x|), contradicting the soundness of the proof system. Now, given input x, we construct circuits that sample from the following (joint) random variables.
It also makes sense to compare our work to the "MPC-in-the-head" technique from , where the idea is to construct a zero-knowledge protocol from a multiparty computation protocol. The technique requires a commitment scheme, and it is well known that such schemes cannot offer unconditional security for both parties. This means that for any protocol constructed using MPC-in-the-head, either soundness or zero-knowledge will be computational, unlike our results, which require no computational assumptions. Nevertheless, it should be noted that one can adapt the construction from  to give a protocol for quadratic residuosity modulo some number N with complexity similar to ours. This is because one can start the construction of  from a multiparty protocol that is particularly efficient because it is based on a special-purpose secret sharing scheme that works modulo N. For other cases, such as discrete log in a group of unknown order, no such secret sharing scheme is available; one has to resort to generic MPC protocols, and the complexity then becomes much larger than ours.
Our interpretation of the zero-knowledge property is that whatever any (possibly malicious) verifier can learn from the protocol (when run on an instance that has a solution), he could also have learned when presented with a sample from the distribution of legal solutions that corresponds to the protocol (when run by the honest parties). This is indeed a distribution, since a yes instance x can have many possible solutions, and the one that is output can depend on the randomness. This is in contrast to the decisional version of zero-knowledge protocols, where any instance has only one possible solution: either yes or no. Later we will show that if some search problem has a Search-ZK protocol that always outputs the same solution for every yes instance x (i.e., |supp((P, V)(x))| = 1), then in some sense the problem has a decisional zero-knowledge protocol.
output is the same whether b = 0 or 1, and P has probability exactly 1/2 of guessing b correctly. The complexity of the verifier V in the above protocol is that of selecting a random permutation and performing two matrix multiplications, both of which can be done in log-space. Hence, by Lemma 2.3, V has efficient perfect locally computable PREs. The prover P is computable in polynomial time because all the prover does is run the (polynomial-time) algorithm for dBDGNI. (That the running time of the reconstruction algorithm of the resulting secret sharing scheme is n^{O(d)} can be seen by tracing its dependence on the running time of the algorithm for dBDGNI - the one in  runs in time n^{O(d)} - in the proof of Theorem 3.1.)
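The verifier's two matrix multiplications correspond to relabeling a graph's adjacency matrix A as P A P^T for a random permutation matrix P. The following Python sketch shows just this operation; the toy graph, the function names, and the indexing convention are illustrative assumptions, not taken from the paper.

```python
import random

def permutation_matrix(pi):
    # P has P[pi[j]][j] = 1, so that (P A P^T)[pi[i]][pi[j]] = A[i][j].
    n = len(pi)
    return [[1 if pi[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def relabel(A, pi):
    # The verifier's step: two matrix multiplications, B = P A P^T.
    P = permutation_matrix(pi)
    return matmul(matmul(P, A), transpose(P))

# A 4-vertex path graph and a random relabeling of its vertices.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
pi = list(range(4))
random.shuffle(pi)
B = relabel(A, pi)
# Edge (i, j) of A appears as edge (pi[i], pi[j]) of B.
assert all(B[pi[i]][pi[j]] == A[i][j] for i in range(4) for j in range(4))
```

Each entry of B is an inner product over n terms, which is why the whole relabeling fits in log-space.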
If zero-knowledge proofs or arguments are also "proofs of knowledge", they are known as zero-knowledge proofs or arguments of knowledge (ZKPoK or ZKAoK). In fact,  discussed how to modify their protocol so that the modified protocol is a proof of knowledge. The presented method needs a public-coin zero-knowledge proof of knowledge, in which the verifier proves to the prover that the previous commitment is "valid". Concretely, the verifier first commits to a random challenge ch. At a later decommitment stage, the verifier proves, by a public-coin ZKPoK, that the committed value is indeed ch.  first presented a simple construction of super-constant-round LR-ZKPoK for the HC problem. An interesting problem left by [13, 20, 7] is how to construct constant-round leakage-resilient ZKPoK (or ZKAoK) for NP. In this paper, we focus on how to construct constant-round LR-ZKAoK for NP and give a construction for the HC problem by means of non-black-box simulation techniques.
Private-coin vs. public-coin. In the study of ZK proofs, whether the verifier makes its random coins public or keeps them private has a strong bearing on round-complexity. Indeed, constructing public-coin ZK proofs is viewed as a harder task. Very recently, Kalai, Rothblum and Rothblum  ruled out constant-round public-coin ZK proof systems for NP, even w.r.t. non-black-box simulation, assuming the existence of certain kinds of program obfuscation . However, their approach breaks down in the private-coin setting, where a verifier may keep the random coins used during the protocol private from the prover. This is not surprising, since five-round private-coin ZK proofs are already known . In this work, we investigate the feasibility of constructing private-coin ZK proofs (via non-black-box techniques) in fewer than five rounds. We remark that a candidate construction of a three-round (private-coin) ZK proof system was given by Lepinski , based on a highly non-standard "knowledge-type" assumption; we discuss the bearing of our results on Lepinski's protocol (and the underlying assumption) below.
Another important reason for choosing Quorum is that it is built by JPMorgan Chase with the support of Microsoft. By choosing Quorum, we can not only implement a zero-knowledge proof for transactions but also study the maturity of a solution built by major actors in the blockchain ecosystem. Finally, Quorum uses the GNU Lesser General Public License v3.0, which enables us to work freely and fork the original protocol for our prototype.
Delegation of Computation. However, most of these complex cryptographic primitives, such as anonymous authentications and DAAs, achieve their ultimate impact when implemented on portable and mobile devices. This sharpens the contrast between the pressing need to embed these protocols in such lightweight devices and the devices' practical limitations when performing many exponentiations or pairing evaluations. A common way to overcome this problem is to delegate (when possible) some computations to a more powerful, but not fully trusted, delegatee, as in [5, 7, 3, 8]. Since the latter entity cannot have access to secret values, most of the computations on the prover's side still have to be performed by the constrained device, which reduces the benefits of server-aided cryptography. Moreover, if the DLRS involved in the protocol contains several relations or variables, the overall computational cost may remain prohibitive. One may argue that the exponentiations in the first flow of Schnorr's protocol are precomputable. This is true if the basis is fixed, but when the proof is used as a building block in a more complex construction, the basis is not always fixed or known in advance (as, e.g., in the DAA schemes [5, 3]). The lack of a way to efficiently delegate the prover's side of the proof of knowledge may then prevent portable devices from accessing all the features of modern cryptography.
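To make the precomputation remark concrete, here is a minimal Python sketch of Schnorr's protocol over a toy group; the tiny parameters are hypothetical (real deployments need a group of roughly 256-bit prime order). The only expensive prover operation is the commitment a = g^r in the first flow, and it depends on the basis g, so it can be precomputed only when g is fixed in advance.

```python
import random

# Toy Schnorr identification: proof of knowledge of x = log_g(h).
# Hypothetical tiny parameters, for illustration only.
p = 1019                     # prime, with p - 1 = 2 * 509
q = 509                      # prime order of the subgroup of squares mod p
g = 4                        # 4 = 2^2 generates the order-q subgroup

x = random.randrange(1, q)   # prover's secret
h = pow(g, x, p)             # public key

# Flow 1: commitment. This exponentiation is the costly step, and it
# uses the basis g, hence it is precomputable only for fixed g.
r = random.randrange(1, q)
a = pow(g, r, p)

# Flow 2: verifier's random challenge.
c = random.randrange(q)

# Flow 3: response, a single multiplication mod q, cheap for the prover.
s = (r + c * x) % q

# Verification: g^s == a * h^c (mod p).
assert pow(g, s, p) == (a * pow(h, c, p)) % p
```

When the basis g is only fixed at proof time, as in the DAA-style constructions mentioned above, the constrained device must compute a = g^r online, which is exactly the cost one would like to delegate.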
Designing a DVNIZK with unbounded soundness has proven to be highly non-trivial. In fact, apart from publicly verifiable NIZKs (which can be seen as a particular type of DVNIZK where the secret key of the verifier is the empty string), the only known construction of a DVNIZK claiming to satisfy unbounded soundness is that of [DFN06], where the claim is supported by a proof of security in an idealized model. However, we found this claim to be flawed: there is an explicit attack against the unbounded soundness of any protocol obtained using the compiler of [DFN06], which operates by using slightly malformed proofs to extract the verification key. In Appendix A, we describe our attack and identify the flaw in the proof of Theorem 5 in [DFN06, Appendix A]. We have notified the authors of our finding and will update future versions of this work with their reply. To our knowledge, constructing designated-verifier zero-knowledge proof systems whose soundness is maintained after polynomially many interactions with the prover remains an open question. In all current constructions, the common reference string and the public key must be refreshed after a logarithmic number of proofs.
Finally, note that if one can transform any statistical ZK proof into a resettable statistical ZK proof without losing the efficiency of the prover, then together with our main result of Theorem 1.1 this would imply that the problems with efficient ZK-PCPs are exactly those in SZK ∩ NP. Relation to Basing Cryptography on Tamper-Proof Hardware. A main motivation of [IMS12] to study the possibility of efficient ZK-PCPs for NP comes from a recent line of work on basing cryptography on tamper-proof hardware (e.g. [Kat07,MS08,CGS08,GKR08,GIS + 10,Kol10,GIMS10]). In this model, the parties can exchange classical bits as well as hardware tokens that hide a stateful or stateless efficient algorithm. The receiver of a hardware token is only able to use it as a black box and call it on polynomially many inputs. Using stateless hardware tokens makes the protocol secure against "resetting" attacks, where the receiver of a token is able to reset the state of the token (say, by cutting its power). The work of Goyal et al. [GIMS10] focused on the power and limits of stateless tamper-proof hardware tokens in achieving statistical security and proved that statistical zero-knowledge for all of NP is possible using a single stateless token sent from the prover to the verifier, followed by O(1) rounds of classical interaction. A natural question remaining open after the work of [GIMS10] was whether the classical interaction can be eliminated to achieve statistical ZK for NP using only a single stateless token. It is easy to see that this question is in fact equivalent to our main question above, and thus our Theorem 1.1 proves that a single (efficient) stateless token is not sufficient for achieving statistical ZK proofs for NP.
Zero-knowledge proofs are of considerable theoretical and practical interest to mathematicians and cryptographers alike. ZKPs achieve the seemingly contradictory goals of proving a statement without revealing anything other than the fact that the statement is indeed true. The zero-knowledge proofs presented for Graph Isomorphism and 3-colorability have further implications for complexity theory which are not fully discussed in this paper. The Fiat-Shamir Identification Protocol, while not normally implemented in modern cryptosystems, is the basis of existing zero-knowledge entity authentication schemes and shows that such schemes can actually be used in practice.
Zero-knowledge proofs, introduced in the seminal work of Goldwasser, Micali and Rackoff [GMR85], are a fundamental building block in cryptography. Loosely speaking, a zero-knowledge proof is an interactive proof between two parties, a prover and a verifier, with the seemingly magical property that the verifier does not learn anything beyond the validity of the statement being proved. Subsequent to their introduction, zero-knowledge proofs have been the subject of a great deal of research (see, for example, [BSMP91, DDN91, Ost91, OW93, DNS98, CGGM00, Bar01, IKOS09]) and have found numerous applications in cryptography (e.g., [GMW87, FFS88]). Concurrent zero-knowledge. The original definition of zero-knowledge is only relevant to the "stand-alone" setting, where security holds only if the protocol runs in isolation. As such, unfortunately, it does not suffice if one wishes to run a zero-knowledge proof over a modern network environment such as the Internet. Towards that end, Dwork, Naor and Sahai [DNS98] initiated the study of concurrent zero-knowledge (cZK) proofs, which remain secure even if several instances of the protocol are executed concurrently under the control of an adversarial verifier. Subsequent to their work, cZK has been the subject of extensive research, with a large body of work devoted to studying its round-complexity. In the standard model, the round-complexity of cZK was improved from polynomial to slightly super-logarithmic in a sequence of works [RK99, KP01, PRS02]. In particular, the