polynomial time probabilistic algorithm

Top PDF polynomial time probabilistic algorithm:

Probabilistic & Statistical Methods in Cryptology: An Introduction by Selected Topics (pdf)

The most famous classical factorization algorithms are the Quadratic Sieve (QS) and the Number Field Sieve (NFS). Though subexponential, they are not polynomial. The QS is the fastest general-purpose factorization algorithm for numbers with fewer than 110 digits, whereas the NFS has the same property for numbers with more than 110 digits (see Schneier (1996), p. 256). Recently, RSA-576, a number of 174 decimal digits, was factorized by Franke from the University of Bonn with the aid of the NFS. The NFS was also used to factorize the Mersenne number 2^757 − 1 (with 228 decimal digits) by the Internet project NFSNET (about 5 months of computing time on up to 120 machines was necessary). Further limits on the factorization of large numbers can be found on the Internet site CiteSeer. For particular types of numbers to be factorized, many specially designed algorithms have been developed, which in these cases are faster than the above-mentioned ones.

A new direction of cryptanalysis would be the possible use of quantum computers instead of classical Turing machines. Up to now, quantum computing has been more or less only a theoretical concept based on the superposition principle of quantum mechanics. Beyond some basic experiments, nobody really has an idea how to realize physical quantum computers that work efficiently. However, if one day quantum computers could be built, this would have dramatic consequences for cryptology. Namely, in the second half of the 1990s, Peter Shor showed that on a quantum computer, large numbers can be factorized in polynomial (with respect to the length of the binary expansion of the number) time! So in this case, RSA and all related systems would be worthless against an adversary who has a quantum computer at his disposal.

Shor's method is in fact a hybrid algorithm in the sense that it consists of four components, one being done by quantum computing and three others (a little trick based on the Euclidean algorithm from elementary number theory, a Fourier transform, and a continued-fraction approximation) that can be done on a classical computer. (Note that the Fourier transform component can, but need not, be done on a quantum computer.) This algorithm will be explained in Section 3.4. We note that Shor has also developed an algorithm for solving the discrete logarithm problem on quantum computers. We will not discuss it here, but the principles are similar.
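Since the excerpt walks through Shor's hybrid structure, here is a minimal sketch of the classical components it names (the Euclidean-algorithm trick and the continued-fraction approximation). All names and toy parameters are ours, not the book's; the quantum order-finding step is replaced by a classical, exponential stand-in, so this illustrates only the post-processing logic.

```python
# A minimal sketch of the *classical* components of Shor's algorithm.
# The quantum step is simulated by computing the period directly, which
# is exponential classically -- for illustration only.
from fractions import Fraction
from math import gcd

def find_period_classically(a, N):
    """Stand-in for the quantum order-finding step: the order r of a mod N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_postprocessing(a, N, y, Q):
    """Recover the period from a measurement y out of Q = 2^t QFT outcomes
    via continued-fraction approximation, then try to split N."""
    # y/Q is close to k/r for some k; limit_denominator performs the
    # continued-fraction expansion used in Shor's analysis.
    r = Fraction(y, Q).limit_denominator(N).denominator
    if r % 2 == 1 or pow(a, r, N) != 1:
        return None  # bad luck: rerun with a fresh random a
    f = gcd(pow(a, r // 2, N) - 1, N)   # the Euclidean-algorithm trick
    return f if 1 < f < N else None

N, a = 15, 7             # toy example: the order of 7 mod 15 is 4
r = find_period_classically(a, N)
Q = 2048                 # pretend 11 qubits were measured
y = round(Q * 1 / r)     # an idealized measurement outcome k/r with k = 1
print(shor_classical_postprocessing(a, N, y, Q))   # -> 3, a factor of 15
```

Fraction.limit_denominator performs exactly the continued-fraction expansion used to recover r from the measured value y/Q.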

Enhanced Security of Attribute-Based Signatures

For policy-based signatures, the authors of [BF14] show that without restricting the policies the reverse direction of Corollary 2.1 does not hold, by presenting a counterexample. Since ABS is a special variant of policy-based signatures, the counterexample can also be applied to the reverse direction of Theorem 2.1. In the following we restate the counterexample. To see that simulation-based privacy is a stronger notion than standard privacy, consider our example scheme above, where the Sign algorithm appends the used attribute set to the signature. Assume a policy class for which computing a satisfying attribute set is computationally hard (e.g., satisfiability of CNF formulas). Obviously, the example scheme with this policy class is not perfectly simulation private, since under standard assumptions there is no probabilistic polynomial-time algorithm SimSign that, given just the master secret and the message, computes a satisfying attribute set and appends it to the signature.

Scalable Deniable Group Key Establishment

Communication network and adversarial capabilities. The network is non-private and fully asynchronous. Arbitrary point-to-point connections among the users are possible (no dedicated broadcast functionality is assumed to be available). The adversary A is modeled as a probabilistic polynomial-time algorithm with complete control over the communication network. Its capabilities are captured by the following oracles:
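The excerpt ends before enumerating the oracles. As an illustration, the sketch below models the oracle set most commonly used in group key establishment proofs (Send, Reveal, Corrupt, Test); this is an assumption for illustration, not necessarily this paper's exact interface.

```python
# A hedged sketch of a typical oracle interface in such security models.
# The oracle set (Send, Reveal, Corrupt, Test) is the common choice in
# group key establishment proofs and is assumed, not taken from the paper.
import secrets

class Session:
    """Toy protocol instance; a real one would run the group protocol."""
    def __init__(self):
        self.session_key = secrets.token_bytes(16)
    def process(self, message):
        return b"response to " + message

class GKEChallenger:
    """Answers the adversary's oracle queries; A fully controls delivery."""
    def __init__(self, sessions, long_term_keys):
        self.sessions = sessions            # instance id -> Session
        self.long_term_keys = long_term_keys  # user id -> secret key
        self.test_bit = secrets.randbits(1)

    def send(self, instance, message):
        """Deliver an arbitrary message to an instance (models complete
        control over the asynchronous, non-private network)."""
        return self.sessions[instance].process(message)

    def reveal(self, instance):
        """Leak the session key of a completed instance."""
        return self.sessions[instance].session_key

    def corrupt(self, user):
        """Leak a user's long-term secret key."""
        return self.long_term_keys[user]

    def test(self, instance):
        """Return the real key or a random one; A must guess which."""
        key = self.sessions[instance].session_key
        return key if self.test_bit else secrets.token_bytes(len(key))
```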

Algorithms and Complexity (Lecture Notes), Herbert S. Wilf (pdf)

How hard is it to factor a large integer? At this writing, integers of up to perhaps a couple of hundred digits can be approached with some confidence that factorization will be accomplished within a few hours of computing time on a very fast machine. If we think in terms of a message that is about the length of one typewritten page, then that message would contain about 8000 bits, equivalent to about 2400 decimal digits. This is in contrast to the largest feasible length that can be handled by contemporary factoring algorithms, about 200 decimal digits. A one-page message is therefore well into the zone of computational intractability. How hard is it to find the multiplicative inverse, mod (p − 1)(q − 1)? If p and q are known, then it's easy to find the inverse, as we saw in Corollary 4.3.1. Finding an inverse mod n is no harder than carrying out the extended Euclidean algorithm, i.e., it's a linear-time job.
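A minimal sketch of that "linear-time job": the extended Euclidean algorithm yields the multiplicative inverse of an exponent e modulo (p − 1)(q − 1). The toy numbers below are ours, for illustration.

```python
# Computing d with e*d == 1 (mod (p-1)*(q-1)) via the extended
# Euclidean algorithm, as in the RSA setup discussed above.
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(e, m):
    g, x, _ = extended_gcd(e, m)
    if g != 1:
        raise ValueError("e and m are not coprime; no inverse exists")
    return x % m

p, q, e = 61, 53, 17              # toy RSA-sized numbers
phi = (p - 1) * (q - 1)           # 3120
d = mod_inverse(e, phi)
print(d, (e * d) % phi)           # -> 2753 1
```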

On scenario aggregation to approximate robust combinatorial optimization problems

As most robust combinatorial min-max and min-max regret problems with discrete uncertainty sets are NP-hard, research on approximation algorithms and approximability bounds has been a fruitful area of recent work. A simple and well-known approximation algorithm is the midpoint method, where one takes the average over all scenarios and solves a problem of nominal type. Despite its simplicity, this method still gives the best-known bound on a wide range of problems, such as robust shortest path or robust assignment problems.
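A minimal sketch of the midpoint method for robust shortest path, assuming networkx is available (the graph, scenario costs, and function names are ours): average the scenario costs edge by edge, then solve a single nominal shortest-path problem.

```python
# Midpoint method: aggregate all cost scenarios into their average,
# then solve one nominal shortest-path problem on the averaged costs.
import networkx as nx

def midpoint_shortest_path(G, scenarios, source, target):
    """scenarios: list of dicts mapping each edge (u, v) to its cost.
    The averaged solution carries known approximation guarantees for
    min-max robust problems with discrete scenario sets."""
    H = nx.DiGraph()
    for u, v in G.edges():
        avg = sum(s[(u, v)] for s in scenarios) / len(scenarios)
        H.add_edge(u, v, weight=avg)
    return nx.shortest_path(H, source, target, weight="weight")

G = nx.DiGraph([("s", "a"), ("a", "t"), ("s", "t")])
scenarios = [{("s", "a"): 1, ("a", "t"): 1, ("s", "t"): 5},
             {("s", "a"): 4, ("a", "t"): 4, ("s", "t"): 3}]
print(midpoint_shortest_path(G, scenarios, "s", "t"))  # -> ['s', 't']
```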

A Revocable Online-Offline Certificateless Signature Scheme without Pairing

We propose a time-interval-based revocable online-offline certificateless signature scheme that does not use pairing. We prove our scheme secure using a tight security reduction to the Computational Diffie-Hellman problem in the random oracle model. In this scheme, we give key sanity checks for both user verification and public verification. Our partial private keys are small, as they are elements of the field Z_q. Unlike in [14], signatures can be generated at any time instant, providing greater flexibility.

Semantic Security and Indistinguishability in the Quantum World

A secret-key encryption scheme is said to be IND-qCPA secure if the success probability of any quantum probabilistic polynomial-time adversary winning the game defined by the qCPA learning phase is at most 1/2 plus a negligible function of the security parameter.

Algorithms for Extended Galois Field Generation and Calculation

The paper aims to suggest algorithms for Extended Galois Field generation and calculation. The algorithm analysis shows that the proposed algorithm for finding a primitive polynomial is faster than traditional polynomial search, and that when table operations in GF(p^m) are used, the algorithms are faster than traditional polynomial addition and subtraction.
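As an illustration of the table operations mentioned above (our sketch for GF(2^4) with the primitive polynomial x^4 + x + 1, not the paper's algorithm), exp/log tables reduce a field multiplication to one integer addition modulo p^m − 1:

```python
# Table-based arithmetic in GF(2^4): build exp/log tables once from a
# primitive polynomial, then multiply by table lookup.
PRIM = 0b10011          # x^4 + x + 1, primitive over GF(2)
N = 2**4 - 1            # order of the multiplicative group

exp_table, log_table = [0] * (2 * N), [0] * (N + 1)
x = 1
for i in range(N):
    exp_table[i] = exp_table[i + N] = x   # doubled table avoids a mod
    log_table[x] = i
    x <<= 1                     # multiply by the generator alpha = x
    if x & 0b10000:             # reduce modulo the primitive polynomial
        x ^= PRIM

def gf_mul(a, b):
    """One table addition instead of polynomial multiply-and-reduce."""
    if a == 0 or b == 0:
        return 0
    return exp_table[log_table[a] + log_table[b]]

print(gf_mul(0b0110, 0b0111))   # -> 1: x^2+x and x^2+x+1 are inverses
```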

An algorithm for the spectral factorization of unimodular para hermitian polynomial matrices in continuous time

Most of the algorithms for the spectral factorization of matrix polynomials depend on the existence of the roots of the given polynomial matrices, so it is almost impossible to execute them for unimodular para-Hermitian polynomial matrices, whose determinants are nonzero constants and therefore have no roots.

A polynomial time biclustering algorithm for finding approximate expression patterns in gene expression time series

Figure 14 shows a summary of these top 12 1-CCC-Biclusters (expression patterns, number of genes, and contiguous time points), together with information about functional enrichment relative to terms in the Gene Ontology. To perform the analysis for functional enrichment, we considered only the "Biological Process" ontology and terms above level 2. We used the p-values obtained from the hypergeometric distribution to assess the over-representation of a specific GO term. To consider an e-CCC-Bicluster highly significant, we require its genes to show highly significant enrichment in one or more of the "Biological Process" ontology terms by having a Bonferroni-corrected p-value below 0.01. An e-CCC-Bicluster is considered significant if at least one of the GO terms analyzed is significantly enriched, with a (Bonferroni-corrected) p-value in the interval [0.01, 0.05[. Note that, although we only consider as functionally enriched the terms with Bonferroni-corrected p-values below 0.01 (for high statistical significance) or below 0.05 (for statistical significance), the p-values presented in the text are without correction, as is common practice in the literature.

It is worth noting that all the 1-CCC-Biclusters analyzed have in general a large number of GO terms enriched (after Bonferroni correction), and all of them have at least one term whose p-value is highly significant (see Figure 14 for details). This means all the 1-CCC-Biclusters identified are biologically relevant, as reported by the functional enrichment analysis performed using the Gene Ontology. Figure 15 and Figure 16 show a detailed analysis of the Gene Ontology annotations, together with information about transcriptional regulations available in the YEASTRACT database, for the 1-CCC-Biclusters with transcriptional up-regulation patterns and with transcriptional down-regulation patterns, respectively. When a 1-CCC-Bicluster has more than 10 terms enriched, or its genes are co-regulated by more than 10 transcription factors (TFs), only the 10 terms with the lowest p-values or the 10 transcription factors regulating the highest percentage of the genes in the 1-CCC-Bicluster are listed.
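A minimal sketch of the enrichment test described above, assuming scipy is available (the population sizes and term counts below are invented toy numbers): the hypergeometric tail probability for a GO term, with Bonferroni correction and the two significance thresholds from the text.

```python
# Hypergeometric over-representation test for one GO term, followed by
# a Bonferroni correction over the number of terms tested.
from scipy.stats import hypergeom

def go_term_pvalue(pop_size, term_size, cluster_size, overlap):
    """P[X >= overlap] when drawing cluster_size genes from a population
    in which term_size genes carry the GO term."""
    return hypergeom.sf(overlap - 1, pop_size, term_size, cluster_size)

def classify(p_raw, n_terms_tested):
    p_corr = min(1.0, p_raw * n_terms_tested)   # Bonferroni correction
    if p_corr < 0.01:
        return "highly significant"
    if p_corr < 0.05:
        return "significant"
    return "not enriched"

# toy numbers: 6000 genes, 40 with the term, bicluster of 25, 8 annotated
p = go_term_pvalue(6000, 40, 25, 8)
print(p, classify(p, n_terms_tested=200))
```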

A polynomial-time algorithm for a flow-shop batching problem with equal-length operations

While the earlier algorithms have time complexities O(n) [3, 6] and O(√n) [9], which are both exponential with respect to the binary encoded input length, the complexity of the new algorithm is O(log^3 n). The main challenges of developing a fast solution algorithm involve identifying a complex discrete periodic structure, finding out how it can be described in mathematical terms using a recursive representation, and proving that the identified structure and the algorithm based on it are correct.
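To make the "exponential with respect to the binary encoded input length" comparison explicit (our addition, standard complexity bookkeeping rather than anything from the paper):

```latex
% n is encoded in L = \Theta(\log n) bits, so the running times compare as
L = \lceil \log_2(n+1) \rceil \;\Longrightarrow\;
O(n) = O\!\bigl(2^{L}\bigr), \qquad
O(\sqrt{n}) = O\!\bigl(2^{L/2}\bigr), \qquad
O(\log^{3} n) = O\!\bigl(L^{3}\bigr).
```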

Spatial isolation implies zero knowledge even in a quantum world

In fact, we actually use a more refined version, which tests a polynomial's individual degree rather than its total degree. In the classical setting, such a test is implicit in [GS06] via a reduction from individual-degree to total-degree testing. Informally, this reduction first invokes the test for low total degree, then performs univariate low-degree testing with respect to a random axis-parallel line in each axis. We extend this reduction and its analysis to the setting of MIP*. (See Section 7 for details.) The analysis of the low individual degree test was communicated to us by Thomas Vidick, to whom we are grateful for allowing us to include it here.

With the foregoing low-degree test at our disposal, we are ready to outline the simple transformation from low-degree IPCPs to MIP* protocols. We begin with a preprocessing step. Note that the low individual degree test provides us with the means to assert that the provers can (approximately) only use their entangled state to choose a low-degree polynomial Q, and answer the verifier with the evaluation of Q on a single, uniformly distributed point (or plane). Thus, it is important that the IPCP verifier (which we start from) only makes a single uniform query to its oracle. By adapting techniques from [KR08], we can leverage the algebraic structure of the low-degree IPCP and capitalize on the interaction to ensure the IPCP verifier has this property, at essentially the cost of increasing the round complexity by 1.

A polynomial time algorithm for calculating the probability of a ranked gene tree given a species tree

Minimizing (12) as a criterion for the ranked species tree will tend to penalize long edges of the species tree which have multiple lineages persisting through multiple species divergence events. As an example, in Figure 1b, the gene tree has an MDC cost of 1, since there are two lineages exiting the population immediately ancestral to A and B; however, the cost according to (12) is 2, because there are two edges on the beaded version of the species tree (Figure 2) that each have an extra lineage. In Figure 1c, the gene tree has an MDC cost of 0 for the species tree, since it has the matching unranked topology; however, the number of extra lineages from equation (12) is 1. We note that in Figure 1c, in interval τ3, incomplete lineage sorting (and deep coalescence) have not occurred as these concepts are normally used. To capture the idea that coalescence has nevertheless occurred in a more ancient time interval than allowed, we might refer to the coalescence of A and B in Figure 1c as an "ancient lineage sorting" event (rather than an incomplete lineage sorting event), or an ancient coalescence rather than a deep coalescence. We could therefore refer to minimizing equation (12) as the Minimize Ancient Coalescence (MAC) criterion, which would provide an interesting comparison to the usual topology-based MDC criterion.

List-Decoding Multiplicity Codes

• Our first result shows that univariate multiplicity codes achieve "list-decoding capacity." Specifically, for every R, ε, there is a family of univariate multiplicity codes of rate R which can be list-decoded from a (1 − R − ε) fraction of errors in polynomial time. This provides a new (and perhaps more natural) example of an explicit capacity-achieving list-decodable code, the only other example being the original "Folded Reed-Solomon codes" of Guruswami and Rudra [10].

• Our second result shows how to list-decode multivariate multiplicity codes up to the Johnson bound.
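For context, the two decoding radii compared in these results are, in standard notation (our addition; δ denotes the code's relative distance, and nothing here is specific to this paper):

```latex
% Capacity radius for rate R (first result) vs. the generic Johnson
% radius for a code of relative distance \delta (second result):
\rho_{\text{capacity}} = 1 - R - \varepsilon,
\qquad
\rho_{\text{Johnson}} = 1 - \sqrt{1 - \delta}.
```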

Pricing Ad Slots with Consecutive Multi-unit Demand

We study three types of mechanisms, consider the revenue maximization problem under each, and compare their effectiveness in revenue maximization in a dynamic setting where buyers may change their bids to improve their utilities. Our results make an important step toward understanding the advantages and disadvantages of their use in practice. Assume the ad supplier divides the ad space into slots (pieces) small enough that each advertiser is interested in a position consisting of a fixed number of consecutive pieces. In modelling values to the advertisers, we modify the position auction model from the sponsored search market [9, 18], where each ad slot is measured by its Click-Through Rate (CTR), with users' interest expressed by a click on an ad. Since display advertising is usually sold on a per-impression (CPM) basis instead of a per-click basis, the quality factor of an ad slot stands for the expected impressions it will bring per unit of time. Unlike in traditional position auctions, people may have varying demands (needing different spaces to display their ads) in a rich media auction, and correspondingly the market maker must decide which ads should be displayed.

A Polynomial Time Dynamic Programming Algorithm for Phrase Based Decoding with a Fixed Distortion Limit

Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages. The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases. However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit-string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right order on the source side.
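A toy illustration of the state-space contrast (our construction, not Galley and Manning's code or the paper's decoder): a bit-string coverage vector admits all 2^n subsets of source positions as states, while a state that stores at most a fixed number of disjoint covered spans grows only polynomially in n.

```python
# Counting decoder states under the two representations discussed above.
from itertools import combinations

def bitvector_states(n):
    """Every subset of n source positions can be a coverage vector."""
    return 2 ** n

def span_list_states(n, max_spans):
    """States storing at most max_spans disjoint covered spans, each
    span encoded by its endpoints: polynomially many for fixed max_spans."""
    spans = [(i, j) for i in range(n) for j in range(i, n)]
    count = 0
    for k in range(max_spans + 1):
        for combo in combinations(spans, k):
            if all(a2 < b1 or b2 < a1           # pairwise disjoint
                   for (a1, a2), (b1, b2) in combinations(combo, 2)):
                count += 1
    return count

n = 10
print(bitvector_states(n))      # 1024, and 2^n in general
print(span_list_states(n, 2))   # bounded by n^(2k), polynomial in n
```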

Optimizing Analytical Queries on Probabilistic Databases with Unmerged Duplicates Using MapReduce

The input to our system is a large probabilistic database containing unmerged similar records. A small example probabilistic database is shown in Table 2.1. Here the L_name "Lorys" occurs 10 times in the database, so a single query yields 10 similar results. A solution is needed for handling these types of multifunction queries. The databases are taken from an online shopping customers' database, which stores data in sequence from various system resources. We consider two databases of different sizes; the main purpose of taking two databases is to compare time performance.

Exact Polynomial time Algorithm for the Clique Problem and P = NP for Clique Problem

The paper is organized in three sections. Section 1 presents the introduction, covering previous works of other researchers, including the Minimum Nil Sweeper Algorithm. Section 2 presents a new general algorithm to solve the Clique problem for an arbitrary undirected graph; the complexity of the general algorithm is analysed, and finally P = NP for the Clique problem is proved. In Section 3, some example graphs are cited and tested by both algorithms, and finally a theorem related to the intersection graph is stated.

Algorithm to Compute Sudoku Solution by Using Constraint Satisfaction and Permutation in Polynomial Time

ABSTRACT: A standard Sudoku puzzle has a 9 x 9 grid which is to be filled with the numbers 1-9 in such a way that no number is repeated in any row, column, or any of the nine 3 x 3 mini-grids. A partially filled grid having a unique solution acts as a puzzle to be completed. Sudoku solvers are generally implemented using two techniques, namely solving by rules or using brute force. The algorithm proposed in this paper combines the two approaches to reduce the time complexity and solve the puzzle in polynomial time.
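A minimal sketch of the constraint-satisfaction side of such a solver (our code, not the paper's algorithm): checking whether placing a digit violates the row, column, or mini-grid constraints described in the abstract.

```python
# Constraint check for a 9x9 Sudoku grid (list of lists, 0 = empty):
# a digit d may be placed at (r, c) only if no row, column, or 3x3
# mini-grid constraint is violated.
def is_valid(grid, r, c, d):
    if any(grid[r][j] == d for j in range(9)):       # row constraint
        return False
    if any(grid[i][c] == d for i in range(9)):       # column constraint
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)              # mini-grid corner
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))  # mini-grid constraint

grid = [[0] * 9 for _ in range(9)]
grid[0][0] = 5
print(is_valid(grid, 0, 8, 5))   # False: 5 already in row 0
print(is_valid(grid, 8, 8, 5))   # True
```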

Verifiable Order Queries and Order Statistics on a List in Zero-Knowledge

In this paper, we define the security properties for ZKL and PPAL and provide efficient constructions for them. The privacy property against the verifier in ZKL and the client in PPAL is zero-knowledge: the answers and the proofs are indistinguishable from those generated by a simulator that knows nothing except the previous and current queries and answers, and hence cannot possibly leak any information beyond that. While we show that PPAL can be implemented using our ZKL construction, we also provide a direct PPAL construction that is considerably more efficient thanks to the trust that clients put in the list owner.

Let n be the size of the list and m the size of the query, i.e., the number of list elements whose order is sought. Our PPAL construction uses proofs of O(m) size and allows the client to verify a proof in O(m) time. The owner executes the setup in O(n) time and space. The server uses O(n) space to store the list and related authentication information, and takes O(min(m log n, n)) time to answer a query and generate a proof. In contrast, in the ZKL construction, the time and storage requirements have an overhead that depends linearly on the security parameter. Note that ZKL also supports (non-)membership queries. The client in PPAL and the verifier in ZKL require only one round of communication for each query. Our ZKL construction is based on zero-knowledge sets and homomorphic integer commitments. Our PPAL construction uses a novel technique of blinding accumulators along with bilinear aggregate signatures. Both constructions are secure in the random oracle model.