β, γ. Our analysis also uses a parameter δ, which specifies the lower limit of the memory range for the tradeoff algorithm of A as M ≥ T^{1/δ}. For example, we established the tradeoff TM^3 = N for A_{7,1} in the range T^{1/4} ≤ M ≤ T^{1/2}, hence we have β = 1, γ = 3, δ = 4. We will derive a basic tradeoff for M ≥ T^{1/δ} and then extend it to M < T^{1/δ}. Furthermore, the tradeoffs depend on the input range size L = 2^ℓ, since the complexity of the preparation-phase algorithm varies according to whether or not ℓ is below some threshold. For example, when applying PCS with a small value of ℓ, we have to apply it to an expanding function and adapt the complexity as specified in Section 3.6.
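
As a sanity check on the numbers above, the curve can be evaluated directly (a minimal sketch; the value of N below is hypothetical, not one of the paper's parameters):

```python
# Sketch: the tradeoff T * M^3 = N (beta = 1, gamma = 3) is only claimed in
# the range T^(1/4) <= M <= T^(1/2), i.e. delta = 4.

def time_for_memory(N: float, M: float):
    """Return the time T implied by T * M^3 = N, or None if M is out of range."""
    T = N / M ** 3
    if T ** 0.25 <= M <= T ** 0.5:
        return T
    return None

# Hypothetical N = 2^80: a memory budget of 2^14 lies inside the valid range
# (2^9.5 <= 2^14 <= 2^19) and yields T = 2^38; a budget of 2^30 falls outside.
print(time_for_memory(2.0 ** 80, 2.0 ** 14))  # -> 274877906944.0, i.e. 2^38
print(time_for_memory(2.0 ** 80, 2.0 ** 30))  # -> None
```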

Let us take one of our concrete parameters as an example.
Let n = 144, k = 5, i.e. we suggest finding a 32-XOR on 144 bits. A straightforward implementation of the generalized birthday algorithm would require 1.6 GBytes of RAM and about 1 minute on a single CPU core. By recomputing the hash values for the first two steps and truncating the indices to 8 bits at the last two steps, we can decrease the peak tuple length to 176 bits, in total requiring 704 MBytes, or aggressively trim to 4 bits, reaching 500 MBytes. However, further reductions are more expensive. Using 2^24 instead of 2^25 tuples would incur a computational penalty factor of 2^10, and a factor of 2^20 for using 2^20 tuples (q = 1/32). We conclude that for large memory reductions the computational penalty would be prohibitive even for adversaries equipped with a large number of parallel computational cores.
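
The 704 MBytes figure can be reproduced with simple arithmetic (a sketch, not the paper's implementation):

```python
# 2^25 tuples, each stored at the peak length of 176 bits.
TUPLES = 2 ** 25
PEAK_BITS = 176

peak_bytes = TUPLES * PEAK_BITS // 8
print(peak_bytes // 2 ** 20)  # -> 704 (MBytes, matching the text)
```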

Let r, B and w be positive integers. Let C be a linear code of length Bw and a subspace of F_2^r. The k-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks with Hamming weight k. This problem was mainly studied after 2002. The intractability of this problem is critical for cryptography, since solving it gives a fast attack against FSB, SWIFFT and learning parity with noise. In this paper, the classical methods are combined into a single algorithm and improved.
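
Reading the definition as each of the w blocks having Hamming weight exactly k, membership in the k-regular set can be checked directly (our own helper, not from the paper):

```python
def is_k_regular(word, B: int, w: int, k: int) -> bool:
    """word is a 0/1 list of length B*w; each length-B block must have weight k."""
    assert len(word) == B * w
    return all(sum(word[i * B:(i + 1) * B]) == k for i in range(w))

# Example: w = 2 blocks of length B = 4, each of weight k = 1.
print(is_k_regular([0, 1, 0, 0, 0, 0, 0, 1], B=4, w=2, k=1))  # -> True
```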

P. N. RATHIE and P. ZÖRNIG Received 3 October 2001
We study the birthday problem and some possible extensions. We discuss the unimodality of the corresponding exact probability distribution and express the moments and generating functions by means of confluent hypergeometric functions U(−;−;−) which are computable using the software Mathematica. The distribution is generalized in two possible directions, one of them consists in considering a random graph with a single attracting center. Possible applications are also indicated.
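
For reference, the exact classical probability that the paper's distribution generalizes can be computed directly (a standard textbook computation, not code from the paper):

```python
def birthday_collision_prob(m: int, n: int = 365) -> float:
    """P(at least two of m people share one of n equally likely birthdays)."""
    p_no_collision = 1.0
    for i in range(m):
        p_no_collision *= (n - i) / n
    return 1.0 - p_no_collision

print(round(birthday_collision_prob(23), 3))  # -> 0.507, the classic threshold
```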

There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to the inefficiency of classical optimization algorithms in solving large-scale combinatorial and/or highly non-linear problems. The situation is not much different when integer and/or discrete decision variables are required in most of the linear optimization models. One of the main characteristics of the classical optimization algorithms is their inflexibility in adapting the solution algorithm to a given problem. Generally, a given problem is modelled in such a way that a classical algorithm like the simplex algorithm can handle it. This usually requires making several assumptions which might not be easy to validate in many situations. To overcome these limitations, more flexible and adaptable general-purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as closely as possible to reality. Based on this motivation, many nature-inspired algorithms have been developed in the literature, such as genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions than classical algorithms. A branch of nature-inspired algorithms known as swarm intelligence focuses on insect behaviour in order to develop meta-heuristics which can mimic insects' problem-solving abilities. Ant colony optimization, particle swarm optimization, wasp nets, etc. are some of the well-known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model the natural food-foraging behaviour of real honey bees. Honey bees use several mechanisms, like the waggle dance, to optimally locate food sources and to search for new ones. This makes them good candidates for developing new intelligent search algorithms.
In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, the development of an ABC algorithm for solving the generalized assignment problem, which is known to be NP-hard, is presented in detail along with some comparisons.
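
To fix ideas before the GAP-specific development, the three bee phases can be sketched on a toy continuous problem (our own minimal illustration with hypothetical parameters, not the chapter's algorithm):

```python
import random

def abc_minimize(f, dim, n_sources=10, limit=20, iters=200, seed=0):
    """Minimal Artificial Bee Colony loop minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    sources = [rand_point() for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=f)

    def try_neighbor(i):
        # Perturb one coordinate toward/away from a random other source.
        j, k = rng.randrange(dim), rng.randrange(n_sources)
        cand = sources[i][:]
        cand[j] += rng.uniform(-1, 1) * (cand[j] - sources[k][j])
        if f(cand) < f(sources[i]):          # greedy replacement
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):           # employed bees: local search
            try_neighbor(i)
        fits = [1.0 / (1.0 + f(s)) for s in sources]
        for _ in range(n_sources):           # onlookers: fitness-proportional
            r, acc = rng.uniform(0, sum(fits)), 0.0
            for i, w in enumerate(fits):
                acc += w
                if acc >= r:
                    break
            try_neighbor(i)
        for i in range(n_sources):           # scouts: abandon stale sources
            if trials[i] > limit:
                sources[i], trials[i] = rand_point(), 0
        cur = min(sources, key=f)
        if f(cur) < f(best):
            best = cur
    return best

best = abc_minimize(lambda x: sum(v * v for v in x), dim=3)
```

Employed and onlooker bees apply the same greedy neighbourhood move; onlookers simply bias effort toward sources with better fitness, and scouts re-seed any source whose trial counter exceeds the abandonment limit.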

We present the table of the three basic components of 𝑅, which are the interval truth-membership, interval indeterminacy-membership and interval falsity-membership parts. To choose the best candidate, we first propose the induced interval neutrosophic membership functions by taking the arithmetic average of the end points of the range, and mark (underline) the highest numerical grade in each row of each table. However, since the last column is the grade of belongingness of a candidate for each pair of parameters, it is not taken into account while marking. Then we calculate the score of each component of 𝑅 by taking the sum of the products of these numerical grades with the corresponding values of μ. Next, we calculate the final score by subtracting the score of the falsity-membership part of 𝑅 from the sum of the scores of the truth-membership part and of the indeterminacy-membership part of 𝑅. The machine with the highest score is the machine desired by the company.
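
The scoring rule described above amounts to two small formulas (a sketch with made-up grades; the actual tables of 𝑅 are in the paper):

```python
def component_score(grades, mus):
    """Sum of products of the marked numerical grades with the values of mu."""
    return sum(g * m for g, m in zip(grades, mus))

def final_score(truth: float, indeterminacy: float, falsity: float) -> float:
    """Final score = truth score + indeterminacy score - falsity score."""
    return truth + indeterminacy - falsity

# Hypothetical component scores:
print(final_score(3.0, 1.5, 0.5))  # -> 4.0
```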

1 Introduction
We denote by V (G) and E(G) the vertex set and the edge set of a simple graph G, respectively. Also, we denote by tG the vertex-disjoint union of t > 0 copies of G.
A factor F of G is a spanning subgraph of G, namely, a subgraph of G such that V (F ) = V (G); also, if F is i-regular, we call F an i-factor. In particular, a 1-factor of G (also called a perfect matching) is the vertex-disjoint union of edges of G whose vertices partition V (G), while a 2-factor of G is the vertex-disjoint union of cycles whose vertices span V (G). A 2-factor of G containing only one cycle is usually called a Hamiltonian cycle. We say that a factor is uniform when its components are pairwise isomorphic. Hence, a 1-factor is uniform, whereas a 2-factor might not be.
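
The i-factor definition translates directly into a degree check (our own helper, not from the paper):

```python
def is_i_factor(vertices, factor_edges, i: int) -> bool:
    """True if factor_edges forms a spanning i-regular subgraph on vertices."""
    degree = {v: 0 for v in vertices}
    for u, v in factor_edges:
        degree[u] += 1
        degree[v] += 1
    return all(d == i for d in degree.values())

# A 1-factor (perfect matching) of the 4-cycle 1-2-3-4-1:
print(is_i_factor({1, 2, 3, 4}, [(1, 2), (3, 4)], i=1))  # -> True
```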

rational, so they do not borrow more than the sum of their own estate, the debt of the bankrupt agent, and the total debt from other agents to him.
The generalized bankruptcy problem is a triple ( ), where is a vector of estates of the agents, ( ) , is a non-negative matrix of claims ( is the sum that agent owes to agent ), and nobody owes more than the sum he will have if all other agents return all their debts. In addition, each agent is endowed with its own . The following condition holds

We consider that our approach demonstrates that quantum mechanics is applicable at every scale of nature, and that the macroscopic world is a consequence of its asymptotic behavior in the high-energy regime. Even though our approach gives the correct classical results for periodic quantum systems, it is far from a general solution to the classical limit problem. There are still related open problems needed for a general mathematical formulation of the classical limit problem, such as the study of unbound systems. We are currently exploring residual effects of quantum transitions at the macroscopic level [23].

Although relevant work was done on the GBPP, only limited research has been devoted so far to the study of its approximability (Baldi et al., 2013).
In this paper we propose a thorough study of the approximability of the GBPP and show significant results. Firstly, we formally define the GBPP according to the previous works in the literature (Baldi, 2014; Baldi et al., 2012, 2014). According to this definition, given an instance of the problem, the sign of the objective function of any solution built from the instance can be negative, zero or positive. Nevertheless, the worst-case ratio definition requires the sign of the objective function of any solution of the problem to be non-negative. Therefore, we state an equivalent definition of the GBPP satisfying this requirement. We prove that not only is the GBPP an NP-hard problem, but even the problem of finding a feasible solution is NP-hard as well. We also claim that the GBPP is not approximable, unless P = NP. We then study the particular case of the GBPP with a single bin type and show how it can be reduced to the BPR (Dósa and He, 2006; Bein et al., 2008; Epstein and Levin, 2010; Epstein, 2010), which is approximable. We also show that, while standard and widespread heuristics like the FF and the BF have a finite worst-case ratio when employed to address the Bin Packing Problem (Johnson et al., 1974), this ratio becomes infinite when they are extended to the GBPP. Finally, we prove a very important result, i.e., that the GBPP satisfies Bellman's optimality principle. Exploiting this principle we develop a dynamic programming solution approach.
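
For concreteness, the First Fit heuristic mentioned above, in its plain Bin Packing form (a textbook sketch, not the paper's GBPP extension):

```python
def first_fit(items, capacity):
    """Place each item into the first open bin with room, else open a new bin."""
    bins = []  # each bin is a list of item sizes
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(len(first_fit([4, 8, 1, 4, 2, 1], capacity=10)))  # -> 2 bins
```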

Alice and Bob are playing a game on a board which is a 2-dimensional 300 × 300 grid. The board is subdivided into cells. Each cell can be uniquely identified by two integers representing (x, y) coordinates, each in the range from 1 to 300.
Two tokens are on the board on distinct cells. Alice starts the game. On each player's turn, that player chooses one of the tokens, chooses one of the coordinates of the cell it is on, and reduces that coordinate by some positive amount. The moved token cannot jump over or occupy the same cell as the other token. The token must also remain on the board (so both of its coordinates need to stay positive). The first player unable to make a move loses. Note that both players can move either token.
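
The move rules can be captured by a small legality check (our own helper; the problem statement itself defines no code):

```python
def is_legal_move(token, other, axis, new_value) -> bool:
    """token, other: (x, y) tuples; axis: 0 for x, 1 for y.

    A move reduces one coordinate by a positive amount, keeps both
    coordinates positive, and may neither land on nor jump over the
    other token.
    """
    old_value = token[axis]
    if not (1 <= new_value < old_value):      # positive reduction, stay on board
        return False
    fixed = 1 - axis                          # the coordinate that does not move
    if token[fixed] == other[fixed] and new_value <= other[axis] < old_value:
        return False                          # would land on or jump over
    return True

print(is_legal_move((5, 3), (3, 3), axis=0, new_value=1))  # -> False (jumps over)
print(is_legal_move((5, 3), (3, 3), axis=1, new_value=1))  # -> True
```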

procedures for large classes of discrete combinatorial and continuous non-convex programming problems. A dual ascent procedure using first-level RLT (Level-1 RLT, or RLT1) is the basis for the algorithm.
Several years ago, Hahn and Grant [20] introduced the Level-1 RLT dual ascent procedure (DP) for solving difficult instances of the Quadratic Assignment Problem (QAP). Continuous improvements in its implementation have achieved QAP exact-solution run times that are at the leading edge for single-processor platforms. See Loiola et al. [29], Drezner et al. [12] and Hahn et al. [22, 23]. These methods have also been applied to solving practical instances of the Quadratic 3-Dimensional Assignment Problem (Q3AP).
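
The underlying QAP objective is compact enough to state in a few lines (toy data for illustration; not an instance from the cited papers):

```python
def qap_cost(F, D, p):
    """Cost of assigning facility i to location p[i], given flows F, distances D."""
    n = len(F)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

F = [[0, 3], [2, 0]]   # flow between facilities
D = [[0, 1], [5, 0]]   # distance between locations
print(qap_cost(F, D, [0, 1]))  # -> 13
print(qap_cost(F, D, [1, 0]))  # -> 17
```

The Level-1 RLT dual ascent works on a linearization of this quadratic objective; the sketch above only evaluates it.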

In this paper, we introduce a stochastic model for the S-GBPP. In most papers dealing with uncertainty, the probability distribution of the random variables is given and their expected value can then be calculated. This is not the case for the S-GBPP, where the probability distribution of the random item profit is unknown, because it is difficult to measure in practice and any assumption on its shape would be arbitrary.

To know more about these subjects see [9].
5.3 Differential Equations
This section is inspired by the beautiful class notes by Rafael Ortega in [2].
A differential equation is, first and foremost, a statement that relates a function's values with those of its derivatives. In this text we will always reduce differential equations to first-order systems, i.e., equations containing only first derivatives. Moreover, we are only interested in Ordinary Differential Equations.
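
As an example of the reduction to a first-order system (our own illustration, not from the notes): x'' = −x becomes x' = v, v' = −x, which even a basic Euler scheme can integrate.

```python
def euler_step(state, h):
    """One explicit Euler step for the system x' = v, v' = -x."""
    x, v = state
    return (x + h * v, v - h * x)

state = (1.0, 0.0)          # x(0) = 1, x'(0) = 0, so exactly x(t) = cos(t)
h = 0.001
for _ in range(1000):       # integrate up to t = 1
    state = euler_step(state, h)
print(round(state[0], 2))   # -> 0.54, close to cos(1)
```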

• the family {F_{y,H}} is continuous in its index parameters (y, H) in a wide sense. For example, {ψ_{y,H}} tends to the identity function as y → 0.
3.2. Exact separation of sample eigenvalues. We first need to quote two results of Bai and Silverstein [2, 3] on the exact separation of sample eigenvalues. Recall the ESDs (H_n) of (T_p) and y_n = p/n, and let {F_{y_n,H_n}} be the sequence of associated M.P. distributions. One should not confuse the M.P.

n), defined in Section 2.3 below, and ϕ(n, H_1, H_2) is a suitable scaling factor; see [25], pages 9–10, for its definition. The limit in the case (b) is the value of a two-parameter Hermite sheet (see Section 2.4), of order k with Hurst parameter (1 − k(1 − H_1), 1 − k(1 − H_2)) ∈ (1/2, 1)^2, at point (1, 1). Contrary to the one-parameter case, the results obtained in [25] are proved only for one-dimensional laws; neither finite-dimensional (except in the particular setting of [24]) nor functional convergence (i.e., tightness in a function space) of Hermite variations has been established so far. (In particular, in the d-parameter realm with d ≥ 2, tightness is a non-trivial issue, which has not been addressed in [25] or in the related paper [24].)

Remark. (i) Since the functions g_0, g̃ and C are bounded, the Kolmogorov criterion yields that the process Z takes its values in the space C([0, 1]). Moreover, let us note that Z has independent increments.
(ii) The assumptions are the same as in Theorems 3 and 5 in [4], except for the sequence of subdivisions and for the function φ. In [4], the assumptions are stronger because we wanted to show an almost sure asymptotic development.

electronic bottlenecks in data networks. New developments in cloud computing will compound this problem even more.
The growing demand for high-speed, low-latency data transmission has generated a need for substantially increased capacity and improved connectivity within data centres. However, current data centres, which perform all data processing based on electronic switching, are incapable of sustaining these demands into the future [3], and therefore new technological solutions are required. It has been suggested that the fundamental limits of data-centre switching, which relies on bandwidth-limited CMOS electronics, are now perhaps being reached [4]. It is believed that all-optical systems using photonic integrated circuits and highly scalable optical interconnects may provide an answer, with the promise of data rates exceeding terabits per second.

sphere to show that for a given t, in most directions, ∫ g(s) ds is close to its spherical average. (So far the proof holds for all symmetric, convex bodies.)
To show that this spherical average, ∫_{S^{n−1}} ∫ g(s) ds dσ(θ), is close to the integral of g, we identify in Perissinaki's argument the property of ℓ_p balls which ensures that the spherical average of the densities is close to g. This property is precisely the one stated in the concentration hypothesis. We remark that although the hypothesis says that most of K concentrates in a thin spherical shell of radius √(np), it does not tell us that K looks like a spherical shell. But since our initial aim is to estimate an average over the sphere, it does not matter how mass is distributed within the shell.
