Abstract. An adaptive mesh-free approach is developed to compute the lower bounds of limit loads in plane strain soil mechanics problems. There is no predefined connectivity between nodes in mesh-free techniques, and this property facilitates the implementation of h-adaptivity: nodes may be added, moved, or discarded without complex changes to the data structures involved. In this regard, the Shepard mesh-free method is used in conjunction with the nodal stress rate smoothing technique and lower bound limit analysis theory to establish a non-linear optimization problem. This problem is solved by second-order cone programming, and the result is a stress field that satisfies the lower bound requirements in a non-rigorous manner; the lack of rigour arises from the relaxation introduced during the nodal stress rate smoothing process. An error estimator is introduced via a Taylor series expansion, and by controlling the local error through a user-defined tolerance, an adaptive refinement strategy is established. To demonstrate the effectiveness of the proposed method, the procedure is applied to examples of purely cohesive and cohesive-frictional soils.
In a linear secret sharing scheme, the secret value and the shares are vectors over some finite field, and every share is the value of a given linear map on some random vector. The homomorphic properties of linear secret sharing schemes are very important for some of the main applications of secret sharing, for instance secure multiparty computation. On the other hand, linear secret sharing schemes are obtained when applying the best known techniques to construct efficient schemes, such as the decomposition method by Stinson . Because of that, it is also interesting to consider the parameters λ(Γ) and λ̃(Γ), the infima of the complexity and the average complexity, respectively, over all linear secret sharing schemes for Γ. Obviously, σ(Γ) ≤ λ(Γ). In fact, almost all known upper bounds on the optimal complexity are upper bounds on λ, and the same applies to the corresponding parameters for the average optimal complexity. Even though non-linear secret sharing schemes have been proved to be in general more efficient than linear ones [3, 6], not many examples of access structures with σ(Γ) < λ(Γ) are known.
In order to find the lower bound, we consider the dual problem, and in a similar way a lower bound on the optimal objective value of the dual problem is found. It goes without saying that the minimum value of the objective function over the feasible region of (1) is zero; by lower bound we mean a lower bound on the optimal objective value of (1), which is the same for both the primal and the dual problem.
We present a brief overview of the techniques used by Larsen and Nielsen , which rely on the information transfer technique. We also describe why information transfer does not seem to be of use for differentially private RAM lower bounds. Information transfer first builds a binary tree over Θ(n) operations, where the first operation is assigned to the leftmost leaf, the second operation to the second leftmost leaf, and so forth. Each cell probe is assigned to at most one node of the tree as follows. For a cell probe, we identify the operation performing the probe as well as the most recent operation that overwrote the cell being probed. The cell probe is assigned to the lowest common ancestor of the leaves associated with these two operations. Now fix any node of the tree and consider the subtree rooted at it. It can be shown that the probes assigned to the root of this subtree constitute the entirety of the information that can be transferred from the updates in the left subtree to answer queries in the right subtree. Consider the sequence of operations where all leaves in the left subtree write a randomly chosen b-bit string to unique array entries and all leaves in the right subtree read a unique, updated array entry. For any DS to return the correct b-bit strings asked for by the queries in the right subtree, a large amount of information must be transferred from the left subtree to the right subtree, so many probes must be assigned to the root of this subtree. Suppose that for another sequence of operations, DS assigns significantly fewer probes to the root of this subtree. Then a computational adversary can count the probes and distinguish the worst-case sequence from any other sequence, contradicting obliviousness. As a result, there must be many probes assigned to each node of the information transfer tree. Since each cell probe is assigned to at most one node, summing over the tree provides a lower bound on the number of cell probes required.
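The charging rule above can be sketched in a few lines of Python. The tree layout and function names here are hypothetical; only the accounting follows the description: each probe is charged to the lowest common ancestor of the writing and reading operations, and the per-node totals sum to a lower bound on the probe count.

```python
import math

def lca_depth(writer: int, reader: int, n: int) -> int:
    """Depth of the LCA of leaves `writer` and `reader` in a complete
    binary tree over n = 2^k leaves (root has depth 0)."""
    depth = int(math.log2(n))  # leaves sit at depth k
    # walk both leaves upward until they meet in the same node
    while writer != reader:
        writer //= 2
        reader //= 2
        depth -= 1
    return depth

def charge_probes(probes, n):
    """Charge each probe (writer, reader) to the depth of its LCA node.
    Summing these per-node counts lower-bounds the total cell probes,
    since every probe is charged to at most one node."""
    counts = {}
    for writer, reader in probes:
        d = lca_depth(writer, reader, n)
        counts[d] = counts.get(d, 0) + 1
    return counts

# Example: 4 operations; probe (0, 2) crosses the root (depth 0),
# probe (0, 1) stays inside the left subtree (depth 1).
charged = charge_probes([(0, 2), (1, 3), (0, 1)], 4)
```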
In this paper, “A Review on Lower Bounds for the Domination Number”, we make an in-depth study of graphs and related work. We also discuss the properties of lower bounds for the domination number and derive lower bounds on the distance domination number of a graph. We also present preliminary lemmas on lower bounds for distance domination, and for certain direct product graphs we obtain connections with other lower bounds on the distance domination number.
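As a minimal illustration (not taken from the survey) of one classical lower bound of this kind: since every vertex dominates itself and at most Δ neighbours, any dominating set needs at least ⌈n/(Δ + 1)⌉ vertices. The brute-force check below verifies this on a small path graph.

```python
import itertools
import math

def domination_number(adj):
    """Brute-force domination number: smallest set S with N[S] = V.
    adj maps each vertex to its list of neighbours."""
    n = len(adj)
    for size in range(1, n + 1):
        for cand in itertools.combinations(range(n), size):
            dominated = set(cand)           # each vertex dominates itself
            for v in cand:
                dominated.update(adj[v])    # ... and its neighbours
            if len(dominated) == n:
                return size
    return n

# Path on 4 vertices 0-1-2-3, maximum degree Δ = 2
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
max_deg = max(len(nbrs) for nbrs in path4.values())
bound = math.ceil(len(path4) / (max_deg + 1))   # ceil(4/3) = 2
gamma = domination_number(path4)
```

Here the bound is tight: γ(P4) = 2 = ⌈4/3⌉.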
Most of the results on convergence properties of aggregation methods are obtained for the regression and the Gaussian white noise models. Nevertheless, Catoni (1997, 2004), Devroye and Lugosi (2001), Yang (2000a), Zhang (2003) and Rigollet and Tsybakov (2004) have explored the performance of aggregation procedures in the density estimation framework. Most of them establish upper bounds for some procedure and do not deal with the optimality of their procedures. Nemirovski (2000), Juditsky and Nemirovski (2000) and Yang (2004) state lower bounds for aggregation procedures in the regression setup. To our knowledge, lower bounds for the performance of aggregation methods in density estimation are available only in Rigollet and Tsybakov (2004). Their results are obtained with respect to the mean squared risk. Catoni (1997) and Yang (2000a) construct procedures and give convergence rates w.r.t. the KL loss. One aim of this paper is to prove the optimality of one of these procedures w.r.t. the KL loss. Lower bounds w.r.t. the Hellinger distance and the L1-distance (stated in Section 3) and some results of Birgé (2004) and
This graph invariant was proposed as a structure descriptor, used in the modeling of certain features of the 3D structure of organic molecules , in particular the degree of folding of proteins and other long-chain biopolymers [3,4]. It has also found applications in a large variety of other problems, see, e.g., . Lower and upper bounds have been established for the Estrada index, see . Some other properties of the Estrada index may be found in . Here we present some easily computed lower bounds for the Estrada index.
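For context, the Estrada index of a graph with adjacency matrix A and eigenvalues λ_1, …, λ_n is EE(G) = Σ_i e^{λ_i} = tr(e^A). The sketch below (illustrative, not from the paper) approximates it via the Taylor series tr(e^A) = Σ_k tr(A^k)/k!, avoiding an explicit eigensolver.

```python
import math

def mat_mult(A, B):
    """Plain dense matrix product for small adjacency matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def estrada_index(A, terms=30):
    """Approximate EE(G) = tr(exp(A)) by summing tr(A^k) / k!."""
    n = len(A)
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0 = I
    ee, fact = 0.0, 1.0
    for k in range(terms):
        ee += sum(power[i][i] for i in range(n)) / fact  # add tr(A^k) / k!
        power = mat_mult(power, A)
        fact *= (k + 1)
    return ee

# K2 (a single edge) has eigenvalues +1 and -1, so EE(K2) = e + 1/e
K2 = [[0, 1], [1, 0]]
```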
Constant number of calls to stateless tokens. In our last result we show that there exists a functionality for which there is no protocol in the stateless hardware token model making at most a constant number of calls. To this end, we introduce the notion of an obfuscation complete oracle scheme, a variant of obfuscation tailored to the setting of hardware tokens. Goyal et al.  have shown that such a scheme can be realized under computational assumptions (refer to Section 6.2.2 in the full version). We derive a lower bound stating that a constant number of calls to the obfuscation oracle does not suffice. This can be seen as a strengthening of the impossibility result by Barak et al. , which states that at least one call to the obfuscation oracle is required. This result can then be translated to a corresponding result in the hardware token model. It holds even if the hardware is a complex stateless token (and hence remains relevant even in light of our previous results) and, more importantly, against computational adversaries. Previously known lower bounds on complex tokens were either for the case of stateful hardware [23, 24, 18] or in the information-theoretic setting [25, 38].
With this approach, we can safely add any new cuts to our set of pseudo-Boolean constraints, guaranteeing the completeness of the algorithm. However, since the cuts depend on all decision assignments made in the search tree from the root node to the current node N where the cut was generated, the constraint will not be used other than in the subtree rooted at node N. Moreover, if a conflict occurs involving the cuts generated at node N, the search cannot backtrack to a node higher than N in the search tree. The second technique for associating dependencies with cuts follows the ideas proposed in  for LPR. Since each cut is derived from the outcome of solving the LPR formulation, we can associate with each cut the same dependencies we associate with lower bound conflicts. Given the solution to the LPR formulation, the dependencies are identified as all 0-valued literals in all clauses for which the value of the slack variable is 0. Although this technique is more accurate than the first one, it is possible to achieve increased accuracy by analyzing the process associated with the identification of Gomory mixed-integer cuts.
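The dependency-identification rule just described can be sketched as follows. The data layout and function name are hypothetical (the paper does not give code); only the rule itself, collecting 0-valued literals from clauses whose slack is 0, follows the text.

```python
def cut_dependencies(clauses, assignment, slacks, eps=1e-9):
    """Collect cut dependencies from an LPR solution.

    clauses:    list of clauses, each a list of literal names
    assignment: literal name -> value in [0, 1] from the LPR solution
    slacks:     clause index -> slack value of that clause
    """
    deps = set()
    for idx, clause in enumerate(clauses):
        if abs(slacks[idx]) <= eps:              # tight clause: slack == 0
            for lit in clause:
                if abs(assignment[lit]) <= eps:  # 0-valued literal
                    deps.add(lit)
    return deps

# Example: clause 0 is tight and contains the 0-valued literal "x";
# clause 1 has positive slack and contributes nothing.
deps = cut_dependencies(
    clauses=[["x", "y"], ["y", "z"]],
    assignment={"x": 0.0, "y": 1.0, "z": 0.0},
    slacks={0: 0.0, 1: 0.5},
)
```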
Now, recall that ∀Exp+Res works by applying propositional resolution to the clauses in the complete universal expansion of a PCNF. In fact, the conjuncts of the full expansion are exactly the allowable axiom clauses. An interesting question arises: how many such clauses must be introduced as axioms? It is perhaps not too difficult to see that the smallest unsatisfiable subset of the allowable axioms has cardinality not less than the strategy size: this holds because the initial instantiations, one per axiom, encompass a complete set of responses for a winning strategy. Hence strategy size is an absolute proof-size lower bound in ∀Exp+Res.
In , Rhoades investigated the lower bounds question for the Rhaly matrices and obtained some partial results. In this paper, we return to the study of the Rhaly matrices as special cases of factorable matrices. This paper differs from  in three important respects. First, one general result is proved, and the theorems of  then follow as special cases. Second, the results of  are extended to all p > 1. Third, as an application of the general procedure developed here, we are able to provide a new proof of [7, Theorem 1] as well as to verify the conjecture that, for the weighted mean methods with p_n = (n + 1)^α,
In recent times the Bregman divergence (or Bregman distance) D_F(y, x), introduced by Bregman in , has been used as a generalized distance measure in various branches of applied mathematics, for example optimization, inverse problems, statistics and computational mathematics, especially machine learning. For an overview of the Bregman divergence and its possible applications in optimization and inverse problems we refer to [2–4]. In particular, the Bregman divergence has been used for various algorithms in numerical analysis and also for the convergence analysis of numerical methods and algorithms. Especially when doing convergence analysis it is often crucial to have lower and upper bounds on the Bregman divergence in terms of norms. In  the authors prove upper and lower bounds for expressions
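The divergence in question is D_F(y, x) = F(y) − F(x) − ⟨∇F(x), y − x⟩. A standard sanity check (our example, not from the excerpt) is F(x) = ‖x‖²/2, for which the divergence reduces exactly to ‖y − x‖²/2, the simplest case of a two-sided bound in terms of norms.

```python
def bregman(F, gradF, y, x):
    """Bregman divergence D_F(y, x) = F(y) - F(x) - <grad F(x), y - x>."""
    inner = sum(g * (yi - xi) for g, yi, xi in zip(gradF(x), y, x))
    return F(y) - F(x) - inner

# Model case: F(x) = ||x||^2 / 2, so grad F(x) = x and
# D_F(y, x) = ||y - x||^2 / 2 (half the squared Euclidean distance).
F = lambda v: 0.5 * sum(vi * vi for vi in v)
gradF = lambda v: list(v)

x, y = [1.0, 2.0], [4.0, 6.0]
d = bregman(F, gradF, y, x)   # 0.5 * (3^2 + 4^2) = 12.5
```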
Example 6 (Diet problem): A dietician wishes to mix two types of foods in such a way that the vitamin contents of the mixture contain at least 8 units of vitamin A and 10 units of vitamin C. Food ‘I’ contains 2 units/kg of vitamin A and 1 unit/kg of vitamin C. Food ‘II’ contains 1 unit/kg of vitamin A and 2 units/kg of vitamin C. It costs Rs 50 per kg to purchase Food ‘I’ and Rs 70 per kg to purchase Food ‘II’. Formulate this problem as a linear programming problem to minimise the cost of such a mixture. Solution: Let the mixture contain x kg of Food ‘I’ and y kg of Food ‘II’. Clearly, x ≥ 0, y ≥ 0. We make the following table from the given data:
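The resulting LP is: minimise 50x + 70y subject to 2x + y ≥ 8, x + 2y ≥ 10, x ≥ 0, y ≥ 0. Since the optimum of an LP is attained at a vertex of the feasible region, a solver-free corner-point sketch (illustrative, not part of the textbook solution) confirms the answer:

```python
def cost(x, y):
    """Objective: Rs 50/kg for Food I plus Rs 70/kg for Food II."""
    return 50 * x + 70 * y

def feasible(x, y):
    """Vitamin A: 2x + y >= 8; vitamin C: x + 2y >= 10; nonnegativity."""
    return x >= 0 and y >= 0 and 2 * x + y >= 8 and x + 2 * y >= 10

# Corner points of the feasible region (intersections of binding constraints)
corners = [
    (0.0, 8.0),   # x = 0 meets 2x + y = 8
    (2.0, 4.0),   # 2x + y = 8 meets x + 2y = 10
    (10.0, 0.0),  # y = 0 meets x + 2y = 10
]
best = min((c for c in corners if feasible(*c)), key=lambda c: cost(*c))
# best == (2.0, 4.0): 2 kg of Food I and 4 kg of Food II, minimum cost Rs 380
```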
bounds for sin x/x do not include each other. In fact, this obstacle can be overcome, and we can obtain better results (see (2.9)). Moreover, we can estimate the errors of the lower and upper bounds for the strengthened Jordan's inequality (see (2.12)).
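As background, the classical Jordan bounds being strengthened here are 2/π ≤ sin x / x ≤ 1 on (0, π/2]. A quick numerical check (our illustration, not the strengthened inequality from the paper) over interior sample points:

```python
import math

def jordan_holds(x):
    """Check the classical Jordan bounds 2/pi <= sin(x)/x <= 1."""
    ratio = math.sin(x) / x
    return 2 / math.pi <= ratio <= 1.0

# 100 sample points in the open interval (0, pi/2)
samples = [0.01 + 0.015 * k for k in range(100)]   # from 0.01 up to 1.495
all_hold = all(jordan_holds(x) for x in samples)
```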
The linear programming method is a widely used technique for the sizing and optimization of renewable systems. Nogueira et al.  used linear programming for the sizing and simulation of a PV–wind–battery hybrid energy system with minimum cost and high reliability. Huneke et al.  used linear programming to obtain the optimal configuration for a solar–wind–battery–diesel based power generator. The linear programming technique compares favourably with other approaches, as it improves the quality of decisions, is more flexible, and allows problems to be solved easily.
Memory lower bound of restricted reductions. Auerbach et al. initiated the study on memory-tightness, and provided general techniques helping achieve memory-tight reductions. Surprisingly, as negative results, they showed a memory lower bound of reductions from multi-challenge unforgeability (mUF) to standard unforgeability (UF) for signatures. The former security notion is defined in exactly the same way as the latter except that it gives an adversary many chances to produce a valid forgery rather than one chance. Although it is trivial to reduce mUF security to UF security tightly in both running-time and success probability, Auerbach et al. showed that some class of reductions between these two security notions inherently and significantly increase memory usage, unless they sacrifice the efficiency of the running-time. Specifically, they proved that such a reduction must consume roughly Ω(q/(p + 1)) bits of memory, where 2q is the number of queries made by an adversary and p is the number of times an adversary is run. The class of black-box reductions they treated is restricted, in the sense that a reduction R only runs an adversary A sequentially from beginning to end, and is not allowed to rewind A. Moreover, R only forwards the public keys and signing queries between its challenger and A, and the forgery made by R should be amongst the ones generated by A. This result implies that in practice, UF security and mUF security may not really be equivalent. As an open problem left by Auerbach et al., it is not clear whether this result holds when a reduction does not respect the restrictions. Moreover, this result does not rule out the possibility that there exists a memory-tight restricted reduction that directly derives mUF security from some memory-sensitive problem. Therefore, it is desirable to clarify whether there exists a memory lower bound of any natural reduction from mUF security to any common assumption.
Both the original and the new argument can trade the running time of the nondeterministic algorithm for PIT against the size of the arithmetic circuits for PERM in part (ii). However, due to the use of the implication NEXP ⊆ SIZE(poly(n)) ⇒ NEXP = MA from , the original argument does not accommodate changes to either the right-hand side or the left-hand side of (i), whereas the new argument allows us to play with both sides. On the left-hand side, the proof in  can only handle time bounds that are at least exponential. This is true even when the running time of the nondeterministic algorithm for PIT is polynomial, in which case our argument only needs the time bound on the left-hand side of (i) to be superpolynomial. On the right-hand side, the proof in  can only handle circuit sizes that are polynomial; our proof gives nontrivial results for circuit sizes ranging from linear to linear-exponential. We can further improve the parameterized version of Theorem 2.1 by slightly modifying our argument and incorporating Toda's Theorem . The strengthening and improved parameterization are captured in the following statement, which appears in . We use (N ∩ coN)TIME(τ) as a shorthand for NTIME(τ) ∩ coNTIME(τ).
Abstract. The objective of coding theory is to protect a message passing through a noisy channel. The nature of the errors introduced by a noisy channel depends on different factors, and accordingly codes need to be developed to deal with different types of errors. Sharma and Gaur  introduced a new kind of error, termed a ‘key error’. This paper presents lower and upper bounds on the number of parity-check digits required for linear codes capable of correcting such errors. An example of such a code is also provided. Keywords: parity check matrix, syndrome, standard array, solid burst error. AMS Subject Classification (2010): 94B05, 94B65.
Abstract. Some sharp bounds for the Euclidean operator radius of two bounded linear operators in Hilbert spaces are given. Their connection with Kittaneh's recent results, which provide sharp upper and lower bounds for the numerical radius of linear operators, is also established.
The bounds in the Theorem each involve three terms. The last one expresses the dependence of the estimation error on the confidence parameter δ and a model-selection penalty ln(1/(γε)) for the choice of the margin γ. Note that it generally decreases as 1/√(nm). This is not an a priori advantage of multi-task learning, but a trivial consequence of the fact that we estimate an average of m probabilities (in contrast to Ben-David, 2003, where bounds are valid for each individual task, of course under more restrictive assumptions). The 1/√(nm) decay however implies that even for moderate values of m and n the parameter ε in Theorem 9 can be chosen very small, so that the factor 1/(1 − ε) in the second term on the right of the two bounds is very close to unity.