training, data mining, economics, computational chemistry, and physics. Moreover, NSO has applications in the solution of difficult smooth problems, for instance via decompositions, dual formulations, and exact penalty functions. Like other branches of optimization, the study of nonsmooth optimization has two components: optimization algorithms and the theory of nonsmooth structures. This thesis deals with both. In nonsmooth optimization there are several major groups of methods; one is the bundle-type methods and another is the smoothing methods. Bundle methods have received more attention in the literature owing to their satisfactory numerical performance. It has been observed, however, that for large-scale problems these methods can incur significant computational overhead. In large-scale optimization, iteratively solving linear subproblems instead of quadratic subproblems can significantly reduce the computational cost. In 2003, a bundle trust-region algorithm with only linear subproblems was proposed to solve a convex stochastic problem . In this thesis we consider unstructured problems for which a linear local model is also used, so that only linear subproblems need to be solved. To date, no bundle-type method with linear subproblems has been developed in the literature for the solution of unstructured nonsmooth optimization problems.
region methods; see, e.g., [9,17,18]. Line search methods refer to a procedure in which one moves along a (descent) direction as long as a sufficient reduction in the objective is achieved. In the classical trust-region methods, on the other hand, a trial step is computed by minimizing a (quadratic) model of the objective function at the current point over a region around that point. Then, using the so-called trust-region ratio, the trial step is accepted or rejected, and the new point as well as the radius is updated accordingly. Trust-region methods have been shown to possess appropriate global and local convergence properties, and they have been widely studied in the literature; see, e.g., [9, 12, 17, 19, 24, 25].
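The accept/reject and radius-update logic just described can be sketched as follows. This is a minimal illustration using a Cauchy-point step for the subproblem, not the specific algorithms of the cited references; the constants (eta = 0.15, the 0.25/0.75 thresholds, the factor-of-two updates) are conventional textbook choices.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimize the quadratic model along -g, restricted to the trust region."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                 eta=0.15, tol=1e-8, max_iter=500):
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)        # model reduction m(0) - m(p)
        ared = f(x) - f(x + p)                   # actual reduction
        rho = ared / pred if pred > 0 else -1.0  # the trust-region ratio
        if rho < 0.25:
            delta *= 0.25                        # poor agreement: shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)  # good model, step at boundary: expand
        if rho > eta:
            x = x + p                            # accept the trial step
    return x
```

On the convex quadratic f(x) = (x1 - 1)^2 + 10(x2 + 2)^2, for instance, the iterates converge to (1, -2).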
The algorithm just mentioned is based on the assumption that an exact subproblem solution can be efficiently computed in every iteration. This is a reasonable assumption for small- to medium-scale problems, but it is intractable for large-scale optimization. Thus, we extend these ideas to a general inexact regularized Newton framework to improve the practical performance of the algorithm, especially for large-scale problems. In fact, the main contributions of this extension relate to advancing the understanding of optimal-complexity algorithms for solving the smooth optimization problem (1.1). Our proposed framework is intentionally very general; it is not a trust-region method, a quadratic regularization method, or a cubic regularization method. Rather, we propose a generic set of conditions that each trial step must satisfy that still allows us to establish an optimal first-order complexity result as well as a second-order complexity bound similar to the methods above. Our framework contains as special cases other optimal-complexity algorithms such as ARC and TRACE (see  and Chapter 3).
In the problems considered here, where there is a continuum of constraints, it is not clear how to use the method of  because the active set, being uncountable, will never be fully identified, and constructing a path on which to search for a Cauchy point would lead to infinitely many knots to test. Instead, we solve an unconstrained trust-region problem for a reduced quadratic model and project the solution of that problem onto the active set.
f(x) within a suitable neighborhood of x_k. This neighborhood is referred to as the trust region and is often represented by a ball centered at the current iterate x_k with radius ∆_k. Then a trust-region subproblem is solved, i.e.,
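As an illustration, the subproblem min g'p + (1/2) p'Bp subject to ||p|| <= ∆_k can be solved via the standard characterization p(λ) = -(B + λI)^{-1} g with λ ≥ 0. The sketch below handles only the "easy case" of the Moré–Sorensen theory and uses plain bisection; it is not tied to any particular method discussed here.

```python
import numpy as np

def tr_subproblem(g, B, delta):
    """Solve min g@p + 0.5*p@B@p s.t. ||p|| <= delta ("easy case" only).

    Uses the characterization p(lam) = -(B + lam*I)^{-1} g, lam >= 0:
    either the unconstrained minimizer lies inside the ball (lam = 0),
    or lam is chosen so that ||p(lam)|| = delta (found here by bisection,
    since the norm is monotonically decreasing in lam).  The More-Sorensen
    "hard case" is not handled in this sketch.
    """
    I = np.eye(len(g))
    lam_lo = max(0.0, -np.linalg.eigvalsh(B).min())  # B + lam*I must be positive definite
    if lam_lo == 0.0:
        p = np.linalg.solve(B, -g)
        if np.linalg.norm(p) <= delta:
            return p                                  # interior (Newton) solution
    hi = lam_lo + 1.0
    while np.linalg.norm(np.linalg.solve(B + hi * I, -g)) > delta:
        hi *= 2.0                                     # grow lam until the step fits
    lo = lam_lo
    for _ in range(200):                              # bisect on the monotone norm
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(B + mid * I, -g)) > delta:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(B + hi * I, -g)
```

With B = diag(2, 10) and g = (1, 1), a radius of 1 admits the interior Newton step (-0.5, -0.1), while a radius of 0.1 forces a boundary solution with ||p|| = 0.1.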
In summary, we have shown that a trust-region Newton method is effective for training large-scale logistic regression problems as well as L2-SVM. The method has nice optimization properties following past developments for large-scale unconstrained optimization. It is interesting that we do not need many special settings for logistic regression; a rather direct use of modern trust-region techniques already yields excellent performance. This suggests that many useful optimization techniques have not yet been fully exploited for machine learning applications.
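As a minimal illustration of this point (not the solver used in the experiments above), a generic trust-region Newton-CG method such as SciPy's `trust-ncg` can be applied almost directly to regularized logistic regression; only the objective, gradient, and a Hessian-vector product need to be supplied. The data here are synthetic and the constant C = 1 is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.where(X @ rng.normal(size=d) > 0, 1.0, -1.0)  # labels in {-1, +1}
C = 1.0                                              # loss weight (arbitrary here)

def fun(w):
    # 0.5 * ||w||^2 + C * sum_i log(1 + exp(-y_i * w^T x_i))
    return 0.5 * w @ w + C * np.sum(np.logaddexp(0.0, -y * (X @ w)))

def grad(w):
    s = expit(-y * (X @ w))                          # sigma(-y_i w^T x_i)
    return w - C * X.T @ (y * s)

def hessp(w, v):
    # Hessian-vector product v + C * X^T D X v; the Hessian itself is never formed
    s = expit(-y * (X @ w))
    return v + C * X.T @ ((s * (1.0 - s)) * (X @ v))

res = minimize(fun, np.zeros(d), jac=grad, hessp=hessp, method="trust-ncg")
```

The same structure carries over to L2-SVM with its (generalized) Hessian, which is one reason few special settings are needed.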
In this paper, a practical and efficient method for solving nonlinear equations has been presented. Like the algorithm introduced in , it does not need to solve the two-dimensional trust-region subproblem several times. Instead, it uses a curvilinear search direction similar to the one used in . The numerical results indicate that using a
In HSPM, the IVT technique enables the method to determine all intervals that contain at least one minimizer. The trusted intervals reduce unnecessary function evaluations, since the local search step is applied only on the trusted intervals found, so the same minimizer is not located repeatedly. Moreover, since a trusted interval is expected to be convex, HSPM is able to identify the convex parts of a non-convex feasible region.
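The HSPM procedure itself is not reproduced here, but the underlying IVT (intermediate value theorem) idea of bracketing minimizers can be sketched as follows; the grid size and function names are illustrative assumptions, not part of HSPM.

```python
import numpy as np

def trusted_intervals(df, a, b, n=1000):
    """Bracket the minimizers of f on [a, b] using sign changes of f' = df.

    By the intermediate value theorem, if df is negative at one grid point
    and positive at the next, the subinterval between them contains a zero
    of df where f turns from decreasing to increasing, i.e. at least one
    local minimizer.  (A sketch of IVT-style bracketing only; not the
    actual HSPM implementation.)
    """
    xs = np.linspace(a, b, n + 1)
    signs = np.sign(df(xs))
    return [(xs[i], xs[i + 1])
            for i in range(n)
            if signs[i] < 0 and signs[i + 1] > 0]
```

For f(x) = x^4 - 2x^2 on [-2.3, 2.3] this returns two short intervals, one around each of the minimizers x = ±1; a subsequent local search then needs to examine only those intervals, and each minimizer is located once.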
The difficulties of CRP arise mainly from its huge scale, even for its LP relaxations. The number of variables can reach several hundred million in one model for a large airline's monthly problem. To overcome this difficulty, two main methods have been proposed. The first is column generation, such as . It requires properties that real-world problems may not possess; for example, it assumes that the cost of a roster can be decomposed into tasks or combinations of tasks in sequence. Because the business rules are complex and mutable, this assumption is usually not satisfied, so the subproblem cannot be modeled accurately as a shortest-path problem with resource constraints. The second divides the scheduling time zone into multiple short stages, whose size is controlled by the so-called generation-and-optimization strategy, and cyclically improves the subproblems in each
 Schultz, G.A., Schnabel, R.B. and Byrd, R.H. (1985) A family of trust-region-based algorithms for unconstrained minimization with strong global convergence properties. SIAM Journal on Numerical Analysis, 22(1), 47-67.  Byrd, R.H., Schnabel, R.B. and Schultz, G.A. (1987) A
The spiral branches of the intermediate range are exact and as the singularity is approached occupy an infinitely wide range of scales between the global and corner scales. The shape in this range is inherently scale invariant. The twist of the spiral and the angles between the branches appear to be independent of the global boundary shape, again suggesting universality. It should be borne in mind, however, that the low degree of twist and the limited range of resolvable scales make precise estimates of the rotation rate and angles difficult and a definitive statement of universality impossible. Work is underway to construct a direct numerical simulation of (3.5) in the corner region, which has the potential to answer the question of universality or whether other corner shapes may be possible.
Because we found high proportions of psychological distress in both victims and controls (Papanikolaou, Adamis, Mellon, & Prodromitis, 2011), we hypothesized that the controls may have been affected by the media and the distressing images broadcast every day. However, this explanation is less likely here. Despite their powerful influence, the media can hardly rip apart trust in all of the more important organizations in so brief a time (6 months after the disaster) and destroy the civic status of entire communities. A previous study (Lyberaki & Paraskevopoulos, 2002) showed that Greeks have a low level of trust in most public institutions, such as political parties, the civil service, the government, and the parliament. Similarly, a more recent survey (Papadimitriou, 2007) of a younger Greek population (18 to 28 years old) reported that 90% did not trust members of parliament, 80% did not trust the trade unions, 76% did not trust politics, and only 38% trusted the church. In addition, the same survey reported that more than half (53%) of young Greeks are unconcerned about other people and that only 21.5% trust other people, and then only to some degree. The survey also revealed that 38% of young people would offer financial help to the victims in case of a natural disaster.
known Benders decomposition technique is applied in this paper. Accordingly, the resulting MINLP optimization problem is handled by decomposing it into two levels. The first level (the master problem) is a mixed-integer linear problem, and the second is a nonlinear subproblem. Figure 5 illustrates the proposed procedure, including its steps. At the beginning, an initial power flow is computed to find the initial voltages and reactive power outputs, along with the initial values of the Jacobian matrix. Then the iterative Benders decomposition method begins. The master problem minimizes the total expected payment function over the operating regions of the energy providers and the reactive power outputs, while the subproblem solves an AC power flow and checks the feasibility of the master problem's solution; the required updated parameters are then added to the master problem. The mathematical formulations of the master and slave problems are presented below. Further details of the Benders decomposition method are given in .
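The two-level loop can be sketched on a deliberately tiny toy LP (not the reactive-power model above, and with a continuous rather than mixed-integer master variable): the subproblem's dual multiplier generates a cut, the master re-optimizes, and the upper and lower bounds close.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: min 2*y + x  s.t.  x + y >= 1,  0 <= y <= 1,  x >= 0.
# Benders view: y is the master variable, x lives in the subproblem.

def subproblem_dual(y):
    """Dual of min{x : x >= 1 - y, x >= 0}: max pi*(1 - y) s.t. 0 <= pi <= 1."""
    res = linprog(c=[-(1.0 - y)], bounds=[(0.0, 1.0)], method="highs")
    pi = res.x[0]
    return pi, pi * (1.0 - y)

cuts, y, ub, lb = [], 0.0, np.inf, -np.inf
for _ in range(20):
    pi, sub_val = subproblem_dual(y)     # price information from the subproblem
    ub = min(ub, 2.0 * y + sub_val)      # any feasible (y, x*) gives an upper bound
    cuts.append(pi)                      # Benders cut: eta >= pi*(1 - y)
    # master: min 2*y + eta  s.t.  eta >= pi_k*(1 - y) for every stored cut,
    # each cut rewritten in <= form as  -pi_k*y - eta <= -pi_k
    A_ub = [[-p, -1.0] for p in cuts]
    b_ub = [-p for p in cuts]
    res = linprog(c=[2.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0), (0.0, None)], method="highs")
    y, lb = res.x[0], res.fun            # master value is a lower bound
    if ub - lb < 1e-9:                   # bounds have met: optimal
        break
```

Here the loop closes at the optimum y = 0 with total cost 1; in the reactive-power setting the same roles are played by the MILP master and the AC power-flow subproblem.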
5.3.2 CoNLL-2010 Experiments. The CoNLL-2010 shared task Learning to detect hedges and their scope in natural language text focused on uncertainty detection. Two subtasks were defined at the shared task: the first sought to recognize sentences that contain some uncertain language in two different domains, and the second sought to recognize lexical cues together with their linguistic scope in biological texts (i.e., the text span, in terms of constituency grammar, that covers the part of the sentence modified by the cue). The lexical cue recognition subproblem of the second task is identical to the problem setting used in this study, with the only major difference being the types of uncertainty addressed: in the CoNLL-2010 task, the biological texts contained only the epistemic, doxastic, and investigation types of uncertainty. Apart from these differences, the CoNLL-2010 shared task offers an excellent testbed for comparing our uncertainty detection model with other state-of-the-art approaches to uncertainty detection and for comparing different classification approaches. Here we present our detailed experiments using the CoNLL data sets, analyze the performance of our models, and select the most suitable models for further experiments.
We present the first attempt at a robust reformulation for solving the box-constrained stochastic linear variational inequality problem. For three types of uncertain variables, the robust reformulation of SLVI(l, u, F) can be solved as either a quadratically constrained quadratic program (QCQP) or a convex program, both of which are more tractable and can provide solutions for SLVI(l, u, F), regardless of whether F is monotone or non-monotone.
ization projection minimization problem (), and show theoretically that () is a good approximation. In Section , we develop a shrinkage-thresholding representation theory for the subproblem of () and propose a shrinkage-thresholding projection (STP) algorithm for (). The convergence of the STP algorithm is proved. Numerical results are demonstrated in Section  to show that () is promising for providing a sparsest solution of the LCP.

2 The l1 regularized approximation