
REPORT R-423 JULY 1969

COORDINATED SCIENCE LABORATORY

ON SOME THEORETICAL AND COMPUTATIONAL ASPECTS OF THE MINIMAX PROBLEM

JURAJ MEDANIC

UNIVERSITY OF ILLINOIS - URBANA, ILLINOIS


Juraj Medanic

Coordinated Science Laboratory University of Illinois

Urbana, Illinois

This work was supported in part by the Joint Services Electronics Program (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract DAAB-07-67-C-0199; in part by Air Force AF-AFOSR 68-1679; and in part by NSF Grant GK-3893.

Reproduction in whole or in part is permitted for any purpose of the United States Government.

This document has been approved for public release and sale; its distribution is unlimited.


This report contains both a brief summary of the pertinent theory of the minimax problem in finite dimensional spaces and the development of an elimination-type algorithm for the practical computation of the minimax solution. Because algorithms for finding saddle-point-type solutions are easier to implement, it is of considerable interest to know when they can be employed in minimax problems. A lemma giving sufficient conditions for the minimax to be a local saddle point is presented in the paper. However, the most interesting minimax problems do not admit saddle point solutions, and other algorithms must be sought. Two lemmas are given which guarantee the convergence of the elimination algorithm proposed here for the computational solution of the minimax problem. A shortcoming of the algorithm is that its application is restricted to minimax problems in which the max function is pseudo-convex.


1. Introduction

The minimax problem consists in determining x* (and possibly y*) such that

J(x*,y*) = min_{x∈X} max_{y∈Y} J(x,y).   (1)

Solutions of many problems in decision theory, in the theory of games and differential games, in control under uncertainty, and in operations research rest on the possibility of computing the above defined minimax solution.

The degree of difficulty is associated with the nature of the admissible subsets X and Y of the respective spaces and with the dependence of J on x and y. In decision theory and the theory of games, x and y are frequently probability distributions over appropriate sample spaces and X and Y are collections of admissible probability distributions; in differential games X and Y are usually subsets of spaces of piecewise continuous or measurable functions; in control problems involving uncertainty and in operations research problems X and Y may be subsets of finite dimensional spaces. Theoretical aspects of the problem are fairly well understood, but practical computational procedures are lacking even for the simplest case, where X and Y are subsets of the finite dimensional spaces E_r and E_s, respectively, and J(x,y) is a real valued function defined on X×Y. In this paper we review briefly the basic properties of the minimax solution and summarize the computational difficulties involved in solving minimax problems in finite dimensional spaces.


It is well known that if there exists a global saddle point (x*,y*), i.e., if J(x*,y) ≤ J(x*,y*) ≤ J(x,y*) for all (x,y)∈X×Y, then the minimax solution coincides with the saddle point solution. When this is the case, the search for the minimax solution is relatively simple, since (i) most methods of nonlinear programming may be employed to obtain the saddle point solution and (ii) by appropriate sequencing of minimizing and maximizing iteration steps both upper and lower bounds on the saddle point may be obtained and used as a stopping criterion. The existence of a global saddle point can, however, be ascertained only for the class of convex-concave functions. To justify the use of saddle point algorithms in the solution of minimax problems, it is sufficient that the minimax solution be a local saddle point of J(x,y) in X×Y. It is therefore natural and important to ask: when is a local saddle point of J(x,y) the minimax solution in X×Y? The first contribution of this paper is a lemma stating sufficient conditions for the minimax solution to be a local saddle point. If a function satisfies the conditions of the lemma, saddle point algorithms can be used to find the minimax solution. Some of the most interesting minimax problems, however, do not admit a saddle point solution. The second contribution of this paper is an algorithm which obtains the minimax solution in a class of minimax problems where the minimax solution is not a saddle point and local methods cannot be used to obtain it.
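A small numerical sketch (with an illustrative criterion, not one of the examples of this report) shows why the distinction matters: for J(x,y) = (x - y)² on [0,1]×[0,1] the minimax value is 1/4 while the maxmin value is 0, so no saddle point exists and purely local saddle-seeking iterations cannot locate the minimax solution.

```python
# Grid demonstration that minimax and maximin differ for J(x, y) = (x - y)^2
# on [0,1] x [0,1] -- hence no saddle point exists. The criterion is an
# illustrative choice, not one of the examples treated in this report.

def J(x, y):
    return (x - y) ** 2

grid = [i / 100.0 for i in range(101)]  # 0.00, 0.01, ..., 1.00

# minimax: min over x of max over y
minimax = min(max(J(x, y) for y in grid) for x in grid)
# maximin: max over y of min over x
maximin = max(min(J(x, y) for x in grid) for y in grid)

print(minimax)  # 0.25, attained at x* = 1/2
print(maximin)  # 0.0
```

Since minimax > maximin here, no sequencing of local minimizing and maximizing steps can settle on a single point.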

Applications

Many fields of system theory utilize, in the design and planning stage, solutions based in some manner on the minimax philosophy. For the purposes of this paper it is sufficient to point out some practical fields of application. The first two problems are discussed in the book by Danskin [1] and have been selected because they represent a class of realistic problems and also because the desired solution is truly a minimax (more precisely, maxmin) solution. The third example is not a true minimax problem but represents a common approach to the design of some control systems in the presence of uncertainty.

Consider first the "weapons allocation" problem. Suppose a set of n weapon systems is given and is to be used in the possible counterattack of n different targets. The amount of the i-th type of system (measured as a proportion of the budget) is x_i and its effectiveness against the specified i-th target is b_i. The value of the i-th target on a relative scale is v_i. It is assumed that the defender of the targets will strike first, with full knowledge of the system x = (x_1,...,x_n) and with ample time to develop an optimal counter against the selected system x. If he applies y_i units of attack to the i-th weapons system, the original number of weapons is reduced to x'_i = x_i e^{-a_i y_i}. The damage the residual quantity x'_i can do to the i-th target in the counterattack is v_i(1 - e^{-b_i x'_i}), and it is supposed that the damage adds from target to target. Then the total damage to the enemy defending the targets, after the enemy has attacked and the residual weapons have been used to counterattack, is

J(x,y) = Σ_{i=1}^n v_i (1 - exp(-b_i x_i e^{-a_i y_i})).   (1)

The optimal choice for x is naturally given by the maxmin solution

max_{x∈X} min_{y∈Y} Σ_{i=1}^n v_i (1 - exp(-b_i x_i e^{-a_i y_i}))   (2)

under the constraints

Σ_{i=1}^n x_i = c_x,  Σ_{i=1}^n y_i = c_y,  x_i ≥ 0,  y_i ≥ 0,   (3)

where c_x and c_y are constants.
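For orientation, the maxmin allocation can be sketched numerically for the smallest nontrivial case, n = 2, where the budget constraints leave one free variable on each side; all parameter values below are invented for illustration.

```python
import math

# Sketch of the maxmin "weapons allocation" criterion (2)-(3) for n = 2,
# so that the budget constraints reduce each side to one free variable.
# The parameter values below are made up for illustration.
a = [1.0, 2.0]   # attack effectiveness against each weapon system
b = [2.0, 1.0]   # weapon effectiveness against each target
v = [1.0, 3.0]   # relative target values
cx, cy = 1.0, 1.0

def J(x, y):
    # total damage: sum_i v_i (1 - exp(-b_i x_i e^{-a_i y_i}))
    return sum(v[i] * (1.0 - math.exp(-b[i] * x[i] * math.exp(-a[i] * y[i])))
               for i in range(2))

grid = [i / 50.0 for i in range(51)]  # fractions 0, 0.02, ..., 1.0

# maxmin: the x-player commits first, the y-player answers optimally
best_x, maxmin = None, -float("inf")
for x1 in grid:
    x = (x1 * cx, (1.0 - x1) * cx)
    worst = min(J(x, (y1 * cy, (1.0 - y1) * cy)) for y1 in grid)
    if worst > maxmin:
        best_x, maxmin = x, worst

print(best_x, maxmin)
```

The inner minimum is the attacker's best response; the outer maximum is the defender's guaranteed damage, exactly the maxmin structure of (2).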

Consider next the "residual value of a target" problem. Suppose now that one target is defended by x_i defense units of the i-th type of weapons and attacked by y_i attack units of the enemy's i-th type of weapons. The quantity e^{-k_i x_i/y_i} is the probability that an individual attack unit of the i-th type gets through, the ratio x_i/y_i being the amount of defense with the i-th type weapon against an individual (enemy's) i-th unit and k_i the effectiveness of such defense; a_i is the probability that a surviving attack unit destroys the target. The quantity 1 - a_i e^{-k_i x_i/y_i} is then the probability that the target survives the attack by a single unit, and finally (1 - a_i e^{-k_i x_i/y_i})^{y_i} is the probability that the target survives the attack by weapons of the i-th type. The total probability that the target survives the attack by all available types of weapons, called the residual value of the target, is

J(x,y) = Σ_{i=1}^n v_i (1 - a_i e^{-k_i x_i/y_i})^{y_i},  Σ_{i=1}^n x_i = c_x,  Σ_{i=1}^n y_i = c_y,  x_i ≥ 0, y_i ≥ 0.   (4)

The problem is again one in which the player selecting x constructs his defense first and the player selecting y moves in full knowledge of what the x-player has selected. Hence, once again, a true maxmin problem.
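The residual-value payoff is straightforward to evaluate; a hedged sketch (all parameter values invented, and with the convention that an undefended-against attack type with y_i = 0 leaves the target untouched) is:

```python
import math

# Sketch of the residual-value criterion: payoff of the form
# J(x, y) = sum_i v_i (1 - a_i * exp(-k_i x_i / y_i))^{y_i}.
# All numerical values here are made up for illustration.
v = [1.0, 2.0]
k = [3.0, 5.0]
a = [0.8, 0.6]

def residual_value(x, y):
    total = 0.0
    for vi, ki, ai, xi, yi in zip(v, k, a, x, y):
        if yi <= 0.0:          # no attack units of this type: target untouched
            total += vi
        else:
            total += vi * (1.0 - ai * math.exp(-ki * xi / yi)) ** yi
    return total

# More defense of a given type raises the survival probability:
weak = residual_value((0.1, 0.1), (1.0, 1.0))
strong = residual_value((1.0, 1.0), (1.0, 1.0))
print(weak < strong)  # True
```

The monotonicity check reflects the structure of the criterion: increasing x_i shrinks e^{-k_i x_i/y_i} and hence raises each survival factor.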

Many related and interesting problems can be found in Danskin [1]. All are distinguished by the specific form of J(x,y), which is separable with respect to the pairs (x_i, y_i). That is, in general, the criterion has the form

J(x,y) = Σ_{i=1}^n J_i(x_i, y_i),   (5)

which has allowed Danskin to analyze the features of the solution by extensive application of a lemma due to Gibbs. Although Danskin emphasizes the theoretical aspects of the problem, some examples have been solved by hand, but no indication is given of an existing computer algorithm. Indeed, it seems that outside of an outright search procedure, or analytic calculation of the max function, no organized way of computing the maxmin solution is made available. However, quantitative solutions of realistic problems can only be obtained by computerized methods. Moreover, even qualitative solutions are hard to analyze in the absence of specific properties of J(x,y).

In control theory, the design of compensating or control circuits for linear control systems is a classical and fairly developed field. Many different criteria can be devised and are employed in accepted synthesis procedures. The minimax criterion (and its modifications) also possesses advantages if there is some uncertainty concerning the values of plant parameters, as in Fig. 1, where the time constant T of the compensating circuit and the overall gain constant K are adjustable parameters. On the other hand, the motor time constant T_m, the damping factor ξ, and the natural frequency ω_n of the driving servo may be uncertain parameters varying within specified bounds (for example, if the same compensating system is to be used on a batch of servo units). Given fixed values for T_m, ξ, and ω_n, a frequently used design criterion is to minimize the integral squared error,

J(T,K,T_m,ξ,ω_n) = (1/2πj) ∫_{-j∞}^{j∞} E(s)E(-s) ds   (6)

where in this problem

E(s) = [T_m s³ + (2ξω_n T_m + 1)s² + (ω_n² T_m + 2ξω_n)s + ω_n²] / [T_m s⁴ + (2ξω_n T_m + 1)s³ + (ω_n² T_m + 2ξω_n)s² + (ω_n² + TK)s + K].   (7)

Given only bounds for T_m, ξ, and ω_n, a feasible modification of the design criterion is to select values T* and K* such that

J(T*,K*,T_m*,ξ*,ω_n*) = min_{[T,K]} max_{[T_m,ξ,ω_n]} J(T,K,T_m,ξ,ω_n).   (8)

In case the minimax solution to problems with uncertainty is judged to be too pessimistic, modified versions such as the performance sensitivity [2] or segment approach [3] may be used, leading to more realistic designs. The characteristic feature of these designs remains, however, the presence of a computational minimax problem.

In this paper, a method for solving the minimax problem based on the contraction of the admissible domain of minimizing parameters to an arbitrarily small neighborhood about the minimax value will be presented. Section 2 is devoted to the presentation of the theoretical aspects of the minimax problem and the justification of the algorithm. The algorithm itself is presented in Section 3, and the problems discussed in this section are solved in Section 4.

2. Theory

Consider a real valued function J(x,y) defined on the subsets X and Y of the respective finite dimensional spaces E_r and E_s. It is assumed that

(a) X and Y are closed, bounded (hence compact) and convex, and that

(b) J(x,y), together with the partial derivatives ∂J/∂x, is jointly continuous in x and y.

The solution of the minimax problem consists in obtaining x* and y* such that

J(x*,y*) = min_{x∈X} max_{y∈Y} J(x,y).   (2.1)

In this paper an algorithm will be presented which solves the problem under the additional assumption

(c) J(x,y) is convex in X for all y∈Y.

Let the max function be defined by

F(x) = max_{y∈Y} J(x,y)   (2.2)


and let the set of maximizing points in Y be denoted by Y(x) and called the answering set. The minimax problem, therefore, consists in finding the minimum in X of F(x). All difficulties encountered in the solution of the minimax problem are related to the nature of F(x). Let us therefore summarize the basic properties of F(x) in the form of a lemma:

Lemma 1. The max function F(x) is continuous in X and possesses at any x∈X a directional derivative D_γF(x) in the arbitrary direction characterized by the unit direction vector γ. The directional derivative is upper semi-continuous in the pair (γ,x), continuous in γ alone for all x∈X, and given by

D_γF(x) = max_{y∈Y(x)} (∂J/∂x)(x,y)·γ.   (2.3)

The answering set Y(x) is upper semi-continuous with respect to inclusion.
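As a quick numerical sketch of (2.3) (with an illustrative criterion, not one from this report), take J(x,y) = (x - y)² with scalar x and Y = {0, 1}. At x = 0.5 the answering set is {0, 1}, and the one-sided derivative of F in the direction γ = +1 is the maximum over the answering set of 2(x - y)γ, namely 1, which a finite difference confirms.

```python
# Finite-difference check of the directional-derivative formula (2.3),
# D_gamma F(x) = max over y in Y(x) of dJ/dx(x, y) * gamma,
# for the illustrative choice J(x, y) = (x - y)^2, Y = {0, 1}.

Y = [0.0, 1.0]

def J(x, y):
    return (x - y) ** 2

def dJdx(x, y):
    return 2.0 * (x - y)

def F(x):
    return max(J(x, y) for y in Y)

def answering_set(x, tol=1e-12):
    fx = F(x)
    return [y for y in Y if abs(J(x, y) - fx) < tol]

x, gamma, h = 0.5, 1.0, 1e-6
formula = max(dJdx(x, y) * gamma for y in answering_set(x))
finite_diff = (F(x + gamma * h) - F(x)) / h

print(formula)      # 1.0
print(finite_diff)  # approximately 1.0
```

Note that F is not differentiable at x = 0.5 (the answering set is not a singleton), which is exactly the situation (2.3) is designed to handle.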

Continuity of F(x) and upper semi-continuity of Y(x) follow from the continuity of J(x,y) and the definition of F(x) [4]. The existence of the directional derivative follows from the continuity of F(x). Upper semi-continuity of D_γF(x) in the pair (x,γ), as well as the continuity in γ alone, have been shown by Danskin. The existence of the minimax solution follows from the properties of F(x):

Lemma 2. There exists a global minimum of F(x) in X.

Existence of the global minimum follows by the theorem of Weierstrass from the continuity of F(x) and the compactness of X.


To characterize the local minima of F(x), let the directional derivative be used to make a first order expansion of F(x) about some nominal point x̄ in the direction γ∈Γ(x̄), where Γ(x̄) is the cone of admissible unit direction vectors at x̄∈X:

F(x̄ + γh) = F(x̄) + D_γF(x̄)·h + o(h)   (2.4)

where h is the step size, h = ||x - x̄||, x = x̄ + γh. For h sufficiently small the first order approximation is valid for all γ∈Γ(x̄). In order that the point x̄ be a local minimum of F(x) it is necessary that an arbitrarily small step in any direction does not further decrease the value of the max function, i.e.,

F(x̄ + γh) - F(x̄) ≥ 0,  for all γ, ||γ|| = 1,   (2.5)

and in particular

min_{γ∈Γ(x̄)} [F(x̄ + γh) - F(x̄)] ≥ 0.   (2.6)

In view of the definition of the directional derivative (2.3), the expansion (2.4) and the definition of h, this leads to the following necessary condition for local minimax solutions:

Lemma 3. In order that x* be a local minimax solution it is necessary that

min_{γ∈Γ(x*)} max_{y∈Y(x*)} (∂J/∂x)(x*,y)·γ ≥ 0.   (2.7)


This condition has been obtained in various forms in [1,4,5,6]. In case the admissible domain X is given by a set of inequality constraints which satisfy the Kuhn-Tucker constraint qualification [7], the appropriate necessary condition has recently been obtained by Bram [8]:

Lemma 4. In order that x* be the local minimax solution of J(x,y) under the set of inequality constraints φ(x) ≤ 0 satisfying the Kuhn-Tucker constraint qualification, it is necessary that there exist a vector V ≥ 0 such that

min_γ max_{y∈Y(x*)} ((∂J/∂x)(x*,y) + V·(∂φ/∂x))·γ ≥ 0.   (2.8)

Moreover, if φ_i(x*) < 0, then V_i = 0.

The necessary conditions in Lemmas 3 and 4 tend to draw a parallel between the minimax problem and the minimization problems of nonlinear programming. Formally, the basic difference between the two is that in minimization problems F(x) is usually continuous and continuously differentiable, whereas in the minimax problem the most interesting case occurs when it is only continuous. The necessary conditions in minimax problems are thereby more difficult to employ constructively and, in particular, the multiplier theorem is not as revealing as in minimization problems. Additional computational burden and principal difficulties are involved in the solution of minimax problems because F(x) is not explicitly known, and instead of an r-dimensional search in X one must carry out an (r+s)-dimensional search in X×Y.†

† It is for this reason that both the necessary conditions and the expression for D_γF(x) are stated in terms of the function J(x,y).


It is not difficult to see that local methods based on recursive relations of the type

x_{n+1} = x_n + γ_n h_n,  y_{n+1} = y_n + η_n k_n,   (2.9)

such as gradient and Newton-Raphson type algorithms (e.g., the algorithm in [9]), may be used to find the minimax solution, but only if it is at the same time a local saddle point in X×Y, i.e., if at least in the neighborhood of (x*,y*) the saddle point condition is satisfied:

J(x*,y) ≤ J(x*,y*) ≤ J(x,y*),  for all (x,y)∈N(x*,y*).   (2.10)

A new result in terms of a sufficient condition for the minimax solution to be a local saddle point is given by the following lemma:

Lemma 5. If for all x∈X the answering set Y(x) is convex, then there exists a local saddle point (x*,y*) which is the minimax solution of J(x,y) in X×Y.

The proof is given in the Appendix. It is interesting to note the relation of the above lemma to the saddle point theorem of Ky Fan (reduced to finite dimensional spaces) [11]. Let, analogous to Y(x), X(y) be the set of points x∈X which minimize J(x,y) for a given y∈Y. Then,

Lemma 6. If for all x∈X and y∈Y the answering sets Y(x) and X(y) are convex, then there exists a pair (x*,y*)∈X×Y which is a global saddle point of J(x,y) in X×Y, i.e., satisfies

J(x*,y) ≤ J(x*,y*) ≤ J(x,y*),  for all (x,y)∈X×Y.   (2.11)

It is well known that when (2.11) holds, min max J(x,y) = max min J(x,y) = J(x*,y*). Consequently, in view of Lemma 5, omitting the convexity requirement on X(y) has the effect of reducing the minimax solution from a global saddle point to a local saddle point.

In particular, convexity of the set Y(x) is assured if J(x,y) is concave in y, and the above lemmas then indicate that the minimax is at least a local saddle point. If concavity cannot be assumed, the solution is generally not a saddle point, and algorithms based on local properties of J(x,y) only will, in general, not converge. To the author's best knowledge, two algorithms have been proposed for the solution of the general minimax problem in finite dimensional spaces. The first [2] is based on the proper sequencing of minimizing and maximizing iterations, the second [6] on the utilization of local maxima in Y and a generalized definition of the gradient of F(x). Both algorithms are, however, best suited for the type of problems analyzed by Damjanov [13], where the set Y is a finite set. Both algorithms have extensive computational requirements, Heller's having the advantage that most of the tasks required can be performed by standard routines for global maximization and linear and quadratic programming. Among the weaker points of both algorithms are the stopping criteria for estimating the distance from the current position to the minimax solution.

In this paper we present an algorithm for an important subclass of minimax problems which do not satisfy the condition in Lemma 5. The proposed algorithm may be shown to converge to the minimax solution in the special case when J(x,y) is convex in X (assumption (c)). The use of the convexity property allows great simplifications in computational requirements. Moreover, the nature of the algorithm, which consists of reducing the domain X to an arbitrary neighborhood of the minimax point x*, inherently eliminates the difficult question of estimating the distance to the minimax solution. In this section two lemmas are given which guarantee the convergence of the proposed algorithm to the minimax solution.

Denote X by X_0 and consider an arbitrary x_n∈X_n, where X_n is a subset of X_0. Maximize J(x_n,y) to obtain y_n such that

F(x_n) = J(x_n,y_n) = max_{y∈Y} J(x_n,y).   (2.11)

Expand J(x,y_n) about x_n and use the convexity assumption to obtain

J(x,y_n) ≥ J(x_n,y_n) + (∂J/∂x)(x_n,y_n)(x - x_n),  for all x∈X.   (2.12)

Define the set

H_n(y_n) = {x : (∂J/∂x)(x_n,y_n)(x - x_n) ≤ 0}   (2.13)

and define the recursive relation

X_{n+1} = H_n(y_n) ∩ X_n,  x_n∈X_n,  n = 0,1,... .   (2.14)

The following lemma justifies the use of the elimination algorithm:

Lemma 7. The sequence {X_n} defined by (2.14) contracts to the minimax solution x*∈X.


The following modification of the above contraction process is of practical interest. Suppose X_n is a subset of X_0 obtained by the modified process and let x_n∈X_n. Find any y_k∈Y such that

J(x_n,y_k) > J(x_{n-1},y_{n-1}),  x_{n-1}∈X_{n-1}.   (2.15)

If such a y_k does not exist, take y_k = y_n, where y_n is given by (2.11). Define

H_n(y_k) = {x : (∂J/∂x)(x_n,y_k)(x - x_n) ≤ 0}   (2.16)

and redefine the recursive relation as

X_{n+1} = H_n(y_k) ∩ X_n.   (2.17)

Lemma 8. The sequence {X_n} contracts to the minimax solution x*∈X.

The proofs are given in the Appendix.

3. Algorithm

Construction of an algorithm based on Lemmas 7 and 8 consists in resolving the following tasks: (a) maximization over Y to obtain Y(x_n), (b) reduction of X_n, (c) selection of the next nominal point x_{n+1}, and (d) determination of a viable stopping criterion. The following algorithm was developed and applied to solve the examples in the following section. The basic steps of the algorithm are presented first, and the modified algorithm actually constructed is then described. In order to present the basic steps, it is assumed that Y(x_n) contains


only one element, or that only one element from Y(x_n) is utilized in the algorithm. Hence, at the beginning of the n-th elimination the reduced domain X_n is defined by the set of linear constraints

g_i^T x ≤ g_i^T x_i,  i = 1,...,n+m-1,   (3.1)

where for i = 1,...,m the inequalities correspond to the constraints originally defining X, while the rest have been added in the previous n-1 eliminations. The original constraints are specified by input data which give g_i and x_i, i = 1,...,m, while for i = m+1,...,n+m-1, x_i is the nominal point in the corresponding elimination, y_i is an element of Y(x_i), and

g_i = (∂J/∂x)(x_i,y_i).   (3.2)

Let x_n be the nominal point obtained in the (n-1)-th elimination. Maximize J(x_n,y) to obtain y_n∈Y(x_n) and find the g_n which defines the set H_n(y_n) by the linear inequality

g_n^T x ≤ g_n^T x_n.   (3.3)

Contraction of X_n to X_{n+1} consists in adjoining (3.3) to the set of constraints (3.1).

To select x_{n+1}∈X_{n+1}, observe that all points on the ray

x = x_n - λ g_n,  λ > 0,   (3.4)

belong to X_{n+1} for λ sufficiently small that none of the constraints in (3.1) is violated. It is proposed that the nominal point be selected as


the midpoint between x_n and the first intersection of the ray (3.4) with the constraint set (3.1). If λ_i denotes the value of λ at which the ray crosses the i-th constraint hyperplane, the value of λ corresponding to the point of intersection is

λ_n = min_i λ_i,   (3.5)

and λ_n > 0. Hence, the new nominal point is

x_{n+1} = x_n - ½ λ_n g_n.   (3.6)

If, as assumed, the admissible domain X = X_0, defined by (3.1) for i = 1,...,m, is bounded and the point x_0 is chosen in the interior of X_0, the same properties will hold for x_n and X_n, and a meaningful elimination is performed in each iteration whenever g_{n+1} ≠ 0. Degeneracy may occur if in some iteration g_{n+1} = 0. To overcome this it is sufficient to reselect x_{n+1} by changing the step size to ¼λ_n g_n instead of ½λ_n g_n.
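The interplay of the cut (3.3) and the half-step rule (3.6) is easiest to see in one dimension, where X_n is an interval. The following is a minimal sketch on an illustrative problem (not one of the examples of this report): J(x,y) = (x - y)² with X = [0,2] and Y = {0,1}, for which the max function F(x) = max(x², (x-1)²) attains its minimum at x* = 1/2 without being differentiable there.

```python
# One-dimensional sketch of the elimination algorithm: the admissible set
# X_n is an interval, the cut (3.3) keeps the halfspace on which the
# supporting linear bound (2.12) can still decrease, and the next nominal
# point is taken halfway to the boundary, as in (3.6).
# Illustrative problem (not from this report): J(x, y) = (x - y)^2,
# X = [0, 2], Y = {0, 1}; F(x) = max(x^2, (x-1)^2) has its minimum at 1/2.

Y = [0.0, 1.0]

def J(x, y):
    return (x - y) ** 2

def dJdx(x, y):
    return 2.0 * (x - y)

lo, hi = 0.0, 2.0            # current interval X_n
x = 0.5 * (lo + hi)          # interior starting nominal point

for _ in range(60):
    y_n = max(Y, key=lambda y: J(x, y))      # maximize over Y -> y_n
    g = dJdx(x, y_n)                         # gradient defining the cut
    if g > 0:                                # H_n = {x' : g (x' - x) <= 0}
        hi = min(hi, x)
        lam = (x - lo) / g                   # ray x - lam*g meets boundary
    elif g < 0:
        lo = max(lo, x)
        lam = (hi - x) / (-g)
    else:
        break                                # stationary: x is optimal
    x = x - 0.5 * lam * g                    # half step, as in (3.6)

print(round(x, 4))   # approaches 0.5, the minimizer of the max function
```

The interval [lo, hi] plays the role of X_n and keeps shrinking around x*, so the interval length itself bounds the distance to the minimax solution, which is exactly the stopping-criterion advantage claimed for the elimination scheme.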

The step size between x_n and x_{n+1} is selected as the basic stopping criterion. The procedure is interrupted when the step size is below a prespecified bound q, i.e., when

½ λ_n ||g_n|| < q.   (3.7)

Before the search is completely terminated, an additional stopping criterion is applied which contains an estimate of the hypervolume of X_n. The following criterion† was used in solving the examples in the following section. If after N eliminations (3.7) becomes satisfied, solve the

† A different criterion is used in Ref. [15] and has been applied with success in [14].


following nonlinear programming problem: maximize

S = ||x - x_n||   (3.8)

under the set of constraints (3.1) and (3.3), written in matrix form as

Gx ≤ z.   (3.9)

Let

S_0 = max_{x∈X_n} ||x - x_n||.   (3.10)

Stop if

S_0 ≤ d.   (3.11)

If not, take a step, basically in the direction connecting x_n to the maximizing point x̄ of (3.8), but modified so as to ensure that the new nominal point x_{n+1} is not on the boundary of X_n. In the algorithm used here this was achieved by taking x_{n+1} to be a convex combination of the three points (x_{n-1}, x_n, x̄). Continue the basic elimination procedure.

Modifications made in the actual implementation of the algorithm relate to the maximization over Y and were induced by the result of Lemma 8. The procedure used to maximize was to replace the set Y by a grid of points covering Y. The step of the grid is adjustable, and a finer grid is used when maximizing for the first time. The coarser grid is used in all subsequent maximizations whenever the grid search produces a point y_k satisfying J(x_n,y_k) > J(x_{n-1},y_{n-1}), as in (2.15).


In general, the grid search, that is, maximization over Y, will produce either the set Y(x_n) or, with the coarse step, the set Ỹ(x_n) of points satisfying (2.15). In any case, the obtained set need not be a singleton, and the following modification of the algorithm was made in order to utilize the possibility of greater elimination. For every y_k∈Y(x_n) find the corresponding λ_k given by (3.5), where now g_k = g_n(y_k) = (∂J/∂x)(x_n,y_k); find the step sizes {q_k}, q_k = ½ λ_k ||g_n(y_k)||. Let g̃_n and λ̃_n be the values corresponding to the maximal step size in {q_k}, and find the new nominal point from (3.6) as before. Contraction of X_n to X_{n+1} now consists in adjoining to (3.1) the set of constraints

g_n(y_k)^T x ≤ g_n(y_k)^T x_n,  y_k∈Y(x_n),   (3.12)

except those hyperplanes in (3.12) which would make x_{n+1} lie outside of X_{n+1}.

The general flow diagram of the elimination algorithm is presented in Figure 2.

4. Solution of Examples

The solution of minimax problems by the algorithm presented in the previous section demands only that a subroutine supplying J(x,y) and (∂J/∂x)(x,y) be added to the basic program. The application is now illustrated on the examples cited in the Applications section.

The constraint set in the first two examples contains an equality constraint

Σ_{i=1}^n x_i = c_x,   (4.1)

and the best way to handle the constraint in these examples is to use it to reduce the dimensionality of the space of x, i.e., to place

x_n = c_x - Σ_{i=1}^{n-1} x_i.   (4.2)

The problem of finding the maxmin solution, rewritten in the form of a minimax problem, then takes the form

J(x,y) = -Σ_{i=1}^n v_i (1 - exp(-β_i x_i e^{-α_i y_i}))   (4.3)

x_i ≥ 0,  i = 1,...,n-1   (4.4)

c_x - Σ_{i=1}^{n-1} x_i ≥ 0   (4.5)

and the gradient has the form

∂J/∂x_i = -v_i β_i e^{-α_i y_i} exp(-β_i x_i e^{-α_i y_i}) + v_n β_n e^{-α_n y_n} exp(-β_n x_n e^{-α_n y_n}),  i = 1,...,n-1,   (4.6)

where x_n in (4.3) and (4.6) is a dependent variable defined by (4.2). The maximization over Y was handled by prescribing a "grid" of points and selecting the maximum of J(x,y) for fixed x over the grid, refining the step of the grid as necessary in accordance with the modification of the algorithm discussed in the previous section. For a specific example with the numerical values n = 4,


with target values v_i, attack effectiveness coefficients α_i, and weapon effectiveness coefficients β_i, i = 1,...,4 [the tabulated numerical values are illegible in this copy], and for c_x = c_y [value illegible], the final result in terms of the coordinates of the last nominal point was

x_1 = .0000,  x_2 = .1891,  x_3 = .2942,  x_4 = .5167   (4.7)

and was attained in 22 iterations. The final function value was J = -2.23525 and the radius of the constraint sphere, i.e., the bound S_0, was .00717. Therefore, the minimax solution is within the sphere of radius S_0 = .00717 centered at the last nominal point.
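Since the algorithm requires only a user-supplied subroutine for J and ∂J/∂x, it is prudent to verify the reduced gradient (4.6) against finite differences of (4.3) before running the search. A hedged sketch follows; the parameter values are invented, as the published table for this example is illegible in the source copy.

```python
import math

# Finite-difference check of the reduced gradient (4.6) for the criterion
# (4.3) with x_n eliminated through (4.2). Parameter values are made up;
# the published values for this example are illegible in the source copy.
n = 4
v = [1.0, 2.0, 3.0, 4.0]
alpha = [1.0, 2.0, 2.0, 1.0]
beta = [2.0, 3.0, 2.0, 1.0]
cx = 1.0

def J(xfree, y):
    x = list(xfree) + [cx - sum(xfree)]          # (4.2): x_n is dependent
    return -sum(v[i] * (1.0 - math.exp(-beta[i] * x[i] * math.exp(-alpha[i] * y[i])))
                for i in range(n))

def grad(xfree, y):
    x = list(xfree) + [cx - sum(xfree)]
    def term(i):
        return v[i] * beta[i] * math.exp(-alpha[i] * y[i]) * \
               math.exp(-beta[i] * x[i] * math.exp(-alpha[i] * y[i]))
    return [-term(i) + term(n - 1) for i in range(n - 1)]   # (4.6)

xf = [0.2, 0.3, 0.1]
y = [0.4, 0.1, 0.3, 0.2]
h = 1e-7
for i in range(n - 1):
    xp = list(xf); xp[i] += h
    fd = (J(xp, y) - J(xf, y)) / h
    print(abs(fd - grad(xf, y)[i]) < 1e-4)   # True for each component
```

The second term in each gradient component comes from the dependence of x_n on the free variables through (4.2), which is why it appears with the opposite sign.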

In the second example, the equality constraint was handled in the same way; the relevant expressions defining the minimax problem have the form

J(x,y) = -Σ_{i=1}^n v_i (1 - a_i e^{-k_i x_i/y_i})^{y_i}   (4.8)

x_i ≥ 0   (4.9)

c_x - Σ_{i=1}^{n-1} x_i ≥ 0   (4.10)

∂J/∂x_i = -v_i k_i a_i e^{-k_i x_i/y_i} (1 - a_i e^{-k_i x_i/y_i})^{y_i - 1} + v_n k_n a_n e^{-k_n x_n/y_n} (1 - a_n e^{-k_n x_n/y_n})^{y_n - 1},  i = 1,...,n-1,   (4.11)

where again x_n in (4.8) and (4.11) is a dependent variable given by (4.2). For the example with the numerical values n = 4, c_y = 1, c_x = 2, v_1 = 1, v_2 = 2, v_3 = 3, v_4 = 4, k_2 = 5, k_3 = 7, k_4 = 11 [the remaining tabulated values of k_1 and the a_i are illegible in this copy],
the result in terms of the coordinates of the final nominal point was

x_1 = .71352,  x_2 = .38893,  x_3 = .48833,  x_4 = .40922

and was attained in 102 iterations. The final function value was J = -9.960 and the radius of the constraint sphere was S_0 = .8117 × 10^{-?} < d = 10^{-?} [the exponents are illegible in this copy]. In both examples, as well as others with different data, it was found that the time consuming stage of the algorithm is the maximization, which occurs (a) when the set Y(x) is sought and (b) when the second stopping criterion has to be applied. As indicated, (a) was handled by a grid with varying grid step, while (b) was handled by Rosenbrock's rotating coordinate algorithm, modified for use on the CDC 1604 by Heller [6]. In the second example the need to perform (b) arose in approximately 10% of the iterations, while the need to perform (a) occurs in every iteration. One can therefore see the advantages of the time-saving modification implemented in the algorithm and based on Lemma 8.

In the third example, standard tables were used to compute the explicit expression for J(T,K,T_m,ξ,ω_n) with E(s) given by (7). It was

observed, however, that the obtained functional was not convex in the pair (T,K) for all points (T_m,ξ,ω_n). Therefore a mapping

β = TK   (4.12)

was employed and the space of minimizing parameters transformed into

x = [K, β]^T,   (4.13)

with the result that J(x,y) was convex in x for all y∈Y. The expression for J(x,y) was of the form

J = δ_1/Δ + δ_2/Δ   (4.14)

where

δ_1 = c_3²(-d_0² d_3 + d_0 d_1 d_2) + (c_2² - 2c_1 c_3) d_0 d_1 d_4

δ_2 = (c_1² - 2c_0 c_2) d_0 d_3 d_4 + c_0²(-d_1 d_4² + d_2 d_3 d_4)   (4.15)

Δ = 2 d_0 d_4 (-d_0 d_3² - d_1² d_4 + d_1 d_2 d_3)

and

c_0 = ω_n²,  c_1 = ω_n² T_m + 2ξω_n,  c_2 = 2ξω_n T_m + 1,  c_3 = T_m,

d_0 = K,  d_1 = ω_n² + β,  d_2 = ω_n² T_m + 2ξω_n,  d_3 = 2ξω_n T_m + 1,  d_4 = T_m,   (4.16)

"N

j

^

j

from which it is simple to find ^ and . The following bounds were used for the variables:

400 < K < 5 0 0 160 < Y < 300 g < oo < 11 — n — .7 <

Î

< .8 The solution .09 < T < .11 . — m — K* = 500 T* = .32

was obtained in 54 iterations, with the final value J = .3283 and the radius of the constraint sphere S_0 < 5 × 10^{-3}. The envelopes of the step responses corresponding to various (T_m,ξ,ω_n) for K*, T* are not presented, since nothing essentially new can be visualized from them aside from the usual property of the minimax solution, i.e., that the dispersion of the output signal at any instant of time is less than with other pairs (K,T).
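The closed-form expressions (4.14)-(4.16) are an instance of the standard integral-squared-error table for a fourth order E(s) = (c_3 s³ + c_2 s² + c_1 s + c_0)/(d_4 s⁴ + d_3 s³ + d_2 s² + d_1 s + d_0). A direct transcription can be sanity-checked against a case with a known value, e.g. E(s) = 1/(s+1)⁴, for which the integral (6) equals 5/32; this is a sketch of such a check, not code from the report.

```python
from fractions import Fraction

# Transcription of (4.14)-(4.16): the integral-squared-error
# J = (1/2*pi*j) * integral of E(s)E(-s) ds for a fourth order
# E(s) = (c3 s^3 + c2 s^2 + c1 s + c0) / (d4 s^4 + d3 s^3 + d2 s^2 + d1 s + d0).

def ise4(c0, c1, c2, c3, d0, d1, d2, d3, d4):
    delta1 = c3**2 * (-d0**2 * d3 + d0 * d1 * d2) \
             + (c2**2 - 2 * c1 * c3) * d0 * d1 * d4
    delta2 = (c1**2 - 2 * c0 * c2) * d0 * d3 * d4 \
             + c0**2 * (-d1 * d4**2 + d2 * d3 * d4)
    Delta = 2 * d0 * d4 * (-d0 * d3**2 - d1**2 * d4 + d1 * d2 * d3)
    return Fraction(delta1 + delta2, Delta)

# Check: E(s) = 1/(s+1)^4, i.e. (s+1)^4 = s^4 + 4s^3 + 6s^2 + 4s + 1,
# for which the integral is known to equal 5/32.
print(ise4(1, 0, 0, 0, 1, 4, 6, 4, 1))  # 5/32
```

With the c_i and d_i substituted from (4.16), such a routine supplies J for the minimax search over (K, β) against (T_m, ξ, ω_n).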

5. Conclusions

In this paper, the intention was to present some definitive results on the minimax problem in finite dimensional spaces. The section on theory contains a brief summary of results which are important in characterizing the minimax solution. Lemma 5, which gives sufficient conditions for a local saddle point to be the minimax solution, is a new result and extends the class of minimax problems in which local saddle point methods may be used for the minimax search. Lemmas 7 and 8 are also new results insofar as the minimax problem is concerned, but the result in Lemma 7 is an extension of the result used by Wilde [10] in contour gradient schemes for finding the minimum of a convex function in a convex domain.

The algorithm based on Lemmas 7 and 8 was found to be relatively efficient, although its merits can be better judged after comparisons with other algorithms. Presently, the author is aware of only two other algorithms [2,6], and a comparative study was not undertaken. The algorithm, as constructed, is only one possible way of performing the basic tasks of selecting nominal points, maximizing over Y, and judging when to stop. All these tasks were resolved in the simplest manner. Moreover, the method of maximizing over Y is tailored to the result in Lemma 8, and complete maximization is carried out only when necessary. This advantage is not possessed by the other algorithms, in which complete maximization in each iteration is mandatory. If, in the elimination algorithm, maximization over Y is necessary, the fact that the answering set Y(x) is not a singleton does not hinder, but only helps, since it allows greater elimination per iteration. Furthermore, the algorithm possesses the desirable property of faster solution time with a good initial guess x_0∈X, since the closer J(x_0,y_0) = F(x_0) is to the minimax value, the smaller the number of complete maximizations that will be necessary. Finally, the fact that the algorithm reduces the admissible domain to an arbitrary, prespecified neighborhood of the minimax solution inherently eliminates the difficult question of estimating the distance from the minimax solution. This is one of the more difficult questions in

(29)

minimax search, since one may find ways to construct a monotonically decreasing sequence {f(x.)} of values of the max function but nothing can

be said about the speed of convergence nor can the necessary condition (2.7)

be used as a simple stopping criterion.
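The basic tasks just described, selecting a nominal point, maximizing over Y, and cutting away the part of the admissible domain that cannot contain the solution, can be sketched in a few lines. The sketch below is a hypothetical one-dimensional illustration, not the report's implementation: the function names, the bisection-style choice of nominal point, and the test problem are all assumptions of this sketch. In one dimension the half-space cut of lemma 7 reduces to discarding one side of an interval.

```python
def elimination_minimax(J, grad_x, Y, lo, hi, tol=1e-6, max_iter=200):
    """Hypothetical 1-D elimination sketch for min over x of max over y
    of J(x, y), with J(., y) convex in x: at a nominal point x_n with
    maximizing answer y_n, every x on the ascent side of J(., y_n)
    satisfies J(x, y_n) > J(x_n, y_n) and cannot be the minimax solution."""
    for _ in range(max_iter):
        if hi - lo < tol:
            break
        x_n = 0.5 * (lo + hi)                  # nominal point
        y_n = max(Y, key=lambda y: J(x_n, y))  # maximize over the finite set Y
        g = grad_x(x_n, y_n)
        if g > 0.0:
            hi = x_n    # points x > x_n are eliminated
        elif g < 0.0:
            lo = x_n    # points x < x_n are eliminated
        else:
            break       # stationary in x against the maximizing y
    return 0.5 * (lo + hi)

# assumed toy problem: J(x, y) = (x - y)**2, X = [0, 1], Y = {0, 1};
# the minimax solution is x* = 0.5 with minimax value 0.25
J = lambda x, y: (x - y) ** 2
gJ = lambda x, y: 2.0 * (x - y)
x_star = elimination_minimax(J, gJ, [0.0, 1.0], 0.0, 1.0)
```

In higher dimensions the analogous cut discards the half-space {x : ∇_x J(x_n,y_n)·(x − x_n) > 0}; convexity of J(·,y_n) guarantees J(x,y_n) > J(x_n,y_n) there.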

It should be mentioned that the algorithm is not restricted to minimax problems where X is defined by linear constraints as in (3.1). Nonlinear boundaries of X may, for the purpose of implementing the algorithm, be represented by a suitable number of supporting hyperplanes. Since this representation is carried out only for the purpose of selecting nominal points, it does not affect the final minimax value. Moreover, due to the nature of the algorithm, the original nonlinear constraints are constantly replaced by linear constraints, i.e., hyperplanes through nominal points.
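As an illustration of this remark, a convex nonlinear constraint g(x) ≤ 0 can be replaced at a nominal boundary point x_n by the supporting half-space g(x_n) + ∇g(x_n)·(x − x_n) ≤ 0. The sketch below is a hypothetical illustration (the function names and the unit-disc example are assumptions of this sketch, not taken from the report):

```python
# Replace a convex constraint g(x) <= 0 by the supporting half-space
# a.x <= b obtained from the gradient at a nominal boundary point x_n.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def supporting_halfspace(g, grad_g, x_n):
    """Half-space a.x <= b containing {x : g(x) <= 0} when g is convex:
    g(x) >= g(x_n) + grad_g(x_n).(x - x_n) for all x."""
    a = grad_g(x_n)
    b = dot(a, x_n) - g(x_n)
    return a, b

g = lambda x: dot(x, x) - 1.0              # unit disc in the plane
grad_g = lambda x: [2.0 * xi for xi in x]

a, b = supporting_halfspace(g, grad_g, [1.0, 0.0])  # nominal boundary point
# the resulting half-space 2*x1 <= 2 (i.e. x1 <= 1) contains the disc;
# e.g. the origin remains feasible for the linearized constraint
origin_feasible = dot(a, [0.0, 0.0]) <= b
```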

A serious limitation of the elimination method is that J(x,y) must be convex in x on X. This is unfortunate, since it restricts the generality of the method and excludes direct application to the general minimax problem.

References

1. Danskin, J. M., The Theory of Max-Min, Series in Econometrics and Operations Research, Vol. V, Springer-Verlag, New York, 1967.

2. Salmon, D., "Minimax Controller Design," IEEE Trans. on Automatic Control, Vol. AC-13, No. 3, July 1968.

3. Medanic, J., "Segment Method for the Control of Systems in the Presence of Uncertainty," Proc. JACC, Boulder, Colorado, 1969.

4. Pshenichnii, "A Dual Method in Extremum Problems I and II," Kibernetika, Vol. 4, Nos. 4 and 33.

5. Demjanov, V. F., Kibernetika, Vol. 2, No. 6, 1966.

6. Heller, J. E., "A Gradient Algorithm for Minimax Design," Report R-406.

7. Kuhn, H. W., and Tucker, A. W., "Nonlinear Programming," Proc. Second Berkeley Symp. on Math. Stat. and Prob., J. Neyman, ed., Univ. of Calif. Press, 1951.

8. Bram, J., "The Lagrange Multiplier Theorem for Max-Min with Several Constraints," SIAM Journal, Vol. 14, pp. 665-667, 1966.

9. Koivuniemi, A. J., "Parameter Optimization in Systems Subject to Worst (Bounded) Disturbances," IEEE Trans. on Automatic Control, Vol. AC-11, July 1966.

10. Wilde, D. J., Foundations of Optimization, Prentice-Hall, 1967.

11. Ky Fan, "Minimax Theorems," Proc. Natl. Acad. Sci., Vol. 39, 1953.

12. Kakutani, S., "A Generalization of Brouwer's Fixed Point Theorem," Duke Math. J., Vol. 8, No. 3, 1941.

13. Demjanov, V. F., "Algorithms for Some Minimax Problems," J. Computer and System Sciences, Vol. 2, pp. 342-380, 1968.

14. Ciric, V., Ph.D. thesis, Rice University, Houston, Texas, May 1969.

15. Medanic, J., "Elimination Algorithm for Computation of the Minimax," Fourth Allerton Conference on Circuit and System Theory, University of Illinois, October 1968.

Appendix

This appendix contains the proofs of lemmas 5, 7, and 8. The proofs of the other lemmas may be found in the literature, as noted with each lemma. The result of lemma 5 may be anticipated; however, this author has never encountered it in the form presented in this paper, nor is he aware that the problem solved by the lemma has ever been posed.

To prove lemma 5, introduce the following definitions.

Definition 1. For an arbitrary x∈X let

ω(y,γ) = ∇_x J(x,y)·γ (A.1)

where the domain of definition of y is the answering set Y(x), and the domain of definition of γ is Γ(x), the cone of admissible unit direction vectors ‖γ‖ = 1 with vertex at x∈X.

Definition 2. Let the domain of definition of γ be extended to the convex cone Γ̄(x) such that if γ∈Γ(x) then kγ∈Γ̄(x) for 0 < k ≤ 1. Clearly Γ̄(x) is the convex cone with vertex at x∈X.

Definition 3. Let the answering set Y(γ), γ∈Γ̄(x), be the subset consisting of the y∈Y(x) which maximize ω against the given γ. Let the answering set Γ̄(y), y∈Y(x), be the subset consisting of the γ∈Γ̄(x) which minimize ω against the given y.

If, for all x∈X, the answering set Y(x) is convex, as in lemma 5, then

Proposition 1. For all y∈Y(x), γ∈Γ̄(x) the answering sets Γ̄(y) and Y(γ) are convex.

Proof: Convexity of Γ̄(y) follows from the linearity of ω(y,γ) in γ and convexity of Γ̄(x). Convexity of Y(γ) follows by contradiction. Suppose for some γ the set Y(γ) is not convex. Then, in view of the expansion (2.4) for the max function, there would exist, for an arbitrarily small h, a point x + δx = x + γh such that the answering set Y(x + δx) would not be convex. This contradicts the assumption in the lemma, and hence Y(γ) is convex for all γ∈Γ̄(x).

Proposition 2. There exist a γ*∈Γ̄(x) and a y*∈Y(x) such that

ω(y,γ*) ≤ ω(y*,γ*) ≤ ω(y*,γ) (A.2)

holds for all x∈X, y∈Y(x) and γ∈Γ̄(x).

Proof: Because ω(y,γ) is continuous in its variables, because the sets Y(x) and Γ̄(x) are closed and bounded, and because the answering sets Y(γ) and Γ̄(y) are convex for all γ∈Γ̄(x) and y∈Y(x), respectively, the result in proposition 2 follows by Kakutani's or Ky Fan's saddle point theorem (lemma 6 in the paper).

Now, the definition of ω(y,γ) and the result in proposition 2 imply that

∇_x J(x,y)·γ* ≤ ∇_x J(x,y*)·γ* ≤ ∇_x J(x,y*)·γ (A.3)

holds for all x∈X, y∈Y(x) and γ∈Γ̄(x) (y* and γ* depending, of course, on x∈X). In particular, for the minimax point x*,

∇_x J(x*,y)·γ* ≤ ∇_x J(x*,y*)·γ* ≤ ∇_x J(x*,y*)·γ (A.4)

for all y∈Y(x*) and γ∈Γ̄(x*).

By the necessary condition for the minimax solution (lemma 3 in the paper), for γ = γ* there must exist at least one y∈Y(x*) such that

0 ≤ ∇_x J(x*,y)·γ*. (A.5)

Combining (A.4) and (A.5) shows that

0 ≤ ∇_x J(x*,y*)·γ (A.6)

must hold for all γ∈Γ̄(x*), and hence for all γ∈Γ(x*), since Γ(x*) is a subset of Γ̄(x*). But then x* is a local minimum of J(x,y*), since Γ(x*) is the set of admissible unit direction vectors at x*. On the other hand, since y*∈Y(x*), y* is a global maximum of J(x*,y), and hence (x*,y*) is a local saddle point. This completes the proof of lemma 5.
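The saddle point established by lemma 5 exists only under the lemma's convexity hypotheses; in general a minimax problem need not admit any saddle point. The check below illustrates this on an assumed toy example (not taken from the report): for J(x,y) = (x − y)² on X = [0,1], Y = {0,1}, the minimax value 1/4 strictly exceeds the maximin value 0, so no saddle point exists and a saddle-seeking method cannot recover the minimax.

```python
# Assumed toy example: minimax strictly exceeds maximin, so no saddle
# point exists for J(x, y) = (x - y)**2 on X = [0, 1], Y = {0, 1}.
J = lambda x, y: (x - y) ** 2
X = [k / 100 for k in range(101)]   # grid over X = [0, 1]
Y = [0.0, 1.0]

minimax = min(max(J(x, y) for y in Y) for x in X)   # = 0.25, at x = 0.5
maximin = max(min(J(x, y) for x in X) for y in Y)   # = 0.0

gap = minimax - maximin   # positive duality gap: no saddle point
```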

The proof of lemma 7 consists in showing that any point dropped in the contraction of X_n to X_{n+1} by way of (2.14) cannot be the minimax solution. Denote the complement of H_n(y_n) by H̄_n(y_n) and let x∈H̄_n(y_n) ∩ X_n. Then x cannot be the minimax solution, since from (2.13) J(x,y_n) > J(x_n,y_n) and therefore

F(x) = max_{y∈Y} J(x,y) ≥ J(x,y_n) > J(x_n,y_n) ≥ min_{x∈X} max_{y∈Y} J(x,y). (A.7)

The proof of lemma 8 consists in showing that any point dropped in the contraction of X_n to X_{n+1} by way of (2.17) cannot be the minimax solution, and proceeds in a similar fashion. Denote, analogously, the complement of H_n(y_k) by H̄_n(y_k) and take x∈H̄_n(y_k) ∩ X_n. Then x cannot be the minimax solution, since J(x,y_k) > J(x_{n-1},y_{n-1}) and therefore

F(x) = max_{y∈Y} J(x,y) ≥ J(x,y_k) > J(x_{n-1},y_{n-1}) ≥ min_{x∈X} max_{y∈Y} J(x,y). (A.8)
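The elimination reasoning behind (A.7) and (A.8) can be exercised numerically. The snippet below uses an assumed toy problem (not taken from the report): at a nominal point x_n with maximizing answer y_n, every x ruled out by J(x,y_n) > J(x_n,y_n) has a max-function value strictly above the minimax value, so the elimination never discards the solution.

```python
# Numeric sanity check of the (A.7)-style inequality chain on an assumed
# toy problem: J(x, y) = (x - y)**2, X = [0, 1], Y = {0, 1},
# with minimax value 1/4 at x* = 1/2.
J = lambda x, y: (x - y) ** 2
Y = [0.0, 1.0]
F = lambda x: max(J(x, y) for y in Y)        # the max function

x_n = 0.2                                    # an arbitrary nominal point
y_n = max(Y, key=lambda y: J(x_n, y))        # maximizing answer: F(x_n) = J(x_n, y_n)
minimax_value = 0.25

# points on a grid over X ruled out by the test J(x, y_n) > J(x_n, y_n):
eliminated = [k / 100 for k in range(101) if J(k / 100, y_n) > J(x_n, y_n)]

# each satisfies F(x) >= J(x, y_n) > J(x_n, y_n) >= minimax, as in (A.7):
all_above = all(F(x) > minimax_value for x in eliminated)
```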

ACKNOWLEDGMENT

The author wishes to acknowledge helpful and stimulating discussions with Professors P. Kokotovic, J. B. Cruz, Jr., and W. R. Perkins, as well as with his colleagues in the Control Systems Group of the Coordinated Science Laboratory. The author is indebted in particular to J. Heller for programming the algorithm on the CDC 1604 at CSL.


