
Never look back - A modified EnKF method and its application to the training of neural networks without back propagation

Eldad Haber∗†, Felix Lucka‡ and Lars Ruthotto§†

Abstract

In this work, we present a new derivative-free optimization method and investigate its use for training neural networks. Our method is motivated by the Ensemble Kalman Filter (EnKF), which has been used successfully for solving optimization problems that involve large-scale, highly nonlinear dynamical systems. A key benefit of the EnKF method is that it requires only the evaluation of the forward propagation but not its derivatives. Hence, in the context of neural networks, it alleviates the need for back propagation and reduces the memory consumption dramatically. However, the method is not a pure "black-box" global optimization heuristic, as it efficiently utilizes the structure of typical learning problems. Promising first results of the EnKF for training deep neural networks have been presented recently by Kovachki and Stuart. We propose an important modification of the EnKF that enables us to prove convergence of our method to the minimizer of a strongly convex function. Our method also bears similarity to implicit filtering, and we demonstrate its potential for minimizing highly oscillatory functions using a simple example. Further, we provide numerical examples that demonstrate the potential of our method for training deep neural networks.

1 Introduction

Advances in numerical algorithms for training neural networks have been fueling the recent deep learning revolution. Virtually all training algorithms employ derivative-based optimization methods and, in particular, stochastic gradient descent (SGD) methods (see, e.g., [1, 2, 27, 26, 6, 13, 18, 11, 24] and references therein).

A key requirement of derivative-based optimization methods is an efficient way of computing the gradient of the objective function, which is used to update the parameters to be optimized. When training neural networks, this is commonly done using back propagation [22], i.e., propagating prediction errors from the output layer backwards through the network. Back propagation relies upon a means to store or quickly recompute hidden features and activations, which can be prohibitive when the neural network is deep or when the size of the data is large, e.g., when training a network to learn from videos or 3D datasets. While the storage problem is less pronounced when using stochastic gradient methods, the performance of these methods depends in a non-trivial way on parameters like batch size and learning rate [6], which often need to be tuned manually by solving the problem repeatedly. Also, SGD is very difficult to parallelize efficiently.

∗ Department of Earth and Ocean Science, The University of British Columbia, Vancouver, BC, Canada
† Xtract Inc., Vancouver, Canada
‡ Computational Imaging, Centrum Wiskunde & Informatica (CWI), The Netherlands; Department of Computer Science, University College London, UK
§ Department of Mathematics and Computer Science, Emory University, USA

Preprint. Work in progress.


To overcome some of the drawbacks associated with derivative-based learning algorithms, we propose a new derivative-free optimization method that can be seen as a slightly modified version of the Ensemble Kalman Filter (EnKF). Our method can either augment or replace SGD for the training of neural networks. The EnKF has a long track record in data assimilation and has been applied to large-scale optimization problems, e.g., in weather and flow prediction [5, 19, 20, 8, 7, 4, 3]. In recent years, a version of the EnKF method that recovers an unknown parameter from noisy data has been applied to inverse problems [10, 9, 23]. Key advantages of the EnKF method for large-scale problems are that it can be easily parallelized and that it is derivative-free, i.e., it requires only evaluations of the forward operator. Another advantage of the EnKF, which is particularly attractive for highly nonlinear forward problems, is its potential to explore the objective function. However, the EnKF is not a "black-box" metaheuristic for global optimization of arbitrary functions, like, e.g., genetic algorithms, which were also tested for deep learning recently [21]. It makes explicit and efficient use of the specific structure of learning problems.

In this paper, we continue the work of Kovachki and Stuart on training neural networks [15, 14], which is based on the EnKF presented in [23]. We modify this scheme and prove that our method converges to the global minimizer of a strongly convex function at a sublinear rate. This is an improvement over the method discussed in [23], which converges to the projection of the minimizer onto a low-dimensional subspace. Furthermore, our method can be parallelized easily, which can be attractive in a high-performance computing environment. Using an intuitive heuristic argument and a simple numerical experiment, we also show how avoiding back propagation can be advantageous for problems that exhibit local high-frequency oscillations. These properties motivate us to use our method for training deep neural networks even though we are not able to prove convergence to the global solution of such non-convex problems. Using derivative-free optimization methods for training deep neural networks has the advantage that no back propagation is needed and, therefore, it is possible to work with arbitrarily long networks without any memory limitations. We illustrate the potential of our method using a classical convolutional neural network for solving the MNIST problem of classifying hand-written digits [17] and show that our method converges faster than an SGD-type method.

Our paper is organized as follows. In Sec. 2, we derive our method by modifying the version of the Ensemble Kalman Filter presented in [23]. In Sec. 3, we state and prove our convergence result for strongly convex functions and discuss the relation of our method to SGD. In Sec. 4, we present several practical improvements of our method. In Sec. 5, we present preliminary numerical results for strongly convex problems and the training of some common neural networks. In Sec. 6 we provide some discussion and concluding remarks.

2 The Ensemble Kalman Filter

Among the many variants of the Ensemble Kalman Filter (EnKF), we focus our attention on the recent version presented in [23]. This version simplifies much of the original EnKF discussion and allows for a simple analysis and explanation. It is an iterative method for the minimization of a function of the form
$$\phi(\theta) = \mathcal{D}(F(\theta)), \tag{2.1}$$
where $\theta \in \mathbb{R}^n$ denotes the parameter to be recovered, $F : \mathbb{R}^n \to \mathbb{R}^m$ is the forward operator, and $\mathcal{D} : \mathbb{R}^m \to \mathbb{R}$ is a loss or misfit function. This formulation includes, e.g., linear least-squares problems, which are obtained by choosing $\mathcal{D}(t) = \frac{1}{2}\|t\|^2$ and $F(\theta) = A\theta - b$ for fixed $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$. For most classification problems, $F(\theta)$ is a neural network and $\mathcal{D}(\cdot)$ is the soft-max function. The key advantage of the EnKF is that it does not rely on derivatives of $F$ with respect to $\theta$, which are usually computationally demanding; one only needs to evaluate the derivative of the loss function, which is computationally cheap. We will therefore call the method "derivative-free" and use the short notation $\nabla\mathcal{D}(\theta)$ for $\nabla_t\mathcal{D}(t)$ evaluated at $t = F(\theta)$, but again stress that this is a rather different approach from other "derivative-free" optimization heuristics that try to minimize $\phi(\theta)$ without utilizing any of its structure, i.e., by evaluations of $\phi$ for given $\theta$'s only.

The first and simplest version of our algorithm consists of the three steps listed in Algorithm 1. In step one of each iteration $j$, we randomly draw $k$ particles $\theta_i$, $i = 1, \ldots, k$, i.i.d. as $\theta_i = \bar\theta_j + \omega_i$. The distribution of $\omega$ needs to fulfill $\mathbb{E}\,\omega = 0$ and $\mathrm{Cov}(\omega) = \sigma^2 I$ (where $I$ denotes the identity matrix), e.g., $\omega \sim \mathcal{N}(0, \sigma^2 I)$, or $\omega = \pm\sigma$ entrywise with equal probability (Rademacher distribution).


Algorithm 1 Derivative-free algorithm for solving $\min_\theta \mathcal{D}(F(\theta))$

Inputs: starting guess $\bar\theta_0 \in \mathbb{R}^n$, parameter $\sigma > 0$, number of particles $k$, covariance matrix $\Gamma$, and maximum number of iterations $N$.
for $j = 0, 1, \ldots, N-1$ do
  [1] Draw $k$ particles i.i.d. as $\theta_i = \bar\theta_j + \omega_i$, $i = 1, \ldots, k$, with $\mathbb{E}(\omega) = 0$ and $\mathrm{Cov}(\omega) = \sigma^2 I$. Set $\Omega_j = [\omega_1, \ldots, \omega_k] \in \mathbb{R}^{n\times k}$.
  [2] Evaluate the forward operator $F$ on each particle $\theta_i$ and on $\bar\theta_j$ to construct
  $$Q_j = F(\bar\theta_j + \Omega_j) - F(\bar\theta_j) := [F(\theta_1) - F(\bar\theta_j), \ldots, F(\theta_k) - F(\bar\theta_j)] \in \mathbb{R}^{m\times k}. \tag{2.2}$$
  [3] Set $H_j = I$ or $H_j = Q_j^\top Q_j + \Gamma$ and update
  $$\bar\theta_{j+1} = \bar\theta_j - \mu_j \Omega_j H_j^{-1} Q_j^\top \nabla\mathcal{D}(\bar\theta_j), \tag{2.3}$$
  where $\mu_j$ is determined using a line search and $\nabla\mathcal{D}(\bar\theta_j)$ denotes the gradient of the loss function, $\nabla_t\mathcal{D}(t)$, evaluated at $t = F(\bar\theta_j)$.
end for
Output: optimized parameters $\bar\theta_N$.

The second step requires one to evaluate the forward operator (but not its derivatives) $k$ times. In the third step of the algorithm, one computes a step using a matrix $H_j$, applying its inverse to a residual vector. The matrix can be chosen as the identity or as $H_j = Q_j^\top Q_j + \Gamma$, where $\Gamma$ is a data covariance matrix. The latter leads to the usual Kalman gain matrix [3, 10, 9, 23]. To update the solution, we choose the step size $\mu_j$ using a simple Armijo line search to ensure reduction of the objective function. We provide some justification for this in Sec. 3.2.
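To make the three steps concrete, the following is a minimal NumPy sketch of one iteration of Algorithm 1 with $H_j = I$. The Gaussian sampling, the backtracking realization of the Armijo search, and the toy least-squares usage are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def enkf_step(theta, forward, loss, grad_loss, k=10, sigma=0.1, max_ls=20):
    """One iteration of Algorithm 1 with H_j = I.

    forward:   F(theta) in R^m, evaluated without any derivatives
    loss:      the misfit D(t); grad_loss: its gradient with respect to t
    """
    n = theta.size
    # [1] Draw k particles i.i.d. around the current mean; Omega_j in R^{n x k}.
    Omega = sigma * np.random.randn(n, k)
    # [2] Forward-propagate each particle to build Q_j, eq. (2.2).
    F0 = forward(theta)
    Q = np.stack([forward(theta + Omega[:, i]) - F0 for i in range(k)], axis=1)
    # [3] Step direction -Omega_j Q_j^T grad D(theta_bar_j), eq. (2.3) with H_j = I.
    d = -Omega @ (Q.T @ grad_loss(F0))
    # Backtracking (Armijo-style) line search to ensure descent of phi = D(F(.)).
    phi0, mu = loss(F0), 1.0
    for _ in range(max_ls):
        if loss(forward(theta + mu * d)) < phi0:
            return theta + mu * d
        mu *= 0.5
    return theta  # no decrease found; keep the current mean

# Toy usage: linear least squares, D(t) = 0.5 ||t||^2, F(theta) = A theta - b.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
theta = np.zeros(20)
for _ in range(200):
    theta = enkf_step(theta, lambda th: A @ th - b,
                      lambda t: 0.5 * t @ t, lambda t: t)
```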

At this stage, we point to an important modification of the EnKF as presented in [3, 10, 9, 23], which allows us to better analyze its convergence properties. First, we re-sample the particles at each iteration. This allows us to give a simple proof that the method converges to the global minimum of a strongly convex function. We suspect that this requirement can be relaxed in future work. A second change from the original algorithm is in step 2, where the original EnKF algorithm defines the matrix $Q_j$ as the difference between $F(\bar\theta_j + \Omega_j)$ and its mean over all the particles. Clearly, both versions are the same for a linear forward problem. Finally, the last difference between our method and the original EnKF method is that the original EnKF algorithm propagates all the particles while we only update the mean of their distribution. It is also possible to propagate all the particles, but we avoid this here for simplicity. As we see next, these modifications allow us to prove convergence of our algorithm, and the insight we obtain this way allows us to extend the method and propose ways to further accelerate it.

3 Convergence of our algorithm and comparison to existing methods

We first prove that when applied to strongly convex problems our method converges to the global minimizer. In Sec. 3.2 we compare our method to stochastic gradient methods and other methods commonly used in derivative-free optimization.

3.1 Convergence analysis for strongly convex problems

To prove convergence of the method, we need some assumptions that are also typically used in proofs of convergence for stochastic gradient descent methods. We prove convergence for the case $H_j = I$ and note that the proof can be extended to any symmetric positive definite $H_j$.

A1: The function $\phi(\theta) = \mathcal{D}(F(\theta))$ is smooth and strongly convex, i.e., there exists an $L > 0$ such that for all $\theta_1, \theta_2 \in \mathbb{R}^n$
$$\mathcal{D}(F(\theta_2)) - \mathcal{D}(F(\theta_1)) \geq (\theta_2 - \theta_1)^\top J(\theta_1)^\top \nabla\mathcal{D}(\theta_1) + \frac{L}{2}\|\theta_2 - \theta_1\|^2.$$
Here, $J(\theta) = \nabla_\theta F(\theta)^\top \in \mathbb{R}^{m\times n}$ is the Jacobian of the mapping $F$.

A2: The Taylor expansion error of $F$ can be made sufficiently small, i.e., assuming that we sampled an $\omega$ with $\|\omega\| = O(\sigma)$, we have that
$$F(\bar\theta + \omega) - F(\bar\theta) = J(\bar\theta)\,\omega + O(\sigma^2).$$

Using A2 we can write the matrix $Q_j$ in (2.2) as
$$Q_j = J_j \Omega_j + O(\sigma^2).$$
Plugging this into (2.3) we obtain
$$\bar\theta_{j+1} = \bar\theta_j - \mu_j\left(\Omega_j \Omega_j^\top J_j^\top \nabla\mathcal{D}(\bar\theta_j) + O(\sigma^2)\right). \tag{3.4}$$

From our assumptions on the distribution of the particles, it easily follows that $\mathbb{E}\,\Omega_j\Omega_j^\top = k\sigma^2 I$. If we define $g_j := \Omega_j\Omega_j^\top J_j^\top \nabla\mathcal{D}(\bar\theta_j)$, we can compute that
$$\mathbb{E}\, g_j = k\sigma^2 J_j^\top \nabla\mathcal{D}(\bar\theta_j) = k\sigma^2 \nabla\phi(\bar\theta_j), \qquad \mathbb{E}\,\|g_j\|^2 = \|\nabla\phi(\bar\theta_j)\|^2 + O(k^2\sigma^4). \tag{3.5}$$

Now, let $\theta^*$ be the minimizer of (2.1). Using A1 we have that
$$\phi(\theta^*) - \phi(\bar\theta_j) \geq (\theta^* - \bar\theta_j)^\top J_j^\top \nabla\mathcal{D}(\bar\theta_j) + \frac{L}{2}\|\bar\theta_j - \theta^*\|^2,$$
$$\phi(\bar\theta_j) - \phi(\theta^*) \geq (\bar\theta_j - \theta^*)^\top J_*^\top \nabla\mathcal{D}(\theta^*) + \frac{L}{2}\|\bar\theta_j - \theta^*\|^2 = \frac{L}{2}\|\bar\theta_j - \theta^*\|^2,$$
where $J_* := J(\theta^*)$ and we used that the gradient at $\theta^*$ vanishes. Summing both inequalities we obtain
$$(\bar\theta_j - \theta^*)^\top J_j^\top \nabla\mathcal{D}(\bar\theta_j) \geq L\|\bar\theta_j - \theta^*\|^2. \tag{3.6}$$

Let us assume that within Algorithm 1 we have arrived at a concrete $\bar\theta_j$. We now derive an inequality that describes how the distance $\|\bar\theta_{j+1} - \theta^*\|^2$ compares to $\|\bar\theta_j - \theta^*\|^2$ in expectation. Technically, this means that all expected values here are conditioned on $\bar\theta_j$ and only account for the randomness induced by drawing a new set of particles, which is manifest in $\Omega_j$. We first use (3.4) and (3.5) to estimate
$$\begin{aligned}
\mathbb{E}\,\|\bar\theta_{j+1} - \theta^*\|^2 &= \mathbb{E}\,\|\bar\theta_j - \mu_j (g_j + O(\sigma^2)) - \theta^*\|^2 \\
&= \mathbb{E}\,\|\bar\theta_j - \theta^*\|^2 - 2\mu_j\, \mathbb{E}\, g_j^\top (\bar\theta_j - \theta^*) + O(\mu_j\sigma^2) + O(\mu_j^2 k^2 \sigma^4) \\
&\leq \mathbb{E}\,\|\bar\theta_j - \theta^*\|^2 - 2\mu_j k\sigma^2\, \nabla\phi(\bar\theta_j)^\top (\bar\theta_j - \theta^*) + O(\mu_j\sigma^2) + O(\mu_j^2 k^2 \sigma^4).
\end{aligned}$$
Using (3.6) we obtain
$$\mathbb{E}\,\|\bar\theta_{j+1} - \theta^*\|^2 \leq (1 - 2\mu_j L k \sigma^2)\, \mathbb{E}\,\|\bar\theta_j - \theta^*\|^2 + O(\mu_j\sigma^2) + O(\mu_j^2 k^2 \sigma^4). \tag{3.7}$$

We can now prove convergence by induction when we choose the learning rate $\mu_j = \frac{1}{j L k \sigma^2}$.

Theorem 1. At the $j$-th iteration, it holds that
$$\mathbb{E}\,\|\bar\theta_j - \theta^*\|^2 \leq \frac{C}{j}, \tag{3.8}$$
where the constant $C$ only depends on the initial distance $\|\bar\theta_0 - \theta^*\|$, the smoothness of $\phi$, $L$, and $k$.

Proof. For $j = 0$, it is clear that, as we pick $\bar\theta_0$ manually, $\mathbb{E}\,\|\bar\theta_0 - \theta^*\|^2 = \|\bar\theta_0 - \theta^*\|^2$ can be bounded in the above way. The terms $O(\mu_j\sigma^2) + O(\mu_j^2 k^2\sigma^4)$ in (3.7) all come from smoothness assumptions on $\phi$ and can thus be bounded accordingly. If we choose the learning rate $\mu_j = \frac{1}{jLk\sigma^2}$ and absorb the remaining dependencies on $k$ and $L$ into $C$ as well, we obtain
$$\mathbb{E}\,\|\bar\theta_{j+1} - \theta^*\|^2 \leq \left(1 - \frac{2}{j}\right)\mathbb{E}\,\|\bar\theta_j - \theta^*\|^2 + \frac{C}{j} + \frac{C}{j^2}. \tag{3.9}$$
If we then assume that the bound holds at iteration $j$, we prove that it holds for $j + 1$:
$$\mathbb{E}\,\|\bar\theta_{j+1} - \theta^*\|^2 \leq \left(1 - \frac{2}{j}\right)\frac{C}{j} + \frac{C}{j} + \frac{C}{j^2} = C\left(\frac{1}{j} - \frac{2}{j^2} + \frac{1}{j} + \frac{1}{j^2}\right) \leq \frac{C}{j+1}.$$
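The $O(1/j)$ rate can be checked empirically on a strongly convex quadratic; the following short script (an illustrative sketch, with the test matrix, $k$, $\sigma$, and the constant $L$ chosen arbitrarily) runs update (2.3) with $H_j = I$ and the learning rate of Theorem 1 and prints $j\,\|\bar\theta_j - \theta^*\|^2$, which should remain roughly bounded.

```python
import numpy as np

# Strongly convex test problem: F(theta) = A theta - b, D(t) = 0.5 ||t||^2,
# whose unique minimizer is theta* = A^{-1} b.
rng = np.random.default_rng(1)
n, k, sigma, L = 10, 5, 0.05, 1.0    # L: illustrative constant for mu_j
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
theta_star = rng.standard_normal(n)
b = A @ theta_star

theta = np.zeros(n)
for j in range(1, 5001):
    Omega = sigma * rng.standard_normal((n, k))
    Q = A @ Omega                        # Q_j is exact for a linear forward model
    mu = 1.0 / (j * L * k * sigma**2)    # learning rate from Theorem 1
    theta -= mu * Omega @ (Q.T @ (A @ theta - b))
    if j % 1000 == 0:
        print(j, j * np.sum((theta - theta_star) ** 2))
```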


3.2 Comparison to other methods

It is interesting to note the difference between our derivative-free optimization algorithm and stochastic gradient descent (SGD) and its variants. To this end, note that the SGD iteration can be written as
$$\theta_{j+1} = \theta_j - \eta_j J_j^\top X_j^\top X_j \nabla\mathcal{D}(\theta_j),$$
where the rows of $X_j \in \mathbb{R}^{s\times m}$ are randomly chosen rows of the $m\times m$ identity matrix and $\eta_j$ is the learning rate. Although the choice of the learning rate in Theorem 1 is similar to the one commonly used to prove convergence of SGD, it is important to note one major difference between our update rule and the SGD step. In contrast to the SGD step, the step size in our method can always be chosen so that the objective function is non-increasing. This is because the computed direction is a descent direction over the subspace spanned by $\Omega_j$, since
$$-\nabla\phi_j^\top \Omega_j \Omega_j^\top J_j^\top \nabla\mathcal{D} = -(\nabla\mathcal{D})^\top J_j \Omega_j \Omega_j^\top J_j^\top \nabla\mathcal{D} \leq 0.$$
Hence, common line search strategies can be used to ensure that the objective is monotonically non-increasing. As we see in the numerical examples, this makes the selection of hyperparameters easier than in the case of SGD.

Of course, this property does not come without a price. This version of the method requires the propagation of all the data, making it expensive for problems where the number of data points is very large. Nonetheless, it is possible to combine the method with a stochastic selection of the data, making it competitive with the standard SGD iteration. In this case, the iteration has the form
$$\bar\theta_{j+1} = \bar\theta_j - \mu_j \Omega_j H_j^{-1} Q_j^\top X_j^\top X_j \nabla\mathcal{D}(\theta_j). \tag{3.10}$$
It is straightforward to extend the proof of convergence to this case when $X_j$ and $\Omega_j$ are uncorrelated.
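A minimal sketch of the subsampled direction in (3.10) with $H_j = I$ follows; the masking realization of $X_j$ is our illustrative choice (in practice one would evaluate only the components of $F$ selected by the mini batch).

```python
import numpy as np

def enkf_minibatch_direction(theta, forward, grad_loss, batch, k=10, sigma=0.1):
    """Search direction of iteration (3.10) with H_j = I.

    `batch` holds the row indices picked by X_j: only those components of
    the loss gradient enter the update, mimicking X_j^T X_j grad D."""
    Omega = sigma * np.random.randn(theta.size, k)
    F0 = forward(theta)
    Q = np.stack([forward(theta + Omega[:, i]) - F0 for i in range(k)], axis=1)
    g = np.zeros_like(F0)
    g[batch] = grad_loss(F0)[batch]   # zero outside the mini batch
    return -Omega @ (Q.T @ g)
```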

Although the convergence proof covers only strongly convex problems, one motivation for using our derivative-free method for highly nonlinear problems can be obtained by looking at the behavior of the method for problems with small, yet high-frequency, oscillations.

Example 1. Let $l \in \mathbb{R}$, $\epsilon > 0$, $A \in \mathbb{R}^{m\times n}$, and $B \in \mathbb{R}^{m\times n}$ be given, and let
$$F(\theta) = S(\theta) + N(\theta) = A\theta + \epsilon \sin(l B\theta),$$
where $S$ is a smooth function and $N$ is an oscillatory one, and assume the least-squares misfit function $\mathcal{D}(t) = \frac{1}{2}\|t\|^2$. In this case the necessary conditions for a minimum are
$$J^\top \nabla\mathcal{D}(\theta) = \left(A^\top + \epsilon l B^\top \mathrm{diag}(\cos(l B\theta))\right)\left(A\theta + \epsilon\sin(lB\theta) - d^{\rm obs}\right) = 0,$$
where $J = A + \epsilon l\,\mathrm{diag}(\cos(lB\theta))\,B$ and $d^{\rm obs}$ denotes the observed data. Clearly, the misfit function can have many local minima. For large values of $l$, the function exhibits local highly oscillatory behavior as well as some "slow" low-frequency modes. As a result, computing the solution using a derivative-based technique would probably fail.

It is important to understand the cause of the oscillatory gradients. Note that the gradient of our objective is simply
$$\nabla_\theta \mathcal{D} = J^\top\nabla\mathcal{D} = (\nabla S)^\top \nabla\mathcal{D} + (\nabla N)^\top \nabla\mathcal{D}.$$
While $\nabla\mathcal{D}$ is smooth or oscillates mildly, the second part of $J$ contains much higher oscillations. For the problem at hand, $\nabla S$ is linear; however, the oscillations in $\nabla N$ are amplified by a factor of $l$. Indeed, even if the function has small oscillations, its derivative has much larger oscillations and therefore the solution can be difficult to obtain. The source of the oscillations can thus be tracked to the gradient of the oscillatory part $N$.

It is important to note that the approximate Jacobian $Q_j = F(\bar\theta_j + \Omega_j) - F(\bar\theta_j)$, computed in our method, does not suffer from this problem as long as we choose the elements of $\Omega$ sufficiently large. This is because we only evaluate the forward operator, and determining the step is similar to numerical differentiation. Thus, our method may have better properties for problems with highly oscillatory local behavior compared with derivative-based (i.e., back propagation) descent methods. The above discussion shows that there is a strong link between the EnKF method and the implicit filtering discussed in [12]. In fact, the EnKF can be seen as a stochastic version of implicit filtering, which is known to work well for noisy functions. Finally, the method can be seen as a particular implementation of stochastic coordinate descent, where the coordinates are picked by the matrix $\Omega_j$ and the gradient is evaluated numerically.
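The smoothing effect can be seen in a one-dimensional analogue of Example 1 (a hypothetical toy setup, not an experiment from the paper): the exact derivative of $F$ carries the amplified $\epsilon l \cos(l\theta)$ term, while the sampled difference quotient with spread $\sigma \gg 1/l$ averages the oscillation out and recovers the slope of the smooth part.

```python
import numpy as np

# 1D analogue of Example 1: F(theta) = a*theta + eps*sin(l*theta).
a, eps, l = 1.0, 0.1, 200.0
F  = lambda th: a * th + eps * np.sin(l * th)
dF = lambda th: a + eps * l * np.cos(l * th)   # oscillates with amplitude eps*l = 20

theta, sigma, n_samples = 0.3, 0.1, 1000
omega = sigma * np.random.randn(n_samples)
# Derivative-free slope estimate from particle differences, as in Q_j:
# E[(F(theta+w) - F(theta)) w] / sigma^2 ~ a, since sin averages out for l*sigma >> 1.
slope = np.mean((F(theta + omega) - F(theta)) * omega) / sigma**2
print("exact J:", dF(theta), "  sampled J:", slope)
```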


Algorithm 2 Derivative-free algorithm for solving $\min_\theta \mathcal{D}(F(\theta))$ with memory

Inputs: starting guess $\bar\theta_0 \in \mathbb{R}^n$, parameter $\sigma > 0$, number of particles $k$, covariance matrix $\Gamma$, and maximum number of iterations $N$. Allocate $\Omega = [\,]$ and $Q = [\,]$.
for $j = 0, 1, \ldots, N-1$ do
  [1] Draw $k$ particles i.i.d. as $\theta_i = \bar\theta_j + \omega_i$, $i = 1, \ldots, k$, with $\mathbb{E}(\omega) = 0$ and $\mathrm{Cov}(\omega) = \sigma^2 I$. Set $\Omega_j = [\omega_1, \ldots, \omega_k] \in \mathbb{R}^{n\times k}$.
  [2] Evaluate the forward operator $F$ on each particle $\theta_i$ and on $\bar\theta_j$ to construct
  $$Q_j = F(\bar\theta_j + \Omega_j) - F(\bar\theta_j) := [F(\theta_1) - F(\bar\theta_j), \ldots, F(\theta_k) - F(\bar\theta_j)] \in \mathbb{R}^{m\times k}. \tag{4.11}$$
  [3] Update the matrices $\Omega \leftarrow [\Omega, \Omega_j]$ and $Q \leftarrow [Q, Q_j]$ by adding columns.
  [4] Set $H_j = I$ or $H_j = Q^\top Q + \Gamma$ and update
  $$\bar\theta_{j+1} = \bar\theta_j - \mu_j \Omega H_j^{-1} Q^\top \nabla\mathcal{D}(\bar\theta_j), \tag{4.12}$$
  where $\mu_j$ is determined using a line search and $\nabla\mathcal{D}(\bar\theta_j)$ denotes the gradient of the loss function, $\nabla_t\mathcal{D}(t)$, evaluated at $t = F(\bar\theta_j)$.
end for
Output: optimized parameters $\bar\theta_N$.

4 Practical improvements and extensions

While Algorithm 1 can be used directly for the solution of the training problem, some simple modifications can boost its performance considerably.

4.1 Reducing the number of particles per iteration

The computational cost of each iteration scales linearly with the number of particles and therefore it is desirable to reduce the number of particles used at each iteration. To this end, we propose to re-use the matrices computed from particles at earlier iterations in the computation. The modified algorithm is described in Algorithm 2. This version of the algorithm is similar to the variance-reduction SGD method proposed in [16]. In practice, one allocates memory for, say, $\ell$ vectors for $Q$ and $\Omega$ and saves only the last $\ell$ vectors, as sketched below. It is important to see that this version of the method requires far fewer forward evaluations at each step. We will show how this version of the code can be advantageous compared to the "vanilla" version presented in Algorithm 1.
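A possible realization of the memory in step [3] of Algorithm 2 is a sliding window that caps the storage at $\ell$ columns; the fixed-size window and the names below are our illustrative choices.

```python
import numpy as np

def append_with_memory(Omega, Q, Omega_j, Q_j, ell=50):
    """Step [3] of Algorithm 2 with bounded memory: append the k new
    columns and keep only the last `ell` columns of Omega and Q, so each
    iteration adds just k fresh forward evaluations."""
    Omega = Omega_j if Omega is None else np.hstack([Omega, Omega_j])
    Q = Q_j if Q is None else np.hstack([Q, Q_j])
    return Omega[:, -ell:], Q[:, -ell:]
```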

4.2 A Gauss-Newton-like update

The update of $\bar\theta$ proposed in (2.3) or (4.12) reduces to a steepest descent algorithm when we use a full basis to sample the problem. However, the original EnKF algorithm uses a Kalman gain matrix to update the solution. It is well known that Kalman filtering and the Kalman gain can be interpreted as a Gauss-Newton iteration [25]. We can thus use the matrix $Q$ to obtain an approximation of the Hessian. Noting that
$$\mathbb{E}\, QQ^\top \approx \mathbb{E}\, J\Omega\Omega^\top J^\top = \sigma^2 k\, JJ^\top,$$
one can use the update
$$\bar\theta_{j+1} = \bar\theta_j - \frac{\mu_j}{\sigma^2 k}\, \Omega Q^\top \left(QQ^\top + \Gamma\right)^{-1} \nabla\mathcal{D}(\bar\theta_j), \tag{4.13}$$
where $\Gamma$ is a data covariance matrix. This update is similar to the common EnKF presented in the original work [3]. It is important to realize that the matrix in (4.13) need not be formed; it is possible to use $k$ steps of the conjugate gradient method, preconditioned by $\Gamma$, to solve the system exactly.
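This step can be computed matrix-free; below is a sketch that applies $(QQ^\top + \Gamma)^{-1}$ via conjugate gradients preconditioned by $\Gamma$, assuming for simplicity that $\Gamma$ is diagonal (given as a vector `gamma`). With this preconditioner, the preconditioned system is a rank-$k$ perturbation of the identity, so in exact arithmetic CG converges in at most $k+1$ steps.

```python
import numpy as np

def gauss_newton_step(theta, Omega, Q, grad_D, gamma, mu, sigma, k):
    """Update (4.13): theta - mu/(sigma^2 k) * Omega Q^T (Q Q^T + Gamma)^{-1} grad_D,
    applying (Q Q^T + Gamma)^{-1} by Gamma-preconditioned CG without ever
    forming the m x m matrix. `gamma` is the diagonal of Gamma."""
    matvec = lambda v: Q @ (Q.T @ v) + gamma * v       # (QQ^T + Gamma) v
    z = np.zeros_like(grad_D)
    r = grad_D.copy()                                  # residual b - A z, for z = 0
    y = r / gamma                                      # preconditioner solve
    p, ry = y.copy(), r @ y
    for _ in range(k + 1):   # rank-k perturbation of Gamma: <= k+1 CG steps
        Ap = matvec(p)
        alpha = ry / (p @ Ap)
        z += alpha * p
        r -= alpha * Ap
        y = r / gamma
        ry_new = r @ y
        if ry_new < 1e-28:
            break
        p = y + (ry_new / ry) * p
        ry = ry_new
    return theta - (mu / (sigma**2 * k)) * (Omega @ (Q.T @ z))
```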


[Figure 1: three panels showing the objective function landscape, the objective value $\mathcal{D}(F(\theta_j))$ versus iterations, and the distance $\|\theta_j - \theta^*\|$ versus iterations.]

Figure 1: Convergence of our methods for a regression problem with a highly oscillatory forward operator as described in Example 1. Left: plot of the objective function for m = 300, n = 200, l = 20, and ε = 1 along two randomly chosen orthogonal directions. Center: reduction of the objective function value for different values of k (indicated by colors) and different algorithms (dashed lines indicate Algorithm 1 and solid lines represent Algorithm 2). Right: distance to the true solution. Note the improvements achieved by increasing k and storing previous forward evaluations.

5 Numerical examples

In this section, we present results from two numerical experiments. First, we demonstrate our algorithm’s potential to converge on non-smooth objective functions. Second, we compare our method to a SGD scheme for the MNIST problem of classifying hand-written digits.

5.1 Application to nonlinear regression

We apply our method to the regression problem described in Example 1, which involves the highly oscillatory forward operator
$$F(\theta) = A\theta + \epsilon \sin(l B\theta). \tag{5.14}$$
Here, we choose $A$ and $B$ as random matrices of size $300\times 200$, we set $l$ to 20, and use $\epsilon = 1$. Figure 1 visualizes the resulting objective function and the results obtained using both versions of our method, as listed in Algorithm 1 and Algorithm 2, varying the number of particles $k \in \{5, 10, 15, 20, 25\}$. As can be seen from the plot of the objective function, this problem is challenging for gradient-based methods. Similar to implicit filtering methods [12], our method is better equipped to deal with this non-smoothness because of the spatial sampling used when approximating the gradient. We note that the convergence accelerates when more particles are used, and the benefit of storing intermediate evaluations of the forward model is obvious.

5.2 Image classification using Convolutional Neural Networks

We use our method to train the weights of a simple neural network to classify the hand-written digits in the MNIST data set [17]. The data set consists of 60,000 gray-scale images of resolution $28\times 28$ that are divided into 50,000 training and 10,000 test images, each associated with a label.

Here, we denote by $F(\theta, x)$ the forward propagation of an image $x$ through the CNN consisting of two hidden layers. The first one uses a convolution with $5\times 5$ stencils and 32 output channels and a ReLU activation function. This layer is followed by an average pooling operator and a second convolutional layer with $5\times 5$ stencils, 64 output channels, and also a ReLU activation. The last part of the forward model is a second average pooling operator, reducing the image size to $7\times 7$. The network has 52,000 weights.

As is common in classification using deep neural networks, a fully-connected layer followed by a softmax hypothesis function, $H$, is applied to the output of the neural network, and the result is compared to the given label using a cross-entropy loss function. In the notation of our method, we interpret these parts as being associated with the misfit function $\mathcal{D}$, but note that this function depends on the weights of the fully-connected layer, denoted by $W$.


[Figure 2: panels show the loss function and the classification accuracy in % on the training and test data, plotted against the number of forward propagations, for the proposed method (k = 10, with and without memory) and ADAM.]

Figure 2: Convergence of our derivative-free method with variable projection (blue) and the stochastic gradient method ADAM (red) for training a convolutional neural network with 2 layers to classify the MNIST dataset. Left: convergence of the loss function on the training (solid line) and test data (dashed line). Right: classification accuracy. The x-axis is scaled according to the number of forward propagations. It can be seen that ADAM generalizes slightly better while our proposed method converges considerably faster.

Overall, denoting the image-label pairs by $\{(x_i, c_i)\}_{i=1}^s$, we phrase the learning problem as the optimization problem
$$\min_{W,\theta}\ \frac{1}{s}\sum_{i=1}^s \mathcal{D}\big(H(W, F(\theta, x_i))\big) + R(W, \theta), \tag{5.15}$$
where $R$ is a regularizer. Here, for simplicity, we use a weight decay regularizer for $W$ with a weight of 100 and no regularization on $\theta$.

While the objective function in (5.15) is non-convex in the weights $\theta$, it is convex with respect to $W$ for the softmax cross-entropy loss function and regularizer at hand. Hence, for a given choice of $\theta$, the optimal $W^*(\theta)$ can be computed efficiently using, e.g., Newton's method, which is feasible in our case since the number of elements in $W$ is only 31,370. We exploit this structure in our method in the following way. In each iteration, we evaluate the forward operator $F$ for all the examples in the training data set. Then, we solve the classification problem using 10 iterations of an inexact Newton method with up to 20 iterations of a Conjugate Gradient (CG) scheme to determine the direction. Keeping $W^*(\theta)$ fixed, we approximate the Jacobian of the neural network by using Algorithm 1 with one mini batch consisting of 16 examples and four particles. Line search and updating of the current value of $\theta$ are done as described above.
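The alternating structure of this scheme can be summarized in a compact sketch. The synthetic data, the one-layer feature map standing in for the CNN, and scikit-learn's LogisticRegression standing in for the inexact Newton classification solve are all our illustrative substitutions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                 # synthetic stand-in for MNIST
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)

n_hid = 16
def features(theta, Xb):
    """Forward operator F(theta, x): hidden features of the 'network'."""
    return np.tanh(Xb @ theta.reshape(20, n_hid))

theta = 0.1 * rng.standard_normal(20 * n_hid)
k, sigma, mu = 4, 0.05, 1.0
for j in range(30):
    # (i) theta fixed: the classifier weights W*(theta) solve a convex problem.
    clf = LogisticRegression(max_iter=200).fit(features(theta, X), y)
    # (ii) derivative-free EnKF update of theta on one mini batch.
    idx = rng.choice(len(X), size=64, replace=False)
    Xb, yb = X[idx], y[idx]
    F0 = features(theta, Xb)
    p = clf.predict_proba(F0)[:, 1]
    g = np.outer(p - yb, clf.coef_[0]).ravel()     # grad of the loss w.r.t. F
    Omega = sigma * rng.standard_normal((theta.size, k))
    Q = np.stack([(features(theta + Omega[:, i], Xb) - F0).ravel()
                  for i in range(k)], axis=1)
    theta = theta - mu * Omega @ (Q.T @ g)         # fixed step; no line search here
```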

Figure 2 shows the convergence of our method in comparison with the stochastic gradient method ADAM with a mini batch size of 16 and a learning rate $\eta_j = 10^{-3}/\sqrt{j}$. The number of epochs is chosen so that an equal number of forward propagations is performed. Note that the x-axis scales with the number of forward propagations; we do not account for the back propagation in ADAM and the classification solves in our method. ADAM generalizes slightly better in this case, achieving a higher test accuracy (99.12% vs. 98.38%), but our method converges considerably faster both in terms of forward propagations and in terms of runtime (238.5 vs. 1,751 secs on a TITAN X with CUDNN 7.1 in MATLAB 2017b).

6 Conclusions

In this paper, we present a new derivative-free method for minimizing functions that are a concatenation of a misfit or loss function and a forward operator. Our method is derivative-free in the sense that the Jacobian of the forward operator is estimated using its evaluations at randomly chosen points around the current iterate, while the derivative of the misfit is computed exactly. Our method can be seen as a modified version of the EnKF with improved convergence properties for strongly convex problems. Borrowing ideas from variance-reduction methods, we present an improved version of our method that stores intermediate evaluations of the forward operator.

Our numerical experiments show two benefits of our method. First, we use a simple nonlinear least-squares problem with an oscillatory forward operator to demonstrate the smoothing property


of our method. In fact, our method can be seen as a randomized version of implicit filtering [12] applied to a randomly chosen low-dimensional subspace. Second, motivated by the works [15, 14], we show that our method achieves classification results comparable to the SGD variant ADAM for the MNIST data set. In particular, our method parallelizes better and achieves a lower loss value within only a few forward propagations. Notably, only four particles are used to optimize around 50,000 weights, which motivates the use of our method for more challenging learning tasks.

7 Acknowledgements

We thank Nikola Kovachki and Andrew M. Stuart (Caltech) for sharing their results on applying the EnKF to neural networks [15, 14], which motivated this work. LR's work is supported by the US National Science Foundation (NSF) awards DMS 1522599 and DMS 1751636. FL's work is supported in part by the European Union's Horizon 2020 research and innovation programme H2020 ICT 2016-2017 under grant agreement No 732411 (as an initiative of the Photonics Public Private Partnership) and by the Netherlands Organisation for Scientific Research (NWO 613.009.106/2383).

References

[1] D. P. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. Math Programming B, 129:123–165, 2011.

[2] L Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade. Springer, Berlin, Heidelberg, 2012.

[3] G. Evensen and P. J. van Leeuwen. An ensemble Kalman smoother for nonlinear dynamics. Mon. Wea. Rev., 128:1852–1867, 2000.

[4] Geir Evensen. The Ensemble Kalman Filter: theoretical formulation and practical implementation. Ocean Dynam., 53, 2003.

[5] A. Fahimuddin. 4D Seismic History Matching Using the Ensemble Kalman Filter (EnKF): Possibilities and Challenges. Ph.d. dissertation, The University of Bergen, 2010. Norway.

[6] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

[7] P. L. Houtekamer and H. L. Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129:123–137, January 2001.

[8] Brian R. Hunt, Eric J. Kostelich, and Istvan Szunyogh. Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230:112–126, 2007.

[9] Marco A. Iglesias. Iterative regularization for ensemble data assimilation in reservoir models. Computa-tional Geosciences, 19(1):177–212, Feb 2015.

[10] Marco A. Iglesias, Kody J. H. Law, and Andrew M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.

[11] A. Juditsky, G. Lan, A. Nemirovski, and A. Shapiro. Stochastic approximation approach to stochastic programming. SIAM J. Sci. Optim., 19:1574–1609, 2009.

[12] C.T. Kelley. Implicit Filtering. SIAM, Philadelphia, 2011.

[13] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. arXiv.org, 2016.

[14] Nikola Kovachki and Andrew M. Stuart. Derivative-free ensemble methods for machine learning tasks (slides for presentation at CM+X Workshop available at http://cmx.caltech.edu/ipml-program.html), February 2018.

[15] Nikola Kovachki and Andrew M Stuart. Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. in preparation, 2018.

[16] N. Le Roux, M. Schmidt, and F. Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. ArXiv e-prints, February 2012.

[17] Y. LeCun, B. E. Boser, and J. S. Denker. Handwritten digit recognition with a back-propagation network. Advances in Neural Information Processing Systems, 1990.

[18] J.J. Linderoth, A. Shapiro, and S. Wright. The empirical behavior of sampling methods for stochastic programming. Annals of Operations Research, 142:215–241, 2006.

[19] V. Nenna, A. Pidlisecky, and R. Knight. Application Of An Extended Kalman Filter Approach To Inversion Of Time-lapse Electrical Resistivity Imaging Data For Monitoring Infiltration. Water Resources Research, 78:120–132, 2011.

[20] Edward Ott, Brian R. Hunt, Istvan Szunyogh, Aleksey V. Zimin, Eric J. Kostelich, Matteo Corazza, Eugenia Kalnay, D. J. Patil, and James A. Yorke. A local ensemble Kalman filter for atmospheric data assimilation. Tellus A, 56(5):415–428, 2004.


[21] F. Petroski Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune. Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. ArXiv e-prints, December 2017.

[22] D. E. Rumelhart, Geoffrey Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–538, 1986.

[23] C. Schillings and A. M. Stuart. Analysis of the ensemble Kalman filter for inverse problems. SIAM J. Numerical Analysis, 39:1264–1290, 2017.

[24] A. Shapiro, D. Dentcheva, and D. Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia, 2009.

[25] Michel Verhaegen and Paul Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31(10):907–917, 1986.

[26] J. Weston, F. Ratle, H. Mobahi, and R. Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science, volume 7700, pages 639–655, 2012.

[27] Chiyuan Zhang, Qianli Liao, Alexander Rakhlin, Karthik Sridharan, Brando Miranda, Noah Golowich, and Tomaso Poggio. Theory of deep learning III: Generalization properties of SGD. Technical report, Center for Brains, Minds and Machines (CBMM), 2017.
