
Style augmentation : data augmentation via style randomization.



13 May 2019

Version of attached file:

Accepted Version

Peer-review status of attached file:

Peer-reviewed

Citation for published item:

Jackson, Philip and Atapour-Abarghouei, Amir and Bonner, Stephen and Breckon, Toby and Obara, Boguslaw (2019) 'Style augmentation : data augmentation via style randomization.', IEEE/CVF Conference on Computer Vision and Pattern Recognition, Deep Vision Long Beach, CA, USA, 16-20 June 2019.

Further information on publisher's website:

http://cvpr2019.thecvf.com/


Use policy

The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:

• a full bibliographic reference is made to the original source

• a link is made to the metadata record in DRO

• the full-text is not changed in any way

The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.


Style Augmentation: Data Augmentation via Style Randomization

Anonymous CVPR submission

Paper ID 22

Abstract

We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of Convolutional Neural Networks (CNN) over both classification and regression based tasks. During training, style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, by sampling target style embeddings from a multivariate normal distribution instead of computing them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating superior performance over benchmark tasks.

1. Introduction

Whilst deep neural networks have shown record-breaking performance on complex machine learning tasks over the past few years, exceeding human performance levels in certain cases, most deep models heavily rely on large quantities of annotated data for individual tasks, which is often expensive to obtain. A common solution is to augment smaller datasets by creating new training samples from existing ones via label-preserving transformations [39].

Data augmentation imparts prior knowledge to a model by explicitly teaching invariance to possible transforms that preserve semantic content. This is done by applying said transform to the original training data, producing new samples whose labels are known. For example, horizontal flipping is a popular data augmentation technique [18], as it clearly does not change the corresponding class label. The most prevalent forms of image-based data augmentation include geometric distortions such as random cropping, zooming, rotation, flipping, linear intensity scaling and elastic deformation. Whilst these are successful at teaching rotation and scale invariance to a model, what of color, texture and complex illumination variations?

Tobin et al. [33] show that it is possible for an object detection model to generalize from graphically rendered virtual environments to the real world, by randomizing color, texture, illumination and other aspects of the virtual scene. It is interesting to note that, rather than making the virtual scene as realistic as possible, they attain good generalization by using an unrealistic but diverse set of random textures. In contrast, Atapour & Breckon [1] train on highly photorealistic synthetic images, but find that the model generalizes poorly to data from the real world. They are able to rectify this by using CycleGAN [44] and fast neural style transfer [17] to transform real world images into the domain of the synthetic images. These results together suggest that deep neural networks can overfit to subtle differences in the distribution of low-level visual features, and that randomizing these aspects at training time may result in better generalization. However, in the typical case where the training images come not from a renderer but from a camera, this randomization must be done via image manipulation, as a form of data augmentation. It is not clear how standard data augmentation techniques could introduce these subtle, complex and ill-defined variations.

Neural style transfer [9] offers the possibility to alter the distribution of low-level visual features in an image whilst preserving semantic content. Exploiting this concept, we propose Style Augmentation, a method to use style transfer to augment arbitrary training images, randomizing their color, texture and contrast whilst preserving geometry (see Figure 1). Although the original style transfer method was a slow optimization process that was parameterized by a target style image [9], newer approaches require only a single forward pass through a style transfer network, which is parameterized by a style embedding [10]. This is important, because in order to be effective for data augmentation, style transfer must be both fast and randomized. Since the style transfer algorithm used in our work is parameterized by an R^100 embedding vector, we are able to sample that embedding from a multivariate normal distribution, which is faster, more convenient and permits greater diversity than sampling from a finite set of style images.

Figure 1: Style augmentation applied to an image from the Office dataset [24] (original in top left). Shape is preserved but the style, including texture, color and contrast, are randomized.

In addition to standard classification benchmarks, we evaluate our approach on a range of domain adaptation tasks. To the best of our knowledge, this is the first time data augmentation has been tested for domain adaptation. Ordinarily, data augmentation is used to reduce overfitting and improve generalization to unseen images from the same domain, but we reason that domain bias is a form of overfitting, and should therefore benefit from the same countermeasures. Data augmentation is not domain adaptation, but it can reduce the need for domain adaptation, by training a model that is more general and robust in the first place. Although this approach may not exceed the performance of domain adaptation to a specific target domain, it has the advantage of improving accuracy on all potential target domains before they are even seen, and without requiring separate procedures for each.

In summary, this work explores the possibility of performing data augmentation via style randomization in order to train more robust models that generalize to data from unseen domains more effectively. Our primary contributions can thus be summarized as follows:

• Style randomization - We propose a novel and effective method for randomizing the action of a style transfer network to transform any given image such that it contains semantically valid but random styles.

• Style augmentation - We utilize the randomized action of the style transfer pipeline to augment image datasets to greatly improve downstream model performance across a range of tasks.

• Omni-directional domain transfer - We evaluate the effectiveness of using style augmentation to implicitly improve performance on domain transfer tasks, which ordinarily require adapting a model to a specific target domain post-training.

These contributions are reinforced via detailed experimentation, supported by hyperparameter grid searches, on multiple tasks and model architectures. We open source our PyTorch implementation as a convenient data augmentation package for deep learning practitioners∗.
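To illustrate how such a package is intended to be used, the following is a minimal sketch of dropping style augmentation into a standard PyTorch training pipeline. The augmentor callable, its arguments and the probability wrapper shown here are hypothetical placeholders, not the exact API of our released implementation.

```python
import torch

class RandomApplyStyle:
    """Wrap any style augmentor callable and apply it with probability p.

    `augmentor` is assumed to map a (B, 3, H, W) image tensor to a restyled
    tensor of the same shape, accepting a style strength `alpha` (see Section 3.2).
    """
    def __init__(self, augmentor, p=0.5, alpha=0.5):
        self.augmentor = augmentor  # hypothetical style transfer callable
        self.p = p                  # augmentation probability (a 2:1 ratio corresponds to p = 0.5)
        self.alpha = alpha          # style interpolation strength

    def __call__(self, images):
        if torch.rand(1).item() < self.p:
            return self.augmentor(images, alpha=self.alpha)
        return images
```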

2. Related Work

2.1. Domain Bias

The issue of domain bias or domain shift [12] has long plagued researchers working on the training of discriminative, predictive, and generative models. In short, the problem is that a typical model trained on a specific distribution of data from a particular domain will not generalize well to other datasets not seen during training. For example, a depth estimation model trained on images captured from roads in Florida may fail when deployed on German roads [35], even though the task is the same and even if the training dataset is large. Domain shift can also be caused by subtle differences between distributions, such as variations in camera pose, illumination, lens properties, background and the presence of distractors.

A typical solution to the problem of domain shift is transfer learning, in which a network is pre-trained on a related task with a large dataset and then fine-tuned on the new data [26]. This can reduce the risk of overfitting to the source domain because convolutional features learned on larger datasets are more general [41]. However, transfer learning requires reusing the same architecture as that of the pre-trained network and a careful application of layer freezing and early stopping to prevent the prior knowledge being forgotten during fine-tuning.

Another way of addressing domain shift is domain adaptation, which encompasses a variety of techniques for adapting a model post training to improve its accuracy on a specific target domain. This is often accomplished by minimizing the distance between the source and target feature distributions in some fashion [6, 11, 14, 21, 22, 34]. Certain strategies have been proposed to minimize Maximum Mean Discrepancy (MMD), which represents the distance between the domains [22, 30], while others have used adversarial training to find a representation that minimizes the domain discrepancy without compromising source accuracy [11, 14, 34]. Although many adversarial domain adaptation techniques focus on discriminative models, research on generative tasks has also utilized domain transfer [6]. Li et al. [21] propose adaptive batch normalization to reduce the discrepancy between the two domains. More relevant to our work is [1], which employs image style transfer as a means to perform domain adaptation based on [20].

Even though domain adaptation is often effective and can produce impressive results, its functionality is limited in that it can only help a model generalize to a specific target domain. In contrast, our approach introduces more variation into the source domain by augmenting the data (Section 2.3), which can enhance the overall robustness of the model, leading to better generalization to many potential target domains, without first requiring data from them.

2.2. Style Transfer

Style transfer refers to a class of image processing algorithms that modify the visual style of an image while preserving its semantic content. In the deep learning literature, these concepts are formalized in terms of deep convolutional features in the seminal work of Gatys et al. [9]. Style is represented as a set of Gram matrices [25] that describe the correlations between low-level convolutional features, while content is represented by the raw values of high level semantic features. Style transfer extracts these representations from a pre-trained loss network (traditionally VGG [28]), and uses them to quantify style and content losses with respect to target style and content images and combines them into a joint objective function. Formally, the content and style losses can be defined as:

$$\mathcal{L}_c = \sum_{i \in C} \frac{1}{n_i} \left\| f_i(x) - f_i(c) \right\|_F^2, \qquad (1)$$

$$\mathcal{L}_s = \sum_{i \in S} \frac{1}{n_i} \left\| G[f_i(x)] - G[f_i(s)] \right\|_F^2, \qquad (2)$$

where c, s and x are the content, style and restyled images, f is the loss network, f_i(x) is the activation tensor of layer i after passing x through f, n_i is the number of units in layer i, C and S are sets containing the indices of the content and style layers, G[f_i(x)] denotes the Gram matrix of layer i activations of f, and ||·||_F denotes the Frobenius norm.

The overall objective can then be expressed as:

$$\min_x \; \mathcal{L}_c(x, c) + \lambda \mathcal{L}_s(x, s), \qquad (3)$$

where λ is a scalar hyperparameter determining the relative weights of style and content loss. Originally, this objective was minimized directly by gradient descent in image space [9]. Although the results are impressive, this process is very computationally inefficient, leading to the emergence of alternative approaches that use neural networks to approximate the global minimum of the objective in a single forward pass [3, 17, 36]. These are fully-convolutional networks that are trained to restyle an input image while preserving its content. Although much faster, these networks only learn to apply a single style, and must be re-trained if a different style is required, hence enabling only single-domain rather than the multi-domain adaptability proposed here. Building on the work of [37], and noting that there are many overlapping characteristics between styles (e.g. brushstrokes), Dumoulin et al. [7] train one network to apply up to 32 styles using conditional instance normalization, which sets the mean and standard deviation of each intermediate feature map to different learned values for each style. Ghiasi et al. [10] generalize this to fully arbitrary style transfer, by using a fine-tuned InceptionV3 network [31] to predict the renormalization parameters from the style image. By training on a large dataset of style and content images, the network is able to generalize to unseen style images. Concurrently, Huang et al. [15] match the mean and variance statistics of a convolutional encoding of the content image with those of the style image, then decode into a restyled image, while Yanai [40] concatenates a learned style embedding onto an early convolutional layer in a style transformer network similar to that of Johnson et al. [17].
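To make the objective in Eqns. (1)-(3) concrete, the following is a minimal PyTorch sketch of the Gram-matrix style loss and the feature-space content loss. The lists features_x, features_c and features_s are assumed to hold activation tensors extracted from a fixed loss network such as VGG; the normalization used here is a simplification of the 1/n_i factor.

```python
import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) activations; returns a (B, C, C) Gram matrix
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def content_loss(features_x, features_c):
    # Eqn. (1): squared distance between content-layer activations (mean folds in 1/n_i)
    return sum(((fx - fc) ** 2).mean() for fx, fc in zip(features_x, features_c))

def style_loss(features_x, features_s):
    # Eqn. (2): squared Frobenius distance between Gram matrices of style layers
    return sum(((gram_matrix(fx) - gram_matrix(fs)) ** 2).sum(dim=(1, 2)).mean()
               for fx, fs in zip(features_x, features_s))

def total_loss(features_x, features_c, features_s, lam=1.0):
    # Eqn. (3): joint objective with style weight lambda
    return content_loss(features_x, features_c) + lam * style_loss(features_x, features_s)
```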

In this work, while we utilize the approach presented in [10] as part of our style randomization procedure, any style transfer method capable of dealing with unseen arbitrary styles can be used as an alternative, with the quality of the results dependent on the efficacy of the style transfer approach.

2.3. Data Augmentation

Ever since the work of Krizhevsky et al. [18], data augmentation has been a standard technique for improving the generalization of deep neural networks. Data augmentation artificially inflates a dataset by using label-preserving transforms to derive new examples from the originals. For example, [18] creates ten new samples from each original by cropping in five places and mirroring each crop horizontally. Data augmentation is actually a way of explicitly teaching invariance to whichever transform is used; therefore any transform that mimics intra-class variation is a suitable candidate. For example, the MNIST (handwritten digit) dataset [19] can be augmented using elastic distortions that mimic the variations in pen stroke caused by uncontrollable hand muscle oscillations [4, 27]. Yaeger et al. [39] also use the same technique for balancing class frequencies, by producing augmentations for under-represented classes. Wong et al. [38] compare augmentations in data space versus feature space, finding data augmentations to be superior.

Bouthillier et al. [2] argue that dropout [29] corresponds to a type of data augmentation, and propose a method for projecting dropout noise back into the input image to create augmented samples. Likewise, Zhong et al. [43] present random erasing as a data augmentation, in which random rectangular regions of the input image are erased. This is directly analogous to dropout in the input space and is shown to improve robustness to occlusion.

The closest work to ours is that by Geirhos et al. [8], who have recently shown that CNNs trained on ImageNet are more reliant on textures than they are on shape. By training ResNet-50 on a version of ImageNet with randomized textures (a procedure that amounts to performing style augmentation on all images), they are able to force the same network to rely on shape instead of texture. This not only agrees more closely with human behavioural experiments, but also confers unexpected bonuses to detection accuracy when the weights are used in Faster R-CNN, and robustness to many image distortions that did not occur in the training set. Our work corroborates and extends these results by showing an additional benefit in robustness to domain shift, and shows that style randomization can be used as a convenient and effective data augmentation technique.

3. Proposed Approach

For style transfer to be used as a data augmentation technique, we require a single style transfer algorithm that is both fast and capable of applying as broad a range of styles as possible. These requirements narrow our search space considerably, since most approaches are either too inefficient [9] or can only apply a limited number of styles [7, 17]. We chose the approach of Ghiasi et al. [10], for its speed, flexibility, and visually compelling results. A critical part of our data augmentation technique is providing a method for randomizing the action of the style transfer network. In this section we will introduce the style transfer pipeline we utilize and detail our novel randomization procedure.

3.1. Style Transfer Pipeline

Our chosen style transfer network (detailed in Figure 2) employs a style predictor network to observe an arbitrary style image and output a style embedding z ∈ R^100. For our approach we completely dispense with this style predictor network; instead, we sample the style embedding directly from a multivariate normal distribution. The mean and covariance of this distribution are matched to those of the distribution of style embeddings arising from the Painter By Numbers (PBN) dataset†, which are used as training data for the style transfer network. Therefore, sampling from this distribution simulates choosing a random PBN image and computing its style embedding, at much lower computational cost, and without requiring the entire PBN dataset. Additionally, the size and diversity of this dataset forces the network to learn a robust mapping that generalizes well to unseen style images, much like large labelled datasets enabling classification networks to generalize well.

Figure 2: Diagram of the arbitrary style transfer pipeline of Ghiasi et al. [10].
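As an illustration, the empirical mean and covariance of this embedding distribution could be estimated offline as sketched below; style_predictor and pbn_loader are hypothetical names for the pre-trained style prediction network and a loader over Painter By Numbers images.

```python
import torch

@torch.no_grad()
def fit_embedding_distribution(style_predictor, pbn_loader, device="cuda"):
    """Estimate the empirical mean and covariance of style embeddings z in R^100."""
    embeddings = []
    for style_images in pbn_loader:                   # batches of PBN style images
        z = style_predictor(style_images.to(device))  # (B, 100) style embeddings
        embeddings.append(z.cpu())
    z_all = torch.cat(embeddings, dim=0)              # (N, 100)
    mu = z_all.mean(dim=0)                            # empirical mean
    sigma = torch.cov(z_all.T)                        # empirical (100, 100) covariance
    return mu, sigma
```

These are the μ and Σ used for sampling in Section 3.2.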

The style embedding z influences the action of the transformer network via conditional instance normalization [7], in which activation channels are shifted and rescaled based on the style embedding. Concretely, if x is a feature map prior to normalization, then the renormalized feature map is as follows:

$$x' = \gamma \left( \frac{x - \mu}{\sigma} \right) + \beta, \qquad (4)$$

where μ and σ are respectively the mean and the standard deviation across the feature map spatial axes, and β and γ are scalars obtained by passing the style embedding through a fully-connected layer. As shown in Figure 2, all convolutional layers except for the first three perform conditional instance renormalization. In this way, the transformer network output x is conditioned on both the content image and the style image:

$$x = T(c, P(s)). \qquad (5)$$
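A minimal sketch of conditional instance normalization as described by Eqn. (4) is given below: a single fully-connected layer maps the style embedding to per-channel β and γ, which renormalize the feature map. This module is illustrative and is not the exact implementation of [10].

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm(nn.Module):
    def __init__(self, num_channels, embedding_dim=100):
        super().__init__()
        # One linear layer predicts gamma and beta for every channel
        self.affine = nn.Linear(embedding_dim, 2 * num_channels)

    def forward(self, x, z):
        # x: (B, C, H, W) feature map, z: (B, embedding_dim) style embedding
        gamma, beta = self.affine(z).chunk(2, dim=1)    # (B, C) each
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)       # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        mu = x.mean(dim=(2, 3), keepdim=True)           # spatial mean
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5  # spatial standard deviation
        return gamma * (x - mu) / sigma + beta          # Eqn. (4)
```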

3.2. Randomization Procedure

Randomizing the action of the style transfer pipeline is as simple as randomizing the style embedding that determines the output style. Ordinarily, this embedding is produced by the style predictor network, as a function of the given style image. Rather than feeding randomly chosen style images through the style predictor to produce random style embeddings, it is more computationally efficient to simulate this process by sampling them directly from a probability distribution. However, it is important that this probability distribution closely resembles the distribution of embeddings observed during training. Otherwise, we risk supplying an embedding unlike any that were observed during training, which may produce unpredictable behavior. We use a multivariate normal as our random embedding distribution, the mean and covariance of which are the empirical mean and covariance of the set of all embeddings of PBN images. Qualitatively, we find that this approximation is sufficient to produce diverse yet sensible stylizations (see Figure 1).

Figure 3: Output of the transformer network with different values for the style interpolation parameter α, from the original image (α = 0.0) to full stylization (α = 1.0).

To provide control over the strength of augmentation (see Figure 3), the randomly sampled style embedding can be linearly interpolated with the style embedding of the input image, P(c). Passing P(c) instructs the transformer network to change the image style to the style it already has, thus leaving it mostly unchanged. In general, our random embedding is therefore a function of the input content image c:

$$z = \alpha \mathcal{N}(\mu, \Sigma) + (1 - \alpha) P(c), \qquad (6)$$

where P is the style predictor network, and μ, Σ are the mean vector and covariance matrix of the style image embeddings P(s):

$$\mu = \mathbb{E}_s[P(s)], \qquad (7)$$

$$\Sigma_{i,j} = \mathrm{Cov}\left[ P(s)_i, P(s)_j \right]. \qquad (8)$$
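Putting Eqns. (6)-(8) together, the randomization step can be sketched as follows; mu and sigma are the empirical statistics of the PBN embeddings (Section 3.1) and content_embedding stands in for P(c).

```python
import torch

def random_style_embedding(mu, sigma, content_embedding, alpha=0.5):
    """Sample z = alpha * N(mu, Sigma) + (1 - alpha) * P(c), as in Eqn. (6)."""
    # Assumes sigma is positive-definite; add a small ridge to its diagonal if needed.
    dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=sigma)
    z_random = dist.sample((content_embedding.shape[0],))  # one sample per image
    return alpha * z_random + (1.0 - alpha) * content_embedding
```

The resulting embedding is then passed to the transformer network in place of P(s), so the restyled output is conditioned on both the content image and the randomized style.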

4. Experimental Results

We evaluate our proposed style augmentation method on three distinct tasks: image classification, cross-domain classification and depth estimation. We present results on the STL-10 classification benchmark [5] (Section 4.1), the Office domain transfer benchmark [24] (Section 4.2), and the KITTI depth estimation benchmark [35] (Section 4.3). We also perform a hyperparameter search to determine the best ratio of unaugmented to augmented training images and the best augmentation strength α (see Eqn. 6). In all experiments, we use a learning rate of 10^-4 and weight decay of 10^-5, and we use the Adam optimizer (momentum β1 = 0.5, β2 = 0.999, initial learning rate of 0.001).

Although we evaluate style augmentation on domain transfer tasks, our results should not be compared directly with those of domain adaptation methods. Domain adaptation uses information about a specific target domain to improve performance on that domain. In contrast, data augmentation is domain agnostic, improving generalization to all domains without requiring information about any of them. Therefore we compare our approach against other data augmentation techniques.

Figure 4: Hyperparameter searches on augmentation ratio and style transfer strength (α). Curves are averaged over four experiments; error bars denote one standard deviation. Blue lines depict unaugmented baseline accuracy.

Task     Model        None    Trad    Style   Both
AW→D     InceptionV3  0.789   0.890   0.882   0.952
         ResNet18     0.399   0.704   0.495   0.873
         ResNet50     0.488   0.778   0.614   0.922
         VGG16        0.558   0.830   0.551   0.870
DW→A     InceptionV3  0.183   0.160   0.254   0.286
         ResNet18     0.113   0.128   0.147   0.229
         ResNet50     0.130   0.156   0.170   0.244
         VGG16        0.086   0.149   0.111   0.243
AD→W     InceptionV3  0.695   0.733   0.767   0.884
         ResNet18     0.414   0.600   0.424   0.762
         ResNet50     0.491   0.676   0.508   0.825
         VGG16        0.465   0.679   0.426   0.752

Table 1: Test accuracies on the Office dataset [24] with A, D and W denoting the Amazon, DSLR and Webcam domains.

Figure 5: Comparing test accuracy curves for a standard classification task on the STL-10 dataset [5].

4.1. Image Classification

We evaluate our style augmentation on the STL-10 dataset [5]. STL-10 consists of 10 classes with only 500 labelled training examples each, a typical case in which data augmentation would be crucial since the number of labelled training images is limited.

Prior to the final optimization, we perform a hyperparameter search to determine the optimal values for the ratio of unaugmented to augmented images and the strength of the style transfer, as determined by the interpolation hyperparameter α. We train the InceptionV3 [31] architecture to classify STL-10 images, performing 40,000 iterations, augmenting the data with style augmentation, and we repeat each experiment four times with different random seeds.

First we test augmentation ratios, interpolating in factors of two from 16:1 (unaugmented : augmented) to 1:32. Since we do not know the optimal value of α, we sample it uniformly at random from the interval [0, 1] in these experiments. Figure 4 (left) demonstrates the results of this search. We plot the final test accuracy after 40,000 iterations. A ratio of 2:1 (corresponding to an augmentation probability of 0.5) appears to be optimal. Fixing the augmentation ratio at 2:1, we repeat the experiment for α and find an optimal value of 0.5 (Figure 4, right). Style augmentation takes 2.0 ms on average per image on a GeForce 1080Ti, which corresponds to a 6% training time increase on this task when the optimal augmentation ratio of 2:1 is used. If time is critical, the augmentation ratio can be set as low as 16:1 and still provide a significant accuracy boost, as Figure 4 shows.

With suitable hyperparameters determined, we next compare style augmentation against a comprehensive mix of seven traditional augmentation techniques: horizontal flipping, small rotations, zooming (which doubles as random cropping), random erasing [43], shearing, conversion to grayscale and random perturbations of hue, saturation, brightness and contrast. As in the hyperparameter search, we train InceptionV3 [31] to 40,000 iterations on the 5,000 labeled images in STL-10. As seen in Figure 5, while style augmentation alone leads to faster convergence and better final accuracy versus the unaugmented baseline, in combination with the seven traditional augmentations, it yields an improvement of 8.5%.
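For reference, this traditional augmentation mix can be approximated with standard torchvision transforms as sketched below; the specific magnitudes (rotation angle, crop scale, jitter strengths, erasing probability) are illustrative assumptions rather than the exact values used in our experiments.

```python
from torchvision import transforms

# Approximate sketch of the seven traditional augmentations (magnitudes are illustrative)
traditional_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(),                    # horizontal flipping
    transforms.RandomRotation(degrees=10),                # small rotations
    transforms.RandomResizedCrop(96, scale=(0.8, 1.0)),   # zooming / random cropping (STL-10 is 96x96)
    transforms.RandomAffine(degrees=0, shear=10),         # shearing
    transforms.RandomGrayscale(p=0.1),                    # conversion to grayscale
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),     # hue/saturation/brightness/contrast
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),                     # random erasing [43]
])
```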

Moreover, without using any of the unlabeled data in STL-10 for unsupervised training, we achieve a final test accuracy of 80.8% after 100,000 iterations of training. This surpasses the reported state of the art [32, 42], using only supervised training with strong data augmentation.

4.2. Cross-Domain Classification

To test the effect of our approach on generalization to unseen domains, we apply style augmentation to the Office cross-domain classification dataset [24]. The Office dataset consists of 31 classes and is split into three domains: Amazon, DSLR and Webcam. The classes are typical objects found in office settings, such as staplers, mugs and desk chairs. The Amazon domain consists of 2817 images scraped from Amazon product listings, while DSLR and Webcam contain 498 and 795 images, captured in an office environment with a DSLR camera and webcam, respectively. We test the effect of style augmentation by training standard classification models on the union of two domains, and testing on the other. We also compare the effects of style augmentation on four different convolutional architectures: InceptionV3 [31], ResNet18 [13], ResNet50 [13] and VGG16 [28]. For each combination of architecture and domain split, we compare test accuracy with no augmentation (None), traditional augmentation (Trad), style augmentation (Style) and the combination of style augmentation and traditional augmentation (Both). Traditional augmentation refers to the same mix of techniques as in Section 4.1.

Figure 6: Results of the experiments using the Office dataset. Note the consistent superiority of traditional augmentation techniques combined with style augmentation (red curve).

Figure 6 shows test accuracy curves for these experiments, and Table 1 contains final test accuracies. In certain cases, style augmentation alone (green curve) outperforms all seven techniques combined (orange curve), particularly when the InceptionV3 architecture [31] is used. This points to the strength of our style augmentation technique and the invariances it can introduce into the model to prevent overfitting.

An extreme domain shift is introduced into the model when the union of the DSLR and Webcam domains is used for training and the network is tested on the Amazon domain. This is due to the large discrepancy between the Amazon images and the other two domains and makes the classification task extremely difficult. However, as seen in Figure 6, our style augmentation technique is capable of consistently improving the test accuracy even though the unaugmented model is barely outperforming random guesswork. In all experiments, the combination of our style augmentation and traditional techniques achieves the highest final accuracy and fastest convergence (see Figure 6).

To confirm that the benefits of style augmentation could not be realized more easily with simple colour space distortions, we ablate against color jitter augmentation, i.e. random perturbations in hue, contrast, saturation and brightness (see Table 2). The experiment shows that style augmentation confers accuracy gains at least 4% higher than those resulting from color jitter.

                     AD→W    AW→D    DW→A
Unaugmented          0.684   0.721   0.152
Color Jitter         0.726   0.850   0.185
Style Augmentation   0.765   0.893   0.215

Table 2: Comparing style augmentation against color jitter (test accuracies on Office, with InceptionV3).

4.3. Monocular Depth Estimation

Finally, we evaluate our approach within monocular depth estimation - the task of accurately estimating depth information from a single image. The supervised training of a monocular depth estimation model is especially challenging as it requires large quantities of ground truth depth data, which is extremely expensive and difficult to obtain. An increasingly common way to circumvent this problem is to capture synthetic images from virtual environments, which can provide perfect per-pixel depth data for free [1]. However, due to domain shift, a model trained on synthetic imagery may not generalize well to real-world data.

Using our style augmentation approach, we train a supervised monocular depth estimation network on 65,000 synthetic images captured from the virtual environment of a gaming application [23]. The depth estimation network is a modified U-net with skip connections between every pair of corresponding layers in the encoder and decoder [1] and is trained using a global ℓ1 loss along with an adversarial loss to guarantee mode selection [16]. By using style augmentation, we hypothesise that the model will learn invariance towards low-level visual features such as texture and illumination, instead of overfitting to them. The model will therefore generalize better to real-world images, where these attributes may differ. Examples of synthetic images with randomized styles are displayed in Figure 7.

Figure 7: Examples of input monocular synthetic images post style augmentation.
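For clarity, the supervised objective described above can be sketched as an ℓ1 reconstruction term combined with an adversarial term in the spirit of [16]; the weighting lambda_adv and the non-saturating GAN formulation used here are assumptions for illustration, not the exact configuration of [1].

```python
import torch
import torch.nn.functional as F

def depth_generator_loss(pred_depth, gt_depth, disc_fake_logits, lambda_adv=0.01):
    """Global L1 reconstruction loss plus an adversarial term (cf. [1, 16])."""
    l1 = F.l1_loss(pred_depth, gt_depth)
    # Encourage the discriminator to classify predicted depth maps as real
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return l1 + lambda_adv * adv
```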

Quantitative and qualitative evaluations were run using the test split in the KITTI dataset [35]. Similar to our classification experiments, we compare style augmentation against traditional data augmentation techniques. However, since object scale is such a vital cue for depth estimation, any transformations that rescale the image must be ruled out. This eliminates zooming, shearing and random cropping (which requires rescaling to keep the cropped regions a constant size). Random erasing makes no sense in this context since we never estimate the depth to an occluded point. Rotation seems promising, but was empirically found to worsen the results. This leaves horizontal flipping, conversion to grayscale, and perturbations of hue, saturation, contrast and brightness as our traditional augmentations for depth estimation.

Augmentation   Error Metrics (lower, better)              Accuracy Metrics (higher, better)
               Abs. Rel.   Sq. Rel.   RMSE    RMSE log    σ<1.25   σ<1.25²   σ<1.25³
None           0.280       0.051      0.135   0.606       0.656    0.862     0.926
Trad           0.266       0.045      0.128   0.527       0.671    0.872     0.936
Style          0.256       0.040      0.123   0.491       0.696    0.886     0.942
Both           0.255       0.041      0.123   0.490       0.698    0.890     0.945

Table 3: Comparing the results of a monocular depth estimation model [1] trained on synthetic data when tested on real-world images from [35].
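For completeness, the error and accuracy metrics reported in Table 3 follow the standard monocular depth evaluation definitions, which can be computed as sketched below (this is the conventional formulation, not code from our implementation).

```python
import torch

def depth_metrics(pred, gt):
    """Abs. Rel., Sq. Rel., RMSE, RMSE log and the sigma < 1.25^k accuracies."""
    valid = gt > 0                              # evaluate only where ground truth exists
    pred = pred[valid].clamp(min=1e-3)          # assume strictly positive predicted depths
    gt = gt[valid]
    abs_rel = torch.mean(torch.abs(pred - gt) / gt)
    sq_rel = torch.mean((pred - gt) ** 2 / gt)
    rmse = torch.sqrt(torch.mean((pred - gt) ** 2))
    rmse_log = torch.sqrt(torch.mean((torch.log(pred) - torch.log(gt)) ** 2))
    ratio = torch.max(pred / gt, gt / pred)
    deltas = [torch.mean((ratio < 1.25 ** k).float()) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```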

Figure 8: Results of the unaugmented model (None), traditional augmentation (Trad), style augmentation (Style) and complete augmentation (Both) applied to depth estimation on KITTI [35].

As seen in the numerical results in Table 3, models trained with style augmentation generalize better than those trained on traditionally augmented data. These results suggest that style augmentation may be a useful tool in monocular depth estimation, given that most traditional augmentations cannot be used, and the ones that can be used made little difference. Moreover, qualitative results seen in Figure 8 indicate how our augmentation approach can produce sharper output depth with fewer artefacts.

5. Discussion

The information imparted to the downstream network by style augmentation, in the form of additional labelled images, is ultimately derived from the pre-trained VGG network which forms the loss function of the transformer network (see Eqns. 1, 2). Our approach can therefore be interpreted as transferring knowledge from the pre-trained VGG network to the downstream network. By learning to alter style while minimizing the content loss, the transformer network learns to alter images in ways which the content layer (i.e. a high level convolutional layer in pretrained VGG) is invariant to. In this sense, style augmentation transfers image invariances directly from pretrained VGG to the downstream network.

The case for our style augmentation method is strengthened by the work of Geirhos et al. [8], who recently showed that CNNs trained on ImageNet learn highly texture-dependent representations, at the expense of shape sensitivity. This supports our hypothesis that CNNs overfitting to texture is a significant cause of domain bias in deep vision models, and heavily suggests style augmentation as a practical tool for combating it.

As in [8], we found that style augmentation worsens accuracy on ImageNet - this conforms to our overall hypothesis, since texture correlates strongly enough with class label that CNNs can achieve good accuracy by relying on it almost entirely, and style augmentation removes this correlation. We do however find that style augmentation moderately improves validation accuracy on STL-10, suggesting that some image classification datasets have stronger correlation between textures and labels than others.

6. Conclusion

We have presented style augmentation, a novel approach for image-based data augmentation driven by style transfer. Style augmentation uses a style transfer network to perturb the color and texture of an image, whilst preserving shape and semantic content, with the goal of improving the robustness of any downstream convolutional neural networks. Our experiments demonstrate that our approach yields significant improvements in test accuracy on several computer vision tasks, particularly in the presence of domain shift. This provides evidence that CNNs are heavily reliant on texture, that texture reliance is a significant factor in domain bias, and that style augmentation is viable as a practical tool for deep learning practitioners to mitigate domain bias and reduce overfitting.


References

[1] A. Atapour-Abarghouei and T. P. Breckon. Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer. In Conf. Computer Vision and Pattern Recognition, pages 1–8, 2018.
[2] X. Bouthillier, K. Konda, P. Vincent, and R. Memisevic. Dropout as data augmentation. arXiv:1506.08700, 2015.
[3] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. In Workshop in Constructive Machine Learning, 2016.
[4] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets excel on digit recognition. Neural Computation, 22(12):3207–3220, 2010.
[5] A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Int. Conf. Artificial Intelligence and Statistics, pages 215–223, 2011.
[6] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In Int. Conf. Learning Representations, 2017.
[7] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In Int. Conf. Learning Representations, 2017.
[8] Geirhos et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. ICLR, 2019.
[9] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In IEEE Conf. Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
[10] G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. In British Machine Vision Conference, 2017.
[11] M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In Int. Conf. Computer Vision, pages 597–613, 2016.
[12] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Schölkopf. Covariate Shift by Kernel Mean Matching. MIT Press, 2008.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conf. Computer Vision and Pattern Recognition, pages 770–778, 2016.
[14] J. Hoffman, E. Tzeng, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In Domain Adaptation in Computer Vision Applications, pages 173–187, 2017.
[15] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Int. Conf. Computer Vision, 2017.
[16] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conf. Computer Vision and Pattern Recognition, 2017.
[17] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Euro. Conf. Computer Vision, pages 694–711, 2016.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. ACM Communications, 60(6):84–90, 2017.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. In Int. Conf. Artificial Intelligence, pages 2230–2236, 2017.
[21] Y. Li, N. Wang, J. Shi, J. Liu, and X. Hou. Revisiting batch normalization for practical domain adaptation. arXiv:1603.04779, 2016.
[22] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Int. Conf. Machine Learning, pages 97–105, 2015.
[23] R. Miralles. An open-source development environment for self-driving vehicles. Master's thesis, Universitat Oberta de Catalunya, 2017.
[24] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In Euro. Conf. Computer Vision, pages 213–226, 2010.
[25] H. Schwerdtfeger. Introduction to Linear Algebra and the Theory of Matrices. P. Noordhoff, 1950.
[26] L. Shao, F. Zhu, and X. Li. Transfer learning for visual categorization: A survey. IEEE Trans. Neural Networks and Learning Systems, 26(5):1019–1034, 2015.
[27] P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Int. Conf. Document Analysis and Recognition, volume 1, pages 958–963, 2003.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Int. Conf. Learning Representations, 2015.
[29] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Machine Learning Research, 15:1929–1958, 2014.
[30] B. Sun and K. Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. In Workshops in Euro. Conf. Computer Vision, pages 443–450, 2016.
[31] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In IEEE Conf. Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[32] M. Thoma. Analysis and optimization of convolutional neural network architectures. Master's thesis, Karlsruhe Institute of Technology, 2017.
[33] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In Int. Conf. Intelligent Robots and Systems, pages 23–30, 2017.
[34] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In IEEE Conf. Computer Vision and Pattern Recognition, 2017.
[35] J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger. Sparsity invariant CNNs. In Int. Conf. 3D Vision, pages 11–20, 2017.
[36] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In Int. Conf. Machine Learning, pages 1349–1357, 2016.
[37] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, 2016.
[38] S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell. Understanding data augmentation for classification: When to warp? arXiv:1609.08764, 2016.
[39] L. S. Yaeger, R. F. Lyon, and B. J. Webb. Effective training of a neural network character classifier for word recognition. In Advances in Neural Information Processing Systems, pages 807–816, 1997.
[40] K. Yanai. Unseen style transfer based on a conditional fast style transfer network. In Learning Representations Workshops, 2017.
[41] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014.
[42] J. Zhao, M. Mathieu, R. Goroshin, and Y. LeCun. Stacked what-where auto-encoders. In Int. Conf. Learning Representations, 2016.
[43] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv:1708.04896, 2017.
[44] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Int. Conf. Computer Vision, 2017.
