


Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

K. Simonyan, A. Vedaldi, A. Zisserman
Workshop at International Conference on Learning Representations, 2014
Download the publication: simonyan14a.pdf [2.2MB]
This paper addresses the visualisation of image classification models learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, both based on computing the gradient of the class score with respect to the input image. The first generates an image that maximises the class score [Erhan et al., 2009], thus visualising the notion of the class captured by a ConvNet. The second computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish a connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].
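Illustrative code (not from the paper): the sketch below uses PyTorch with an off-the-shelf torchvision VGG-16 as a stand-in for the paper's ConvNet, and shows how the two gradient-based visualisations can be computed. The model choice, image size, learning rate, step count, and regularisation weight are assumptions made for illustration, not the paper's settings.

import torch
import torchvision.models as models

# Stand-in classifier; the paper used its own ImageNet-trained ConvNet.
model = models.vgg16(weights="IMAGENET1K_V1").eval()

def class_saliency(image, target_class):
    # Image-specific class saliency map: gradient of the (unnormalised) class
    # score with respect to the input image, taking the maximum magnitude
    # over the colour channels to get one value per pixel.
    image = image.detach().clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values

def class_model_visualisation(target_class, steps=200, lr=5.0, weight_decay=1e-4):
    # Class model visualisation: gradient ascent on the class score with
    # L2 regularisation of the generated image (here via SGD weight decay).
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.SGD([img], lr=lr, weight_decay=weight_decay)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(img)[0, target_class]  # negate to maximise the score
        loss.backward()
        opt.step()
    return img.detach()[0]

For the saliency map, image is expected to be a preprocessed 3xHxW tensor; in the paper, the thresholded saliency map provides the seeds for the weakly supervised segmentation mentioned in the abstract.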

BibTeX reference:

@InProceedings{Simonyan14a,
  author       = "Karen Simonyan and Andrea Vedaldi and Andrew Zisserman",
  title        = "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps",
  booktitle    = "Workshop at International Conference on Learning Representations",
  year         = "2014",
}
