
Saliency Maps

View colab tutorial | View source | 📰 Paper

Saliency is one of the simplest explanation methods. It is based on the gradient of a class score with respect to the input.

Quote

An interpretation of computing the image-specific class saliency using the class score derivative is that the magnitude of the derivative indicates which pixels need to be changed the least to affect the class score the most. One can expect that such pixels correspond to the object location in the image.

-- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (2013)

More precisely, for an image \(x\) the importance map \(\phi\) according to a classifier \(f\) is defined as:

\[ \phi = | \nabla_{x} f(x) | \]

In the image case, Xplique is faithful to the original method and reduces over the channel axis: with \(\phi_i \in \mathbb{R}^3\) for an RGB image, the importance of pixel \(i\) is given by \(\|\phi_i\|_{\infty}\), i.e. the maximum absolute gradient across channels.
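The channel reduction above can be illustrated with a toy NumPy sketch (this is only an illustration of the formula, not Xplique's implementation): for a linear class score \(f(x) = \langle w, x \rangle\), the gradient with respect to \(x\) is simply \(w\), so the saliency map is \(|w|\) reduced by a max over channels.

```python
import numpy as np

# Toy illustration of the formula above. For a linear class score
# f(x) = <w, x>, the gradient with respect to x is simply w, so the
# saliency map is |w| reduced over the channel axis.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 3                  # tiny "RGB image" dimensions (assumed)
w = rng.normal(size=(H, W, C))     # weights of the linear scorer
x = rng.normal(size=(H, W, C))     # input image (gradient is constant here)

grad = w                           # nabla_x f(x) for f(x) = <w, x>
phi = np.abs(grad)                 # phi = |nabla_x f(x)|
saliency = phi.max(axis=-1)        # ||phi_i||_inf: max over channels, per pixel

print(saliency.shape)              # (4, 4): one importance value per pixel
```

With a real network, the gradient would of course be obtained by automatic differentiation (e.g. `tf.GradientTape`) rather than analytically.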

Example

from xplique.attributions import Saliency

# load images, labels and model
# ...

method = Saliency(model)
explanations = method.explain(images, labels)

Notebooks

Saliency

Used to compute the absolute gradient of the output with respect to the input.

__init__(self,
         model: keras.src.engine.training.Model,
         output_layer: Union[str, int, None] = None,
         batch_size: Optional[int] = 64,
         operator: Optional[Callable[[keras.src.engine.training.Model, tf.Tensor, tf.Tensor], float]] = None)

Parameters

  • model : keras.src.engine.training.Model

    • The model from which we want to obtain explanations

  • output_layer : Union[str, int, None] = None

    • Layer to target for the outputs (e.g. logits or post-softmax).

      If an int is provided it will be interpreted as a layer index.

      If a string is provided it will look for the layer name.

      Defaults to the last layer.

      It is recommended to use the layer before Softmax.

  • batch_size : Optional[int] = 64

    • Number of inputs to explain at once; if None, compute all at once.

  • operator : Optional[Callable[[keras.src.engine.training.Model, tf.Tensor, tf.Tensor], float]] = None

    • Function g to explain. g takes three parameters (f, x, y) and should return a scalar score, with f the model, x the inputs, and y the targets. If None, the standard operator g(f, x, y) = f(x)[y] is used.
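The default operator described above can be sketched in plain NumPy as follows (a stand-in illustration: in Xplique, `f` would be a Keras model and `x`, `y` would be `tf.Tensor`s; the fake 3-class model here is purely hypothetical). With a one-hot `y`, selecting `f(x)[y]` reads as a sum of `f(x) * y` per sample:

```python
import numpy as np

# Sketch of the default operator g(f, x, y) = f(x)[y], written with a
# one-hot y so the selection reads as sum(f(x) * y) per sample.
def default_operator(f, x, y):
    scores = f(x)                       # (N, output_size) class scores
    return (scores * y).sum(axis=-1)    # targeted score for each sample

f = lambda x: x.reshape(len(x), -1)[:, :3]   # fake 3-class "model" (assumption)
x = np.arange(12, dtype=float).reshape(2, 2, 3)
y = np.eye(3)[[0, 2]]                        # one-hot targets: class 0 and 2

print(default_operator(f, x, y))             # → [0. 8.]
```

A custom operator with the same (f, x, y) signature can be passed instead, e.g. to explain a regression output or a margin between two classes.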

explain(self,
        inputs: Union[tf.Dataset, tf.Tensor, numpy.ndarray],
        targets: Union[tf.Tensor, numpy.ndarray, None] = None) -> tf.Tensor

Compute saliency maps for a batch of samples.

Parameters

  • inputs : Union[tf.Dataset, tf.Tensor, numpy.ndarray]

    • Dataset, Tensor or Array. Input samples to be explained.

      If Dataset, targets should not be provided (included in Dataset).

      Expected shape among (N, W), (N, T, W), (N, H, W, C).

      More information in the documentation.

  • targets : Union[tf.Tensor, numpy.ndarray, None] = None

    • Tensor or Array. One-hot encoding of the model's output from which an explanation is desired. One encoding per input and only one output at a time. Therefore, the expected shape is (N, output_size).

      More information in the documentation.
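Building the expected one-hot `targets` from integer class labels can be sketched as follows (the `output_size` of 4 is an assumption for illustration; with TensorFlow, `tf.one_hot(labels, output_size)` produces the same encoding):

```python
import numpy as np

# One-hot targets with the expected shape (N, output_size):
# one encoding per input, a single targeted output per encoding.
labels = np.array([1, 0, 3])        # integer class labels for 3 inputs
output_size = 4                     # assumed number of model outputs
targets = np.eye(output_size)[labels]

print(targets.shape)                # (3, 4)
```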

Return

  • explanations : tf.Tensor

    • Saliency maps.