
Gradient \(\odot\) Input

View colab tutorial | View source | 📰 Paper

Gradient \(\odot\) Input is a visualization technique based on the gradient of a class score with respect to the input, multiplied element-wise with the input. The method was introduced by Shrikumar et al., 2016¹, in an earlier version of their DeepLIFT paper².

Quote

Gradient inputs was at first proposed as a technique to improve the sharpness of the attribution maps. The attribution is computed taking the (signed) partial derivatives of the output with respect to the input and multiplying them with the input itself.

-- Towards better understanding of the gradient-based attribution methods for Deep Neural Networks (2017)³

A theoretical analysis conducted by Ancona et al., 2018³ showed that Gradient \(\odot\) Input is equivalent to the \(\epsilon\)-LRP and DeepLIFT methods under certain conditions: using a baseline of zero, and with all biases set to zero.

More precisely, the explanation \(\phi\) for an input \(x\) and a classifier \(f\) is defined as

\[ \phi = x \odot \nabla_x f(x) \]

with \(\odot\) the Hadamard product.
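
In TensorFlow, this formula can be reproduced directly with a GradientTape. The snippet below is a minimal sketch, not part of the library API: model is any tf.keras.Model, x a batch of inputs and c a class index, all assumed for illustration.

import tensorflow as tf

# sketch of phi = x ⊙ ∇_x f_c(x) for a class index c (illustrative, not the library implementation)
def gradient_input(model, x, c):
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, c]          # class score f_c(x)
    grads = tape.gradient(score, x)     # ∇_x f_c(x)
    return x * grads                    # Hadamard product with the input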

Example

from xplique.attributions import GradientInput

# load images, labels and model
# ...

method = GradientInput(model)
explanations = method.explain(images, labels)

Notebooks

GradientInput

Used to compute the element-wise product between the saliency maps of Simonyan et al. and the input (Gradient x Input).

__init__(self,
         model: tf.keras.Model,
         output_layer: Union[str, int, None] = None,
         batch_size: Optional[int] = 64,
         operator: Optional[Callable[[tf.keras.Model, tf.Tensor, tf.Tensor], float]] = None,
         reducer: Optional[str] = 'mean')

Parameters

  • model : tf.keras.Model

    • The model from which we want to obtain explanations

  • output_layer : Union[str, int, None] = None

    • Layer to target for the outputs (e.g. logits or after softmax).

      If an int is provided it will be interpreted as a layer index.

      If a string is provided it will look for the layer name.

      Defaults to the last layer.

      It is recommended to use the layer before the softmax.

  • batch_size : Optional[int] = 64

    • Number of inputs to explain at once; if None, compute all at once.

  • operator : Optional[Callable[[tf.keras.Model, tf.Tensor, tf.Tensor], float]] = None

    • Function g to explain; g takes 3 parameters (f, x, y) and should return a scalar, with f the model, x the inputs and y the targets. If None, the standard operator g(f, x, y) = f(x)[y] is used (see the sketch after this list).

  • reducer : Optional[str] = 'mean'

    • String, name of the reducer to use. Either "min", "mean", "max", "sum", or None to ignore.

      Used only for images, to obtain explanations with shape (n, h, w, 1).
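
As an illustration of these parameters, here is a minimal sketch (model is assumed to be a tf.keras.Model whose last layer is a softmax; the custom operator simply reproduces the default class-score behaviour):

import tensorflow as tf
from xplique.attributions import GradientInput

# custom operator g(f, x, y): sum of the scores selected by the one-hot targets,
# i.e. the same behaviour as the default operator
def class_score(model, inputs, targets):
    return tf.reduce_sum(model(inputs) * targets, axis=-1)

method = GradientInput(model,
                       output_layer=-2,       # layer before the softmax (assumed model layout)
                       batch_size=32,
                       operator=class_score,
                       reducer="mean")        # average over channels for image explanations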

explain(self,
        inputs: Union[tf.Dataset, tf.Tensor, np.ndarray],
        targets: Union[tf.Tensor, np.ndarray, None] = None) -> tf.Tensor

Compute the explanations of the given inputs. Accepts a Tensor, a numpy array or a tf.data.Dataset (in that case, targets should be None).

Parameters

  • inputs : Union[tf.Dataset, tf.Tensor, np.ndarray]

    • Dataset, Tensor or Array. Input samples to be explained.

      If Dataset, targets should not be provided (included in Dataset).

      Expected shape among (N, W), (N, T, W), (N, H, W, C).

      More information in the documentation.

  • targets : Union[tf.Tensor, np.ndarray, None] = None

    • Tensor or Array. One-hot encoding of the model's output from which an explanation is desired. One encoding per input and only one output at a time. Therefore, the expected shape is (N, output_size).

      More information in the documentation.

Return

  • explanations : tf.Tensor

    • Explanation generated by the method.
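
For example, integer labels can be converted to the expected one-hot targets before calling explain; a minimal sketch assuming a 10-class classifier and the images, labels and method objects from the example above:

import tensorflow as tf

# one-hot encoding of the labels, shape (N, output_size)
targets = tf.one_hot(labels, depth=10)   # depth assumed to match the number of classes

explanations = method.explain(images, targets)
print(explanations.shape)                # e.g. (N, H, W, 1) for images with reducer="mean"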