Deconvnet¶
View colab tutorial | View source | 📰 Paper
Deconvnet is one of the first attribution methods, proposed in 2013. Its operation is similar to Saliency: it consists of backpropagating the output score with respect to the input; however, at each non-linearity (the ReLUs), only the positive gradients are backpropagated, even through units whose forward activation was negative.
More precisely, with \(f\) our classifier and \(f_l(x)\) the activation at layer \(l\), we usually have:

$$ \frac{\partial f(x)}{\partial f_l(x)} = \frac{\partial f(x)}{\partial f_{l+1}(x)} \odot \mathbb{1}(f_l(x) > 0) $$

with \(\mathbb{1}(.)\) the indicator function. With Deconvnet, the backpropagation is modified such that:

$$ \frac{\partial f(x)}{\partial f_l(x)} = \frac{\partial f(x)}{\partial f_{l+1}(x)} \odot \mathbb{1}\left(\frac{\partial f(x)}{\partial f_{l+1}(x)} > 0\right) $$
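This modified rule can be mimicked with a TensorFlow custom gradient. The following is a minimal illustrative sketch, not Xplique's internal implementation: the backward pass gates on the sign of the incoming gradient rather than on the sign of the forward input.

```python
import tensorflow as tf

@tf.custom_gradient
def deconvnet_relu(x):
    """ReLU forward pass with the DeconvNet backward rule (illustrative sketch)."""
    def grad(upstream):
        # Standard ReLU backprop would be: upstream * tf.cast(x > 0, upstream.dtype).
        # DeconvNet instead keeps only the positive part of the incoming gradient.
        return upstream * tf.cast(upstream > 0, upstream.dtype)
    return tf.nn.relu(x), grad

x = tf.constant([-2.0, -1.0, 1.0, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = deconvnet_relu(x)
    loss = tf.reduce_sum(y * tf.constant([1.0, -1.0, 1.0, -1.0]))

print(tape.gradient(loss, x))
# Standard ReLU backprop would yield [0, 0, 1, -1];
# the DeconvNet rule yields [1, 0, 1, 0].
```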
Example¶
```python
from xplique.attributions import DeconvNet

# load images, labels and model
# ...

method = DeconvNet(model)
explanations = method.explain(images, labels)
```
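For a fully self-contained illustration, the sketch below fills in the elided loading step with a toy model and random data; the model, shapes, and one-hot encoding of `labels` are assumptions chosen to match the `explain` documentation further down.

```python
import numpy as np
import tensorflow as tf
from xplique.attributions import DeconvNet

# Toy classifier standing in for a real model (illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),  # logits: recommended target, before any softmax
])

images = np.random.rand(4, 32, 32, 3).astype(np.float32)      # (N, H, W, C)
labels = tf.one_hot(np.random.randint(10, size=4), depth=10)  # (N, output_size)

method = DeconvNet(model)
explanations = method.explain(images, labels)
# With the default reducer='mean', image explanations have shape (N, H, W, 1).
```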
Notebooks¶
- DeconvNet

DeconvNet¶
Used to compute the DeconvNet method, which modifies the classic Saliency procedure on ReLU non-linearities, allowing only the positive gradients (even those from negative inputs) to pass through.
```python
__init__(self,
         model: keras.src.engine.training.Model,
         output_layer: Union[str, int, None] = None,
         batch_size: Optional[int] = 32,
         operator: Union[xplique.commons.operators_operations.Tasks, str,
                         Callable[[keras.src.engine.training.Model, tensorflow.python.framework.tensor.Tensor, tensorflow.python.framework.tensor.Tensor], float], None] = None,
         reducer: Optional[str] = 'mean')
```
Parameters

- model : keras.src.engine.training.Model
  The model from which we want to obtain explanations.
- output_layer : Union[str, int, None] = None
  Layer to target for the outputs (e.g. logits or after softmax). If an int is provided, it will be interpreted as a layer index; if a string is provided, it will look for a layer with that name. Defaults to the last layer. It is recommended to use the layer before the softmax.
- batch_size : Optional[int] = 32
  Number of inputs to explain at once; if None, compute all at once.
- operator : Union[xplique.commons.operators_operations.Tasks, str, Callable[[keras.src.engine.training.Model, tensorflow.python.framework.tensor.Tensor, tensorflow.python.framework.tensor.Tensor], float], None] = None
  Function g to explain: g takes 3 parameters (f, x, y) and should return a scalar, with f the model, x the inputs, and y the targets. If None, the standard operator g(f, x, y) = f(x)[y] is used.
- reducer : Optional[str] = 'mean'
  Name of the reducer to use: either "min", "mean", "max", "sum", or None to skip reduction. Used only for images, to obtain explanations with shape (n, h, w, 1).
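To illustrate these parameters, here is a hedged sketch of a non-default configuration. The custom operator is a hypothetical example that mirrors the documented default g(f, x, y) = f(x)[y], and `model` reuses the toy classifier from the sketch above; the chosen values are assumptions for illustration only.

```python
import tensorflow as tf
from xplique.attributions import DeconvNet

def my_operator(model, inputs, targets):
    # Hypothetical operator mirroring the default g(f, x, y) = f(x)[y]:
    # the score of the targeted output, selected via the one-hot targets.
    return tf.reduce_sum(model(inputs) * targets, axis=-1)

explainer = DeconvNet(model,
                      output_layer=-1,       # int: interpreted as a layer index (here, the last layer)
                      batch_size=64,         # explain 64 inputs at a time
                      operator=my_operator,
                      reducer='max')         # channel-wise max instead of the default mean
```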
```python
explain(self,
        inputs: Union[tf.Dataset, tensorflow.python.framework.tensor.Tensor, numpy.ndarray],
        targets: Union[tensorflow.python.framework.tensor.Tensor, numpy.ndarray, None] = None) -> tensorflow.python.framework.tensor.Tensor
```
Compute the explanations of the given inputs.
Accepts a Tensor, a numpy array, or a tf.data.Dataset (in which case targets should be None, as they are included in the Dataset).
Parameters

- inputs : Union[tf.Dataset, tensorflow.python.framework.tensor.Tensor, numpy.ndarray]
  Dataset, Tensor or Array. Input samples to be explained. If a Dataset is given, targets should not be provided (they are included in the Dataset). Expected shape among (N, W), (N, T, W), (N, H, W, C). More information in the documentation.
- targets : Union[tensorflow.python.framework.tensor.Tensor, numpy.ndarray, None] = None
  Tensor or Array. One-hot encoding of the model's output from which an explanation is desired. One encoding per input and only one output at a time; therefore, the expected shape is (N, output_size). More information in the documentation.
Return

- explanations : tensorflow.python.framework.tensor.Tensor
  Explanation generated by the method.
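For the tf.data.Dataset path described above, a minimal sketch (reusing `images`, `labels`, and `method` from the earlier example) might look like the following; the batching step is an assumption for illustration, and targets is left unset because the Dataset already yields (input, target) pairs.

```python
import tensorflow as tf

# Dataset yielding (input, one-hot target) pairs; batch size here is illustrative.
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(16)

explanations = method.explain(dataset)  # targets=None: taken from the Dataset
```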