Sobol Attribution Method¶
View colab tutorial | View source | 📰 Paper
The Sobol attribution method from Fel, Cadène & al.1 is an attribution method grounded in Sensitivity Analysis. Beyond modeling the individual contributions of image regions, Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a neural network’s prediction through the lens of variance.
Quote
The total Sobol index \(ST_i\) which measures the contribution of the variable \(X_i\) as well as its interactions of any order with any other input variables to the model output variance.
-- Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis (2021)1
More precisely, the attribution score \(\phi_i\) for an input variable \(x_i\) is defined as

$$ \phi_i = \frac{\mathbb{E}_{X_{\sim i}}\big(Var_{X_i}(f(x) \mid X_{\sim i})\big)}{Var(f(x))} $$

Where \(\mathbb{E}_{X_{\sim i}}(Var_{X_i}(f(x) \mid X_{\sim i}))\) is the expected variance that would be left if all variables but \(X_i\) were to be fixed.
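To make the formula concrete, here is a brute-force Monte-Carlo sketch of \(ST_1\) on a toy two-variable function (purely illustrative: the toy model and sample sizes are ours, and this is not how the library estimates the indices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x1, x2: x1 + 0.5 * x1 * x2            # toy model, inputs uniform in [0, 1]

# total variance of the output, Var(f(X))
x1, x2 = rng.random(100_000), rng.random(100_000)
total_var = f(x1, x2).var()

# total index of X1: fix X2 (the "other" variables), let X1 vary, then average
# the conditional variances over the fixed values of X2
x2_fixed = rng.random((1_000, 1))                # outer samples of X2
x1_inner = rng.random((1_000, 2_000))            # inner samples of X1 for each fixed X2
cond_var = f(x1_inner, x2_fixed).var(axis=1)     # Var_{X1}(f | X2)
print(cond_var.mean() / total_var)               # ≈ ST_1
```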
To generate the stochasticity on \(X_i\), a perturbation function is used together with perturbation masks that modulate the generated perturbation. The available perturbation functions are inpainting (which resets pixel regions to a baseline state), blurring, and amplitude.
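As a rough illustration of the mask-based perturbation idea, the sketch below applies an inpainting-style perturbation driven by a coarse grid mask (our own simplified version with nearest-neighbour upsampling and a zero baseline; the library's actual perturbation functions may differ in the details):

```python
import numpy as np

def inpainting_perturbation(image, grid_mask, baseline=0.0):
    """image: (H, W, C) array; grid_mask: (grid_size, grid_size) values in [0, 1].

    Assumes H and W are multiples of the grid size (illustrative only)."""
    h, w = image.shape[:2]
    g_h, g_w = grid_mask.shape
    # nearest-neighbour upsampling of the coarse mask to the image resolution
    mask = np.kron(grid_mask, np.ones((h // g_h, w // g_w)))
    mask = mask[..., None]                       # broadcast over channels
    # interpolate between the original image and the baseline state
    return mask * image + (1.0 - mask) * baseline
```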
The calculation of the indices also requires an estimator -- in practice this parameter does not change the results much -- with JansenEstimator being the recommended choice.
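For intuition, the Jansen estimator derives the total index from two replicated designs \(A\) and \(AB_i\) (where \(AB_i\) is \(A\) with its \(i\)-th column replaced by that of a second design \(B\)). A minimal sketch of that formula, not the library's JansenEstimator implementation:

```python
import numpy as np

def jansen_total_index(f_A, f_ABi):
    """f_A, f_ABi: (N,) model outputs on the A and AB_i replicated designs."""
    # ST_i ≈ (1 / (2N)) * sum_j (f(A_j) - f(AB_i, j))^2 / Var(f(A))
    return 0.5 * np.mean((f_A - f_ABi) ** 2) / np.var(f_A)
```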
Finally, the exploration of the manifold is done using a sampling method, and several samplers are proposed: Quasi-Monte Carlo (ScipySobolSequence, recommended) using SciPy's Sobol sequence, Latin hypercube sampling (LHSampler), or Halton sequences (HaltonSequence).
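These samplers rely on standard low-discrepancy generators; for intuition, the corresponding point sets can be drawn with SciPy's qmc module (assuming scipy >= 1.7; this only illustrates the underlying sequences, not how the xplique sampler classes are used):

```python
from scipy.stats import qmc

dim = 8 * 8                                                # one dimension per cell of an 8x8 grid
sobol_pts = qmc.Sobol(d=dim, scramble=True).random(32)     # Sobol sequence (QMC, recommended)
lhs_pts = qmc.LatinHypercube(d=dim).random(32)             # Latin hypercube
halton_pts = qmc.Halton(d=dim).random(32)                  # Halton sequence
```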
Tip
For quick and faithful explanations, we recommend using grid_size in \([7, 12)\), nb_design in \(\{16, 32, 64\}\) (more brings no real benefit), and a QMC sampler (see the SobolAttributionMethod documentation below for details on these parameters).
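As a quick sanity check on the budget implied by such a setting, the number of forward passes follows directly from these two parameters (see nb_design in the documentation below):

```python
# forward-pass budget for the recommended default settings
grid_size, nb_design = 8, 32
n_forwards = nb_design * (grid_size ** 2 + 2)   # = 2112 forward passes per image
```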
Example¶
```python
from xplique.attributions import SobolAttributionMethod

# load images, labels and model
# ...

# default explainer (recommended)
explainer = SobolAttributionMethod(model, grid_size=8, nb_design=32)
explanations = explainer(images, labels)  # one-hot encoded labels
```
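The resulting heatmaps can then be overlaid on the images, for instance with xplique's plotting helper (the exact keyword arguments may vary between versions):

```python
from xplique.plots import plot_attributions

plot_attributions(explanations, images, cmap='jet', alpha=0.4)
```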
If you want to change the estimator or the sampling:
```python
from xplique.attributions import SobolAttributionMethod
from xplique.attributions.global_sensitivity_analysis import (
    JansenEstimator, GlenEstimator,
    LHSampler, ScipySobolSequence,
    HaltonSequence)

# load images, labels and model
# ...

explainer_lhs = SobolAttributionMethod(model, grid_size=8, nb_design=32,
                                       sampler=LHSampler(),
                                       estimator=GlenEstimator())
explanations_lhs = explainer_lhs(images, labels)
```
Notebooks¶
SobolAttributionMethod¶
Sobol' Attribution Method.
Compute the total order Sobol' indices using a perturbation function on a grid and an
adapted sampling as described in the original paper.
```python
__init__(self,
         model,
         grid_size: int = 8,
         nb_design: int = 32,
         sampler: Optional[xplique.attributions.global_sensitivity_analysis.replicated_designs.ReplicatedSampler] = None,
         estimator: Optional[xplique.attributions.global_sensitivity_analysis.sobol_estimators.SobolEstimator] = None,
         perturbation_function: Union[Callable, str, None] = 'inpainting',
         batch_size=256,
         operator: Union[xplique.commons.operators_operations.Tasks, str,
                         Callable[[keras.src.engine.training.Model, tensorflow.python.framework.tensor.Tensor, tensorflow.python.framework.tensor.Tensor], float], None] = None)
```
Parameters

- model : model
  Model used for computing explanations.
- grid_size : int = 8
  Cuts the image into a grid of (grid_size, grid_size) cells and estimates an index per cell.
- nb_design : int = 32
  Must be a power of two. Number of designs; the number of forward passes will be nb_design * (grid_size**2 + 2). Generally not above 32.
- sampler : Optional[xplique.attributions.global_sensitivity_analysis.replicated_designs.ReplicatedSampler] = None
  Sampler used to generate the (quasi-)Monte Carlo samples, QMC (Sobol sequence) recommended. For more options, see the sampler module.
- estimator : Optional[xplique.attributions.global_sensitivity_analysis.sobol_estimators.SobolEstimator] = None
  Estimator used to compute the total order Sobol' indices, Jansen recommended. For more options, see the estimator module.
- perturbation_function : Union[Callable, str, None] = 'inpainting'
  Function to call to apply the perturbation on the input. Can also be a string: 'inpainting', 'blurring', or 'amplitude'.
- batch_size : int = 256
  Batch size to use for the forward passes.
- operator : Union[xplique.commons.operators_operations.Tasks, str, Callable[[keras.src.engine.training.Model, tensorflow.python.framework.tensor.Tensor, tensorflow.python.framework.tensor.Tensor], float], None] = None
  Function g to explain; g takes 3 parameters (f, x, y) and should return a scalar, with f the model, x the inputs and y the targets. If None, the standard operator g(f, x, y) = f(x)[y] is used (a minimal custom operator is sketched after this list).
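For instance, a hypothetical custom operator following the documented (f, x, y) -> score contract could look like the following sketch (softmax_operator is our own illustrative name, not part of the library):

```python
import tensorflow as tf
from xplique.attributions import SobolAttributionMethod

def softmax_operator(model, inputs, targets):
    # one score per sample: probability of the target class after a softmax
    probs = tf.nn.softmax(model(inputs), axis=-1)
    return tf.reduce_sum(probs * targets, axis=-1)

# hypothetical usage: `model` is a trained classifier
explainer = SobolAttributionMethod(model, operator=softmax_operator)
```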
```python
explain(self,
        inputs: Union[tf.Dataset, tensorflow.python.framework.tensor.Tensor, numpy.ndarray],
        targets: Union[tensorflow.python.framework.tensor.Tensor, numpy.ndarray, None] = None) -> tensorflow.python.framework.tensor.Tensor
```
Compute the total Sobol' indices according to the explainer parameters (perturbation function, grid size, ...). Accepts a Tensor, numpy array, or tf.data.Dataset (in that case, targets should be None).
Parameters

- inputs : Union[tf.Dataset, tensorflow.python.framework.tensor.Tensor, numpy.ndarray]
  Images to be explained, either a tf.data.Dataset, a Tensor, or a numpy array.
  If a Dataset, targets should not be provided (they are included in the Dataset).
  Expected shape (N, W, H, C) or (N, W, H).
- targets : Union[tensorflow.python.framework.tensor.Tensor, numpy.ndarray, None] = None
  One-hot encoding for classification or direction {-1, +1} for regression.
  Tensor or numpy array.
  Expected shape (N, C) or (N,).

Return

- attributions_maps : tensorflow.python.framework.tensor.Tensor
  GSA Attribution Method explanations, with the same shape as the inputs except for the channels.
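A minimal call sketch, assuming a classifier and placeholder names (images, class_ids, and num_classes are not defined by the library):

```python
import tensorflow as tf

targets = tf.one_hot(class_ids, depth=num_classes)       # one-hot targets, shape (N, C)
attribution_maps = explainer.explain(images, targets)    # shape (N, W, H)
```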