MuFidelity¶
MuFidelity is a fidelity metric that measures the correlation between the importance of variables as assigned by the explanation method and the drop in the model score when these variables are reset to a baseline state.
Quote
[...] when we set particular features \(x_s\) to a baseline value \(x_0\) the change in predictor's output should be proportional to the sum of attribution scores.

*Evaluating and Aggregating Feature-based Model Explanations* (2020)^1
Formally, given a predictor \(f\), an explanation function \(g\), a point \(x \in \mathbb{R}^n\) and a subset size \(k\), the MuFidelity metric is defined as:

$$
\mu F = \underset{\substack{S \subseteq \{1, ..., n\} \\ |S| = k}}{Corr} \left( \sum_{i \in S} g(f, x)_i,\; f(x) - f\left(x_{[x_s = x_0]}\right) \right)
$$
Info
The better the method, the higher the score.
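The correlation above can be made concrete with a minimal, framework-free sketch. The snippet below (all names hypothetical, using NumPy and a toy linear model rather than Xplique's actual implementation) samples random feature subsets, resets them to the baseline, and correlates the score drops with the summed attributions:

```python
import numpy as np

def mufidelity_score(f, attribution, x, baseline=0.0, subset_size=3,
                     nb_samples=200, seed=0):
    """Correlate attribution sums with score drops over random subsets."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    base_score = f(x)
    attr_sums, drops = [], []
    for _ in range(nb_samples):
        subset = rng.choice(n, size=subset_size, replace=False)
        x_masked = x.copy()
        x_masked[subset] = baseline          # reset the subset to baseline
        drops.append(base_score - f(x_masked))
        attr_sums.append(attribution[subset].sum())
    return float(np.corrcoef(attr_sums, drops)[0, 1])

# Toy linear model: for a linear f, the score drop equals the attribution
# sum exactly, so the exact attribution w * x yields a correlation near 1.
w = np.array([3.0, -1.0, 0.5, 2.0, 0.0, 1.5])
f = lambda x: float(w @ x)
x = np.ones(6)
score = mufidelity_score(f, w * x, x)
```

For a linear model the "perfect" explanation achieves a correlation of 1, which is why higher scores indicate better explanations.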
Example¶
```python
from xplique.metrics import MuFidelity
from xplique.attributions import Saliency

# load images, labels and model
# ...

explainer = Saliency(model)
explanations = explainer(inputs, labels)

metric = MuFidelity(model, inputs, labels)
score = metric.evaluate(explanations)
```
MuFidelity¶
Used to compute the fidelity correlation metric. This metric ensures there is a correlation
between random subsets of pixels and their attribution scores: for each random subset
created, the pixels of the subset are set to a baseline state and the prediction score is
obtained. The metric measures the correlation between the drop in score and the importance
of the explanation.
```python
__init__(self,
         model: Callable,
         inputs: Union[tf.Dataset, tf.Tensor, numpy.ndarray],
         targets: Union[tf.Tensor, numpy.ndarray, None] = None,
         batch_size: Optional[int] = 64,
         grid_size: Optional[int] = 9,
         subset_percent: float = 0.2,
         baseline_mode: Union[Callable, float] = 0.0,
         nb_samples: int = 200,
         operator: Optional[Callable] = None,
         activation: Optional[str] = None)
```
Parameters

model : Callable
Model used for computing metric.

inputs : Union[tf.Dataset, tf.Tensor, numpy.ndarray]
Input samples under study.

targets : Union[tf.Tensor, numpy.ndarray, None] = None
One-hot encoded labels or regression targets (e.g. {+1, -1}), one for each sample.

batch_size : Optional[int] = 64
Number of samples to process at once; if None, compute all at once.

grid_size : Optional[int] = 9
If None, compute the original metric; otherwise, cut the image into a (grid_size, grid_size) grid, where each element of a subset is a super pixel representing one cell of the grid.
You should use this when dealing with medium / large images.

subset_percent : float = 0.2
Percent of the image that will be set to baseline.

baseline_mode : Union[Callable, float] = 0.0
Value of the baseline state; if it is a function, it will be called with a single input and should return the baseline.

nb_samples : int = 200
Number of different subsets to try on each input to measure the correlation.

operator : Optional[Callable] = None
Function g to explain; g takes 3 parameters (f, x, y) and should return a scalar, with f the model, x the inputs and y the targets. If None, the standard operator g(f, x, y) = f(x)[y] is used.

activation : Optional[str] = None
A string that belongs to [None, 'sigmoid', 'softmax']. Specifies whether an activation layer should be added after the model is called. This is useful, for instance, if you want to measure a 'drop of probability' by adding a sigmoid or softmax after getting your logits. If None, no layer is added to your model.
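To illustrate what `grid_size` controls, here is a hedged NumPy sketch (the `mask_superpixels` helper is hypothetical and not part of Xplique): the image is cut into (grid_size, grid_size) cells, and whole cells, rather than individual pixels, are reset to the baseline.

```python
import numpy as np

def mask_superpixels(image, cells, grid_size, baseline=0.0):
    """Reset whole grid cells (super pixels) of an (H, W, C) image."""
    h, w = image.shape[:2]
    ch, cw = h // grid_size, w // grid_size  # super pixel height / width
    out = image.copy()
    for cell in cells:
        r, c = divmod(cell, grid_size)       # flat cell index -> grid position
        out[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = baseline
    return out

img = np.ones((9, 9, 3))
masked = mask_superpixels(img, cells=[0, 4], grid_size=3)  # 2 of 9 cells
```

Working at super pixel granularity keeps the number of variables manageable on medium and large images, which is why the documentation recommends `grid_size` in that setting.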
```python
evaluate(self,
         explanations: Union[tf.Tensor, numpy.ndarray]) -> float
```
¶
Evaluate the fidelity score.
Parameters

explanations : Union[tf.Tensor, numpy.ndarray]
Explanations for the inputs and labels to evaluate.
Return

fidelity_score : float
Metric score, average correlation between the drop in score when variables are set to a baseline state and the importance of these variables according to the explanations.
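Since the returned score is an average of per-input correlations, the aggregation step can be sketched as follows (a minimal NumPy illustration; `aggregate_fidelity` is a hypothetical helper and assumes the per-subset attribution sums and score drops have already been collected):

```python
import numpy as np

def aggregate_fidelity(attr_sums, drops):
    """Mean Pearson correlation over inputs.

    attr_sums, drops: arrays of shape (nb_inputs, nb_samples), holding the
    per-subset attribution sums and the matching score drops for each input.
    """
    corrs = [np.corrcoef(a, d)[0, 1] for a, d in zip(attr_sums, drops)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
drops = rng.normal(size=(4, 50))
attr_sums = drops + 0.1 * rng.normal(size=(4, 50))  # well-aligned explanations
score = aggregate_fidelity(attr_sums, drops)
```

Explanations whose attribution sums track the observed score drops closely produce a score near 1, while unrelated explanations drift toward 0.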