
Modern Feature Visualization (MaCo)

View colab tutorial | View source | 📰 Paper

Feature visualization has become increasingly popular, especially after the groundbreaking work by Olah et al. 1, which established it as a vital tool for enhancing explainability. Despite its significance, the widespread adoption of feature visualization has been hindered by the reliance on various tricks to create interpretable images, making it challenging to scale the method effectively for deeper neural networks.

Addressing these limitations, a recent method called MaCo 2 offers a straightforward solution. The core concept involves generating images by optimizing the phase spectrum while keeping the magnitude of the Fourier spectrum constant. This ensures that the generated images reside in the space of natural images in the Fourier domain, providing a more stable and interpretable approach.

Quote

It is known that human recognition of objects in images is driven not by magnitude but by phase. Motivated by this, we propose to optimize the phase of the Fourier spectrum while fixing its magnitude to a typical value of a natural image (with few high frequencies). In particular, the magnitude is kept constant at the average magnitude computed over a set of natural images (such as ImageNet).

MaCo -- Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization (2023)2

To put it more precisely, let \(x^* \in \mathcal{X}\) be an explanation of a neuron \(n\). MaCo obtains it by optimizing only the phase \(\varphi\) of the Fourier spectrum, keeping the magnitude \(r\) fixed:

\[ \varphi^* = \underset{\varphi}{\arg\max}\ f(\mathcal{F}^{-1}(r e^{i \varphi}))^{(n)} \]

where \(x^* = \mathcal{F}^{-1}(r e^{i \varphi^*})\), \(f(x)^{(n)}\) is the score of neuron \(n\) for a given input, \(r\) is the magnitude spectrum fixed to the natural-image average, and \(\mathcal{F}^{-1}\) denotes the 2-D inverse Fourier transform.
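
To make this concrete, below is a minimal sketch of the magnitude-constrained parametrization in TensorFlow. It is not the library's implementation: the real maco function additionally applies crops and noise at each step. The image size, model and neuron_id are assumptions for illustration.

import tensorflow as tf

h, w = 224, 224  # assumed image size
# fixed magnitude spectrum; in MaCo this is the average over natural images,
# a constant stands in here (shape matches the output of tf.signal.rfft2d)
magnitude = tf.ones((3, h, w // 2 + 1))
# the phase is the only trainable parameter
phase = tf.Variable(tf.random.normal((3, h, w // 2 + 1)))

def phase_to_image(phase, magnitude):
    # r * e^{i * phi}: recombine the fixed magnitude with the free phase
    spectrum = tf.complex(magnitude, tf.zeros_like(magnitude)) * \
               tf.exp(tf.complex(tf.zeros_like(phase), phase))
    # 2-D inverse (real) Fourier transform back to pixel space, then HWC
    return tf.transpose(tf.signal.irfft2d(spectrum), (1, 2, 0))

# plain gradient ascent on the neuron score
optimizer = tf.keras.optimizers.Nadam(learning_rate=1.0)
for _ in range(256):
    with tf.GradientTape() as tape:
        image = phase_to_image(phase, magnitude)
        score = model(image[None])[0, neuron_id]  # model, neuron_id assumed
    grad = tape.gradient(score, phase)
    optimizer.apply_gradients([(-grad, phase)])  # negate: ascent, not descent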

In the optimization process, MaCo also generates an alpha mask, which identifies the most important areas of the generated image. To correctly visualize the image blended with this alpha mask, we provide utilities in the xplique.plot module; a conceptual sketch of the blend follows.
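
As an illustration, a manual blend could look like the sketch below; this is an assumption about the display logic, not the code of plot_maco, and it assumes alpha is a single-channel mask.

import numpy as np
import matplotlib.pyplot as plt

def show_blended(image, alpha):
    # hypothetical manual blend: dim the image where the mask is low
    image = np.array(image, dtype=np.float32)
    alpha = np.squeeze(np.array(alpha, dtype=np.float32))
    # normalize both to [0, 1] for display
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    alpha = alpha / (alpha.max() + 1e-8)
    plt.imshow(image * alpha[..., None])  # broadcast the mask over channels
    plt.axis("off")
    plt.show()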

Notebooks

  • MaCo: Getting started. In this notebook, you'll be introduced to the fundamentals of MaCo while also experimenting with various hyperparameters.

Examples

To optimize logit 1 of your neural network (we recommend removing the softmax activation from your network):

from xplique.features_visualizations import Objective
from xplique.features_visualizations import maco
from xplique.plot import plot_maco
# load a model...

# targeting the logit 1 of the layer 'logits'
# we can also target a layer by its index, like -1 for the last layer
logits_obj = Objective.neuron(model, "logits", 1)
image, alpha = maco(logits_obj)
plot_maco(image, alpha)

Or, if you want to visualize a specific CAV (or any direction, such as a combination of neurons) in your model:

from xplique.features_visualizations import Objective
from xplique.features_visualizations import maco
from xplique.plot import plot_maco
# load a model...

# cav is a vector with the shape of an activation of the -2 layer,
# e.g. 2048 for ResNet50
direction_obj = Objective.direction(model, -2, cav)
image, alpha = maco(direction_obj)
plot_maco(image, alpha)
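
Note that maco does not compute the CAV for you: the direction has to be provided. As a quick stand-in to test the pipeline, a random direction with the right shape works (a hypothetical placeholder, not a real concept vector):

import tensorflow as tf

# hypothetical placeholder: a random unit direction with the shape of the
# penultimate activation (2048 for ResNet50); a real CAV would come from a
# concept classifier trained on the activations of that layer
cav = tf.random.normal((2048,))
cav = cav / tf.norm(cav)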

maco(objective: xplique.features_visualizations.objectives.Objective,
     optimizer: Optional[keras.src.optimizers.optimizer.Optimizer] = None,
     maco_dataset: Optional[tf.data.Dataset] = None,
     nb_steps: int = 256,
     noise_intensity: Union[float, Callable, None] = 0.08,
     box_size: Union[float, Callable, None] = None,
     nb_crops: Optional[int] = 32,
     values_range: Tuple[float, float] = (-1, 1),
     custom_shape: Optional[Tuple] = (512, 512)) -> Tuple[tf.Tensor, tf.Tensor]

Optimizes a single objective using the MaCo method. Note that, unlike classic Fourier optimization, MaCo can only optimize one objective at a time.

Parameters

  • objective : xplique.features_visualizations.objectives.Objective

    • Objective object.

  • optimizer : Optional[keras.src.optimizers.optimizer.Optimizer] = None

    • Optimizer used for gradient ascent. Defaults to Nadam(lr=1.0).

  • maco_dataset : Optional[tf.data.Dataset] = None

    • Dataset used to build the Fourier magnitude buffer. If None, a magnitude spectrum precomputed on ImageNet is used for RGB images; a dataset is required for grayscale images.

  • nb_steps : int = 256

    • Number of iterations.

  • noise_intensity : Union[float, Callable, None] = 0.08

    • Controls the noise injected at each step: either a float (noise with the same standard deviation is added at every step) or a function mapping each step to a noise intensity (see the sketch after this list).

  • box_size : Union[float, Callable, None] = None

    • Controls the average size of the crop at each step: either a fixed float (e.g. 0.5 means crops will be 50% of the image size) or a function that takes the step as parameter and returns the average box size (see the sketch after this list). Defaults to a linear decay from 50% to 5%.

  • nb_crops : Optional[int] = 32

    • Number of crops used at each step; higher values make the optimization slower but the results more stable. Defaults to 32.

  • values_range : Tuple[float, float] = (-1, 1)

    • Range of values of the inputs that will be provided to the model, e.g. (0, 1) or (-1, 1).

  • custom_shape : Optional[Tuple] = (512, 512)

    • If specified, optimizes images of the given size. Use it together with a low box_size to optimize larger images crop by crop.
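
Both noise_intensity and box_size accept a callable taking the current step, which enables custom schedules. A minimal sketch, where the schedule functions are assumptions written against the parameter descriptions above:

from xplique.features_visualizations import Objective, maco
from xplique.plot import plot_maco

NB_STEPS = 256

def noise_schedule(step):
    # decay the injected noise linearly from 0.1 to 0 over the run
    return 0.1 * (1.0 - step / NB_STEPS)

def box_schedule(step):
    # shrink the average crop from 50% down to 5% of the image
    return 0.5 - (0.5 - 0.05) * (step / NB_STEPS)

obj = Objective.neuron(model, "logits", 1)
image, alpha = maco(obj, nb_steps=NB_STEPS,
                    noise_intensity=noise_schedule,
                    box_size=box_schedule)
plot_maco(image, alpha)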

Return

  • image_optimized : tf.Tensor

    • Optimized image for the given objective.

  • transparency : tf.Tensor

    • Transparency of the image, i.e. the sum of the absolute values of the gradients of the image with respect to the objective.