deel.torchlip.utils

Normalization hooks

Bjorck normalization

deel.torchlip.utils.bjorck_norm(module: deel.torchlip.utils.bjorck_norm.T_module, name: str = 'weight', n_iterations: int = 15) → deel.torchlip.utils.bjorck_norm.T_module[source]

Applies Bjorck normalization to a parameter in the given module.

Bjorck normalization ensures that all singular values of the weight matrix remain close or equal to one during training. If the dimension of the weight tensor is greater than 2, it is reshaped to 2D for the iteration. This is implemented via a hook that applies Bjorck normalization before every forward() call.

Note

It is recommended to use torch.nn.utils.spectral_norm() before this hook to greatly reduce the number of iterations required.

See Sorting out Lipschitz function approximation.

Parameters
  • module – Containing module.

  • name – Name of weight parameter.

  • n_iterations – Number of iterations for the normalization.

Returns

The original module with the Bjorck normalization hook.

Example

>>> m = bjorck_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
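
As suggested in the note above, the hook is typically combined with spectral normalization. A minimal sketch of that composition (layer sizes and the iteration count are illustrative, not part of the original example):

>>> import torch
>>> import torch.nn as nn
>>> from torch.nn.utils import spectral_norm
>>> from deel.torchlip.utils import bjorck_norm
>>> m = bjorck_norm(spectral_norm(nn.Linear(20, 40)), n_iterations=5)
>>> y = m(torch.randn(8, 20))  # both hooks rescale the weight before the forward pass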

deel.torchlip.utils.remove_bjorck_norm(module: deel.torchlip.utils.bjorck_norm.T_module, name: str = 'weight') → deel.torchlip.utils.bjorck_norm.T_module[source]

Removes the Bjorck normalization reparameterization from a module.

Parameters
  • module – Containing module.

  • name – Name of weight parameter.

Example

>>> m = bjorck_norm(nn.Linear(20, 40))
>>> remove_bjorck_norm(m)

Frobenius normalization

deel.torchlip.utils.frobenius_norm(module: deel.torchlip.utils.frobenius_norm.T_module, name: str = 'weight') → deel.torchlip.utils.frobenius_norm.T_module[source]

Applies Frobenius normalization to a parameter in the given module.

\[\mathbf{W} = \dfrac{\mathbf{W}}{\Vert\mathbf{W}\Vert}\]

This is implemented via a hook that applies Frobenius normalization before every forward() call.

Parameters
  • module – Containing module.

  • name – Name of weight parameter.

Returns

The original module with the Frobenius normalization hook.

Example

>>> m = frobenius_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
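
A small usage sketch (not part of the original example; input sizes are illustrative): since the hook rescales the weight before each forward() call, the weight seen by the layer should have a Frobenius norm close to one.

>>> import torch
>>> import torch.nn as nn
>>> from deel.torchlip.utils import frobenius_norm
>>> m = frobenius_norm(nn.Linear(20, 40))
>>> _ = m(torch.randn(8, 20))                  # hook normalizes the weight before the forward pass
>>> norm = torch.linalg.norm(m.weight).item()  # expected to be close to 1.0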

deel.torchlip.utils.remove_frobenius_norm(module: deel.torchlip.utils.frobenius_norm.T_module, name: str = 'weight') → deel.torchlip.utils.frobenius_norm.T_module[source]

Removes the Frobenius normalization reparameterization from a module.

Parameters
  • module – Containing module.

  • name – Name of weight parameter.

Example

>>> m = frobenius_norm(nn.Linear(20, 40))
>>> remove_frobenius_norm(m)

L-Conv normalization

deel.torchlip.utils.lconv_norm(module: torch.nn.modules.conv.Conv2d) → torch.nn.modules.conv.Conv2d[source]

Applies Lipschitz normalization to the kernel of the given convolutional layer. This is implemented via a hook that multiplies the kernel by a value computed from the input shape before every forward() call.

See Achieving robustness in classification using optimal transport with hinge regularization.

Parameters

module – Containing module.

Returns

The original module with the Lipschitz normalization hook.

Example

>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> m
Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1))
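
A short usage sketch (input sizes are illustrative): the scaling value is computed from the shape of each incoming batch, so the normalized convolution is used transparently in the forward pass.

>>> import torch
>>> import torch.nn as nn
>>> from deel.torchlip.utils import lconv_norm
>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> y = m(torch.randn(1, 16, 32, 32))  # kernel is rescaled from the input shape before the convolution
>>> y.shape
torch.Size([1, 16, 30, 30])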

deel.torchlip.utils.remove_lconv_norm(module: torch.nn.modules.conv.Conv2d) → torch.nn.modules.conv.Conv2d[source]

Removes the Lipschitz normalization hook from a module.

Parameters

module – Containing module.

Example

>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> remove_lconv_norm(m)

Utilities

deel.torchlip.utils.sqrt_with_gradeps(input: torch.Tensor, eps: float = 1e-06) → torch.Tensor[source]

Square-root of input with a “valid” gradient at 0.

\[\frac{\partial f}{\partial x} = \frac{1}{2\sqrt{x}+\epsilon}\]

Parameters
  • input – Tensor of arbitrary shape.

  • eps – Small positive value added to the denominator of the gradient to keep it finite at zero.

Returns

A tensor whose value is the square root of the input but whose associated autograd function is SqrtEpsGrad.
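
A small illustration (a sketch based on the gradient formula above): unlike torch.sqrt, whose gradient is infinite at zero, the gradient here stays finite.

>>> import torch
>>> from deel.torchlip.utils import sqrt_with_gradeps
>>> x = torch.zeros(3, requires_grad=True)
>>> y = sqrt_with_gradeps(x).sum()
>>> y.backward()
>>> torch.isfinite(x.grad).all()  # gradient at zero is 1/eps rather than inf
tensor(True)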

