deel.torchlip.utils¶
Normalization hooks¶
Bjorck normalization¶
- deel.torchlip.utils.bjorck_norm(module: Module, name: str = 'weight', eps: float = 0.001) → Module [source]¶
Applies Bjorck normalization to a parameter in the given module.
Bjorck normalization ensures that all singular values of the weight matrix remain close to or equal to one during training. If the dimension of the weight tensor is greater than 2, it is reshaped to 2D for the iterations. This is implemented via a Bjorck normalization parametrization.
Note
It is recommended to use
torch.nn.utils.parametrizations.spectral_norm()
before this hook to greatly reduce the number of iterations required. See Sorting out Lipschitz function approximation.
- Parameters:
module – Containing module.
name – Name of weight parameter.
eps – Numerical tolerance used as the stopping criterion for the normalization iterations.
- Returns:
The original module with the Bjorck normalization hook.
Example
>>> m = bjorck_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
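A minimal sketch of the recommended combination with spectral normalization (the orthogonality check at the end is illustrative, not part of the documented API):
>>> import torch
>>> import torch.nn as nn
>>> from torch.nn.utils.parametrizations import spectral_norm
>>> from deel.torchlip.utils import bjorck_norm
>>> m = bjorck_norm(spectral_norm(nn.Linear(20, 20)))  # spectral norm first, as recommended above
>>> w = m.weight  # parametrizations are applied when the weight is accessed
>>> (w @ w.t() - torch.eye(20)).abs().max()  # should be small: rows are near-orthonormal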
- deel.torchlip.utils.remove_bjorck_norm(module: Module, name: str = 'weight') → Module [source]¶
Removes the Bjorck normalization reparameterization from a module.
- Parameters:
module – Containing module.
name – Name of weight parameter.
Example
>>> m = bjorck_norm(nn.Linear(20, 40))
>>> remove_bjorck_norm(m)
Frobenius normalization¶
- deel.torchlip.utils.frobenius_norm(module: Module, name: str = 'weight', disjoint_neurons: bool = True) → Module [source]¶
Applies Frobenius normalization to a parameter in the given module.
This is implemented via a hook that applies Frobenius normalization before every forward() call.
- Parameters:
module – Containing module.
name – Name of weight parameter.
disjoint_neurons – Whether to normalize the weight matrix independently per neuron (row-wise) or globally as a whole.
- Returns:
The original module with the Frobenius normalization hook.
Example:
>>> m = frobenius_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
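A short illustrative check; this sketch assumes the normalized weight is visible as m.weight once the pre-forward hook has run, per the description above:
>>> import torch
>>> import torch.nn as nn
>>> from deel.torchlip.utils import frobenius_norm
>>> m = frobenius_norm(nn.Linear(20, 40), disjoint_neurons=True)
>>> _ = m(torch.randn(1, 20))  # trigger the hook that normalizes the weight
>>> m.weight.norm(dim=1)  # with disjoint_neurons=True, each row should have (roughly) unit norm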
- deel.torchlip.utils.remove_frobenius_norm(module: Module, name: str = 'weight') → Module [source]¶
Removes the Frobenius normalization reparameterization from a module.
- Parameters:
module – Containing module.
name – Name of weight parameter.
Example
>>> m = frobenius_norm(nn.Linear(20, 40))
>>> remove_frobenius_norm(m)
L-Conv normalization¶
- deel.torchlip.utils.lconv_norm(module: Conv2d | Conv1d, name: str = 'weight') → Conv2d | Conv1d [source]¶
Applies Lipschitz normalization to a kernel in the given convolutional module. This is implemented via a hook that multiplies the kernel by a value computed from the input shape before every forward() call. See Achieving robustness in classification using optimal transport with hinge regularization.
- Parameters:
module – Containing module.
name – Name of weight parameter.
- Returns:
The original module with the Lipschitz normalization hook.
Example:
>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> m
Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1))
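A minimal usage sketch (the shapes are illustrative): because the rescaling factor is computed from the input shape, it is applied at forward time rather than once at construction.
>>> import torch
>>> import torch.nn as nn
>>> from deel.torchlip.utils import lconv_norm
>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> y = m(torch.randn(1, 16, 32, 32))  # kernel rescaled using the 32x32 input shape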
- deel.torchlip.utils.remove_lconv_norm(module: Conv2d, name: str = 'weight') → Conv2d [source]¶
Removes the Lipschitz convolution normalization parametrization from a module.
- Parameters:
module – Containing module.
name – Name of weight parameter.
Example
>>> m = lconv_norm(nn.Conv2d(16, 16, (3, 3)))
>>> remove_lconv_norm(m)
Utilities¶
- deel.torchlip.utils.sqrt_with_gradeps(input: Tensor, eps: float = 1e-06) → Tensor [source]¶
Square-root of input with a “valid” gradient at 0.
- Parameters:
input – Tensor of arbitrary shape.
eps – Value to add to the input when computing gradient (must be positive).
- Returns:
A tensor whose value is the square-root of the input but whose associated autograd function is SqrtEpsGrad.
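An illustrative comparison (the gradient value is an inference from the eps description above, roughly 1/(2·sqrt(eps)) at zero):
>>> import torch
>>> from deel.torchlip.utils import sqrt_with_gradeps
>>> x = torch.zeros(3, requires_grad=True)
>>> sqrt_with_gradeps(x).sum().backward()
>>> x.grad  # finite everywhere, whereas torch.sqrt has an infinite gradient at 0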