deel.torchlip.functional¶
Non-linear activation functions¶
invertible down/up sample¶
- deel.torchlip.functional.invertible_downsample(input: torch.Tensor, kernel_size: Union[int, Tuple[int, ...]]) torch.Tensor [source]¶
Downsamples the input in an invertible way.
The number of elements in the output tensor is the same as the number of elements in the input tensor.
- Parameters
input – A tensor of shape (N, C, W), (N, C, H, W) or (N, C, D, H, W) to downsample.
kernel_size – The downsample scale. If a single value is passed, the same value will be used along all dimensions; otherwise the length of kernel_size must match the number of dimensions of the input (1, 2 or 3).
- Raises
ValueError – If there is a mismatch between kernel_size and the input shape.
Examples
>>> x = torch.rand(16, 16, 32, 32)
>>> x.shape
torch.Size([16, 16, 32, 32])
>>> y = invertible_downsample(x, (2, 4))
>>> y.shape
torch.Size([16, 128, 16, 8])
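Concretely, the 2-D case is a space-to-depth rearrangement: spatial blocks are moved into the channel dimension, so no element is created or lost. A minimal NumPy sketch of the idea (the exact channel ordering used by the library is an assumption here):

```python
import numpy as np

def invertible_downsample_2d(x, k):
    """Space-to-depth sketch for an (N, C, H, W) array; the channel
    ordering is an assumption, not necessarily the library's."""
    n, c, h, w = x.shape
    kh, kw = k
    # Split each spatial axis into (blocks, block_size)...
    x = x.reshape(n, c, h // kh, kh, w // kw, kw)
    # ...then move the block-size axes into the channel dimension.
    x = x.transpose(0, 1, 3, 5, 2, 4)
    return x.reshape(n, c * kh * kw, h // kh, w // kw)

x = np.random.rand(16, 16, 32, 32)
y = invertible_downsample_2d(x, (2, 4))   # shape (16, 128, 16, 8)
```

Because the operation only permutes elements, it is trivially invertible and the element count is unchanged.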
- deel.torchlip.functional.invertible_upsample(input: torch.Tensor, kernel_size: Union[int, Tuple[int, ...]]) torch.Tensor [source]¶
Upsamples the input in an invertible way. The number of elements in the output tensor is the same as the number of elements in the input tensor.
The number of input channels must be a multiple of the product of the kernel sizes, i.e. C mod (k_1 · … · k_d) = 0, where C is the number of input channels, k_i is the kernel size for dimension i, and d is the number of dimensions.
- Parameters
input – A tensor of shape (N, C, W), (N, C, H, W) or (N, C, D, H, W) to upsample.
kernel_size – The upsample scale. If a single value is passed, the same value will be used along all dimensions; otherwise the length of kernel_size must match the number of dimensions of the input (1, 2 or 3).
- Raises
ValueError – If there is a mismatch between kernel_size and the input shape.
Examples
>>> x = torch.rand(16, 128, 16, 8)
>>> x.shape
torch.Size([16, 128, 16, 8])
>>> y = invertible_upsample(x, (2, 4))
>>> y.shape
torch.Size([16, 16, 32, 32])
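The inverse rearrangement (depth-to-space) moves channel blocks back onto the spatial axes. A minimal NumPy sketch, with the channel ordering being an assumption about the implementation:

```python
import numpy as np

def invertible_upsample_2d(x, k):
    """Depth-to-space sketch for an (N, C, H, W) array; the channel
    ordering is an assumption, not necessarily the library's."""
    n, c, h, w = x.shape
    kh, kw = k
    # Split the channel axis into (channels, kh, kw)...
    x = x.reshape(n, c // (kh * kw), kh, kw, h, w)
    # ...then interleave the block axes back into the spatial dimensions.
    x = x.transpose(0, 1, 4, 2, 5, 3)
    return x.reshape(n, c // (kh * kw), h * kh, w * kw)

x = np.random.rand(16, 128, 16, 8)
y = invertible_upsample_2d(x, (2, 4))   # shape (16, 16, 32, 32)
```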
max_min¶
- deel.torchlip.functional.max_min(input: torch.Tensor, dim: Optional[int] = None) torch.Tensor [source]¶
Applies max-min activation on the given tensor.
If input is a tensor of shape (N, C) and dim is None, the output can be described as:

out(N_i, C_j) = max(input(N_i, C_j), 0)
out(N_i, C + C_j) = max(−input(N_i, C_j), 0)

where N is the batch size and C is the size of the tensor.
- Parameters
input – A tensor of arbitrary shape.
dim – The dimension to apply max-min. If None, it is applied to the 0th dimension if the shape of input is (C), or to the first if it is (N, C).
- Returns
A tensor of shape (2C) or (N, 2C) depending on the shape of the input.
Note
M. Blot, M. Cord, and N. Thome, "Max-min convolutional neural networks for image classification," in 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 3678-3682.
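For intuition, the activation concatenates the positive and negative parts of the input, doubling the feature dimension. A minimal NumPy sketch for an (N, C) input:

```python
import numpy as np

def max_min(x):
    # Concatenate the positive part max(x, 0) and the negative part
    # max(-x, 0) along the feature dimension, doubling its size.
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=-1)

x = np.array([[1.0, -2.0]])
y = max_min(x)   # [[1.0, 0.0, 0.0, 2.0]]
```

The original input is recoverable as the difference of the two halves (x = y[:, :C] − y[:, C:]), which is why the activation loses no information.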
group_sort¶
- deel.torchlip.functional.group_sort(input: torch.Tensor, group_size: Optional[int] = None) torch.Tensor [source]¶
Applies GroupSort activation on the given tensor: features are split into contiguous groups of group_size elements and each group is sorted in ascending order. If group_size is None, the whole feature dimension is sorted.
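A minimal NumPy sketch of the group-sort idea, assuming contiguous groups taken along the last dimension (the library's exact grouping is an assumption):

```python
import numpy as np

def group_sort(x, group_size=None):
    # Sort features in contiguous groups along the last dimension;
    # group_size=None sorts the whole dimension (FullSort).
    if group_size is None:
        group_size = x.shape[-1]
    shape = x.shape
    x = x.reshape(-1, group_size)
    x = np.sort(x, axis=-1)
    return x.reshape(shape)

x = np.array([[3.0, 1.0, 4.0, 2.0]])
y2 = group_sort(x, 2)    # groups (3, 1) and (4, 2) -> [[1, 3, 2, 4]]
y_full = group_sort(x)   # whole row sorted -> [[1, 2, 3, 4]]
```

Sorting only permutes values, so the activation is gradient-norm preserving, which is what makes it attractive for Lipschitz networks.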
- deel.torchlip.functional.group_sort_2(input: torch.Tensor) torch.Tensor [source]¶
Applies GroupSort-2 activation on the given tensor. This function is equivalent to group_sort(input, 2).
- deel.torchlip.functional.full_sort(input: torch.Tensor) torch.Tensor [source]¶
Applies FullSort activation on the given tensor. This function is equivalent to group_sort(input, None).
others¶
- deel.torchlip.functional.lipschitz_prelu(input: torch.Tensor, weight: torch.Tensor, k_coef_lip: float = 1.0) torch.Tensor [source]¶
Applies a k-Lipschitz version of PReLU by clamping the weights.
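A minimal NumPy sketch of the idea, assuming the negative-slope weight is clamped to [-k, k] (the exact clamping used by the library is an assumption):

```python
import numpy as np

def lipschitz_prelu(x, weight, k=1.0):
    # PReLU with its negative-slope weight clamped to [-k, k], so the
    # slope on the negative side never exceeds k in absolute value.
    a = np.clip(weight, -k, k)
    return np.where(x >= 0, x, a * x)

y1 = lipschitz_prelu(np.array([-2.0, 3.0]), weight=0.5)         # slope 0.5 on the negative side
y2 = lipschitz_prelu(np.array([-2.0, 3.0]), weight=5.0, k=1.0)  # weight clamped down to 1.0
```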
Loss functions¶
Binary losses¶
- deel.torchlip.functional.kr_loss(input: torch.Tensor, target: torch.Tensor, true_values: Tuple[int, int] = (0, 1)) torch.Tensor [source]¶
Loss to estimate the Wasserstein-1 distance using the Kantorovich-Rubinstein duality:

W1(μ, ν) = sup_{‖f‖_Lip ≤ 1} E_{x∼μ}[f(x)] − E_{x∼ν}[f(x)]

where μ and ν are the distributions corresponding to the two possible labels, as specified by true_values.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
true_values – Tuple containing the two labels for the predicted classes.
- Returns
The Wasserstein-1 loss between input and target.
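The empirical estimate amounts to the difference of mean scores on the two classes. A minimal NumPy sketch, with the sign convention as an assumption:

```python
import numpy as np

def kr_loss(f, labels, true_values=(0, 1)):
    # Empirical Kantorovich-Rubinstein estimate: mean score on one class
    # minus mean score on the other (the sign convention is an assumption).
    neg, pos = true_values
    return f[labels == pos].mean() - f[labels == neg].mean()

f = np.array([2.0, 4.0, -1.0, 1.0])
labels = np.array([1, 1, 0, 0])
loss = kr_loss(f, labels)   # mean([2, 4]) - mean([-1, 1]) = 3.0
```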
- deel.torchlip.functional.neg_kr_loss(input: torch.Tensor, target: torch.Tensor, true_values: Tuple[int, int] = (0, 1)) torch.Tensor [source]¶
Loss to estimate the negative Wasserstein-1 distance using the Kantorovich-Rubinstein duality.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
true_values – Tuple containing the two labels for the predicted classes.
- Returns
The negative Wasserstein-1 loss between input and target.
- deel.torchlip.functional.hinge_margin_loss(input: torch.Tensor, target: torch.Tensor, min_margin: float = 1) torch.Tensor [source]¶
Computes the hinge margin loss:

mean_i max(0, min_margin − y_i · x_i)

where x_i are the predictions and y_i ∈ {−1, +1} the target labels.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input containing target labels (-1 and +1).
min_margin – The minimal margin to enforce.
- Returns
The hinge margin loss.
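A minimal NumPy sketch of the elementwise hinge with signed labels:

```python
import numpy as np

def hinge_margin_loss(f, y, min_margin=1.0):
    # Elementwise hinge on signed predictions; y takes values in {-1, +1}.
    # Predictions on the correct side of the margin contribute zero.
    return np.maximum(0.0, min_margin - y * f).mean()

f = np.array([0.5, -2.0])
y = np.array([1, -1])
loss = hinge_margin_loss(f, y)   # mean([0.5, 0.0]) = 0.25
```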
- deel.torchlip.functional.hkr_loss(input: torch.Tensor, target: torch.Tensor, alpha: float, min_margin: float = 1.0, true_values: Tuple[int, int] = (-1, 1)) torch.Tensor [source]¶
Loss to estimate the Wasserstein-1 distance with a hinge regularization, using the Kantorovich-Rubinstein duality.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
alpha – Regularization factor between the hinge and the KR loss.
min_margin – Minimal margin for the hinge loss.
true_values – Tuple containing the two labels for the predicted classes.
- Returns
The regularized Wasserstein-1 loss.
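A minimal NumPy sketch combining the two terms; the weighting and sign convention used here (alpha · hinge − KR) is an assumption, not necessarily the library's exact formula:

```python
import numpy as np

def hkr_loss(f, y, alpha, min_margin=1.0):
    # KR term: difference of mean scores on the two classes (y in {-1, +1}).
    kr = f[y == 1].mean() - f[y == -1].mean()
    # Hinge term: penalize predictions inside the margin.
    hinge = np.maximum(0.0, min_margin - y * f).mean()
    # Combination convention (alpha * hinge - kr) is an assumption.
    return alpha * hinge - kr

f = np.array([2.0, -2.0])
y = np.array([1, -1])
loss = hkr_loss(f, y, alpha=10.0)   # hinge = 0, kr = 4 -> -4.0
```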
Multiclass losses¶
- deel.torchlip.functional.kr_multiclass_loss(input: torch.Tensor, target: torch.Tensor) torch.Tensor [source]¶
Loss to estimate the average Wasserstein-1 distance, using the Kantorovich-Rubinstein duality, over the outputs. In this multiclass setup the KR term is computed for each class and then averaged.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input. target has to be one-hot encoded (labels being 1s and 0s).
- Returns
The Wasserstein multiclass loss between input and target.
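A minimal NumPy sketch of the per-class KR terms, averaged over classes (the exact per-class weighting is an assumption):

```python
import numpy as np

def kr_multiclass_loss(f, onehot):
    # For each class c: mean score of in-class samples minus mean score
    # of out-of-class samples, then average over classes.
    terms = []
    for c in range(f.shape[1]):
        in_c = onehot[:, c] == 1
        terms.append(f[in_c, c].mean() - f[~in_c, c].mean())
    return np.mean(terms)

f = np.array([[2.0, 0.0],
              [0.0, 2.0]])
onehot = np.array([[1, 0],
                   [0, 1]])
loss = kr_multiclass_loss(f, onehot)   # (2.0 + 2.0) / 2 = 2.0
```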
- deel.torchlip.functional.hinge_multiclass_loss(input: torch.Tensor, target: torch.Tensor, min_margin: float = 1) torch.Tensor [source]¶
Loss to estimate the hinge loss in a multiclass setup. It computes the elementwise hinge term. Note that this formulation differs from the one commonly found in TensorFlow/PyTorch (which maximizes the difference between the two largest logits). This formulation is consistent with the binary classification loss used in a multiclass fashion.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input containing one-hot encoded target labels (0 and +1).
min_margin – The minimal margin to enforce.
Note
target should be one-hot encoded, with labels in {0, 1}.
- Returns
The hinge margin multiclass loss.
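A minimal NumPy sketch, mapping the one-hot target to signs in {−1, +1} and applying the hinge elementwise (any per-class reweighting done by the library is not reproduced here):

```python
import numpy as np

def hinge_multiclass_loss(f, onehot, min_margin=1.0):
    # Map one-hot {0, 1} targets to signs {-1, +1}, then apply the
    # binary hinge term elementwise and average.
    sign = 2.0 * onehot - 1.0
    return np.maximum(0.0, min_margin - sign * f).mean()

f = np.array([[2.0, -2.0]])
onehot = np.array([[1, 0]])
loss = hinge_multiclass_loss(f, onehot)   # both logits beyond the margin -> 0.0
```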
- deel.torchlip.functional.hkr_multiclass_loss(input: torch.Tensor, target: torch.Tensor, alpha: float = 0.0, min_margin: float = 1.0) torch.Tensor [source]¶
Loss to estimate the Wasserstein-1 distance with a hinge regularization, using the Kantorovich-Rubinstein duality.
- Parameters
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
alpha – Regularization factor between the hinge and the KR loss.
min_margin – Minimal margin for the hinge loss.
- Returns
The regularized Wasserstein-1 loss.