deel.torchlip.functional¶
Non-linear activation functions¶
max_min¶
- deel.torchlip.functional.max_min(input: Tensor, dim: int | None = None) Tensor [source]¶
Applies max-min activation on the given tensor.
If input is a tensor of shape $(N, C)$ and dim is None, the output can be described as:

$$\text{out}(N_i, C_j) = \max(\text{input}(N_i, C_j), 0), \qquad \text{out}(N_i, C_{C+j}) = \max(-\text{input}(N_i, C_j), 0)$$

where $N$ is the batch size and $C$ is the size of the tensor.
- Parameters:
input – A tensor of arbitrary shape.
dim – The dimension to apply max-min along. If None, applies to dimension 0 if the shape of input is $(C)$, or to dimension 1 if it is $(N, C)$.
- Returns:
A tensor of shape $(2C)$ or $(N, 2C)$, depending on the shape of the input.
Note
M. Blot, M. Cord, and N. Thome, "Max-min convolutional neural networks for image classification," in 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 3678-3682.
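A quick sanity check of the shape-doubling behavior (a minimal sketch; the expected values assume the concatenation order given in the formula above):

import torch
from deel.torchlip.functional import max_min

x = torch.tensor([[1.0, -2.0, 3.0]])  # shape (1, 3)
y = max_min(x)                        # shape (1, 6)
# First C entries hold max(x, 0), the last C hold max(-x, 0),
# so y should be [[1., 0., 3., 0., 2., 0.]] under that assumption.
print(y.shape)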
group_sort¶
- deel.torchlip.functional.group_sort(input: Tensor, group_size: int | None = None, dim: int = 1) Tensor [source]¶
Applies GroupSort activation on the given tensor: the input is split along dim into groups of group_size values, and the values within each group are sorted.
- deel.torchlip.functional.group_sort_2(input: Tensor) Tensor [source]¶
Applies GroupSort-2 activation on the given tensor. This function is equivalent to group_sort(input, 2).
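For intuition, a minimal sketch of the expected behavior on a small tensor (the contiguous grouping and ascending sort order shown in the comments are assumptions; only group_sort and group_sort_2 come from this module):

import torch
from deel.torchlip.functional import group_sort, group_sort_2

x = torch.tensor([[3.0, 1.0, 4.0, 2.0]])
# Sorting within groups of 2 along dim 1 should give [[1., 3., 2., 4.]]
# if groups are contiguous: (3, 1) -> (1, 3) and (4, 2) -> (2, 4).
print(group_sort(x, group_size=2))
print(group_sort_2(x))  # equivalent to group_sort(x, 2)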
Others¶
Padding functions¶
- class deel.torchlip.functional.SymmetricPad(pad, onedim=False)[source]¶
Pads a 2D tensor symmetrically.
- Parameters:
pad (tuple) – A tuple (pad_left, pad_right, pad_top, pad_bottom) specifying the number of pixels to pad on each side, or a single int to apply the same padding on all sides.
onedim – False for conv2d, True for conv1d.
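A minimal usage sketch (assuming the module is called like any torch.nn.Module on an (N, C, H, W) tensor):

import torch
from deel.torchlip.functional import SymmetricPad

pad = SymmetricPad((1, 1, 2, 2))  # pad_left, pad_right, pad_top, pad_bottom
x = torch.randn(8, 3, 32, 32)
y = pad(x)
print(y.shape)  # expected torch.Size([8, 3, 36, 34])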
Loss functions¶
Binary losses¶
- deel.torchlip.functional.kr_loss(input: Tensor, target: Tensor, multi_gpu=False) Tensor [source]¶
Loss to estimate the Wasserstein-1 distance using Kantorovich-Rubinstein duality:

$$\mathcal{W}_1(\mu, \nu) = \sup_{f \in \mathrm{Lip}_1(\Omega)} \mathbb{E}_{\mathbf{x} \sim \mu}[f(\mathbf{x})] - \mathbb{E}_{\mathbf{x} \sim \nu}[f(\mathbf{x})]$$

where $\mu$ and $\nu$ are the distributions corresponding to the two possible labels, as specified by their sign.
target accepts label values in {0, 1} or {-1, 1}, or labels pre-processed with the deel.torchlip.functional.process_labels_for_multi_gpu() function.
Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels target with the deel.torchlip.functional.process_labels_for_multi_gpu() function.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
- Returns:
The Wasserstein-1 loss between input and target.
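A minimal usage sketch (model outputs and labels below are random placeholders):

import torch
from deel.torchlip.functional import kr_loss

outputs = torch.randn(16, 1)                   # f(x), outputs of a 1-Lipschitz model
labels = torch.randint(0, 2, (16, 1)).float()  # labels in {0, 1}
loss = kr_loss(outputs, labels)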
- deel.torchlip.functional.neg_kr_loss(input: Tensor, target: Tensor, multi_gpu=False) Tensor [source]¶
Loss to estimate the negative Wasserstein-1 distance using Kantorovich-Rubinstein duality.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
- Returns:
The negative Wasserstein-1 loss between input and target.
- deel.torchlip.functional.hinge_margin_loss(input: Tensor, target: Tensor, min_margin: float = 1) Tensor [source]¶
Computes the hinge margin loss:

$$\mathbb{E}_{\mathbf{x}}\left[\max\left(0,\ m - y \cdot f(\mathbf{x})\right)\right]$$

where $m$ is the minimal margin min_margin and $y \in \{-1, +1\}$ is the target label.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input containing target labels (-1 and +1).
min_margin – The minimal margin to enforce.
- Returns:
The hinge margin loss.
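The formula above corresponds to a one-line PyTorch expression; a sketch for intuition, not the library's exact implementation:

import torch

def hinge_margin_sketch(input: torch.Tensor, target: torch.Tensor,
                        min_margin: float = 1.0) -> torch.Tensor:
    # max(0, m - y * f(x)) averaged over the batch, with y in {-1, +1}
    return torch.relu(min_margin - target * input).mean()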
- deel.torchlip.functional.hkr_loss(input: Tensor, target: Tensor, alpha: float, min_margin: float = 1.0, multi_gpu=False) Tensor [source]¶
Loss to estimate the Wasserstein-1 distance with a hinge regularization, using Kantorovich-Rubinstein duality.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
alpha – Regularization factor in [0, 1] balancing the hinge and the KR terms.
min_margin – Minimal margin for the hinge loss.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
- Returns:
The regularized Wasserstein-1 loss.
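One way to read alpha: the objective interpolates between the pure KR term (alpha = 0) and the pure hinge term (alpha = 1). A hedged sketch of such a combination built from the two losses documented above; the library's exact weighting may differ:

from deel.torchlip.functional import hinge_margin_loss, neg_kr_loss

def hkr_sketch(input, target, alpha, min_margin=1.0):
    # Assumed convex combination: alpha * hinge + (1 - alpha) * (-KR).
    return (alpha * hinge_margin_loss(input, target, min_margin)
            + (1 - alpha) * neg_kr_loss(input, target))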
Multiclass losses¶
- deel.torchlip.functional.hinge_multiclass_loss(input: Tensor, target: Tensor, min_margin: float = 1) Tensor [source]¶
Loss to estimate the hinge loss in a multiclass setup. It computes the elementwise hinge term. Note that this formulation differs from the one commonly found in TensorFlow/PyTorch (which maximizes the difference between the two largest logits); this formulation is consistent with the binary classification loss used in a multiclass fashion.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input containing one-hot encoded target labels (0 and +1).
min_margin – The minimal margin to enforce.
Note
target should be one-hot encoded, with labels in {0, 1}.
- Returns:
The hinge margin multiclass loss.
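To make the elementwise remark concrete: with one-hot targets, each class logit gets its own binary hinge term once the {0, 1} labels are mapped to {-1, +1}. A hedged sketch of that idea (the library's reduction over classes and batch may differ):

import torch

def hinge_multiclass_sketch(input, target, min_margin=1.0):
    signs = 2 * target - 1  # map one-hot {0, 1} to {-1, +1}
    return torch.relu(min_margin - signs * input).mean()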
- deel.torchlip.functional.hkr_multiclass_loss(input: Tensor, target: Tensor, alpha: float = 0.0, min_margin: float = 1.0, multi_gpu=False) Tensor [source]¶
Loss to estimate the Wasserstein-1 distance with a hinge regularization, using Kantorovich-Rubinstein duality.
- Parameters:
input – Tensor of arbitrary shape.
target – Tensor of the same shape as input.
alpha – Regularization factor in [0, 1] balancing the hinge and the KR terms.
min_margin – Minimal margin for the hinge loss.
multi_gpu (bool) – set to True when running on multi-GPU/TPU
- Returns:
The regularized Wasserstein-1 loss.
Others¶
- deel.torchlip.functional.process_labels_for_multi_gpu(labels: Tensor) Tensor [source]¶
Process labels to be fed to any loss based on KR estimation with a multi-GPU/TPU strategy.
When using a multi-GPU/TPU strategy, the flag multi_gpu in KR-based losses must be set to True and the labels have to be pre-processed with this function.
For binary classification, the labels should be of shape [batch_size, 1]. For multiclass problems, the labels must be one-hot encoded (1 or 0) with shape [batch_size, number of classes].
- Parameters:
labels (torch.Tensor) – tensor containing the labels
- Returns:
labels processed for KR-based losses with multi-GPU/TPU strategy.
- Return type:
torch.Tensor
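Putting the pieces together, a sketch of the multi-GPU workflow described above (tensors are random placeholders):

import torch
from deel.torchlip.functional import kr_loss, process_labels_for_multi_gpu

labels = torch.randint(0, 2, (32, 1)).float()     # binary labels, shape [batch_size, 1]
processed = process_labels_for_multi_gpu(labels)  # pre-process once, before the loss
outputs = torch.randn(32, 1)                      # placeholder model outputs
loss = kr_loss(outputs, processed, multi_gpu=True)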