
deel.torchlip.functional

Non-linear activation functions

max_min

deel.torchlip.functional.max_min(input: Tensor, dim: int | None = None) → Tensor

Applies max-min activation on the given tensor.

If input is a tensor of shape (N, C) and dim is None, the output can be described as:

$$\text{out}(N_i, C_{2j}) = \max(\text{input}(N_i, C_j), 0)$$
$$\text{out}(N_i, C_{2j+1}) = \max(-\text{input}(N_i, C_j), 0)$$

where N is the batch size and C is the size of the tensor.

Parameters:
  • input – A tensor of arbitrary shape.

  • dim – The dimension along which max-min is applied. If None, it is applied to dimension 0 if the shape of input is (C), or to dimension 1 if it is (N, C, *).

Returns:

A tensor of shape (2C) or (N, 2C, *) depending on the shape of the input.

Note

M. Blot, M. Cord, and N. Thome, "Max-min convolutional neural networks for image classification," in 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 3678-3682.
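
Example (a minimal sketch; shapes follow the description above):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> x = torch.tensor([[1.0, -2.0, 3.0]])  # shape (N, C) = (1, 3)
    >>> F.max_min(x).shape                    # the channel dimension is doubled
    torch.Size([1, 6])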

group_sort

deel.torchlip.functional.group_sort(input: Tensor, group_size: int | None = None, dim: int = 1) → Tensor

Applies GroupSort activation on the given tensor: values are sorted within consecutive groups of group_size channels along dimension dim. If group_size is None, the whole dimension is sorted.
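
Example (a short sketch, assuming sorting within consecutive pairs of channels along dim=1):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> x = torch.tensor([[3.0, 1.0, 4.0, 2.0]])
    >>> F.group_sort(x, group_size=2)  # sort within pairs of channels
    tensor([[1., 3., 2., 4.]])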

deel.torchlip.functional.group_sort_2(input: Tensor) → Tensor

Applies GroupSort-2 activation on the given tensor. This function is equivalent to group_sort(input, 2).

See also

group_sort()

deel.torchlip.functional.full_sort(input: Tensor) → Tensor

Applies FullSort activation on the given tensor. This function is equivalent to group_sort(input, None).

See also

group_sort()
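
Both wrappers reduce to group_sort() with a fixed group_size, which the following sketch checks:

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> x = torch.randn(8, 6)
    >>> torch.equal(F.group_sort_2(x), F.group_sort(x, 2))
    True
    >>> torch.equal(F.full_sort(x), F.group_sort(x, None))
    True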

others

deel.torchlip.functional.lipschitz_prelu(input: Tensor, weight: Tensor, k_coef_lip: float = 1.0) → Tensor

Applies a k-Lipschitz version of PReLU by clamping the weights:

$$\text{LPReLU}(x) = \begin{cases} x, & \text{if } x \geq 0 \\ \min(\max(a, -k), k) \cdot x, & \text{otherwise} \end{cases}$$
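
Example (a minimal sketch; per the formula above, with k_coef_lip=1.0 a slope a = 3.0 is clamped to 1.0 for negative inputs):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> x = torch.tensor([-2.0, 1.0])
    >>> weight = torch.tensor([3.0])  # PReLU slope for negative inputs
    >>> F.lipschitz_prelu(x, weight, k_coef_lip=1.0)
    tensor([-2., 1.])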

Padding functions

class deel.torchlip.functional.SymmetricPad(pad, onedim=False)

Pads a 2D tensor symmetrically.

Parameters:
  • pad (tuple) – A tuple (pad_left, pad_right, pad_top, pad_bottom) specifying the number of pixels to pad on each side, or a single int to apply the same padding on all sides.

  • onedim – Set to True for 1D inputs (conv1d), False for 2D inputs (conv2d).

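Example (a minimal sketch for the 2D case; height grows by pad_top + pad_bottom and width by pad_left + pad_right):

    >>> import torch
    >>> from deel.torchlip.functional import SymmetricPad
    >>> pad = SymmetricPad((1, 1, 2, 2))  # (left, right, top, bottom)
    >>> x = torch.randn(1, 3, 8, 8)       # (N, C, H, W)
    >>> pad(x).shape
    torch.Size([1, 3, 12, 10])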

Loss functions

Binary losses

deel.torchlip.functional.kr_loss(input: Tensor, target: Tensor, multi_gpu=False) → Tensor

Loss to estimate the Wasserstein-1 distance using Kantorovich-Rubinstein duality, as per

$$\mathcal{W}(\mu, \nu) = \sup_{f \in Lip_1(\Omega)} \underset{\mathbf{x} \sim \mu}{\mathbb{E}}[f(\mathbf{x})] - \underset{\mathbf{x} \sim \nu}{\mathbb{E}}[f(\mathbf{x})]$$

where μ and ν are the distributions corresponding to the two possible labels, as specified by their sign.

target accepts label values in (0, 1) or (-1, 1), or labels pre-processed with the deel.torchlip.functional.process_labels_for_multi_gpu() function.

Using a multi-GPU/TPU strategy requires setting multi_gpu to True and pre-processing the labels target with the deel.torchlip.functional.process_labels_for_multi_gpu() function.

Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input.

  • multi_gpu (bool) – Set to True when running on multi-GPU/TPU.

Returns:

The Wasserstein-1 loss between input and target.
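
Example (a minimal single-device sketch using (0, 1) labels; scores stands in for model outputs):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> scores = torch.tensor([[1.2], [-0.7], [0.3], [-1.1]])
    >>> labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
    >>> loss = F.kr_loss(scores, labels)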

deel.torchlip.functional.neg_kr_loss(input: Tensor, target: Tensor, multi_gpu=False) → Tensor

Loss to estimate the negative Wasserstein-1 distance using Kantorovich-Rubinstein duality.

Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input.

  • multi_gpu (bool) – Set to True when running on multi-GPU/TPU.

Returns:

The negative Wasserstein-1 loss between input and target.

See also

kr_loss()

deel.torchlip.functional.hinge_margin_loss(input: Tensor, target: Tensor, min_margin: float = 1) → Tensor

Compute the hinge margin loss as per

$$\underset{\mathbf{x}}{\mathbb{E}}[\max(0, 1 - \mathbf{y} f(\mathbf{x}))]$$

where the unit margin is replaced by min_margin when a different value is passed.
Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input containing target labels (-1 and +1).

  • min_margin – The minimal margin to enforce.

Returns:

The hinge margin loss.
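
Example (a minimal sketch with labels in {-1, +1}; samples with y·f(x) below min_margin contribute to the loss):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> scores = torch.tensor([0.4, -1.5, 0.9])
    >>> labels = torch.tensor([1.0, -1.0, -1.0])
    >>> loss = F.hinge_margin_loss(scores, labels, min_margin=1.0)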

deel.torchlip.functional.hkr_loss(input: Tensor, target: Tensor, alpha: float, min_margin: float = 1.0, multi_gpu=False) → Tensor

Loss to estimate the Wasserstein-1 distance with a hinge regularization, using Kantorovich-Rubinstein duality.

Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input.

  • alpha – Regularization factor in [0, 1] balancing the hinge and the KR terms.

  • min_margin – Minimal margin for the hinge loss.

  • multi_gpu (bool) – Set to True when running on multi-GPU/TPU.

Returns:

The regularized Wasserstein-1 loss.
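
Example (a minimal sketch; alpha balances the hinge and KR terms per the parameter description above):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> scores = torch.tensor([[0.8], [-0.3]])
    >>> labels = torch.tensor([[1.0], [0.0]])
    >>> loss = F.hkr_loss(scores, labels, alpha=0.9, min_margin=1.0)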

multiclass losses

deel.torchlip.functional.hinge_multiclass_loss(input: Tensor, target: Tensor, min_margin: float = 1) → Tensor

Loss to estimate the hinge loss in a multiclass setup. It computes the elementwise hinge term. Note that this formulation differs from the one commonly found in TensorFlow/PyTorch (which maximizes the difference between the two largest logits). This formulation is consistent with the binary classification loss used in a multiclass fashion.

Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input containing one-hot encoded target labels (0 and +1).

  • min_margin – The minimal margin to enforce.

Note

target should be one-hot encoded, with labels in (0, 1).

Returns:

The hinge margin multiclass loss.
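
Example (a minimal sketch with one-hot targets built from class indices):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> logits = torch.randn(4, 3)
    >>> targets = torch.nn.functional.one_hot(torch.tensor([0, 2, 1, 0]), num_classes=3).float()
    >>> loss = F.hinge_multiclass_loss(logits, targets, min_margin=1.0)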

deel.torchlip.functional.hkr_multiclass_loss(input: Tensor, target: Tensor, alpha: float = 0.0, min_margin: float = 1.0, multi_gpu=False) → Tensor

Loss to estimate the Wasserstein-1 distance with a hinge regularization, using Kantorovich-Rubinstein duality.

Parameters:
  • input – Tensor of arbitrary shape.

  • target – Tensor of the same shape as input.

  • alpha – Regularization factor in [0, 1] balancing the hinge and the KR terms.

  • min_margin – Minimal margin for the hinge loss.

  • multi_gpu (bool) – Set to True when running on multi-GPU/TPU.

Returns:

The regularized Wasserstein-1 loss.
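
Example (a minimal sketch, analogous to the binary hkr_loss() case):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> logits = torch.randn(4, 3)
    >>> targets = torch.nn.functional.one_hot(torch.tensor([1, 0, 2, 2]), num_classes=3).float()
    >>> loss = F.hkr_multiclass_loss(logits, targets, alpha=0.9, min_margin=1.0)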

others

deel.torchlip.functional.process_labels_for_multi_gpu(labels: Tensor) → Tensor

Process labels to be fed to any loss based on KR estimation with a multi-GPU/TPU strategy.

When using a multi-GPU/TPU strategy, the flag multi_gpu in KR-based losses must be set to True and the labels have to be pre-processed with this function.

For binary classification, the labels should be of shape [batch_size, 1]. For multiclass problems, the labels must be one-hot encoded (1 or 0) with shape [batch_size, number of classes].

Parameters:

labels (torch.Tensor) – Tensor containing the labels.

Returns:

Labels processed for KR-based losses with a multi-GPU/TPU strategy.

Return type:

torch.Tensor
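
Example (a minimal binary sketch; scores is a placeholder for model outputs):

    >>> import torch
    >>> from deel.torchlip import functional as F
    >>> labels = torch.tensor([[1.0], [0.0], [1.0]])  # shape (batch_size, 1)
    >>> processed = F.process_labels_for_multi_gpu(labels)
    >>> scores = torch.randn(3, 1)
    >>> loss = F.kr_loss(scores, processed, multi_gpu=True)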


© Copyright 2020, IRT Antoine de Saint Exupéry - All rights reserved. DEEL is a research program operated by IVADO, IRT Saint Exupéry, CRIAQ and ANITI.
