deel.lip.utils

Contains utility functions.

evaluate_lip_const

evaluate_lip_const(model, x, eps=0.0001, seed=None)

Evaluate the Lipschitz constant of a model with the naive method. Note that the estimation of the Lipschitz constant is done locally around the input samples, so it may not correctly reflect the behaviour over the whole domain.
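
Concretely (see the source below), each input sample x_i is perturbed by a small uniform noise δ_i with components drawn from [0.25·eps, eps], and the returned estimate is L̂ = max_i ‖f(x_i + δ_i) − f(x_i)‖₂ / ‖δ_i‖₂, where both norms are taken over all non-batch dimensions.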

PARAMETER DESCRIPTION
model

built Keras model used to make predictions

TYPE: Model

x

inputs used to compute the Lipschitz constant

eps

magnitude of noise to add to input in order to compute the constant

TYPE: float DEFAULT: 0.0001

seed

seed used when generating the noise (can be set to None)

TYPE: int DEFAULT: None

RETURNS DESCRIPTION
float

the empirically evaluated Lipschitz constant. The computation might also be inaccurate in high-dimensional spaces.

Source code in deel/lip/utils.py
def evaluate_lip_const(model: Model, x, eps=1e-4, seed=None):
    """
    Evaluate the Lipschitz constant of a model with the naive method.
    Note that the estimation of the Lipschitz constant is done locally around
    the input samples, so it may not reflect the behaviour over the whole domain.

    Args:
        model: built Keras model used to make predictions
        x: inputs used to compute the Lipschitz constant
        eps (float): magnitude of the noise added to the inputs to compute
            the constant
        seed (int): seed used when generating the noise (can be set to None)

    Returns:
        float: the empirically evaluated Lipschitz constant. The computation
            might also be inaccurate in high-dimensional spaces.

    """
    y_pred = model.predict(x)
    # Perturb each input with a small uniform noise of magnitude at most eps.
    x_var = x + K.random_uniform(
        shape=x.shape, minval=eps * 0.25, maxval=eps, seed=seed
    )
    y_pred_var = model.predict(x_var)
    dx = x - x_var
    dfx = y_pred - y_pred_var
    # Per-sample L2 norms of the input and output variations.
    ndx = K.sqrt(K.sum(K.square(dx), axis=range(1, len(x.shape))))
    ndfx = K.sqrt(K.sum(K.square(dfx), axis=range(1, len(y_pred.shape))))
    # The constant is estimated as the largest ratio over the batch.
    lip_cst = K.max(ndfx / ndx)
    print(f"lip cst: {lip_cst:.3f}")
    return lip_cst
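
For illustration, here is a minimal usage sketch. The toy model, input shape, and random data below are hypothetical placeholders; any built Keras model and matching input batch work the same way.

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

from deel.lip.utils import evaluate_lip_const

# Hypothetical toy model; any built Keras model can be passed here.
model = Sequential([Dense(32, activation="relu", input_shape=(10,)), Dense(1)])

# Random batch of inputs around which the constant is estimated locally.
x = np.random.uniform(size=(256, 10)).astype("float32")

lip_cst = evaluate_lip_const(model, x, eps=1e-4, seed=42)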

evaluate_lip_const_gen

evaluate_lip_const_gen(
    model, generator, eps=0.0001, seed=None
)

Evaluate the Lipschitz constant of a model with the naive method. Note that the estimation of the Lipschitz constant is done locally around the input samples, so it may not correctly reflect the behaviour over the whole domain. The computation might also be inaccurate in high-dimensional spaces.

This is the generator version of evaluate_lip_const.

PARAMETER DESCRIPTION
model

built Keras model used to make predictions

TYPE: Model

generator

used to select the datapoints at which the Lipschitz constant is computed

TYPE: Generator[Tuple[ndarray, ndarray], Any, None]

eps

magnitude of noise to add to input in order to compute the constant

TYPE: float DEFAULT: 0.0001

seed

seed used when generating the noise (can be set to None)

TYPE: int DEFAULT: None

RETURNS DESCRIPTION
float

the empirically evaluated Lipschitz constant.

Source code in deel/lip/utils.py
def evaluate_lip_const_gen(
    model: Model,
    generator: Generator[Tuple[np.ndarray, np.ndarray], Any, None],
    eps=1e-4,
    seed=None,
):
    """
    Evaluate the Lipschitz constant of a model with the naive method.
    Note that the estimation of the Lipschitz constant is done locally around
    the input samples, so it may not reflect the behaviour over the whole domain.
    The computation might also be inaccurate in high-dimensional spaces.

    This is the generator version of evaluate_lip_const.

    Args:
        model: built Keras model used to make predictions
        generator: used to select the datapoints at which the Lipschitz
            constant is computed
        eps (float): magnitude of the noise added to the inputs to compute
            the constant
        seed (int): seed used when generating the noise (can be set to None)

    Returns:
        float: the empirically evaluated Lipschitz constant.

    """
    # Draw a single batch from the generator and evaluate the constant on it.
    x, _ = generator.send(None)
    return evaluate_lip_const(model, x, eps, seed=seed)
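
A matching sketch for the generator variant. The batch_generator below is a hypothetical placeholder; only the inputs of each yielded (inputs, labels) pair are used, and model is assumed to be any built Keras model, such as the one from the previous sketch.

import numpy as np

from deel.lip.utils import evaluate_lip_const_gen

def batch_generator(batch_size=256, input_dim=10):
    # Hypothetical infinite generator of (inputs, labels) batches.
    while True:
        x = np.random.uniform(size=(batch_size, input_dim)).astype("float32")
        y = np.zeros((batch_size, 1), dtype="float32")
        yield x, y

lip_cst = evaluate_lip_const_gen(model, batch_generator(), eps=1e-4, seed=42)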

process_labels_for_multi_gpu

process_labels_for_multi_gpu(labels)

Process labels to be fed to any loss based on KR estimation with a multi-GPU/TPU strategy.

When using a multi-GPU/TPU strategy, the flag multi_gpu in KR-based losses must be set to True and the labels have to be pre-processed with this function.

For binary classification, the labels should be of shape [batch_size, 1]. For multiclass problems, the labels must be one-hot encoded (1 or 0) with shape [batch_size, number of classes].

PARAMETER DESCRIPTION
labels

tensor containing the labels

TYPE: Tensor

RETURNS DESCRIPTION

tf.Tensor: labels processed for KR-based losses with multi-GPU/TPU strategy.

Source code in deel/lip/utils.py
@tf.function
def process_labels_for_multi_gpu(labels):
    """Process labels to be fed to any loss based on KR estimation with a multi-GPU/TPU
    strategy.

    When using a multi-GPU/TPU strategy, the flag `multi_gpu` in KR-based losses must be
    set to True and the labels have to be pre-processed with this function.

    For binary classification, the labels should be of shape [batch_size, 1].
    For multiclass problems, the labels must be one-hot encoded (1 or 0) with shape
    [batch_size, number of classes].

    Args:
        labels (tf.Tensor): tensor containing the labels

    Returns:
        tf.Tensor: labels processed for KR-based losses with multi-GPU/TPU strategy.
    """
    eps = 1e-7
    # Binarize the labels: any strictly positive value counts as a positive.
    labels = tf.cast(tf.where(labels > 0, 1, 0), labels.dtype)
    batch_size = tf.cast(tf.shape(labels)[0], labels.dtype)
    # Per-class number of positive samples in the batch.
    counts = tf.reduce_sum(labels, axis=0)

    # Weight positives by 1/n_pos and negatives by 1/n_neg; eps avoids a
    # division by zero when a class has no positive or no negative sample.
    pos = labels / (counts + eps)
    neg = (1 - labels) / (batch_size - counts + eps)
    # Since element-wise KR terms are averaged by loss reduction later on, it is needed
    # to multiply by batch_size here.
    return batch_size * (pos - neg)
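
To make the weighting concrete, here is a small sketch on hypothetical binary labels (a batch of 4 samples with 3 positives). The values in the comments follow directly from the code above.

import tensorflow as tf

from deel.lip.utils import process_labels_for_multi_gpu

# Hypothetical binary labels of shape [batch_size, 1]: 3 positives, 1 negative.
labels = tf.constant([[1.0], [1.0], [0.0], [1.0]])

processed = process_labels_for_multi_gpu(labels)
# Each positive becomes batch_size / n_pos = 4/3 ≈ 1.333 and the negative
# becomes -batch_size / n_neg = -4.0 (up to the 1e-7 epsilon).
print(processed)

The processed tensor can then be passed as y_true to a KR-based loss configured with multi_gpu=True.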