deel.lip.callbacks

This module contains callbacks that can be added to the Keras training process.

CondenseCallback

CondenseCallback(on_epoch=True, on_batch=False)

Bases: Callback

Automatically condense layers of a model on batches/epochs. Condensing a layer consists of overwriting the kernel with the constrained weights. This prevents the explosion or vanishing of values inside the original kernel.

Warning

Overwriting the kernel may disturb the optimizer, especially if it has a non-zero momentum.

PARAMETER DESCRIPTION
on_epoch

if True, apply the constraint between epochs

TYPE: bool DEFAULT: True

on_batch

if True, apply the constraint between batches

TYPE: bool DEFAULT: False

Source code in deel/lip/callbacks.py
def __init__(self, on_epoch: bool = True, on_batch: bool = False):
    """
    Automatically condense layers of a model on batches/epochs. Condensing a layer
    consists in overwriting the kernel with the constrained weights. This prevents
    the explosion/vanishing of values inside the original kernel.

    Warning:
        Overwriting the kernel may disturb the optimizer, especially if it has a
        non-zero momentum.

    Args:
        on_epoch: if True apply the constraint between epochs
        on_batch: if True apply constraints between batches
    """
    super().__init__()
    self.on_epoch = on_epoch
    self.on_batch = on_batch
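
Example usage (a minimal sketch, not part of this module's reference: the Sequential model, SpectralDense layer and toy data below are assumptions drawn from the wider deel-lip API):

import numpy as np
import tensorflow as tf
from deel.lip.layers import SpectralDense
from deel.lip.model import Sequential
from deel.lip.callbacks import CondenseCallback

# toy 1-Lipschitz model; zero-momentum SGD avoids the optimizer warning above
model = Sequential([SpectralDense(10, input_shape=(16,))])
model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.0), loss="mse")

x = np.random.rand(64, 16).astype("float32")
y = np.random.rand(64, 10).astype("float32")

# condense (overwrite the kernel with the constrained weights) after each epoch
model.fit(x, y, epochs=2, callbacks=[CondenseCallback(on_epoch=True)])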

LossParamLog

LossParamLog(param_name, rate=1)

Bases: Callback

Logger to print values of a loss parameter at each epoch.

PARAMETER DESCRIPTION
param_name

name of the parameter of the loss to log.

TYPE: str

rate

logging rate (in epochs)

TYPE: int DEFAULT: 1

Source code in deel/lip/callbacks.py
def __init__(self, param_name, rate=1):
    """
    Logger to print values of a loss parameter at each epoch.

    Args:
        param_name (str): name of the parameter of the loss to log.
        rate (int): logging rate (in epochs)
    """
    self.param_name = param_name
    self.rate = rate
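
Example usage (a hedged sketch: the HKR loss and its alpha parameter come from deel.lip.losses and are assumptions here, used only to show how the callback is wired in):

from deel.lip.losses import HKR
from deel.lip.callbacks import LossParamLog

loss = HKR(alpha=5.0)                      # loss exposing an "alpha" parameter
log_alpha = LossParamLog("alpha", rate=1)  # print the value once per epoch

# model.compile(optimizer="adam", loss=loss)
# model.fit(x, y, epochs=10, callbacks=[log_alpha])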

LossParamScheduler

LossParamScheduler(param_name, fp, xp, step=0)

Bases: Callback

Scheduler to modify a loss parameter during training. The parameter value is obtained by linear interpolation (defined by fp and xp) of the current optimization step.

PARAMETER DESCRIPTION
param_name

name of the parameter of the loss to tune. Must be a tf.Variable.

TYPE: str

fp

values of the loss parameter at the steps given by xp.

TYPE: list

xp

steps at which the parameter takes the corresponding values of fp.

TYPE: list

step

step value, for serialization/deserialization purposes.

TYPE: int DEFAULT: 0

Source code in deel/lip/callbacks.py
def __init__(self, param_name, fp, xp, step=0):
    """
    Scheduler to modify a loss parameter during training. It uses a linear
    interpolation (defined by fp and xp) depending on the optimization step.

    Args:
        param_name (str): name of the parameter of the loss to tune. Must be a
            tf.Variable.
        fp (list): values of the loss parameter as steps given by the xp.
        xp (list): step where the parameter equals fp.
        step (int): step value, for serialization/deserialization purposes.
    """
    self.xp = xp
    self.fp = fp
    self.step = step
    self.param_name = param_name
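
Example usage (a sketch under the assumption that the scheduled value follows the same piecewise-linear interpolation as numpy.interp between the (xp, fp) pairs; the "alpha" parameter name is a placeholder for a tf.Variable exposed by the loss):

import numpy as np
from deel.lip.callbacks import LossParamScheduler

xp = [0, 1000, 5000]    # optimization steps
fp = [0.0, 1.0, 10.0]   # loss parameter value at those steps
scheduler = LossParamScheduler(param_name="alpha", fp=fp, xp=xp)

# the interpolated value at step 3000 lies halfway between fp[1] and fp[2]
print(np.interp(3000, xp, fp))  # 5.5

# model.fit(x, y, epochs=10, callbacks=[scheduler])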

MonitorCallback

MonitorCallback(
    monitored_layers,
    logdir,
    target="kernel",
    what="max",
    on_epoch=True,
    on_batch=False,
)

Bases: Callback

Allows monitoring the singular values of specified layers during training. This analyzes the singular values of the original kernel (before reparametrization). Two modes are available: "max" plots the largest singular value over training, while "all" plots the distribution of the singular values over training (a series of distributions).

PARAMETER DESCRIPTION
monitored_layers

list of layer names to monitor.

TYPE: Iterable[str]

logdir

path to the logging directory.

TYPE: str

target

describes what to monitor; can be either "kernel" or "wbar". Setting it to "kernel" checks the values of the unconstrained weights, while setting it to "wbar" checks the values of the constrained weights (allowing to verify that the parameters actually ensure the Lipschitz constraint).

TYPE: str DEFAULT: 'kernel'

what

either "max", which display the largest singular value over the training process, or "all", which plot the distribution of all singular values.

TYPE: str DEFAULT: 'max'

on_epoch

if True, monitor the layers between epochs.

TYPE: bool DEFAULT: True

on_batch

if True, monitor the layers between batches.

TYPE: bool DEFAULT: False

Source code in deel/lip/callbacks.py
def __init__(
    self,
    monitored_layers: Iterable[str],
    logdir: str,
    target: str = "kernel",
    what: str = "max",
    on_epoch: bool = True,
    on_batch: bool = False,
):
    """
    Allow to monitor the singular values of specified layers during training. This
    analyze the singular values of the original kernel (before reparametrization).
    Two modes can be chosen: "max" plots the largest singular value over training,
    while "all" plots the distribution of the singular values over training (series
    of distribution).

    Args:
        monitored_layers: list of layer name to monitor.
        logdir: path to the logging directory.
        target: describe what to monitor, can either "kernel" or "wbar". Setting
            to "kernel" check values of the unconstrained weights while setting to
            "wbar" check values of the constrained weights (allowing to check if
            the parameters are correct to ensure lipschitz constraint)
        what: either "max", which display the largest singular value over the
            training process, or "all", which plot the distribution of all singular
            values.
        on_epoch: if True apply the constraint between epochs.
        on_batch: if True apply constraints between batches.
    """
    self.on_epoch = on_epoch
    self.on_batch = on_batch
    assert target in {"kernel", "wbar"}
    self.target = target
    assert what in {"max", "all"}
    self.what = what
    self.logdir = logdir
    self.file_writer = tf.summary.create_file_writer(
        os.path.join(logdir, "metrics")
    )
    self.monitored_layers = monitored_layers
    if on_batch and on_epoch:
        self.on_epoch = False  # avoid display bug (inconsistent steps)
    self.epochs = 0
    super().__init__()
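
Example usage (an illustrative sketch; the layer name "spectral_dense" and the log directory are placeholders):

from deel.lip.callbacks import MonitorCallback

monitor = MonitorCallback(
    monitored_layers=["spectral_dense"],
    logdir="./logs",
    target="kernel",   # inspect the unconstrained weights ("wbar" for constrained)
    what="max",        # largest singular value only ("all" for the distribution)
    on_epoch=True,
)

# model.fit(x, y, epochs=10, callbacks=[monitor])
# then inspect the curves with: tensorboard --logdir ./logs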