deel.lip.callbacks
This module contains callbacks that can be added to the Keras training process.
CondenseCallback
CondenseCallback(on_epoch=True, on_batch=False)
Bases: Callback
Automatically condense layers of a model on batches/epochs. Condensing a layer consists of overwriting the kernel with the constrained weights. This prevents the explosion or vanishing of values inside the original kernel.
Warning
Overwriting the kernel may disturb the optimizer, especially if it has a non-zero momentum.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`on_epoch` | If `True`, apply the constraint between epochs. | `bool`
`on_batch` | If `True`, apply the constraint between batches. | `bool`
Source code in `deel/lip/callbacks.py`, lines 18–34.
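A minimal usage sketch; the model and data below are illustrative assumptions (any model built from deel-lip layers works the same way):

```python
import numpy as np
import tensorflow as tf

from deel.lip.callbacks import CondenseCallback
from deel.lip.layers import SpectralDense

# Illustrative 1-Lipschitz model built from deel-lip layers.
model = tf.keras.Sequential(
    [
        SpectralDense(8, input_shape=(16,)),
        SpectralDense(1),
    ]
)
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(64, 16), np.random.rand(64, 1)

# Overwrite each kernel with its constrained version at the end of every
# epoch; keeping on_batch=False avoids disturbing a momentum-based optimizer.
model.fit(x, y, epochs=2, callbacks=[CondenseCallback(on_epoch=True, on_batch=False)])
```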
LossParamLog
LossParamLog(param_name, rate=1)
Bases: Callback
Logger that prints the value of a loss parameter every `rate` epochs.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`param_name` | Name of the loss parameter to log. | `str`
`rate` | Logging rate, in epochs. | `int`
Source code in `deel/lip/callbacks.py`, lines 205–214.
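A short sketch, reusing the model and data from the example above and assuming the compiled loss exposes the named parameter as an attribute (here the `alpha` parameter of `deel.lip.losses.HKR`):

```python
from deel.lip.callbacks import LossParamLog
from deel.lip.losses import HKR

# HKR exposes `alpha` as an attribute, so LossParamLog can read it by name.
model.compile(optimizer="adam", loss=HKR(alpha=10.0))

# Print the current value of `alpha` every 2 epochs.
model.fit(x, y, epochs=4, callbacks=[LossParamLog("alpha", rate=2)])
```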
LossParamScheduler
LossParamScheduler(param_name, fp, xp, step=0)
Bases: Callback
Scheduler that modifies a loss parameter during training, using linear interpolation (defined by `fp` and `xp`) over the optimization step.
PARAMETER | DESCRIPTION | TYPE
---|---|---
`param_name` | Name of the loss parameter to tune. Must be a `tf.Variable`. | `str`
`fp` | Values of the loss parameter at the steps given by `xp`. | `list`
`xp` | Steps at which the parameter takes the corresponding `fp` values. | `list`
`step` | Initial step value, for serialization/deserialization purposes. | `int`
Source code in `deel/lip/callbacks.py`, lines 172–187.
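A hedged sketch of scheduling the `alpha` parameter of an HKR loss, again reusing the model and data above. The scheduled parameter must be a `tf.Variable` so the scheduler can assign new values to it; whether the loss wraps plain floats itself may depend on the library version, so one is passed explicitly here:

```python
import tensorflow as tf

from deel.lip.callbacks import LossParamScheduler
from deel.lip.losses import HKR

# Pass a tf.Variable so the scheduler can assign interpolated values to it.
model.compile(optimizer="adam", loss=HKR(alpha=tf.Variable(0.0)))

# Linear ramp: alpha is 0.0 at step 0 and 50.0 at step 1000; between those
# steps it is linearly interpolated, and after step 1000 it stays at 50.0.
scheduler = LossParamScheduler(param_name="alpha", xp=[0, 1000], fp=[0.0, 50.0])
model.fit(x, y, epochs=4, callbacks=[scheduler])
```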
MonitorCallback
MonitorCallback(
monitored_layers,
logdir,
target="kernel",
what="max",
on_epoch=True,
on_batch=False,
)
Bases: Callback
Monitors the singular values of specified layers during training. This analyzes the singular values of the original kernel (before reparametrization). Two modes are available: "max" plots the largest singular value over training, while "all" plots the distribution of the singular values over training (a series of distributions).
PARAMETER | DESCRIPTION | TYPE
---|---|---
`monitored_layers` | List of layer names to monitor. | `list`
`logdir` | Path to the logging directory. | `str`
`target` | What to monitor: either `"kernel"` or `"wbar"`. `"kernel"` checks the values of the unconstrained weights, while `"wbar"` checks the values of the constrained weights (useful to verify that the Lipschitz constraint is enforced). | `str`
`what` | Either `"max"`, which displays the largest singular value over the training process, or `"all"`, which plots the distribution of all singular values. | `str`
`on_epoch` | If `True`, monitor between epochs. | `bool`
`on_batch` | If `True`, monitor between batches. | `bool`
Source code in `deel/lip/callbacks.py`, lines 58–101.
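A usage sketch, reusing the model and data above. The layer names here are the default Keras-generated names of the two `SpectralDense` layers from the first example and must match the `.name` attribute of the layers you want to monitor:

```python
from deel.lip.callbacks import MonitorCallback

# Track the largest singular value of the unconstrained kernel of each
# listed layer; summaries are written under ./logs for TensorBoard.
monitor = MonitorCallback(
    monitored_layers=["spectral_dense", "spectral_dense_1"],
    logdir="./logs",
    target="kernel",
    what="max",
    on_epoch=True,
)
model.fit(x, y, epochs=4, callbacks=[monitor])
```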