xmodaler.lr_scheduler

xmodaler.lr_scheduler.build_lr_scheduler(cfg, optimizer, data_size)[source]
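
A minimal usage sketch of the factory above. Only the call signature (cfg, optimizer, data_size) is taken from the docs; the config helper and the meaning of data_size (presumably the number of iterations per epoch) are assumptions marked in the comments.

    import torch
    from xmodaler.config import get_cfg              # assumed detectron2-style default-config helper
    from xmodaler.lr_scheduler import build_lr_scheduler

    cfg = get_cfg()                                   # adjust the scheduler options in cfg as needed
    model = torch.nn.Linear(16, 16)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = build_lr_scheduler(cfg, optimizer, data_size=1000)

    for _ in range(100):
        optimizer.step()                              # update parameters first ...
        scheduler.step()                              # ... then advance the learning-rate schedule
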
class xmodaler.lr_scheduler.StepLR(*, optimizer, step_size, gamma=0.1)[source]

Bases: StepLR

__init__(*, optimizer, step_size, gamma=0.1)[source]
_get_closed_form_lr()[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()[source]
load_state_dict(state_dict)

Loads the scheduler's state.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.

step(epoch=None)
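
A brief sketch of direct construction with the keyword-only arguments shown above. Since the class wraps torch.optim.lr_scheduler.StepLR, the rate is multiplied by gamma every step_size calls to step(); the toy optimizer and numbers are illustrative only.

    import torch
    from xmodaler.lr_scheduler import StepLR

    optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
    scheduler = StepLR(optimizer=optimizer, step_size=3, gamma=0.1)

    for epoch in range(6):
        optimizer.step()
        scheduler.step()
        print(scheduler.get_last_lr())                # drops from [0.1] to [0.01] after 3 steps
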
class xmodaler.lr_scheduler.NoamLR(*, optimizer, model_size, factor, warmup, last_epoch=-1)[source]

Bases: _LRScheduler

__init__(*, optimizer, model_size, factor, warmup, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()[source]
load_state_dict(state_dict)

Loads the scheduler's state.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.

step(epoch=None)
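
A hedged sketch of NoamLR. Judging by the parameter names it follows the Transformer ("Noam") schedule, lr = factor * model_size**-0.5 * min(step**-0.5, step * warmup**-1.5), i.e. linear warmup for warmup iterations followed by inverse-square-root decay; the exact interaction with the optimizer's base rate is not stated in the docs, so treat the numbers as illustrative.

    import torch
    from xmodaler.lr_scheduler import NoamLR

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=1.0)
    scheduler = NoamLR(optimizer=optimizer, model_size=512, factor=1.0, warmup=100)

    for it in range(300):                             # stepped once per training iteration
        optimizer.step()
        scheduler.step()
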
class xmodaler.lr_scheduler.WarmupConstant(*, optimizer, warmup_steps, last_epoch=-1)[source]

Bases: LambdaLR

Linear warmup and then constant. Linearly increases the learning-rate multiplier from 0 to 1 over warmup_steps training steps, then keeps it equal to 1 after warmup_steps.

__init__(*, optimizer, warmup_steps, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()
load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

lr_lambda(step)[source]
print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

step(epoch=None)
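
A short sketch matching the description above: the multiplier rises linearly from 0 to 1 over warmup_steps and then stays at 1, so the rate settles at the optimizer's base learning rate. Values are illustrative.

    import torch
    from xmodaler.lr_scheduler import WarmupConstant

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=5e-4)
    scheduler = WarmupConstant(optimizer=optimizer, warmup_steps=1000)

    for it in range(2000):
        optimizer.step()
        scheduler.step()                              # rate ramps up for 1000 steps, then stays at 5e-4
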
class xmodaler.lr_scheduler.WarmupLinear(*, optimizer, min_lr, warmup_steps, t_total, last_epoch=-1)[source]

Bases: LambdaLR

Linear warmup and then linear decay. Linearly increases the learning-rate multiplier from 0 to 1 over warmup_steps training steps, then linearly decreases it from 1 to 0 over the remaining t_total - warmup_steps steps.

__init__(*, optimizer, min_lr, warmup_steps, t_total, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()
load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

lr_lambda(step)[source]
print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

step(epoch=None)
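
A sketch of WarmupLinear with illustrative numbers: warmup to the base rate over warmup_steps, then linear decay over the remaining t_total - warmup_steps steps; min_lr is assumed to act as a floor on the decayed rate.

    import torch
    from xmodaler.lr_scheduler import WarmupLinear

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=5e-4)
    scheduler = WarmupLinear(optimizer=optimizer, min_lr=0.0,
                             warmup_steps=1000, t_total=10000)

    for it in range(10000):
        optimizer.step()
        scheduler.step()
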
class xmodaler.lr_scheduler.WarmupCosine(*, optimizer, min_lr, warmup_steps, t_total, cycles=0.5, last_epoch=-1)[source]

Bases: LambdaLR

Linear warmup and then cosine decay. Linearly increases the learning-rate multiplier from 0 to 1 over warmup_steps training steps, then decreases it from 1 to 0 over the remaining t_total - warmup_steps steps following a cosine curve. With the default cycles=0.5 the decay spans half a cosine period; other values change how much of the cosine function is traversed after warmup.

__init__(*, optimizer, min_lr, warmup_steps, t_total, cycles=0.5, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()
load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

lr_lambda(step)[source]
print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

step(epoch=None)
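
A construction sketch of WarmupCosine with illustrative numbers. After the linear warmup the multiplier follows a cosine curve down toward 0 (min_lr is assumed to floor the decayed rate); the default cycles=0.5 corresponds to half a cosine period over the remaining t_total - warmup_steps steps. Stepping works exactly as in the WarmupLinear sketch above.

    import torch
    from xmodaler.lr_scheduler import WarmupCosine

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=5e-4)
    scheduler = WarmupCosine(optimizer=optimizer, min_lr=0.0,
                             warmup_steps=1000, t_total=10000, cycles=0.5)
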
class xmodaler.lr_scheduler.WarmupCosineWithHardRestarts(*, optimizer, warmup_steps, t_total, cycles=1.0, last_epoch=-1)[source]

Bases: LambdaLR

Linear warmup and then cosine cycles with hard restarts. Linearly increases the learning-rate multiplier from 0 to 1 over warmup_steps training steps. After warmup, the multiplier follows cycles cosine decays from 1 toward 0 over the remaining t_total - warmup_steps steps, restarting at the beginning of each cycle (hard restarts).

__init__(*, optimizer, warmup_steps, t_total, cycles=1.0, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()
load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

lr_lambda(step)[source]
print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

step(epoch=None)
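
A construction sketch with illustrative numbers: after the warmup, the multiplier runs cycles cosine decays, each falling from 1 toward 0 and then restarting abruptly, within the remaining t_total - warmup_steps steps. Stepping is the same per-iteration pattern shown for the other warmup schedulers.

    import torch
    from xmodaler.lr_scheduler import WarmupCosineWithHardRestarts

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=5e-4)
    scheduler = WarmupCosineWithHardRestarts(optimizer=optimizer, warmup_steps=1000,
                                             t_total=10000, cycles=2.0)
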
class xmodaler.lr_scheduler.WarmupMultiStepLR(*, optimizer, milestones, gamma=0.1, warmup_factor=0.3333333333333333, warmup_iters=500, warmup_method='linear', last_epoch=-1)[source]

Bases: _LRScheduler

__init__(*, optimizer, milestones, gamma=0.1, warmup_factor=0.3333333333333333, warmup_iters=500, warmup_method='linear', last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()[source]
load_state_dict(state_dict)

Loads the scheduler's state.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.

step(epoch=None)
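
A hedged construction sketch. The parameter names match the detectron2-style warmup multi-step schedule: the rate is warmed up over warmup_iters iterations (starting from warmup_factor times the base rate) and multiplied by gamma at each milestone iteration; this reading is inferred from the signature rather than stated in the docs.

    import torch
    from xmodaler.lr_scheduler import WarmupMultiStepLR

    optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.01)
    scheduler = WarmupMultiStepLR(
        optimizer=optimizer,
        milestones=[60000, 80000],                    # iterations at which the rate is scaled by gamma
        gamma=0.1,
        warmup_factor=1.0 / 3,
        warmup_iters=500,
        warmup_method="linear",
    )
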
class xmodaler.lr_scheduler.MultiStepLR(*, optimizer, milestones, gamma=0.1, last_epoch=-1)[source]

Bases: MultiStepLR

__init__(*, optimizer, milestones, gamma=0.1, last_epoch=-1)[source]
_get_closed_form_lr()[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()[source]
load_state_dict(state_dict)

Loads the scheduler's state.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.

step(epoch=None)
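
A brief sketch pairing MultiStepLR (rate multiplied by gamma at each milestone) with the documented state_dict()/load_state_dict() round trip used for checkpointing; the optimizer state must be saved and restored separately.

    import torch
    from xmodaler.lr_scheduler import MultiStepLR

    optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
    scheduler = MultiStepLR(optimizer=optimizer, milestones=[10, 20], gamma=0.1)

    state = scheduler.state_dict()                    # every attribute except the optimizer
    scheduler.load_state_dict(state)                  # restore later, e.g. when resuming training
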
class xmodaler.lr_scheduler.FixLR(*, optimizer, last_epoch=-1)[source]

Bases: LambdaLR

Fixed learning rate: the multiplier is held constant, so the optimizer's base learning rate is used throughout training.

__init__(*, optimizer, last_epoch=-1)[source]
_initial_step()

Initializes step counts and performs a step

classmethod from_config(cfg, optimizer, data_size)[source]
get_last_lr()

Returns the last learning rate computed by the current scheduler.

get_lr()
load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

lr_lambda(step)[source]
print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

step(epoch=None)
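
A closing sketch of FixLR: the lambda multiplier is presumably constant, so step() leaves the optimizer's base learning rate unchanged for the whole run. Values are illustrative.

    import torch
    from xmodaler.lr_scheduler import FixLR

    optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=2e-4)
    scheduler = FixLR(optimizer=optimizer)

    for _ in range(5):
        optimizer.step()
        scheduler.step()
    print(scheduler.get_last_lr())                    # expected to stay at [0.0002]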