SGD
- class torch.optim.SGD(params, lr=0.001, momentum=0, dampening=0, weight_decay=0, nesterov=False, *, maximize=False, foreach=None, differentiable=False, fused=None)
Implements stochastic gradient descent (optionally with momentum).
input: γ (lr), θ_0 (params), f(θ) (objective), λ (weight decay), μ (momentum), τ (dampening), nesterov, maximize

for t = 1 to … do
    g_t ← ∇_θ f_t(θ_{t-1})
    if λ ≠ 0
        g_t ← g_t + λ θ_{t-1}
    if μ ≠ 0
        if t > 1
            b_t ← μ b_{t-1} + (1 − τ) g_t
        else
            b_t ← g_t
        if nesterov
            g_t ← g_t + μ b_t
        else
            g_t ← b_t
    if maximize
        θ_t ← θ_{t-1} + γ g_t
    else
        θ_t ← θ_{t-1} − γ g_t
return θ_t
Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
momentum (float, optional) – momentum factor (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
dampening (float, optional) – dampening for momentum (default: 0)
nesterov (bool, optional) – enables Nesterov momentum (default: False)
maximize (bool, optional) – maximize the objective with respect to the params, instead of minimizing (default: False)
foreach (bool, optional) – whether foreach implementation of optimizer is used. If unspecified by the user (so foreach is None), we will try to use foreach over the for-loop implementation on CUDA, since it is usually significantly more performant. Note that the foreach implementation uses ~ sizeof(params) more peak memory than the for-loop version due to the intermediates being a tensorlist vs just one tensor. If memory is prohibitive, batch fewer parameters through the optimizer at a time or switch this flag to False (default: None)
differentiable (bool, optional) – whether autograd should occur through the optimizer step in training. Otherwise, the step() function runs in a torch.no_grad() context. Setting to True can impair performance, so leave it False if you don't intend to run autograd through this instance (default: False)
fused (bool, optional) – whether the fused implementation (CUDA only) is used. Currently, torch.float64, torch.float32, torch.float16, and torch.bfloat16 are supported. (default: None)
Note
The foreach and fused implementations are typically faster than the for-loop, single-tensor implementation. Thus, if the user has not specified BOTH flags (i.e., when foreach = fused = None), we will attempt defaulting to the foreach implementation when the tensors are all on CUDA. For example, if the user specifies True for fused but nothing for foreach, we will run the fused implementation. If the user specifies False for foreach but nothing for fused (or False for fused but nothing for foreach), we will run the for-loop implementation. If the user specifies True for both foreach and fused, we will prioritize fused over foreach, as it is typically faster. We attempt to use the fastest, so the hierarchy goes fused -> foreach -> for-loop. HOWEVER, since the fused implementation is relatively new, we want to give it sufficient bake-in time, so we default to foreach and NOT fused when the user has not specified either flag.
Example
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note
The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks.
Considering the specific case of Momentum, the update can be written as
v_{t+1} = μ * v_t + g_{t+1},
p_{t+1} = p_t − lr * v_{t+1},
where p, g, v and μ denote the parameters, gradient, velocity, and momentum respectively.
This is in contrast to Sutskever et al. and other frameworks which employ an update of the form
v_{t+1} = μ * v_t + lr * g_{t+1},
p_{t+1} = p_t − v_{t+1}.
The Nesterov version is analogously modified.
Moreover, the initial value of the momentum buffer is set to the gradient value at the first step. This is in contrast to some other frameworks that initialize it to all zeros.
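For illustration, a hand-computed single update comparing the two formulations (the values lr=0.1, μ=0.9, v=0.5, g=1.0, p=2.0 are hypothetical, and the two velocity buffers are not directly comparable since they absorb the learning rate differently):
>>> lr, mu, v, g, p = 0.1, 0.9, 0.5, 1.0, 2.0
>>> v_new = mu * v + g        # this module's velocity: 1.45
>>> p - lr * v_new            # this module's parameter update: 1.855
>>> v_new = mu * v + lr * g   # Sutskever-style velocity: 0.55
>>> p - v_new                 # Sutskever-style parameter update: 1.45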
- add_param_group(param_group)
Add a param group to the Optimizer's param_groups.
This can be useful when fine tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.
- Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
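For illustration, a minimal sketch of unfreezing part of a model during fine-tuning (model.base and model.classifier are hypothetical submodules, not part of the API):
>>> optimizer = torch.optim.SGD([{'params': model.base.parameters()}], lr=0.01, momentum=0.9)
>>> # later in training, make the classifier head trainable with its own learning rate
>>> optimizer.add_param_group({'params': model.classifier.parameters(), 'lr': 0.001})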
- load_state_dict(state_dict)
Loads the optimizer state.
- Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
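For illustration, a minimal save/restore sketch (the file name and the 'optimizer' key are assumptions about how the checkpoint was written):
>>> torch.save({'optimizer': optimizer.state_dict()}, 'checkpoint.pth')
>>> checkpoint = torch.load('checkpoint.pth')
>>> optimizer.load_state_dict(checkpoint['optimizer'])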
- register_load_state_dict_post_hook(hook, prepend=False)
Register a load_state_dict post-hook which will be called after load_state_dict() is called. It should have the following signature:
hook(optimizer) -> None
The optimizer argument is the optimizer instance being used. The hook will be called with argument self after calling load_state_dict on self. The registered hook can be used to perform post-processing after load_state_dict has loaded the state_dict.
- Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided post hook will be fired before all the already registered post-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
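For illustration, a minimal post-hook sketch (saved_state is a hypothetical dict from a prior state_dict() call, and the printed field is only an example of post-processing):
>>> def post_hook(optimizer):
...     # runs after load_state_dict(); e.g. inspect the restored learning rate
...     print(optimizer.param_groups[0]['lr'])
>>> handle = optimizer.register_load_state_dict_post_hook(post_hook)
>>> optimizer.load_state_dict(saved_state)  # the hook fires after the state is loaded
>>> handle.remove()  # detach the hook once it is no longer needed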
- register_load_state_dict_pre_hook(hook, prepend=False)
Register a load_state_dict pre-hook which will be called before load_state_dict() is called. It should have the following signature:
hook(optimizer, state_dict) -> state_dict or None
The optimizer argument is the optimizer instance being used and the state_dict argument is a shallow copy of the state_dict the user passed in to load_state_dict. The hook may modify the state_dict in place or optionally return a new one. If a state_dict is returned, it will be the one loaded into the optimizer.
The hook will be called with arguments self and state_dict before calling load_state_dict on self. The registered hook can be used to perform pre-processing before the load_state_dict call is made.
- Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided pre hook will be fired before all the already registered pre-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
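For illustration, a minimal pre-hook sketch that rewrites the incoming state before it is loaded (saved_state and the chosen lr value are assumptions for the example):
>>> def pre_hook(optimizer, state_dict):
...     # state_dict is a shallow copy of what was passed to load_state_dict()
...     for group in state_dict['param_groups']:
...         group['lr'] = 0.05  # illustrative override
...     return state_dict  # the returned dict is what gets loaded
>>> handle = optimizer.register_load_state_dict_pre_hook(pre_hook)
>>> optimizer.load_state_dict(saved_state)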
- register_state_dict_post_hook(hook, prepend=False)
Register a state dict post-hook which will be called after state_dict() is called. It should have the following signature:
hook(optimizer, state_dict) -> state_dict or None
The hook will be called with arguments self and state_dict after generating a state_dict on self. The hook may modify the state_dict in place or optionally return a new one. The registered hook can be used to perform post-processing on the state_dict before it is returned.
- Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided post hook will be fired before all the already registered post-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
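For illustration, a minimal post-hook sketch that edits the generated state dict before it is returned (the extra key is a hypothetical piece of metadata, not a standard entry):
>>> def post_hook(optimizer, state_dict):
...     # runs after state_dict() has been generated
...     state_dict['note'] = 'checkpoint taken after warmup'  # hypothetical metadata
...     return state_dict
>>> handle = optimizer.register_state_dict_post_hook(post_hook)
>>> sd = optimizer.state_dict()  # sd now carries the extra 'note' entry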
- register_state_dict_pre_hook(hook, prepend=False)
Register a state dict pre-hook which will be called before state_dict() is called. It should have the following signature:
hook(optimizer) -> None
The optimizer argument is the optimizer instance being used. The hook will be called with argument self before calling state_dict on self. The registered hook can be used to perform pre-processing before the state_dict call is made.
- Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided pre hook will be fired before all the already registered pre-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
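For illustration, a minimal pre-hook sketch that runs just before the state dict is generated (the printed message is only an example of pre-processing):
>>> def pre_hook(optimizer):
...     print('capturing state for', len(optimizer.param_groups), 'param group(s)')
>>> handle = optimizer.register_state_dict_pre_hook(pre_hook)
>>> sd = optimizer.state_dict()  # the hook fires first, then the state is returned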
- register_step_post_hook(hook)
Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The optimizer argument is the optimizer instance being used.
- Parameters
hook (Callable) – The user defined hook to be registered.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
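For illustration, a minimal post-hook sketch that counts completed steps (the counter dict is an assumption for the example):
>>> step_count = {'n': 0}
>>> def post_hook(optimizer, args, kwargs):
...     # runs after each optimizer.step() call
...     step_count['n'] += 1
>>> handle = optimizer.register_step_post_hook(post_hook)
>>> optimizer.step()  # step_count['n'] is now 1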
- register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
- Parameters
hook (Callable) – The user defined hook to be registered.
- Returns
a handle that can be used to remove the added hook by calling handle.remove()
- Return type
torch.utils.hooks.RemovableHandle
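For illustration, a minimal pre-hook sketch that inspects the call before the step runs (returning None leaves args and kwargs unchanged):
>>> def pre_hook(optimizer, args, kwargs):
...     print('stepping with lr', optimizer.param_groups[0]['lr'])
>>> handle = optimizer.register_step_pre_hook(pre_hook)
>>> optimizer.step()  # prints the message, then performs the step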
- state_dict()
Returns the state of the optimizer as a dict.
It contains two entries:
- state: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. state is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.
- param_groups: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group.
NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group params (int IDs) and the optimizer param_groups (actual nn.Parameters) in order to match state WITHOUT additional verification.
A returned state dict might look something like:
{
    'state': {
        0: {'momentum_buffer': tensor(...), ...},
        1: {'momentum_buffer': tensor(...), ...},
        2: {'momentum_buffer': tensor(...), ...},
        3: {'momentum_buffer': tensor(...), ...}
    },
    'param_groups': [
        {
            'lr': 0.01,
            'weight_decay': 0,
            ...
            'params': [0]
        },
        {
            'lr': 0.001,
            'weight_decay': 0.5,
            ...
            'params': [1, 2, 3]
        }
    ]
}
- step(closure=None)
Performs a single optimization step.
- Parameters
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
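For illustration, a minimal closure sketch following the earlier example (model, input, target and loss_fn are assumed to be defined as above):
>>> def closure():
...     optimizer.zero_grad()
...     loss = loss_fn(model(input), target)
...     loss.backward()
...     return loss
>>> loss = optimizer.step(closure)  # re-evaluates the model and returns the loss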
- zero_grad(set_to_none=True)
Resets the gradients of all optimized torch.Tensors.
- Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).