>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
buffers(recurse=True)
Return an iterator over module buffers.
Parameters
recurse (bool) – if True, then yields buffers of this module
and all submodules. Otherwise, yields only buffers that
are direct members of this module.
Yields
torch.Tensor – module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20,)
<class 'torch.Tensor'> (20, 1, 5, 5)
Return type
Iterator[Tensor]
compile(*args, **kwargs)
Compile this Module’s forward using torch.compile().
This Module’s __call__ method is compiled and all arguments are passed as-is
to torch.compile().
See torch.compile() for details on the arguments for this function.
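Example (a minimal sketch; compilation happens lazily on the first call, and any arguments you might pass are simply forwarded to torch.compile()):
>>> # xdoctest: +SKIP("requires a torch.compile backend")
>>> net = nn.Sequential(nn.Linear(2, 2), nn.ReLU())
>>> net.compile()  # e.g. net.compile(mode="reduce-overhead") also works
>>> out = net(torch.randn(4, 2))  # first call triggers compilation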
cuda(device=None)
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So
it should be called before constructing the optimizer if the module will
live on GPU while being optimized.
This method modifies the module in-place.
Parameters
device (int, optional) – if specified, all parameters will be
copied to that device
Returns
self
Return type
Module
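Example (a minimal sketch of the ordering note above; the SGD optimizer is just one choice):
>>> # xdoctest: +SKIP("requires CUDA")
>>> model = nn.Linear(2, 2)
>>> model = model.cuda()  # move parameters and buffers first...
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # ...then build the optimizer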
eval()
Set the module in evaluation mode.
This has an effect only on certain modules. See the documentation of
particular modules for details of their behavior in training/evaluation
mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See Locally disabling gradient computation for a comparison between
.eval() and several similar mechanisms that may be confused with it.
Returns
self
Return type
Module
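Example (a small illustration using Dropout, one of the modules affected by the mode flag):
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Dropout(p=0.5))
>>> net = net.eval()   # dropout layers now act as identity
>>> net.training
False
>>> net = net.train()  # restore training-mode behavior
>>> net.training
True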
extra_repr()
Set the extra representation of the module.
To print customized extra information, you should re-implement
this method in your own modules. Both single-line and multi-line
strings are acceptable.
Return type
str
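Example (a sketch of a custom module overriding extra_repr; the Scale module and its factor attribute are illustrative):
>>> class Scale(nn.Module):
...     def __init__(self, factor):
...         super().__init__()
...         self.factor = factor
...     def forward(self, x):
...         return x * self.factor
...     def extra_repr(self):
...         return f'factor={self.factor}'  # shown inside the module's repr
>>> print(Scale(2.0))
Scale(factor=2.0)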
forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Although the recipe for the forward pass needs to be defined within
this function, one should call the Module instance afterwards
instead of this, since the former takes care of running the
registered hooks while the latter silently ignores them.
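For example (assuming model and x are defined), the two calls below compute the same thing, but only the first runs registered hooks:
>>> # xdoctest: +SKIP("undefined vars")
>>> y = model(x)          # preferred: runs registered hooks
>>> y = model.forward(x)  # discouraged: silently skips hooks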
get_buffer(target)
Return the buffer given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed
explanation of this method’s functionality as well as how to
correctly specify target.
Parameters
target (str) – The fully-qualified string name of the buffer
to look for. (See get_submodule for how to specify a
fully-qualified string.)
Returns
The buffer referenced by target
Return type
torch.Tensor
Raises
AttributeError – If the target string references an invalid
path or resolves to something that is not a buffer
get_extra_state()
Return any extra state to include in the module’s state_dict.
Implement this and a corresponding set_extra_state() for your module
if you need to store extra state. This function is called when building the
module’s state_dict().
Note that extra state should be picklable to ensure working serialization
of the state_dict. We only provide backwards compatibility guarantees
for serializing Tensors; other objects may break backwards compatibility if
their serialized pickled form changes.
Returns
Any extra state to store in the module’s state_dict
Return type
object
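Example (a minimal sketch of a get_extra_state()/set_extra_state() pair; the Counter module and its step attribute are illustrative):
>>> class Counter(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.step = 0  # plain Python state, not a parameter or buffer
...     def get_extra_state(self):
...         return {'step': self.step}  # picklable extra state
...     def set_extra_state(self, state):
...         self.step = state['step']
>>> src = Counter()
>>> src.step = 7
>>> dst = Counter()
>>> dst.load_state_dict(src.state_dict())
<All keys matched successfully>
>>> dst.step
7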
get_parameter(target)
Return the parameter given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed
explanation of this method’s functionality as well as how to
correctly specify target.
Parameters
target (str) – The fully-qualified string name of the Parameter
to look for. (See get_submodule for how to specify a
fully-qualified string.)
Returns
The Parameter referenced by target
Return type
torch.nn.Parameter
Raises
AttributeError – If the target string references an invalid
path or resolves to something that is not an nn.Parameter
get_submodule(target)
Return the submodule given by target if it exists, otherwise throw an error.
For example, let’s say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested
submodule net_b, which itself has two submodules net_c
and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we
would call get_submodule("net_b.linear"). To check whether
we have the conv submodule, we would call
get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree
of module nesting in target. A query against
named_modules achieves the same result, but it is O(N) in
the number of transitive modules. So, for a simple check to see
if some submodule exists, get_submodule should always be used.
Parameters
target (str) – The fully-qualified string name of the submodule
to look for. (See above example for how to specify a
fully-qualified string.)
Returns
The submodule referenced by target
Return type
torch.nn.Module
Raises
AttributeError – If the target string references an invalid
path or resolves to something that is not an nn.Module
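Example (a sketch that builds the nested module A described above; the attribute names mirror the diagram):
>>> class A(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.net_b = nn.Module()
...         self.net_b.net_c = nn.Module()
...         self.net_b.net_c.conv = nn.Conv2d(16, 33, 3, stride=2)
...         self.net_b.linear = nn.Linear(100, 200)
>>> a = A()
>>> a.get_submodule("net_b.net_c.conv")
Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))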
ipu(device=None)
Move all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So
it should be called before constructing the optimizer if the module will
live on IPU while being optimized.
This method modifies the module in-place.
Parameters
device (int, optional) – if specified, all parameters will be
copied to that device
Returns
self
Return type
Module
load_state_dict(state_dict, strict=True, assign=False)
Copy parameters and buffers from state_dict into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match
the keys returned by this module’s state_dict() function.
Warning
If assign is True the optimizer must be created after
the call to load_state_dict unless
get_swap_module_params_on_conversion() is True.
Parameters
state_dict (dict) – a dict containing parameters and
persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys
in state_dict match the keys returned by this module’s
state_dict() function. Default: True
assign (bool, optional) – When False, the properties of the tensors
in the current module are preserved while when True, the
properties of the Tensors in the state dict are preserved. The only
exception is the requires_grad field of Parameters, for which the
value from the module is preserved. Default: False
Returns
NamedTuple with missing_keys and unexpected_keys fields:
missing_keys is a list of str containing any keys that are expected
by this module but missing from the provided state_dict.
unexpected_keys is a list of str containing the keys that are not
expected by this module but present in the provided state_dict.
Return type
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key
exists in state_dict, load_state_dict() will raise a RuntimeError.
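Example (a small sketch showing the returned NamedTuple when strict=False; the layer shapes are illustrative):
>>> src = nn.Linear(2, 2)
>>> dst = nn.Sequential(nn.Linear(2, 2))
>>> result = dst.load_state_dict(src.state_dict(), strict=False)
>>> result.missing_keys      # keys dst expects but src did not provide
['0.weight', '0.bias']
>>> result.unexpected_keys   # keys src provided but dst does not expect
['weight', 'bias']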
modules()
Return an iterator over all modules in the network.
Yields
Module – a module in the network
Duplicate modules are returned only once. In the following
example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
... print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
Return type
Iterator[Module]
named_buffers(prefix='', recurse=True, remove_duplicate=True)
Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool, optional) – if True, then yields buffers of this module
and all submodules. Otherwise, yields only buffers that
are direct members of this module. Defaults to True.
remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.
Yields
(str, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
Return type
Iterator[Tuple[str, Tensor]]
named_children()
Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
Yields
(str, Module) – Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
Return type
Iterator[Tuple[str, Module]]
named_modules(memo=None, prefix='', remove_duplicate=True)
Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
Parameters
memo (Optional[Set[Module]]) – a memo to store the set of modules already added to the result
prefix (str) – a prefix that will be added to the name of the module
remove_duplicate (bool) – whether to remove the duplicated module instances in the result or not
Yields
(str, Module) – Tuple of name and module
Duplicate modules are returned only once. In the following
example, l
will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
... print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True, remove_duplicate=True)
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module
and all submodules. Otherwise, yields only parameters that
are direct members of this module.
remove_duplicate (bool, optional) – whether to remove the duplicated
parameters in the result. Defaults to True.
Yields
(str, Parameter) – Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
Return type
Iterator[Tuple[str, Parameter]]
parameters(recurse=True)
Return an iterator over module parameters.
This is typically passed to an optimizer.
Parameters
recurse (bool) – if True, then yields parameters of this module
and all submodules. Otherwise, yields only parameters that
are direct members of this module.
Yields
Parameter – module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20,)
<class 'torch.Tensor'> (20, 1, 5, 5)
Return type
Iterator[Parameter]
register_backward_hook(hook)
Register a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and
the behavior of this function will change in future versions.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
register_buffer(name, tensor, persistent=True)
Add a buffer to the module.
This is typically used to register a buffer that should not be
considered a model parameter. For example, BatchNorm’s running_mean
is not a parameter, but is part of the module’s state. Buffers, by
default, are persistent and will be saved alongside parameters. This
behavior can be changed by setting persistent to False. The
only difference between a persistent buffer and a non-persistent buffer
is that the latter will not be a part of this module’s state_dict.
Buffers can be accessed as attributes using given names.
Parameters
name (str) – name of the buffer. The buffer can be accessed
from this module using the given name
tensor (Tensor or None) – buffer to be registered. If None, then operations
that run on buffers, such as cuda, are ignored. If None,
the buffer is not included in the module’s state_dict.
persistent (bool) – whether the buffer is part of this module’s state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
Return type
None
register_forward_hook(hook, *, prepend=False, with_kwargs=False, always_call=False)
Register a forward hook on the module.
The hook will be called every time after forward() has computed an output.
If with_kwargs is False or not specified, the input contains only
the positional arguments given to the module. Keyword arguments won’t be
passed to the hooks and only to the forward. The hook can modify the
output. It can modify the input inplace but it will not have an effect on
forward since this is called after forward() is called. The hook
should have the following signature:
hook(module, args, output) -> None or modified output
If with_kwargs is True, the forward hook will be passed the
kwargs given to the forward function and be expected to return the
output possibly modified. The hook should have the following signature:
hook(module, args, kwargs, output) -> None or modified output
Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided hook will be fired
before all existing forward hooks on this
torch.nn.modules.Module. Otherwise, the provided
hook will be fired after all existing forward hooks on
this torch.nn.modules.Module. Note that global
forward hooks registered with
register_module_forward_hook() will fire before all hooks
registered by this method.
Default: False
with_kwargs (bool) – If True, the hook will be passed the
kwargs given to the forward function.
Default: False
always_call (bool) – If True the hook will be run regardless of
whether an exception is raised while calling the Module.
Default: False
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
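Example (a minimal sketch; the log_shape hook is illustrative):
>>> def log_shape(module, args, output):
...     print(type(module).__name__, tuple(output.shape))
>>> lin = nn.Linear(2, 3)
>>> handle = lin.register_forward_hook(log_shape)
>>> _ = lin(torch.randn(4, 2))
Linear (4, 3)
>>> handle.remove()  # detach the hook once it is no longer needed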
register_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)
Register a forward pre-hook on the module.
The hook will be called every time before forward() is invoked.
If with_kwargs is false or not specified, the input contains only
the positional arguments given to the module. Keyword arguments won’t be
passed to the hooks and only to the forward. The hook can modify the
input. The user can either return a tuple or a single modified value in the
hook. We will wrap the value into a tuple if a single value is returned
(unless that value is already a tuple). The hook should have the
following signature:
hook(module, args) -> None or modified input
If with_kwargs is true, the forward pre-hook will be passed the
kwargs given to the forward function. And if the hook modifies the
input, both the args and kwargs should be returned. The hook should have
the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
Parameters
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before
all existing forward_pre hooks on this
torch.nn.modules.Module. Otherwise, the provided
hook will be fired after all existing forward_pre hooks
on this torch.nn.modules.Module. Note that global
forward_pre hooks registered with
register_module_forward_pre_hook() will fire before all
hooks registered by this method.
Default: False
with_kwargs (bool) – If true, the hook will be passed the kwargs
given to the forward function.
Default: False
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
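Example (a sketch of an input-modifying pre-hook; scale_input is illustrative):
>>> def scale_input(module, args):
...     return (args[0] * 2,)  # returned tuple replaces the positional args
>>> lin = nn.Linear(2, 2, bias=False)
>>> handle = lin.register_forward_pre_hook(scale_input)
>>> x = torch.ones(1, 2)
>>> torch.allclose(lin(x), x @ lin.weight.T * 2)  # hook doubled the input
True
>>> handle.remove()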
register_full_backward_hook(hook, prepend=False)
Register a backward hook on the module.
The hook will be called every time the gradients with respect to a module
are computed, i.e. the hook will execute if and only if the gradients with
respect to module outputs are computed. The hook should have the following
signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients
with respect to the inputs and outputs respectively. The hook should
not modify its arguments, but it can optionally return a new gradient with
respect to the input that will be used in place of grad_input in
subsequent computations. grad_input will only correspond to the inputs given
as positional arguments and all kwarg arguments are ignored. Entries
in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will
receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
of each Tensor returned by the Module’s forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and
will raise an error.
Parameters
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before
all existing backward hooks on this
torch.nn.modules.Module. Otherwise, the provided
hook will be fired after all existing backward hooks on
this torch.nn.modules.Module. Note that global
backward hooks registered with
register_module_full_backward_hook() will fire before
all hooks registered by this method.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
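Example (a minimal sketch; report_grad is illustrative):
>>> def report_grad(module, grad_input, grad_output):
...     print('grad_output[0] shape:', tuple(grad_output[0].shape))
>>> lin = nn.Linear(2, 3)
>>> handle = lin.register_full_backward_hook(report_grad)
>>> lin(torch.randn(4, 2)).sum().backward()
grad_output[0] shape: (4, 3)
>>> handle.remove()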
register_full_backward_pre_hook(hook, prepend=False)
Register a backward pre-hook on the module.
The hook will be called every time the gradients for the module are computed.
The hook should have the following signature:
hook(module, grad_output) -> tuple[Tensor] or None
The grad_output is a tuple. The hook should
not modify its arguments, but it can optionally return a new gradient with
respect to the output that will be used in place of grad_output in
subsequent computations. Entries in grad_output will be None for
all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will
receive a view of each Tensor passed to the Module. Similarly the caller will receive a view
of each Tensor returned by the Module’s forward function.
Warning
Modifying inputs inplace is not allowed when using backward hooks and
will raise an error.
Parameters
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before
all existing backward_pre hooks on this
torch.nn.modules.Module. Otherwise, the provided
hook will be fired after all existing backward_pre hooks
on this torch.nn.modules.Module. Note that global
backward_pre hooks registered with
register_module_full_backward_pre_hook() will fire before
all hooks registered by this method.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
register_load_state_dict_post_hook(hook)
Register a post hook to be run after module’s load_state_dict is called.
It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered
on, and the incompatible_keys argument is a NamedTuple consisting
of attributes missing_keys and unexpected_keys. missing_keys
is a list of str containing the missing keys and
unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with
strict=True are affected by modifications the hook makes to
missing_keys or unexpected_keys, as expected. Additions to either
set of keys will result in an error being thrown when strict=True, and
clearing out both missing and unexpected keys will avoid an error.
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
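Example (a sketch showing how clearing missing_keys suppresses the strict=True error described above; ignore_missing is illustrative):
>>> def ignore_missing(module, incompatible_keys):
...     incompatible_keys.missing_keys.clear()  # treat missing keys as acceptable
>>> net = nn.Linear(2, 2)
>>> handle = net.register_load_state_dict_post_hook(ignore_missing)
>>> _ = net.load_state_dict({}, strict=True)  # no RuntimeError: the hook cleared missing_keys
>>> handle.remove()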
register_parameter(name, param)
Add a parameter to the module.
The parameter can be accessed as an attribute using given name.
Parameters
name (str) – name of the parameter. The parameter can be accessed
from this module using the given name
param (Parameter or None) – parameter to be added to the module. If
None, then operations that run on parameters, such as cuda,
are ignored. If None, the parameter is not included in the
module’s state_dict.
Return type
None
register_state_dict_pre_hook(hook)
Register a pre-hook for the state_dict() method.
These hooks will be called with arguments: self, prefix,
and keep_vars before calling state_dict on self. The registered
hooks can be used to perform pre-processing before the state_dict
call is made.
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in this module.
This method sets the parameters’ requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning
or training parts of a model individually (e.g., GAN training).
See Locally disabling gradient computation for a comparison between
.requires_grad_() and several similar mechanisms that may be confused with it.
Parameters
requires_grad (bool) – whether autograd should record operations on
parameters in this module. Default: True.
Returns
self
Return type
Module
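Example (a sketch of freezing a submodule for finetuning; the backbone/head split is illustrative):
>>> backbone = nn.Linear(4, 4)
>>> head = nn.Linear(4, 2)
>>> _ = backbone.requires_grad_(False)  # freeze the backbone
>>> all(not p.requires_grad for p in backbone.parameters())
True
>>> all(p.requires_grad for p in head.parameters())
True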
set_extra_state(state)
Set extra state contained in the loaded state_dict.
This function is called from load_state_dict() to handle any extra state
found within the state_dict. Implement this function and a corresponding
get_extra_state() for your module if you need to store extra state within its
state_dict.
Parameters
state (dict) – Extra state from the state_dict
Return type
None
state_dict(*args, destination=None, prefix='', keep_vars=False)
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are
included. Keys are corresponding parameter and buffer names.
Parameters and buffers set to None are not included.
The returned object is a shallow copy. It contains references
to the module’s parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for
destination, prefix and keep_vars in order. However,
this is being deprecated and keyword arguments will be enforced in
future releases.
Warning
Please avoid the use of argument destination as it is not
designed for end-users.
Parameters
destination (dict, optional) – If provided, the state of module will
be updated into the dict and the same object is returned.
Otherwise, an OrderedDict will be created and returned.
Default: None.
prefix (str, optional) – a prefix added to parameter and buffer
names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors
returned in the state dict are detached from autograd. If it’s
set to True, detaching will not be performed.
Default: False.
Returns
a dictionary containing a whole state of the module
Return type
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)
Move and/or cast the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
Its signature is similar to torch.Tensor.to(), but only accepts
floating point or complex dtypes. In addition, this method will
only cast the floating point or complex parameters and buffers to dtype
(if given). The integral parameters and buffers will be moved to
device, if that is given, but with dtypes unchanged. When
non_blocking is set, it tries to convert/move asynchronously
with respect to the host if possible, e.g., moving CPU Tensors with
pinned memory to CUDA devices.
See below for examples.
This method modifies the module in-place.
Parameters
device (torch.device) – the desired device of the parameters
and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of
the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired
dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory
format for 4D parameters and buffers in this module (keyword
only argument)
Returns
self
Return type
Module
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
to_empty(*, device, recurse=True)
Move the parameters and buffers to the specified device without copying storage.
Parameters
device (torch.device) – The desired device of the parameters
and buffers in this module.
recurse (bool) – Whether parameters and buffers of submodules should
be recursively moved to the specified device.
Returns
self
Return type
Module
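Example (a sketch of the typical meta-device workflow this method supports; the layer size is illustrative):
>>> # xdoctest: +SKIP("illustrative")
>>> with torch.device('meta'):
...     net = nn.Linear(1000, 1000)  # shapes only, no storage allocated
>>> net = net.to_empty(device='cpu')  # allocate (uninitialized) storage on CPU
>>> # parameters now live on CPU but hold uninitialized values;
>>> # re-initialize them (e.g. with init functions) before use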
train(mode=True)
Set the module in training mode.
This has an effect only on certain modules. See the documentation of
particular modules for details of their behavior in training/evaluation
mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Parameters
mode (bool) – whether to set training mode (True) or evaluation
mode (False). Default: True.
Returns
self
Return type
Module
xpu(device=None)
Move all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So
it should be called before constructing the optimizer if the module will
live on XPU while being optimized.
This method modifies the module in-place.
Parameters
device (int, optional) – if specified, all parameters will be
copied to that device
Returns
self
Return type
Module
zero_grad(set_to_none=True)
Reset gradients of all model parameters.
See the similar function under torch.optim.Optimizer for more context.
Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None.
See torch.optim.Optimizer.zero_grad() for details.
Return type
None
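Example (a minimal sketch contrasting the two set_to_none behaviors):
>>> net = nn.Linear(2, 2)
>>> net(torch.randn(1, 2)).sum().backward()
>>> net.zero_grad()                    # default: grads become None
>>> net.weight.grad is None
True
>>> net(torch.randn(1, 2)).sum().backward()
>>> net.zero_grad(set_to_none=False)   # keep zero-filled gradient tensors instead
>>> net.weight.grad.abs().sum().item()
0.0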