Commit 715bb2c

Update optimizer
1 parent eb9ecc3 commit 715bb2c

File tree

1 file changed: +27 -6 lines changed

docs/source/common/optimizers.rst

Lines changed: 27 additions & 6 deletions
@@ -59,11 +59,8 @@ Here is a minimal example of manual optimization.
 From 1.2, it is left to the user's expertise.
 
 .. tip::
-   * ``self.optimizers()`` will return :class:`~pytorch_lightning.core.optimizer.LightningOptimizer` objects. You can
-     access your own optimizer with ``optimizer.optimizer``. However, if you use your own optimizer to perform a step,
-     Lightning won't be able to support accelerators and precision for you.
-   * Be careful where you call ``optimizer.zero_grad()``, or your model won't converge.
-     It is good practice to call ``optimizer.zero_grad()`` before ``self.manual_backward(loss)``.
+   Be careful where you call ``optimizer.zero_grad()``, or your model won't converge.
+   It is good practice to call ``optimizer.zero_grad()`` before ``self.manual_backward(loss)``.
 
 -----
 
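For context, the retained tip applies to manual optimization, where the user drives the update loop. A minimal sketch of that flow (not part of this commit; the model and loss are placeholders) calling ``optimizer.zero_grad()`` before ``self.manual_backward(loss)``:

import pytorch_lightning as pl
import torch
from torch import nn


class ManualOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # opt into manual optimization (1.2+)
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()

        # Zero gradients before running backward, as the tip recommends.
        opt.zero_grad()
        loss = self.layer(batch).sum()  # placeholder loss for illustration
        self.manual_backward(loss)
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
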
@@ -339,6 +336,30 @@ Here is an example using a closure function.
 
 ------
 
+Access your own optimizer [manual]
+----------------------------------
+``optimizer`` is a :class:`~pytorch_lightning.core.optimizer.LightningOptimizer` object wrapping your own optimizer
+configured in your :meth:`~pytorch_lightning.LightningModule.configure_optimizers`. You can access your own optimizer
+with ``optimizer.optimizer``. However, if you use your own optimizer to perform a step, Lightning won't be able to
+support accelerators and precision for you.
+
+.. testcode:: python
+
+    def __init__(self):
+        super().__init__()
+        self.automatic_optimization = False
+
+    def training_step(batch, batch_idx):
+        optimizer = self.optimizers()
+
+        # `optimizer` is a `LightningOptimizer` wrapping the optimizer.
+        # To access it, do the following.
+        # However, it won't work on TPU, AMP, etc...
+        optimizer = optimizer.optimizer
+        ...
+
+-----
+
 Automatic optimization
 ======================
 With Lightning, most users don't have to think about when to call ``.zero_grad()``, ``.backward()`` and ``.step()``
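As a usage note (not part of this commit), the new section's recommendation can be read as: step through the ``LightningOptimizer`` wrapper when possible, and only unwrap it when you explicitly accept losing accelerator and precision handling. A hedged sketch, where ``compute_loss`` is a hypothetical helper:

import torch


def training_step(self, batch, batch_idx):
    optimizer = self.optimizers()  # LightningOptimizer wrapper

    # Preferred: step through the wrapper so Lightning can handle
    # accelerators and precision for you.
    optimizer.zero_grad()
    loss = self.compute_loss(batch)  # hypothetical helper, not from the docs
    self.manual_backward(loss)
    optimizer.step()

    # Also possible, but bypasses Lightning's accelerator/precision handling:
    raw_optimizer = optimizer.optimizer  # the underlying torch.optim optimizer
    assert isinstance(raw_optimizer, torch.optim.Optimizer)
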
@@ -583,7 +604,7 @@ support accelerators and precision for you.
 
     # `optimizer` is a `LightningOptimizer` wrapping the optimizer.
     # To access it, do the following.
-    # However, It won't work on TPU, AMP, etc...
+    # However, it won't work on TPU, AMP, etc...
     def optimizer_step(
         self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure,
         on_tpu=False, using_native_amp=False, using_lbfgs=False,
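The last hunk ends mid-signature. For orientation only (not part of this commit), an override of this hook in that era of the API typically completes by stepping through the wrapped optimizer and passing the closure along, roughly:

def optimizer_step(
    self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure,
    on_tpu=False, using_native_amp=False, using_lbfgs=False,
):
    # `optimizer` is the LightningOptimizer wrapper; passing the closure
    # lets Lightning run the forward/backward of training_step before the
    # parameter update.
    optimizer.step(closure=optimizer_closure)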
