Use PrecisionType enum instead of checking raw values (#10704)
* use precision type
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
docs/source/extensions/logging.rst: 81 additions & 88 deletions
@@ -14,45 +14,78 @@
 Logging
 #######
 
-Lightning supports the most popular logging frameworks (TensorBoard, Comet, etc...).
+Supported Loggers
+=================
+
+The following are loggers we support:
 
-By default, Lightning uses `PyTorch TensorBoard <https://pytorch.org/docs/stable/tensorboard.html>`__ logging under the hood, and stores the logs to a directory (by default in ``lightning_logs/``).
+.. note::
+    The following loggers will normally plot an additional chart (**global_step VS epoch**).
+
+.. note::
+    Depending on the loggers you use, there might be some additional charts.
+
+.. currentmodule:: pytorch_lightning.loggers
+
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+    :template: classtemplate.rst
+
+    CometLogger
+    CSVLogger
+    MLFlowLogger
+    NeptuneLogger
+    TensorBoardLogger
+    TestTubeLogger
+    WandbLogger
+
+
+By default, Lightning uses ``TensorBoard`` logger under the hood, and stores the logs to a directory (by default in ``lightning_logs/``).
 
 .. testcode::
 
     from pytorch_lightning import Trainer
 
-    # Automatically logs to a directory
-    # (by default ``lightning_logs/``)
+    # Automatically logs to a directory (by default lightning_logs/)
     trainer = Trainer()
 
 To see your logs:
 
 .. code-block:: bash
 
+    # Install tensorboard
+    pip install tensorboard
     tensorboard --logdir=lightning_logs/
 
+To run tensorboard in a jupyter notebook environment, use the following in a jupyter cell:
+
+.. code-block:: bash
+
+    %reload_ext tensorboard
+    %tensorboard --logdir=lightning_logs/
+
 You can also pass a custom Logger to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
 
 .. testcode::
 
     from pytorch_lightning import loggers as pl_loggers
@@ ... @@
-The :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method has a few options:
-
-* `on_step`: Logs the metric at the current step. Defaults to `True` in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_step`, and :func:`~pytorch_lightning.core.lightning.LightningModule.training_step_end`.
-
-* `on_epoch`: Automatically accumulates and logs at the end of the epoch. Defaults to True anywhere in validation or test loops, and in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_epoch_end`.
-
-* `prog_bar`: Logs to the progress bar.
-
-* `logger`: Logs to the logger like Tensorboard, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
+The :meth:`~pytorch_lightning.core.lightning.LightningModule.log` method has a few options:
 
+* ``on_step``: Logs the metric at the current step.
+* ``on_epoch``: Automatically accumulates and logs at the end of the epoch.
+* ``prog_bar``: Logs to the progress bar.
+* ``logger``: Logs to the logger like ``Tensorboard``, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
 
 .. note::
 
     - Setting ``on_epoch=True`` will cache all your logged values during the full training epoch and perform a
      reduction in ``on_train_epoch_end``. We recommend using `TorchMetrics <https://torchmetrics.readthedocs.io/>`_, when working with custom reduction.
 
    - Setting both ``on_step=True`` and ``on_epoch=True`` will create two keys per metric you log with
-      suffix ``_step`` and ``_epoch``, respectively. You can refer to these keys e.g. in the `monitor`
+      suffix ``_step`` and ``_epoch`` respectively. You can refer to these keys e.g. in the `monitor`
       argument of :class:`~pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint` or in the graphs plotted to the logger of your choice.
 
 
-If your work requires to log in an unsupported function, please open an issue with a clear description of why it is blocking you.
+If your work requires to log in an unsupported method, please open an issue with a clear description of why it is blocking you.
 
 
-Manual logging
-==============
-If you want to log anything that is not a scalar, like histograms, text, images, etc... you may need to use the logger object directly.
+Manual logging Non-Scalar Artifacts
+===================================
+If you want to log anything that is not a scalar, like histograms, text, images, etc. you may need to use the logger object directly.
 
 .. code-block:: python
 
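Taken together, the four flags above usually appear in a single ``self.log`` call. A minimal sketch of how they combine (``compute_loss`` is a hypothetical helper, not part of this commit):

.. code-block:: python

    import pytorch_lightning as pl


    class LitModel(pl.LightningModule):
        def training_step(self, batch, batch_idx):
            loss = self.compute_loss(batch)  # hypothetical loss computation
            # logged every step, accumulated per epoch, shown in the
            # progress bar, and sent to the attached logger
            self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
            return loss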
@@ -136,14 +166,6 @@ If you want to log anything that is not a scalar, like histograms, text, images,
         tensorboard.add_figure(...)
 
 
-Access your logs
-================
-Once your training starts, you can view the logs by using your favorite logger or booting up the Tensorboard logs:
-
-.. code-block:: bash
-
-    tensorboard --logdir ./lightning_logs
-
 ----------
 
 ********************
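The ``tensorboard.add_figure(...)`` context line above belongs to a larger example that is collapsed in this view. A sketch of the underlying pattern, assuming the default ``TensorBoardLogger`` whose ``experiment`` attribute is a ``SummaryWriter`` (the layer and the logged names are illustrative, not from this commit):

.. code-block:: python

    import pytorch_lightning as pl
    import torch.nn as nn


    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)  # hypothetical layer, for illustration

        def validation_epoch_end(self, outputs):
            # with the default TensorBoardLogger, .experiment is a SummaryWriter
            tensorboard = self.logger.experiment
            tensorboard.add_histogram("layer_weights", self.layer.weight, self.current_epoch)
            tensorboard.add_text("note", "non-scalar artifacts go through the experiment object", self.current_epoch)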
@@ -155,9 +177,8 @@ Use the :func:`~pytorch_lightning.loggers.base.rank_zero_experiment` and :func:`
 
 .. testcode::
 
-    from pytorch_lightning.utilities import rank_zero_only
-    from pytorch_lightning.loggers import LightningLoggerBase
-    from pytorch_lightning.loggers.base import rank_zero_experiment
+    from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_experiment
+    from pytorch_lightning.utilities.distributed import rank_zero_only
 
 
     class MyLogger(LightningLoggerBase):
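The hunk stops at the class definition. For orientation, a sketch of the methods such a logger typically implements, based on the ``LightningLoggerBase`` interface (the bodies are placeholders, not part of this commit):

.. code-block:: python

    from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_experiment
    from pytorch_lightning.utilities.distributed import rank_zero_only


    class MyLogger(LightningLoggerBase):
        @property
        def name(self):
            return "MyLogger"

        @property
        @rank_zero_experiment
        def experiment(self):
            # return the experiment object associated with this logger
            pass

        @property
        def version(self):
            # return the experiment version, int or str
            return "0.1"

        @rank_zero_only
        def log_hyperparams(self, params):
            # record hyperparameters here
            pass

        @rank_zero_only
        def log_metrics(self, metrics, step):
            # record a dict of metric names -> values at the given step
            pass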
@@ -217,27 +238,26 @@ Logging frequency
 =================
 
 It may slow training down to log every single batch. By default, Lightning logs every 50 rows, or 50 training steps.
-To change this behaviour, set the `log_every_n_steps` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
+To change this behaviour, set the ``log_every_n_steps`` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
 
 .. testcode::
 
     k = 10
     trainer = Trainer(log_every_n_steps=k)
 
 
-
 Log writing frequency
 =====================
 
 Writing to a logger can be expensive, so by default Lightning writes logs to disk or to the given logger every 100 training steps.
-To change this behaviour, set the interval at which you wish to flush logs to the filesystem using the `flush_logs_every_n_steps` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
+To change this behaviour, set the interval at which you wish to flush logs to the filesystem using the ``flush_logs_every_n_steps`` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
 
 .. testcode::
 
     k = 100
     trainer = Trainer(flush_logs_every_n_steps=k)
 
-Unlike the `log_every_n_steps`, this argument does not apply to all loggers.
+Unlike the ``log_every_n_steps``, this argument does not apply to all loggers.
 The example shown here works with :class:`~pytorch_lightning.loggers.tensorboard.TensorBoardLogger`,
 which is the default logger in Lightning.
 
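The two flags touched in this hunk can be combined in one ``Trainer``; a short sketch:

.. code-block:: python

    from pytorch_lightning import Trainer

    # log metrics every 10 steps, but flush them to disk only every 100 steps
    trainer = Trainer(log_every_n_steps=10, flush_logs_every_n_steps=100)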
@@ -246,8 +266,8 @@ which is the default logger in Lightning.
 ************
 Progress Bar
 ************
-You can add any metric to the progress bar using :func:`~~pytorch_lightning.core.lightning.LightningModule.log`
-method, setting `prog_bar=True`.
+You can add any metric to the progress bar using :meth:`~pytorch_lightning.core.lightning.LightningModule.log`
+method, setting ``prog_bar=True``.
 
 
 .. code-block:: python
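The Python block following this context line is collapsed in this view. The documented pattern is a one-line ``self.log`` call inside ``training_step``; a sketch, not the exact block from the file (``compute_loss`` is a hypothetical helper):

.. code-block:: python

    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # hypothetical loss computation
        self.log("my_loss", loss, prog_bar=True)  # shows up in the progress bar
        return loss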
@@ -261,15 +281,19 @@ Modifying the progress bar
 
 The progress bar by default already includes the training loss and version number of the experiment
 if you are using a logger. These defaults can be customized by overriding the
-:func:`~pytorch_lightning.callbacks.base.ProgressBarBase.get_metrics` hook in your module.
+:meth:`~pytorch_lightning.callbacks.progress.base.ProgressBarBase.get_metrics` hook in your logger.
 
 .. code-block:: python
 
-    def get_metrics(self):
-        # don't show the version number
-        items = super().get_metrics()
-        items.pop("v_num", None)
-        return items
+    from pytorch_lightning.callbacks.progress import Tqdm
+
+
+    class CustomProgressBar(Tqdm):
+        def get_metrics(self, *args, **kwargs):
+            # don't show the version number
+            items = super().get_metrics()
+            items.pop("v_num", None)
+            return items
 
 
 ----------
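The added snippet defines a customized ``Tqdm`` bar but does not show it wired into training. One option, assuming the ``ProgressBar`` callback API of pytorch_lightning 1.5 (not shown in this commit), is to override ``get_metrics`` on the callback itself and pass it to the ``Trainer``:

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ProgressBar


    class LiteProgressBar(ProgressBar):
        def get_metrics(self, trainer, pl_module):
            # hide the version number, as in the snippet above
            items = super().get_metrics(trainer, pl_module)
            items.pop("v_num", None)
            return items


    trainer = Trainer(callbacks=[LiteProgressBar()])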
@@ -303,16 +327,16 @@ Read more about custom Python logging `here <https://docs.python.org/3/library/l
 Logging hyperparameters
 ***********************
 
-When training a model, it's useful to know what hyperparams went into that model.
-When Lightning creates a checkpoint, it stores a key "hyper_parameters" with the hyperparams.
+When training a model, it is useful to know what hyperparams went into that model.
+When Lightning creates a checkpoint, it stores a key ``"hyper_parameters"`` with the hyperparams.
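For context on where that checkpoint key comes from: calling ``save_hyperparameters()`` in the module's ``__init__`` stores the init arguments under ``"hyper_parameters"``. A sketch (the argument names are illustrative):

.. code-block:: python

    import pytorch_lightning as pl


    class LitModel(pl.LightningModule):
        def __init__(self, learning_rate=1e-3, hidden_dim=128):
            super().__init__()
            # stores learning_rate and hidden_dim under the checkpoint's
            # "hyper_parameters" key and exposes them as self.hparams
            self.save_hyperparameters()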