
Commit 5ccc50c

Some fixes to the trainer docstring (#9227)
1 parent: f21f1be

1 file changed (+18, -17 lines)

pytorch_lightning/trainer/trainer.py

Lines changed: 18 additions & 17 deletions
@@ -159,7 +159,7 @@ def __init__(
         stochastic_weight_avg: bool = False,
     ):
         r"""
-        Customize every aspect of training via flags
+        Customize every aspect of training via flags.
 
         Args:
 
@@ -168,7 +168,7 @@ def __init__(
 
             accumulate_grad_batches: Accumulates grads every k batches or as set up in the dict.
 
-            amp_backend: The mixed precision backend to use ("native" or "apex")
+            amp_backend: The mixed precision backend to use ("native" or "apex").
 
             amp_level: The optimization level to use (O1, O2, etc...).
 
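For context on the flags touched in the hunk above, here is a minimal, illustrative sketch (not part of this commit) of how ``accumulate_grad_batches`` and the mixed-precision flags are typically passed to the Trainer; the values are arbitrary examples.

from pytorch_lightning import Trainer

# Accumulate gradients over 4 batches from epoch 0, then over 8 batches from epoch 5,
# and train with native mixed precision.
trainer = Trainer(
    accumulate_grad_batches={0: 4, 5: 8},
    precision=16,
    amp_backend="native",  # amp_level (e.g. "O2") only applies when amp_backend="apex"
)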
@@ -207,34 +207,36 @@ def __init__(
             devices: Will be mapped to either `gpus`, `tpu_cores`, `num_processes` or `ipus`,
                 based on the accelerator type.
 
-            distributed_backend: deprecated. Please use 'accelerator'
+            distributed_backend: Deprecated. Please use 'accelerator'.
 
-            fast_dev_run: runs n if set to ``n`` (int) else 1 if set to ``True`` batch(es)
+            fast_dev_run: Runs n if set to ``n`` (int) else 1 if set to ``True`` batch(es)
                 of train, val and test to find any bugs (ie: a sort of unit test).
 
             flush_logs_every_n_steps: How often to flush logs to disk (defaults to every 100 steps).
 
-            gpus: number of gpus to train on (int) or which GPUs to train on (list or str) applied per node
+            gpus: Number of GPUs to train on (int) or which GPUs to train on (list or str) applied per node
 
-            gradient_clip_val: 0 means don't clip.
+            gradient_clip_val: The value at which to clip gradients. Passing ``gradient_clip_val=0`` disables gradient
+                clipping.
 
-            gradient_clip_algorithm: 'value' means clip_by_value, 'norm' means clip_by_norm. Default: 'norm'
+            gradient_clip_algorithm: The gradient clipping algorithm to use. Pass ``gradient_clip_algorithm="value"``
+                for clip_by_value, and ``gradient_clip_algorithm="norm"`` for clip_by_norm.
 
-            limit_train_batches: How much of training dataset to check (float = fraction, int = num_batches)
+            limit_train_batches: How much of training dataset to check (float = fraction, int = num_batches).
 
-            limit_val_batches: How much of validation dataset to check (float = fraction, int = num_batches)
+            limit_val_batches: How much of validation dataset to check (float = fraction, int = num_batches).
 
-            limit_test_batches: How much of test dataset to check (float = fraction, int = num_batches)
+            limit_test_batches: How much of test dataset to check (float = fraction, int = num_batches).
 
-            limit_predict_batches: How much of prediction dataset to check (float = fraction, int = num_batches)
+            limit_predict_batches: How much of prediction dataset to check (float = fraction, int = num_batches).
 
             logger: Logger (or iterable collection of loggers) for experiment tracking. A ``True`` value uses
                 the default ``TensorBoardLogger``. ``False`` will disable logging. If multiple loggers are
                 provided and the `save_dir` property of that logger is not set, local files (checkpoints,
                 profiler traces, etc.) are saved in ``default_root_dir`` rather than in the ``log_dir`` of any
                 of the individual loggers.
 
-            log_gpu_memory: None, 'min_max', 'all'. Might slow performance
+            log_gpu_memory: None, 'min_max', 'all'. Might slow performance.
 
             log_every_n_steps: How often to log within steps (defaults to every 50 steps).
 
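As a rough usage sketch (illustrative only, not from the commit), the debugging and clipping flags documented in this hunk are all plain Trainer constructor arguments.

from pytorch_lightning import Trainer

# Clip gradient norms at 0.5 (pass gradient_clip_algorithm="value" for value clipping).
trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="norm")

# Quick smoke test: run only 2 batches of train/val/test to flush out bugs.
debug_trainer = Trainer(fast_dev_run=2)

# Use 10% of the training set and 5 validation batches per epoch, on 2 GPUs.
subset_trainer = Trainer(limit_train_batches=0.1, limit_val_batches=5, gpus=2)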
@@ -245,7 +247,7 @@ def __init__(
                     Deprecated in v1.5.0 and will be removed in v1.7.0
                     Please set ``prepare_data_per_node`` in LightningDataModule or LightningModule directly instead.
 
-            process_position: orders the progress bar when running multiple models on same machine.
+            process_position: Orders the progress bar when running multiple models on same machine.
 
                 .. deprecated:: v1.5
                     ``process_position`` has been deprecated in v1.5 and will be removed in v1.7.
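The deprecation note above points at setting ``prepare_data_per_node`` on the module instead of the Trainer. A hedged sketch of what that migration might look like; the class name is a placeholder, not part of this commit.

from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        # Replaces the deprecated Trainer(prepare_data_per_node=...) flag.
        self.prepare_data_per_node = True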
@@ -280,15 +282,14 @@ def __init__(
                 :class:`datetime.timedelta`, or a dictionary with keys that will be passed to
                 :class:`datetime.timedelta`.
 
-            num_nodes: number of GPU nodes for distributed training.
+            num_nodes: Number of GPU nodes for distributed training.
 
-            num_processes: number of processes for distributed training with distributed_backend="ddp_cpu"
+            num_processes: Number of processes for distributed training with ``accelerator="ddp_cpu"``.
 
             num_sanity_val_steps: Sanity check runs n validation batches before starting the training routine.
                 Set it to `-1` to run all batches in all validation dataloaders.
 
             reload_dataloaders_every_n_epochs: Set to a non-negative integer to reload dataloaders every n epochs.
-                Default: 0
 
             reload_dataloaders_every_epoch: Set to True to reload dataloaders every epoch.
 
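A small illustrative sketch (not part of the commit) combining the distributed and dataloader flags described in this hunk; the values are arbitrary.

from pytorch_lightning import Trainer

# 4 processes per node on 2 nodes with the ddp_cpu accelerator, skip the
# pre-training sanity check, and rebuild the dataloaders every 10 epochs.
trainer = Trainer(
    accelerator="ddp_cpu",
    num_processes=4,
    num_nodes=2,
    num_sanity_val_steps=0,
    reload_dataloaders_every_n_epochs=10,
)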
@@ -336,7 +337,7 @@ def __init__(
                 reload when reaching the minimum length of datasets.
 
             stochastic_weight_avg: Whether to use `Stochastic Weight Averaging (SWA)
-                <https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/>_`
+                <https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/>`_.
 
         """
         super().__init__()
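Finally, ``stochastic_weight_avg`` is a boolean switch; an illustrative one-liner (values arbitrary, not from the commit):

from pytorch_lightning import Trainer

# Enable Stochastic Weight Averaging for the run.
trainer = Trainer(stochastic_weight_avg=True, max_epochs=20)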
