docs/source/guides/speed.rst (10 additions, 0 deletions)

@@ -145,6 +145,16 @@
some references, [`1 <https://discuss.pytorch.org/t/guidelines-for-assigning-num

The best approach is to increase ``num_workers`` gradually and stop once you see no further improvement in your training speed.

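As a rough guide, you can time a single pass over the dataloader for increasing worker counts and pick the smallest value past which throughput stops improving. The sketch below assumes a hypothetical ``train_dataset`` (any ``torch.utils.data.Dataset``) and an arbitrary batch size of 64:

.. code-block:: python

    import os
    import time

    from torch.utils.data import DataLoader

    # `train_dataset` is a placeholder for your own Dataset instance.
    for num_workers in range(0, (os.cpu_count() or 1) + 1, 2):
        loader = DataLoader(train_dataset, batch_size=64, num_workers=num_workers)
        start = time.time()
        for _ in loader:
            pass  # one full pass measures pure data-loading throughput
        print(f"num_workers={num_workers}: {time.time() - start:.2f}s")
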
For debugging purposes, or for dataloaders that load very small datasets, it can be desirable to set ``num_workers=0``. However, Lightning will then log a warning for every dataloader with ``num_workers <= min(2, os.cpu_count())``. In such cases, you can filter out this specific warning with:

.. code-block:: python

    import warnings

    # The message is a regular expression matched against the start of the
    # warning text, so this silences only the `num_workers` warning.
    warnings.filterwarnings(
        "ignore",
        ".*does not have many workers. Consider increasing the value of the `num_workers` argument.*",
    )
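
Note that the filter must be installed before the warning is emitted, i.e. before ``Trainer.fit`` touches the dataloaders; filters added afterwards have no effect on warnings that were already shown.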

Spawn
"""""
When using ``strategy=ddp_spawn`` or training on TPUs, Lightning uses multiple GPUs/TPU cores by calling ``.spawn()`` under the hood.
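
For reference, the spawn-based path is selected purely through ``Trainer`` arguments. A minimal sketch, assuming Lightning >= 1.6 (where the ``accelerator``/``devices`` arguments are available) and a hypothetical ``LightningModule`` called ``MyModel``:

.. code-block:: python

    import pytorch_lightning as pl

    # `MyModel` is a placeholder for your own LightningModule.
    model = MyModel()

    # "ddp_spawn" launches one training process per device via `.spawn()`.
    trainer = pl.Trainer(strategy="ddp_spawn", accelerator="gpu", devices=2)
    trainer.fit(model)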