
Commit d65b037

Authored by awaelchli, williamFalcon, Borda, ananyahjha93, and teddykoker
Mocking Loggers Part 5/5 (final) (#3926)
* base
* add xfail
* new test
* import
* missing import
* xfail if not installed, include mkpatch, fix test
* mock comet, comet mocks, fix test, remove dep, undo merge duplication
* line
* line
* convert doctest
* doctest
* docs
* prune Results usage in notebooks (#3911)
* notebooks
* notebooks
* revamp entire metrics (#3868)
* removed metric
* added new metrics
* pep8
* pep8
* docs
* docs
* win ddp tests skip
* win ddp tests skip
* win ddp tests skip
* win ddp tests skip
* reset in compute, cache compute
* reduce_ops handling
* sync -> sync_dist, type annotations
* wip docs
* mean squared error
* docstring
* added mean ___ error metrics
* added mean ___ error metrics
* seperated files
* accuracy doctest
* gpu fix
* remove unnecessary mixin
* metric and accuracy docstring
* metric docs
* pep8, changelog
* refactor dist utils, pep8
* refactor dist utils, pep8
* Callback docs with autosummary (#3908)
* callback docs with autosummary
* do not show private methods
* callback base docstring
* skip some docker builds (temporally pass) (#3913)
* skip some docker builds
* todos
* skip
* use badges only with push (#3914)
* testtube
* mock test tube
* mock mlflow
* remove mlflow
* clean up
* test
* test
* test
* test
* test
* test
* code blocks
* remove import
* codeblock
* logger
* wandb causes stall

Co-authored-by: William Falcon <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: Ananya Harsh Jha <[email protected]>
Co-authored-by: Teddy Koker <[email protected]>
Co-authored-by: Jeff Yang <[email protected]>
1 parent 1a345a4 commit d65b037
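
The pattern this commit completes: the test suite no longer imports third-party logger backends for real; it patches the names those backends are bound to inside the pytorch_lightning.loggers.* modules, so a MagicMock stands in for the missing package. A condensed sketch of the technique, using patch targets that appear verbatim in the diff below (the snippet itself is illustrative, not part of the commit):

    # Patch the names the logger module imported, not the package itself:
    # MLFlowLogger then runs end-to-end without mlflow being installed.
    from unittest import mock

    from pytorch_lightning.loggers import MLFlowLogger

    with mock.patch('pytorch_lightning.loggers.mlflow.mlflow'), \
            mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient'):
        logger = MLFlowLogger(experiment_name="default", tracking_uri="file:./ml-runs")
        _ = logger.experiment  # a MagicMock stand-in for the real MlflowClient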

File tree

8 files changed (+93, -67 lines)

docs/source/loggers.rst
Lines changed: 3 additions & 3 deletions

@@ -74,7 +74,7 @@ First, install the package:
 
 Then configure the logger and pass it to the :class:`~pytorch_lightning.trainer.trainer.Trainer`:
 
-.. testcode::
+.. code-block:: python
 
     from pytorch_lightning.loggers import MLFlowLogger
     mlf_logger = MLFlowLogger(

@@ -169,7 +169,7 @@ First, install the package:
 
 Then configure the logger and pass it to the :class:`~pytorch_lightning.trainer.trainer.Trainer`:
 
-.. testcode::
+.. code-block:: python
 
     from pytorch_lightning.loggers import TestTubeLogger
     logger = TestTubeLogger('tb_logs', name='my_model')

@@ -232,7 +232,7 @@ Multiple Loggers
 Lightning supports the use of multiple loggers, just pass a list to the
 :class:`~pytorch_lightning.trainer.trainer.Trainer`.
 
-.. testcode::
+.. code-block:: python
 
     from pytorch_lightning.loggers import TensorBoardLogger, TestTubeLogger
     logger1 = TensorBoardLogger('tb_logs', name='my_model')
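
For reference, a runnable version of the multiple-loggers snippet that the last hunk truncates (the logger2 line and the Trainer call are an assumption based on the surrounding docs, not shown in this diff):

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import TensorBoardLogger, TestTubeLogger

    logger1 = TensorBoardLogger('tb_logs', name='my_model')
    logger2 = TestTubeLogger('tb_logs', name='my_model')
    trainer = Trainer(logger=[logger1, logger2])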

docs/source/logging.rst
Lines changed: 1 addition & 1 deletion

@@ -306,7 +306,7 @@ Snapshot code
 Loggers also allow you to snapshot a copy of the code used in this experiment.
 For example, TestTubeLogger does this with a flag:
 
-.. testcode::
+.. code-block:: python
 
     from pytorch_lightning.loggers import TestTubeLogger
     logger = TestTubeLogger('.', create_git_tag=True)

pytorch_lightning/loggers/mlflow.py
Lines changed: 22 additions & 19 deletions

@@ -43,25 +43,28 @@ class MLFlowLogger(LightningLoggerBase):
 
         pip install mlflow
 
-    Example:
-        >>> from pytorch_lightning import Trainer
-        >>> from pytorch_lightning.loggers import MLFlowLogger
-        >>> mlf_logger = MLFlowLogger(
-        ...     experiment_name="default",
-        ...     tracking_uri="file:./ml-runs"
-        ... )
-        >>> trainer = Trainer(logger=mlf_logger)
-
-    Use the logger anywhere in you :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
-
-    >>> from pytorch_lightning import LightningModule
-    >>> class LitModel(LightningModule):
-    ...     def training_step(self, batch, batch_idx):
-    ...         # example
-    ...         self.logger.experiment.whatever_ml_flow_supports(...)
-    ...
-    ...     def any_lightning_module_function_or_hook(self):
-    ...         self.logger.experiment.whatever_ml_flow_supports(...)
+    .. code-block:: python
+
+        from pytorch_lightning import Trainer
+        from pytorch_lightning.loggers import MLFlowLogger
+        mlf_logger = MLFlowLogger(
+            experiment_name="default",
+            tracking_uri="file:./ml-runs"
+        )
+        trainer = Trainer(logger=mlf_logger)
+
+    Use the logger anywhere in your :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
+
+    .. code-block:: python
+
+        from pytorch_lightning import LightningModule
+        class LitModel(LightningModule):
+            def training_step(self, batch, batch_idx):
+                # example
+                self.logger.experiment.whatever_ml_flow_supports(...)
+
+            def any_lightning_module_function_or_hook(self):
+                self.logger.experiment.whatever_ml_flow_supports(...)
 
     Args:
         experiment_name: The name of the experiment
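
Why the conversion matters: Sphinx's doctest machinery and pytest's --doctest-modules both execute ">>>"-style examples, so with mlflow now mocked out (or simply absent) in CI these docstring examples would raise ImportError when run; a plain ".. code-block:: python" directive is never executed. A minimal stdlib-only illustration of the difference (the backend module name is made up):

    import doctest

    def configure():
        """
        >>> import some_missing_backend   # executed by doctest, unlike a code-block
        """

    if __name__ == "__main__":
        # Fails (unexpected ModuleNotFoundError) precisely because doctest
        # runs the example; the same text in ".. code-block:: python" is inert.
        results = doctest.testmod()
        print(results)  # TestResults(failed=1, attempted=1)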

pytorch_lightning/loggers/test_tube.py
Lines changed: 17 additions & 16 deletions

@@ -21,10 +21,8 @@
 
 try:
     from test_tube import Experiment
-    _TEST_TUBE_AVAILABLE = True
 except ImportError:  # pragma: no-cover
     Experiment = None
-    _TEST_TUBE_AVAILABLE = False
 
 from pytorch_lightning.core.lightning import LightningModule
 from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_experiment

@@ -41,22 +39,25 @@ class TestTubeLogger(LightningLoggerBase):
 
         pip install test_tube
 
-    Example:
-        >>> from pytorch_lightning import Trainer
-        >>> from pytorch_lightning.loggers import TestTubeLogger
-        >>> logger = TestTubeLogger("tt_logs", name="my_exp_name")
-        >>> trainer = Trainer(logger=logger)
+    .. code-block:: python
+
+        from pytorch_lightning import Trainer
+        from pytorch_lightning.loggers import TestTubeLogger
+        logger = TestTubeLogger("tt_logs", name="my_exp_name")
+        trainer = Trainer(logger=logger)
 
     Use the logger anywhere in your :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
 
-    >>> from pytorch_lightning import LightningModule
-    >>> class LitModel(LightningModule):
-    ...     def training_step(self, batch, batch_idx):
-    ...         # example
-    ...         self.logger.experiment.whatever_method_summary_writer_supports(...)
-    ...
-    ...     def any_lightning_module_function_or_hook(self):
-    ...         self.logger.experiment.add_histogram(...)
+    .. code-block:: python
+
+        from pytorch_lightning import LightningModule
+        class LitModel(LightningModule):
+            def training_step(self, batch, batch_idx):
+                # example
+                self.logger.experiment.whatever_method_summary_writer_supports(...)
+
+            def any_lightning_module_function_or_hook(self):
+                self.logger.experiment.add_histogram(...)
 
     Args:
         save_dir: Save directory

@@ -83,7 +84,7 @@ def __init__(
         create_git_tag: bool = False,
         log_graph: bool = False
     ):
-        if not _TEST_TUBE_AVAILABLE:
+        if Experiment is None:
             raise ImportError('You want to use `test_tube` logger which is not installed yet,'
                               ' install it with `pip install test-tube`.')
         super().__init__()
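
The removal of _TEST_TUBE_AVAILABLE is what makes the constructor mockable: the guard now reads the one name (Experiment) that the tests patch, whereas a boolean captured at import time would remain False under mock.patch. A self-contained sketch of the sentinel-import pattern (the class name is illustrative):

    from unittest import mock

    try:
        from test_tube import Experiment  # optional dependency
    except ImportError:  # pragma: no-cover
        Experiment = None  # single sentinel; no separate availability flag


    class LoggerSketch:
        def __init__(self):
            # The guard reads the module-level name at call time, so patching
            # '<module>.Experiment' is enough to get past it in tests.
            if Experiment is None:
                raise ImportError('install it with `pip install test-tube`')


    if __name__ == "__main__":
        with mock.patch(f"{__name__}.Experiment"):
            LoggerSketch()  # constructs fine even without test_tube installed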

requirements/extra.txt
Lines changed: 0 additions & 4 deletions

@@ -1,9 +1,5 @@
 # extended list of package dependencies to reach full functionality
 
-# TODO: this shall be removed as we mock them in tests
-mlflow>=1.0.0
-test_tube>=0.7.5
-
 matplotlib>=3.1.1
 # no need to install with [pytorch] as pytorch is already installed and torchvision is required only for Horovod examples
 horovod>=0.19.2, != 0.20.0  # v0.20.0 has problem with building the wheel/installation

tests/base/models.py
Lines changed: 0 additions & 6 deletions

@@ -8,12 +8,6 @@
 
 from tests.base.datasets import TrialMNIST, AverageDataset, MNIST
 
-try:
-    from test_tube import HyperOptArgumentParser
-except ImportError as exp:
-    # TODO: this should be discussed and moved out of this package
-    raise ImportError('Missing test-tube package.') from exp
-
 from pytorch_lightning.core.lightning import LightningModule
tests/loggers/test_all.py
Lines changed: 40 additions & 17 deletions

@@ -1,4 +1,3 @@
-import atexit
 import inspect
 import os
 import pickle

@@ -20,6 +19,7 @@
 from pytorch_lightning.loggers.base import DummyExperiment
 from tests.base import EvalModelTemplate
 from tests.loggers.test_comet import _patch_comet_atexit
+from tests.loggers.test_mlflow import mock_mlflow_run_creation
 
 
 def _get_logger_args(logger_class, save_dir):

@@ -34,27 +34,31 @@ def _get_logger_args(logger_class, save_dir):
 
 
 def test_loggers_fit_test_all(tmpdir, monkeypatch):
-    _patch_comet_atexit(monkeypatch)
+    """ Verify the basic functionality of all loggers. """
+
+    _test_loggers_fit_test(tmpdir, TensorBoardLogger)
+
     with mock.patch('pytorch_lightning.loggers.comet.comet_ml'), \
             mock.patch('pytorch_lightning.loggers.comet.CometOfflineExperiment'):
+        _patch_comet_atexit(monkeypatch)
         _test_loggers_fit_test(tmpdir, CometLogger)
 
-    _test_loggers_fit_test(tmpdir, MLFlowLogger)
+    with mock.patch('pytorch_lightning.loggers.mlflow.mlflow'), \
+            mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient'):
+        _test_loggers_fit_test(tmpdir, MLFlowLogger)
 
     with mock.patch('pytorch_lightning.loggers.neptune.neptune'):
        _test_loggers_fit_test(tmpdir, NeptuneLogger)
 
-    _test_loggers_fit_test(tmpdir, TensorBoardLogger)
-    _test_loggers_fit_test(tmpdir, TestTubeLogger)
+    with mock.patch('pytorch_lightning.loggers.test_tube.Experiment'):
+        _test_loggers_fit_test(tmpdir, TestTubeLogger)
 
     with mock.patch('pytorch_lightning.loggers.wandb.wandb'):
        _test_loggers_fit_test(tmpdir, WandbLogger)
 
 
 def _test_loggers_fit_test(tmpdir, logger_class):
-    """ Verify that basic functionality of all loggers. """
     os.environ['PL_DEV_DEBUG'] = '0'
-
     model = EvalModelTemplate()
 
     class StoreHistoryLogger(logger_class):

@@ -78,6 +82,13 @@ def log_metrics(self, metrics, step):
         logger.experiment.id = 'foo'
         logger.experiment.project_name = 'bar'
 
+    if logger_class == TestTubeLogger:
+        logger.experiment.version = 'foo'
+        logger.experiment.name = 'bar'
+
+    if logger_class == MLFlowLogger:
+        logger = mock_mlflow_run_creation(logger, experiment_id="foo", run_id="bar")
+
     trainer = Trainer(
         max_epochs=1,
         logger=logger,

@@ -109,21 +120,27 @@ def log_metrics(self, metrics, step):
 
 
 def test_loggers_save_dir_and_weights_save_path_all(tmpdir, monkeypatch):
-    _patch_comet_atexit(monkeypatch)
+    """ Test the combinations of save_dir, weights_save_path and default_root_dir. """
+
+    _test_loggers_save_dir_and_weights_save_path(tmpdir, TensorBoardLogger)
+
     with mock.patch('pytorch_lightning.loggers.comet.comet_ml'), \
             mock.patch('pytorch_lightning.loggers.comet.CometOfflineExperiment'):
+        _patch_comet_atexit(monkeypatch)
         _test_loggers_save_dir_and_weights_save_path(tmpdir, CometLogger)
 
-    _test_loggers_save_dir_and_weights_save_path(tmpdir, TensorBoardLogger)
-    _test_loggers_save_dir_and_weights_save_path(tmpdir, MLFlowLogger)
-    _test_loggers_save_dir_and_weights_save_path(tmpdir, TestTubeLogger)
+    with mock.patch('pytorch_lightning.loggers.mlflow.mlflow'), \
+            mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient'):
+        _test_loggers_save_dir_and_weights_save_path(tmpdir, MLFlowLogger)
+
+    with mock.patch('pytorch_lightning.loggers.test_tube.Experiment'):
+        _test_loggers_save_dir_and_weights_save_path(tmpdir, TestTubeLogger)
 
     with mock.patch('pytorch_lightning.loggers.wandb.wandb'):
        _test_loggers_save_dir_and_weights_save_path(tmpdir, WandbLogger)
 
 
 def _test_loggers_save_dir_and_weights_save_path(tmpdir, logger_class):
-    """ Test the combinations of save_dir, weights_save_path and default_root_dir. """
 
     class TestLogger(logger_class):
         # for this test it does not matter what these attributes are

@@ -255,18 +272,24 @@ def on_train_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx):
     assert pl_module.logger.experiment.something(foo="bar") is None
 
 
-@pytest.mark.skipif(platform.system() == "Windows", reason="Distributed training is not supported on Windows")
 @pytest.mark.parametrize("logger_class", [
-    TensorBoardLogger,
+    CometLogger,
     MLFlowLogger,
-    # NeptuneLogger,  # TODO: fix: https://github.com/PyTorchLightning/pytorch-lightning/pull/3256
+    NeptuneLogger,
+    TensorBoardLogger,
     TestTubeLogger,
 ])
-@mock.patch('pytorch_lightning.loggers.neptune.neptune')
-def test_logger_created_on_rank_zero_only(neptune, tmpdir, monkeypatch, logger_class):
+@pytest.mark.skipif(platform.system() == "Windows", reason="Distributed training is not supported on Windows")
+def test_logger_created_on_rank_zero_only(tmpdir, monkeypatch, logger_class):
     """ Test that loggers get replaced by dummy loggers on global rank > 0 """
     _patch_comet_atexit(monkeypatch)
+    try:
+        _test_logger_created_on_rank_zero_only(tmpdir, logger_class)
+    except (ImportError, ModuleNotFoundError):
+        pytest.xfail(f"multi-process test requires {logger_class.__name__} dependencies to be installed.")
+
 
+def _test_logger_created_on_rank_zero_only(tmpdir, logger_class):
     logger_args = _get_logger_args(logger_class, tmpdir)
     logger = logger_class(**logger_args)
     model = EvalModelTemplate()
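
The rank-zero test is the one place mocks are not used, presumably because patches applied in the parent process do not carry over into the spawned ddp workers; a missing backend is therefore converted into an expected failure instead. A self-contained sketch of that try/xfail guard (the stand-in logger class and backend name are made up):

    import pytest


    class _NeedsMissingBackend:
        """ Stand-in for a logger whose import chain needs an absent package. """
        def __init__(self):
            import some_missing_backend  # noqa: F401


    @pytest.mark.parametrize("logger_class", [_NeedsMissingBackend])
    def test_runs_or_xfails(logger_class):
        try:
            logger_class()
        except (ImportError, ModuleNotFoundError):
            # xfail at runtime: a missing optional dependency is an anticipated
            # environment condition, not a test bug.
            pytest.xfail(f"requires {logger_class.__name__} dependencies to be installed")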

tests/loggers/test_mlflow.py
Lines changed: 10 additions & 1 deletion

@@ -5,13 +5,22 @@
 from unittest.mock import MagicMock
 import pytest
 
-from mlflow.tracking import MlflowClient
 
 from pytorch_lightning import Trainer
 from pytorch_lightning.loggers import MLFlowLogger
 from tests.base import EvalModelTemplate
 
 
+def mock_mlflow_run_creation(logger, experiment_name=None, experiment_id=None, run_id=None):
+    """ Helper function to simulate mlflow client creating a new (or existing) experiment. """
+    run = MagicMock()
+    run.info.run_id = run_id
+    logger._mlflow_client.get_experiment_by_name = MagicMock(return_value=experiment_name)
+    logger._mlflow_client.create_experiment = MagicMock(return_value=experiment_id)
+    logger._mlflow_client.create_run = MagicMock(return_value=run)
+    return logger
+
+
 @mock.patch('pytorch_lightning.loggers.mlflow.mlflow')
 @mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient')
 def test_mlflow_logger_exists(client, mlflow, tmpdir):
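
How the new helper composes with the module-level patches: once MlflowClient is a MagicMock, mock_mlflow_run_creation pins the experiment and run ids the logger will read back from the stubbed client. A hedged usage sketch, assumed to live alongside the helper above (the test name is illustrative, and save_dir assumes the MLFlowLogger signature of this release):

    @mock.patch('pytorch_lightning.loggers.mlflow.mlflow')
    @mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient')
    def test_mlflow_ids_are_stubbed(client, mlflow, tmpdir):
        logger = MLFlowLogger('test', save_dir=str(tmpdir))
        logger = mock_mlflow_run_creation(logger, experiment_id="exp-id", run_id="run-id")
        # Resolving .experiment walks the stubbed get_experiment_by_name /
        # create_experiment / create_run path without touching real mlflow.
        _ = logger.experiment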
