Merged
Changes from all commits
37 commits
9b1e4b2
Add Google Colab badges (#5111)
shacharmirkin Dec 14, 2020
76081a7
simplify changelog (#5135)
Borda Dec 14, 2020
3540170
add copyright to tests (#5143)
Borda Dec 15, 2020
81e9d42
Fix saved filename in ModelCheckpoint if it already exists (#4861)
rohitgr7 Dec 16, 2020
151d86e
Update isort config (#5142)
akihironitta Dec 16, 2020
1d13943
Fix reset TensorRunningAccum (#5106)
VinhLoiIT Dec 16, 2020
a7fe24e
Fix hang in DDP HPC accelerators (#5157)
ananthsub Dec 16, 2020
89ff7b4
Update changelog, increment version (#5148)
SeanNaren Dec 15, 2020
2194d2d
Prune CHANGELOG.md (#5151)
SeanNaren Dec 15, 2020
58a2993
support number for logging with sync_dist=True (#5080)
tchaton Dec 16, 2020
13bbf4b
Un-balanced logging properly supported (#5119)
tchaton Dec 16, 2020
9669c80
[bugfix] remove nan loss in manual optimization (#5121)
tchaton Dec 16, 2020
6b19198
[bug-fix] Metric reduction with Logging (#5150)
tchaton Dec 16, 2020
0211f7f
Disable pl optimizer temporarily to fix AMP issues (#5163)
SeanNaren Dec 17, 2020
5119013
drop install FairScale for TPU (#5113)
Borda Dec 17, 2020
5bae639
temporarily suspend all mergify rules (#5112)
Borda Dec 17, 2020
3b83666
prune ecosystem example (#5085)
Borda Dec 17, 2020
518d915
add doctests for example 1/n (#5079)
Borda Dec 17, 2020
3c5dad7
Document speed comparison (#2072)
Borda Dec 17, 2020
cb45ab0
Prelease 1.1.2rc (#5171)
SeanNaren Dec 17, 2020
ac996fb
Fixed docs for WandbLogger (#5128)
hassiahk Dec 18, 2020
a5b2392
update DALIClassificationLoader to not use deprecated arguments (#4925)
gan3sh500 Dec 18, 2020
d72ba90
Github Actions deprecation (#5183)
InCogNiTo124 Dec 18, 2020
d0b23f7
[bugfix] Correct call to torch.no_grad (#5124)
8greg8 Dec 19, 2020
dcd29ae
feat(wandb): offset logging step when resuming (#5050)
borisdayma Dec 19, 2020
3b0197f
reduce verbosity level in drone ci (#5190)
awaelchli Dec 20, 2020
cd83829
Remove Sourcerer (#5172)
rohitgr7 Dec 20, 2020
cc14fc1
skip multi-gpu test when running on single-gpu machine (#5186)
awaelchli Dec 20, 2020
fd5322d
Update warning if ckpt directory is not empty (#5209)
rohitgr7 Dec 21, 2020
12d6437
add make cmd - clean (#5204)
Borda Dec 21, 2020
2438d74
add doctests for example 2/n segmentation (#5083)
Borda Dec 21, 2020
64f9b4d
Update README.md
williamFalcon Dec 22, 2020
2ddd36b
Update README.md
williamFalcon Dec 22, 2020
1c8ad3a
Tighten up mypy config (#5237)
alanhdu Dec 23, 2020
365b9b5
update for v1.1.2 (#5240)
Borda Dec 23, 2020
74d0652
flake8 ++
Borda Dec 23, 2020
dfbb592
fix test - reduce metric
Borda Dec 28, 2020
7 changes: 3 additions & 4 deletions .drone.yml
@@ -33,11 +33,10 @@ steps:
- python --version
- pip --version
- nvidia-smi
- pip install -r ./requirements/devel.txt --upgrade-strategy only-if-needed -v --no-cache-dir
- pip install git+https://${AUTH_TOKEN}@github.com/PyTorchLightning/[email protected] -v --no-cache-dir
- pip install -r ./requirements/devel.txt --upgrade-strategy only-if-needed --no-cache-dir
- pip install git+https://${AUTH_TOKEN}@github.com/PyTorchLightning/[email protected] --no-cache-dir
# when Image has defined CUDa version we can switch to this package spec "nvidia-dali-cuda${CUDA_VERSION%%.*}0"
# todo: temprarl fix till https://github.com/PyTorchLightning/pytorch-lightning/pull/4922 is resolved
- pip install --extra-index-url https://developer.download.nvidia.com/compute/redist "nvidia-dali-cuda100<0.27" --upgrade-strategy only-if-needed
- pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100 --upgrade-strategy only-if-needed
- pip list
- python -m coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --durations=25 # --flake8
# Running special tests
6 changes: 3 additions & 3 deletions .github/workflows/release-docker.yml
@@ -26,7 +26,7 @@ jobs:
- name: Get release version
if: startsWith(github.ref, 'refs/tags/') || github.event_name == 'release'
id: get_version
run: echo ::set-env name=RELEASE_VERSION::$(echo ${GITHUB_REF##*/})
run: echo "::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF##*/})"

- name: Publish Releases to Docker
# only on releases
@@ -37,6 +37,6 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
dockerfile: dockers/release/Dockerfile
build_args: PYTHON_VERSION=${{ matrix.python_version }},PYTORCH_VERSION=${{ matrix.pytorch_version }},LIGHTNING_VERSION=${{ env.RELEASE_VERSION }}
tags: "${{ env.RELEASE_VERSION }}-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }},latest-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }}"
build_args: PYTHON_VERSION=${{ matrix.python_version }},PYTORCH_VERSION=${{ matrix.pytorch_version }},LIGHTNING_VERSION=${{ steps.get_version.outputs.RELEASE_VERSION }}
tags: "${{ steps.get_version.outputs.RELEASE_VERSION }}-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }},latest-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }}"
timeout-minutes: 55
112 changes: 56 additions & 56 deletions .mergify.yml
@@ -12,59 +12,59 @@
# See the License for the specific language governing permissions and
# limitations under the License.

pull_request_rules:

- name: Automatic merge on approval
conditions:
- base=master
# number of review approvals
- "#approved-reviews-by>=3"
# no waiting or assigned review
- "#review-requested=0"
# no requested chnages from any reviewer
- "#changes-requested-reviews-by=0"
# this serves as ALL check has to pass as we have actually around 40 tests in total
- "#status-success>=54"
# this is just in case since we rely on GPU tests (note: redundand to the above)
- status-success=continuous-integration/drone/pr
- "status-success=ci/circleci: TPU-tests"
# this is patter-like, unofrunatly serves as `any(...)` (note: redundand to the above)
#- "status-success~=^ci/circleci:"
# no conflict with master branch
- -conflict
# was not closed yet
- -closed
# filter-out GH draft PRs
- -draft
actions:
delete_head_branch: {}
merge:
# https://doc.mergify.io/merge-action.html#strict-merge
# (on head branch) $ git merge --no-ff base
# (on head branch) # Wait for CI to go green
# (on head branch) # Squash all commits
# (on base branch) $ git merge --ff head
strict: true
method: squash
comment:
message: Great job! =)

- name: warn on conflicts
conditions:
- conflict
# filter-out GH draft PRs
- -draft
actions:
comment:
message: This pull request is now in conflict... :(

- name: add core reviewer
conditions:
# filter-out GH draft PRs
- -draft
# number of review approvals
- "#approved-reviews-by<3"
actions:
request_reviews:
teams:
- core-contributors
#pull_request_rules:
#
# - name: Automatic merge on approval
# conditions:
# - base=master
# # number of review approvals
# - "#approved-reviews-by>=3"
# # no waiting or assigned review
# - "#review-requested=0"
# # no requested chnages from any reviewer
# - "#changes-requested-reviews-by=0"
# # this serves as ALL check has to pass as we have actually around 40 tests in total
# - "#status-success>=54"
# # this is just in case since we rely on GPU tests (note: redundand to the above)
# - status-success=continuous-integration/drone/pr
# - "status-success=ci/circleci: TPU-tests"
# # this is patter-like, unofrunatly serves as `any(...)` (note: redundand to the above)
# #- "status-success~=^ci/circleci:"
# # no conflict with master branch
# - -conflict
# # was not closed yet
# - -closed
# # filter-out GH draft PRs
# - -draft
# actions:
# delete_head_branch: {}
# merge:
# # https://doc.mergify.io/merge-action.html#strict-merge
# # (on head branch) $ git merge --no-ff base
# # (on head branch) # Wait for CI to go green
# # (on head branch) # Squash all commits
# # (on base branch) $ git merge --ff head
# strict: true
# method: squash
# comment:
# message: Great job! =)
#
# - name: warn on conflicts
# conditions:
# - conflict
# # filter-out GH draft PRs
# - -draft
# actions:
# comment:
# message: This pull request is now in conflict... :(
#
# - name: add core reviewer
# conditions:
# # filter-out GH draft PRs
# - -draft
# # number of review approvals
# - "#approved-reviews-by<3"
# actions:
# request_reviews:
# teams:
# - core-contributors
17 changes: 0 additions & 17 deletions .update.sh

This file was deleted.

62 changes: 60 additions & 2 deletions CHANGELOG.md
@@ -50,6 +50,64 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed distributed setting and `ddp_cpu` only with `num_processes>1` ([#5297](https://github.com/PyTorchLightning/pytorch-lightning/pull/5297))


- Fixed the saved filename in `ModelCheckpoint` when it already exists ([#4861](https://github.com/PyTorchLightning/pytorch-lightning/pull/4861))


- Fixed `DDPHPCAccelerator` hangs in DDP construction by calling `init_device` ([#5157](https://github.com/PyTorchLightning/pytorch-lightning/pull/5157))


## [1.1.2] - 2020-12-23

### Added

- Support number for logging with `sync_dist=True` ([#5080](https://github.com/PyTorchLightning/pytorch-lightning/pull/5080))
- Added offset logging step when resuming for Wandb logger ([#5050](https://github.com/PyTorchLightning/pytorch-lightning/pull/5050))

### Removed

- `enable_pl_optimizer=False` by default to temporarily fix AMP issues ([#5163](https://github.com/PyTorchLightning/pytorch-lightning/pull/5163))

### Fixed

- Metric reduction with Logging ([#5150](https://github.com/PyTorchLightning/pytorch-lightning/pull/5150))
- Remove nan loss in manual optimization ([#5121](https://github.com/PyTorchLightning/pytorch-lightning/pull/5121))
- Un-balanced logging properly supported ([#5119](https://github.com/PyTorchLightning/pytorch-lightning/pull/5119))
- Fix hanging in DDP HPC accelerators ([#5157](https://github.com/PyTorchLightning/pytorch-lightning/pull/5157))
- Fix saved filename in `ModelCheckpoint` if it already exists ([#4861](https://github.com/PyTorchLightning/pytorch-lightning/pull/4861))
- Fix reset `TensorRunningAccum` ([#5106](https://github.com/PyTorchLightning/pytorch-lightning/pull/5106))
- Updated `DALIClassificationLoader` to not use deprecated arguments ([#4925](https://github.com/PyTorchLightning/pytorch-lightning/pull/4925))
- Corrected call to `torch.no_grad` ([#5124](https://github.com/PyTorchLightning/pytorch-lightning/pull/5124))


## [1.1.1] - 2020-12-15

### Added

- Add a notebook example to reach a quick baseline of ~94% accuracy on CIFAR10 using Resnet in Lightning ([#4818](https://github.com/PyTorchLightning/pytorch-lightning/pull/4818))

### Changed

- Simplify accelerator steps ([#5015](https://github.com/PyTorchLightning/pytorch-lightning/pull/5015))
- Refactor load in checkpoint connector ([#4593](https://github.com/PyTorchLightning/pytorch-lightning/pull/4593))

### Removed

- Drop duplicate metrics ([#5014](https://github.com/PyTorchLightning/pytorch-lightning/pull/5014))
- Remove beta arg from F1 class and functional ([#5076](https://github.com/PyTorchLightning/pytorch-lightning/pull/5076))

### Fixed

- Fixed trainer by default `None` in `DDPAccelerator` ([#4915](https://github.com/PyTorchLightning/pytorch-lightning/pull/4915))
- Fixed `LightningOptimizer` to expose optimizer attributes ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))
- Do not warn when the `name` key is used in the `lr_scheduler` dict ([#5057](https://github.com/PyTorchLightning/pytorch-lightning/pull/5057))
- Check if optimizer supports closure ([#4981](https://github.com/PyTorchLightning/pytorch-lightning/pull/4981))
- Extend LightningOptimizer to exposure underlying Optimizer attributes + update doc ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))
- Add deprecated metric utility functions back to functional (
[#5067](https://github.com/PyTorchLightning/pytorch-lightning/pull/5067),
[#5068](https://github.com/PyTorchLightning/pytorch-lightning/pull/5068))
- Allow any input in `to_onnx` and `to_torchscript` ([#4378](https://github.com/PyTorchLightning/pytorch-lightning/pull/4378))
- Do not warn when the name key is used in the `lr_scheduler` dict ([#5057](https://github.com/PyTorchLightning/pytorch-lightning/pull/5057))


## [1.1.0] - 2020-12-09

@@ -65,8 +123,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added logging using `self.log` in train and evaluation for epoch end hooks (
[#4552](https://github.com/PyTorchLightning/pytorch-lightning/pull/4552),
[#4495](https://github.com/PyTorchLightning/pytorch-lightning/pull/4495),
[#4439](https://github.com/PyTorchLightning/pytorch-lightning/pull/4439))
[#4684](https://github.com/PyTorchLightning/pytorch-lightning/pull/4684))
[#4439](https://github.com/PyTorchLightning/pytorch-lightning/pull/4439),
[#4684](https://github.com/PyTorchLightning/pytorch-lightning/pull/4684),
[#4913](https://github.com/PyTorchLightning/pytorch-lightning/pull/4913))
- Added ability for DDP plugin to modify optimizer state saving ([#4675](https://github.com/PyTorchLightning/pytorch-lightning/pull/4675))
- Added casting to python types for numpy scalars when logging hparams ([#4647](https://github.com/PyTorchLightning/pytorch-lightning/pull/4647))
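The 1.1.2 entries above are terse, so here is a small sketch, not part of this PR's diff, of the most user-facing changes: logging a plain Python number with `sync_dist=True` (#5080), the `ModelCheckpoint` filename versioning fix (#4861), and the `enable_pl_optimizer` default change (#5163). It assumes pytorch-lightning 1.1.2; the module, metric name, and checkpoint paths are illustrative.

```python
# Illustrative sketch only, not part of this PR's diff; assumes pytorch-lightning==1.1.2.
# The module, metric name, and checkpoint paths are made up for the example.
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        # #5080: a plain Python number (not only a tensor) can be logged with
        # sync_dist=True, so the value is reduced across processes under DDP.
        self.log("train_loss", loss.item(), sync_dist=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


# #4861: if "best.ckpt" already exists in dirpath, the next checkpoint is saved
# under a versioned name (e.g. "best-v0.ckpt") instead of reusing the filename.
checkpoint_cb = ModelCheckpoint(dirpath="checkpoints", filename="best")
# #5163: enable_pl_optimizer now defaults to False; passed here only for clarity.
trainer = pl.Trainer(max_epochs=1, callbacks=[checkpoint_cb], enable_pl_optimizer=False)
```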
6 changes: 5 additions & 1 deletion Makefile
@@ -1,4 +1,4 @@
.PHONY: test
.PHONY: test clean

test:
# install APEX, see https://github.com/NVIDIA/apex#linux
@@ -13,3 +13,7 @@ test:

# specific file
# python -m coverage run --source pytorch_lightning -m py.test --flake8 --durations=0 -v -k

clean:
# clean all temp runs
rm -rf $(shell find . -name "mlruns" )
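The new `clean` target simply shells out to `find` and `rm`. For reference, a rough equivalent in Python, illustrative only and not part of the PR, could look like this:

```python
# Rough, illustrative equivalent of `make clean`: remove every "mlruns"
# directory under the current working tree. Not part of the PR.
import pathlib
import shutil

for mlruns_dir in pathlib.Path(".").rglob("mlruns"):
    if mlruns_dir.is_dir():
        shutil.rmtree(mlruns_dir)
```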
18 changes: 5 additions & 13 deletions README.md
@@ -42,6 +42,11 @@ Scale your models, not the boilerplate.**

---

## NEWS
[Dec 2020 - Read about how Facebook uses Lightning to standardize deep learning across research and production teams](https://ai.facebook.com/blog/reengineering-facebook-ais-deep-learning-platforms-for-interoperability)

---

## PyTorch Lightning is just organized PyTorch
Lightning disentangles PyTorch code to decouple the science from the engineering.
![PT to PL](docs/source/_images/general/pl_quick_start_full_compressed.gif)
@@ -73,19 +78,6 @@ Lightning can automatically export to ONNX or TorchScript for those cases.

---

## Trending contributors

[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/0)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/1)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/2)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/3)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/4)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/5)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/6)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6)
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/7)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7)

---

## Continuous Integration
<center>

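The hunk header above quotes the README line about ONNX and TorchScript export. A self-contained sketch of what that looks like in user code follows; the tiny model and output file names are illustrative, not from the repository.

```python
# Self-contained sketch of the ONNX / TorchScript export mentioned in the README
# hunk above; the tiny model and output file names are illustrative.
import torch
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        # example_input_array lets to_onnx() trace the graph without an explicit sample
        self.example_input_array = torch.randn(1, 32)

    def forward(self, x):
        return self.layer(x)


model = TinyModel()

# TorchScript export (scripted module, no example input required)
torch.jit.save(model.to_torchscript(), "tiny_model.pt")

# ONNX export (uses example_input_array when no input sample is passed)
model.to_onnx("tiny_model.onnx", export_params=True)
```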
17 changes: 17 additions & 0 deletions benchmarks/__init__.py
@@ -0,0 +1,17 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

BENCHMARK_ROOT = os.path.dirname(__file__)
PROJECT_ROOT = os.path.dirname(BENCHMARK_ROOT)
60 changes: 60 additions & 0 deletions benchmarks/generate_comparison.py
@@ -0,0 +1,60 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

import matplotlib.pylab as plt
import pandas as pd

from benchmarks.test_basic_parity import lightning_loop, vanilla_loop
from tests.base.models import ParityModuleMNIST, ParityModuleRNN

NUM_EPOCHS = 20
NUM_RUNS = 50
MODEL_CLASSES = (ParityModuleRNN, ParityModuleMNIST)
PATH_HERE = os.path.dirname(__file__)
FIGURE_EXTENSION = '.png'


def _main():
    fig, axarr = plt.subplots(nrows=len(MODEL_CLASSES))

    for i, cls_model in enumerate(MODEL_CLASSES):
        path_csv = os.path.join(PATH_HERE, f'dump-times_{cls_model.__name__}.csv')
        if os.path.isfile(path_csv):
            df_time = pd.read_csv(path_csv, index_col=0)
        else:
            vanilla = vanilla_loop(cls_model, num_epochs=NUM_EPOCHS, num_runs=NUM_RUNS)
            lightning = lightning_loop(cls_model, num_epochs=NUM_EPOCHS, num_runs=NUM_RUNS)

            df_time = pd.DataFrame({'vanilla PT': vanilla['durations'][1:], 'PT Lightning': lightning['durations'][1:]})
            df_time /= NUM_RUNS
            df_time.to_csv(os.path.join(PATH_HERE, f'dump-times_{cls_model.__name__}.csv'))
        # todo: add also relative X-axis ticks to see both: relative and absolute time differences
        df_time.plot.hist(
            ax=axarr[i],
            bins=20,
            alpha=0.5,
            title=cls_model.__name__,
            legend=True,
            grid=True,
        )
        axarr[i].set(xlabel='time [seconds]')

    path_fig = os.path.join(PATH_HERE, f'figure-parity-times{FIGURE_EXTENSION}')
    fig.tight_layout()
    fig.savefig(path_fig)


if __name__ == '__main__':
    _main()
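A possible way to drive the new comparison script and inspect its output, assuming it is run from the repository root so that the `benchmarks` and `tests` packages are importable; this usage is hypothetical and not part of the PR.

```python
# Hypothetical driver for the script above; not part of the PR. Run from the
# repository root so the `benchmarks` and `tests` packages are importable.
# Note: _main() trains NUM_RUNS x NUM_EPOCHS runs per model, so it is slow.
import os

import pandas as pd

from benchmarks.generate_comparison import PATH_HERE, _main
from tests.base.models import ParityModuleRNN

_main()  # writes dump-times_<Model>.csv and figure-parity-times.png next to the script

timings = pd.read_csv(
    os.path.join(PATH_HERE, f'dump-times_{ParityModuleRNN.__name__}.csv'),
    index_col=0,
)
print(timings.describe())  # per-run durations: vanilla PyTorch vs PyTorch Lightning
```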