This repository was archived by the owner on Mar 21, 2024. It is now read-only.

Commit a4c82df

ktakeda1 and fepegar authored
Updated README.md (#680)

Add acknowledgements section. Co-authored-by: Fernando Pérez-García <[email protected]>

1 parent e984554 commit a4c82df

File tree

1 file changed: +31 −24 lines changed

README.md

Lines changed: 31 additions & 24 deletions
@@ -4,18 +4,19 @@

## Overview

This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
It integrates seamlessly with cloud computing in Azure.

On the modelling side, this toolbox supports

- Segmentation models
- Classification and regression models
- Sequence models
- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)
- Active label cleaning and noise robust learning toolbox (stand-alone folder)

Classification, regression, and sequence models can be built with only images as inputs, or a combination of images
and non-imaging data as input. This supports typical use cases on medical data where measurements, biomarkers,
or patient characteristics are often available in addition to images.

On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
@@ -26,8 +27,8 @@ the code. Tags are added to the experiments automatically, that can later help f

- **Transparency**: All team members have access to each other's experiments and results.
- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
  sources of randomness like multithreading are controlled for.
- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
  training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
  nodes can be used to further reduce costs (up to 80% cheaper).
- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
@@ -36,22 +37,22 @@ model prototyping, debugging, and in cases where the cloud can't be used. In par

machines available, you will be able to utilize them with the InnerEye toolbox.

In addition, our toolbox supports:

- Cross-validation using AzureML's built-in support, where the models for
  individual folds are trained in parallel. This is particularly important for the long-running training jobs
  often seen with medical images.
- Hyperparameter tuning using
  [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
- Building ensemble models.
- Easy creation of new models via a configuration-based approach, and inheritance from an existing
  architecture.
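The configuration-based approach can be sketched roughly as follows; the class and field names below are illustrative stand-ins, not the toolbox's actual API:

```python
# Hypothetical sketch: a new model is a small config class that inherits
# sensible defaults from an existing architecture and overrides a few fields.
# All names here (SegmentationConfigBase, ProstateConfig, ...) are made up
# for illustration and do not match the real InnerEye classes.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SegmentationConfigBase:
    """Defaults shared by all segmentation models."""
    ground_truth_ids: List[str] = field(default_factory=list)
    num_epochs: int = 100
    crop_size: Tuple[int, int, int] = (64, 192, 192)


@dataclass
class ProstateConfig(SegmentationConfigBase):
    """A 'new model' is mostly a handful of overridden defaults."""
    ground_truth_ids: List[str] = field(default_factory=lambda: ["prostate"])
    num_epochs: int = 120


config = ProstateConfig()
print(config.num_epochs, config.crop_size)  # 120 (64, 192, 192)
```

The appeal of this pattern is that adding a model touches no training code: the trainer reads whatever config instance it is given, and inherited defaults keep new configs short.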
Once training in AzureML is done, the models can be deployed from within AzureML or via
[Azure Stack Hub](https://azure.microsoft.com/en-us/products/azure-stack/hub/).

## Getting started

We recommend using our toolbox with Linux or with the Windows Subsystem for Linux (WSL2). Much of the core
functionality works fine on Windows, but PyTorch's full feature set is only available on Linux. Read [more about
WSL here](docs/WSL.md).
@@ -63,17 +64,17 @@

```shell script
git lfs install
git lfs pull
```

After that, you need to set up your Python environment:

- Install `conda` or `miniconda` for your operating system.
- Create a Conda environment from the `environment.yml` file in the repository root, and activate it:

```shell script
conda env create --file environment.yml
conda activate InnerEye
```

- If environment creation fails with odd error messages on a Windows machine, please [continue here](docs/WSL.md).

Now try to run the HelloWorld segmentation model - that's a very simple model that will train for 2 epochs on any
machine, no GPU required. You need to set the `PYTHONPATH` environment variable to point to the repository root first.
Assuming that your current directory is the repository root folder, on Linux `bash` that is:

```shell script
export PYTHONPATH=`pwd`
python InnerEye/ML/runner.py --model=HelloWorld
```
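Why the `PYTHONPATH` step matters: Python appends the directories listed in `PYTHONPATH` to `sys.path`, which is how `runner.py` can import the `InnerEye` package from the repository root. A small self-contained check (the path used here is a placeholder, not a real directory):

```python
# Demonstrate that directories listed in PYTHONPATH end up on sys.path of a
# child interpreter; "/fake/repo/root" is a placeholder, not a real path.
import os
import subprocess
import sys

child_code = "import sys; print('/fake/repo/root' in sys.path)"
env = dict(os.environ, PYTHONPATH="/fake/repo/root")
result = subprocess.run(
    [sys.executable, "-c", child_code],
    env=env, capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # True
```

The same mechanism explains why `export PYTHONPATH=`pwd`` must run from the repository root: the exported directory is what makes `import InnerEye` resolvable.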
@@ -88,7 +89,7 @@ python InnerEye/ML/runner.py --model=HelloWorld

If that works: Congratulations! You have successfully built your first model using the InnerEye toolbox.

If it fails, please check the
[troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).

Further detailed instructions, including setup in Azure, are here:
@@ -100,7 +101,7 @@ Further detailed instructions, including setup in Azure, are here:

1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
1. [Model diagnostics](docs/model_diagnostics.md)
1. [Move a model to a different workspace](docs/move_model.md)
1. [Working with FastMRI models](docs/fastmri.md)
1. [Active label cleaning and noise robust learning toolbox](InnerEye-DataQuality/README.md)
@@ -132,12 +133,18 @@ Details can be found [here](docs/deploy_on_aml.md).

**You are responsible for the performance, the necessary testing, and if needed any regulatory clearance for
any of the models produced by this toolbox.**

## Acknowledging usage of Project InnerEye OSS tools

When using Project InnerEye open-source software (OSS) tools, please acknowledge with the following wording:

> This project used Microsoft Research's Project InnerEye open-source software tools ([https://aka.ms/InnerEyeOSS](https://aka.ms/InnerEyeOSS)).

## Contact

If you have any feature requests, or find issues in the code, please create an
[issue on GitHub](https://github.com/microsoft/InnerEye-DeepLearning/issues).

Please send an email to [email protected] if you would like further information about this project.

## Publications

@@ -164,12 +171,12 @@ contact [[email protected]](mailto:[email protected]) with any additio

## Credits

This toolbox is maintained by the
[Microsoft InnerEye team](https://www.microsoft.com/en-us/research/project/medical-image-analysis/),
and has received valuable contributions from a number
of people outside our team. We would like to thank in particular our interns,
[Yao Qin](http://cseweb.ucsd.edu/~yaq007/), [Zoe Landgraf](https://www.linkedin.com/in/zoe-landgraf-a2212293),
[Padmaja Jonnalagedda](https://www.linkedin.com/in/jspadmaja/),
[Mathias Perslev](https://github.com/perslev), as well as the AI Residents
[Patricia Gillespie](https://www.microsoft.com/en-us/research/people/t-pagill/) and
[Guilherme Ilunga](https://gilunga.github.io/).
