Commit c62d2b1

Author: Svetlana Karslioglu
Add a note on running in colab (#2254)

1 parent f4862ee commit c62d2b1

6 files changed: +29 -0 lines changed

beginner_source/new-release-colab.rst

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+.. _new-release_colab:
+
+Notes for Running in Colab
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. note::
+
+   This tutorial requires PyTorch 2.0.0 or later. If you are running this
+   in Google Colab, verify that you have the required ``torch`` and
+   compatible domain libraries installed by running ``!pip list``.
+   If the installed version of PyTorch is lower than required,
+   uninstall it and reinstall it by running the following commands:
+
+   .. code-block:: python
+
+      !pip3 uninstall --yes torch torchaudio torchvision torchdata
+      !pip3 install torch torchaudio torchvision torchdata
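The shell commands above reinstall unconditionally; the version check the note describes ("lower than required") could also be sketched programmatically. This is an assumption-laden sketch, not part of the commit: the helper name ``needs_reinstall`` is hypothetical, and the tuple comparison deliberately ignores pre-release suffixes.

```python
def needs_reinstall(installed: str, required: str = "2.0.0") -> bool:
    """Return True when the installed version is older than the required one."""
    def parse(v: str):
        # Keep only the leading numeric release segment,
        # e.g. "2.0.0+cu118" -> (2, 0, 0).
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parse(installed) < parse(required)

# In Colab one would pass torch.__version__ here.
print(needs_reinstall("1.13.1"))       # older than 2.0.0 -> True
print(needs_reinstall("2.0.0+cu118"))  # meets the requirement -> False
```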

intermediate_source/ensembling.py

Lines changed: 3 additions & 0 deletions
@@ -16,6 +16,9 @@
 for-loops and speeding them up through vectorization.

 Let's demonstrate how to do this using an ensemble of simple MLPs.
+
+.. include:: ../beginner_source/new-release-colab.rst
+
 """

 import torch

intermediate_source/jacobians_hessians.py

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@
 provides ways of computing various higher-order autodiff quantities
 efficiently.

+.. include:: ../beginner_source/new-release-colab.rst
+
 Computing the Jacobian
 ----------------------
 """

intermediate_source/neural_tangent_kernels.py

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@
 demonstrates how to easily compute this quantity using ``torch.func``,
 composable function transforms for PyTorch.

+.. include:: ../beginner_source/new-release-colab.rst
+
 Setup
 -----

intermediate_source/per_sample_grads.py

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@
 Per-sample-gradient computation is computing the gradient for each and every
 sample in a batch of data. It is a useful quantity in differential privacy,
 meta-learning, and optimization research.
+
+.. include:: ../beginner_source/new-release-colab.rst
+
 """

 import torch

intermediate_source/torch_compile_tutorial.py

Lines changed: 3 additions & 0 deletions
@@ -35,6 +35,9 @@
 # - ``tabulate``
 #
 # Note: a modern NVIDIA GPU (Volta or Ampere) is recommended for this tutorial.
+#
+# .. include:: ../beginner_source/new-release-colab.rst
+#

 ######################################################################
 # Basic Usage
