16 changes: 16 additions & 0 deletions beginner_source/new-release-colab.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
.. _new-release_colab:

Notes for Running in Colab
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::
   This tutorial requires PyTorch 2.0.0 or later. If you are running this
   in Google Colab, verify that you have the required version of ``torch``
   and compatible domain libraries installed by running ``!pip list``.
   If the installed version of PyTorch is lower than required, uninstall
   and reinstall it by running the following commands:

.. code-block:: python

      !pip3 uninstall --yes torch torchaudio torchvision torchdata
      !pip3 install torch torchaudio torchvision torchdata
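Colab images often ship a ``torch`` build older than a fresh release, so the note above asks you to compare the installed version against the minimum. As an illustration of that comparison, here is a minimal pure-Python sketch; the ``meets_requirement`` helper is hypothetical, not part of any PyTorch or pip API:

```python
def meets_requirement(installed: str, required: str = "2.0.0") -> bool:
    """Return True if a dotted version string meets the required minimum.

    Local build suffixes such as '+cu118' are stripped before comparing,
    and each dotted component is compared numerically.
    """
    def as_tuple(version: str) -> tuple:
        return tuple(int(part) for part in version.split("+")[0].split("."))
    return as_tuple(installed) >= as_tuple(required)

# In Colab you would pass torch.__version__; plain strings shown for clarity.
print(meets_requirement("2.0.1"))        # True: 2.0.1 satisfies >= 2.0.0
print(meets_requirement("1.13.1"))       # False: reinstall is needed
print(meets_requirement("2.0.0+cu118"))  # True: build suffix is ignored
```

String comparison alone would get this wrong (``"1.13" > "1.2"`` is False lexicographically), which is why the sketch compares integer tuples.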
3 changes: 3 additions & 0 deletions intermediate_source/ensembling.py
Original file line number Diff line number Diff line change
Expand Up @@ -16,6 +16,9 @@
for-loops and speeding them up through vectorization.

Let's demonstrate how to do this using an ensemble of simple MLPs.

.. include:: ../beginner_source/new-release-colab.rst

"""

import torch
Expand Down
2 changes: 2 additions & 0 deletions intermediate_source/jacobians_hessians.py
Original file line number Diff line number Diff line change
Expand Up @@ -12,6 +12,8 @@
provides ways of computing various higher-order autodiff quantities
efficiently.

.. include:: ../beginner_source/new-release-colab.rst

Computing the Jacobian
----------------------
"""
Expand Down
2 changes: 2 additions & 0 deletions intermediate_source/neural_tangent_kernels.py
Original file line number Diff line number Diff line change
Expand Up @@ -11,6 +11,8 @@
demonstrates how to easily compute this quantity using ``torch.func``,
composable function transforms for PyTorch.

.. include:: ../beginner_source/new-release-colab.rst

Setup
-----

Expand Down
3 changes: 3 additions & 0 deletions intermediate_source/per_sample_grads.py
Original file line number Diff line number Diff line change
Expand Up @@ -9,6 +9,9 @@
Per-sample-gradient computation is computing the gradient for each and every
sample in a batch of data. It is a useful quantity in differential privacy,
meta-learning, and optimization research.

.. include:: ../beginner_source/new-release-colab.rst

"""

import torch
Expand Down
3 changes: 3 additions & 0 deletions intermediate_source/torch_compile_tutorial.py
Original file line number Diff line number Diff line change
Expand Up @@ -35,6 +35,9 @@
# - ``tabulate``
#
# Note: a modern NVIDIA GPU (Volta or Ampere) is recommended for this tutorial.
#
# .. include:: ../beginner_source/new-release-colab.rst
#

######################################################################
# Basic Usage
Expand Down