[Python] Basic dependency management with pyproject and uv #6
Conversation
I'm a simple man: I see red, I approve
@rolfmorel thanks for the PR! I'm good with the changes
Makes sense, thanks!
Some minor comments/questions inline.
pyproject.toml (outdated):

```toml
[project.optional-dependencies]
ingress_torch_cpu = [
    "torch==v2.8.0+cpu",
    "torch-mlir==v20251004.590"
```
Where does this version come from? I've tried here, but no luck:
Presumably that's from https://github.com/llvm/torch-mlir-release/releases/expanded_assets/dev-wheels? Who manages that?
Indeed, fair point! The https://github.com/llvm/torch-mlir-release repo does state this in its README:

> This repository houses release automation for torch-mlir. It is isolated from the main repository so that we can more strictly control people with write access.

As for "who manages that?" -- the contribution history (e.g. https://github.com/llvm/torch-mlir-release/graphs/contributors) shows @marbre, @sjain-stanford, @stellaraccident, and @sahas3. Maybe any of them can chime in regarding who's managing torch-mlir's releases (in particular from this dedicated repo).
> As for "who manages that?" -- the contribution history (e.g. https://github.com/llvm/torch-mlir-release/graphs/contributors) shows @marbre, @sjain-stanford, @stellaraccident, and @sahas3. Maybe any of them can chime in regarding who's managing torch-mlir's releases (in particular from this dedicated repo).

I contributed to the project as I had a need for wheels earlier this year (and those were basically broken), but I am not actively managing releases for torch-mlir.
Thanks for the quick response, @marbre.

Could you let us know who to reach out to about these packages, e.g. when they inevitably have issues? For example, the person who gave you access to the repo, or anybody else who is taking responsibility for this releases repo/torch-mlir's releases.
@sjarus, could you maybe comment on the infrastructure you have that makes use of torch-mlir, in particular regarding getting the torch-mlir dependency to be available? Do you just build it yourself?
I'm not an official maintainer of the `torch-mlir-release` repo either, but I have a vested interest in ensuring that the wheels are released properly from that repo, as we use a forked repo to build the Python wheels with a stable `torch` release for our work.

The main repo currently builds against a nightly `torch` version, which is pinned in https://github.com/llvm/torch-mlir/blob/main/pytorch-requirements.txt
pyproject.toml (outdated):

```toml
    "torch-mlir==v20251004.590"
]
ingress_torch = [
    "torch==v2.8.0", # Nvidia-enabled version of torch
```
[nit] IIUC, the `v` is not needed:

```suggestion
"torch==2.8.0", # Nvidia-enabled version of torch
```
Drive-by comment without looking deeply into what is done here, so excuse me if this seems noisy: you may want to consider the CPU-only version? In some of our projects we explicitly skipped setting a strict dependency on torch and leave it to the user to pick a suitable version (CPU-only, Nvidia-enabled, ROCm-enabled). Among other things, this has the benefit of being able to use the much smaller CPU-only packages in CI.
Yep, that's correct. The leading `v`s were extraneous. They're gone now 👍

Regarding CPU-only: there's no default here, only different optional alternatives made available through an optional package-management facilitator. Your point about the CPU-only version being lighter weight is well-noted, though. I have moved the note that there are other versions to a collapsed section in the README.
```toml
pytorch_triton_xpu = { index = "pytorch" }
pytorch_triton_rocm = { index = "pytorch" }
```
Why Triton? As in, what's the relevance here?
When using `uv`'s `explicit = true` flag, `uv`'s dependency constraint solver needs to be told where to find the transitive dependencies (of torch). I think another option is to remove the `explicit = true` flags below, though that has the effect that we don't know from which non-PyPI index/package repository a dependency is going to come (in effect, `explicit = true` makes it so that "dependencies that come from this repo need to explicitly state they will come from this repo").
README.md (outdated):

```shell
$ uv sync --extra ingress-torch-cpu # Optionally install the dependencies for torch ingress
```

For vendor-specific versions of `torch` use the targets `ingress-torch`, `ingress-torch-rocm` or `ingress-torch-xpu` for Nvidia, AMD, and Intel-enabled versions, respectively.
I have two points on "vendor specific" stuff:

- Nothing against Nvidia, but why would we use the default name, `ingress-torch`, for Nvidia? Shouldn't that be e.g. `ingress-torch-nvidia`?
- Will you be testing all the vendor-specific stuff? Making this logic "generic" is a worthy goal, but if it's not tested ...
- I took the naming to be reflective of 1) the directory structure, i.e. for the `ingress` subfolder, in which you might like 2) the dependencies associated to the Python packages with identifiers `torch`, or `torch+cpu` or `torch+rocm` (or actually, `torch+rocm6.4`) or `torch+xpu`, respectively (with the first supporting Nvidia by default -- I wonder why 🙃). I don't have a strong opinion on the naming of these targets (though they aren't allowed to contain "+"), so if you have an idea for a better scheme, do let me know!
- I think for now we will indeed not bother with testing with all the different vendor-specific dependencies (at least not until these vendors themselves contribute to the repo). This is an argument for not bothering with adding these different targets now. I mostly added them so that we would have feature parity with the two `install-virtualenv.sh` scripts that I am removing in this PR. If you and @rengolin are fine with just an `ingress-torch-cpu` target for now, I can push a new commit removing the other targets and leave the current commit, showing how to get `uv` to play nice with the others, in this PR's history so we could revisit later. Just let me know!
- I have now updated to the following pragmatic solution: the `torch` version for Nvidia now has a suffix `-nvidia`. As pointed out up above, the `-cpu` version is a lot lighter weight and hence will be the one the docs direct towards in the first instance (there's now a collapsed section explaining you can use a different target to obtain a different `torch`).
- With regards to these targets getting tested: in the spirit of inclusivity, let's keep the different targets and take the position that them getting tested/integrated in CI is a "patches welcome" type of situation.
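A sketch of what extras along these lines could look like in `pyproject.toml`; the exact extra names and version pins here are illustrative, taken from the snippets discussed earlier in this thread:

```toml
[project.optional-dependencies]
# Lightweight CPU-only variant; the one the docs direct towards first.
ingress-torch-cpu = [
    "torch==2.8.0+cpu",
    "torch-mlir==20251004.590",
]
# Nvidia-enabled variant, now carrying an explicit -nvidia suffix
# instead of being the unqualified default.
ingress-torch-nvidia = [
    "torch==2.8.0",
    "torch-mlir==20251004.590",
]
```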
Let me rephrase a bit, but please excuse that I am lacking context on the overall PR. What we, and others, do is to not introduce a dependency on torch via the package management system at all. Instead we have to let the users decide and make them responsible for picking an appropriate version.
Force-pushed from ac703fc to 3b18487.
Thanks for the review, @banach-space! I hope your comments are sufficiently addressed for us to go ahead and merge. Do let me know if you feel otherwise!
Yes, thanks for the replies, that clarified the context 🙏🏻 Feel free to land this, my other comments are non-blockers.
Sets us up to do a `uv sync` to set up the current Python environment with the correct dependencies. Note that this is just a facilitating mechanism for preparing the Python environment; users are still free to set up the environment via any means they like.

By default the only dependency installed is the Python bindings (from `llvm/eudsl`) -- that is, enabling `import mlir` in Python. This is the common dependency for generating and running pipelines and for ingress. Optionally one can pass (a variant of) `--extra ingress-torch` to `uv sync` to also get the `torch` and `torch-mlir` dependencies -- that is, enabling `import torch` and `import torch_mlir` in Python. Per the included docs, it is recommended to first set up a fresh virtualenv, e.g. via `uv venv`.

The `pyproject.toml` comes with `uv`-specific sections so as to enable specifying from which "indices", i.e. Python package repositories, each specific package is expected to come. This file pins particular nightlies/versions from these package repositories.