Connect with docker session in examples #222
Conversation
Force-pushed from d6fdd1e to 2292b55
concurrency:
  group: ${{ github.ref }}
  cancel-in-progress: true
This will automatically cancel any in-progress run when a new commit is pushed (https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-using-concurrency-to-cancel-any-in-progress-job-or-run).
Force-pushed from 5a282ba to 7e9e52c
Force-pushed from 69a0f79 to 4178c53
from ansys.fluent.core import examples

if "PYFLUENT_RUN_EXAMPLES_WITH_DOCKER" in os.environ:
    examples.download_file("mixing_elbow.pmdb", "pyfluent/mixing_elbow")
I guess in principle you could always download the files, right?
These examples should be ready for public consumption and not contain any cryptic environment variables.
We need to find a way for the doc build script to launch a docker container before calling each example script. Within the example script we will only connect to the running server.
Or we can modify the launch_fluent command to first try to launch the Fluent docker container and connect to it. If the docker command/image is not found, or if forced by an env var, it will fall back to the local Fluent installation.
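For illustration, a minimal sketch of that fallback idea. The image name and the env-var switch are hypothetical, and the port and license-server variable are taken from the CI scripts in this PR; pyfluent's actual launch logic may differ.

```python
import os
import shutil
import subprocess

import ansys.fluent.core as pyfluent

FLUENT_IMAGE = "ghcr.io/pyansys/pyfluent"  # hypothetical image name
FLUENT_PORT = 63084                        # port used by the CI scripts in this PR


def launch_fluent_docker_or_local():
    """Prefer a Fluent docker container; fall back to a local installation."""
    force_local = "PYFLUENT_USE_LOCAL_FLUENT" in os.environ  # hypothetical switch
    if shutil.which("docker") and not force_local:
        # Start the container detached and publish the gRPC port.
        subprocess.run(
            [
                "docker", "run", "-d", "--rm",
                "-p", f"{FLUENT_PORT}:{FLUENT_PORT}",
                "-e", f"ANSYSLMD_LICENSE_FILE={os.environ['ANSYSLMD_LICENSE_FILE']}",
                FLUENT_IMAGE,
            ],
            check=True,
        )
        # Connect to the already-running server instead of starting Fluent.
        return pyfluent.launch_fluent(start_instance=False, port=FLUENT_PORT)
    # Docker not found (or overridden): launch the local Fluent installation.
    return pyfluent.launch_fluent()
```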
@mkundu1 Regarding your first sentence, you can just add the docker calls to the nightly doc build CI:
https://github.com/pyansys/pyfluent/blob/main/.github/workflows/nightly-doc-build.yml
and to the regular PR-based CI.
Yes, you would have to modify launch_fluent to connect to it; that makes sense. I've not stared at launch_mapdl(), but it must already be doing this, right?
Regarding the file dependencies, I think the point of the examples repo is to not have the files in the Fluent repo at all, so we should be removing them.
I think for launch_fluent it's mostly there. Won't this work once the docker container is spun up: launch_fluent(start_instance=False)? The container should be running locally, and you just need to spin it up on a default port number, which should be open.
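In other words, roughly the following, assuming the container is already listening on the published port (63084 in the CI scripts of this PR); the IP and port could also come from environment variables such as PYFLUENT_FLUENT_PORT, which the CI script below reads.

```python
import ansys.fluent.core as pyfluent

# Connect to the Fluent server already running inside the docker container;
# no local Fluent installation is started.
session = pyfluent.launch_fluent(start_instance=False, ip="127.0.0.1", port=63084)
```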
examples.download_file("mixing_elbow.scdoc", "pyfluent/mixing_elbow")

license_server = os.getenv("LICENSE_SERVER", "leblnxlic64.ansys.com")
subprocess.run(["docker", "run", "-d", "--rm", "-p", "63084:63084",
@mkundu1 Note that in the PyMAPDL CI pipeline they spin up docker as part of the ci.yml for the nightly doc builds:
For the CI on a pull request they use this caching approach instead:
I think what this is doing, then, is updating the examples once a day; every other time the CI executes, it uses the cached copy.
PyMAPDL also runs the script .ci/start_mapdl.sh, which spins up the docker container as part of unit testing and the doc build. Their docs seem to be deployed nightly or when a tag is pushed (the 'Deploy' step).
OK, I missed that shell script. I had not noticed that the CI scripts are using it anywhere. Let me see.
Yes, dev.fluentdocs.pyansys.com is updated nightly just like theirs; that's what our nightly CI yml is doing.
The released doc at fluentdocs.pyansys.com is updated when we tag; that update is handled in our ci.yml. We've not tagged yet, so that has never run.
The script seems to be used for the tests, but it looks to me like it contains commands identical to those in their nightly CI script for the doc build. I'm not sure why they do this in two places; one guess is that it makes it easier to debug doc-build issues locally (i.e. you can't run a ci.yml locally...).
License server should not be specified in these examples either.
What is the reason for needing to spin up a new container for each example? I don't understand; that sounds a bit unusable.
@dnwillia-work In standalone Fluent regression testing, we generally don't reuse the Fluent session between tests. The Fluent docker container starts together with the Fluent session, so there we should use a separate container for each test (or for each example script during the doc build). Otherwise there can be issues: for example, if the first script switches the container to solution mode, the next script cannot easily start from meshing mode.
I think this is a normal usage scenario for containers - we can launch several of them, treating each as an isolated process.
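A rough sketch of that per-example isolation, assuming a hypothetical image name; each example gets its own container name and published port so the sessions stay isolated and do not collide.

```python
import os
import subprocess

FLUENT_IMAGE = "ghcr.io/pyansys/pyfluent"  # hypothetical image name


def start_container(example_name: str, port: int) -> str:
    """Start one throwaway Fluent container for a single example script."""
    name = f"fluent_{example_name}"
    subprocess.run(
        [
            "docker", "run", "--name", name, "-d", "--rm",
            "-p", f"{port}:{port}",
            "-e", f"ANSYSLMD_LICENSE_FILE={os.environ['ANSYSLMD_LICENSE_FILE']}",
            FLUENT_IMAGE,
        ],
        check=True,
    )
    return name


def stop_container(name: str) -> None:
    """Stop the container; --rm removes it automatically once stopped."""
    subprocess.run(["docker", "stop", name], check=True)
```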
That's a non-solution. You should be able to reset the container state. If you run 100s of unit tests and 100s of examples you'll spend the entire day restarting the service.
Currently we don't have an exposed way to reset the Fluent session. We need to look into adding an API for that.
Yeah, I agree with Alex, so we would need that. Let's finish this PR for now, so that we have an example up and running, and then do that.
Force-pushed from 04474ee to 1d95874
The doc build is now successful and the Examples section is created in the doc artifact.
license_server = os.environ["ANSYSLMD_LICENSE_FILE"]
port = os.environ["PYFLUENT_FLUENT_PORT"]

subprocess.run(["docker", "run", "--name", "fluent_server", "-d", "--rm",
These scripts are specific to CI runs. There is an assumption that the host OS is Linux (-v assumes the same Unix path format for the container and the host OS).
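One possible way to soften that assumption (a sketch, not what the PR does): build the -v argument from the host path programmatically and keep the container-side path fixed, so only the host half varies between Windows and Linux runners. The image name and the mount target inside the container are assumptions.

```python
from pathlib import Path
import subprocess

FLUENT_IMAGE = "ghcr.io/pyansys/pyfluent"      # hypothetical image name
CONTAINER_WORKDIR = "/home/container/workdir"  # assumed path inside the container

host_dir = Path.cwd().resolve()  # valid on both Windows and Linux hosts

subprocess.run(
    [
        "docker", "run", "-d", "--rm",
        # Only the right-hand side of the mount must be a container (Linux) path;
        # the left-hand side is whatever path format the host OS uses.
        "-v", f"{host_dir}:{CONTAINER_WORKDIR}",
        "-w", CONTAINER_WORKDIR,
        FLUENT_IMAGE,
    ],
    check=True,
)
```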
Force-pushed from 01e3ec8 to 7d77bfe
)


def _start_or_stop_fluent_container(gallery_conf, fname, when):
This runs before and after each example script.
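For readers following along, a hedged sketch of what a hook with that signature might do, reusing the container name and port from the CI script quoted above; the hypothetical image name, the exact docker arguments, and the way it is registered with sphinx-gallery are not taken from this PR.

```python
import subprocess


def _start_or_stop_fluent_container(gallery_conf, fname, when):
    """Start a Fluent container before each example and stop it afterwards."""
    if when == "before":
        subprocess.run(
            [
                "docker", "run", "--name", "fluent_server", "-d", "--rm",
                "-p", "63084:63084",
                "ghcr.io/pyansys/pyfluent",  # hypothetical image name
            ],
            check=True,
        )
    elif when == "after":
        # --rm removes the container once it is stopped.
        subprocess.run(["docker", "stop", "fluent_server"], check=True)
```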
Force-pushed from 7d77bfe to 261877e
import_filename = examples.download_file(
    "mixing_elbow.pmdb", "pyfluent/mixing_elbow"
)
examples.download_file("mixing_elbow.scdoc", "pyfluent/mixing_elbow")
Why are you downloading this but only using the pmdb?
This is fine, but we have to come back to the reconnect issue.
Force-pushed from 261877e to 1c6e2ac
Force-pushed from 1c6e2ac to 910c600
…heat transfer in a mixing elbow
Force-pushed from 910c600 to 4ed9899