
Conversation

@xuechendi
Contributor

Fix relative path import.

Error log before fixing:
[screenshot of the import error omitted]

After adding the complete path, I am able to run serving.
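A hypothetical illustration of the kind of change (the actual diff may differ): replacing a bare module import in serve.py with one rooted at the repository's inference package.

```python
# Before (hypothetical): only resolves when serve.py's own directory is on sys.path.
# from inference_config import InferenceConfig

# After (hypothetical): import via the complete path rooted at the repo checkout.
from inference.inference_config import InferenceConfig
```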

1. Install steps:

conda create -n llm-on-ray python=3.9
conda activate llm-on-ray
pip install .[cpu,ui] -f https://developer.intel.com/ipex-whl-stable-cpu -f https://download.pytorch.org/whl/torch_stable.html
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh

# Ray Started

2. Test step:

python inference/serve.py --config_file inference/models/llama-2-7b-chat-hf.yaml

@xuechendi
Contributor Author

@carsonwang @KepingYan, please help review this.

@xwu-intel

@KepingYan

From https://docs.python.org/3/library/sys.html#sys.path

python script.py command line: prepend the script’s directory. If it’s a symbolic link, resolve symbolic links.

We should test and make sure it works in both of the following cases, and the script should be able to run from any path (see the sketch after this list):

  1. The user installed the whole package and runs serve.py.
  2. The user only installed the dependencies and runs serve.py directly.
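A minimal sketch of how serve.py could support both cases, assuming (hypothetically) that serve.py lives in the repository's inference/ directory and imports inference.inference_config; this is illustrative only, not the repository's actual code:

```python
# Hypothetical sketch: make serve.py importable whether the package is installed
# (case 1) or the script is run directly from a clone (case 2).
import os
import sys

try:
    # Case 1: the package is installed, so the absolute import resolves as-is.
    from inference.inference_config import InferenceConfig
except ImportError:
    # Case 2: only the dependencies are installed; prepend the repository root
    # so `inference` is found no matter which directory serve.py is run from.
    repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    sys.path.insert(0, repo_root)
    from inference.inference_config import InferenceConfig
```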

@xuechendi
Contributor Author

> @KepingYan
>
> From https://docs.python.org/3/library/sys.html#sys.path
>
> python script.py command line: prepend the script’s directory. If it’s a symbolic link, resolve symbolic links.
>
> We should test and make sure it works in both of the following cases, and the script should be able to run from any path:
>
>   1. The user installed the whole package and runs serve.py.
>   2. The user only installed the dependencies and runs serve.py directly.

Yeah, in the end the README should give users a smooth experience running a 'hello world' demo.
If adding the cloned llm-on-ray directory to the Python path is a must, that needs to be documented in the README as well.
Otherwise, providing the full module path in imports will help avoid package-resolution issues.

@xwu-intel

xwu-intel commented Feb 2, 2024

We also need to consider the distributed nature of Ray: all workers should be able to find the package (this may be the cause of the issue).

Following common industry practice, I would suggest moving inference/finetune into a llm_on_ray package namespace and using absolute module paths for imports; this way, ui/inference/finetune can cross-import as well.

For users, serving can be launched as python -m llm_on_ray.inference.serve ...
For developers, pip install -e installs the package in-place for debugging.
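For illustration, with everything under the llm_on_ray namespace the UI code could import inference code with absolute module paths; the UI-side function below is a hypothetical sketch, while the two imports follow the module paths mentioned in this thread:

```python
# Hypothetical cross-import sketch under the proposed llm_on_ray namespace.
from llm_on_ray.inference.inference_config import InferenceConfig  # shared config type
from llm_on_ray.inference import serve  # serving entry point, runnable via `python -m`

def launch_from_ui(config: InferenceConfig) -> None:
    # A UI module can reuse inference code directly through absolute imports,
    # with no sys.path manipulation needed once the package is installed.
    ...
```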

@xuechendi We will discuss locally and let you know.

@xwu-intel

xwu-intel commented Feb 4, 2024

@xuechendi We decided to use package paths for imports (moving inference/finetune/... into a llm_on_ray package namespace) to avoid name conflicts, e.g.
from llm_on_ray.inference.inference_config import InferenceConfig

And to run it as a module:
python -m llm_on_ray.inference.serve

We also need to install serve.py and finetune.py as bin commands (llm_on_ray-serve & llm_on_ray-finetune) so users can run them directly from the command line.
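A minimal sketch of how the bin commands could be wired up with setuptools console-script entry points, assuming (hypothetically) that the serve and finetune modules each expose a main() function; the project's actual packaging may differ:

```python
# setup.py (sketch): register console scripts so `llm_on_ray-serve` and
# `llm_on_ray-finetune` become commands after `pip install .` or `pip install -e .`.
from setuptools import setup, find_packages

setup(
    name="llm-on-ray",
    packages=find_packages(),  # picks up the llm_on_ray package namespace
    entry_points={
        "console_scripts": [
            # command = module.path:function (main() is assumed to exist)
            "llm_on_ray-serve=llm_on_ray.inference.serve:main",
            "llm_on_ray-finetune=llm_on_ray.finetune.finetune:main",
        ],
    },
)
```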

@KepingYan could you fix this in #79 as discussed?

@xuechendi
Contributor Author

@xwu99, that sounds great! I also found it quite unusual that after installing llm-on-ray, the package name is not "llm-on-ray" but "inference". Great to know that you are going to fix the package name. :)

Please feel free to close this PR once you have merged the other one. I would be glad to test the new installation then.

@xuechendi force-pushed the pip_install_path branch 2 times, most recently from cef8a73 to e47113b on February 15, 2024 20:10
@xwu-intel

fixed by #106

@xwu-intel closed this on Mar 7, 2024
