
Conversation

@minmingzhu
Contributor

No description provided.

minmingzhu and others added 30 commits April 28, 2024 13:49
Signed-off-by: minmingzhu <[email protected]>
2. modify chat template

Signed-off-by: minmingzhu <[email protected]>
minmingzhu and others added 17 commits May 6, 2024 10:37
2. add unit test

Signed-off-by: minmingzhu <[email protected]>
* update

* fix blocking

* update

Signed-off-by: Wu, Xiaochang <[email protected]>

* update

Signed-off-by: Wu, Xiaochang <[email protected]>

* fix setup and getting started

Signed-off-by: Wu, Xiaochang <[email protected]>

* update

Signed-off-by: Wu, Xiaochang <[email protected]>

* update

Signed-off-by: Wu, Xiaochang <[email protected]>

* nit

Signed-off-by: Wu, Xiaochang <[email protected]>

* Add dependencies for tests and update pyproject.toml

Signed-off-by: Wu, Xiaochang <[email protected]>

* Update dependencies and test workflow

Signed-off-by: Wu, Xiaochang <[email protected]>

* Update dependencies and fix torch_dist.py

Signed-off-by: Wu, Xiaochang <[email protected]>

* Update OpenAI SDK installation and start ray cluster

Signed-off-by: Wu, Xiaochang <[email protected]>

---------

Signed-off-by: Wu, Xiaochang <[email protected]>
* single test

* single test

* single test

* single test

* fix hang error
Signed-off-by: minmingzhu <[email protected]>
* use base model mpt-7b instead of mpt-7b-chat

Signed-off-by: minmingzhu <[email protected]>

* manual setting specify tokenizer

Signed-off-by: minmingzhu <[email protected]>

* update

Signed-off-by: minmingzhu <[email protected]>

* update doc/finetune_parameters.md

Signed-off-by: minmingzhu <[email protected]>

---------

Signed-off-by: minmingzhu <[email protected]>
  deepspeed: false
  workers_per_group: 2
- device: cpu
+ device: "cpu"

@xwu-intel May 10, 2024


There is no need to add extra quotes in YAML. Does this part need to be touched for your PR?

  deepspeed: false
  workers_per_group: 2
- device: cpu
+ device: CPU


Pay attention to using a lowercase device name for consistency.

  deepspeed: false
  workers_per_group: 2
- device: cpu
+ device: CPU


Why change the device name to uppercase?

|lora_config|task_type: CAUSAL_LM<br>r: 8<br>lora_alpha: 32<br>lora_dropout: 0.1|Passed to the LoraConfig `__init__()` method and used as the config to build the PEFT model object.|
|deltatuner_config|"algo": "lora"<br>"denas": True<br>"best_model_structure": "/path/to/best_structure_of_deltatuner_model"|Passed to the DeltaTunerArguments `__init__()` method and used as the config to build the [Deltatuner model](https://github.com/intel/e2eAIOK/tree/main/e2eAIOK/deltatuner) object.|
|enable_gradient_checkpointing|False|Enable gradient checkpointing to save GPU memory, at the cost of additional compute time.|
|chat_template|None|User-defined chat template.|
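
For readers of this table, a minimal sketch (not the project's actual code) of how a `lora_config` entry like the one above could be forwarded to `LoraConfig.__init__()` and wrapped around a base model with PEFT; the base model name here is a stand-in.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Dict mirroring the lora_config row above; keys are forwarded to LoraConfig.__init__().
lora_config = {
    "task_type": "CAUSAL_LM",
    "r": 8,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
}

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model
peft_model = get_peft_model(base_model, LoraConfig(**lora_config))
peft_model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```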


Add a description and a link to the Hugging Face documentation; otherwise users will not know what it is.
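
To make the `chat_template` parameter concrete, here is a small, illustrative sketch of a user-defined chat template applied through the Hugging Face `apply_chat_template` API (see the chat templating guide at https://huggingface.co/docs/transformers/chat_templating); the template string and tokenizer are examples, not the defaults used by this PR.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer

# Illustrative Jinja template; a real template should match the target model's format.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "### {{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
    "### assistant: "
)

messages = [{"role": "user", "content": "Which is bigger, the moon or the sun?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # rendered prompt string that the model would be fed
```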

prompt = "Once upon a time,"
# prompt = "Once upon a time,"
prompt = [
{"role": "user", "content": "Which is bigger, the moon or the sun?"},


Don't modify this: api_server_simple/query_single.py is for the simple protocol, which is not formatted like this. Focus on the OpenAI API support; there is no need to support chat templates for the simple protocol if that requires changing the query format.
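
To illustrate the distinction, a rough sketch of the two request styles against a locally served cluster; the endpoint paths, the simple-protocol payload shape, and the model id are assumptions rather than the project's actual API.

```python
import requests
from openai import OpenAI

# Simple protocol: the prompt stays a plain string (endpoint and payload shape assumed).
resp = requests.post("http://localhost:8000/simple", json={"text": "Once upon a time,"})
print(resp.text)

# OpenAI-compatible protocol: chat messages; the server applies the chat template.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
chat = client.chat.completions.create(
    model="mpt-7b",  # assumed model id
    messages=[{"role": "user", "content": "Which is bigger, the moon or the sun?"}],
)
print(chat.choices[0].message.content)
```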

