Neural Coder enable backend support for intel_extension_for_transformers #214
Merged
Conversation
Lfish99 approved these changes Dec 5, 2022
yuwei-work approved these changes Dec 5, 2022
Contributor (Author) commented: @WenjiaoYue could you help add the word PythonLauncher to pass the spelling-check CI? Thanks
Signed-off-by: Yue, Wenjiao <[email protected]>
PenghuiCheng pushed a commit that referenced this pull request Dec 6, 2022

…ers (#214)
* Create intel_extension_for_transformers.yaml
* change default strategy to dynamic according to huggingface sync
* change default strategy to dynamic according to HF sync
* enable intel extension for transformers
* Create change_trainer_to_nlptrainer.py
* add use_inc for not using default optimum for HF code
* add use_inc
* update optimum quant static dynamic separation
* Update interface.py
* Update interface.py
* Update autoinc_harness.py
* Update README.md
* add change_trainer_to_nlptrainer to outside_harness
* add PythonLauncher to pass spelling check CI

Signed-off-by: Yue, Wenjiao <[email protected]>
Signed-off-by: Yue, Wenjiao <[email protected]>
Co-authored-by: Yue, Wenjiao <[email protected]>
zehao-intel pushed a commit that referenced this pull request Dec 9, 2022

…ers (#214)
* Create intel_extension_for_transformers.yaml
* change default strategy to dynamic according to huggingface sync
* change default strategy to dynamic according to HF sync
* enable intel extension for transformers
* Create change_trainer_to_nlptrainer.py
* add use_inc for not using default optimum for HF code
* add use_inc
* update optimum quant static dynamic separation
* Update interface.py
* Update interface.py
* Update autoinc_harness.py
* Update README.md
* add change_trainer_to_nlptrainer to outside_harness
* add PythonLauncher to pass spelling check CI

Signed-off-by: Yue, Wenjiao <[email protected]>
Signed-off-by: Yue, Wenjiao <[email protected]>
Co-authored-by: Yue, Wenjiao <[email protected]>
Signed-off-by: zehao-intel <[email protected]>
VincyZhang pushed a commit that referenced this pull request Feb 12, 2023
yiliu30 pushed a commit that referenced this pull request Apr 30, 2025

#214)
* [SW-225078] [INC][DynamicQuant] Reenable testing dynamic quantization scales on hpu graphs and torch.compile
* CR fixes
* tiny fix
* cr fix
* don't support running _quant_only_scale_methods with dynamic quantization
* string check fix
* fix test_matmul runs and atol in HW_ALIGNED_SINGLE_SCALE
* string fixes
xin3he pushed a commit that referenced this pull request Jul 15, 2025

#214)
* [SW-225078] [INC][DynamicQuant] Reenable testing dynamic quantization scales on hpu graphs and torch.compile
* CR fixes
* tiny fix
* cr fix
* don't support running _quant_only_scale_methods with dynamic quantization
* string check fix
* fix test_matmul runs and atol in HW_ALIGNED_SINGLE_SCALE
* string fixes
XuehaoSun pushed a commit that referenced this pull request Jul 19, 2025

#214)
* [SW-225078] [INC][DynamicQuant] Reenable testing dynamic quantization scales on hpu graphs and torch.compile
* CR fixes
* tiny fix
* cr fix
* don't support running _quant_only_scale_methods with dynamic quantization
* string check fix
* fix test_matmul runs and atol in HW_ALIGNED_SINGLE_SCALE
* string fixes
Signed-off-by: Yao, Kai [email protected]
Type of Change
Neural Coder: enable backend support for intel_extension_for_transformers

Description
Neural Coder enables backend support for intel_extension_for_transformers. Intel Extension for Transformers: https://github.com/intel/intel-extension-for-transformers

The enabling pattern is as follows. We enable the backend support in two parts:
* change Trainer to NLPTrainer while defining the trainer, according to the intel_extension_for_transformers document
* … according to the intel_extension_for_transformers document

Note that since Neural Coder is a one-click, no-code solution, we apply Post-Training Dynamic and eval_f1 by default in this case to avoid confusion, and we do not provide a user-specific argument for this feature as of now.

Expected Behavior & Potential Risk
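The first part of the pattern is a source-to-source rewrite; the PR adds change_trainer_to_nlptrainer.py for this purpose. Below is a minimal illustrative sketch of such a rewrite, assuming a plain textual transformation; the function name and regex here are hypothetical and are not the actual Neural Coder implementation, which operates on parsed code:

```python
import re

def change_trainer_to_nlptrainer(source: str) -> str:
    """Illustrative sketch (not the real Neural Coder pass): rewrite
    Hugging Face `Trainer(...)` instantiations to `NLPTrainer(...)`."""
    # \b word boundaries replace `Trainer` only as a standalone identifier,
    # leaving names such as `NLPTrainer` or `MyTrainer` untouched.
    return re.sub(r"\bTrainer\b", "NLPTrainer", source)

before = "trainer = Trainer(model=model, args=training_args)"
print(change_trainer_to_nlptrainer(before))
# trainer = NLPTrainer(model=model, args=training_args)
```

In practice a regex like this is only a toy: a real pass must also rewrite the corresponding import and skip matches inside strings and comments, which is why the PR's harness works at the code-analysis level rather than on raw text.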
Enables backend support for intel_extension_for_transformers; no risk.
How has this PR been tested?
Tested locally.
Dependency Change?
no