Conversation

@Narsil (Contributor) commented on Jul 31, 2023

What does this PR do?

  • Adds RoPE NTK scaling (see the sketch after this list).

Done because #529 was closed
Took some code from huggingface/transformers#24653

  • --rope-scaling and --rope-factor are added as separate flags. I considered having a single flag and parsing something like "linear:4.0" or "dynamic", but
    decided against it because it would spread extra parsing and validation everywhere (both in the launcher and the server).
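
For context, here is a minimal sketch of what the two scaling modes compute, adapted from the formulas in huggingface/transformers#24653. The function and argument names are illustrative, not the identifiers used in this PR:

```python
import torch


def scaled_rope_angles(dim, seq_len, max_position_embeddings=2048,
                       base=10000.0, rope_scaling=None, rope_factor=1.0):
    """Compute the RoPE angle table (seq_len x dim/2) under optional scaling."""
    if rope_scaling == "dynamic" and seq_len > max_position_embeddings:
        # NTK-aware "dynamic" scaling: grow the rotary base once the sequence
        # exceeds the trained context, stretching the low frequencies instead
        # of rescaling position ids. Works without fine-tuning.
        base = base * (
            (rope_factor * seq_len / max_position_embeddings) - (rope_factor - 1)
        ) ** (dim / (dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

    t = torch.arange(seq_len, dtype=torch.float32)
    if rope_scaling == "linear":
        # "linear" scaling (position interpolation): compress position ids so
        # seq_len positions fit inside the trained context window.
        t = t / rope_factor
    return torch.outer(t, inv_freq)  # angles fed into the cos/sin caches
```

With separate flags, a launch would look like e.g. `--rope-scaling dynamic --rope-factor 2.0` (or `--rope-scaling linear --rope-factor 4.0`), so neither the launcher nor the server has to parse a combined value.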

Fixes #512

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Narsil Narsil requested review from McPatate and gante July 31, 2023 11:59
@Narsil Narsil merged commit 932bdd9 into main Jul 31, 2023
@Narsil Narsil deleted the ntk_scaling branch July 31, 2023 13:38
@gante (Member) commented on Jul 31, 2023

🔥 🔥 🔥

Development

Successfully merging this pull request may close the following issue:

NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
