Conversation

@hammad93 hammad93 commented Sep 26, 2025

pending ggml-org/llama.cpp#15852

Apertus

  • The current implementation uses the 70-billion-parameter model quantized to 8 bits
  • This model is preferred because it is fully open and transparent
  • Apertus is massively multilingual, covering more languages than comparable open models
  • Benchmark testing and analysis indicate the model is sufficient for our purposes
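Since the upstream dependency is a llama.cpp PR, the setup above presumably runs through llama.cpp. A minimal sketch of invoking an 8-bit (Q8_0) quantization this way is shown below; the GGUF filename and sampling parameters are placeholders, not the repository's actual launch command.

```shell
# Sketch only: run an 8-bit quantized Apertus 70B GGUF with llama.cpp's CLI.
# The model path is a placeholder for the actual quantized artifact.
./llama-cli \
  -m models/apertus-70b-q8_0.gguf \
  -c 4096 \
  -n 256 \
  -p "Briefly introduce yourself."
```

Note that a Q8_0 quantization of a 70B model still requires on the order of 70 GB of memory, so hardware with sufficient RAM or VRAM is assumed.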

@hammad93 hammad93 merged commit 5e34c81 into main Oct 6, 2025
