[ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer"
[ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models
PyTorch implementation of Rotary Spatial Embeddings
Decoder-only LLM trained on the Harry Potter books.
A from-scratch implementation of a T5 model modified with Rotary Position Embeddings (RoPE). This project includes the code for pre-training on the C4 dataset in streaming mode with Flash Attention 2.
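The repositories above all implement variants of rotary position embedding (RoPE). As background, a minimal NumPy sketch of the core operation is shown below: feature pairs are rotated by position-dependent angles so that dot products between rotated queries and keys depend only on relative position. The function name `rope` and the `base=10000.0` default follow common convention and are illustrative, not taken from any specific repository listed here.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Each feature pair (x[:, 2i], x[:, 2i+1]) is rotated by the angle
    pos * base**(-2i/dim). Because rotations compose additively, the
    dot product between a rotated query at position m and a rotated
    key at position n depends only on the offset m - n.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"
    # One frequency per feature pair: (dim/2,)
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    # Angle for every (position, frequency) combination: (seq_len, dim/2)
    angles = np.outer(positions, inv_freq)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin  # standard 2-D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because each pair undergoes a pure rotation, vector norms are preserved, and shifting all positions by a constant leaves query-key dot products unchanged, which is the property the length-extrapolation work above (e.g. CLEX) builds on.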