DeepSeek: R1 Distill Llama 8B

DeepSeek R1 Distill Llama 8B is a distilled large language model built on Llama-3.1-8B-Instruct and fine-tuned on outputs generated by DeepSeek R1. Distillation transfers the larger model's reasoning behavior into the smaller base model, producing strong results across multiple benchmarks, including:

  • AIME 2024 pass@1: 50.4
  • MATH-500 pass@1: 89.1
  • CodeForces Rating: 1205

This distillation approach allows an 8B-parameter model to deliver reasoning performance competitive with much larger frontier models.
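
The distillation recipe itself (supervised fine-tuning of a small base model on a teacher's reasoning traces) can be sketched roughly as below. This is a minimal illustration, not DeepSeek's published training pipeline; the dataset, prompts, output path, and hyperparameters are assumed placeholders.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # the base model named above

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical teacher-generated traces: each record pairs a prompt with the
# teacher's (DeepSeek R1's) full chain-of-thought completion.
teacher_traces = [
    {
        "prompt": "Solve for x: 2x + 3 = 11.",
        "completion": "<think>2x = 8, so x = 4.</think> x = 4",
    },
]

def tokenize(example):
    # Concatenate prompt and teacher completion into one training sequence.
    text = example["prompt"] + "\n" + example["completion"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=4096)

train_dataset = Dataset.from_list(teacher_traces).map(
    tokenize, remove_columns=["prompt", "completion"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="r1-distill-llama-8b-sft",  # placeholder output path
        num_train_epochs=1,
        per_device_train_batch_size=1,
        learning_rate=2e-5,
    ),
    train_dataset=train_dataset,
    # Standard causal-LM collator: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```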

Model Information

  • Model ID: deepseek/deepseek-r1-distill-llama-8b
  • Context Length: 32,000 tokens
  • Author: deepseek
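
Given the model ID and context length above, a minimal sketch of querying the model through an OpenAI-compatible chat completions endpoint might look like the following; the base URL and API-key environment variable are placeholders for whichever provider hosts the model.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/api/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],               # placeholder key variable
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-llama-8b",  # model ID from the listing above
    messages=[
        {"role": "user", "content": "Prove that the sum of two even numbers is even."},
    ],
    # Illustrative budget: prompt plus completion must fit the 32,000-token context.
    max_tokens=1024,
)

print(response.choices[0].message.content)
```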

Capabilities