Swallow: Llama 3.1 Swallow 8B Instruct V0.3

Llama 3.1 Swallow 8B is a large language model built by continual pre-training on Meta Llama 3.1 8B. Llama 3.1 Swallow enhances the Japanese language capabilities of the original Llama 3.1 while retaining its English language capabilities. Continual pre-training used approximately 200 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content (see the Training Datasets section of the base model). The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on synthetic data constructed specifically for Japanese.

Model Information

Model ID: tokyotech-llm/llama-3.1-swallow-8b-instruct-v0.3
Context Length: 16,384 tokens
Author: tokyotech-llm
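
For reference, a minimal inference sketch using the Hugging Face transformers chat-template workflow. The model ID and the 16,384-token context length come from the Model Information above; the dtype, device placement, and generation settings are illustrative assumptions, not settings prescribed by the authors.

```python
# Minimal sketch, assuming the standard transformers chat-template workflow
# and that bfloat16 weights fit on the available hardware (device_map="auto"
# additionally requires the accelerate package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/llama-3.1-swallow-8b-instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference
    device_map="auto",
)

# A Japanese prompt, since the model targets Japanese capabilities.
messages = [
    {"role": "user", "content": "東京工業大学について簡単に教えてください。"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt plus generated tokens well inside the 16,384-token window.
output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```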

Capabilities