The flagship 70-billion-parameter language model from Meta, fine-tuned for chat completions. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align to human preferences for helpfulness and safety.
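Because this is the chat-tuned variant, prompts should follow Meta's documented Llama 2 chat template, which wraps the user turn in `[INST] ... [/INST]` tags and an optional system prompt in `<<SYS>> ... <</SYS>>`. A minimal sketch of assembling a single-turn prompt (the helper name `build_llama2_chat_prompt` is illustrative, not part of any library):

```python
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn prompt in Llama 2's chat format.

    The chat-tuned models expect the instruction wrapped in
    [INST] ... [/INST], with an optional system prompt enclosed
    in <<SYS>> ... <</SYS>> at the start of the instruction.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_chat_prompt(
    "You are a helpful, honest assistant.",
    "Summarize what RLHF is in one sentence.",
)
print(prompt)
```

In practice, tokenizers shipped with the model (for example, via a chat-template-aware tokenizer) apply this formatting automatically; the sketch only shows the wire format the model was tuned on.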

Model Information

Model ID: meta-llama/llama-2-70b-chat

Context Length: 4,096 tokens

Author: meta-llama

Capabilities