OpenChat 7B belongs to a library of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It was trained on mixed-quality data without preference labels; a sketch of the idea follows the list below.

  • For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B.
  • For OpenChat fine-tuned on Llama 3 8B, check out OpenChat 8B.
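
The description names C-RLFT but not its mechanics. Roughly, the published OpenChat recipe conditions the policy on a coarse data-quality class (e.g. expert vs. sub-optimal sources) and weights the fine-tuning loss by a class-level reward, so no pairwise preference labels are needed. The snippet below is a minimal sketch under those assumptions; the token names, reward values, and function are hypothetical, not OpenChat's actual training code.

```python
# Minimal sketch of the C-RLFT idea: class-conditioned, reward-weighted SFT.
# All names here are illustrative; the real OpenChat training code differs.

# Coarse, source-level rewards stand in for per-example preference labels
# (assumption: "expert" data such as GPT-4 outputs is weighted higher).
SOURCE_REWARD = {"expert": 1.0, "suboptimal": 0.1}
SOURCE_TOKEN = {"expert": "<|expert|>", "suboptimal": "<|suboptimal|>"}

def crlft_loss(model, tokenizer, prompt, response, source):
    """Loss for one (prompt, response) pair drawn from data class `source`."""
    # Condition the policy on the coarse quality class via a prefix token.
    text = SOURCE_TOKEN[source] + prompt + response
    ids = tokenizer(text, return_tensors="pt").input_ids
    # Standard causal-LM loss, scaled by the class-level reward, so
    # mixed-quality data contributes without pairwise preference labels.
    return SOURCE_REWARD[source] * model(input_ids=ids, labels=ids).loss
```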


Model Information

  • Model ID: openchat/openchat-7b
  • Context Length: 8,192 tokens
  • Author: openchat
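
Given the model ID and context length above, a request against an OpenAI-compatible chat completions endpoint might look like the following. The base URL and API key are placeholders (assumptions), not a documented endpoint for this catalog.

```python
# Minimal sketch of calling the model through an OpenAI-compatible chat API.
import requests

resp = requests.post(
    "https://api.example.com/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={
        "model": "openchat/openchat-7b",  # model ID from the list above
        "messages": [{"role": "user", "content": "Hello!"}],
        # Prompt plus completion must fit within the 8,192-token context.
        "max_tokens": 512,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```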

Capabilities