LLaVA Yi 34B is an open-source multimodal model trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

Base LLM: NousResearch/Nous-Hermes-2-Yi-34B

It was trained in December 2023.

Model Information

Model ID

liuhaotian/llava-yi-34b

Context Length

4,096 tokens
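
Prompts plus completions must fit within the 4,096-token window. Below is a minimal sketch of budgeting completion tokens against that limit; the ~4-characters-per-token heuristic and the helper name are assumptions for illustration, not part of the model's API (a real tokenizer gives exact counts).

```python
CONTEXT_LENGTH = 4096  # context window from the model card

def max_completion_tokens(prompt: str, reserve: int = 0, chars_per_token: int = 4) -> int:
    """Estimate how many tokens remain for the completion.

    Uses a rough ~4-characters-per-token heuristic (an assumption;
    use the model's actual tokenizer for exact counts).
    `reserve` holds back tokens for system messages or image tokens.
    """
    prompt_tokens = len(prompt) // chars_per_token + 1
    return max(0, CONTEXT_LENGTH - prompt_tokens - reserve)
```

For example, a 4,000-character prompt leaves roughly `max_completion_tokens("a" * 4000)` tokens for the reply under this heuristic.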

Author

liuhaotian

Capabilities