LLaVA is a large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities and setting a new state-of-the-art accuracy on Science QA.

#multimodal

Model Information

Model ID: liuhaotian/llava-13b
Context Length: 2,048 tokens
Author: liuhaotian

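As a rough illustration of how a LLaVA-style checkpoint is typically queried, here is a minimal sketch using the Hugging Face transformers LLaVA integration. The model ID `llava-hf/llava-1.5-13b-hf`, the image URL, and the prompt template are assumptions for illustration only; the liuhaotian/llava-13b weights listed above ship with the original LLaVA repository and follow its own loading scripts.

```python
# Minimal sketch: prompting a LLaVA-style model with an image plus text.
# Model ID, image URL, and prompt template below are illustrative assumptions,
# not taken from this listing.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-13b-hf"  # assumed transformers-compatible LLaVA checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Load an image to ask about (placeholder URL).
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)

# Vicuna-style chat prompt; <image> marks where the vision tokens are inserted.
prompt = "USER: <image>\nWhat is unusual about this picture? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# Keep generation within the 2,048-token context window listed above.
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```
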
Capabilities