LLaVA 13B

LLaVA (Large Language-and-Vision Assistant) is a large multimodal model that connects a vision encoder with the Vicuna language model for general-purpose visual and language understanding. It achieves impressive chat capabilities and sets a new state-of-the-art accuracy on ScienceQA.

#multimodal
