Title: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Authors: Hang Zhang, Xin Li, Lidong Bing
Published: 5th June 2023 (Monday) @ 13:17:27
Link: http://arxiv.org/abs/2306.02858v4
Abstract
We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in videos. Video-LLaMA bootstraps cross-modal training from frozen pre-trained visual and audio encoders and frozen LLMs. Unlike previous works that complement LLMs to process visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, and (2) integrating audio-visual signals. For the first challenge, we propose a Video Q-Former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder, and introduce an Audio Q-Former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the outputs of both the visual and audio encoders with the LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then fine-tune it on a moderate amount of higher-quality visual-instruction data. We find that Video-LLaMA demonstrates the ability to perceive and comprehend video content and to generate meaningful responses grounded in the visual and auditory information presented in the videos.
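The alignment idea summarized above can be sketched as follows: query embeddings produced by a frozen modality encoder plus a Q-Former are mapped into the frozen LLM's embedding space and prepended to the text token embeddings as a soft prefix. This is a minimal illustrative sketch, not the released implementation; the class name, dimensions, and concatenation scheme are assumptions for exposition.

```python
import torch
import torch.nn as nn

class ModalityToLLMProjector(nn.Module):
    """Hypothetical sketch: project Video/Audio Q-Former query embeddings into
    the frozen LLM's embedding space and prepend them to the text embeddings."""
    def __init__(self, qformer_dim=768, llm_dim=4096):
        super().__init__()
        # only this projection (and the Q-Former feeding it) would be trained;
        # the modality encoders and the LLM stay frozen
        self.proj = nn.Linear(qformer_dim, llm_dim)

    def forward(self, query_embeds, text_embeds):
        # query_embeds: (batch, num_queries, qformer_dim) from a Q-Former
        # text_embeds:  (batch, num_text_tokens, llm_dim) from the LLM's embedding table
        modality_tokens = self.proj(query_embeds)            # (batch, num_queries, llm_dim)
        return torch.cat([modality_tokens, text_embeds], 1)  # soft audio-visual prefix for the LLM
```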
We adopt the idea of BLIP-2 (Li et al., 2023b) to guarantee the efficiency of cross-modal pre-training. To explicitly capture the change of visual scenes in the video, we use a pre-trained visual encoder to compute frame representations separately. We then introduce a frame embedding layer to inject temporal information and a Video Q-Former to generate visual query tokens. As for the audio signals in the video, we additionally leverage a pre-trained audio encoder together with an Audio Q-Former to learn reasonable auditory query embeddings (see the right part of Figure 1).
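As a rough illustration of the video branch described above (frame-wise features from a frozen image encoder, a frame embedding layer adding temporal position information, and a Video Q-Former pooling across frames into a fixed set of query tokens), one might write something like the sketch below. All class names and dimensions are assumptions, and a generic transformer decoder stands in for the actual BLIP-2-style Q-Former.

```python
import torch
import torch.nn as nn

class VideoQFormerSketch(nn.Module):
    """Hypothetical sketch of the video branch: frozen per-frame encoder,
    learnable frame (temporal) position embeddings, and a query transformer
    standing in for the actual Video Q-Former."""
    def __init__(self, frame_encoder, feat_dim=1408, hidden_dim=768,
                 num_queries=32, max_frames=32):
        super().__init__()
        self.frame_encoder = frame_encoder  # frozen pre-trained image encoder (e.g. a ViT)
        self.input_proj = nn.Linear(feat_dim, hidden_dim)
        self.frame_pos_emb = nn.Embedding(max_frames, hidden_dim)  # frame embedding layer (temporal info)
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim))
        layer = nn.TransformerDecoderLayer(hidden_dim, nhead=8, batch_first=True)
        self.qformer = nn.TransformerDecoder(layer, num_layers=2)  # stand-in for the Video Q-Former

    def forward(self, frames):
        # frames: (batch, num_frames, 3, H, W)
        b, t = frames.shape[:2]
        with torch.no_grad():                                   # the image encoder stays frozen
            feats = self.frame_encoder(frames.flatten(0, 1))    # (b*t, num_patches, feat_dim)
        feats = self.input_proj(feats)                          # (b*t, num_patches, hidden_dim)
        feats = feats.view(b, t, -1, feats.shape[-1])
        # inject temporal information with learnable per-frame position embeddings
        pos = self.frame_pos_emb(torch.arange(t, device=frames.device))
        feats = (feats + pos[None, :, None, :]).flatten(1, 2)   # (b, t*num_patches, hidden_dim)
        # cross-attend a fixed set of learnable queries to all frame features
        q = self.queries.unsqueeze(0).expand(b, -1, -1)         # (b, num_queries, hidden_dim)
        return self.qformer(tgt=q, memory=feats)                # visual query tokens for the LLM projection
```

The returned query tokens would then be passed through a linear projection into the LLM's embedding space, as in the alignment sketch above; the audio branch follows the same pattern with ImageBind features in place of the per-frame visual features.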