ECCV 2024 "large language models" Papers

17 papers found

Asynchronous Large Language Model Enhanced Planner for Autonomous Driving

Yuan Chen, Zi-han Ding, Ziqin Wang et al.

ECCV 2024 · poster · arXiv:2406.14556
33 citations

CoMo: Controllable Motion Generation through Language Guided Pose Code Editing

Yiming Huang, Weilin Wan, Yue Yang et al.

ECCV 2024 · poster · arXiv:2403.13900
48 citations

Controllable Navigation Instruction Generation with Chain of Thought Prompting

Xianghao Kong, Jinyu Chen, Wenguan Wang et al.

ECCV 2024 · poster · arXiv:2407.07433
16 citations

Emergent Visual-Semantic Hierarchies in Image-Text Representations

Morris Alper, Hadar Averbuch-Elor

ECCV 2024 · poster · arXiv:2407.08521
17 citations

FedVAD: Enhancing Federated Video Anomaly Detection with GPT-Driven Semantic Distillation

Fan Qi, Ruijie Pan, Huaiwen Zhang et al.

ECCV 2024 · poster
2 citations

Latent Guard: a Safety Framework for Text-to-image Generation

Runtao Liu, Ashkan Khakzar, Jindong Gu et al.

ECCV 2024 · poster · arXiv:2404.08031
53 citations

Learning to Localize Actions in Instructional Videos with LLM-Based Multi-Pathway Text-Video Alignment

Yuxiao Chen, Kai Li, Wentao Bao et al.

ECCV 2024 · poster · arXiv:2409.16145
5 citations

LongVLM: Efficient Long Video Understanding via Large Language Models

Yuetian Weng, Mingfei Han, Haoyu He et al.

ECCV 2024 · poster · arXiv:2404.03384
128 citations

PALM: Predicting Actions through Language Models

Sanghwan Kim, Daoji Huang, Yongqin Xian et al.

ECCV 2024 · poster · arXiv:2311.17944
22 citations

Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation

Jinpeng Liu, Wenxun Dai, Chunyu Wang et al.

ECCV 2024 · poster
8 citations

Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos

Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang et al.

ECCV 2024 · poster · arXiv:2409.20557
10 citations

TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering

Jingye Chen, Yupan Huang, Tengchao Lv et al.

ECCV 2024 · poster · arXiv:2311.16465
104 citations

Training-free Video Temporal Grounding using Large-scale Pre-trained Models

Minghang Zheng, Xinhao Cai, Qingchao Chen et al.

ECCV 2024 · poster · arXiv:2408.16219
20 citations

Vamos: Versatile Action Models for Video Understanding

Shijie Wang, Qi Zhao, Minh Quan et al.

ECCV 2024 · poster · arXiv:2311.13627
36 citations

Video Question Answering with Procedural Programs

Rohan Choudhury, Koichiro Niinuma, Kris Kitani et al.

ECCV 2024 · poster · arXiv:2312.00937
37 citations

X-InstructBLIP: A Framework for Aligning Image, 3D, Audio, Video to LLMs and its Emergent Cross-modal Reasoning

Artemis Panagopoulou, Le Xue, Ning Yu et al.

ECCV 2024 · poster
6 citations

Zero-shot Text-guided Infinite Image Synthesis with LLM guidance

Soyeong Kwon, Taegyeong Lee, Taehwan Kim

ECCV 2024 · poster · arXiv:2407.12642
3 citations