Oral "vision-language-action models" Papers
2 papers found
EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models
Yantai Yang, Yuhao Wang, Zichen Wen et al.
NeurIPS 2025 (oral) · arXiv:2506.10100 · 31 citations
VLA-Cache: Efficient Vision-Language-Action Manipulation via Adaptive Token Caching
Siyu Xu, Yunke Wang, Chenghao Xia et al.
NeurIPS 2025 (oral) · arXiv:2502.02175 · 27 citations