ECCV Papers
2,387 papers found • Page 28 of 48
MMVR: Millimeter-wave Multi-View Radar Dataset and Benchmark for Indoor Perception
Mohammad Mahbubur Rahman, Ryoma Yataka, Sorachi Kato et al.
MoAI: Mixture of All Intelligence for Large Language and Vision Models
Byung-Kwan Lee, Beomchan Park, Chae Won Kim et al.
MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices
Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.
MobileNetV4: Universal Models for the Mobile Ecosystem
Danfeng Qin, Chas Leichner, Manolis Delakis et al.
Möbius Transform for Mitigating Perspective Distortions in Representation Learning
Prakash Chandra Chhipa, Meenakshi Subhash Chippa, Kanjar De et al.
Modality Translation for Object Detection Adaptation without forgetting prior knowledge
Heitor Rapela Medeiros, Masih Aminbeidokhti, Fidel A Guerrero Pena et al.
Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks
MohammadReza Davari, Eugene Belilovsky
Modeling and Driving Human Body Soundfields through Acoustic Primitives
Chao Huang, Dejan Markovic, Chenliang Xu et al.
Modeling Label Correlations with Latent Context for Multi-Label Recognition
Zhao-Min Chen, Quan Cui, Ruoxi Deng et al.
Modelling Competitive Behaviors in Autonomous Driving Under Generative World Model
Guanren Qiao, Guiliang Liu, Guorui Quan et al.
Model Stock: All we need is just a few fine-tuned models
Dong-Hwan Jang, Sangdoo Yun, Dongyoon Han
MOD-UV: Learning Mobile Object Detectors from Unlabeled Videos
Yihong Sun, Bharath Hariharan
MoEAD: A Parameter-efficient Model for Multi-class Anomaly Detection
Shiyuan Meng, Wenchao Meng, Qihang Zhou et al.
MoE-DiffIR: Task-customized Diffusion Priors for Universal Compressed Image Restoration
Yulin Ren, Xin Li, Bingchen Li et al.
MO-EMT-NAS: Multi-Objective Continuous Transfer of Architectural Knowledge Between Tasks from Different Datasets
Peng Liao, Xilu Wang, Yaochu Jin et al.
MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
Muyao Niu, Xiaodong Cun, Xintao Wang et al.
MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
Kunpeng Song, Yizhe Zhu, Bingchen Liu et al.
Momentum Auxiliary Network for Supervised Local Learning
Junhao Su, Changpeng Cai, Feiyu Zhu et al.
Monocular Occupancy Prediction for Scalable Indoor Scenes
Hongxiao Yu, Yuqi Wang, Yuntao Chen et al.
MonoTTA: Fully Test-Time Adaptation for Monocular 3D Object Detection
Hongbin Lin, Yifan Zhang, Shuaicheng Niu et al.
Mono-ViFI: A Unified Learning Framework for Self-supervised Single- and Multi-frame Monocular Depth Estimation
Jinfeng Liu, Lingtong Kong, Bo Li et al.
MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection
Youngmin Oh, Hyung-Il Kim, Seong Tae Kim et al.
MONTRAGE: Monitoring Training for Attribution of Generative Diffusion Models
Jonathan Brokman, Omer Hofman, Roman Vainshtein et al.
Motion and Structure from Event-based Normal Flow
Zhongyang Ren, Bangyan Liao, Delei Kong et al.
Motion Aware Event Representation-driven Image Deblurring
Zhijing Sun, Xueyang Fu, Longzhuo Huang et al.
MotionChain: Conversational Motion Controllers via Multimodal Prompts
Biao Jiang, Xin Chen, Chi Zhang et al.
MotionDirector: Motion Customization of Text-to-Video Diffusion Models
Rui Zhao, Yuchao Gu, Jay Zhangjie Wu et al.
Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution
Xi Yang, Chenhang He, Jianqi Ma et al.
Motion Keyframe Interpolation for Any Human Skeleton using Point Cloud-based Human Motion Data Homogenisation
Clinton Mo, Kun Hu, Chengjiang Long et al.
MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model
Wenxun Dai, Ling-Hao Chen, Jingbo Wang et al.
Motion Mamba: Efficient and Long Sequence Motion Generation
Zeyu Zhang, Akide Liu, Ian Reid et al.
Motion-Oriented Compositional Neural Radiance Fields for Monocular Dynamic Human Modeling
Jaehyeok Kim, Dongyoon Wee, Dan Xu
Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation
Friedhelm Hamann, Ziyun Wang, Ioannis Asmanis et al.
MoVideo: Motion-Aware Video Generation with Diffusion Models
Jingyun Liang, Yuchen Fan, Kai Zhang et al.
MRSP: Learn Multi-Representations of Single Primitive for Compositional Zero-Shot Learning
Dongyao Jiang, Hui Chen, Haodong Jing et al.
MSD: A Benchmark Dataset for Floor Plan Generation of Building Complexes
Casper van Engelenburg, Fatemeh Mostafavi, Emanuel Kuhn et al.
MTA-CLIP: Language-Guided Semantic Segmentation with Mask-Text Alignment
Anurag Das, Xinting Hu, Li Jiang et al.
MTaDCS: Moving Trace and Feature Density-based Confidence Sample Selection under Label Noise
Qingzheng Huang, Xilin He, Xiaole Xian et al.
MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
Yuxuan Jiang, Chen Feng, Fan Zhang et al.
MTMamba: Enhancing Multi-Task Dense Scene Understanding by Mamba-Based Decoders
Baijiong Lin, Weisen Jiang, Pengguang Chen et al.
Multi-branch Collaborative Learning Network for 3D Visual Grounding
Zhipeng Qian, Yiwei Ma, Zhekai Lin et al.
MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri
MultiGen: Zero-shot Image Generation from Multi-modal Prompts
Zhi-Fan Wu, Lianghua Huang, Wei Wang et al.
Multi-Granularity Sparse Relationship Matrix Prediction Network for End-to-End Scene Graph Generation
Lei Wang, Zejian Yuan, Badong Chen
Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot
Fabien Baradel, Thomas Lucas, Matthieu Armando et al.
Multi-Label Cluster Discrimination for Visual Representation Learning
Xiang An, Kaicheng Yang, Xiangzi Dai et al.
Multi-Memory Matching for Unsupervised Visible-Infrared Person Re-Identification
Jiangming Shi, Xiangbo Yin, Yeyun Chen et al.
Multimodal Cross-Domain Few-Shot Learning for Egocentric Action Recognition
Masashi Hatano, Ryo Hachiuma, Ryo Fujii et al.
Multi-modal Crowd Counting via a Broker Modality
Haoliang Meng, Xiaopeng Hong, Chenhao Wang et al.
Multimodal Label Relevance Ranking via Reinforcement Learning
Taian Guo, Taolin Zhang, Haoqian Wu et al.