All Papers

34,598 papers found • Page 579 of 692

MaGIC: Multi-modality Guided Image Completion

Hao Wang, Yongsheng Yu, Tiejian Luo et al.

ICLR 2024 · arXiv:2305.11818
15 citations

Magicoder: Empowering Code Generation with OSS-Instruct

Yuxiang Wei, Zhe Wang, Jiawei Liu et al.

ICML 2024 · arXiv:2312.02120
208 citations

MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion

Di Chang, Yichun Shi, Quankai Gao et al.

ICML 2024 · arXiv:2311.12052
113 citations

Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification

Pingping Zhang, Yuhao Wang, Yang Liu et al.

CVPR 2024 · arXiv:2403.10254
49 citations

MagMax: Leveraging Model Merging for Seamless Continual Learning

Daniel Marczak, Bartlomiej Twardowski, Tomasz Trzcinski et al.

ECCV 2024 · arXiv:2407.06322
46 citations

Magnitude Invariant Parametrizations Improve Hypernetwork Learning

Jose Javier Gonzalez Ortiz, John Guttag, Adrian Dalca

ICLR 2024 · arXiv:2304.07645
11 citations

MAGNOLIA: Matching Algorithms via GNNs for Online Value-to-go Approximation

Alexandre Hayderi, Amin Saberi, Ellen Vitercik et al.

ICML 2024 · arXiv:2406.05959
3 citations

Magnushammer: A Transformer-Based Approach to Premise Selection

Maciej Mikuła, Szymon Tworkowski, Szymon Antoniak et al.

ICLR 2024 · arXiv:2303.04488
58 citations

MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment

Kanglei Zhou, Liyuan Wang, Xingxing Zhang et al.

ECCV 2024 · arXiv:2403.04398
11 citations

Mahalanobis Distance-based Multi-view Optimal Transport for Multi-view Crowd Localization

Qi Zhang, Kaiyi Zhang, Antoni Chan et al.

ECCV 2024 · arXiv:2409.01726
5 citations

Major-Minor Mean Field Multi-Agent Reinforcement Learning

Kai Cui, Christian Fabian, Anam Tahir et al.

ICML 2024 · arXiv:2303.10665
5 citations

Make a Cheap Scaling: A Self-Cascade Diffusion Model for Higher-Resolution Adaptation

Lanqing Guo, Yingqing He, Haoxin Chen et al.

ECCV 2024 · arXiv:2402.10491
51 citations

Make-A-Shape: a Ten-Million-scale 3D Shape Model

Ka-Hei Hui, Aditya Sanghi, Arianna Rampini et al.

ICML 2024 · arXiv:2401.11067
28 citations

Make a Strong Teacher with Label Assistance: A Novel Knowledge Distillation Approach for Semantic Segmentation

Shoumeng Qiu, Jie Chen, Xinrun Li et al.

ECCV 2024 · arXiv:2407.13254
9 citations

Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text

Junshu Tang, Yanhong Zeng, Ke Fan et al.

CVPR 2024 · arXiv:2403.16897
8 citations

Make Lossy Compression Meaningful for Low-Light Images

Shilv Cai, Liqun Chen, Sheng Zhong et al.

AAAI 2024 (paper) · arXiv:2305.15030
5 citations

Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models

Gianni Franchi, Olivier Laurent, Maxence Leguéry et al.

CVPR 2024 · arXiv:2312.15297
16 citations

Make Pixels Dance: High-Dynamic Video Generation

Yan Zeng, Guoqiang Wei, Jiani Zheng et al.

CVPR 2024 · arXiv:2311.10982
149 citations

Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior

Youngjae Cho, HeeSun Bae, Seungjae Shin et al.

AAAI 2024 (paper) · arXiv:2401.06799
9 citations

Make RepVGG Greater Again: A Quantization-Aware Approach

Xuesong Nie, Yunfeng Yan, Siyuan Li et al.

AAAI 2024 (paper) · arXiv:2212.01593
66 citations

Makeup Prior Models for 3D Facial Makeup Estimation and Applications

Xingchao Yang, Takafumi Taketomi, Yuki Endo et al.

CVPR 2024 · arXiv:2403.17761
7 citations

Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation

Fangfu Liu, Hanyang Wang, Weiliang Chen et al.

ECCV 2024 · arXiv:2403.09625
26 citations

Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework

Ziyao Huang, Fan Tang, Yong Zhang et al.

CVPR 2024 · arXiv:2403.16510
30 citations

Make Your ViT-based Multi-view 3D Detectors Faster via Token Compression

Dingyuan Zhang, Dingkang Liang, Zichang Tan et al.

ECCV 2024 · arXiv:2409.00633
4 citations

Making Large Language Models Better Planners with Reasoning-Decision Alignment

Zhijian Huang, Tao Tang, Shaoxiang Chen et al.

ECCV 2024 · arXiv:2408.13890
40 citations

Making LLaMA SEE and Draw with SEED Tokenizer

Yuying Ge, Sijie Zhao, Ziyun Zeng et al.

ICLR 2024 · arXiv:2310.01218
190 citations

Making Old Things New: A Unified Algorithm for Differentially Private Clustering

Max Dupre la Tour, Monika Henzinger, David Saulpic

ICML 2024 · arXiv:2406.11649
4 citations

Making Pre-trained Language Models Great on Tabular Prediction

Jiahuan Yan, Bo Zheng, Hongxia Xu et al.

ICLR 2024 (spotlight) · arXiv:2403.01841
64 citations

Making Retrieval-Augmented Language Models Robust to Irrelevant Context

Ori Yoran, Tomer Wolfson, Ori Ram et al.

ICLR 2024 · arXiv:2310.01558
314 citations

Making RL with Preference-based Feedback Efficient via Randomization

Runzhe Wu, Wen Sun

ICLR 2024 · arXiv:2310.14554
37 citations

Making Vision Transformers Truly Shift-Equivariant

Renan A. Rojas-Gomez, Teck-Yian Lim, Minh Do et al.

CVPR 2024 · arXiv:2305.16316
20 citations

Making Visual Sense of Oracle Bones for You and Me

Runqi Qiao, Lan Yang, Kaiyue Pang et al.

CVPR 2024
9 citations

MALIBO: Meta-learning for Likelihood-free Bayesian Optimization

Jiarong Pan, Stefan Falkner, Felix Berkenkamp et al.

ICML 2024 (spotlight) · arXiv:2307.03565
2 citations

MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding

Bo He, Hengduo Li, Young Kyun Jang et al.

CVPR 2024 · arXiv:2404.05726
188 citations

MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning

Zohar Rimon, Tom Jurgenson, Orr Krupnik et al.

ICLR 2024 · arXiv:2403.09859
14 citations

MambaIR: A Simple Baseline for Image Restoration with State-Space Model

Hang Guo, Jinmin Li, Tao Dai et al.

ECCV 2024 · arXiv:2402.15648
560 citations

Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data

Shufan Li, Aditya Grover, Harkanwar Singh

ECCV 2024 · arXiv:2402.05892
106 citations

MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning

Xiang Yue, Xingwei Qu, Ge Zhang et al.

ICLR 2024 (spotlight) · arXiv:2309.05653
522 citations

MA-Net: Rethinking Neural Unit in the Light of Astrocytes

Mengqiao Han, Liyuan Pan, Xiabi Liu

AAAI 2024 (paper)
4 citations

Manifold-Based Verbalizer Space Re-embedding for Tuning-Free Prompt-Based Classification

Haochun Wang, Sendong Zhao, Chi Liu et al.

AAAI 2024 (paper) · arXiv:2309.04174
3 citations

Manifold Constraints for Imperceptible Adversarial Attacks on Point Clouds

AAAI 2024 (paper)

Manifold Diffusion Fields

Ahmed Elhag, Yuyang Wang et al.

ICLR 2024 · arXiv:2305.15586
11 citations

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution

Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.

ICML 2024 · arXiv:2405.09800
9 citations

Manifold Preserving Guided Diffusion

Yutong He, Naoki Murata, Chieh-Hsin Lai et al.

ICLR 2024 · arXiv:2311.16424
129 citations

ManiFPT: Defining and Analyzing Fingerprints of Generative Models

Hae Jin Song, Mahyar Khayatkhoei, Wael AbdAlmageed

CVPR 2024 · arXiv:2402.10401
15 citations

ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation

Guanxing Lu, Shiyi Zhang, Ziwei Wang et al.

ECCV 2024 · arXiv:2403.08321
112 citations

MANIKIN: Biomechanically Accurate Neural Inverse Kinematics for Human Motion Estimation

Jiaxi Jiang, Paul Streli, Xuejing Luo et al.

ECCV 2024

ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation

Xiaoqi Li, Mingxu Zhang, Yiran Geng et al.

CVPR 2024 · arXiv:2312.16217
182 citations

Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems

Jacob Prince, Gabriel Fajardo, George Alvarez et al.

ICLR 2024 (oral)

Manipulation-Robust Selection of Citizens’ Assemblies

Bailey Flanigan, Jennifer Liang, Ariel Procaccia et al.

AAAI 2024 (paper)