2025 Poster Papers


X-Fi: A Modality-Invariant Foundation Model for Multimodal Human Sensing

Xinyan Chen, Jianfei Yang

ICLR 2025 poster • arXiv:2410.10167 • 10 citations

xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation

Qingchen Yu, Zifan Zheng, Shichao Song et al.

ICLR 2025 poster • arXiv:2405.11874 • 15 citations

X-Fusion: Introducing New Modality to Frozen Large Language Models

Sicheng Mo, Thao Nguyen, Xun Huang et al.

ICCV 2025 poster • arXiv:2504.20996 • 8 citations

X-Hacking: The Threat of Misguided AutoML

Rahul Sharma, Sumantrak Mukherjee, Andrea Šipka et al.

ICML 2025 poster • 4 citations

XIFBench: Evaluating Large Language Models on Multilingual Instruction Following

Zhenyu Li, Kehai Chen, Yunfei Long et al.

NeurIPS 2025 poster • arXiv:2503.07539

XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning

Alexander Nikulin, Ilya Zisman, Alexey Zemtsov et al.

ICLR 2025 poster • arXiv:2406.08973 • 11 citations

xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference

Maximilian Beck, Korbinian Pöppel, Phillip Lippe et al.

ICML 2025 poster • arXiv:2503.13427

X-Mahalanobis: Transformer Feature Mixing for Reliable OOD Detection

Tong Wei, Bolin Wang, Jiang-Xin Shi et al.

NeurIPS 2025 poster

X-NeMo: Expressive Neural Motion Reenactment via Disentangled Latent Attention

XiaoChen Zhao, Hongyi Xu, Guoxian Song et al.

ICLR 2025 poster • arXiv:2507.23143 • 17 citations

X-Prompt: Generalizable Auto-Regressive Visual Learning with In-Context Prompting

Zeyi Sun, Ziyang Chu, Pan Zhang et al.

ICCV 2025 poster

XTrack: Multimodal Training Boosts RGB-X Video Object Trackers

Yuedong Tan, Zongwei Wu, Yuqian Fu et al.

ICCV 2025 poster • arXiv:2405.17773 • 11 citations

X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP

Hanxun Huang, Sarah Erfani, Yige Li et al.

ICML 2025 poster • arXiv:2505.05528

XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation

Bowen Chen, Brynn Zhao, Haomiao Sun et al.

NeurIPS 2025 poster • arXiv:2506.21416 • 25 citations

YEAST: Yet Another Sequential Test

Alexey Kurennoy, Majed Dodin, Tural Gurbanov et al.

NeurIPS 2025 poster • arXiv:2406.16523

Yggdrasil: Bridging Dynamic Speculation and Static Runtime for Latency-Optimal Tree-Based LLM Decoding

Yue Guan, Changming Yu, Shihan Fang et al.

NeurIPS 2025 poster • arXiv:2512.23858

Yo’Chameleon: Personalized Vision and Language Generation

Thao Nguyen, Krishna Kumar Singh, Jing Shi et al.

CVPR 2025 poster

YOLO-Count: Differentiable Object Counting for Text-to-Image Generation

Guanning Zeng, Xiang Zhang, Zirui Wang et al.

ICCV 2025 poster • arXiv:2508.00728 • 6 citations

YOLOE: Real-Time Seeing Anything

Ao Wang, Lihao Liu, Hui Chen et al.

ICCV 2025 poster • arXiv:2503.07465 • 34 citations

YOLO-RD: Introducing Relevant and Compact Explicit Knowledge to YOLO by Retriever-Dictionary

Hao-Tang Tsui, Chien-Yao Wang, Hong-Yuan Liao

ICLR 2025 poster • arXiv:2410.15346

YOLOv12: Attention-Centric Real-Time Object Detectors

Yunjie Tian, Qixiang Ye, David Doermann

NeurIPS 2025 poster • arXiv:2502.12524

You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption

Weihang Ran, Wei Yuan, Yinqiang Zheng

ICML 2025 poster • 1 citation

You Are Your Own Best Teacher: Achieving Centralized-level Performance in Federated Learning under Heterogeneous and Long-tailed Data

Shanshan Yan, Zexi Li, Chao Wu et al.

ICCV 2025 poster • arXiv:2503.06916 • 2 citations

You Can Trust Your Clustering Model: A Parameter-free Self-Boosting Plug-in for Deep Clustering

Hanyang Li, Yuheng Jia, Hui Liu et al.

NeurIPS 2025 poster • arXiv:2511.21193

You Get What You Give: Reciprocally Fair Federated Learning

Aniket Murhekar, Jiaxin Song, Parnian Shahkar et al.

ICML 2025 poster

Youku Dense Caption: A Large-scale Chinese Video Dense Caption Dataset and Benchmarks

Zixuan Xiong, Guangwei Xu, Wenkai Zhang et al.

ICLR 2025 poster

You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM

Binqian Xu, Haiyang Mei, Zechen Bai et al.

NeurIPS 2025 poster

You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning

Ayan Sengupta, Siddhant Chaudhary, Tanmoy Chakraborty

ICLR 2025 poster • arXiv:2501.15296 • 9 citations

You Only Sample Once: Taming One-Step Text-to-Image Synthesis by Self-Cooperative Diffusion GANs

Yihong Luo, Xiaolong Chen, Xinghua Qu et al.

ICLR 2025 poster • arXiv:2403.12931 • 18 citations

You Only Spectralize Once: Taking a Spectral Detour to Accelerate Graph Neural Network

Yi Li, Zhichun Guo, Guanpeng Li et al.

NeurIPS 2025 poster

Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data

Jingyang Ou, Shen Nie, Kaiwen Xue et al.

ICLR 2025 poster • arXiv:2406.03736 • 182 citations

Your Mixture-of-Experts LLM Is Secretly an Embedding Model for Free

Ziyue Li, Tianyi Zhou

ICLR 2025 poster • arXiv:2410.10814 • 27 citations

Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

Beier Luo, Shuoyuan Wang, Sharon Li et al.

NeurIPS 2025 poster • arXiv:2505.16690

Your Scale Factors are My Weapon: Targeted Bit-Flip Attacks on Vision Transformers via Scale Factor Manipulation

Jialai Wang, Yuxiao Wu, Weiye Xu et al.

CVPR 2025 poster • 3 citations

Your Text Encoder Can Be An Object-Level Watermarking Controller

Naresh Kumar Devulapally, Mingzhen Huang, Vishal Asnani et al.

ICCV 2025 poster • arXiv:2503.11945

Your Weak LLM is Secretly a Strong Teacher for Alignment

Leitian Tao, Yixuan Li

ICLR 2025 poster • arXiv:2409.08813

You Share Beliefs, I Adapt: Progressive Heterogeneous Collaborative Perception

Hao Si, Ehsan Javanmardi, Manabu Tsukada

ICCV 2025 poster • arXiv:2509.09310 • 1 citation

You Think, You ACT: The New Task of Arbitrary Text to Motion Generation

Runqi Wang, Caoyuan Ma, Guopeng Li et al.

ICCV 2025 poster • arXiv:2404.14745 • 3 citations

YouTube-SL-25: A Large-Scale, Open-Domain Multilingual Sign Language Parallel Corpus

Garrett Tanzer, Biao Zhang

ICLR 2025 poster • arXiv:2407.11144

ZAPBench: A Benchmark for Whole-Brain Activity Prediction in Zebrafish

Jan-Matthis Lueckmann, Alexander Immer, Alex Chen et al.

ICLR 2025 poster • arXiv:2503.02618 • 5 citations

Zebra: In-Context Generative Pretraining for Solving Parametric PDEs

Louis Serrano, Armand Kassaï Koupaï, Thomas Wang et al.

ICML 2025 poster • arXiv:2410.03437

Zebra-Llama: Towards Extremely Efficient Hybrid Models

Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li et al.

NeurIPS 2025 poster • arXiv:2505.17272 • 6 citations

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Yuchen Lin, Ronan Le Bras, Kyle Richardson et al.

ICML 2025 poster • arXiv:2502.01100

ZEBRA: Towards Zero-Shot Cross-Subject Generalization for Universal Brain Visual Decoding

Haonan Wang, Jingyu Lu, Hongrui Li et al.

NeurIPS 2025 poster • arXiv:2510.27128

ZeCO: Zero-Communication Overhead Sequence Parallelism for Linear Attention

Yuhong Chou, Zehao Liu, Rui-Jie Zhu et al.

NeurIPS 2025 poster • arXiv:2507.01004 • 1 citation

Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion

Zhenglin Zhou, Fan Ma, Hehe Fan et al.

CVPR 2025 poster • arXiv:2503.15851 • 3 citations

Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations

Jeong Hun Yeo, Minsu Kim, Chae Won Kim et al.

ICCV 2025 poster • arXiv:2503.06273 • 5 citations

Zero-cost Proxy for Adversarial Robustness Evaluation

Yuqi Feng, Yuwei Ou, Jiahao Fan et al.

ICLR 2025 poster • 1 citation

ZeroDiff: Solidified Visual-semantic Correlation in Zero-Shot Learning

Zihan Ye, Shreyank Gowda, Shiming Chen et al.

ICLR 2025 poster • arXiv:2406.02929

ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think

Tao Feng, Wei Li, Didi Zhu et al.

ICML 2025 poster • arXiv:2501.01045

ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping

Shun Iwase, Muhammad Zubair Irshad, Katherine Liu et al.

CVPR 2025 poster • arXiv:2504.10857 • 5 citations