Poster Papers

24,624 papers found • Page 39 of 493

CamPoint: Boosting Point Cloud Segmentation with Virtual Camera

Jianhui Zhang, Yizhi Luo, Zicheng Zhang et al.

CVPR 2025 • poster
1 citation

CamSAM2: Segment Anything Accurately in Camouflaged Videos

Yuli Zhou, Yawei Li, Yuqian Fu et al.

NeurIPS 2025 • poster • arXiv:2503.19730

CaMuViD: Calibration-Free Multi-View Detection

Amir Etefaghi Daryani, M. Usman Maqbool Bhutta, Byron Hernandez et al.

CVPR 2025 • poster
1 citation

Can3Tok: Canonical 3D Tokenization and Latent Modeling of Scene-Level 3D Gaussians

Quankai Gao, Iliyan Georgiev, Tuanfeng Wang et al.

ICCV 2025 • poster • arXiv:2508.01464
2 citations

Can Agent Fix Agent Issues?

Alfin Wijaya Rahardja, Junwei Liu, Weitong Chen et al.

NeurIPS 2025 • poster • arXiv:2505.20749

Can a Large Language Model be a Gaslighter?

Wei Li, Luyao Zhu, Yang Song et al.

ICLR 2025 • poster • arXiv:2410.09181
2 citations

Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning

Chongyi Zheng, Jens Tuyls, Joanne Peng et al.

ICLR 2025 • poster • arXiv:2412.08021
8 citations

Cancer Survival Analysis via Zero-shot Tumor Microenvironment Segmentation on Low-resolution Whole Slide Pathology Images

Jiao Tang, Wei Shao, Daoqiang Zhang

NeurIPS 2025 • poster

Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence

Yuankai Luo, Lei Shi, Xiao-Ming Wu

ICML 2025 • poster • arXiv:2502.09263
9 citations

Can Class-Priors Help Single-Positive Multi-Label Learning?

Biao Liu, Ning Xu, Jie Wang et al.

NeurIPS 2025 • poster • arXiv:2309.13886
1 citation

Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression

Peijie Dong, Zhenheng Tang, Xiang Liu et al.

ICML 2025 • poster • arXiv:2505.19433

Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios?

Wendong Zheng, Junyang Chen, Husheng Guo et al.

ICML 2025 • poster

Can Dependencies Induced by LLM-Agent Workflows Be Trusted?

Yu Yao, Yiliao (Lia) Song, Yian Xie et al.

NeurIPS 2025 • poster

Can Diffusion Models Disentangle? A Theoretical Perspective

Liming Wang, Muhammad Jehanzeb Mirza, Yishu Gong et al.

NeurIPS 2025 • poster • arXiv:2504.00220

Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?

Yujin Han, Andi Han, Wei Huang et al.

ICML 2025 • poster • arXiv:2502.04725

Can DPO Learn Diverse Human Values? A Theoretical Scaling Law

Shawn Im, Sharon Li

NeurIPS 2025 • poster • arXiv:2408.03459
8 citations

CanFields: Consolidating Diffeomorphic Flows for Non-Rigid 4D Interpolation from Arbitrary-Length Sequences

Miaowei Wang, Changjian Li, Amir Vaxman

ICCV 2025 • poster • arXiv:2406.18582
1 citation

Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective

Andrew Jesson, Nicolas Beltran-Velez, David Blei

ICLR 2025 • poster • arXiv:2412.06033

Can Generative Geospatial Diffusion Models Excel as Discriminative Geospatial Foundation Models?

Yuru Jia, Valerio Marsocci, Ziyang Gong et al.

ICCV 2025 • poster • arXiv:2503.07890
5 citations

Can In-context Learning Really Generalize to Out-of-distribution Tasks?

Qixun Wang, Yifei Wang, Xianghua Ying et al.

ICLR 2025 • poster • arXiv:2410.09695
15 citations

Can Knowledge be Transferred from Unimodal to Multimodal? Investigating the Transitivity of Multimodal Knowledge Editing

Lingyong Fang, Xinzhong Wang, Depeng Wang et al.

ICCV 2025 • poster

Can Knowledge Editing Really Correct Hallucinations?

Baixiang Huang, Canyu Chen, Xiongxiao Xu et al.

ICLR 2025 • poster • arXiv:2410.16251
29 citations

Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

Hanlei Zhang, Zhuohang Li, Hua Xu et al.

NeurIPS 2025 • poster • arXiv:2504.16427
2 citations

Can Large Language Models Master Complex Card Games?

Wei Wang, Fuqing Bie, Junzhe Chen et al.

NeurIPS 2025 • poster • arXiv:2509.01328
2 citations

Can Large Language Models Understand Intermediate Representations in Compilers?

Hailong Jiang, Jianfeng Zhu, Yao Wan et al.

ICML 2025 • poster • arXiv:2502.06854

Can Large Language Models Understand Symbolic Graphics Programs?

Zeju Qiu, Weiyang Liu, Haiwen Feng et al.

ICLR 2025 • poster • arXiv:2408.08313
28 citations

Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind

Qingmei Li, Yang Zhang, Zurong Mai et al.

NeurIPS 2025 • poster • arXiv:2505.12207
1 citation

Can Large Vision-Language Models Correct Semantic Grounding Errors By Themselves?

Yuan-Hong Liao, Rafid Mahmood, Sanja Fidler et al.

CVPR 2025 • poster • arXiv:2404.06510

CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning

Xuankun Rong, Jianshu Zhang, Kun He et al.

ICML 2025 • poster

Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs

Guiyao Tie, Zenghui Yuan, Zeli Zhao et al.

NeurIPS 2025 • poster • arXiv:2510.16062

Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers

Chenglei Si, Diyi Yang, Tatsunori Hashimoto

ICLR 2025 • poster • arXiv:2409.04109
281 citations

Can LLM Simulations Truly Reflect Humanity? A Deep Dive

Qian Wang, Zhenheng Tang, Bingsheng He

ICLR 2025 • poster

Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Qijiong Liu, Jieming Zhu, Lu Fan et al.

NeurIPS 2025 • poster • arXiv:2503.05493
4 citations

Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?

Seth Aycock, David Stap, Di Wu et al.

ICLR 2025 • poster • arXiv:2409.19151
18 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 • poster • arXiv:2509.17552
1 citation

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?

Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.

ICLR 2025 • poster • arXiv:2403.06833
45 citations

Can LLMs Solve Longer Math Word Problems Better?

Xin Xu, Tong Xiao, Zitong Chao et al.

ICLR 2025 • poster • arXiv:2405.14804
25 citations

Can LLMs Understand Time Series Anomalies?

Zihao Zhou, Rose Yu

ICLR 2025 • poster • arXiv:2410.05440
32 citations

Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?

Yijie Hu, Zihao Zhou, Kaizhu Huang et al.

NeurIPS 2025 • poster • arXiv:2510.14387

Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?

Apratim Bhattacharyya, Bicheng Xu, Sanjay Haresh et al.

NeurIPS 2025 • poster • arXiv:2511.21998

Can NeRFs "See" without Cameras?

Chaitanya Amballa, Yu-Lin Wei, Sattwik Basu et al.

NeurIPS 2025 • poster

Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model

Siyu Chen, Beining Wu, Miao Lu et al.

ICLR 2025 • poster
2 citations

Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs

Haoming Yang, Ke Ma, Xiaojun Jia et al.

ICML 2025 • poster • arXiv:2505.02862
4 citations

Can One Modality Model Synergize Training of Other Modality Models?

Jae-Jun Lee, Sung Whan Yoon

ICLR 2025 • poster

Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers

Lokesh Veeramacheneni, Moritz Wolter, Hilde Kuehne et al.

ICML 2025 • poster

CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation

Xiangyang Luo, Ye Zhu, Yunfei Liu et al.

ICCV 2025 • poster • arXiv:2507.02691
4 citations

Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective

Jiawei Huang, Bingcong Li, Christoph Dann et al.

ICML 2025 • poster • arXiv:2502.19255
4 citations

Can Text-to-Video Generation help Video-Language Alignment?

Luca Zanella, Massimiliano Mancini, Willi Menapace et al.

CVPR 2025 • poster • arXiv:2503.18507
1 citation

Can Textual Gradient Work in Federated Learning?

Minghui Chen, Ruinan Jin, Wenlong Deng et al.

ICLR 2025 • poster • arXiv:2502.19980
8 citations

Can Transformers Do Enumerative Geometry?

Baran Hashemi, Roderic Corominas, Alessandro Giacchetto

ICLR 2025 • poster • arXiv:2408.14915
9 citations