Poster Papers

24,624 papers found • Page 75 of 493

DocKS-RAG: Optimizing Document-Level Relation Extraction through LLM-Enhanced Hybrid Prompt Tuning

Xiaolong Xu, Yibo Zhou, Haolong Xiang et al.

ICML 2025

DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding

Wenhui Liao, Jiapeng Wang, Hongliang Li et al.

CVPR 2025 • arXiv:2408.15045
10 citations

DocMIA: Document-Level Membership Inference Attacks against DocVQA Models

Khanh Nguyen, Raouf Kerkouche, Mario Fritz et al.

ICLR 2025 • arXiv:2502.03692
1 citation

Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark

Haining Yu, Yizhou Sun

ICLR 2025 • arXiv:2410.07021
1 citation

Docopilot: Improving Multimodal Models for Document-Level Understanding

Yuchen Duan, Zhe Chen, Yusong Hu et al.

CVPR 2025 • arXiv:2507.14675
15 citations

DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning

Xiao-Hui Li, Fei Yin, Cheng-Lin Liu

CVPR 2025 • arXiv:2504.04085
3 citations

DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models

Zeping Min, Xinshang Wang

ICLR 2025 • arXiv:2501.16650
1 citation

DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding

Wenwen Yu, Zhibo Yang, Yuliang Liu et al.

ICCV 2025 • arXiv:2508.08589
4 citations

Doctor Approved: Generating Medically Accurate Skin Disease Images through AI-Expert Feedback

Janet Wang, Yunbei Zhang, Zhengming Ding et al.

NEURIPS 2025 • arXiv:2506.12323
2 citations

Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents

Jun Chen, Dannong Xu, Junjie Fei et al.

CVPR 2025 • arXiv:2411.16740
5 citations

Document Summarization with Conformal Importance Guarantees

Bruce Kuwahara, Chen-Yuan Lin, Xiao Shi Huang et al.

NEURIPS 2025 • arXiv:2509.20461

DocVLM: Make Your VLM an Efficient Reader

Mor Shpigel Nacson, Aviad Aberdam, Roy Ganz et al.

CVPR 2025 • arXiv:2412.08746
12 citations

DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky et al.

ICML 2025 • arXiv:2505.07496
3 citations

Do Deep Neural Network Solutions Form a Star Domain?

Ankit Sonthalia, Alexander Rubinstein, Ehsan Abbasnejad et al.

ICLR 2025 • arXiv:2403.07968
4 citations

Do different prompting methods yield a common task representation in language models?

Guy Davidson, Todd Gureckis, Brenden Lake et al.

NEURIPS 2025 • arXiv:2505.12075
5 citations

DoDo-Code: an Efficient Levenshtein Distance Embedding-based Code for 4-ary IDS Channel

Alan J.X. Guo, Sihan Sun, Xiang Wei et al.

NEURIPS 2025 • arXiv:2312.12717
1 citation

Does Data Scaling Lead to Visual Compositional Generalization?

Arnas Uselis, Andrea Dittadi, Seong Joon Oh

ICML 2025 • arXiv:2507.07102
5 citations

Does Editing Provide Evidence for Localization?

Zihao Wang, Victor Veitch

ICLR 2025 • arXiv:2502.11447
9 citations

Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion

Kulin Shah, Alkis Kalavasis, Adam Klivans et al.

ICML 2025 • arXiv:2502.21278
11 citations

Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis

Qunzhong WANG, Xiangguo Sun, Hong Cheng

ICML 2025 • arXiv:2410.01635
15 citations

Does learning the right latent variables necessarily improve in-context learning?

Sarthak Mittal, Eric Elmoznino, Léo Gagnon et al.

ICML 2025 • arXiv:2405.19162
8 citations

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

Zi Liang, Haibo Hu, Qingqing Ye et al.

ICML 2025 • arXiv:2505.12871
4 citations

Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning

Hui Zeng, Wenke Huang, Tongqing Zhou et al.

ICML 2025
1 citation

Does Refusal Training in LLMs Generalize to the Past Tense?

Maksym Andriushchenko, Nicolas Flammarion

ICLR 2025 • arXiv:2407.11969
69 citations

Does Representation Guarantee Welfare?

Jakob de Raaij, Ariel Procaccia, Alexandros Psomas

NEURIPS 2025

Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts?

Sravanti Addepalli, Yerram Varun, Arun Suggala et al.

ICLR 2025 • arXiv:2412.03235
7 citations

Does SGD really happen in tiny subspaces?

Minhak Song, Kwangjun Ahn, Chulhee Yun

ICLR 2025 • arXiv:2405.16002
21 citations

Does Spatial Cognition Emerge in Frontier Models?

Santhosh Kumar Ramakrishnan, Erik Wijmans, Philipp Krähenbühl et al.

ICLR 2025 • arXiv:2410.06468
51 citations

Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models

Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy et al.

NEURIPS 2025 • arXiv:2506.04210
24 citations

Does Training with Synthetic Data Truly Protect Privacy?

Yunpeng Zhao, Jie Zhang

ICLR 2025 • arXiv:2502.12976
8 citations

Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?

Tianyuan Qu, Longxiang Tang, Bohao PENG et al.

ICCV 2025 • arXiv:2503.12496
12 citations

DoF: A Diffusion Factorization Framework for Offline Multi-Agent Reinforcement Learning

Chao Li, Ziwei Deng, Chenxing Lin et al.

ICLR 2025
7 citations

DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting

Liao Shen, Tianqi Liu, Huiqiang Sun et al.

CVPR 2025 • arXiv:2503.00746
3 citations

DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Post-Capture Refocusing, Defocus Rendering and Blur Removal

Yujie Wang, Praneeth Chakravarthula, Baoquan Chen

CVPR 2025
3 citations

DOGR: Towards Versatile Visual Document Grounding and Referring

Yinan Zhou, Yuxin Chen, Haokun Lin et al.

ICCV 2025 • arXiv:2411.17125
2 citations

Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models

Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan et al.

ICLR 2025 • arXiv:2411.14257
85 citations

Do ImageNet-trained Models Learn Shortcuts? The Impact of Frequency Shortcuts on Generalization

Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio

CVPR 2025 • arXiv:2503.03519
2 citations

Do It Yourself: Learning Semantic Correspondence from Pseudo-Labels

Olaf Dünkel, Thomas Wimmer, Christian Theobalt et al.

ICCV 2025 • arXiv:2506.05312
5 citations

Do Language Models Use Their Depth Efficiently?

Róbert Csordás, Christopher D Manning, Chris Potts

NEURIPS 2025 • arXiv:2505.13898
21 citations

Do Large Language Models Truly Understand Geometric Structures?

Xiaofeng Wang, Yiming Wang, Wenhong Zhu et al.

ICLR 2025 • arXiv:2501.13773
9 citations

DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization

Zihan Ding, Chi Jin, Difan Liu et al.

ICCV 2025 • arXiv:2412.15689
8 citations

Do LLM Agents Have Regret? A Case Study in Online Learning and Games

Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar et al.

ICLR 2025 • arXiv:2403.16843
36 citations

Do LLMs estimate uncertainty well in instruction-following?

Juyeon Heo, Miao Xiong, Christina Heinze-Deml et al.

ICLR 2025 • arXiv:2410.14582
16 citations

Do LLMs have Consistent Values?

Naama Rozen, Liat Bezalel, Gal Elidan et al.

ICLR 2025 • arXiv:2407.12878
8 citations

Do LLMs "know" internally when they follow instructions?

Juyeon Heo, Christina Heinze-Deml, Oussama Elachqar et al.

ICLR 2025 • arXiv:2410.14516
22 citations

Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness

Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.

NEURIPS 2025 • arXiv:2506.05735
9 citations

Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs

Siyan Zhao, Mingyi Hong, Yang Liu et al.

ICLR 2025 • arXiv:2502.09597
51 citations

DOLPHIN: A Programmable Framework for Scalable Neurosymbolic Learning

Aaditya Naik, Jason Liu, Claire Wang et al.

ICML 2025 • arXiv:2410.03348
7 citations

Do LVLMs Truly Understand Video Anomalies? Revealing Hallucination via Co-Occurrence Patterns

Menghao Zhang, Huazheng Wang, Pengfei Ren et al.

NEURIPS 2025

Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training

Mozhi Zhang, Howe Tissue, Lu Wang et al.

ICML 2025 • arXiv:2506.10952
3 citations