Poster Papers
DocKS-RAG: Optimizing Document-Level Relation Extraction through LLM-Enhanced Hybrid Prompt Tuning
Xiaolong Xu, Yibo Zhou, Haolong Xiang et al.
DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding
Wenhui Liao, Jiapeng Wang, Hongliang Li et al.
DocMIA: Document-Level Membership Inference Attacks against DocVQA Models
Khanh Nguyen, Raouf Kerkouche, Mario Fritz et al.
Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark
Haining Yu, Yizhou Sun
Docopilot: Improving Multimodal Models for Document-Level Understanding
Yuchen Duan, Zhe Chen, Yusong Hu et al.
DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning
Xiao-Hui Li, Fei Yin, Cheng-Lin Liu
DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models
Zeping Min, Xinshang Wang
DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding
Wenwen Yu, Zhibo Yang, Yuliang Liu et al.
Doctor Approved: Generating Medically Accurate Skin Disease Images through AI-Expert Feedback
Janet Wang, Yunbei Zhang, Zhengming Ding et al.
Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents
Jun Chen, Dannong Xu, Junjie Fei et al.
Document Summarization with Conformal Importance Guarantees
Bruce Kuwahara, Chen-Yuan Lin, Xiao Shi Huang et al.
DocVLM: Make Your VLM an Efficient Reader
Mor Shpigel Nacson, Aviad Aberdam, Roy Ganz et al.
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky et al.
Do Deep Neural Network Solutions Form a Star Domain?
Ankit Sonthalia, Alexander Rubinstein, Ehsan Abbasnejad et al.
Do different prompting methods yield a common task representation in language models?
Guy Davidson, Todd Gureckis, Brenden Lake et al.
DoDo-Code: an Efficient Levenshtein Distance Embedding-based Code for 4-ary IDS Channel
Alan J.X. Guo, Sihan Sun, Xiang Wei et al.
Does Data Scaling Lead to Visual Compositional Generalization?
Arnas Uselis, Andrea Dittadi, Seong Joon Oh
Does Editing Provide Evidence for Localization?
Zihao Wang, Victor Veitch
Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion
Kulin Shah, Alkis Kalavasis, Adam Klivans et al.
Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis
Qunzhong Wang, Xiangguo Sun, Hong Cheng
Does learning the right latent variables necessarily improve in-context learning?
Sarthak Mittal, Eric Elmoznino, Léo Gagnon et al.
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?
Zi Liang, Haibo Hu, Qingqing Ye et al.
Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning
Hui Zeng, Wenke Huang, Tongqing Zhou et al.
Does Refusal Training in LLMs Generalize to the Past Tense?
Maksym Andriushchenko, Nicolas Flammarion
Does Representation Guarantee Welfare?
Jakob de Raaij, Ariel Procaccia, Alexandros Psomas
Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts?
Sravanti Addepalli, Yerram Varun, Arun Suggala et al.
Does SGD really happen in tiny subspaces?
Minhak Song, Kwangjun Ahn, Chulhee Yun
Does Spatial Cognition Emerge in Frontier Models?
Santhosh Kumar Ramakrishnan, Erik Wijmans, Philipp Krähenbühl et al.
Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models
Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy et al.
Does Training with Synthetic Data Truly Protect Privacy?
Yunpeng Zhao, Jie Zhang
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?
Tianyuan Qu, Longxiang Tang, Bohao Peng et al.
DoF: A Diffusion Factorization Framework for Offline Multi-Agent Reinforcement Learning
Chao Li, Ziwei Deng, Chenxing Lin et al.
DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting
Liao Shen, Tianqi Liu, Huiqiang Sun et al.
DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Post-Capture Refocusing, Defocus Rendering and Blur Removal
Yujie Wang, Praneeth Chakravarthula, Baoquan Chen
DOGR: Towards Versatile Visual Document Grounding and Referring
Yinan Zhou, Yuxin Chen, Haokun Lin et al.
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan et al.
Do ImageNet-trained Models Learn Shortcuts? The Impact of Frequency Shortcuts on Generalization
Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio
Do It Yourself: Learning Semantic Correspondence from Pseudo-Labels
Olaf Dünkel, Thomas Wimmer, Christian Theobalt et al.
Do Language Models Use Their Depth Efficiently?
Róbert Csordás, Christopher D Manning, Chris Potts
Do Large Language Models Truly Understand Geometric Structures?
Xiaofeng Wang, Yiming Wang, Wenhong Zhu et al.
DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization
Zihan Ding, Chi Jin, Difan Liu et al.
Do LLM Agents Have Regret? A Case Study in Online Learning and Games
Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar et al.
Do LLMs estimate uncertainty well in instruction-following?
Juyeon Heo, Miao Xiong, Christina Heinze-Deml et al.
Do LLMs have Consistent Values?
Naama Rozen, Liat Bezalel, Gal Elidan et al.
Do LLMs "know" internally when they follow instructions?
Juyeon Heo, Christina Heinze-Deml, Oussama Elachqar et al.
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
Siyan Zhao, Mingyi Hong, Yang Liu et al.
DOLPHIN: A Programmable Framework for Scalable Neurosymbolic Learning
Aaditya Naik, Jason Liu, Claire Wang et al.
Do LVLMs Truly Understand Video Anomalies? Revealing Hallucination via Co-Occurrence Patterns
Menghao Zhang, Huazheng Wang, Pengfei Ren et al.
Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training
Mozhi Zhang, Howe Tissue, Lu Wang et al.