Poster Papers
CamPoint: Boosting Point Cloud Segmentation with Virtual Camera
Jianhui Zhang, Yizhi Luo, Zicheng Zhang et al.
CamSAM2: Segment Anything Accurately in Camouflaged Videos
Yuli Zhou, Yawei Li, Yuqian Fu et al.
CaMuViD: Calibration-Free Multi-View Detection
Amir Etefaghi Daryani, M. Usman Maqbool Bhutta, Byron Hernandez et al.
Can3Tok: Canonical 3D Tokenization and Latent Modeling of Scene-Level 3D Gaussians
Quankai Gao, Iliyan Georgiev, Tuanfeng Wang et al.
Can Agents Fix Agent Issues?
Alfin Wijaya Rahardja, Junwei Liu, Weitong Chen et al.
Can a Large Language Model be a Gaslighter?
Wei Li, Luyao Zhu, Yang Song et al.
Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning
Chongyi Zheng, Jens Tuyls, Joanne Peng et al.
Cancer Survival Analysis via Zero-shot Tumor Microenvironment Segmentation on Low-resolution Whole Slide Pathology Images
Jiao Tang, Wei Shao, Daoqiang Zhang
Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence
Yuankai Luo, Lei Shi, Xiao-Ming Wu
Can Class-Priors Help Single-Positive Multi-Label Learning?
Biao Liu, Ning Xu, Jie Wang et al.
Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
Peijie Dong, Zhenheng Tang, Xiang Liu et al.
Can DBNNs Be Robust to Environmental Noise for Resource-constrained Scenarios?
Wendong Zheng, Junyang Chen, Husheng Guo et al.
Can Dependencies Induced by LLM-Agent Workflows Be Trusted?
Yu Yao, Yiliao (Lia) Song, Yian Xie et al.
Can Diffusion Models Disentangle? A Theoretical Perspective
Liming Wang, Muhammad Jehanzeb Mirza, Yishu Gong et al.
Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?
Yujin Han, Andi Han, Wei Huang et al.
Can DPO Learn Diverse Human Values? A Theoretical Scaling Law
Shawn Im, Sharon Li
CanFields: Consolidating Diffeomorphic Flows for Non-Rigid 4D Interpolation from Arbitrary-Length Sequences
Miaowei Wang, Changjian Li, Amir Vaxman
Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective
Andrew Jesson, Nicolas Beltran-Velez, David Blei
Can Generative Geospatial Diffusion Models Excel as Discriminative Geospatial Foundation Models?
Yuru Jia, Valerio Marsocci, Ziyang Gong et al.
Can In-context Learning Really Generalize to Out-of-distribution Tasks?
Qixun Wang, Yifei Wang, Xianghua Ying et al.
Can Knowledge be Transferred from Unimodal to Multimodal? Investigating the Transitivity of Multimodal Knowledge Editing
Lingyong Fang, Xinzhong Wang, Depeng Wang et al.
Can Knowledge Editing Really Correct Hallucinations?
Baixiang Huang, Canyu Chen, Xiongxiao Xu et al.
Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark
Hanlei Zhang, Zhuohang Li, Hua Xu et al.
Can Large Language Models Master Complex Card Games?
Wei Wang, Fuqing Bie, Junzhe Chen et al.
Can Large Language Models Understand Intermediate Representations in Compilers?
Hailong Jiang, Jianfeng Zhu, Yao Wan et al.
Can Large Language Models Understand Symbolic Graphics Programs?
Zeju Qiu, Weiyang Liu, Haiwen Feng et al.
Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind
Qingmei Li, Yang Zhang, Zurong Mai et al.
Can Large Vision-Language Models Correct Semantic Grounding Errors By Themselves?
Yuan-Hong Liao, Rafid Mahmood, Sanja Fidler et al.
CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning
Xuankun Rong, Jianshu Zhang, Kun He et al.
Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs
Guiyao Tie, Zenghui Yuan, Zeli Zhao et al.
Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
Chenglei Si, Diyi Yang, Tatsunori Hashimoto
Can LLM Simulations Truly Reflect Humanity? A Deep Dive
Qian Wang, Zhenheng Tang, Bingsheng He
Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation
Qijiong Liu, Jieming Zhu, Lu Fan et al.
Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?
Seth Aycock, David Stap, Di Wu et al.
Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
Tianle Zhang, Wanlong Fang, Jonathan Woo et al.
Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?
Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.
Can LLMs Solve Longer Math Word Problems Better?
Xin Xu, Tong Xiao, Zitong Chao et al.
Can LLMs Understand Time Series Anomalies?
Zihao Zhou, Rose Yu
Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?
Yijie Hu, Zihao Zhou, Kaizhu Huang et al.
Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?
Apratim Bhattacharyya, Bicheng Xu, Sanjay Haresh et al.
Can NeRFs "See" without Cameras?
Chaitanya Amballa, Yu-Lin Wei, Sattwik Basu et al.
Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model
Siyu Chen, Beining Wu, Miao Lu et al.
Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs
Haoming Yang, Ke Ma, Xiaojun Jia et al.
Can One Modality Model Synergize Training of Other Modality Models?
Jae-Jun Lee, Sung Whan Yoon
Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers
Lokesh Veeramacheneni, Moritz Wolter, Hilde Kuehne et al.
CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation
Xiangyang Luo, Ye Zhu, Yunfei Liu et al.
Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective
Jiawei Huang, Bingcong Li, Christoph Dann et al.
Can Text-to-Video Generation Help Video-Language Alignment?
Luca Zanella, Massimiliano Mancini, Willi Menapace et al.
Can Textual Gradient Work in Federated Learning?
Minghui Chen, Ruinan Jin, Wenlong Deng et al.
Can Transformers Do Enumerative Geometry?
Baran Hashemi, Roderic Corominas, Alessandro Giacchetto