Poster Papers
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks
Yue Cao, Tianlin Li, Xiaofeng Cao et al.
IReNe: Instant Recoloring of Neural Radiance Fields
Alessio Mazzucchelli, Adrian Garcia-Garcia, Elena Garces et al.
IRGen: Generative Modeling for Image Retrieval
Yidan Zhang, Ting Zhang, Dong Chen et al.
IRSAM: Advancing Segment Anything Model for Infrared Small Target Detection
Mingjin Zhang, Yuchun Wang, Jie Guo et al.
Is attention required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability
Ivan Lee, Nan Jiang, Taylor Berg-Kirkpatrick
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
Shusheng Xu, Wei Fu, Jiaxuan Gao et al.
Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving?
Zhiqi Li, Zhiding Yu, Shiyi Lan et al.
Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
Mira Juergens, Nis Meinert, Viktor Bengs et al.
Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video
Shashank Venkataramanan, Mamshad Nayeem Rizve, Joao Carreira et al.
Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective
Fabian Falck, Ziyu Wang, Christopher Holmes
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
Lei Zhao, Mengdi Wang, Yu Bai
Is Kernel Prediction More Powerful than Gating in Convolutional Neural Networks?
Lorenz K. Muller
Isometric Representation Learning for Disentangled Latent Space of Diffusion Models
Jaehoon Hahm, Junho Lee, Sunghyun Kim et al.
Isomorphic Pruning for Vision Models
Gongfan Fang, Xinyin Ma, Michael Bi Mi et al.
Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images
Jacopo Bonato, Marco Cotogni, Luigi Sabetta
Is Self-Repair a Silver Bullet for Code Generation?
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang et al.
Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?
Huy Nguyen, Pedram Akbarian, Nhat Ho
Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching
Aleksandar Makelov, Georg Lange, Atticus Geiger et al.
Is user feedback always informative? Retrieval Latent Defending for Semi-Supervised Domain Adaptation without Source Data
Junha Song, Tae Soo Kim, Junha Kim et al.
Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?
Hanxin Zhu, Tianyu He, Xin Li et al.
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
Tara Akhound-Sadegh, Jarrid Rector-Brooks, Joey Bose et al.
Iterated Learning Improves Compositionality in Large Vision-Language Models
Chenhao Zheng, Jieyu Zhang, Aniruddha Kembhavi et al.
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
Banghua Zhu, Michael Jordan, Jiantao Jiao
Iterative Ensemble Training with Anti-Gradient Control for Mitigating Memorization in Diffusion Models
Xiao Liu, Xiaoliu Guan, Yu Wu et al.
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint
Wei Xiong, Hanze Dong, Chenlu Ye et al.
Iterative Regularized Policy Optimization with Imperfect Demonstrations
Xudong Gong, Dawei Feng, Kele Xu et al.
Iterative Search Attribution for Deep Neural Networks
Zhiyu Zhu, Huaming Chen, Xinyi Wang et al.
Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting
Aleksei Ustimenko, Aleksandr Beznosikov
iToF-flow-based High Frame Rate Depth Imaging
Yu Meng, Zhou Xue, Xu Chang et al.
It's All About Your Sketch: Democratising Sketch Control in Diffusion Models
Subhadeep Koley, Ayan Kumar Bhunia, Deeptanshu Sekhri et al.
It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition
Chen Chen, Ruizhe Li, Yuchen Hu et al.
ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation
Yuyuan Liu, Yuanhong Chen, Hu Wang et al.
IVTP: Instruction-guided Visual Token Pruning for Large Vision-Language Models
Kai Huang, Hao Zou, Ye Xi et al.
IW-GAE: Importance weighted group accuracy estimation for improved calibration and model selection in unsupervised domain adaptation
Taejong Joo, Diego Klabjan
Jacobian Regularizer-based Neural Granger Causality
Wanqi Zhou, Shuanghao Bai, Shujian Yu et al.
JDEC: JPEG Decoding via Enhanced Continuous Cosine Coefficients
Woo Kyoung Han, Sunghoon Im, Jaedeok Kim et al.
JDT3D: Addressing the Gaps in LiDAR-Based Tracking-by-Attention
Brian Cheong, Jiachen Zhou, Steven Waslander
JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation
Yu Zeng, Vishal M. Patel, Haochen Wang et al.
JoAPR: Cleaning the Lens of Prompt Learning for Vision-Language Models
Yuncheng Guo, Xiaodong Gu
Joint2Human: High-Quality 3D Human Generation via Compact Spherical Embedding of 3D Joints
Muxin Zhang, Qiao Feng, Zhuo Su et al.
Joint Composite Latent Space Bayesian Optimization
Natalie Maus, Zhiyuan Jerry Lin, Maximilian Balandat et al.
JointDreamer: Ensuring Geometry Consistency and Text Congruence in Text-to-3D Generation via Joint Score Distillation
ChenHan Jiang, Yihan Zeng, Tianyang Hu et al.
Jointly-Learned Exit and Inference for a Dynamic Neural Network
Florence Regol, Joud Chataoui, Mark Coates
Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment
Alireza Ganjdanesh, Shangqian Gao, Heng Huang
Jointly Training Large Autoregressive Multimodal Models
Emanuele Aiello, Lili Yu, Yixin Nie et al.
JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling
Jingyang Zhang, Shiwei Li, Yuanxun Lu et al.
Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer
Hyeongjin Nam, Daniel Jung, Gyeongsik Moon et al.
Joint RGB-Spectral Decomposition Model Guided Image Enhancement in Mobile Photography
Kailai Zhou, Lijing Cai, Yibo Wang et al.
JointSQ: Joint Sparsification-Quantization for Distributed Learning
Weiying Xie, Haowei Li, Jitao Ma et al.
Joint-Task Regularization for Partially Labeled Multi-Task Learning
Kento Nishi, Junsik Kim, Wanhua Li et al.