ICML 2024 Papers
2,635 papers found • Page 52 of 53
Unveiling the Dynamics of Information Interplay in Supervised Learning
Kun Song, Zhiquan Tan, Bochao Zou et al.
Unveiling the Potential of AI for Nanomaterial Morphology Prediction
Ivan Dubrovsky, Andrei Dmitrenko, Aleksey Dmitrenko et al.
UP2ME: Univariate Pre-training to Multivariate Fine-tuning as a General-purpose Framework for Multivariate Time Series Analysis
Yunhao Zhang, Minghao Liu, Shengyang Zhou et al.
UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers
Duo Peng, Qiuhong Ke, Jun Liu
UPOCR: Towards Unified Pixel-Level OCR Interface
Dezhi Peng, Zhenhua Yang, Jiaxin Zhang et al.
Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers
Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai et al.
Using AI Uncertainty Quantification to Improve Human Decision-Making
Laura Marusich, Jonathan Bakdash, Yan Zhou et al.
Using Left and Right Brains Together: Towards Vision and Language Planning
Jun Cen, Chenfei Wu, Xiao Liu et al.
Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs
Chandra Mouli Sekar, Danielle Robinson, Shima Alizadeh et al.
USTAD: Unified Single-model Training Achieving Diverse Scores for Information Retrieval
Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer et al.
Vague Prototype-Oriented Diffusion Model for Multi-Class Anomaly Detection
Yuxin Li, Yaoxuan Feng, Bo Chen et al.
Value-Evolutionary-Based Reinforcement Learning
Pengyi Li, Jianye Hao, Hongyao Tang et al.
Vanilla Bayesian Optimization Performs Great in High Dimensions
Carl Hvarfner, Erik Hellsten, Luigi Nardi
Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models
Tanmay Gautam, Youngsuk Park, Hao Zhou et al.
Variational Inference with Coverage Guarantees in Simulation-Based Inference
Yash Patel, Declan McNamara, Jackson Loper et al.
Variational Learning is Effective for Large Deep Networks
Yuesong Shen, Nico Daheim, Bai Cong et al.
Variational Linearized Laplace Approximation for Bayesian Deep Learning
Luis A. Ortega, Simon Rodriguez Santana, Daniel Hernández-Lobato
Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts
Hyunsu Kim, Ye Gon Kim, Hongseok Yang et al.
Variational Schrödinger Diffusion Models
Wei Deng, Weijian Luo, Yixin Tan et al.
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention
Zhen Qin, Weigao Sun, Dong Li et al.
Vectorized Conditional Neural Fields: A Framework for Solving Time-dependent Parametric Partial Differential Equations
Jan Hagnberger, Marimuthu Kalimuthu, Daniel Musekamp et al.
Vector Quantization Pretraining for EEG Time Series with Random Projection and Phase Alignment
Haokun Gui, Xiucheng Li, Xinyang Chen
Verification of Machine Unlearning is Fragile
Binchi Zhang, Zihan Chen, Cong Shen et al.
Verifying message-passing neural networks via topology-based bounds tightening
Christopher Hojny, Shiqiang Zhang, Juan Campos et al.
Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization
Yang Jin, Zhicheng Sun, Kun Xu et al.
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition
Hao Fei, Shengqiong Wu, Wei Ji et al.
VideoPoet: A Large Language Model for Zero-Shot Video Generation
Dan Kondratyuk, Lijun Yu, Xiuye Gu et al.
VideoPrism: A Foundational Visual Encoder for Video Understanding
Long Zhao, Nitesh Bharadwaj Gundavarapu, Liangzhe Yuan et al.
video-SALMONN: Speech-Enhanced Audio-Visual Large Language Models
Guangzhi Sun, Wenyi Yu, Changli Tang et al.
Viewing Transformers Through the Lens of Long Convolutions Layers
Itamar Zimerman, Lior Wolf
VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and Proprioception
Zhaoliang Wan, Yonggen Ling, Senlin Yi et al.
ViP: A Differentially Private Foundation Model for Computer Vision
Yaodong Yu, Maziar Sanjabi, Yi Ma et al.
VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual Context
Yunxin Li, Baotian Hu, Haoyuan Shi et al.
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
Lianghui Zhu, Bencheng Liao, Qian Zhang et al.
Vision Transformers as Probabilistic Expansion from Learngene
Qiufeng Wang, Xu Yang, Haokun Chen et al.
Visual Representation Learning with Stochastic Frame Prediction
Huiwon Jang, Dongyoung Kim, Junsu Kim et al.
Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models
Jinhao Li, Haopeng Li, Sarah Erfani et al.
Visual Transformer with Differentiable Channel Selection: An Information Bottleneck Inspired Approach
Yancheng Wang, Ping Li, Yingzhen Yang
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
Anahita Baninajjar, Ahmed Rezine, Amir Aminifar
Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions
Yongqiang Cai
VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language Model
Pengying Wu, Yao Mu, Bingxian Wu et al.
VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling
Siyuan Li, Zedong Wang, Zicheng Liu et al.
WARM: On the Benefits of Weight Averaged Reward Models
Alexandre Rame, Nino Vieillard, Léonard Hussenot et al.
Wasserstein Wormhole: Scalable Optimal Transport Distance with Transformer
Doron Haviv, Russell Kunes, Thomas Dougherty et al.
Watermarks in the Sand: Impossibility of Strong Watermarking for Language Models
Hanlin Zhang, Benjamin Edelman, Danilo Francati et al.
Watermark Stealing in Large Language Models
Nikola Jovanović, Robin Staab, Martin Vechev
WAVES: Benchmarking the Robustness of Image Watermarks
Bang An, Mucong Ding, Tahseen Rabbani et al.
Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
Zakhar Shumaylov, Jeremy Budd, Subhadip Mukherjee et al.
Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation
Pei Liu, Luping Ji
Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
Collin Burns, Pavel Izmailov, Jan Kirchner et al.