ICLR Papers

6,124 papers found • Page 122 of 123

Variational Bayesian Last Layers

James Harrison, John Willes, Jasper Snoek

ICLR 2024 spotlight · arXiv:2404.11599

Variational Inference for SDEs Driven by Fractional Noise

Rembert Daems, Manfred Opper, Guillaume Crevecoeur et al.

ICLR 2024 spotlight · arXiv:2310.12975 · 10 citations

VBH-GNN: Variational Bayesian Heterogeneous Graph Neural Networks for Cross-subject Emotion Recognition

Chenyu Liu, Xinliang Zhou, Zhengri Zhu et al.

ICLR 2024 oral

VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections

Dongqi Fu, Zhigang Hua, Yan Xie et al.

ICLR 2024 poster · arXiv:2403.16030

VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models

Zihao Zhu, Mingda Zhang, Shaokui Wei et al.

ICLR 2024 poster · arXiv:2309.16211

V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection

Yichao Shen, Zigang Geng, Yuhui Yuan et al.

ICLR 2024 poster · arXiv:2308.04409

VDT: General-purpose Video Diffusion Transformers via Mask Modeling

Haoyu Lu, Guoxing Yang, Nanyi Fei et al.

ICLR 2024 oral · arXiv:2305.13311

VeRA: Vector-based Random Matrix Adaptation

Dawid Kopiczko, Tijmen Blankevoort, Yuki Asano

ICLR 2024 poster · arXiv:2310.11454

VersVideo: Leveraging Enhanced Temporal Diffusion Models for Versatile Video Generation

Jinxi Xiang, Ricong Huang, Jun Zhang et al.

ICLR 2024 oral

VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks

Zhaomin Wu, Junyi Hou, Bingsheng He

ICLR 2024 poster · arXiv:2307.02040 · 7 citations

VFLAIR: A Research Library and Benchmark for Vertical Federated Learning

Tianyuan Zou, Zixuan Gu, Yu He et al.

ICLR 2024 poster · arXiv:2310.09827

ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation

Jiaming Liu, Senqiao Yang, Peidong Jia et al.

ICLR 2024 poster · arXiv:2306.04344

Video Decomposition Prior: Editing Videos Layer by Layer

Gaurav Shrivastava, Ser-Nam Lim, Abhinav Shrivastava

ICLR 2024 poster

Video Language Planning

Yilun Du, Sherry Yang, Pete Florence et al.

ICLR 2024 poster · arXiv:2310.10625 · 144 citations

Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation

Kimia Hamidieh, Haoran Zhang, Swami Sankaranarayanan et al.

ICLR 2024 spotlight · arXiv:2406.18562

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

İlker Kesen, Andrea Pedrotti, Mustafa Dogan et al.

ICLR 2024 oral · arXiv:2311.07022

Vision-by-Language for Training-Free Compositional Image Retrieval

Shyamgopal Karthik, Karsten Roth, Massimiliano Mancini et al.

ICLR 2024 poster · arXiv:2310.09291

Vision-Language Foundation Models as Effective Robot Imitators

Xinghang Li, Minghuan Liu, Hanbo Zhang et al.

ICLR 2024 spotlight · arXiv:2311.01378 · 310 citations

Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning

Juan Rocamonde, Victoriano Montesinos, Elvis Nava et al.

ICLR 2024 poster · arXiv:2310.12921 · 133 citations

Vision Transformers Need Registers

Timothée Darcet, Maxime Oquab, Julien Mairal et al.

ICLR 2024 poster · arXiv:2309.16588

Visual Data-Type Understanding does not emerge from scaling Vision-Language Models

Vishaal Udandarao, Max F. Burg, Samuel Albanie et al.

ICLR 2024 poster · arXiv:2310.08577

Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis

Hubert Siuzdak

ICLR 2024 poster · arXiv:2306.00814

VONet: Unsupervised Video Object Learning With Parallel U-Net Attention and Object-wise Sequential VAE

Haonan Yu, Wei Xu

ICLR 2024 oral · arXiv:2401.11110

VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs

Ling Yang, Ye Tian, Minkai Xu et al.

ICLR 2024 poster · arXiv:2308.02117

VQ-TR: Vector Quantized Attention for Time Series Forecasting

Kashif Rasul, Andrew Bennett, Pablo Vicente et al.

ICLR 2024 poster

Waxing-and-Waning: a Generic Similarity-based Framework for Efficient Self-Supervised Learning

Sheng Li, Chao Wu, Ao Li et al.

ICLR 2024 poster

Weaker MVI Condition: Extragradient Methods with Multi-Step Exploration

Yifeng Fan, Yongqiang Li, Bo Chen

ICLR 2024 poster

Weakly-supervised Audio Separation via Bi-modal Semantic Similarity

Tanvir Mahmud, Saeed Amizadeh, Kazuhito Koishida et al.

ICLR 2024 poster · arXiv:2404.01740

Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images

Hannah Kniesel, Leon Sick, Tristan Payer et al.

ICLR 2024 poster · arXiv:2508.00563 · 3 citations

Weatherproofing Retrieval for Localization with Generative AI and Geometric Consistency

Yannis Kalantidis, Mert Bulent SARIYILDIZ, Rafael Rezende et al.

ICLR 2024 poster · arXiv:2402.09237

WebArena: A Realistic Web Environment for Building Autonomous Agents

Shuyan Zhou, Frank F Xu, Hao Zhu et al.

ICLR 2024 poster · arXiv:2307.13854

What Algorithms can Transformers Learn? A Study in Length Generalization

Hattie Zhou, Arwen Bradley, Etai Littwin et al.

ICLR 2024 poster · arXiv:2310.16028

"What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection

Anshuman Chhabra, Peizhao Li, Prasant Mohapatra et al.

ICLR 2024 poster

What does automatic differentiation compute for neural networks?

Sejun Park, Sanghyuk Chun, Wonyeol Lee

ICLR 2024 spotlight

What does the Knowledge Neuron Thesis Have to do with Knowledge?

Jingcheng Niu, Andrew Liu, Zining Zhu et al.

ICLR 2024 spotlight · arXiv:2405.02421 · 47 citations

What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity

Gabryel Mason-Williams, Fredrik Dahlqvist

ICLR 2024 poster · 17 citations

What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning

Wei Liu, Weihao Zeng, Keqing He et al.

ICLR 2024 poster · arXiv:2312.15685

What Matters to You? Towards Visual Representation Alignment for Robot Learning

Thomas Tian, Chenfeng Xu, Masayoshi Tomizuka et al.

ICLR 2024 oral · arXiv:2310.07932

What's in a Prior? Learned Proximal Networks for Inverse Problems

Zhenghan Fang, Sam Buchanan, Jeremias Sulam

ICLR 2024 poster · arXiv:2310.14344 · 23 citations

What's In My Big Data?

Yanai Elazar, Akshita Bhagia, Ian Magnusson et al.

ICLR 2024 spotlight · arXiv:2310.20707

When can transformers reason with abstract symbols?

Enric Boix-Adserà, Omid Saremi, Emmanuel Abbe et al.

ICLR 2024 poster

When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations

Aleksandar Petrov, Philip Torr, Adel Bibi

ICLR 2024 poster · arXiv:2310.19698 · 38 citations

When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method

Biao Zhang, Zhongtao Liu, Colin Cherry et al.

ICLR 2024 poster · arXiv:2402.17193

When Semantic Segmentation Meets Frequency Aliasing

Linwei Chen, Lin Gu, Ying Fu

ICLR 2024 poster · arXiv:2403.09065 · 21 citations

When should we prefer Decision Transformers for Offline Reinforcement Learning?

Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard et al.

ICLR 2024 poster · arXiv:2305.14550

Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs

Qihan Ren, Jiayang Gao, Wen Shen et al.

ICLR 2024 poster

Whittle Index with Multiple Actions and State Constraint for Inventory Management

Chuheng Zhang, Xiangsen Wang, Wei Jiang et al.

ICLR 2024 poster

Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models

Ziyu Wang, Lejun Min, Gus Xia

ICLR 2024 spotlight · arXiv:2405.09901 · 24 citations

Why is SAM Robust to Label Noise?

Christina Baek, J. Zico Kolter, Aditi Raghunathan

ICLR 2024 poster · arXiv:2405.03676

WildChat: 1M ChatGPT Interaction Logs in the Wild

Wenting Zhao, Xiang Ren, Jack Hessel et al.

ICLR 2024 oral · arXiv:2405.01470