ICLR 2024 Papers
2,297 papers found • Page 46 of 46
Weakly-supervised Audio Separation via Bi-modal Semantic Similarity
Tanvir Mahmud, Saeed Amizadeh, Kazuhito Koishida et al.
Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images
Hannah Kniesel, Leon Sick, Tristan Payer et al.
Weatherproofing Retrieval for Localization with Generative AI and Geometric Consistency
Yannis Kalantidis, Mert Bulent Sariyildiz, Rafael Rezende et al.
WebArena: A Realistic Web Environment for Building Autonomous Agents
Shuyan Zhou, Frank F Xu, Hao Zhu et al.
What Algorithms can Transformers Learn? A Study in Length Generalization
Hattie Zhou, Arwen Bradley, Etai Littwin et al.
"What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection
Anshuman Chhabra, Peizhao Li, Prasant Mohapatra et al.
What does automatic differentiation compute for neural networks?
Sejun Park, Sanghyuk Chun, Wonyeol Lee
What does the Knowledge Neuron Thesis Have to do with Knowledge?
Jingcheng Niu, Andrew Liu, Zining Zhu et al.
What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity
Gabryel Mason-Williams, Fredrik Dahlqvist
What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning
Wei Liu, Weihao Zeng, Keqing He et al.
What Matters to You? Towards Visual Representation Alignment for Robot Learning
Thomas Tian, Chenfeng Xu, Masayoshi Tomizuka et al.
What's in a Prior? Learned Proximal Networks for Inverse Problems
Zhenghan Fang, Sam Buchanan, Jeremias Sulam
What's In My Big Data?
Yanai Elazar, Akshita Bhagia, Ian Magnusson et al.
When can transformers reason with abstract symbols?
Enric Boix-Adserà, Omid Saremi, Emmanuel Abbe et al.
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov, Philip Torr, Adel Bibi
When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method
Biao Zhang, Zhongtao Liu, Colin Cherry et al.
When Semantic Segmentation Meets Frequency Aliasing
Linwei Chen, Lin Gu, Ying Fu
When should we prefer Decision Transformers for Offline Reinforcement Learning?
Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard et al.
Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs
Qihan Ren, Jiayang Gao, Wen Shen et al.
Whittle Index with Multiple Actions and State Constraint for Inventory Management
Chuheng Zhang, Xiangsen Wang, Wei Jiang et al.
Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models
Ziyu Wang, Lejun Min, Gus Xia
Why is SAM Robust to Label Noise?
Christina Baek, J Kolter, Aditi Raghunathan
WildChat: 1M ChatGPT Interaction Logs in the Wild
Wenting Zhao, Xiang Ren, Jack Hessel et al.
WildFusion: Learning 3D-Aware Latent Diffusion Models in View Space
Katja Schwarz, Seung Wook Kim, Jun Gao et al.
Window Attention is Bugged: How not to Interpolate Position Embeddings
Daniel Bolya, Chaitanya Ryali, Judy Hoffman et al.
Win-Win: Training High-Resolution Vision Transformers from Two Windows
Vincent Leroy, Jerome Revaud, Thomas Lucas et al.
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Ziyang Luo, Can Xu, Pu Zhao et al.
WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
Can Xu, Qingfeng Sun, Kai Zheng et al.
WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series
Irina Rish, Kartik Ahuja, Mohammad Javad Darvishi Bayazi et al.
Workflow Discovery from Dialogues in the Low Data Regime
David Vazquez, Stefania Raimondo, Christopher Pal et al.
Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models
Pablo Pernías, Dominic Rampas, Mats L. Richter et al.
Xformer: Hybrid X-Shaped Transformer for Image Denoising
Jiale Zhang, Yulun Zhang, Jinjin Gu et al.
YaRN: Efficient Context Window Extension of Large Language Models
Bowen Peng, Jeffrey Quesnelle, Honglu Fan et al.
Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML
Robin van de Water, Hendrik Schmidt, Paul Elbers et al.
You Only Query Once: An Efficient Label-Only Membership Inference Attack
Yutong Wu, Han Qiu, Shangwei Guo et al.
Zero and Few-shot Semantic Parsing with Ambiguous Inputs
Elias Stengel-Eskin, Kyle Rawlins, Benjamin Van Durme
Zero Bubble (Almost) Pipeline Parallelism
Penghui Qi, Xinyi Wan, Guangxing Huang et al.
ZeRO++: Extremely Efficient Collective Communication for Large Model Training
Guanhua Wang, Heyang Qin, Sam Jacobs et al.
ZeroFlow: Scalable Scene Flow via Distillation
Kyle Vedder, Neehar Peri, Nathaniel Chodosh et al.
Zero-Mean Regularized Spectral Contrastive Learning: Implicitly Mitigating Wrong Connections in Positive-Pair Graphs
Xiong Zhou, Xianming Liu, Feilong Zhang et al.
Zero-Shot Continuous Prompt Transfer: Generalizing Task Semantics Across Language Models
Zijun Wu, Yongkang Wu, Lili Mou
Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models
Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya et al.
Zero-Shot Robustification of Zero-Shot Models
Dyah Adila, Changho Shin, Linrong Cai et al.
Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles
Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang
Zipformer: A faster and better encoder for automatic speech recognition
Zengwei Yao, Liyong Guo, Xiaoyu Yang et al.
ZipIt! Merging Models from Different Tasks without Training
George Stoica, Daniel Bolya, Jakob Bjorner et al.
Zoology: Measuring and Improving Recall in Efficient Language Models
Simran Arora, Sabri Eyuboglu, Aman Timalsina et al.