All Papers

34,598 papers found • Page 547 of 692

How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model

Umberto Tomasini, Matthieu Wyart

ICML 2024 (spotlight) · arXiv:2404.10727 · 7 citations

How Does Goal Relabeling Improve Sample Efficiency?

Sirui Zheng, Chenjia Bai, Zhuoran Yang et al.

ICML 2024

How Does Unlabeled Data Provably Help Out-of-Distribution Detection?

Xuefeng Du, Zhen Fang, Ilias Diakonikolas et al.

ICLR 2024 · arXiv:2402.03502 · 34 citations

How do Language Models Bind Entities in Context?

Jiahai Feng, Jacob Steinhardt

ICLR 2024 · arXiv:2310.17191 · 69 citations

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

Ryan Liu, Theodore R Sumers, Ishita Dasgupta et al.

ICML 2024 · arXiv:2402.07282 · 28 citations

How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?

Hongkang Li, Meng Wang, Songtao Lu et al.

ICML 2024 · arXiv:2402.15607 · 34 citations

How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations

Tianyu Guo, Wei Hu, Song Mei et al.

ICLR 2024 · arXiv:2310.10616 · 77 citations

How do Transformers Perform In-Context Autoregressive Learning?

Michael Sander, Raja Giryes, Taiji Suzuki et al.

ICML 2024

How Far Can a 1-Pixel Camera Go? Solving Vision Tasks using Photoreceptors and Computationally Designed Visual Morphology

Andrei Atanov, Rishubh Singh, Jiawei Fu et al.

ECCV 2024

How Far Can Fairness Constraints Help Recover From Biased Data?

Mohit Sharma, Amit Jayant Deshpande

ICML 2024 · arXiv:2312.10396 · 6 citations

How Far Can We Compress Instant-NGP-Based NeRF?

Yihang Chen, Qianyi Wu, Mehrtash Harandi et al.

CVPR 2024 · arXiv:2406.04101 · 34 citations

How Flawed Is ECE? An Analysis via Logit Smoothing

Muthu Chidambaram, Holden Lee, Colin McSwiggen et al.

ICML 2024 · arXiv:2402.10046 · 4 citations

How Free is Parameter-Free Stochastic Optimization?

Amit Attia, Tomer Koren

ICML 2024 (spotlight) · arXiv:2402.03126 · 11 citations

How Graph Neural Networks Learn: Lessons from Training Dynamics

Chenxiao Yang, Qitian Wu, David Wipf et al.

ICML 2024 · arXiv:2310.05105 · 2 citations

How Interpretable Are Interpretable Graph Neural Networks?

Yongqiang Chen, Yatao Bian, Bo Han et al.

ICML 2024 · arXiv:2406.07955 · 15 citations

How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models

Pascal Chang, Jingwei Tang, Markus Gross et al.

ICLR 2024 (oral) · arXiv:2504.03072 · 35 citations

How Language Model Hallucinations Can Snowball

Muru Zhang, Ofir Press, William Merrill et al.

ICML 2024 · arXiv:2305.13534 · 378 citations

How Learning by Reconstruction Produces Uninformative Features For Perception

Randall Balestriero, Yann LeCun

ICML 2024

How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?

Jingfeng Wu, Difan Zou, Zixiang Chen et al.

ICLR 2024 (spotlight) · arXiv:2310.08391 · 89 citations

How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs

Haoqin Tu, Chenhang Cui, Zijun Wang et al.

ECCV 2024 · arXiv:2311.16101 · 105 citations

How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing: The Curses of Symmetry and Initialization

Nuoya Xiong, Lijun Ding, Simon Du

ICLR 2024 (spotlight) · arXiv:2310.01769 · 21 citations

How Private are DP-SGD Implementations?

Lynn Chua, Badih Ghazi, Pritish Kamath et al.

ICML 2024 · arXiv:2403.17673 · 22 citations

How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data

Mihaela Stoian, Salijona Dyrmishi, Maxime Cordy et al.

ICLR 2024 · arXiv:2402.04823 · 27 citations

How Smooth Is Attention?

Valérie Castin, Pierre Ablin, Gabriel Peyré

ICML 2024 · arXiv:2312.14820 · 29 citations

How Spurious Features are Memorized: Precise Analysis for Random and NTK Features

Simone Bombari, Marco Mondelli

ICML 2024 · arXiv:2305.12100 · 9 citations

HowToCaption: Prompting LLMs to Transform Video Annotations at Scale

Nina Shvetsova, Anna Kukleva, Xudong Hong et al.

ECCV 2024 · arXiv:2310.04900 · 33 citations

How to Capture Higher-order Correlations? Generalizing Matrix Softmax Attention to Kronecker Computation

Josh Alman, Zhao Song

ICLR 2024 (spotlight) · arXiv:2310.04064 · 49 citations

How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions

Lorenzo Pacchiardi, Alex Chan, Sören Mindermann et al.

ICLR 2024 · arXiv:2309.15840 · 79 citations

How to Configure Good In-Context Sequence for Visual Question Answering

Li Li, Jiawei Peng, Huiyi Chen et al.

CVPR 2024 · arXiv:2312.01571 · 38 citations

How to Escape Sharp Minima with Random Perturbations

Kwangjun Ahn, Ali Jadbabaie, Suvrit Sra

ICML 2024 · arXiv:2305.15659 · 14 citations

How to Evaluate Behavioral Models

Greg d'Eon, Sophie Greenwood, Kevin Leyton-Brown et al.

AAAI 2024 · arXiv:2306.04778 · 1 citation

How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection

Yiyang Yao, Peng Liu, Tiancheng Zhao et al.

AAAI 2024 · arXiv:2308.13177 · 17 citations

How to Explore with Belief: State Entropy Maximization in POMDPs

Riccardo Zamboni, Duilio Cirino, Marcello Restelli et al.

ICML 2024 · arXiv:2406.02295 · 6 citations

How to Fine-Tune Vision Models with SGD

Ananya Kumar, Ruoqi Shen, Sebastien Bubeck et al.

ICLR 2024 · arXiv:2211.09359 · 36 citations

How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval?

Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain et al.

CVPR 2024 · arXiv:2403.07203 · 16 citations

How to Leverage Diverse Demonstrations in Offline Imitation Learning

Sheng Yue, Jiani Liu, Xingyuan Hua et al.

ICML 2024 · arXiv:2405.17476 · 7 citations

How to Make Cross Encoder a Good Teacher for Efficient Image-Text Retrieval?

Yuxin Chen, Zongyang Ma, Ziqi Zhang et al.

CVPR 2024 · arXiv:2407.07479 · 4 citations

How to Make Knockout Tournaments More Popular?

Juhi Chaudhary, Hendrik Molter, Meirav Zehavi

AAAI 2024 · arXiv:2309.09967 · 5 citations

How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization

Andrew Lowy, Jonathan Ullman, Stephen Wright

ICML 2024 · arXiv:2402.11173 · 11 citations

How to Overcome Curse-of-Dimensionality for Out-of-Distribution Detection?

Soumya Suvra Ghosal, Yiyou Sun, Yixuan Li

AAAI 2024 · arXiv:2312.14452 · 22 citations

How to Protect Copyright Data in Optimization of Large Language Models?

Timothy Chu, Zhao Song, Chiwun Yang

AAAI 2024 · arXiv:2308.12247 · 40 citations

How to Trace Latent Generative Model Generated Images without Artificial Watermark?

Zhenting Wang, Vikash Sehwag, Chen Chen et al.

ICML 2024 · arXiv:2405.13360 · 20 citations

How to Trade Off the Quantity and Capacity of Teacher Ensemble: Learning Categorical Distribution to Stochastically Employ a Teacher for Distillation

Zixiang Ding, Guoqing Jiang, Shuai Zhang et al.

AAAI 2024 · 2 citations

How to Train Neural Field Representations: A Comprehensive Study and Benchmark

Samuele Papa, Riccardo Valperga, David Knigge et al.

CVPR 2024 · arXiv:2312.10531 · 11 citations

How to Train the Teacher Model for Effective Knowledge Distillation

Shayan Mohajer Hamidi, Xizhen Deng, Renhao Tan et al.

ECCV 2024 · arXiv:2407.18041 · 13 citations

How to Use the Metropolis Algorithm for Multi-Objective Optimization?

Weijie Zheng, Mingfeng Li, Renzhong Deng et al.

AAAI 2024 · 10 citations

How Transformers Learn Causal Structure with Gradient Descent

Eshaan Nichani, Alex Damian, Jason Lee

ICML 2024 · arXiv:2402.14735 · 102 citations

How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers

Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson et al.

ICML 2024 (spotlight) · arXiv:2402.06323 · 10 citations

How Universal Polynomial Bases Enhance Spectral Graph Neural Networks: Heterophily, Over-smoothing, and Over-squashing

Keke Huang, Yu Guang Wang, Ming Li et al.

ICML 2024 · arXiv:2405.12474 · 55 citations

How Video Meetings Change Your Expression

Sumit Sarin, Utkarsh Mall, Purva Tendulkar et al.

ECCV 2024 · arXiv:2406.00955