All Papers
The Emergence of Reproducibility and Consistency in Diffusion Models
Huijie Zhang, Jinfan Zhou, Yifu Lu et al.
The Entropy Enigma: Success and Failure of Entropy Minimization
Ori Press, Ravid Shwartz-Ziv, Yann LeCun et al.
The Expected Loss of Preconditioned Langevin Dynamics Reveals the Hessian Rank
Amitay Bar, Rotem Mulayoff, Tomer Michaeli et al.
The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks
Aaron Spieler, Nasim Rahaman, Georg Martius et al.
The Expressive Power of Low-Rank Adaptation
Yuchen Zeng, Kangwook Lee
The Expressive Power of Path-Based Graph Neural Networks
Caterina Graziani, Tamara Drucks, Fabian Jogl et al.
The Expressive Power of Transformers with Chain of Thought
William Merrill, Ashish Sabharwal
The Fabrication of Reality and Fantasy: Scene Generation with LLM-Assisted Prompt Interpretation
Yi Yao, Chan-Feng Hsu, Jhe-Hao Lin et al.
The False Promise of Imitating Proprietary Language Models
Arnav Gudibande, Eric Wallace, Charlie Snell et al.
The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models?
Qinyu Zhao, Ming Xu, Kartik Gupta et al.
The Fundamental Limits of Least-Privilege Learning
Theresa Stadler, Bogdan Kulynych, Michael Gastpar et al.
The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers et al.
The Generalization Gap in Offline Reinforcement Learning
Ishita Mediratta, Qingfei You, Minqi Jiang et al.
The Generative AI Paradox: “What It Can Create, It May Not Understand”
Peter West, Ximing Lu, Nouha Dziri et al.
The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective
Chi-Heng Lin, Chiraag Kaushik, Eva Dyer et al.
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
Cheng Li, Jindong Wang, Yixuan Zhang et al.
The Hard Positive Truth about Vision-Language Compositionality
Amita Kamath, Cheng-Yu Hsieh, Kai-Wei Chang et al.
The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry
Michael Zhang, Kush Bhatia, Hermann Kumbong et al.
The Hidden Language of Diffusion Models
Hila Chefer, Oran Lang, Mor Geva et al.
The Human-AI Substitution game: active learning from a strategic labeler
Tom Yan, Chicheng Zhang
The Illusion of State in State-Space Models
William Merrill, Jackson Petty, Ashish Sabharwal
The importance of feature preprocessing for differentially private linear optimization
Ziteng Sun, Ananda Theertha Suresh, Aditya Krishna Menon
The Irrelevance of Influencers: Information Diffusion with Re-Activation and Immunity Lasts Exponentially Long on Social Network Models
The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting — An Analytical Model
Daniel Goldfarb, Itay Evron, Nir Weinberger et al.
The Linear Representation Hypothesis and the Geometry of Large Language Models
Kiho Park, Yo Joong Choe, Victor Veitch
The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing
Blaise Delattre, Alexandre Araujo, Quentin Barthélemy et al.
The LLM Surgeon
Tycho van der Ouderaa, Markus Nagel, Mart van Baalen et al.
The Logic of Doxastic Strategies
Junli Jiang, Pavel Naumov
The Lottery Ticket Hypothesis in Denoising: Towards Semantic-Driven Initialization
Jiafeng Mao, Xueting Wang, Kiyoharu Aizawa
The Manga Whisperer: Automatically Generating Transcriptions for Comics
Ragav Sachdeva, Andrew Zisserman
The Marginal Value of Momentum for Small Learning Rate SGD
Runzhe Wang, Sadhika Malladi, Tianhao Wang et al.
The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm
Giseung Park, Woohyeon Byeon, Seongmin Kim et al.
The mechanistic basis of data dependence and abrupt learning in an in-context classification task
Gautam Reddy Nallamala
The Merit of River Network Topology for Neural Flood Forecasting
Nikolas Kirschstein, Yixuan Sun
The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes
Myeongseob Ko, Feiyang Kang, Weiyan Shi et al.
The Moderating Effect of Instant Runoff Voting
Kiran Tomlinson, Johan Ugander, Jon Kleinberg
The More You See in 2D the More You Perceive in 3D
Xinyang Han, Zelin Gao, Angjoo Kanazawa et al.
The Need for Speed: Pruning Transformers with One Recipe
Samir Khaki, Konstantinos Plataniotis
The Neglected Tails in Vision-Language Models
Shubham Parashar, Tian Liu, Zhiqiu Lin et al.
The Nerfect Match: Exploring NeRF Features for Visual Localization
Qunjie Zhou, Maxim Maximov, Or Litany et al.
The Non-linear $F$-Design and Applications to Interactive Learning
Alekh Agarwal, Jian Qian, Alexander Rakhlin et al.
The optimality of kernel classifiers in Sobolev space
Jianfa Lai, Zhifan Li, Dongming Huang et al.
Theoretical Analysis of Learned Database Operations under Distribution Shift through Distribution Learnability
Sepanta Zeighami, Cyrus Shahabi
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach
Shaopeng Fu, Di Wang
Theoretical and Empirical Analysis of Cost-Function Merging for Implicit Hitting Set WCSP Solving
Javier Larrosa, Conrado Martínez, Emma Rollon
Theoretical Aspects of Generating Instances with Unique Solutions: Pre-assignment Models for Unique Vertex Cover
Takashi Horiyama, Yasuaki Kobayashi, Hirotaka Ono et al.
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
Tom Huix, Anna Korba, Alain Oliviero Durmus et al.
Theoretical insights for diffusion guidance: A case study for Gaussian mixture models
Yuchen Wu, Minshuo Chen, Zihao Li et al.
Theoretically Achieving Continuous Representation of Oriented Bounding Boxes
Zikai Xiao, Guo-Ye Yang, Xue Yang et al.
Theoretical Understanding of Learning from Adversarial Perturbations
Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki