Stefanie Jegelka

46 Papers · 219 Total Citations

Papers (46)

Parallel Streaming Wasserstein Barycenters

NeurIPS 2017 · arXiv
92 citations

Fast Mixing Markov Chains for Strongly Rayleigh Measures, DPPs, and Constrained Sampling

NeurIPS 2016 · arXiv
39 citations

On the Emergence of Position Bias in Transformers

ICML 2025
33 citations

Polynomial time algorithms for dual volume sampling

NeurIPS 2017 · arXiv
31 citations

On the hardness of learning under symmetries

ICLR 2024
12 citations

Learning Diffusion Models with Flexible Representation Guidance

NeurIPS 2025 · arXiv
5 citations

Beyond Interpretability: The Gains of Feature Monosemanticity on Model Robustness

ICLR 2025
4 citations

Learning Linear Attention in Polynomial Time

NeurIPS 2025
3 citations

Deep Metric Learning via Lifted Structured Feature Embedding

CVPR 2016
0 citations

Cooperative Graphical Models

NeurIPS 2016
0 citations

Robust Contrastive Learning Against Noisy Views

CVPR 2022 · arXiv
0 citations

Simplicity Bias via Global Convergence of Sharpness Minimization

ICML 2024
0 citations

A Universal Class of Sharpness-Aware Minimization Algorithms

ICML 2024
0 citations

Sample Complexity Bounds for Estimating Probability Divergences under Invariances

ICML 2024
0 citations

Position: Future Directions in the Theory of Graph Machine Learning

ICML 2024
0 citations

Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?

ICML 2024
0 citations

Geometric Algorithms for Neural Combinatorial Optimization with Constraints

NeurIPS 2025 · arXiv
0 citations

Deep Metric Learning via Facility Location

CVPR 2017 · arXiv
0 citations

Expressive Sign Equivariant Networks for Spectral Geometric Learning

NeurIPS 2023
0 citations

What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models

NeurIPS 2023
0 citations

Limits, approximation and size transferability for GNNs on sparse graphs via graphops

NeurIPS 2023
0 citations

The Exact Sample Complexity Gain from Invariances for Kernel Regression

NeurIPS 2023
0 citations

Gaussian quadrature for matrix inverse forms with applications

ICML 2016
0 citations

Fast DPP Sampling for Nyström with Application to Kernel Methods

ICML 2016
0 citations

Robust Budget Allocation via Continuous Submodular Functions

ICML 2017
0 citations

Max-value Entropy Search for Efficient Bayesian Optimization

ICML 2017
0 citations

Batched High-dimensional Bayesian Optimization via Structural Kernel Learning

ICML 2017
0 citations

Representation Learning on Graphs with Jumping Knowledge Networks

ICML 2018
0 citations

Learning Generative Models across Incomparable Spaces

ICML 2019
0 citations

ResNet with one-neuron hidden layers is a Universal Approximator

NeurIPS 2018
0 citations

Provable Variational Inference for Constrained Log-Submodular Models

NeurIPS 2018
0 citations

Exponentiated Strongly Rayleigh Distributions

NeurIPS 2018
0 citations

Adversarially Robust Optimization with Gaussian Processes

NeurIPS 2018
0 citations

Distributionally Robust Optimization and Generalization in Kernel Methods

NeurIPS 2019
0 citations

Flexible Modeling of Diversity with Strongly Log-Concave Distributions

NeurIPS 2019
0 citations

Adaptive Sampling for Stochastic Risk-Averse Learning

NeurIPS 2020
0 citations

Debiased Contrastive Learning

NeurIPS 2020
0 citations

Testing Determinantal Point Processes

NeurIPS 2020
0 citations

IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method

NeurIPS 2020
0 citations

What training reveals about neural network complexity

NeurIPS 2021
0 citations

Can contrastive learning avoid shortcut solutions?

NeurIPS 2021
0 citations

Measuring Generalization with Optimal Transport

NeurIPS 2021
0 citations

Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification

NeurIPS 2021
0 citations

Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks

NeurIPS 2022
0 citations

Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions

NeurIPS 2022
0 citations

On the generalization of learning algorithms that do not converge

NeurIPS 2022
0 citations