ICML 2024 Papers
2,635 papers found
Towards Theoretical Understanding of Learning Large-scale Dependent Data via Random Features
Chao Wang, Xin Bing, Xin He et al.
Towards Theoretical Understandings of Self-Consuming Generative Models
Shi Fu, Sen Zhang, Yingjie Wang et al.
Towards the Theory of Unsupervised Federated Learning: Non-asymptotic Analysis of Federated EM Algorithms
Ye Tian, Haolei Weng, Yang Feng
Towards Understanding Inductive Bias in Transformers: A View From Infinity
Itay Lavie, Guy Gur-Ari, Zohar Ringel
Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
Simone Bombari, Marco Mondelli
Towards Unified Multi-granularity Text Detection with Interactive Attention
Xingyu Wan, Chengquan Zhang, Pengyuan Lyu et al.
Trainable Transformer in Transformer
Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia et al.
Trained Random Forests Completely Reveal your Dataset
Julien Ferry, Ricardo Fukasawa, Timothée Pascal et al.
Training-Free Long-Context Scaling of Large Language Models
Chenxin An, Fei Huang, Jun Zhang et al.
Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization
Deokjae Lee, Hyun Oh Song, Kyunghyun Cho
Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning
Zhiheng Xi, Wenxiang Chen, Boyang Hong et al.
Transferable Facial Privacy Protection against Blind Face Restoration via Domain-Consistent Adversarial Obfuscation
Kui Zhang, Hang Zhou, Jie Zhang et al.
Transferring Knowledge From Large Foundation Models to Small Downstream Models
Shikai Qiu, Boran Han, Danielle Robinson et al.
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Tri Dao, Albert Gu
Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models
Akhil Kedia, Mohd Abbas Zaidi, Sushil Khyalia et al.
Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context
Xiang Cheng, Yuxin Chen, Suvrit Sra
Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape
Juno Kim, Taiji Suzuki
Transformers, parallel computation, and logarithmic depth
Clayton Sanford, Daniel Hsu, Matus Telgarsky
Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot
Zixuan Wang, Stanley Wei, Daniel Hsu et al.
Transforming and Combining Rewards for Aligning Large Language Models
Zihao Wang, Chirag Nagpal, Jonathan Berant et al.
Transitional Uncertainty with Layered Intermediate Predictions
Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning
Dongkwan Kim, Alice Oh
Translation Equivariant Transformer Neural Processes
Matthew Ashman, Cristiana Diaconu, Junhyuck Kim et al.
Transolver: A Fast Transformer Solver for PDEs on General Geometries
Haixu Wu, Huakun Luo, Haowen Wang et al.
Transport of Algebraic Structure to Latent Embeddings
Samuel Pfrommer, Brendon G. Anderson, Somayeh Sojoudi
TravelPlanner: A Benchmark for Real-World Planning with Language Agents
Jian Xie, Kai Zhang, Jiangjie Chen et al.
Triadic-OCD: Asynchronous Online Change Detection with Provable Robustness, Optimality, and Convergence
Yancheng Huang, Kai Yang, Zelin Zhu et al.
Triple Changes Estimator for Targeted Policies
Sina Akbari, Negar Kiyavash
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
Md Shamim Hussain, Mohammed Zaki, Dharmashankar Subramanian
Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning
Kyle Hsu, Jubayer Ibn Hamid, Kaylee Burns et al.
TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks
Zhiruo Wang, Graham Neubig, Daniel Fried
Truly No-Regret Learning in Constrained MDPs
Adrian Müller, Pragnya Alatur, Volkan Cevher et al.
Trustless Audits without Revealing Data or Models
Suppakit Waiwitlikhit, Ion Stoica, Yi Sun et al.
Trust Regions for Explanations via Black-Box Probabilistic Certification
Amit Dhurandhar, Swagatam Haldar, Dennis Wei et al.
Trust the Model Where It Trusts Itself - Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption
Bernd Frauenknecht, Artur Eisele, Devdutt Subhasish et al.
Trustworthy Actionable Perturbations
Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon
Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning
Zongmeng Zhang, Yufeng Shi, Jinhua Zhu et al.
TSLANet: Rethinking Transformers for Time Series Representation Learning
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen et al.
Tuning-free Estimation and Inference of Cumulative Distribution Function under Local Differential Privacy
Yi Liu, Qirui Hu, Linglong Kong
Tuning-Free Stochastic Optimization
Ahmed Khaled, Chi Jin
Turnstile $\ell_p$ leverage score sampling with applications
Alexander Munteanu, Simon Omlor
TVE: Learning Meta-attribution for Transferable Vision Explainer
Guanchu (Gary) Wang, Yu-Neng Chuang, Fan Yang et al.
Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning
Zhe Zhao, Pengkun Wang, HaiBin Wen et al.
Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu et al.
Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness
Guibin Zhang, Yanwei Yue, Kun Wang et al.
Two-sided Competing Matching Recommendation Markets With Quota and Complementary Preferences Constraints
Yuantong Li, Guang Cheng, Xiaowu Dai
Two-Stage Shadow Inclusion Estimation: An IV Approach for Causal Inference under Latent Confounding and Collider Bias
Baohong Li, Anpeng Wu, Ruoxuan Xiong et al.
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation
Zhenyu He, Guhao Feng, Shengjie Luo et al.
Two Tales of Single-Phase Contrastive Hebbian Learning
Rasmus Kjær Høier, Christopher Zach
Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data
Haitong Liu, Qiang Li, Hoi To Wai