ICLR Papers
6,124 papers found • Page 117 of 123
Symmetric Neural-Collapse Representations with Supervised Contrastive Loss: The Impact of ReLU and Batching
Ganesh Ramachandra Kini, Vala Vakilian, Tina Behnia et al.
Symmetric Single Index Learning
Aaron Zweig, Joan Bruna
Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for 3D Molecule Generation
Ameya Daigavane, Song Eun Kim, Mario Geiger et al.
Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control
Longtao Zheng, Rundong Wang, Xinrun Wang et al.
Synaptic Weight Distributions Depend on the Geometry of Plasticity
Roman Pogodin, Jonathan Cornford, Arna Ghosh et al.
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
Yuan Liu, Cheng Lin, Zijiao Zeng et al.
Synergistic Patch Pruning for Vision Transformer: Unifying Intra- & Inter-Layer Patch Importance
Yuyao Zhang, Lan Wei, Nikolaos Freris
TabR: Tabular Deep Learning Meets Nearest Neighbors
Yury Gorishniy, Ivan Rubachev, Nikolay Kartashev et al.
TAB: Temporal Accumulated Batch Normalization in Spiking Neural Networks
Haiyan Jiang, Vincent Zoonekynd, Giulia De Masi et al.
Tackling the Data Heterogeneity in Asynchronous Federated Learning with Cached Update Calibration
Yujia Wang, Yuanpu Cao, Jingcheng Wu et al.
TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series
Arjun Ashok, Étienne Marcotte, Valentina Zantedeschi et al.
Tag2Text: Guiding Vision-Language Model via Image Tagging
Xinyu Huang, Youcai Zhang, Jinyu Ma et al.
Tailoring Self-Rationalizers with Multi-Reward Distillation
Sahana Ramnath, Brihi Joshi, Skyler Hallinan et al.
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
Zuxin Liu, Jesse Zhang, Kavosh Asadi et al.
Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen et al.
Talk like a Graph: Encoding Graphs for Large Language Models
Bahare Fatemi, Jonathan Halcrow, Bryan Perozzi
Tangent Transformers for Composition, Privacy and Removal
Tian Yu Liu, Aditya Golatkar, Stefano Soatto
TapMo: Shape-aware Motion Generation of Skeleton-free Characters
Jiaxu Zhang, Shaoli Huang, Zhigang Tu et al.
Task Adaptation from Skills: Information Geometry, Disentanglement, and New Objectives for Unsupervised Reinforcement Learning
Yucheng Yang, Tianyi Zhou, Qiang He et al.
Task Planning for Visual Room Rearrangement under Partial Observability
Karan Mirakhor, Sourav Ghosh, Dipanjan Das et al.
Task structure and nonlinearity jointly determine learned representational geometry
Matteo Alleman, Jack Lindsey, Stefano Fusi
TD-MPC2: Scalable, Robust World Models for Continuous Control
Nicklas Hansen, Hao Su, Xiaolong Wang
Teaching Arithmetic to Small Transformers
Nayoung Lee, Kartik Sreenivasan, Jason Lee et al.
Teaching Language Models to Hallucinate Less with Synthetic Tasks
Erik Jones, Hamid Palangi, Clarisse Ribeiro et al.
Teaching Large Language Models to Self-Debug
Xinyun Chen, Maxwell Lin, Nathanael Schaerli et al.
Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda, Christopher Choquette-Choo, Zhengming Zhang et al.
TEDDY: Trimming Edges with Degree-based Discrimination Strategy
Hyunjin Seo, Jihun Yun, Eunho Yang
Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs
Qingru Zhang, Chandan Singh, Liyuan Liu et al.
TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting
Defu Cao, Furong Jia, Sercan Arik et al.
Temporal Generalization Estimation in Evolving Graphs
Bin Lu, Tingyan Ma, Xiaoying Gan et al.
Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks
Greg Yang, Dingli Yu, Chen Zhu et al.
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
Sam Toyer, Olivia Watkins, Ethan Mendes et al.
TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts
Hyunwook Lee, Sungahn Ko
TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series
Chenxi Sun, Hongyan Li, Yaliang Li et al.
Test-time Adaptation against Multi-modal Reliability Bias
Mouxing Yang, Yunfan Li, Changqing Zhang et al.
Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models
Shuai Zhao, Xiaohan Wang, Linchao Zhu et al.
Test-Time Training on Nearest Neighbors for Large Language Models
Moritz Hardt, Yu Sun
Text2Reward: Reward Shaping with Language Models for Reinforcement Learning
Tianbao Xie, Siheng Zhao, Chen Henry Wu et al.
TextField3D: Towards Enhancing Open-Vocabulary 3D Generation with Noisy Text Fields
Tianyu Huang, Yihan Zeng, Bowen Dong et al.
Text-to-3D with Classifier Score Distillation
Xin Yu, Yuan-Chen Guo, Yangguang Li et al.
The Alignment Problem from a Deep Learning Perspective
Richard Ngo, Lawrence Chan, Sören Mindermann
The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World
Weiyun Wang, Min Shi, Qingyun Li et al.
The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing
Shen Nie, Hanzhong Guo, Cheng Lu et al.
The Consensus Game: Language Model Generation via Equilibrium Search
Athul Jacob, Yikang Shen, Gabriele Farina et al.
The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning
Tian Jin, Nolan Clement, Xin Dong et al.
The Curse of Diversity in Ensemble-Based Exploration
Zhixuan Lin, Pierluca D'Oro, Evgenii Nikishin et al.
The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Language Models
Yan Liu, Yu Liu, Xiaokang Chen et al.
The Devil is in the Object Boundary: Towards Annotation-free Instance Segmentation using Foundation Models
Cheng Shi, Sibei Yang
The Effective Horizon Explains Deep RL Performance in Stochastic Environments
Cassidy Laidlaw, Banghua Zhu, Stuart Russell et al.
The Effectiveness of Random Forgetting for Robust Generalization
Vijaya Raghavan T Ramkumar, Bahram Zonooz, Elahe Arani