ICML Papers

5,975 papers found • Page 116 of 120

Time-Series Forecasting for Out-of-Distribution Generalization Using Invariant Learning

Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong et al.

ICML 2024 oral • arXiv:2406.09130

TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling

Jiaxiang Dong, Haixu Wu, Yuxuan Wang et al.

ICML 2024 oral • arXiv:2402.02475

Time Weaver: A Conditional Time Series Generation Model

Sai Shankar Narasimhan, Shubhankar Agarwal, Oguzhan Akcin et al.

ICML 2024 spotlight • arXiv:2403.02682 • 33 citations

TimeX++: Learning Time-Series Explanations with Information Bottleneck

Zichuan Liu, Tianchun Wang, Jimeng Shi et al.

ICML 2024 poster • arXiv:2405.09308

tinyBenchmarks: evaluating LLMs with fewer examples

Felipe Maia Polo, Lucas Weber, Leshem Choshen et al.

ICML 2024 poster • arXiv:2402.14992

TinyTrain: Resource-Aware Task-Adaptive Sparse Training of DNNs at the Data-Scarce Edge

Young Kwon, Rui Li, Stylianos Venieris et al.

ICML 2024 poster • arXiv:2307.09988

tnGPS: Discovering Unknown Tensor Network Structure Search Algorithms via Large Language Models (LLMs)

Junhua Zeng, Chao Li, Zhun Sun et al.

ICML 2024 poster • arXiv:2402.02456

To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO

Zi-Hao Qiu, Siqi Guo, Mao Xu et al.

ICML 2024 poster • arXiv:2404.04575

To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models

George-Octavian Bărbulescu, Peter Triantafillou

ICML 2024 poster

Token-level Direct Preference Optimization

Yongcheng Zeng, Guoqing Liu, Weiyu Ma et al.

ICML 2024 poster • arXiv:2404.11999

Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models

Mingjia Huo, Sai Ashish Somayajula, Youwei Liang et al.

ICML 2024 poster • arXiv:2402.18059

Topological Neural Networks go Persistent, Equivariant, and Continuous

Yogesh Verma, Amauri Souza, Vikas Garg

ICML 2024 poster • arXiv:2406.03164

Total Variation Distance Meets Probabilistic Inference

Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel et al.

ICML 2024 poster • arXiv:2309.09134

Total Variation Floodgate for Variable Importance Inference in Classification

Wenshuo Wang, Lucas Janson, Lihua Lei et al.

ICML 2024 poster • arXiv:2309.04002

To the Max: Reinventing Reward in Reinforcement Learning

Grigorii Veviurko, Wendelin Boehmer, Mathijs de Weerdt

ICML 2024 poster • arXiv:2402.01361

Toward Adaptive Reasoning in Large Language Models with Thought Rollback

Sijia Chen, Baochun Li

ICML 2024 poster • arXiv:2412.19707

Toward Availability Attacks in 3D Point Clouds

Yifan Zhu, Yibo Miao, Yinpeng Dong et al.

ICML 2024 poster • arXiv:2407.11011

Towards a Better Theoretical Understanding of Independent Subnetwork Training

Egor Shulgin, Peter Richtarik

ICML 2024 poster • arXiv:2306.16484

Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model

Mikail Khona, Maya Okawa, Jan Hula et al.

ICML 2024 poster • arXiv:2402.07757

Towards a Self-contained Data-driven Global Weather Forecasting Framework

Yi Xiao, Lei Bai, Wei Xue et al.

ICML 2024 poster

Towards AutoAI: Optimizing a Machine Learning System with Black-box and Differentiable Components

Zhiliang Chen, Chuan-Sheng Foo, Bryan Kian Hsiang Low

ICML 2024 poster

Towards Causal Foundation Model: on Duality between Optimal Balancing and Attention

Jiaqi Zhang, Joel Jennings, Agrin Hilmkil et al.

ICML 2024 poster

Towards Certified Unlearning for Deep Neural Networks

Binchi Zhang, Yushun Dong, Tianhao Wang et al.

ICML 2024 poster • arXiv:2408.00920

Towards Compositionality in Concept Learning

Adam Stein, Aaditya Naik, Yinjun Wu et al.

ICML 2024 poster • arXiv:2406.18534

Towards efficient deep spiking neural networks construction with spiking activity based pruning

Yaxin Li, Qi Xu, Jiangrong Shen et al.

ICML 2024 poster • arXiv:2406.01072

Towards Efficient Exact Optimization of Language Model Alignment

Haozhe Ji, Cheng Lu, Yilin Niu et al.

ICML 2024 poster • arXiv:2402.00856

Towards Efficient Spiking Transformer: a Token Sparsification Framework for Training and Inference Acceleration

Zhengyang Zhuge, Peisong Wang, Xingting Yao et al.

ICML 2024 poster

Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations

Xuyang Zhong, Yixiao Huang, Chen Liu

ICML 2024 poster

Towards General Algorithm Discovery for Combinatorial Optimization: Learning Symbolic Branching Policy from Bipartite Graph

Yufei Kuang, Jie Wang, Yuyan Zhou et al.

ICML 2024 poster

Towards Generalization beyond Pointwise Learning: A Unified Information-theoretic Perspective

Yuxin Dong, Tieliang Gong, Hong Chen et al.

ICML 2024 poster

Towards General Neural Surrogate Solvers with Specialized Neural Accelerators

Chenkai Mao, Robert Lupoiu, Tianxiang Dai et al.

ICML 2024 poster • arXiv:2405.02351

Towards Global Optimality for Practical Average Reward Reinforcement Learning without Mixing Time Oracles

Bhrij Patel, Wesley A. Suttle, Alec Koppel et al.

ICML 2024 poster • arXiv:2403.11925

Towards Interpretable Deep Local Learning with Successive Gradient Reconciliation

Yibo Yang, Xiaojie Li, Motasem Alfarra et al.

ICML 2024 poster • arXiv:2406.05222

Towards Modular LLMs by Building and Reusing a Library of LoRAs

Oleksiy Ostapenko, Zhan Su, Edoardo Ponti et al.

ICML 2024 poster • arXiv:2405.11157

Towards Neural Architecture Search through Hierarchical Generative Modeling

Lichuan Xiang, Łukasz Dudziak, Mohamed Abdelfattah et al.

ICML 2024 poster

Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error

Haoran Li, Zicheng Zhang, Wang Luo et al.

ICML 2024 poster • arXiv:2402.02165

Towards Realistic Model Selection for Semi-supervised Learning

Muyang Li, Xiaobo Xia, Runze Wu et al.

ICML 2024 poster

Towards Resource-friendly, Extensible and Stable Incomplete Multi-view Clustering

Shengju Yu, Zhibin Dong, Siwei Wang et al.

ICML 2024 spotlight

Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption

Chenlu Ye, Jiafan He, Quanquan Gu et al.

ICML 2024 poster • arXiv:2402.08991

Towards Scalable and Versatile Weight Space Learning

Konstantin Schürholt, Michael Mahoney, Damian Borth

ICML 2024 poster • arXiv:2406.09997

Towards Theoretical Understanding of Learning Large-scale Dependent Data via Random Features

Chao Wang, Xin Bing, Xin He et al.

ICML 2024 spotlight

Towards Theoretical Understandings of Self-Consuming Generative Models

Shi Fu, Sen Zhang, Yingjie Wang et al.

ICML 2024 poster • arXiv:2402.11778

Towards the Theory of Unsupervised Federated Learning: Non-asymptotic Analysis of Federated EM Algorithms

Ye Tian, Haolei Weng, Yang Feng

ICML 2024 poster • arXiv:2310.15330

Towards Understanding Inductive Bias in Transformers: A View From Infinity

Itay Lavie, Guy Gur-Ari, Zohar Ringel

ICML 2024 poster • arXiv:2402.05173

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features

Simone Bombari, Marco Mondelli

ICML 2024 poster • arXiv:2402.02969

Towards Unified Multi-granularity Text Detection with Interactive Attention

Xingyu Wan, Chengquan Zhang, Pengyuan Lyu et al.

ICML 2024 spotlight • arXiv:2405.19765

Trainable Transformer in Transformer

Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia et al.

ICML 2024 poster • arXiv:2307.01189

Trained Random Forests Completely Reveal your Dataset

Julien Ferry, Ricardo Fukasawa, Timothée Pascal et al.

ICML 2024 poster • arXiv:2402.19232

Training-Free Long-Context Scaling of Large Language Models

Chenxin An, Fei Huang, Jun Zhang et al.

ICML 2024 poster • arXiv:2402.17463

Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization

Deokjae Lee, Hyun Oh Song, Kyunghyun Cho

ICML 2024 poster • arXiv:2406.14876