ICML Papers
Pi-DUAL: Using privileged information to distinguish clean from noisy labels
Ke Wang, Guillermo Ortiz-Jimenez, Rodolphe Jenatton et al.
Piecewise Constant and Linear Regression Trees: An Optimal Dynamic Programming Approach
Mim van den Bos, Jacobus van der Linden, Emir Demirović
PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation
Han Fu, Jian Tan, Pinhan Zhang et al.
PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling
Utsav Singh, Wesley A. Suttle, Brian Sadler et al.
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs
Soroush Nasiriany, Fei Xia, Wenhao Yu et al.
PlanDQ: Hierarchical Plan Orchestration via D-Conductor and Q-Performer
Chang Chen, Junyeob Baek, Fei Deng et al.
Planning, Fast and Slow: Online Reinforcement Learning with Action-Free Offline Data via Multiscale Planners
Chengjie Wu, Hao Hu, Yiqin Yang et al.
Plug-and-Play image restoration with Stochastic deNOising REgularization
Marien Renaud, Jean Prost, Arthur Leclaire et al.
Plug-in Performative Optimization
Licong Lin, Tijana Zrnic
Pluvial Flood Emulation with Hydraulics-informed Message Passing
Arnold Kazadi, James Doss-Gollin, Arlei Silva
PointMC: Multi-instance Point Cloud Registration based on Maximal Cliques
Yue Wu, Xidao Hu, Yongzhe Yuan et al.
Policy-conditioned Environment Models are More Generalizable
Ruifeng Chen, Xiong-Hui Chen, Yihao Sun et al.
Policy Evaluation for Variance in Average Reward Reinforcement Learning
Shubhada Agrawal, Prashanth L.A., Siva Maguluri
Policy Learning for Balancing Short-Term and Long-Term Rewards
Peng Wu, Ziyu Shen, Feng Xie et al.
Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks
Dongyoung Lim, Sotirios Sabanis
Polynomial-based Self-Attention for Table Representation Learning
Jayoung Kim, Yehjin Shin, Jeongwhan Choi et al.
PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels
Praneeth Kacham, Vahab Mirrokni, Peilin Zhong
Position: $C^*$-Algebraic Machine Learning - Moving in a New Direction
Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri
Position: A Call for Embodied AI
Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl
Position: A Call to Action for a Human-Centered AutoML Paradigm
Marius Lindauer, Florian Karl, Anne Klier et al.
Position: AI/ML Influencers Have a Place in the Academic Process
Iain Xie Weissburg, Mehir Arora, Xinyi Wang et al.
Position: AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research
Riley Simmons-Edler, Ryan Badman, Shayne Longpre et al.
Positional Knowledge is All You Need: Position-induced Transformer (PiT) for Operator Learning
Junfeng Chen, Kailiang Wu
Position: Amazing Things Come From Having Many Good Models
Cynthia Rudin, Chudi Zhong, Lesia Semenova et al.
Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience
Martina G. Vilas, Federico Adolfi, David Poeppel et al.
Position: Application-Driven Innovation in Machine Learning
David Rolnick, Alan Aspuru-Guzik, Sara Beery et al.
Position: A Roadmap to Pluralistic Alignment
Taylor Sorensen, Jared Moore, Jillian Fisher et al.
Position: A Safe Harbor for AI Evaluation and Red Teaming
Shayne Longpre, Sayash Kapoor, Kevin Klyman et al.
Position: Automatic Environment Shaping is the Next Frontier in RL
Younghyo Park, Gabriel Margolis, Pulkit Agrawal
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
Theodore Papamarkou, Maria Skoularidou, Konstantina Palla et al.
Position: Benchmarking is Limited in Reinforcement Learning Research
Scott Jordan, Adam White, Bruno da Silva et al.
Position: Beyond Personhood: Agency, Accountability, and the Limits of Anthropomorphic Ethical Analysis
Jessica Dai
Position: Building Guardrails for Large Language Models Requires Systematic Design
Yi Dong, Ronghui Mu, Gaojie Jin et al.
Position: Categorical Deep Learning is an Algebraic Theory of All Architectures
Bruno Gavranović, Paul Lessard, Andrew Dudzik et al.
Position: Compositional Generative Modeling: A Single Model is Not All You Need
Yilun Du, Leslie Kaelbling
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
Florian Tramer, Gautam Kamath, Nicholas Carlini
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh
Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?
Shayne Longpre, Robert Mahari, Naana Obeng-Marnu et al.
Position: Data-driven Discovery with Large Generative Models
Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal et al.
Position: Do Not Explain Vision Models Without Context
Paulina Tomaszewska, Przemyslaw Biecek
Position: Do pretrained Transformers Learn In-Context by Gradient Descent?
Lingfeng Shen, Aayush Mishra, Daniel Khashabi
Position: Embracing Negative Results in Machine Learning
Florian Karl, Malte Kemeter, Gabriel Dax et al.
Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI
Yegor Tkachenko
Position: Evolving AI Collectives Enhance Human Diversity and Enable Self-Regulation
Shiyang Lai, Yujin Potter, Junsol Kim et al.
Position: Explain to Question not to Justify
Przemyslaw Biecek, Wojciech Samek
Position: Exploring the Robustness of Pipeline-Parallelism-Based Decentralized Training
Lin Lu, Chenxi Dai, Wangcheng Tao et al.
Position: Foundation Agents as the Paradigm Shift for Decision Making
Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao et al.
Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches
David Glukhov, Ilia Shumailov, Yarin Gal et al.
Position: Future Directions in the Theory of Graph Machine Learning
Christopher Morris, Fabrizio Frasca, Nadav Dym et al.
Position: Graph Foundation Models Are Already Here
Haitao Mao, Zhikai Chen, Wenzhuo Tang et al.