Poster Papers
24,624 papers found • Page 137 of 493
Improving Generalization in Federated Learning with Highly Heterogeneous Data via Momentum-Based Stochastic Controlled Weight Averaging
Junkang Liu, Yuanyuan Liu, Fanhua Shang et al.
Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning
Yuanyao Chen, Rongsheng Chen, Fu Luo et al.
Improving Generalization with Flat Hilbert Bayesian Inference
Tuan Truong, Quyen Tran, Ngoc Quan Pham et al.
Improving Graph Neural Networks by Learning Continuous Edge Directions
Seong Ho Pahng, Sahand Hormoz
Improving Instruction-Following in Language Models through Activation Steering
Alessandro Stolfo, Vidhisha Balachandran, Safoora Yousefi et al.
Improving Language Model Distillation through Hidden State Matching
Sayantan Dasgupta, Trevor Cohn
Improving Large Language Model Planning with Action Sequence Similarity
Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.
Improving Large Vision and Language Models by Learning from a Panel of Peers
Jefferson Hernandez, Jing Shi, Simon Jenni et al.
Improving LLM Safety Alignment with Dual-Objective Optimization
Xuandong Zhao, Will Cai, Tianneng Shi et al.
Improving LLMs for Recommendation with Out-Of-Vocabulary Tokens
Ting-Ji Huang, Jia-Qi Yang, Chunxu Shen et al.
Improving Long-Text Alignment for Text-to-Image Diffusion Models
Luping Liu, Chao Du, Tianyu Pang et al.
Improving Memory Efficiency for Training KANs via Meta Learning
Zhangchi Zhao, Jun Shu, Deyu Meng et al.
Improving Model Alignment Through Collective Intelligence of Open-Source Models
Junlin Wang, Roy Xie, Shang Zhu et al.
Improving Model-Based Reinforcement Learning by Converging to Flatter Minima
Shrinivas Ramasubramanian, Benjamin Freed, Alexandre Capone et al.
Improving Model Representation and Reducing KV Cache via Skip Connections with First Value Heads
Zhoutong Wu, Yuan Zhang, Yiming Dong et al.
Improving Monte Carlo Tree Search for Symbolic Regression
Zhengyao Huang, Daniel Huang, Tiannan Xiao et al.
Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques
Alon Arad, Saharon Rosset
Improving Multimodal Learning Balance and Sufficiency through Data Remixing
Xiaoyu Ma, Hao Chen, Yongjian Deng
Improving Multimodal Learning via Imbalanced Learning
Shicai Wei, Chunbo Luo, Yang Luo
Improving Neural Network Accuracy by Concurrently Training with a Twin Network
Benjamin Vandersmissen, Lucas Deckers, Jose Oramas
Improving Neural Optimal Transport via Displacement Interpolation
Jaemoo Choi, Yongxin Chen, Jaewoong Choi
Improving Noise Efficiency in Privacy-preserving Dataset Distillation
Runkai Zheng, Vishnu Dasu, Yinong Wang et al.
Improving Out-of-Distribution Detection via Dynamic Covariance Calibration
Kaiyu Guo, Zijian Wang, Tan Pan et al.
Improving Out-of-Distribution Detection with Markov Logic Networks
Konstantin Kirchheim, Frank Ortmeier
Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
Anjiang Wei, Allen Nie, Thiago Teixeira et al.
Improving Pretraining Data Using Perplexity Correlations
Tristan Thrush, Christopher Potts, Tatsunori Hashimoto
Improving Probabilistic Diffusion Models With Optimal Diagonal Covariance Matching
Zijing Ou, Mingtian Zhang, Andi Zhang et al.
Improving Progressive Generation with Decomposable Flow Matching
Moayed Haji-Ali, Willi Menapace, Ivan Skorokhodov et al.
Improving Rationality in the Reasoning Process of Language Models through Self-playing Game
Pinzheng Wang, Juntao Li, Zecheng Tang et al.
Improving Reasoning Performance in Large Language Models via Representation Engineering
Bertram Højer, Oliver Jarvis, Stefan Heinrich
Improving Rectified Flow with Boundary Conditions
Xixi Hu, Runlong Liao, Bo Liu et al.
Improving Regret Approximation for Unsupervised Dynamic Environment Generation
Harry Mead, Bruno Lacerda, Jakob Foerster et al.
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning
Yiqun Chen, Lingyong Yan, Weiwei Sun et al.
Improving Reward Model Generalization from Adversarial Process Enhanced Preferences
Zhilong Zhang, Tian Xu, Xinghao Du et al.
Improving Reward Models with Proximal Policy Exploration for Preference-Based Reinforcement Learning
Yiwen Zhu, Jinyi Liu, Pengjie Gu et al.
Improving SAM for Camouflaged Object Detection via Dual Stream Adapters
Jiaming Liu, Linghe Kong, Guihai Chen
Improving Semantic Understanding in Speech Language Models via Brain-tuning
Omer Moussa, Dietrich Klakow, Mariya Toneva
Improving Semi-Supervised Semantic Segmentation with Sliced-Wasserstein Feature Alignment and Uniformity
Chen Yi Lu, Kasra Derakhshandeh, Somali Chaterji
Improving Soft Unification with Knowledge Graph Embedding Methods
Xuanming Cui, Chionh Peng, Adriel Kuek et al.
Improving Sound Source Localization with Joint Slot Attention on Image and Audio
Inho Kim, Youngkil Song, Jicheol Park et al.
Improving Task-Specific Multimodal Sentiment Analysis with General MLLMs via Prompting
Haoyu Zhang, Yinan Zhang, Chaolong Ying et al.
Improving Text-to-Image Consistency via Automatic Prompt Optimization
Melissa Hall, Michal Drozdzal, Oscar Mañas et al.
Improving the Continuity of Goal-Achievement Ability via Policy Self-Regularization for Goal-Conditioned Reinforcement Learning
Xudong Gong, Sen Yang, Feng Dawei et al.
Improving the Diffusability of Autoencoders
Ivan Skorokhodov, Sharath Girish, Benran Hu et al.
Improving the Effective Receptive Field of Message-Passing Neural Networks
Shahaf E. Finder, Ron Shapira Weber, Moshe Eliasof et al.
Improving the Euclidean Diffusion Generation of Manifold Data by Mitigating Score Function Singularity
Zichen Liu, Wei Zhang, Tiejun Li
Improving the Generation and Evaluation of Synthetic Data for Downstream Medical Causal Inference
Harry Amad, Zhaozhi Qian, Dennis Frauen et al.
Improving the Sparse Structure Learning of Spiking Neural Networks from the View of Compression Efficiency
Jiangrong Shen, Qi Xu, Gang Pan et al.
Improving the Statistical Efficiency of Cross-Conformal Prediction
Improving the Straight-Through Estimator with Zeroth-Order Information
Ningfeng Yang, Tor Aamodt