2025 Papers
21,856 papers found • Page 402 of 438
Training-Free Bayesianization for Low-Rank Adapters of Large Language Models
Haizhou Shi, Yibin Wang, Ligong Han et al.
Training-free Camera Control for Video Generation
Chen Hou, Zhibo Chen
Training-Free Class Purification for Open-Vocabulary Semantic Segmentation
Qi Chen, Lingxiao Yang, Yun Chen et al.
Training-Free Constrained Generation With Stable Diffusion Models
Stefano Zampini, Jacob K Christopher, Luca Oneto et al.
Training-Free Dataset Pruning for Instance Segmentation
Yalun Dai, Lingao Xiao, Ivor Tsang et al.
Training-free Dense-Aligned Diffusion Guidance for Modular Conditional Image Synthesis
Zixuan Wang, Duo Peng, Feng Chen et al.
Training-free Detection of AI-generated images via Cropping Robustness
Sungik Choi, Hankook Lee, Moontae Lee
Training-Free Diffusion Model Alignment with Sampling Demons
Po-Hung Yeh, Kuang-Huei Lee, Jun-Cheng Chen
Training-Free Efficient Video Generation via Dynamic Token Carving
Yuechen Zhang, Jinbo Xing, Bin Xia et al.
Training-Free Exponential Context Extension via Cascading KV Cache
Jeff Willette, Heejun Lee, Youngwan Lee et al.
Training-Free Generation of Temporally Consistent Rewards from VLMs
Yinuo Zhao, Jiale Yuan, Zhiyuan Xu et al.
Training-free Geometric Image Editing on Diffusion Models
Hanshen Zhu, Zhen Zhu, Kaile Zhang et al.
Training-Free Guidance Beyond Differentiability: Scalable Path Steering with Tree Search in Diffusion and Flow Models
Yingqing Guo, Yukang Yang, Hui Yuan et al.
Training Free Guided Flow-Matching with Optimal Control
Luran Wang, Chaoran Cheng, Yizhen Liao et al.
Training-Free Image Manipulation Localization Using Diffusion Models
Zhenfei Zhang, Ming-Ching Chang, Xin Li
Training-Free Industrial Defect Generation with Diffusion Models
Ruyi Xu, Yen-Tzu Chiu, Tai-I Chen et al.
Training-free LLM-generated Text Detection by Mining Token Probability Sequences
Yihuai Xu, Yongwei Wang, Yifei Bi et al.
Training-Free Message Passing for Learning on Hypergraphs
Bohan Tang, Zexi Liu, Keyue Jiang et al.
Training-free Neural Architecture Search through Variance of Knowledge of Deep Network Weights
Ondrej Tybl, Lukas Neumann
Training-free Online Video Step Grounding
Luca Zanella, Massimiliano Mancini, Yiming Wang et al.
Training-free Open-Vocabulary Semantic Segmentation via Diverse Prototype Construction and Sub-region Matching
Xuanpu Zhao, Dianmo Sheng, Zhentao Tan et al.
Training-Free Personalization via Retrieval and Reasoning on Fingerprints
Deepayan Das, Davide Talon, Yiming Wang et al.
Training-Free Safe Denoisers for Safe Use of Diffusion Models
Mingyu Kim, Dongjun Kim, Amman Yusuf et al.
Training-Free Safe Text Embedding Guidance for Text-to-Image Diffusion Models
Byeonghu Na, Mina Kang, Jiseok Kwak et al.
Training-Free Test-Time Adaptation via Shape and Style Guidance for Vision-Language Models
Shenglong Zhou, Manjiang Yin, Leiyu Sun et al.
Training-Free Text-Guided Image Editing with Visual Autoregressive Model
Yufei Wang, Lanqing Guo, Zhihao Li et al.
Training High Performance Spiking Neural Network by Temporal Model Calibration
Jiaqi Yan, Changping Wang, De Ma et al.
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
Ulyana Piterbarg, Lerrel Pinto, Rob Fergus
Training Language Models to Generate Quality Code with Program Analysis Feedback
Feng Yao, Zilong Wang, Liyuan Liu et al.
Training Language Models to Reason Efficiently
Daman Arora, Andrea Zanette
Training Language Models to Self-Correct via Reinforcement Learning
Aviral Kumar, Vincent Zhuang, Rishabh Agarwal et al.
Training Large Language Models for Retrieval-Augmented Question Answering through Backtracking Correction
Huawen Feng, Zekun Yao, Junhao Zheng et al.
Training LLMs over Neurally Compressed Text
Brian Lester, Jaehoon Lee, Jeffrey Pennington et al.
Training Matting Models Without Alpha Labels
Wenze Liu, Zixuan Ye, Hao Lu et al.
Training Neural Networks as Recognizers of Formal Languages
Alexandra Butoi, Ghazal Khalighinejad, Anej Svete et al.
Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis
Hongkang Li, Songtao Lu, Pin-Yu Chen et al.
Training One-Dimensional Graph Neural Networks is NP-Hard
Robert Ganian, Mathis Rocton, Simon Wietheger
Training on the Benchmark Is Not All You Need
Shiwen Ni, Xiangtao Kong, Chengming Li et al.
Training on the Test Task Confounds Evaluation and Emergence
Ricardo Dominguez-Olmedo, Florian Eddie Dorner, Moritz Hardt
Training Robust Ensembles Requires Rethinking Lipschitz Continuity
Ali Ebrahimpour Boroojeny, Hari Sundaram, Varun Chandrasekaran
Training Robust Graph Neural Networks by Modeling Noise Dependencies
Yeonjun In, Kanghoon Yoon, Sukwon Yun et al.
Training Software Engineering Agents and Verifiers with SWE-Gym
Jiayi Pan, Xingyao Wang, Graham Neubig et al.
Training the Untrainable: Introducing Inductive Bias via Representational Alignment
Vighnesh Subramaniam, David Mayo, Colin Conwell et al.
Training Verification-Friendly Neural Networks via Neuron Behavior Consistency
Zongxin Liu, Zhe Zhao, Fu Song et al.
Training with “Paraphrasing the Original Text” Teaches LLM to Better Retrieve in Long-Context Tasks
Yijiong Yu, Yongfeng Huang, Zhixiao Qi et al.
Train on Pins and Test on Obstacles for Rectilinear Steiner Minimum Tree
Xingbo Du, Ruizhe Zhong, Junchi Yan
Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
Jun Zhang, Jue Wang, Huan Li et al.
Train to Defend: First Defense Against Cryptanalytic Neural Network Parameter Extraction Attacks
Ashley Kurian, Aydin Aysu
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.
TrajAgent: An LLM-Agent Framework for Trajectory Modeling via Large-and-Small Model Collaboration
Yuwei Du, Jie Feng, Jie Zhao et al.