2025 Papers
21,856 papers found • Page 401 of 438
Tracing Representation Progression: Analyzing and Enhancing Layer-Wise Similarity
Jiachen Jiang, Jinxin Zhou, Zhihui Zhu
Tracing the Representation Geometry of Language Models from Pretraining to Post-training
Melody Li, Kumar Krishna Agrawal, Arna Ghosh et al.
Tracing the Roots: Leveraging Temporal Dynamics in Diffusion Trajectories for Origin Attribution
Andreas Floros, Seyed-Mohsen Moosavi-Dezfooli, Pier Luigi Dragotti
Track3R: Joint Point Map and Trajectory Prior for Spatiotemporal 3D Understanding
Seong Hyeon Park, Jinwoo Shin
Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation
Hyeonho Jeong, Chun-Hao P. Huang, Jong Chul Ye et al.
TrackAny3D: Transferring Pretrained 3D Models for Category-unified 3D Point Cloud Tracking
Mengmeng Wang, Haonan Wang, Yulong Li et al.
Track Any Anomalous Object: A Granular Video Anomaly Detection Pipeline
Yuzhi Huang, Chenxin Li, Haitao Zhang et al.
TrackGo: A Flexible and Efficient Method for Controllable Video Generation
Haitao Zhou, Chuang Wang, Rui Nie et al.
Tracking and Understanding Object Transformations
Yihong Sun, Xinyu Yang, Jennifer Sun et al.
Tracking Everything Everywhere across Multiple Cameras
Li-Heng Wang, YuJu Cheng, Tyng-Luh Liu
Tracking Most Significant Shifts in Infinite-Armed Bandits
Joe Suk, Jung-hun Kim
Tracking objects that change in appearance with phase synchrony
Sabine Muzellec, Drew Linsley, Alekh Ashok et al.
Tracking The Best Expert Privately
Hilal Asi, Vinod Raman, Aadirupa Saha
Tracking the Copyright of Large Vision-Language Models through Parameter Learning Adversarial Images
Yubo Wang, Jianting Tang, Liu et al.
Tracking Tiny Drones against Clutter: Large-Scale Infrared Benchmark with Motion-Centric Adaptive Algorithm
Jiahao Zhang, Zongli Jiang, Gang Wang et al.
TrackingWorld: World-centric Monocular 3D Tracking of Almost All Pixels
Jiahao Lu, Weitao Xiong, Jiacheng Deng et al.
Track, Inpaint, Resplat: Subject-driven 3D and 4D Generation with Progressive Texture Infilling
Shuhong Zheng, Ashkan Mirzaei, Igor Gilitschenski
Track-On: Transformer-based Online Point Tracking with Memory
Görkay Aydemir, Xiongyi Cai, Weidi Xie et al.
Tracktention: Leveraging Point Tracking to Attend Videos Faster and Better
Zihang Lai, Andrea Vedaldi
Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues
Yan Zhang, Gangyan Zeng, Huawen Shen et al.
TrackVerse: A Large-Scale Object-Centric Video Dataset for Image-Level Representation Learning
Yibing Wei, Samuel Church, Victor Suciu et al.
Tractable Multi-Agent Reinforcement Learning through Behavioral Economics
Eric Mazumdar, Kishan Panaganti, Laixi Shi
Tractable Multinomial Logit Contextual Bandits with Non-Linear Utilities
Taehyun Hwang, Dahngoon Kim, Min-hwan Oh
Tractable Transformers for Flexible Conditional Generation
Anji Liu, Xuejie Liu, Dayuan Zhao et al.
TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks
Itzik Waizman, Yakov Gusakov, Itay Benou et al.
Tradeoffs between Mistakes and ERM Oracle Calls in Online and Transductive Online Learning
Idan Attias, Steve Hanneke, Arvind Ramaswami
Trade-offs in Image Generation: How Do Different Dimensions Interact?
Sicheng Zhang, Binzhu Xie, Zhonghao Yan et al.
Trading Off Quality and Uncertainty Through Multi-Objective Optimisation in Batch Bayesian Optimisation
Chao Jiang, Miqing Li
Tradutor: Building a Variety Specific Translation Model
Hugo Sousa, Satya Almasian, Ricardo Campos et al.
TraF-Align: Trajectory-aware Feature Alignment for Asynchronous Multi-agent Perception
Zhiying Song, Lei Yang, Fuxi Wen et al.
TrafficLoc: Localizing Traffic Surveillance Cameras in 3D Scenes
Yan Xia, Yunxiang Lu, Rui Song et al.
Traffic Scenario Logic: A Spatial-Temporal Logic for Modeling and Reasoning of Urban Traffic Scenarios
Ruolin Wang, Yuejiao Xu, Jianmin Ji
TraffiDent: A Dataset for Understanding the Interplay Between Traffic Dynamics and Incidents
Xiaochuan Gou, Ziyue Li, Tian Lan et al.
TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning
Gangqiang Hu, Jianfeng Lu, Jianmin Han et al.
Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression
Jiarui Jiang, Wei Huang, Miao Zhang et al.
Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context
Spencer Frei, Gal Vardi
Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
Jaeyeon Kim, Kulin Shah, Vasilis Kontonis et al.
Training a Generally Curious Agent
Fahim Tajwar, Yiding Jiang, Abitha Thankaraj et al.
Training-and-Prompt-Free General Painterly Harmonization via Zero-Shot Disentanglement on Style and Content References
Teng-Fang Hsiao, Bo-Kai Ruan, Hong-Han Shuai
Training a Scientific Reasoning Model for Chemistry
Siddharth Narayanan, James Braza, Ryan-Rhys Griffiths et al.
Training Consistent Mixture-of-Experts-Based Prompt Generator for Continual Learning
Yue Lu, Shizhou Zhang, De Cheng et al.
Training Data Provenance Verification: Did Your Model Use Synthetic Data from My Generative Model for Training?
Yuechen Xie, Jie Song, Huiqiong Wang et al.
Training Deep Learning Models with Norm-Constrained LMOs
Thomas Pethick, Wanyun Xie, Kimon Antonakopoulos et al.
Training Deep Neural Networks with Virtual Smoothing Classes
Zhiyang Zhou, Siwei Wei, Xudong Zhang et al.
Training Diffusion-based Generative Models with Limited Data
Zhaoyu Zhang, Yang Hua, Guanxiong Sun et al.
Training Dynamics of In-Context Learning in Linear Attention
Yedi Zhang, Aaditya Singh, Peter Latham et al.
Training Flexible Models of Genetic Variant Effects from Functional Annotations using Accelerated Linear Algebra
Alan Amin, Andres Potapczynski, Andrew Wilson
Training-Free Activation Sparsity in Large Language Models
James Liu, Pragaash Ponnusamy, Tianle Cai et al.
Training-free and Adaptive Sparse Attention for Efficient Long Video Generation
Yifei Xia, Suhan Ling, Fangcheng Fu et al.
Training-Free and Hardware-Friendly Acceleration for Diffusion Models via Similarity-based Token Pruning
Evelyn Zhang, Jiayi Tang, Xuefei Ning et al.