Most Cited 2024 "data privacy laws" Papers
12,324 papers found • Page 61 of 62
Batched Low-Rank Adaptation of Foundation Models
Yeming Wen, Swarat Chaudhuri
Graph-Constrained Diffusion for End-to-End Path Planning
Dingyuan Shi, Yongxin Tong, Zimu Zhou et al.
Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design
Heng Dong, Junyu Zhang, Chongjie Zhang
Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control
Wenhan Cao, Wei Pan
Generalized Schrödinger Bridge Matching
Guan-Horng Liu, Yaron Lipman, Maximilian Nickel et al.
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos et al.
Navigating the Design Space of Equivariant Diffusion-Based Generative Models for De Novo 3D Molecule Generation
Tuan Le, Julian Cremer, Frank Noe et al.
Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
Yang Liu, Muzhi Zhu, Hengtao Li et al.
Ferret: Refer and Ground Anything Anywhere at Any Granularity
Haoxuan You, Haotian Zhang, Zhe Gan et al.
Demonstration-Regularized RL
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello et al.
Manifold Diffusion Fields
Ahmed Elhag, Yuyang Wang et al.
Language Control Diffusion: Efficiently Scaling through Space, Time, and Tasks
David Bell, Yujie Lu, Shinda Huang et al.
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
Xinyuan Wang, Chenxi Li, Zhen Wang et al.
How do Language Models Bind Entities in Context?
Jiahai Feng, Jacob Steinhardt
Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Sina Khajehabdollahi, Roxana Zeraati, Emmanouil Giannakakis et al.
Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian distributions
Frank Cole, Yulong Lu
FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-Bound Scaling
Yu Tian, Min Shi, Yan Luo et al.
Learning Performance-Improving Code Edits
Alexander Shypula, Aman Madaan, Yimeng Zeng et al.
Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks
Greg Yang, Dingli Yu, Chen Zhu et al.
Privately Aligning Language Models with Reinforcement Learning
Fan Wu, Huseyin Inan, Arturs Backurs et al.
Let's Verify Step by Step
Hunter Lightman, Vineet Kosaraju, Yuri Burda et al.
Complete and Efficient Graph Transformers for Crystal Material Property Prediction
Keqiang Yan, Cong Fu, Xiaofeng Qian et al.
Uncertainty Quantification via Stable Distribution Propagation
Felix Petersen, Aashwin Mishra, Hilde Kuehne et al.
Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory
Wei Huang, Ye Shi, Zhongyi Cai et al.
LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors
Sheng Jin, Xueying Jiang, Jiaxing Huang et al.
Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches
Lingxuan Wu, Xiao Yang, Yinpeng Dong et al.
$\texttt{NAISR}$: A 3D Neural Additive Model for Interpretable Shape Representation
Yining Jiao, Carlton Zdanski, Julia Kimbell et al.
Towards Offline Opponent Modeling with In-context Learning
Yuheng Jing, Kai Li, Bingyun Liu et al.
MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo
Chenjie Cao, Xinlin Ren, Yanwei Fu
SKILL-MIX: a Flexible and Expandable Family of Evaluations for AI Models
Dingli Yu, Simran Kaur, Arushi Gupta et al.
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents
Jae-Woo Choi, Youngwoo Yoon et al.
Hybrid Directional Graph Neural Network for Molecules
Junyi An, Chao Qu, Zhipeng Zhou et al.
DAFA: Distance-Aware Fair Adversarial Training
Hyungyu Lee, Saehyung Lee, Hyemi Jang et al.
Fast-ELECTRA for Efficient Pre-training
Chengyu Dong, Liyuan Liu, Hao Cheng et al.
Course Correcting Koopman Representations
Mahan Fathi, Clement Gehring, Jonathan Pilault et al.
Generating Pragmatic Examples to Train Neural Program Synthesizers
Saujas Vaduguru, Daniel Fried, Yewen Pu
Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Weiyang Liu, Zeju Qiu, Yao Feng et al.
Learning with Language-Guided State Abstractions
Andi Peng, Ilia Sucholutsky, Belinda Li et al.
Successor Heads: Recurring, Interpretable Attention Heads In The Wild
Rhys Gould, Euan Ong, George Ogden et al.
On the Expressivity of Objective-Specification Formalisms in Reinforcement Learning
Rohan Subramani, Marcus Williams, Max Heitmann et al.
Latent Representation and Simulation of Markov Processes via Time-Lagged Information Bottleneck
Marco Federici, Patrick Forré, Ryota Tomioka et al.
DreamClean: Restoring Clean Image Using Deep Diffusion Prior
Jie Xiao, Ruili Feng, Han Zhang et al.
Enhancing Neural Subset Selection: Integrating Background Information into Set Representations
Binghui Xie, Yatao Bian, Kaiwen Zhou et al.
Probabilistic Adaptation of Black-Box Text-to-Video Models
Sherry Yang, Yilun Du, Bo Dai et al.
Partitioning Message Passing for Graph Fraud Detection
Wei Zhuo, Zemin Liu, Bryan Hooi et al.
Meta-Evolve: Continuous Robot Evolution for One-to-many Policy Transfer
Xingyu Liu, Deepak Pathak, Ding Zhao
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding
Zilong Wang, Hao Zhang, Chun-Liang Li et al.
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
Zuxin Liu, Jesse Zhang, Kavosh Asadi et al.
Concept Bottleneck Generative Models
Aya Abdelsalam Ismail, Julius Adebayo, Hector Corrada Bravo et al.
Safe Collaborative Filtering
Riku Togashi, Tatsushi Oka, Naoto Ohsaka et al.
Skip-Attention: Improving Vision Transformers by Paying Less Attention
Shashank Venkataramanan, Amir Ghodrati, Yuki Asano et al.
Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
Linhao Luo, Yuan-Fang Li, Reza Haffari et al.
Robust Model-Based Optimization for Challenging Fitness Landscapes
Saba Ghaffari, Ehsan Saleh, Alex Schwing et al.
Analytically Tractable Hidden-States Inference in Bayesian Neural Networks
Luong-Ha Nguyen, James-A. Goulet
An interpretable error correction method for enhancing code-to-code translation
Min Xue, Artur Andrzejak, Marla Leuther
Fiber Monte Carlo
Nick Richardson, Deniz Oktay, Yaniv Ovadia et al.
NeRM: Learning Neural Representations for High-Framerate Human Motion Synthesis
Dong Wei, Huaijiang Sun, Bin Li et al.
A Unified Experiment Design Approach for Cyclic and Acyclic Causal Models
Ehsan Mokhtarian, Saber Salehkaleybar, AmirEmad Ghassami et al.
A Framework and Benchmark for Deep Batch Active Learning for Regression
David Holzmüller, Viktor Zaverkin, Johannes Kästner et al.
Tackling the Data Heterogeneity in Asynchronous Federated Learning with Cached Update Calibration
Yujia Wang, Yuanpu Cao, Jingcheng Wu et al.
ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation
Jiaming Liu, Senqiao Yang, Peidong Jia et al.
Automatic Functional Differentiation in JAX
Min Lin
Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems
Jacob Prince, Gabriel Fajardo, George Alvarez et al.
$\mathcal{B}$-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis
Zishun Yu, Yunzhe Tao, Liyu Chen et al.
Octavius: Mitigating Task Interference in MLLMs via LoRA-MoE
Zeren Chen, Ziqin Wang, Zhen Wang et al.
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
Zhibin Gou, Zhihong Shao, Yeyun Gong et al.
Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation
Jianliang He, Han Zhong, Zhuoran Yang
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption
Rui Yang, Han Zhong, Jiawei Xu et al.
Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions
Juncheng Li, Kaihang Pan, Zhiqi Ge et al.
Towards domain-invariant Self-Supervised Learning with Batch Styles Standardization
Marin Scalbert, Maria Vakalopoulou, Florent Couzinie-Devy
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training
Kazem Meidani, Parshin Shojaee, Chandan Reddy et al.
Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation
Shreyas Havaldar, Navodita Sharma, Shubhi Sareen et al.
Transformer-Modulated Diffusion Models for Probabilistic Multivariate Time Series Forecasting
Yuxin Li, Wenchao Chen, Xinyue Hu et al.
Vanishing Gradients in Reinforcement Finetuning of Language Models
Noam Razin, Hattie Zhou, Omid Saremi et al.
What Algorithms can Transformers Learn? A Study in Length Generalization
Hattie Zhou, Arwen Bradley, Etai Littwin et al.
Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization
Yinbin Han, Meisam Razaviyayn, Renyuan Xu
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
Xinlu Zhang, Shiyang Li, Xianjun Yang et al.
Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime
Keita Suzuki, Taiji Suzuki
On the Scalability and Memory Efficiency of Semidefinite Programs for Lipschitz Constant Estimation of Neural Networks
Zi Wang, Bin Hu, Aaron Havens et al.
Intelligent Switching for Reset-Free RL
Darshan Patil, Janarthanan Rajendran, Glen Berseth et al.
Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification
Joar Skalse, Alessandro Abate
Effective and Efficient Federated Tree Learning on Hybrid Data
Qinbin Li, Chulin Xie, Xiaojun Xu et al.
Neural Processing of Tri-Plane Hybrid Neural Fields
Adriano Cardace, Pierluigi Zama Ramirez, Francesco Ballerini et al.
Boosting the Adversarial Robustness of Graph Neural Networks: An OOD Perspective
Kuan Li, YiWen Chen, Yang Liu et al.
Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game
Simin Li, Jun Guo, Jingqiao Xiu et al.
SetCSE: Set Operations using Contrastive Learning of Sentence Embeddings
Kang Liu
#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models
Keming Lu, Hongyi Yuan, Zheng Yuan et al.
Debiasing Attention Mechanism in Transformer without Demographics
Shenyu Lu, Yipei Wang, Xiaoqian Wang
Unsupervised Pretraining for Fact Verification by Language Model Distillation
Adrian Bazaga, Pietro Lio, Gos Micklem
Image Translation as Diffusion Visual Programmers
Cheng Han, James Liang, Qifan Wang et al.
Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation
Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami et al.
Adversarial Imitation Learning via Boosting
Jonathan Chang, Dhruv Sreenivas, Yingbing Huang et al.
Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information
Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan et al.
Provable Reward-Agnostic Preference-Based Reinforcement Learning
Wenhao Zhan, Masatoshi Uehara, Wen Sun et al.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
Licong Lin, Yu Bai, Song Mei
Improving Convergence and Generalization Using Parameter Symmetries
Bo Zhao, Robert M. Gower, Robin Walters et al.
COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits
Mintong Kang, Nezihe Merve Gürel, Linyi Li et al.
Manifold Preserving Guided Diffusion
Yutong He, Naoki Murata, Chieh-Hsin Lai et al.
Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators
Daniel Geng, Andrew Owens
Threaten Spiking Neural Networks through Combining Rate and Temporal Information
Zecheng Hao, Tong Bu, Xinyu Shi et al.
Exploring Target Representations for Masked Autoencoders
Xingbin Liu, Jinghao Zhou, Tao Kong et al.
Federated Recommendation with Additive Personalization
Zhiwei Li, Guodong Long, Tianyi Zhou
Neural Language of Thought Models
Yi-Fu Wu, Minseung Lee, Sungjin Ahn
Text2Reward: Reward Shaping with Language Models for Reinforcement Learning
Tianbao Xie, Siheng Zhao, Chen Henry Wu et al.
Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion
Alexandru Meterez, Amir Joudaki, Francesco Orabona et al.
Statistical Rejection Sampling Improves Preference Optimization
Tianqi Liu, Yao Zhao, Rishabh Joshi et al.
Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs
Qingru Zhang, Chandan Singh, Liyuan Liu et al.
Privacy Amplification for Matrix Mechanisms
Christopher Choquette-Choo, Arun Ganesh, Thomas Steinke et al.
Negative Label Guided OOD Detection with Pretrained Vision-Language Models
Xue Jiang, Feng Liu, Zhen Fang et al.
PTaRL: Prototype-based Tabular Representation Learning via Space Calibration
Hangting Ye, Wei Fan, Xiaozhuang Song et al.
Constrained Bi-Level Optimization: Proximal Lagrangian Value Function Approach and Hessian-free Algorithm
Wei Yao, Chengming Yu, Shangzhi Zeng et al.
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning
Christopher Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla et al.
ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
Junjie Oscar Yin, Yingheng Wang, Volodymyr Kuleshov et al.
On the Stability of Expressive Positional Encodings for Graphs
Yinan Huang, William Lu, Joshua Robinson et al.
Evaluating Representation Learning on the Protein Structure Universe
Arian Jamasb, Alex Morehead, Chaitanya Joshi et al.
AutoVP: An Automated Visual Prompting Framework and Benchmark
Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen et al.
On the Hardness of Constrained Cooperative Multi-Agent Reinforcement Learning
Ziyi Chen, Yi Zhou, Heng Huang
Information Retention via Learning Supplemental Features
Zhipeng Xie, Yahe Li
Geometry-Aware Projective Mapping for Unbounded Neural Radiance Fields
Junoh Lee, Hyunjun Jung, Jinhwi Park et al.
Off-Policy Primal-Dual Safe Reinforcement Learning
Zifan Wu, Bo Tang, Qian Lin et al.
When should we prefer Decision Transformers for Offline Reinforcement Learning?
Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard et al.
ARM: Refining Multivariate Forecasting with Adaptive Temporal-Contextual Learning
Jiecheng Lu, Xu Han, Shihao Yang
SAS: Structured Activation Sparsification
Yusuke Sekikawa, Shingo Yashima
Learning Multi-Agent Communication with Contrastive Learning
Yat Long (Richie) Lo, Biswa Sengupta, Jakob Foerster et al.
Xformer: Hybrid X-Shaped Transformer for Image Denoising
Jiale Zhang, Yulun Zhang, Jinjin Gu et al.
Dynamics-Informed Protein Design with Structure Conditioning
Urszula Julia Komorowska, Simon Mathis, Kieran Didi et al.
Identifiable Latent Polynomial Causal Models through the Lens of Change
Yuhang Liu, Zhen Zhang, Dong Gong et al.
SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning
Jiacheng Chen, Zeyuan Ma, Hongshu Guo et al.
Graph Lottery Ticket Automated
Guibin Zhang, Kun Wang, Wei Huang et al.
Threshold-Consistent Margin Loss for Open-World Deep Metric Learning
Qin Zhang, Linghan Xu, Jun Fang et al.
Encoding Unitig-level Assembly Graphs with Heterophilous Constraints for Metagenomic Contigs Binning
Hansheng Xue, Vijini Mallawaarachchi, Lexing Xie et al.
Adaptive Regret for Bandits Made Possible: Two Queries Suffice
Zhou Lu, Qiuyi (Richard) Zhang, Xinyi Chen et al.
AdaMerging: Adaptive Model Merging for Multi-Task Learning
Enneng Yang, Zhenyi Wang, Li Shen et al.
Statistically Optimal $K$-means Clustering via Nonnegative Low-rank Semidefinite Programming
Yubo Zhuang, Xiaohui Chen, Yun Yang et al.
Improved statistical and computational complexity of the mean-field Langevin dynamics under structured data
Atsushi Nitanda, Kazusato Oko, Taiji Suzuki et al.
Bridging Neural and Symbolic Representations with Transitional Dictionary Learning
Junyan Cheng, Peter Chin
Thin-Shell Object Manipulations With Differentiable Physics Simulations
Yian Wang, Juntian Zheng, Zhehuan Chen et al.
Bayesian Coreset Optimization for Personalized Federated Learning
Prateek Chanda, Shrey Modi, Ganesh Ramakrishnan
Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs
Anson Simon Bastos, Kuldeep Singh, Abhishek Nadgeri et al.
Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs
Woomin Song, Seunghyuk Oh, Sangwoo Mo et al.
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
Fred Zhang, Neel Nanda
Scale-Adaptive Diffusion Model for Complex Sketch Synthesis
Jijin Hu, Ke Li, Yonggang Qi et al.
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting
Runqi Lin, Chaojian Yu, Bo Han et al.
Mastering Memory Tasks with World Models
Mohammad Reza Samsami, Artem Zholus, Janarthanan Rajendran et al.
Towards Principled Representation Learning from Videos for Reinforcement Learning
Dipendra Kumar Misra, Akanksha Saran, Tengyang Xie et al.
Expected flow networks in stochastic environments and two-player zero-sum games
Marco Jiralerspong, Bilun Sun, Danilo Vucetic et al.
Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond
Tianxin Wei, Bowen Jin, Ruirui Li et al.
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
Yung-Sung Chuang, Yujia Xie, Hongyin Luo et al.
Energy-conserving equivariant GNN for elasticity of lattice architected metamaterials
Ivan Grega, Ilyes Batatia, Gábor Csányi et al.
SALMON: Self-Alignment with Instructable Reward Models
Zhiqing Sun, Yikang Shen, Hongxin Zhang et al.
Get more for less: Principled Data Selection for Warming Up Fine-Tuning in LLMs
Feiyang Kang, Hoang Anh Just, Yifan Sun et al.
Augmenting Transformers with Recursively Composed Multi-grained Representations
Xiang Hu, Qingyang Zhu, Kewei Tu et al.
Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization
Guang Lin, Chao Li, Jianhai Zhang et al.
Large Language Models as Generalizable Policies for Embodied Tasks
Andrew Szot, Max Schwarzer, Harsh Agrawal et al.
The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting — An Analytical Model
Daniel Goldfarb, Itay Evron, Nir Weinberger et al.
Fast Equilibrium of SGD in Generic Situations
Zhiyuan Li, Yi Wang, Zhiren Wang
Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data
Yuhui Zhang, Elaine Sui, Serena Yeung
Compositional Preference Models for Aligning LMs
Dongyoung Go, Tomek Korbak, Germàn Kruszewski et al.
Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective
Zehao Dou, Yang Song
Demystifying Local & Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition
Faisal Hamman, Sanghamitra Dutta
Learning Conditional Invariances through Non-Commutativity
Abhra Chaudhuri, Serban Georgescu, Anjan Dutta
Generative Modeling with Phase Stochastic Bridge
Tianrong Chen, Jiatao Gu, Laurent Dinh et al.
Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation
Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis et al.
RobustTSF: Towards Theory and Design of Robust Time Series Forecasting with Anomalies
Hao Cheng, Qingsong Wen, Yang Liu et al.
Tailoring Self-Rationalizers with Multi-Reward Distillation
Sahana Ramnath, Brihi Joshi, Skyler Hallinan et al.
Controlling Vision-Language Models for Multi-Task Image Restoration
Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao et al.
VFLAIR: A Research Library and Benchmark for Vertical Federated Learning
Tianyuan Zou, Zixuan Gu, Yu He et al.
Measuring Vision-Language STEM Skills of Neural Models
Jianhao Shen, Ye Yuan, Srbuhi Mirzoyan et al.
Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers
Qingyan Guo, Rui Wang, Junliang Guo et al.
MCM: Masked Cell Modeling for Anomaly Detection in Tabular Data
Jiaxin Yin, Yuanyuan Qiao, Zitang Zhou et al.
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers
Kai Shen, Zeqian Ju, Xu Tan et al.
CLIP-MUSED: CLIP-Guided Multi-Subject Visual Neural Information Semantic Decoding
Qiongyi Zhou, Changde Du, Shengpei Wang et al.
How connectivity structure shapes rich and lazy learning in neural circuits
Yuhan Helena Liu, Aristide Baratin, Jonathan Cornford et al.
ARGS: Alignment as Reward-Guided Search
Maxim Khanov, Jirayu Burapacheep, Yixuan Li
Let Models Speak Ciphers: Multiagent Debate through Embeddings
Chau Pham, Boyi Liu, Yingxiang Yang et al.
NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks
Wenxi Wang, Yang Hu, Mohit Tiwari et al.
Understanding when Dynamics-Invariant Data Augmentations Benefit Model-free Reinforcement Learning Updates
Nicholas Corrado, Josiah Hanna
Revisiting Deep Audio-Text Retrieval Through the Lens of Transportation
Tien Manh Luong, Khai Nguyen, Nhat Ho et al.
Text-to-3D with Classifier Score Distillation
Xin Yu, Yuan-Chen Guo, Yangguang Li et al.
Transformers can optimally learn regression mixture models
Reese Pathak, Rajat Sen, Weihao Kong et al.
Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy Label Learning
HeeSun Bae, Seungjae Shin, Byeonghu Na et al.
Branch-GAN: Improving Text Generation with (not so) Large Language Models
Fredrik Carlsson, Johan Broberg, Erik Hillbom et al.
SocioDojo: Building Lifelong Analytical Agents with Real-world Text and Time Series
Junyan Cheng, Peter Chin
A unique M-pattern for micro-expression spotting in long videos
Jinxuan Wang, Shiting Xu, Tong Zhang
Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning
Yun-Hin Chan, Rui Zhou, Running Zhao et al.
iTransformer: Inverted Transformers Are Effective for Time Series Forecasting
Yong Liu, Tengge Hu, Haoran Zhang et al.
A Mutual Information Perspective on Federated Contrastive Learning
Christos Louizos, Matthias Reisser, Denis Korzhenkov
Local Graph Clustering with Noisy Labels
Artur Back de Luca, Kimon Fountoulakis, Shenghao Yang
DistillSpec: Improving Speculative Decoding via Knowledge Distillation
Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat et al.
Faithful Vision-Language Interpretation via Concept Bottleneck Models
Songning Lai, Lijie Hu, Junxiao Wang et al.
Stylized Offline Reinforcement Learning: Extracting Diverse High-Quality Behaviors from Heterogeneous Datasets
Yihuan Mao, Chengjie Wu, Xi Chen et al.
Variance-aware Regret Bounds for Stochastic Contextual Dueling Bandits
Qiwei Di, Tao Jin, Yue Wu et al.
Demystifying Embedding Spaces using Large Language Models
Guy Tennenholtz, Yinlam Chow, ChihWei Hsu et al.
A Newborn Embodied Turing Test for Comparing Object Segmentation Across Animals and Machines
Manju Garimella, Denizhan Pak, Justin Wood et al.
DeepZero: Scaling Up Zeroth-Order Optimization for Deep Model Training
Aochuan Chen, Yimeng Zhang, Jinghan Jia et al.
Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View.
Raj Ghugare, Matthieu Geist, Glen Berseth et al.
Unveiling the Pitfalls of Knowledge Editing for Large Language Models
Zhoubo Li, Ningyu Zhang, Yunzhi Yao et al.
Learning Thresholds with Latent Values and Censored Feedback
Jiahao Zhang, Tao Lin, Weiqiang Zheng et al.
Extending Power of Nature from Binary to Real-Valued Graph Learning in Real World
Chunshu Wu, Ruibing Song, Chuan Liu et al.
Robustifying and Boosting Training-Free Neural Architecture Search
Zhenfeng He, Yao Shu, Zhongxiang Dai et al.