RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

arXiv:2411.18822 · ICLR 2025

Abstract

We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. This learned distance then measures the semantic similarity between pairs of accelerometry time-series, which we use to train the foundation model to capture relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants and achieves strong performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to demonstrate that a foundation model trained on wearable motion data generalizes across distinct evaluation tasks.
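
To make the training signal concrete, below is a minimal PyTorch sketch of one way a relative contrastive objective of this kind can be set up: candidates are ranked by the learned distance to the anchor, and each candidate serves as a positive against all candidates that are farther away. The function name, tensor shapes, and the use of cosine similarity as the scoring function are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relative_contrastive_loss(anchor_emb, cand_embs, learned_dists,
                              temperature=0.1):
    """Sketch of a relative contrastive loss (illustrative, not the
    paper's exact formulation).

    anchor_emb:    (D,) embedding of the anchor segment.
    cand_embs:     (N, D) embeddings of N candidate segments.
    learned_dists: (N,) distances from the anchor to each candidate,
                   produced by the pretrained learnable distance measure.
    """
    # Order candidates from nearest to farthest under the learned distance.
    order = torch.argsort(learned_dists)
    cand_embs = cand_embs[order]

    # Similarity between the anchor and each candidate, scaled by temperature.
    sims = F.cosine_similarity(anchor_emb.unsqueeze(0), cand_embs,
                               dim=-1) / temperature

    # Candidate i acts as a positive against every candidate that is
    # farther from the anchor (indices >= i in sorted order), so the
    # ranking induced by the learned distance is preserved in the
    # embedding space.
    loss = 0.0
    for i in range(len(sims) - 1):
        loss = loss + (torch.logsumexp(sims[i:], dim=0) - sims[i])
    return loss / (len(sims) - 1)
```

In practice this per-anchor loss would be averaged over all anchors in a batch, with candidates drawn both from other windows of the same subject (across time) and from other subjects, matching the cross-time and cross-subject relationships described in the abstract.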
