arXiv Analytics


arXiv:2411.09849 [eess.SP]

Self-Supervised Radio Pre-training: Toward Foundational Models for Spectrogram Learning

Ahmed Aboulfotouh, Ashkan Eshaghbeigi, Dimitrios Karslidis, Hatem Abou-Zeid

Published 2024-11-14 (Version 1)

Foundational deep learning (DL) models are general-purpose models trained on large, diverse, and unlabelled datasets, typically with self-supervised learning techniques; they have driven significant advances, especially in natural language processing. These pretrained models can be fine-tuned for related downstream tasks, offering faster development and lower training costs while often achieving improved performance. In this work, we introduce Masked Spectrogram Modeling, a novel self-supervised learning approach for pretraining foundational DL models on radio signals. Adopting a Convolutional LSTM (ConvLSTM) architecture for efficient spatio-temporal processing, we pretrain the model on an unlabelled radio dataset collected from over-the-air measurements. We then fine-tune the pretrained model for two downstream tasks: spectrum forecasting and segmentation. Experimental results demonstrate that our methodology achieves competitive performance in both forecasting accuracy and segmentation, validating its effectiveness for developing foundational radio models.
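The page includes no code; as a minimal sketch of the masked-spectrogram pretraining idea the abstract describes, the PyTorch example below masks random spectrogram frames, runs the sequence through a small ConvLSTM, and reconstructs only the masked frames. The masking ratio, tensor shapes, and all class and function names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gating with convolutions over 2-D feature maps."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # A single conv produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class MaskedSpectrogramModel(nn.Module):
    """Hypothetical pretraining model: ConvLSTM encoder plus a 1x1 reconstruction head."""
    def __init__(self, hid_ch=32):
        super().__init__()
        self.cell = ConvLSTMCell(1, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, kernel_size=1)

    def forward(self, seq):  # seq: (B, T, 1, H, W) sequence of spectrogram frames
        B, T, _, H, W = seq.shape
        h = seq.new_zeros(B, self.cell.gates.out_channels // 4, H, W)
        c = torch.zeros_like(h)
        outs = []
        for t in range(T):
            h, c = self.cell(seq[:, t], (h, c))
            outs.append(self.head(h))
        return torch.stack(outs, dim=1)  # (B, T, 1, H, W) reconstruction

def masked_pretrain_step(model, spec_seq, mask_ratio=0.6):
    """One self-supervised step; the 0.6 masking ratio is an assumption."""
    B, T = spec_seq.shape[:2]
    mask = torch.rand(B, T, device=spec_seq.device) < mask_ratio  # True = masked frame
    masked = spec_seq * (~mask)[:, :, None, None, None]           # zero out masked frames
    recon = model(masked)
    return F.mse_loss(recon[mask], spec_seq[mask])                # loss on masked frames only

model = MaskedSpectrogramModel()
spec = torch.randn(4, 16, 1, 64, 64)  # toy batch: 4 sequences of 16 spectrogram frames
loss = masked_pretrain_step(model, spec)
loss.backward()
```

After pretraining in this fashion, the ConvLSTM weights would be kept and the head swapped out (e.g., for a forecasting or per-pixel segmentation head) to fine-tune on the downstream tasks the abstract mentions.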

Related articles:
arXiv:2002.05364 [eess.SP] (Published 2020-02-13)
Fast Reinforcement Learning for Anti-jamming Communications
arXiv:2411.04128 [eess.SP] (Published 2024-10-22)
On the analysis of saturated pressure to detect fatigue
arXiv:2501.18799 [eess.SP] (Published 2025-01-30)
A General-Purpose Neuromorphic Sensor based on Spiketrum Algorithm: Hardware Details and Real-life Applications