arXiv Analytics


arXiv:2505.07070 [cs.LG]

Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures

Francesco Cagnetta, Alessandro Favero, Antonio Sclocchi, Matthieu Wyart

Published 2025-05-11 (Version 1)

How do neural language models acquire a language's structure when trained for next-token prediction? We address this question by deriving theoretical scaling laws for neural network performance on synthetic datasets generated by the Random Hierarchy Model (RHM) -- an ensemble of probabilistic context-free grammars designed to capture the hierarchical structure of natural language while remaining analytically tractable. Previously, we developed a theory of representation learning based on data correlations that explains how deep learning models capture the hierarchical structure of the data sequentially, one layer at a time. Here, we extend our theoretical framework to account for architectural differences. In particular, we predict and empirically validate that convolutional networks, whose structure aligns with that of the generative process through locality and weight sharing, enjoy a faster scaling of performance compared to transformer models, which rely on global self-attention mechanisms. This finding clarifies the architectural biases underlying neural scaling laws and highlights how representation learning is shaped by the interaction between model architecture and the statistical properties of data.
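The Random Hierarchy Model described above generates data by recursively expanding symbols through random production rules, one level at a time, like a probabilistic context-free grammar with a fixed tree depth. As a minimal illustration of this generative process (not the paper's actual code; the vocabulary size, number of rules per symbol, branching factor, and function names here are illustrative assumptions), the sampling procedure can be sketched as:

```python
import random

def make_rules(vocab_size, n_rules, branching, rng):
    # For each nonterminal symbol, sample `n_rules` random productions,
    # each mapping the symbol to a tuple of `branching` child symbols.
    # (Toy stand-in for one level of an RHM-style grammar.)
    return {
        sym: [tuple(rng.randrange(vocab_size) for _ in range(branching))
              for _ in range(n_rules)]
        for sym in range(vocab_size)
    }

def generate(symbol, rules_per_level, rng):
    # Recursively expand `symbol` through the per-level rule sets,
    # choosing one production uniformly at random at every node.
    # With no levels left, the symbol is a leaf (an observed token).
    if not rules_per_level:
        return [symbol]
    children = rng.choice(rules_per_level[0][symbol])
    leaves = []
    for child in children:
        leaves.extend(generate(child, rules_per_level[1:], rng))
    return leaves

rng = random.Random(0)
depth, branching = 3, 2  # illustrative toy values
levels = [make_rules(vocab_size=4, n_rules=2, branching=branching, rng=rng)
          for _ in range(depth)]
sequence = generate(0, levels, rng)
print(len(sequence))  # branching ** depth = 8 leaf tokens
```

Each sampled string is a sequence of leaves of a depth-`depth` tree, so its length is `branching ** depth`; the hierarchical latent structure (which subtrees share a parent rule) is exactly what the abstract argues locality and weight sharing in convolutional networks are aligned with.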

Related articles:
arXiv:1706.04601 [cs.LG] (Published 2017-06-14)
Provable benefits of representation learning
arXiv:1912.05977 [cs.LG] (Published 2019-12-12)
Tracing the Propagation Path: A Flow Perspective of Representation Learning on Graphs
arXiv:1703.02156 [cs.LG] (Published 2017-03-07)
On the Limits of Learning Representations with Label-Based Supervision