arXiv:2304.10557 [cs.LG]

An Introduction to Transformers

Richard E. Turner

Published 2023-04-20 (Version 1)

The transformer is a neural network component that can be used to learn useful representations of sequences or sets of datapoints. It has driven recent advances in natural language processing, computer vision, and spatio-temporal modelling. There are many introductions to transformers, but most do not contain precise mathematical descriptions of the architecture, and the intuitions behind the design choices are often missing. Moreover, as research takes a winding path, the explanations for the components of the transformer can be idiosyncratic. In this note we aim for a mathematically precise, intuitive, and clean description of the transformer architecture.
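At the core of the architecture the note describes is the self-attention operation, in which each element of a sequence is updated using an attention-weighted mixture of the other elements. As a rough illustration only, and not the note's own notation, a minimal single-head self-attention step might look like the NumPy sketch below; the names self_attention, Wq, Wk, and Wv are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (N, D) sequence of N token embeddings; Wq, Wk, Wv: (D, Dk) query/key/value projections.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (N, N) attention weights; rows sum to 1
    return A @ V                                 # each output is a weighted mixture of the values

# Toy usage: 5 tokens, 8-dimensional embeddings, 4-dimensional projections.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)

A full transformer block adds further components around this operation (multiple heads, residual connections, normalisation, and a position-wise MLP), which is the kind of step-by-step construction the note sets out to describe precisely.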
