arXiv Analytics

arXiv:2205.08377 [cs.LG]

Should attention be all we need? The epistemic and ethical implications of unification in machine learning

Nic Fishman, Leif Hancox-Li

Published 2022-05-09 (Version 1)

"Attention is all you need" has become a fundamental precept in machine learning research. Originally designed for machine translation, transformers and the attention mechanisms that underpin them now find success across many problem domains. With the apparent domain-agnostic success of transformers, many researchers are excited that similar model architectures can be successfully deployed across diverse applications in vision, language and beyond. We consider the benefits and risks of these waves of unification on both epistemic and ethical fronts. On the epistemic side, we argue that many of the arguments in favor of unification in the natural sciences fail to transfer over to the machine learning case, or transfer over only under assumptions that might not hold. Unification also introduces epistemic risks related to portability, path dependency, methodological diversity, and increased black-boxing. On the ethical side, we discuss risks emerging from epistemic concerns, further marginalizing underrepresented perspectives, the centralization of power, and having fewer models across more domains of application

Related articles:
arXiv:2401.01629 [cs.LG] (Published 2024-01-03)
Synthetic Data in AI: Challenges, Applications, and Ethical Implications
Shuang Hao et al.
arXiv:2311.15317 [cs.LG] (Published 2023-11-26)
Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs
arXiv:2109.08134 [cs.LG] (Published 2021-09-16)
Comparison and Unification of Three Regularization Methods in Batch Reinforcement Learning