arXiv Analytics

arXiv:2406.17812 [cs.LG]

Scalable Artificial Intelligence for Science: Perspectives, Methods and Exemplars

Wesley Brewer, Aditya Kashi, Sajal Dash, Aristeidis Tsaris, Junqi Yin, Mallikarjun Shankar, Feiyi Wang

Published 2024-06-24 (Version 1)

In a post-ChatGPT world, this paper explores the potential of leveraging scalable artificial intelligence for scientific discovery. We propose that scaling up artificial intelligence on high-performance computing platforms is essential to address complex scientific problems. This perspective focuses on scientific use cases such as cognitive simulations, large language models for scientific inquiry, medical image analysis, and physics-informed approaches. The study outlines the methodologies needed to address these challenges at scale on supercomputers or in the cloud and provides exemplars of such approaches applied to a variety of scientific problems.

Related articles
arXiv:2303.08302 [cs.LG] (Published 2023-03-15)
A Comprehensive Study on Post-Training Quantization for Large Language Models
arXiv:2305.12356 [cs.LG] (Published 2023-05-21)
Integer or Floating Point? New Outlooks for Low-Bit Quantization on Large Language Models
Yijia Zhang et al.
arXiv:2302.06692 [cs.LG] (Published 2023-02-13)
Guiding Pretraining in Reinforcement Learning with Large Language Models
Yuqing Du et al.