arXiv:1704.03528 [astro-ph.SR]
Computation of astrophysical opacities
Published 2017-04-11 (Version 1)
The revision of the standard Los Alamos opacities in the 1980s and 1990s by a group at the Lawrence Livermore National Laboratory (OPAL) and by the Opacity Project (OP) consortium was an early example of collaborative big-data science, leading to reliable data deliverables (atomic databases, monochromatic opacities, mean opacities, and radiative accelerations) that have since been widely used to solve a variety of important astrophysical problems. Nowadays, the precision of the OPAL and OP opacities, and even that of the new Los Alamos tables (OPLIB), is a recurrent topic in a heated debate involving stringent comparisons between theory, laboratory experiments, and solar and stellar observations in several sophisticated research fields: the Standard Solar Model (SSM); helio- and asteroseismology; NLTE 3D hydrodynamic photospheric modeling; nuclear reaction rates; solar neutrino observations; computational atomic physics; and plasma experiments. In this context, an unexpected downward revision of the solar photospheric metal abundances in 2005 spoiled the very precise agreement between the helioseismic indicators (depth of the convection zone, sound-speed profile, and helium surface abundance) and the SSM benchmarks, an agreement that could, to some extent, be reestablished with a substantial opacity increase. Recent laboratory measurements of the iron opacity, in physical conditions similar to those at the boundary of the solar convection zone, have indeed indicated significant increases (30-400%), although new systematic improvements and comparisons of the computational tables have not yet been able to reproduce them. In the present talk we give an overview of this controversy and discuss, within the OP approach, some of the theoretical shortcomings that could be impairing a more complete and accurate opacity accounting.
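As standard background to the "mean opacities" among the deliverables above (not stated in the abstract itself), the mean opacity tabulated by OPAL, OP, and OPLIB for stellar-interior modeling is the Rosseland mean: a harmonic average of the monochromatic opacity κ_ν, weighted by the temperature derivative of the Planck function B_ν(T),

\[
  \frac{1}{\kappa_R} \;=\;
  \frac{\displaystyle \int_0^{\infty} \kappa_\nu^{-1}\,
        \frac{\partial B_\nu}{\partial T}\, d\nu}
       {\displaystyle \int_0^{\infty}
        \frac{\partial B_\nu}{\partial T}\, d\nu}.
\]

Because κ_ν enters harmonically, κ_R is dominated by the most transparent spectral windows; missing lines or underestimated line broadening in the underlying atomic data would therefore tend to bias the mean opacity downward, which is one way the theoretical shortcomings discussed here could in principle act.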