arXiv Analytics

arXiv:1405.5139 [math.LO]

Algorithmic identification of probabilities is hard

Laurent Bienvenu, Santiago Figueira, Benoit Monin, Alexander Shen

Published 2014-05-20, updated 2017-04-12 (Version 3)

Suppose that we are given an infinite binary sequence which is random for a Bernoulli measure of parameter $p$. By the law of large numbers, the frequency of zeros in the sequence tends to $p$, so we can get better and better approximations of $p$ as we read the sequence. In this paper we study a similar question from the viewpoint of inductive inference. We now suppose that $p$ is a computable real, but we ask for more: as we read more and more bits of our random sequence, we must eventually guess the exact parameter $p$ (in the form of a Turing code). Can one do such a thing uniformly for all sequences that are random for computable Bernoulli measures, or even for a 'large enough' fraction of them? We give a negative answer to this question; in fact, we prove a very general negative result that extends far beyond the class of Bernoulli measures.
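To make the opening observation concrete, here is a minimal sketch of the frequency estimator the abstract alludes to. It is illustrative only: the bits are pseudorandomly simulated rather than drawn from a genuinely Martin-Löf random sequence, the parameter $p = 1/3$ is an arbitrary choice, and the function name is ours. Note that this estimator only yields better and better approximations of $p$; it does not produce the exact parameter as a Turing code, which is the stronger identification task the paper proves impossible in general.

```python
import random

def frequency_estimates(bits):
    """Yield the running frequency of zeros after each prefix of the sequence."""
    zeros = 0
    for n, b in enumerate(bits, start=1):
        if b == 0:
            zeros += 1
        yield zeros / n

# Simulate a Bernoulli(p) sequence (a stand-in for a p-random sequence);
# by the law of large numbers the running frequency of zeros tends to p.
p = 1 / 3  # hypothetical computable parameter
random.seed(0)
bits = [0 if random.random() < p else 1 for _ in range(100_000)]

for n, estimate in enumerate(frequency_estimates(bits), start=1):
    if n in (100, 1_000, 10_000, 100_000):
        print(f"n = {n:6d}: frequency of zeros = {estimate:.4f}")
```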

Related articles:
arXiv:2409.00631 [math.LO] (Published 2024-09-01)
There is a deep 1-generic set
arXiv:2003.07408 [math.LO] (Published 2020-03-16)
Probabilities with Gaps and Gluts
arXiv:1301.3392 [math.LO] (Published 2013-01-15)
The axiomatic power of Kolmogorov complexity