James McAllister’s 2003 article, “Algorithmic randomness in empirical data”, claims that empirical data sets are algorithmically random and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and we discuss the issue within the framework of Minimum Message Length (MML) inference, in which the theory that best compresses the data has the highest posterior probability and is the best explanation of the data.
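The link between compression and posterior probability asserted above can be sketched via the standard two-part MML message length (notation here is illustrative, not taken from the paper): a hypothesis $H$ is transmitted first, then the data $D$ encoded assuming $H$.

```latex
% Two-part message length for hypothesis H and data D
\begin{align}
  I(H, D) &= \underbrace{-\log \Pr(H)}_{\text{assertion of } H}
             \; \underbrace{-\log \Pr(D \mid H)}_{\text{data encoded given } H} \\
  \operatorname*{arg\,min}_{H} I(H, D)
          &= \operatorname*{arg\,max}_{H} \Pr(H)\,\Pr(D \mid H)
           = \operatorname*{arg\,max}_{H} \Pr(H \mid D)
\end{align}
```

The last equality follows from Bayes’ theorem, since $\Pr(H \mid D) = \Pr(H)\Pr(D \mid H)/\Pr(D)$ and $\Pr(D)$ does not depend on $H$; hence the hypothesis giving the shortest total encoding is the one with the highest posterior probability.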