Human Perception Of Patterns And Randomness As Algorithmic Complexity Over Probabilistic Stack Automata

Why does observing HHHHHH (six heads) from a fair coin feel more regular and less random than observing HTTHHT, when both outcomes are equally likely? Despite the simplicity of this question, cognitive and complexity scientists have debated whether human perception of randomness is simply erroneous,1 determined by how representative the event is of a large random sequence,2 reflective of cognitive limitations (e.g., working memory),3 calculated via Bayesian inference (posterior odds of the sequence having been generated by a "fair" vs. a "biased" process),4 or computed as the complexity of the simplest program that can produce the event.5,6 Because the algorithmic complexity of an arbitrary event is uncomputable on an unrestricted universal Turing machine, researchers who use algorithmic complexity to explain human randomness judgments approximate it using heuristics5 or by restricting attention to the set of Turing machines with two symbols and five states (D(5)).6 We explore a different approach: approximating algorithmic complexity using machines that are constrained in their computational capabilities. The constrained machines we evaluate are probabilistic finite state machines, probabilistic finite stack machines (PFS), probabilistic finite queue machines, and probabilistic finite random-access machines. In a first experiment, we found that PFS machines provide the best explanation of human randomness judgments (see Fig. 1). We then compared the best computationally constrained model (PFS) to the state- and symbol-constrained Turing machine model (D(5)). PFS machines provided a better fit to human adult judgments of randomness (PFS: r = 0.83, Bayesian Information Criterion [smaller values indicate better fits] BIC = 3399; D(5): r = 0.38, BIC = 3449) and, in a second experiment, of regularity (PFS: r = 0.92, BIC = 89,685; D(5): r = 0.65, BIC = 94,184). Thus, PFS machines capture perceived randomness and regularity better than D(5).
This suggests that constraining the computational capabilities of machines can capture human cognitive processes better than constraining the number of possible states and symbols accessible to a more complex machine.
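To make the core idea concrete, here is a minimal toy sketch (not the authors' model) of what "algorithmic complexity over a restricted machine class" means: score a binary sequence by its code length, in bits, under the best generator in a small family of simple probabilistic machines. The family used here — two-state Markov generators parameterized by a repeat probability — is an illustrative assumption, far simpler than the probabilistic stack machines evaluated in the paper, but it reproduces the headline intuition that HHHHHH compresses better than HTTHHT.

```python
import math

def markov_nll(seq, p_repeat):
    """Negative log2 probability of a binary sequence under a toy two-state
    Markov generator that repeats the previous symbol with prob. p_repeat."""
    logp = math.log2(0.5)  # first symbol is a fair draw
    for prev, cur in zip(seq, seq[1:]):
        logp += math.log2(p_repeat if cur == prev else 1 - p_repeat)
    return -logp

def toy_complexity(seq, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Code length (bits) of seq under the best machine in a small family --
    a stand-in for algorithmic complexity restricted to simple machines."""
    return min(markov_nll(seq, p) for p in grid)

# The patterned sequence gets a shorter code than the irregular one,
# mirroring the intuition that HHHHHH feels less random than HTTHHT.
print(toy_complexity("HHHHHH"))
print(toy_complexity("HTTHHT"))
```

Under this toy family, HHHHHH costs about 1.76 bits (best explained by a strongly repeating machine) while HTTHHT costs about 6 bits (no machine in the family beats the fair-coin baseline by much), so the patterned sequence is judged less complex.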