A discussion of the problem of 'interpretability' in cognitive modeling, arguing that uninterpretable models might actually be the guide to a deeper understanding of neural architecture. A linguistic comparison to the Shivasutras is invoked.