It has been proposed that the general function of the brain is inference, which corresponds quantitatively to the minimization of uncertainty (or the maximization of information). However, there has been a lack of clarity about exactly what this means. Efforts to quantify information have agreed that it depends on probabilities (through Shannon entropy), but there has long been a dispute about the definition of probabilities themselves. The “frequentist” view is that probabilities are (or can be) essentially equivalent to frequencies, and that they are therefore properties of a physical system, independent of any observer of the system. E. T. Jaynes developed the alternative “Bayesian” definition, in which probabilities are always conditional on a state of knowledge through the rules of logic, as expressed in the maximum entropy principle. In doing so, Jaynes and others provided the objective means for deriving probabilities, as well as a unified account of information and logic (knowledge and reason). However, the neuroscience literature virtually never specifies any definition of probability, nor does it acknowledge any dispute concerning the definition. Although there has recently been tremendous interest in Bayesian approaches to the brain, even in the Bayesian literature it is common to find probabilities that are purported to come directly and unconditionally from frequencies. As a result, scientists have mistakenly attributed their own information to the neural systems they study. Here I argue that the adoption of a strictly Jaynesian approach will prevent such errors and will provide us with the philosophical and mathematical framework needed to understand the general function of the brain. Accordingly, our challenge becomes the identification of the biophysical basis of Jaynesian information and logic. I begin to address this issue by suggesting how we might identify a probability distribution over states of one physical system (an “object”) conditional only on the biophysical state of another physical system (an “observer”). The primary purpose in doing so is not to characterize information and inference in exquisite, quantitative detail, but to be as clear and precise as possible about what it means to perform inference and how the biophysics of the brain could achieve this goal.
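To make the maximum entropy principle invoked above concrete, here is the standard textbook derivation (a well-known result, not taken from the article itself) of how Jaynes' prescription turns a known constraint into a unique probability distribution:

```latex
% Shannon entropy of a discrete distribution p_1, ..., p_n:
H(p) = -\sum_{i=1}^{n} p_i \log p_i
% Maximize H subject to normalization and a single known expectation
% \sum_i p_i x_i = \mu, using the Lagrangian
L = H(p) - \lambda_0 \Big( \sum_{i} p_i - 1 \Big)
         - \lambda_1 \Big( \sum_{i} p_i x_i - \mu \Big).
% Setting \partial L / \partial p_i = 0 yields the exponential (Gibbs)
% family as the unique maximizer:
p_i = \frac{e^{-\lambda_1 x_i}}{Z(\lambda_1)},
\qquad Z(\lambda_1) = \sum_{i=1}^{n} e^{-\lambda_1 x_i}.
```

The point of the derivation for the argument above is that the resulting probabilities are fixed entirely by the stated knowledge (the constraint on the mean), not by any frequency data: a different state of knowledge would yield a different distribution.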

A framework is proposed in which matter relates to energy in the way that structure relates to process and information relates to computation. In this scheme, matter corresponds to structure, which corresponds to information; energy corresponds to the ability to carry out a process, which corresponds to computation. The relationship between the two complementary parts of each dichotomous pair (matter/energy, structure/process, information/computation) is analogous to the relationship between being and becoming, where being is the persistence of an existing structure and becoming is the emergence of a new structure through the process of interactions. This approach presents a unified view built on two fundamental ontological categories: information and computation. Conceptualizing the physical world as an intricate tapestry of protoinformation networks evolving through processes of natural computation helps us build more coherent models of nature, connecting the non-living and living worlds. It provides a suitable basis for incorporating current developments in the understanding of biological, cognitive, and social systems as generated by the complexification of physicochemical processes through the self-organization of molecules into dynamic adaptive complex systems by morphogenesis, adaptation, and learning, all of which are understood as information processing.

In this paper I discuss the question: what comes first, physics or information? The two have had a long-standing, symbiotic relationship for almost a hundred years, out of which we have learnt a great deal. Information theory has enriched our interpretations of quantum physics and, at the same time, offered us deep insights into general relativity through the study of black hole thermodynamics. Whatever the outcome of this debate, I argue that physicists will be able to benefit from continuing to explore connections between the two.
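A concrete instance of the information-gravity link mentioned above is the Bekenstein-Hawking entropy, stated here for illustration (a standard result, not derived in the article):

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A:
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}
% The entropy scales with the horizon area measured in Planck units
% \ell_P^2 = G\hbar/c^3, not with the enclosed volume; this area
% scaling is the observation that motivated the holographic principle.
```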

Human beings inherit an informational culture transmitted through spoken and written language. A growing body of empirical work supports the mutual influence between language and categorization, suggesting that our cognitive-linguistic environment both reflects and shapes our understanding. By implication, artifacts that manifest this cognitive-linguistic environment, such as Wikipedia, should represent language structure and conceptual categorization in a way consistent with human behavior. We use this intuition to guide the construction of a computational cognitive model, situated in Wikipedia, that generates semantic association judgments. Our unsupervised model combines information at the language-structure and conceptual-categorization levels to achieve state-of-the-art correlation with human ratings on semantic association tasks including WordSimilarity-353, semantic feature production norms, word association, and false memory.
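The abstract does not reproduce the model itself, so the following is only a minimal sketch of the general approach it describes: scoring semantic association from word co-occurrence statistics in a corpus such as Wikipedia. The toy corpus, window size, PPMI weighting, and cosine similarity here are illustrative assumptions, not the authors' actual unsupervised model.

```python
import math
from collections import Counter, defaultdict

def cooccurrence(sentences, window=2):
    """Count word-context co-occurrences within a symmetric window."""
    counts = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    counts[w][tokens[j]] += 1
    return counts

def ppmi_vectors(counts):
    """Reweight raw counts by positive pointwise mutual information."""
    total = sum(sum(c.values()) for c in counts.values())
    word_tot = {w: sum(c.values()) for w, c in counts.items()}
    ctx_tot = Counter()
    for c in counts.values():
        ctx_tot.update(c)
    return {
        w: {ctx: max(0.0, math.log((n * total) / (word_tot[w] * ctx_tot[ctx])))
            for ctx, n in c.items()}
        for w, c in counts.items()
    }

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(x * v.get(k, 0.0) for k, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy stand-in for sentences extracted from Wikipedia articles.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks rose on the market today".split(),
]
vecs = ppmi_vectors(cooccurrence(corpus))
print(cosine(vecs["cat"], vecs["dog"]))     # relatively high: shared contexts
print(cosine(vecs["cat"], vecs["market"]))  # relatively low
```

PPMI-weighted co-occurrence vectors compared by cosine similarity are a common baseline for exactly the benchmarks the abstract names (e.g., WordSimilarity-353), which is why they serve as a reasonable stand-in here.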