The brain's need to integrate the senses will cause it to compromise when sensory input conflicts.

Two theories about how the senses integrate:

lots of central planning

on-the-ground analysis and decision-making prior to consultation with a central command.

Both theories involve the same three steps:

sensation

routing

perception

But the second theory adds that, at each step, the signals begin communicating immediately (instead of first reporting back to central command), influencing subsequent rounds of signal processing.

This chapter's focus is the brain AFTER we achieve perception.

There is a survival advantage to seeing the world as a whole (integrated). But there's also a binding problem: where and how does information from different senses begin to merge in the brain?

Where = association cortices: bridges between sensory and motor regions. These bridges use both bottom-up and top-down processes to achieve perception.

HOW

bottom-up processing = feature detectors/"auditors" (detecting edges and other elementary visual features; this piecemeal analysis is part of why READING is a slow way to put information into the brain).

top-down processing = reading the auditors' report and then reacting to it, based on pre-existing knowledge. Because we have unique previous experiences, we have different interpretations of top-down analyses. (This is where we add/subtract/alter data).

Sensory processes are wired to work together (evolutionarily, we've always encountered a multisensory world). So we experience multimodal reinforcement: touch can boost the visual system, for example. Multiple senses also affect our ability to DETECT stimuli. Learning is increasingly optimized the more multisensory the environment becomes, and vice versa: learning is less effective in a unisensory environment.

Supra-additive integration: multisensory improvements are greater than the sum of their unisensory counterparts. So, multisensory presentations are the way to go.
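The supra-additive relationship above can be made concrete with a tiny sketch. The numbers here are hypothetical, chosen only to illustrate the inequality (combined gain > sum of individual gains), not measured values from any study:

```python
# Illustrative sketch of supra-additive integration: the multisensory
# gain exceeds the sum of the unisensory gains. All numbers below are
# hypothetical, chosen only to make the relationship concrete.

def is_supra_additive(visual_gain: float, auditory_gain: float,
                      multisensory_gain: float) -> bool:
    """True when the combined presentation beats the sum of its parts."""
    return multisensory_gain > visual_gain + auditory_gain

# Hypothetical recall improvements over a no-cue baseline (percentage points):
visual_only = 10.0             # picture alone
auditory_only = 8.0            # narration alone
picture_plus_narration = 25.0  # combined presentation

print(is_supra_additive(visual_only, auditory_only, picture_plus_narration))  # True
```

If the combined gain were merely additive (say 18.0 here), the check would return False; supra-additivity is specifically the "greater than the sum" case.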

A counter-intuitive property of multisensory learning: extra information given at the moment of learning makes learning better (elaborative processing). The extra cognitive processing of information helps the learner to integrate the new material with prior information.

Mayer's 5 rules/principles for multimedia presentation (note: they apply only to hearing and vision):

multimedia principle: better words and pictures than words alone

temporal contiguity principle: better when corresponding words and pictures are presented simultaneously rather than successively

spatial contiguity principle: better when corresponding words and pictures are presented near to each other rather than far from each other on the page/screen.

coherence principle: better when extraneous material is excluded

modality principle: better from animation and narration than from animation and on-screen text.
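The five principles read like a checklist you could run against a slide design. Here's a hedged sketch of that idea; the `SlideSpec` fields and rule wording are my own shorthand for the notes above, not anything from Mayer's work:

```python
# A sketch of Mayer's five principles as a lint-style checklist for a
# slide design. The SlideSpec fields are assumptions of this sketch.

from dataclasses import dataclass

@dataclass
class SlideSpec:
    has_words: bool
    has_pictures: bool
    words_near_pictures: bool          # spatial contiguity
    words_with_pictures_in_time: bool  # temporal contiguity
    extraneous_material: bool          # coherence
    narrated: bool                     # modality (narration vs on-screen text)

def violations(slide: SlideSpec) -> list[str]:
    """Return a human-readable list of principle violations."""
    problems = []
    if slide.has_words and not slide.has_pictures:
        problems.append("multimedia: use words AND pictures, not words alone")
    if not slide.words_near_pictures:
        problems.append("spatial contiguity: place words near their pictures")
    if not slide.words_with_pictures_in_time:
        problems.append("temporal contiguity: present words and pictures together")
    if slide.extraneous_material:
        problems.append("coherence: cut extraneous material")
    if not slide.narrated:
        problems.append("modality: prefer narration over on-screen text")
    return problems

good = SlideSpec(True, True, True, True, False, True)
print(violations(good))  # []
```

A slide with words alone, no contiguity, clutter, and on-screen text would trip all five checks.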

One sense that is special is the nose. (We already talked about the Proust effect, in which smells can evoke memory.) Basically, unlike all your other senses, smell information automatically connects with the rest of the brain (including the emotion-regulating amygdala). As the web cannot yet send smells, unlike your local bakery, this does not yet apply to web development.

Multiple cues, dished up via different senses, enhance learning. But those cues must be congruent (perhaps to avoid an overwhelming cognitive load?).

Visual processing doesn't just assist in the perception of our world. It dominates the perception of our world. The brain devotes about half of its resources to vision.

We see with our brains. We actually experience our visual environment as a fully analyzed OPINION about what the brain thinks is out there.

A HOLLYWOOD HORDE
The brain doesn't process the visual field all at once. The retina assembles patterns of light into tracks (or partial movies) of specific features of the visual field and sends them to the thalamus and visual cortex as separate "streams".

Tracks: coherent, though partial abstractions (i.e. interpretations) of specific features of the visual environment.

Visual processing is MODULAR (Tracks feed into the visual cortex where these individual features are processed SEPARATELY).

Brain then reassembles these scattered features/tracks into 2 giant streams:

Ventral stream: recognizes what an object is and what color it possesses

Dorsal stream: recognizes the location of an object in the visual field and whether it is moving.

These 2 high-level streams are then integrated by the "association regions" of the brain.
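The pipeline described above — tracks split out by the retina, processed separately, summarized into two streams, then bound by association regions — can be sketched as a toy program. The dictionaries and feature names here are assumptions for illustration, not an actual model of the visual system:

```python
# Toy sketch of the modular visual pipeline: the retina splits the scene
# into feature "tracks", two streams summarize them ("what" vs "where"),
# and an association stage binds the streams into one percept.
# Feature names are assumptions of this sketch, not neuroscience.

def retina(scene: dict) -> dict:
    """Split the scene into separate feature tracks (partial movies)."""
    return {feature: scene[feature]
            for feature in ("shape", "color", "location", "motion")}

def ventral_stream(tracks: dict) -> dict:
    """'What' pathway: object identity and color."""
    return {"what": tracks["shape"], "color": tracks["color"]}

def dorsal_stream(tracks: dict) -> dict:
    """'Where' pathway: location and motion."""
    return {"where": tracks["location"], "moving": tracks["motion"]}

def associate(ventral: dict, dorsal: dict) -> dict:
    """Association regions bind the two streams into a single percept."""
    return {**ventral, **dorsal}

scene = {"shape": "ball", "color": "red", "location": "left", "motion": True}
tracks = retina(scene)
percept = associate(ventral_stream(tracks), dorsal_stream(tracks))
print(percept)  # {'what': 'ball', 'color': 'red', 'where': 'left', 'moving': True}
```

The point of the sketch is the architecture: no single stage ever sees the whole scene; the unified percept only exists after the association step.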

CAMELS & COPS
The brain is NOT a camera. It actively deconstructs the information given to it by the eyes, pushes it through a series of filters, and then reconstructs what it thinks it sees (or thinks you should see). Your brain likes to make things up and is not 100 percent faithful to what the eyes broadcast to it. It even uses a process called "filling in" to fake visual data in your blind spot, based on what's around it.

You perceive things that aren't there, but HOW you construct that false information follows rules based on:

previous experience

brain's assumptions

We live in a 3-D world, but light falls onto the eye in a 2-D fashion, so we don't actually experience an image, we experience a leap of faith by the brain about the probability of what a current event should look like.

PHANTOM OF THE OCULAR

Visual capture effect: added visual information can convince the brain to perceive something false (this is the basis for illusions).

TWO types of memory:

recognition memory (explains familiarity)

working memory: a collection of temporary storage buffers with fixed capacities and frustratingly short life spans.

Visual short-term memory is the slice of working memory dedicated to storing visual information. Most people can hold only about 4 objects at a time in that buffer; as the complexity of those objects increases, the number we can hold decreases.
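That buffer behaves like a fixed-capacity container where new items displace old ones, and complexity shrinks the effective capacity. A minimal sketch, with capacity numbers assumed purely for illustration:

```python
# Sketch of visual short-term memory as a fixed-capacity buffer whose
# effective capacity shrinks as items get more complex. The capacity
# arithmetic is an assumption for illustration, not a measured law.

from collections import deque

class VisualBuffer:
    BASE_CAPACITY = 4  # roughly four simple objects, per the notes

    def __init__(self, complexity: int = 1):
        # More complex items crowd out others; capacity never drops below 1.
        self.capacity = max(1, self.BASE_CAPACITY // complexity)
        self._items = deque(maxlen=self.capacity)  # oldest item is displaced

    def attend(self, item):
        self._items.append(item)

    def contents(self):
        return list(self._items)

simple = VisualBuffer(complexity=1)
for shape in ["circle", "square", "triangle", "star", "hexagon"]:
    simple.attend(shape)
print(simple.contents())  # the "circle" was displaced; only 4 items remain

complex_scene = VisualBuffer(complexity=2)
print(complex_scene.capacity)  # 2 — complex objects halve the capacity
```

The `deque(maxlen=...)` does the displacing automatically: attending to a fifth simple object silently evicts the first, which is roughly how the frustratingly short life span of the buffer plays out.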

If vision is the best tool we have for learning anything (as Medina claims), I'm not sure of the implications for the visually impaired.

WORTH A THOUSAND WORDS
Pictorial Superiority Effect (PSE): the more visual an input is, the more likely it is to be recognized and recalled, compared with text.

The brain sees words as lots of tiny pictures: hence, the inefficiency of text. Even when we read, most of us try to visualize what the text is telling us.
Reading creates a bottleneck. My text chokes you ... because my text is too much like pictures.

This makes sense, though, as most major evolutionary threats and eating/reproductive opportunities were apprehended visually.

A PUNCH IN THE NOSE
How brains perceive the world:

preference for patterns with high contrast

principle of common fate: objects that move together are perceived as part of the same object.

prefer human faces to other objects

size related to distance (an object getting closer & therefore bigger, is still the same object)

categorize visual objects by common physical characteristics.

Color vision is beating out our other senses for control/dominance.

IDEAS
Why pictures grab attention

color

orientation

size

motion

IN SUM:

Use color pictures and animations in presentations.

Simple, 2-D pictures are easier to understand than complex drawings.

Know when pictures are NOT the best media for communicating (perhaps a story is better?)

Less text, more pictures, because pictures are a more efficient delivery mechanism for information.

Consumers may prefer pictorial information because it is easier to comprehend.