Tuesday, 30 March 2010

UPDATE (23/6/15) - Carl Zimmer followed up his Discover piece with an NYT article about a recent paper in Cortex naming this lack of mental imagery 'aphantasia'. The lead researcher is Dr Adam Zeman and he's interested to hear from people who experience this; his email is a.zeman@exeter.ac.uk (tell him we sent you :)

Discover covers a really cool Neuropsychologia article about a man, MX, who lost his ability to experience mental imagery.

Mental imagery is a divisive topic in psychology. Some (most notably Kosslyn) argue that mental images are essential to many types of cognition. According to this camp, mental images are functionally similar (but not identical) to like-modality perception (Kosslyn, 2004 summarises this view nicely). Imagining an apple and seeing an apple involve similar mechanisms. Furthermore, I can use my mental image of an apple to answer questions about its properties – is it red? is it heavier than a plum? But many other people argue that, although we might feel like we’re using pictures in our imagination to solve various problems, the real work is done by non-depictive representations (see Pylyshyn, 2003 for a good review). When we’re asked to answer questions about an apple’s properties, we can think about what it would be like to see the apple, but this doesn’t entail that the representation is depictive (in this case, pictorial).

If you want to really confuse a psychologist, tell them you don't think there are mental representations mediating behaviour. Try it - they will simply assume you must be joking, because it has never occurred to them that it might be true. This, unfortunately, is the single biggest stumbling block in talking about Gibson to cognitive psychologists, because one of the radical ideas in ecological psychology is that there isn't any need to invoke representations.

The most common counterargument comes from modern neuroscience (specifically neuro-imaging, in particular functional magnetic resonance imaging, or fMRI). There must be representations, your cognitive friend will cry - we've seen them via fMRI! If I present a stimulus to a person in a magnet, you can literally see the brain light up. Clearly, the triumphant cognitive type will claim, Gibson is wrong when he says there are no representations, because the brain is obviously up to something.

Friday, 26 March 2010

I'm going to take advantage of the fact I'm doing this blog to re-read Gibson 1979 and take notes. I'm going to post these chapter by chapter as I go. I also found a copy of Harry Heft's book in the library and I've been meaning to read it too, so I'm going to do the same for that. Posts about the books will have 'Reading Group' in the title. Feel free to read along :)

Thursday, 25 March 2010

A question about a term I’ve been using is a nice segue into an important moment in the history of the ecological approach.

Poverty of stimulus is a term that came from Chomsky, but the intuition has been underpinning theories of perception for as long as there have been theories of perception. The term describes a problem in which the information required to achieve something is not present in the environment. In language, the argument runs as follows:

1) Certain patterns of correct language use can only be learned with exposure to negative evidence (i.e. evidence about what counts as incorrect).

2) Children learning languages only encounter positive evidence (i.e. evidence about what counts as correct).

3) Therefore, children cannot be learning these patterns from the input alone; the stimulus is impoverished, and the missing knowledge must come from somewhere internal (for Chomsky, an innate language faculty).

Sunday, 21 March 2010

String theory is a really cool idea. I don’t actually understand it, of course; I’m not a physicist. But it’s neat to think that some of the oddness of our physical world is accounted for by this undetectable world of tiny, vibrating strings. Some physicists also seem to think that string theory is cool. But physics has not adopted string theory, and nobody’s really pushing for it to, either. Why not? Well, strings are purely hypothetical entities. Maybe they exist, or maybe something completely different is going on. It doesn’t matter that they might explain some interesting stuff; they’re off the table because we can’t see or measure them. Tough. That’s physics.

Psychologists faced a very similar problem when people started thinking about theories of representation. Representations seemed to resolve thorny issues, like how we can successfully interact with the environment given inadequate information (e.g., poverty of the stimulus). It was a really cool idea; people are just like computers! But, as with strings, representations are hypothetical entities. They seem to explain certain behaviour, but we can’t see or measure them. They also aren’t the only game in town. Let go of certain assumptions (e.g., poverty of the stimulus) and the problems representations were supposed to solve look very different, or cease to look like problems at all. While physicists showed restraint in the face of their cool theory, psychologists took representation and ran with it. Although they remain poorly defined and undetectable (probably because we don’t know what we’re looking for), representations are ubiquitous in explanations of cognition.

So, what is the alternative? How about we bench the really cool idea until we’ve exhausted all the other possibilities? Let’s take the alternatives to representation seriously. Physics produces some insanely accurate predictions. Physics sent people to the moon. Psychology can’t reliably diagnose and treat depression. Some of that is down to the complexity of the subject – people are a mess. But I think that some of it is also down to method. While physics is cautious, psychology is eager.

Saturday, 20 March 2010

Today is the birthday of Burrhus Frederic Skinner. I have a soft spot for ol’ BF, because he was right about a lot of things and refused to let the man get him down. Behaviourism gets a bad rap these days, with cognitive people the world over rolling their eyes at the idea that all behaviour can be explained by stimulus-response associations of varying kinds. But Skinner was my kind of scientist: driven by the data and refusing to indulge in the theoretical excess he saw in others.

Skinner was, in part, responding to the Freudian school of thought, which saw all human behaviour as generated by unseen drives and urges. Skinner recognised that there was no scientific evidence for the existence of these particular kinds of internal mental states mediating between the environment and our behaviour. In fact, you could account for a lot of behaviour, human and otherwise, without ever assuming any internal states, simply by recognising that behaviours can be shaped and assembled by learning via schedules of reinforcement.

Friday, 19 March 2010

I teach a Matlab programming class. The main project I get people to work on is programming up a version of the game Monopoly. It’s a great project, I think, because it makes you use all the things Matlab is good at (loops, matrices, data input and output, etc). It’s a surprisingly entertaining programming project, if you’re into that sort of thing.

But I realised yesterday that Monopoly contains a perfect example of embodied cognition, and the contrast with a purely computational approach becomes abundantly clear when you attempt to implement the game on a computer.
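For a flavour of the kind of code the project involves, here is a minimal sketch of the core game loop (in Python rather than Matlab, so it can stand alone here; the rent values and owned squares are invented toy numbers, not part of the real game):

```python
import random

BOARD_SIZE = 40  # a standard Monopoly board has 40 squares
rents = {1: 2, 3: 4, 6: 6}  # hypothetical owned squares -> rent owed (toy values)

def take_turn(position, cash):
    """Roll two dice, advance around the board, and pay any rent owed."""
    roll = random.randint(1, 6) + random.randint(1, 6)
    position = (position + roll) % BOARD_SIZE  # wrap around past Go
    cash -= rents.get(position, 0)             # pay rent if the square is owned
    return position, cash

random.seed(1)        # fixed seed so a run is repeatable
pos, cash = 0, 1500   # players start on Go with 1500
for _ in range(10):
    pos, cash = take_turn(pos, cash)
print(pos, cash)
```

Even this stripped-down loop exercises the staples of the course: random number generation, modular arithmetic for the circular board, and state carried between iterations.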

Thursday, 18 March 2010

More on Gibson later, but I wanted to get to this today. Yesterday I saw a talk about eye tracking, and how people control smooth pursuit movements (the tracking movements your eyes can make when you’re following something continuously). Tracking performance is a quandary for cognitive folks, because we are often very good at it. For instance, if you ask people to track a moving stimulus and record their eye movements, they will successfully foveate the target with almost no lag or erratic need to play catch up (foveating means using the fovea, the densely packed high resolution region on the retina we rely on for precise visual perception). The lack of ‘catch-up’ is the interesting bit, and cognitive psychology thinks that it is evidence of prediction by the system. Prediction requires a predictor, which for cognitive psychology is always a representation.
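To make the "prediction requires a predictor" reading concrete, here is a toy sketch (in Python; the delay, target speed, and velocity-estimation rule are all my inventions for illustration) contrasting a pure feedback tracker, which always lags behind a delayed signal, with a predictive tracker that extrapolates from an estimated velocity:

```python
# Toy comparison: tracking a constant-velocity target through a processing
# delay of `lag` steps. A pure feedback tracker aims at where the target
# *was*; a predictive tracker extrapolates using an estimated velocity.
target_speed = 2.0   # units per step (assumed constant for this toy)
lag = 3              # steps of sensory/motor delay

def target(t):
    return target_speed * t

def feedback_eye(t):
    # aim at the delayed sensed position
    return target(t - lag)

def predictive_eye(t):
    # estimate velocity from two delayed samples, then extrapolate forward
    v = target(t - lag) - target(t - lag - 1)
    return target(t - lag) + v * lag

t = 50
feedback_error = target(t) - feedback_eye(t)      # lags by speed * lag = 6.0
predictive_error = target(t) - predictive_eye(t)  # 0.0 for constant velocity
print(feedback_error, predictive_error)
```

The feedback tracker lags by exactly `target_speed * lag`, while the extrapolating tracker shows the zero-lag performance observers actually produce; the cognitive move is to take that second profile as evidence for an internal predictor.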

The main thing I learned from this talk is this: I am a rampant ego-maniac who is convinced I am right and other people are wrong, but at least I am capable of entertaining the idea that there is another way to conceive of the task. This speaker (and at least one other person in the room) was completely unaware that their perspective entailed assumptions about the underlying mechanism and simply couldn’t conceive of another way to describe the task: for them, prediction was clearly required and therefore internal representation was clearly required.

Wednesday, 17 March 2010

One of my pet peeves is when different groups of psychologists use one term to refer to something for which they have multiple, sometimes contradictory, definitions. When I started studying similarity, I wasted a lot of time trying to clarify what everyone meant by relations. See, for one group of people, relations occurred within a stimulus (a cat’s legs are under its body, its whiskers are on its face). For another group of people, relations occurred between two stimuli (you use a hammer to hit a nail). These are very different types of relations and they affect similarity in very different ways. But, by using that one word, relation, the literature was all muddied about how relations influenced similarity. Clarifying the terms (we now speak of structural relations vs. thematic relations) helps clarify how we think about the subject.

The word representation is used in a comparably muddied fashion. Depending on who you’re talking to, representation might refer to something symbolic, perceptual, discrete, or continuous; and these symbolic/perceptual/discrete/continuous things might be transformed or acted on via ordinary computations or differential equations.

To get to the bottom of this, I want to clarify the different ways in which representation is commonly used. Then, I want to figure out how to introduce some precision in talking about representations. This will make it much easier to discuss the problem of representation and to consider the alternatives.

Representations are internal mediating states: anything that changes/transforms/acts on input to a system in a way that changes/transforms the output (i.e., actions) is a representation.
Dietrich and Markman (2003) provide four conditions for this definition.

1) There needs to be at least one system, which has internal states governing its behaviour.
2) There needs to be an environment, although this doesn’t have to be the external environment. It could just be an adjacent system.
3) Some types of relations have to exist between the system’s internal states and the environment.
4) Processes must act on the internal states to satisfy goals or solve problems. Dietrich and Markman believe that these processes are computational.

On top of these conditions, the authors argue that semantic content needs to be explicit. In other words, the authors contend that psychological-level descriptions of internal states are real and that this level is more relevant than the physical-level description. Representations and processes are more important than chemicals and neurons.

How representations get their content:

1) The relations between internal states and the environment connect particular internal states with particular external states (i.e., correspondence).
2) Representations acquire some content by virtue of the types of interactions they have with other representations (i.e., functional role).

The authors suggest that 1 contributes primarily to the content of low-level representations, like a vibrating eardrum responding to sound, while 2 contributes to higher-level representations like “hope”, “democracy” or other abstract concepts. It’s necessary for every representation to have at least some content from correspondence to external states.

Now, representations could be either discrete or continuous, but Dietrich and Markman argue that they must be discrete. These terms map onto the mathematical sense of continuity/discreteness. So, discrete representations are uniquely identifiable. E.g., I have a unique cat representation that is different from all of my other representations. And, discrete representations have gaps between them. My cat representation doesn’t seamlessly transition into my tiger representation (although there may be overlap).

To sum up, this notion of representation is that they are internal mediating states that are discrete and computational. Each representation is uniquely identifiable (discrete) and the processes that act on representations are ordinary computations. From now on, when I’m talking about this type of representation, I will refer to DC (discrete computational) representations.

Tuesday, 16 March 2010

In my last post I outlined the basic state of affairs in the study of perception up to the modern day. Cognitive science assumes a poverty of stimulus that must be overcome with internal mental representations. The content of these representations is an empirical question, and the drive in psychology ever since the ‘cognitive revolution’ of the 1960s has been to uncover the contents and format of these representations such that they can do the job apparently required.

Monday, 15 March 2010

Hi. I’m a cognitive psychologist, but I’m not that kind of cognitive psychologist. Specifically, I don’t believe in representations, and I reject the computational model of cognition. Yes, this makes me very unpopular. In this post I want to quickly review the dominant cognitive approach and then briefly raise several potential problems with this framework. I will go through these issues in detail in later posts, but I want to go ahead and present the big picture here.

Cognitive scientists tend to view cognition as computation. In this model, representations are data structures and cognitive processes are algorithms acting on these data structures. Input => transformation of input via manipulation of discrete symbols (representations) => output. The clear analogy is to information processing in a computer. Another way to think about representations is as internal mediating states (cf. Dietrich & Markman, 2003). For instance, when I see a cup, the stuff happening in the visual system will probably be a better match to my “cup” representation than to my “glass” representation. So, I correctly identify the object as a cup. In other words, my ability to identify an object depends on consulting a discrete, internal representation of that object.
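A toy sketch of this matching story might look like the following (in Python; the feature sets and the overlap-counting rule are my own inventions for illustration, not anything from Dietrich & Markman):

```python
# Toy version of the "internal mediating state" story: incoming visual
# features are matched against discrete stored representations, and the
# best-matching representation determines what you see. All features and
# the matching rule are invented for illustration.
representations = {
    "cup":   {"handle", "opaque", "cylindrical"},
    "glass": {"no-handle", "transparent", "cylindrical"},
}

def identify(input_features):
    """Return the name of the stored representation that best matches
    the input, scored by simple feature overlap."""
    return max(representations,
               key=lambda name: len(representations[name] & input_features))

seen = {"handle", "opaque", "cylindrical"}
print(identify(seen))  # -> cup
```

The point of the toy is structural: identification happens by consulting discrete internal stand-ins, so the interesting empirical questions all become questions about what those stand-ins contain and how the matching works.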

There are a number of unresolved issues with this representational stance: First, there is no theory of what representations actually are or of what information they contain. Second, many cognitive phenomena seem to defy a computational explanation. For instance, attempts to use a computational framework to model cognitive behaviours have often failed to produce anything as flexible or interesting as what we humans get up to. Third, alternative stances (e.g., that there are no discrete representations or that they are not processed algorithmically) have not been thoroughly explored. Cognitive psychologists usually take representations for granted; their existence is assumed, rarely defined or tested. This just isn’t good science. I’m just raising these points here; in future posts I’ll lay out the evidence.

My goal is to spend some time discussing these issues and to think about alternatives to representation. As a cognitive psychologist, I could get away with not understanding or caring about perception. Honestly, it just doesn’t come up much. When it does come up (e.g., Barsalou) it’s in terms of the “sensation based theory of perception” (see previous post), which we know is outdated. In the long run, I want to discuss how it might be possible to ground the study of cognition in Gibsonian perception/action. This is risky since, at the moment, I have no idea what such a framework would look like. But, cognitive psychology needs to evolve, and this is currently my best bet on how that might happen.

Sunday, 14 March 2010

I’m a Gibsonian. I study perception and action from an ecological perspective, which is based in James Gibson’s theories of perception (and Nikolai Bernstein’s theories of motor control). This perspective is, in many ways, in direct conflict with the dominant cognitive paradigm in psychology, and frankly a lot of people simply think it’s ridiculous. This is partly our own fault: the ecological camp is small, a bit insular and prone to picking odd fights. But I think we are right, even if sometimes I don’t think we’re studying it or talking about it right. My goal for this blog is to work some of these thoughts out (amongst other things) so I can actually turn these thoughts into experiments and papers.

The main driver for me doing this right now is an internal argument that is brewing amongst ecological types that I think is a) flawed and b) a waste of time. I’ve been thinking and talking about this conflict for a little while now, but I need to find ways to go after the flaws empirically: I am a scientist, and in spite of all the evidence I really do believe psychology can be a science. The argument is actually yet another round of the only argument that ever happens about perception: what constitutes information for a perceiving organism? The modern Gibsonian approach makes specific claims about this, so before we get into the recent stuff we need some context. That means we need a little history.

A Little History

The fundamental question in perception is, what is the information for perception? What is the form of the proximal stimulus (the thing that actually causally interacts with an organism’s sensory apparatus)? The answer to this question has almost always been that the proximal stimuli are sensations, meaningless physical events (photons for vision, compression waves for hearing, etc). Sensations are meaningless because they do not fully specify the world that caused them; they are ambiguous, because many different physical events can give rise to the same pattern of sensations (a small red ball nearby can project the same optical pattern as a large red ball further away, even though these are very different events).

The basic ‘sensation based theory of perception’ runs as follows:

a. Something happens in the world (the distal stimulus). This is what an observer eventually needs to respond to, and so needs to detect somehow.

b. This event causes a pattern of change in the light, although this pattern only correlates with the event in the world; local conditions, the observer’s perspective, etc., alter many of the details.

c. An observer detects this pattern (the proximal stimulus), which is related to the event in the world, but not uniquely.

d. They must then infer what event in the world led to this pattern (i.e. resolve the ambiguity).

e. The observer then responds appropriately to the event.
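The steps above can be sketched as a toy program (in Python; the events, patterns, and prior probabilities are all invented for illustration of the logic, not drawn from any actual model):

```python
# Toy version of steps (a)-(e): several distal events can produce the same
# ambiguous proximal pattern, so the observer must fall back on stored
# knowledge (here, hypothetical prior probabilities) to infer which event
# actually occurred.
patterns = {                               # (a)-(c): event -> proximal pattern
    "small ball, near": "expanding disc",
    "large ball, far":  "expanding disc",  # same pattern: the ambiguity
    "looming wall":     "expanding frame",
}
priors = {"small ball, near": 0.7, "large ball, far": 0.2, "looming wall": 0.1}

def infer(proximal):
    """(d): pick the most probable event consistent with the pattern."""
    candidates = [event for event, p in patterns.items() if p == proximal]
    return max(candidates, key=lambda event: priors[event])

print(infer("expanding disc"))  # the observer's educated guess
```

Note that nothing in the proximal pattern itself distinguishes the two ball events; all the disambiguating work is done by the stored priors, which is exactly the job the sensation-based theory hands to internal knowledge.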

Modern cognitive psychology offers what is essentially just the latest version of this hypothesis, filling in some of the details. (a) and (b) remain the same: these are taken to be the facts of the matter (and it’s the correlation aspect in (b) that Gibson will object to):

c. The proximal stimulus (for vision) is the ‘image’ projected to the back of the eye, onto the retina. Like a camera, the eye focuses rays of light (photons) onto the retina and forms an image. The image isn’t really a picture (not in modern theories, although people still talk about the retinal image); it’s a pattern of light distributed across the retina. There are internal correlations between the ‘pixels’, such that neighbouring pixels tend to resemble each other unless there is an edge, in which case they differ immensely (telling you there’s an edge there). However, this image is ambiguous with respect to what caused it; the mapping is many-to-one, in which many different events could produce the same pattern (every visual illusion is an example of this).

d. The job of vision is to resolve this ambiguity, via inference. Essentially, the solution to this problem is that the visual system must make an educated guess about which of the ‘many’ possible events caused the ‘one’ proximal stimulus. The education of the guess comes from mental representations, the workhorse of cognitive psychology. A representation is a pattern of neural activity/connections that ‘stands in for’ (i.e. represents) the missing information. The representation may contain information about past exposure to this pattern and what the event turned out to be (learning); it may contain information about how certain types of correlations tend to indicate one thing or another. Visual perception is the cognitive process of detecting the proximal stimulus, then selecting and applying the correct representations from your repertoire to resolve the ambiguities.

Cognitive theories about visual perception therefore

1. assume a poverty of stimulus, i.e. an ambiguous proximal stimulus that must be enriched with internally stored information, and

2. are about uncovering the contents of the representations and how the correct one(s) are selected and used.

This is the state of things today, and is the essential form of the argument that James Gibson rejected with his theories. Gibson’s move was simple: he considered (b) from above, and rejected it. If the proximal stimulus only correlates with events in the world, he said, then perception is doomed to failure – how do we ever build correct representations? How could we possibly select the right ones? Gibson’s solution was to rethink information, and propose that events in the world can, in fact, lead to changes in patterns of light that uniquely specify the event that caused them, rather than merely correlating with it. He also argued that it couldn’t be any other way, if perception was to ever work the way we experience it working every day. The next step is to consider some details of Gibson’s alternative.