If I had to reduce language learning to the bare essentials and then construct a methodology around those essentials, it might look something like this (from Edmund White’s autobiographical novel The Farewell Symphony):

“[Lucrezia’s] teaching method was clever. She invited me to gossip away in Italian as best I could, discussing what I would ordinarily discuss in English; when stumped for the next expression, I’d pause. She’d then provide the missing word. I’d write it down in a notebook I kept week after week. … Day after day I trekked to Lucrezia’s and she tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.”

Whatever theoretical lens you view this through, Lucrezia’s ‘method’ contains the right mix. Those who subscribe to the ‘learning-is-information-processing’ view will approve of the output + feedback cycle and the covert focus on form. Those of a sociocultural bent will applaud Lucrezia’s scaffolding of learning affordances at the point of need. Dynamic systems theorists will invoke ‘the soft-assembly of language resources in a coupled system’. What’s more, my own recent experience of trying to re-animate my moribund Spanish suggests that the single most effective learning strategy was ‘instructional conversation’ with a friend in a bar. That is to say, the same kind of ‘clever method’ that White celebrates above.

But, of course, unless you have a willing partner, such intensive one-to-one treatment is costly and not always available. Could this kind of conversation-based mediation be engineered digitally? Is there an app for it?

Interactive software that replicates human conversation has been a dream of researchers ever since Alan Turing proposed the ‘Turing Test’ in the 1950s, challenging programmers to design a machine that could fool a jury into thinking it was a real person.

While no one has yet met Turing’s conditions in any convincing way, programs such as ‘chatterbots’ have certainly managed to fool some of the people some of the time. Could they substitute for a real interlocutor, in the way, say, that a computer can substitute for a chess player?

It’s unlikely. Conversation, unlike chess, is not constrained by a finite number of moves. Even the most sophisticated program based on ‘big data’, i.e. one that could scan a corpus of millions or even billions of conversations, and then select its responses accordingly, would still be a simulation. Crucially, what the program would lack is the capacity to ‘get into the mind’ of its conversational partner and intuit his or her intentions. In a word, it would lack intersubjectivity.

Intersubjectivity is ‘the sharing of experiential content (e.g., feelings, perceptions, thoughts, and linguistic meanings) among a plurality of subjects’ (Zlatev et al 2008, p.1). It appears to be a uniquely human faculty. Indeed, some researchers go so far as to claim that ‘the human mind is quintessentially a shared mind and that intersubjectivity is at the heart of what makes us human’ (op.cit. p. 2). Play, collaborative work, conversation and teaching are all dependent on this capacity to ‘know what the other person is thinking’. Lucrezia’s ability to second-guess White’s communicative needs is a consequence of their ‘shared mind’.

It is intersubjectivity that enables effective teachers to pitch their instructional interventions at just the right level, and at the right moment. Indeed, Vygotsky’s notion of the ‘zone of proximal development’ (ZPD) is premised on the notion of intersubjectivity. As van Lier (1996, p. 191) observes:

‘How do we, as caretakers or educators, ensure that our teaching actions are located in the ZPD, especially if we do not really have any precise idea of the innate timetable of every learner? In answer to this question, researchers in the Vygotskian mould propose that social interaction, by virtue of its orientation towards mutual engagement and intersubjectivity, is likely to home in on the ZPD and stay with it.’

Intersubjectivity develops at a very early age – even before the development of language – as a consequence of joint attention on collaborative tasks and routines. Pointing, touching, gaze, and body alignment all contribute to this sharing of attention that is a prerequisite for the emergence of intersubjectivity.

In this sense, intersubjectivity is both situated and embodied: ‘Intersubjectivity is achieved on the basis of how participants orient to one another and to the here-and-now context of an interaction’ (Kramsch 2009, p. 19). Even in adulthood we are acutely sensitive to the ‘body language’ of our conversational partners: ‘A conversation consists of an elaborate sequence of actions – speaking, gesturing, maintaining the correct body language – which conversants must carefully select and time with respect to one another’ (Richardson, et al. 2008, p. 77). And teaching, arguably, is more effective when it is supported by gesture, eye contact and physical alignment. Sime (2008, p. 274), for example, has observed how teachers’ ‘nonverbal behaviours’ frame classroom interactions, whereby ‘a developed sense of intersubjectivity seems to exist, where both learners and teacher share a common set of gestural meanings that are regularly deployed during interaction’.

So, could a computer program replicate (as opposed to simulate) the intersubjectivity that underpins Lucrezia’s method? It seems unlikely. For a start, no amount of data can configure a computer to imagine what it would be like to experience the world from my point of view, with my body and my mind.

Moreover, the disembodied nature of computer-mediated instruction would hardly seem conducive to the ‘situatedness’ that is a condition for intersubjectivity. As Kramsch observes, ‘Teaching the multilingual subject means teaching language as a living form, experienced and remembered bodily’ (2009, p. 191). It is not accidental, I would suggest, that White enlists a very physical metaphor to capture the essence of Lucrezia’s method: ‘She tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.’

The criteria for evaluating the worth of any aid to language learning (whether print or digital, and, in the case of the latter, whether app, program, game, or the software that supports these) must include some assessment of its fitness for purpose. That is to say, does it facilitate learning?

But how do you measure this? Short of testing the item on a representative cross-section of learners, we need a rubric according to which its learning potential might be predicted. And this rubric should, ideally, be informed by our current understandings of how second languages are best learned, understandings which are in turn derived — in part at least — from the findings of researchers of second language acquisition (SLA).

This is easier said than done, of course, as there is (still) little real consensus on how the burgeoning research into SLA should be interpreted. This is partly because of the invisibility of most cognitive processes, but also because of the huge range of variables that SLA embraces: different languages, different aspects of language, different learners, different learning contexts, different learning needs, different learning outcomes, different instructional materials, and so on. Generalizing from research context A to learning context B is fraught with risks. It is for this reason that, in a recent article, Nina Spada (2015) urges caution in extrapolating classroom applications from the findings of SLA researchers.

Cautiously, then, and following VanPatten and Williams’ (2007) example, I’ve compiled a list of ‘observations’ about SLA that have been culled from the literature (albeit inflected by my own particular preoccupations). On the basis of these, and inspired by Long (2011), I will then attempt to frame some questions that can be asked of any teaching aid (tool, device, program, or whatever) in order to calculate its potential for facilitating learning.

Here, then, are 12 observations:

Exposure to input is necessary.

The acquisition of an L2 grammar follows a ‘natural order’ that is roughly the same for all learners, independent of age, L1, instructional approach, etc., although there is considerable variability in terms of the rate of acquisition and of ultimate achievement (Ellis 2008), and, moreover, ‘a good deal of SLA happens incidentally’ (VanPatten and Williams 2007).

A precondition of fluency is having rapid access to a large store of memorized sequences or chunks (Nattinger & DeCarrico 1992; Segalowitz 2010).

Learning, particularly of words, is aided when the learner makes strong associations with the new material (Sökmen 1997).

The more time (and the more intensive the time) spent on learning tasks, the better (Muñoz 2012). Moreover, ‘learners will invest effort in any task if they perceive benefit from it’ (Breen 1987); and task motivation is optimal when challenge and skill are harmonized (Csikszentmihalyi 1990).

On the basis of these observations, and confronted by a novel language learning tool (app, game, device, blah blah), the following questions might be asked:

ADAPTIVITY: Does the tool accommodate the non-linear, often recursive, stochastic, incidental, and idiosyncratic nature of learning, e.g. by allowing the users to negotiate their own learning paths and goals?

COMPLEXITY: Does the tool address the complexity of language, including its multiple interrelated sub-systems (e.g. grammar, lexis, phonology, discourse, pragmatics)?

INPUT: Does it provide access to rich, comprehensible, and engaging reading and/or listening input? Are there means by which the input can be made more comprehensible? And is there a lot of input (so as to optimize the chances of repeated encounters with language items, and of incidental learning)?

NOTICING: Are there mechanisms whereby the user’s attention is directed to features of the input and/or mechanisms that the user can enlist to make features of the input salient?

OUTPUT: Are there opportunities for language production? Are there means whereby the user is pushed to produce language at or even beyond his/her current level of competence?

SCAFFOLDING: Are learning tasks modelled and mediated? Are interventions timely and supportive, and calibrated to take account of the learner’s emerging capacities?

FEEDBACK: Do users get focused and informative feedback on their comprehension and production, including feedback on error?

INTERACTION: Is there provision for the user to collaborate and interact with other users (whether other learners or proficient speakers) in the target language?

AUTOMATICITY: Does the tool provide opportunities for massed practice, and in conditions that replicate conditions of use? Are practice opportunities optimally spaced?

CHUNKS: Does the tool encourage/facilitate the acquisition and use of formulaic language?

PERSONALIZATION: Does the tool encourage the user to form strong personal associations with the material?

FLOW: Is the tool sufficiently engaging and challenging to increase the likelihood of sustained and repeated use? Are its benefits obvious to the user?

Is it better than a teacher?

This list is very provisional: consider it work in progress. But it does replicate a number of the criteria that have been used to evaluate educational materials generally (e.g. Tomlinson 2011) and educational technologies specifically (e.g. Kervin and Derewianka 2011). At the same time, the questions might also provide a framework for comparing and contrasting the learning power of self-access technology with that of more traditional, teacher-mediated classroom instruction. Of course, the bottom line is: does the tool (app, program, learning platform etc) do the job any better than a trained teacher on their own might do?

Any suggestions for amendments and improvements would be very welcome!

As part of a Methods course I am teaching at the moment, I am observing teachers-in-training working with specially constituted classes of ‘guinea pig’ students.

Trainers who work on CELTA or DELTA courses, or on other pre- or in-service schemes, will be familiar with the teaching practice (or practicum) set-up. The trainee teachers plan their classes collaboratively, and then take turns to teach a segment of the overall lesson. The trainer (me, in this case) takes a corner seat, mutely observes the succession of ‘teaching slots’, and then conducts a joint feedback session with the trainee teachers either immediately afterwards, or on a subsequent day.

The more I do this, the more uncomfortable I feel with the process on at least two counts. One I’ll call logistical, and the other—for want of a better term—I’ll call existential.

First: the logistics. The trainer’s role, as silent, impassive observer, noting every move, and delivering the feedback retrospectively, seems to run counter to what we now understand about skill acquisition. Cognitive learning theory has long recognised that feedback in ‘real operating conditions’—i.e. while you’re actually engaged in a task—is generally more powerful and more durable than feedback delivered after the event. More recently, a sociocultural perspective argues that skills are best learned through ‘assisted performance’, where the expert and the novice work collaboratively on a task, the former modelling and scaffolding the necessary sub-skills, and mediating the activity by means of well-placed interventions, such as commands, gestures, or gaze. In this way, and assuming an optimal state of readiness (aka the zone of proximal development), novices begin to appropriate the necessary skills, until they are capable of regulating them independently.

All this would seem to argue against the traditional practicum structure, with the trainer detached from the activity, and the feedback delivered ‘cold’. In fact, I’m finding that, on my present course, the sessions in which we ‘workshop’ lessons as a group in a micro-teaching format, with the trainees teaching their colleagues and me intervening as they do so, are both less stressful for the trainees and (I think) more productive in terms of their developmental outcomes. Here is an example of what I mean: a group has prepared a presentation of used to, and one of the team has volunteered to demonstrate it to the class.


Of course, micro-teaching lacks the authenticity of real classrooms, so the next step might involve taking a more interventionist role during the actual teaching practice, for example by team-teaching, or by ‘coaching from the sidelines’. In fact, I did this last week, gesticulating like a football coach in order to prompt the trainee who was teaching at the time to stop what he was doing and pre-teach a question form, in advance of the milling activity that he was about to launch into. He got the hint, took the necessary steps, and the activity—I think—was all the better for it.

And now for the ‘existential’ problem, which goes much deeper. Sitting at the back of the room, or even intervening from the sidelines, I can’t help wondering what my role really is here. All these teachers I’m watching are so different, in terms of style, personality, experience, professional needs and aspirations, teaching contexts, and so on. And yet I get the sense I am trying to shoehorn them into a way of teaching that is very much ‘one-size-fits-all’.

Thinking back, I realise, uncomfortably, that, over the years that I have been working with teachers-in-training, my intentions as a trainer have always been more prescriptive than I would have admitted at the time. Initially, as a fairly inexperienced Director of Studies, these intentions took the form of wanting to turn my newly-trained teachers into clones of myself: “Do it like this (because this is the way I do it)”. Then, as a CELTA trainer, it was all about getting the trainees to teach in the way that the ‘method’ dictated. Of course, we used to deny that there was a ‘CELTA method’. It was all about eclecticism, surely. Looking back, I now realise that, if the CELTA course offered a range of methodological choices, this range was in fact fairly limited. Or even, very limited, given the way that a small set of global coursebooks determined (and still determine) the prevailing approach.

When I became an in-service trainer, working on DELTA courses, I paid lip-service to the notion that it was professional teacher development that should drive the agenda, and hence encouraged my trainees to look beyond the narrow confines of their CELTA ‘method’, to experiment, to reflect, and to adapt their teaching to their specific contexts. This, of course, ignored the fact that DELTA is an externally examined course, with a very clearly specified syllabus and success criteria – and, moreover, that the teachers are still using (and therefore are still constrained by) the same coursebooks.

Now, as I sit and watch and take notes I realise at least two things:

1. Whatever I say and do, these teachers will change only to the extent that their own beliefs, values, self-image, personality, previous experience etc will allow them; and

2. Whatever change that they do make, they will likely revert to their ‘default’ setting as soon as my back is turned. The teacher who is the entertainer, or the lecturer, or the football coach, or the social worker, will always be the entertainer, lecturer, football coach, etc.

Hence, all I can hope to do is help them become the best (= most effective, but also the most fulfilled) teacher that they themselves can possibly be – irrespective of how I myself teach, or whatever method is the flavour of the month, or whatever materials they happen to be using, or whatever context they happen to be teaching in.

And how do I do this? Probably not by sitting at the back of the room and taking notes.

A core tenet of the Dogme philosophy is that classroom learning should be ‘conversation-driven’, and that, out of the language that emerges from this conversation, language learning episodes can be co-constructed.

‘Conversation is the word we really need to define. Jack Richards wrote that interactions fall into three basic categories: small talk, transactions and performances. For me, small talk is the most important because it constitutes a difficult social skill that is often the least practised among learners. But small talk is too mundane to base a whole class around, hence the need for materials. Dogme is in danger of becoming like the people who tweet about what they had for lunch: pleasant, but not very inspiring, especially in a learning context. Surely we can do better by giving more time to transactions and performances, i.e. speech acts rather than coffee chats.’

This is a fair criticism, especially if we construe conversation as being synonymous with ‘chat’, which by definition is largely interpersonal in terms of its function, and local – even trivial – in terms of its field. If learning opportunities are based solely upon this fairly restricted register, it’s unlikely that most students will find their communicative needs are satisfied – especially if these needs include more formal registers, such as academic or technical writing. (At the same time, it’s worth noting that even written registers are becoming increasingly ‘conversationalized’, especially since the advent of digital media).

So, effectively, a classroom conversation needs to be more than chat. And it also needs to be more than the teacher-led question-answer sequences that characterised Direct Method courses, and which are so easily ridiculed: How many fingers do I have? Do I have a nose on my face? Is this your neck? etc.

Causeries avec mes élèves

Incidentally, one of the prototypical Direct Method courses was called Causeries avec mes élèves [Conversations with my students, 1874]. Its author, Lambert Sauveur, describes the first lesson: “It is a conversation during two hours in the French language with twenty persons who know nothing of this language. After five minutes only, I am carrying on a dialogue with them, and this dialogue does not cease.”

While this dialogue might, in many ways, not have resembled naturally-occurring conversation (the first five lessons of his course dealt with parts of the body), one principle that Sauveur rated highly was coherence, his intention being “to connect scrupulously the questions in such a manner that one may give rise to another”. As Howatt (1984) comments: “This principle probably explains his success in communicating with his students better than anything else. They understood what he was talking about because they were able to predict the course of the conversation” (p. 201).

Conversation is predictable, because one turn follows from the other. At the same time, because it is locally assembled, and takes place in real time, it is unpredictable. This tension between the predictable and the unpredictable makes conversation – real conversation – an ideal medium for instruction. As Leo van Lier (1996) argues, “learning takes place when the new is embedded in the familiar, so that risks and security are in balance… Conversational interaction naturally links the known to the new. It creates its own expectancies and its own context, and offers choices to the participants. In a conversation, we must continually make decisions on the basis of what other people mean. We therefore have to listen very carefully… and we also have to take great care in constructing our contributions so that we can be understood” (p. 171).

At the same time, for such conversations to provide a site for learning, there need to be strategic interventions on the part of the teacher – interventions that distinguish normal conversation between peers from what has been called ‘instructional conversation’ (Tharp and Gallimore, 1988):

The task of schooling can be seen as one of creating and supporting instructional conversations… The concept itself contains a paradox: “Instruction” and “conversation” appear contrary, the one implying authority and planning, the other equality and responsiveness. The task of teaching is to resolve this paradox. To most truly teach, one must converse; to truly converse is to teach (p. 111).

The notion of instructional conversation has been further developed by scholars such as Neil Mercer (1995), who writes of the ‘long conversation’ that constitutes the dialogic curriculum, and Gordon Wells (1999), who calls it ‘dialogic enquiry’. In fact, dialogue may be a better term than conversation, not least because it echoes Paulo Freire’s insistence on putting dialogue at the heart of pedagogy: “Whoever enters into dialogue does so with someone about something; and that something ought to constitute the new content of our proposed education” (1993, p. 46). In a similar manner, Sylvia Ashton-Warner yielded to – and exploited – the hubbub in her infant classroom: “I harness the communication, since I can’t control it, and base my method on it” (1963, p. 104).


So, really, conversation stands for all the talk, the dialogue, the communication (both spoken and written) that is generated by the people in the room, and that is shaped, scaffolded, supported and signposted by the teacher. It could take the form of formal debates, individual presentations, small group tasks, or a plenary discussion. It could be mediated by means of an online chat function, or Twitter, or SMS messages, or pieces of paper that are traded back and forth across the class. In the end it is simply the ‘stuff’ (to use Ashton-Warner’s phrase) out of which learning episodes are moulded. In its most basic, common-or-garden form it is simply conversation – the most natural form of communication we know.


And, as Gordon Wells concedes, “conversation may not be perfect as a means of information exchange… but when engaged in collaboratively, it can be an effective medium for learning and teaching. In any case, since there is no better alternative, we must do the best we can” (1987, p. 218).

A recent item on the BBC website (Reading test for six-year-olds to include non-words) reminds us that the debate about phonics continues to polarise educationalists and the public alike. The fact that a government-mandated reading test for six-year-olds is to include nonsense words, like ‘koob’ and ‘zort’, which the children are required to sound out, has incensed advocates of a more meaning- and context-driven approach to developing first language literacy: “It’s just bonkers!” The very mention of phonics is guaranteed to elicit this kind of knee-jerk reaction in some quarters.

Just to remind you, phonics (to quote the entry from An A-Z of ELT)

is an approach to the teaching of first language reading that is based on the principle of identifying sound-letter relationships, and using this knowledge to ‘sound out’ unfamiliar words when reading.

The analytic, bottom-up phonics approach contrasts with a more holistic, top-down approach to developing literacy skills that is called (in the US at least) whole language learning. Whole language learning is premised on the belief that, “in the development of both speech and writing, children begin with a whole and only later develop an understanding of the constituent parts… Parts are harder to learn than wholes because they are more abstract. We need the whole to provide a context for the parts” (Freeman & Freeman, 1998, p. 65).

Because so much is at stake (i.e. first language literacy, and hence access to all the ‘cultural capital’ that goes with being able to read and write) the debate between advocates of phonics, on the one hand, and of whole language learning, on the other, has become iconic – representing as it does the war between the traditionalists (‘teach the facts’) and the progressivists (‘nurture the child’). The former claim that there can be no learning without knowledge of the system (i.e. the rules), while the latter claim that the only real learning is self-directed, socially-situated, and experiential.

Supporters of the phonics position cite research studies that suggest that the best predictors of reading ability are good phoneme-identification skills (the ability to sound out a word like c-a-t) and a knowledge of letter-sound correspondences, enabling accurate decoding of the written word. In one of a series of studies, for example, Byrne & Fielding-Barnsley (1995) found that children who had been instructed in phonemic awareness in pre-school “were superior in nonword [i.e. nonsense word] reading 2 and 3 years later and in reading comprehension at 3 years” (cited in Grabe & Stoller, 2002).

Advocates of whole language learning, on the other hand, argue that learning to read emerges out of immersion in a world of texts. “Children growing up in literate societies are surrounded by print. They begin to be aware of the functions of written language and to play at its use long before they come to school. School continues and extends this immersion in literacy…” (Goodman & Goodman, 1990, p. 225). Krashen (1999) cites a number of studies that show that what he calls ‘free voluntary reading’ “profoundly improves our reading ability, our writing ability, our spelling, our grammar, and our vocabulary” (p. 54).

Is there a compromise position? In her fascinating book Proust and the Squid (Wolf, 2008), Maryanne Wolf argues that successful decoding is contingent upon “knowing the meaning”, and that “for some children, knowledge of a word’s meaning pushes their halting decoding into the real thing”. One clue to a word’s meaning is its context, and an understanding of context requires reading skills, such as predicting and inferencing, of a more global kind than simply knowledge of sound-letter relationships. And it also assumes the existence of an already extensive and well-connected lexicon: “The more established our knowledge of a word, the more accurately and rapidly we read it” (p. 153).

Thus, successful readers are able to marshal both bottom-up (i.e. phonics) and top-down (i.e. whole language) processes more or less simultaneously, drawing on the one when the other is less reliable. Effective teaching of reading, arguably, achieves a similar balance. In the Reading Recovery approach, as pioneered by Marie M. Clay, the child’s reading aloud is supported and scaffolded by the teacher, allowing either a bottom-up or a top-down focus, as appropriate. As Clay & Cazden (1992) observe:

This program should be differentiated from both ‘whole language’ and ‘phonics.’ It differs from most whole language programs in recognising the need for temporary instructional detours in which the child’s attention is called to particular cues available in speech or print. It differs from phonics in conceptualising phonological awareness as an outcome of reading and writing rather than as their prerequisite (pp. 129-130).

How does all this relate to second language learning? As I point out in An A-Z of ELT, “the phonics debate is less of an issue [for us] since most adult second language learners are already literate”. Nevertheless, the more fundamental argument – as to whether the parts should be taught in advance of the whole, or vice versa – is just as relevant to language teaching as it is to literacy learning, and just as capable of inflaming similar passions.

References:

Clay, M., & Cazden, C. (1992). A Vygotskian interpretation of Reading Recovery. In Cazden, C., Whole Language Plus: Essays on Literacy in the US and NZ. New York: Teachers College Press.

A: Es para hablar del futuro, como ‘yo voy a ayudar a mis amigos’. (It’s to talk about the future, as in [in Spanish] ‘I’m going to help my friends’).

These girls were in their mid-teens, I guessed, and had probably been doing three or four years of English already – three or four years learning, and attempting to apply (but with such conspicuous lack of success) some of the most basic rules of English grammar. Which led me to wonder, what earthly good had these rules done them? And, more radically, what earthly good are rules at all?

I’m not, of course, disputing the fact that language consists of certain patterns and regularities. I’m simply sceptical of the value of teaching these regularities in the form of explicit rules. Especially when the rules have so little obvious utility. As Chris Brumfit (2001) wrote, “it is common to believe that teaching the descriptive rules is to teach the means of generating the behaviour itself” (p. 29). Clearly, this was not happening to the girls on the bus.

And it’s not just schoolgirls who find grammar rules hard to get their heads around. Some of the best minds in the business are ‘grammatically challenged’. Take, for instance, the eminent linguist Dick Schmidt, who recorded this classroom experience when learning Portuguese in Brazil:

The class started off with a discussion of the imperfect vs. perfect, with C [the teacher] eliciting rules from the class. She ended up with more than a dozen rules on the board — which I am never going to remember when I need them. I’m just going to think of it as background and foreground and hope that I can get a feel for the rest of it (Schmidt & Frota, 1986, p. 258).

Which he did – by heading out into the street and trying it on with the locals. The fact that some learners, at least, dispense with rules should give us pause. After all, if we take the view that, as Ellis (2007) puts it, “language is not a collection of rules and target forms to be acquired, but rather a by-product of communicative processes” (p. 23), then surely communication is the name of the game.

But what about accuracy? The argument that – without knowledge of rules – accuracy will suffer doesn’t hold much water either. As J. Hulstijn (1995) remarks, “It is perfectly well possible to focus learners’ attention on grammatical correctness without explicitly teaching grammar” (p.383). That is, after all, the function of feedback and correction.

And yet part of me can’t entirely dismiss the value of rules – or of some rules, at least – if for no other reason than for their mnemonic value, like the mantra-like spelling rules we learn as children and still invoke as adults: “i before e, except after c”. In support of this view, cognitive scientists have studied the role that such memorised rules play in ‘self-scaffolding’ learned routines, the frequent practice of which “enables the agent to develop genuine expertise and to dispense with the rehearsal of the helpful mantra” (Clark, 2011, p. 48).

Moreover, taking a socio-cultural perspective, might not grammar rules serve as a kind of symbolic tool, providing learners the means to regulate their own performance – a form of ‘private speech’, as it were?

Indeed, Lantolf & Thorne (2006), acknowledging the importance that Vygotsky himself credited “to well-articulated explicit knowledge as the object of instruction and learning” (p. 291), describe a number of studies of second language learners for whom self-verbalization of quite sophisticated grammatical concepts seemed to assist in their subsequent internalization.

If this is the case, were my three schoolgirl companions – immersed in the process of jointly constructing knowledge out of explicit rules of grammar – on the right track, even if a long way from their desired destination?

Schmidt, R., & Frota, S. (1986). Developing basic conversational ability in a second language: A case study of an adult learner. In R. Day (Ed.), Talking to learn: Conversation in a second language. Rowley, MA: Newbury House.

There’s no entry for Z in the A-Z of ELT (which means perhaps it should be called the A to Y of ELT!) but if there were, the strongest candidate would have to be ZPD as in the zone of proximal development. This is the concept most closely identified with the work of the Russian developmental psychologist Lev Vygotsky, but also, arguably, the concept of his that has been subject to the greatest number of interpretations.

Vygotsky himself defined it as:

“the distance between the actual developmental level as determined by independent problem-solving and the level of potential development as determined through problem-solving under adult guidance or in collaboration with more capable peers” (1978, p. 86).

That is to say, it’s that point where learning is still other-regulated, but where the potential for self-regulation is imminent – the moment that the child, teetering on her bike, still needs the steadying touch of her mother’s hand. Teaching is optimally effective, the theory goes, when it “awakens and rouses into life those functions which are in the stage of maturing, which lie in the zone of proximal development” (Vygotsky, 1934, quoted in Wertsch 1985, p. 71).

It’s important to note that the ZPD is not the learner’s ‘level’ in the traditional sense in which we grade students, nor even the level just above, but that, as Gordon Wells puts it, it is “created in the interaction between the student and the co-participants in an activity… and depends on the nature and quality of the interaction as much as on the upper limit of the learner’s capability” (Wells, 1999, p. 318). Because the ZPD cannot be gauged in advance, and is a property neither of the learner nor of the interaction alone, “from the teacher’s perspective, … one is always aiming at a moving target” (op. cit., p. 319).

These elusive, emergent, unpredictable, and idiosyncratic properties of the ZPD raise the question as to whether it has any pedagogical applications at all. If it’s not the student’s level (or level + 1), what is it? And how can it be manipulated for optimal learning?

Scholars in the sociocultural tradition have suggested that the way classroom talk is scaffolded (see S is for scaffolding), with the teacher providing only the minimal assistance necessary to enable the learner’s performance, can help orient the activity towards the learner’s ZPD and thereby influence its potential for learning. Optimal experience theorists (see F is for Flow) would also argue that the ZPD is situated at the point where challenge and skill are counter-balanced. Advocates of task-based learning likewise suggest that the judicious calibration of task conditions, such as preparation time and rehearsal, can provide the optimal balance between safety and risk-taking that is associated with the concept of the ZPD, and thereby lead to learning.

Jim Lantolf's workshop: JALT 2009

Others have tried to map the ZPD onto Krashen’s concept of input + 1 and Swain’s analogous concept of output + 1 (see P is for Push). When, during an engaging question-and-answer session at last year’s JALT conference, I asked Jim Lantolf (who, more than anyone, has championed Vygotsky’s relevance to SLA: see Lantolf, 2000, for example) if there were any grounds for making this connection, he was dismissive. “For a start, input + 1 and output + 1 describe qualities of language, not of cognition. Nor do they situate this language within the context of collaborative, interactive activity”. (In fact, Krashen’s Input Hypothesis rejects the need for interaction altogether). Kinginger (2002) is even more scathing, and argues that Vygotsky’s original concept – fuzzy as it was – has been shamelessly co-opted for ideological purposes, as a way of prettifying activities “that have always been done in classrooms where speaking activity takes place as a pretext for grammar practice, only now we are calling it the ‘ZPD’” (p. 255).

Despite all this fuzziness, the notion of the ZPD permeates current rhetoric on teaching. Is it just a fairly meaningless buzz word, or does it still have some currency?