One of the most robust constructs in SLA research is “Interlanguage Development”. As Doughty and Long (2003) put it:

There is strong evidence for various kinds of developmental sequences and stages in interlanguage development, such as the well known four-stage sequence for ESL negation (Pica, 1983; Schumann, 1979), the six-stage sequence for English relative clauses (Doughty, 1991; Eckman, Bell, & Nelson, 1988; Gass, 1982), and sequences in many other grammatical domains in a variety of L2s (Johnston, 1985, 1997). The sequences are impervious to instruction, in the sense that it is impossible to alter stage order or to make learners skip stages altogether (e.g., R. Ellis, 1989; Lightbown, 1983). Acquisition sequences do not reflect instructional sequences, and teachability is constrained by learnability (Pienemann, 1984).

If SLA involves ordered development and this ordered development is impervious to instruction, what’s going on in learners’ minds? VanPatten (2017) reviews some of the explanations on offer. Chomsky (1957) suggests that language is special and involves a language-specific module for learning. Krashen (1981) closely follows with his “Natural order” hypothesis and the distinction between acquisition and learning. Other approaches look to the processing aspects of language, including Processability Theory (e.g. Pienemann, 1998); processing constraints more generally (e.g. O’Grady, 2015); and general learning mechanisms that are not related to language. These types of approaches include emergentism and usage-based approaches (e.g. N. Ellis and Wulff, 2015) as well as dynamic systems (e.g. Larsen-Freeman, 2015). (See VanPatten, 2017 for all these references.)

Probably the most important issue in the various accounts of language learning concerns the roles of implicit and explicit learning. With regard to this fundamental question, despite important disagreements among those adopting generativist, processing, or usage-based approaches to L2 learning, and despite the related differences in interface positions (strong/weak/no) on explicit and implicit knowledge, there is widespread agreement that, as Long (2017) puts it, “the relevant goal for instruction is implicit learning, resulting in implicit L2 knowledge”. Long continues:

In an article on classroom research, Whong, Gil, and Marsden (2014) noted that while generativists and general cognitivists disagree over the viability of inductive learning as a substitute for innate linguistic knowledge, both camps consider implicit learning more basic and more important than explicit learning, and superior. This is because access to implicit knowledge is automatic and fast, and is what underlies listening comprehension, spontaneous speech, and fluency. It is the result of deeper processing and is more durable as a result, and it obviates the need for explicit knowledge, freeing up attentional resources for a speaker to focus on message content. … Whong, Gil, and Marsden (2014) conclude:

“In sum, we argue that the distinction between implicit and explicit knowledge needs to be more robustly recognized in research design, and suggest that implicit knowledge should be the target of research, regardless of theoretical premise” (2014:557).

Long agrees, as do Nick Ellis (2005), Robinson (1996), Williams (2005), Rebuschat (2008), Rastelli (2014), and most of those cited above. All the research indicates that learning an L2 is not best facilitated by presenting and practising bits and pieces of language according to criteria such as “difficulty” or “frequency of occurrence”, but rather by developing the ability to make meaning in the L2 through exposure to comprehensible input, participation in discourse, and implicit or explicit feedback. At the same time, we should note that the evidence clearly indicates a role for explicit instruction, even if it’s a much smaller role than implicit instruction. To cite Long (2017) again, he points to the case of Julie (Ioup et al. 1994), whose achievement of near-native L2 proficiency through long-term residence without the aid of any instruction at all suggests to him that most of an L2 can be acquired implicitly given sufficient time and high enough aptitude for implicit language learning. But, as he concludes, “the problem for Instructed Second Language Acquisition is that vanishingly few L2 learners have both”. Neither explicit nor implicit learning alone can get those who sign up for any kind of L2 learning course to their goals, but if the goal is a functional command of the L2, then implicit learning should be the default and should take up most of the time.

Walkley’s Reply

What, then, of Walkley’s reply to my claim that most coursebooks implement a synthetic syllabus based on three false assumptions about SLA? With regard to Assumption 1 (declarative knowledge converts to procedural knowledge through a process of presentation and practice), he responds by saying that “passing on knowledge is a starting point, not a final destination”. He says:

There is no doubt that knowing that the past tense of “has” is “had” doesn’t mean that with a bit of classroom practice you can use “had” fluently and correctly in real-time communication. Of course the job is not finished when irregular past tenses are presented and practised in class. But that’s true of all types of classroom teaching. Would the job be finished if the past tense of “had” was required during a task and ‘emerged’ or was ‘noticed’ or ‘recast’ or whatever? Of course not! Students will continue to make mistakes because whether this information is passed on to students through a task or a ‘bit of classroom practice’, the information still remains essentially declarative at that point.

What we know about instructed SLA (ISLA) indicates that most of the knowledge “passed on” by explicit language instruction doesn’t become implicit knowledge and is of little use in achieving a functional use of the L2. If a synthetic syllabus devotes most of classroom time to a focus on the L2 as object, the learners will get too little opportunity to gain implicit knowledge and to drive forward interlanguage development. Walkley assumes that all types of classroom teaching involve passing on explicit knowledge about language as a necessary “first step”, and that spending a lot of time passing on explicit knowledge about language is common to all teaching approaches. He’s wrong on both counts.

Next, Walkley takes my remark that SLA is more like learning to swim than learning geography, and says:

Jordan’s analogy with swimming is surprisingly helpful here, though perhaps not for reasons he realises. During most swimming lessons above absolute beginner levels, nearly all instructors work outside the water! They are telling students what to do (declarative knowledge) and getting students to proceduralise this – often through rather synthetic (and to my mind rather monotonous) tasks such as ‘practise breathing out underwater’ or ‘swim with your legs only’. ….

The lesson for teachers and learners is that you have to make use of the language you have ‘learned’ – and do so repeatedly over time. Certainly, a coursebook should give opportunities to make use of language and should ensure proper recycling over time, but it may not. A task-based syllabus should certainly provide opportunities to make use of language, but it may not be the same language over time. Although, of course, it might be! It all depends.

Again, Walkley assumes that all ISLA, no matter whether a synthetic or analytic syllabus is being implemented, and no matter what pedagogical principles inform the teacher, consists of teachers explicitly teaching students about the L2 (passing on knowledge) and then helping them practise it. Again, he’s wrong. It is one thing to use a coursebook to implement a synthetic syllabus: this implies concentrating on explicit knowledge and spending most of the time treating the L2 as an object of study. It is another, different, and I suggest better, thing to use a needs analysis and a means analysis to identify target tasks from which pedagogical tasks are designed, and then to make the pedagogical tasks the basis for an analytic syllabus, where the time is spent developing the ability to function successfully in the L2 in well-defined areas. This implies concentrating on implicit knowledge, coupled with some explicit feedback. I should add that the Dogme approach also rejects coursebooks, rejects synthetic syllabuses, and concentrates on developing the ability to make meaning in the L2 through a predominantly implicit learning process.

In a section titled “When do you move on to a new ‘teaching’?”, Walkley addresses the second false assumption made by coursebook writers, namely that SLA is a process of mastering, one by one, accumulating structural items. (The assumption is false because all the items are inextricably inter-related.) He writes:

…while I basically agree that SLA is not a process of mastering, one by one, accumulated structural items as in some kind of building block process, you could argue that the next point that ‘Teaching affects the rate, but not the route of SLA’ is slightly contradictory. It seems clear that language learners move from more or less ungrammaticalised words to grammaring the words they know in progressively more complex ways: this is the route. So in the case of questions, students at the lowest levels will generally start by just using a word, maybe with intonation or gesture – coffee?; then move to a string of words – you want coffee?; to grammaticalised strings – Do you want a coffee?; to more complex sentences – Are you sure you want a coffee? If you were a mad person and did these as consecutive lessons, your students would not be producing all these different question types.

The route he describes is not the route described by those who have studied the development of interlanguages (see Cazden, Cancino, Rosansky and Schumann (1975) for the sequence of interrogative forms), but even if it were, how would this challenge the claim that teaching affects the rate, but not the route of SLA? Walkley is yet again assuming that ISLA is characterised by teachers passing on explicit knowledge and then practising it. From this “idée fixe”, he argues that it would be mad to “do” the different stages of question formation that he describes in consecutive lessons. He’s right, of course, but nobody in the literature suggested such a thing in the first place; that is, nobody suggested speeding up the rate of interlanguage development by “doing” (i.e. explicitly teaching) bits of the L2 “in the right order”. Note the question at the start: “When do you move on to a new ‘teaching’?” It is, surely, the wrong question, and one that explains Walkley’s misunderstanding about the conclusions drawn from interlanguage research. Pace Walkley, ISLA is not best seen as addressing the question of when to “pass on” which new bit of knowledge.

Nevertheless, that’s the way Walkley sees it, and if we share his (in my opinion, blinkered) view then there really isn’t much difference between using a coursebook and using a TBLT or Dogme approach, because they all involve the same basic thing: the teacher presents and practises bits of the target language. This approach to ELT is perfectly evident in Dellar and Walkley’s book Teaching Lexically (see here for a full review).

Teaching Lexically is very teacher-centred. There’s no suggestion anywhere of including students in decisions affecting what and how things are to be learned: teachers make all the decisions. The teacher decides the mainly lexical “items” to be taught, the sequence of presentation of these “items”, plus how they are to be recycled and revised.

There’s an almost obsessive concentration on teaching as many lexical chunks as possible. The need to teach as much vocabulary as possible pervades the book. The chapters in Part B on teaching speaking, reading, listening and writing are driven by the same over-arching aim: look for new ways to teach more lexis, or to re-introduce lexis that has already been presented.

The book promotes the view that education is primarily concerned with the transmission of information. In doing so, it runs counter to the principles of learner-centred teaching, as argued by educators such as John Dewey, Sébastien Faure, Paulo Freire, Ivan Illich, and Paul Goodman, and supported in the ELT field by educators such as Chris Candlin, Catherine Doughty, Carol Chapelle, Graham Crookes, Rebecca Brent, Earl Stevick, John Fanselow, Vivian Cook, Sue Sheerin, Alan Maley and Mike Long. All these educators reject the view of education as the transmission of information, and, instead, see the student as a learner whose needs and opinions have to be continuously taken into account. For just one opinion, see Weimer (2002), who argues for the need to bring about changes in the balance of power; changes in the function of course content; changes in the role of the teacher; changes in who is responsible for learning; and changes in the purpose and process of evaluation.

Teaching Lexically involves dividing the language into items, presenting them to learners via various types of carefully-selected texts, and practising them intensively, using pattern drills, exercises and all the other means outlined in the book, including comprehension checks, error corrections and so on, before moving on to the next set of items. As such, it mostly replicates the grammar-based PPP method it so stridently criticises. Furthermore, it sees translation into the L1 as the best way of dealing with meaning, because it wants to get quickly on to the most important part of the process, namely memorising bits of lexis with their collocates and even co-text. Compare this to an approach that sees the negotiation of meaning as a key aspect of language teaching: one where the lesson is conducted almost entirely in English and the L1 is used sparingly, where students have chosen for themselves some of the topics that they deal with, where they contribute some of their own texts, where most of classroom time is given over to activities in which the language is used communicatively and spontaneously, and where the teacher reacts to linguistic problems as they arise, thus respecting the learners’ ‘internal syllabus’.

Teaching Lexically sees explicit learning and explicit teaching as paramount, and it assumes that explicit knowledge can be converted into implicit knowledge through practice. These assumptions, like the assumptions that students will learn what they’re taught in the order they’re taught it, clash with SLA research findings. To assume, as Dellar and Walkley do, that the best way to teach English as an L2 is to devote the majority of classroom time to the explicit teaching and practice of pre-selected bits of the language is to fly in the face of SLA research.

As Long (2017) argues:

“the direct effects of instruction are limited to manipulations of the linguistic environment, with only indirect effects on learning processes. The learner’s use of this or that cognitive process can be intended by the instructional designer, but cannot be stipulated or guaranteed. For example, explicit instruction is designed to invoke intentional learning – a conscious operation in which the learner attends to aspects of a stimulus array in the search for underlying patterns or structure. Intentional learning usually results in explicit knowledge: people know something, and know they know. But students may learn some things incidentally and implicitly from the input used to deliver the explicit instruction”.

On the other hand,

“instruction can be designed to create optimal conditions for incidental learning, but that does not guarantee that incidental learning will transpire, or that if it does, the result will be implicit learning, or if it is, that implicit knowledge will be the end-product, or if it is, that it will remain implicit only”.

Long concludes that if we take the view that most students want teachers to help them to be able to use the L2 for communication, then the primary goal of teaching must be to develop implicit knowledge. Research findings on interlanguage development undermine the credibility and viability of explicit language teaching, synthetic approaches, and PPP.

Since purely incidental learning is impractical, due to the amount of input required and the length of time needed to deliver it, and in the interest of identifying the least interventionist, but still effective, forms of instruction, it follows that a major focus of ISLA research (not the only focus, but a major one) should now be on even less intrusive enhancements of incidental learning, rather than on focus on form.

27 thoughts on “A Reply to Walkley’s Defence of Coursebooks”

Enjoyed reading this, as, after several months out of the classroom, I’m now able to try out the approaches you advocate, instead of being shackled to a synthetic syllabus. The first thing I’ve noticed is that students struggle less and produce more decent language, as they’re not pouring their resources into trying to master one McNugget of grammar.

One of the difficulties, for me, is reconciling interlanguage with pushed output. If a learner produces a form, or a structure, I may try to get, from them, an alternative (e.g. a different syntactic form with the same meaning, or a similar meaning with a different emotive impact). In cases where I need to guide the student, e.g. through information gap, Socratic questioning, or more basic methods such as clozes and anagrams, am I actually teaching them anything, or am I simply depending on the interlanguage?

to butt in here, from say a generative view interlanguage is the result of input being acted on by internal systems in the mind; so when you are guiding students that is another form of input that is being acted on by the student; so in one way yes you are teaching someone if you define teaching as providing appropriate input
ta
mura

Yes, I think that’s right. The question is, of course, what form does the “guidance” take? As I said in my reply to Robert, Long seems to be moving even further away from explicit “guidance” than his present “focus on form” view suggests.

When we get right down to moment-to-moment classroom practice, there are so many local factors in play that it’s up to the teachers to decide what to do and it’s hard to say what different interventions achieve. In his latest paper, Long says:

“There is growing interest internationally in various types of programs, such as immersion, TBLT, and CLIL, in which the principal focus is the non-linguistic syllabus, with the L2 in theory learned incidentally through being used communicatively as the medium, not the object, of instruction. Explicit instruction and a focus on language as object may turn out to be more efficient for some language-learning details, but like massive doses of chemotherapy, it disrupts the main focus of such programs, diverting teachers’ and students’ attention away from crucial non-linguistic syllabus content. Focus on form, with its temporary brief switches to intentional language learning during otherwise communicative lessons, is a major improvement in this regard. If they work, less intrusively enhanced incidental learning and detection would be better still.”

Right at the end, Long says:

“A major goal of ISLA research is to free instruction of unnecessary artificial aids, unless they turn out to be either absolutely necessary or an improvement, e.g. because they produce implicit knowledge faster. The aim is to identify the least intrusive, but still efficient, means of achieving the same instructional goals, thereby protecting the integrity of non-linguistic syllabus content. Instead of proactive explicit instruction, intentional learning and noticing, and instead, even, of reactive focus on form and intentional learning, can the same results be achieved by less intrusive forms of enhanced incidental learning and detection? There are three possible conditions to be compared:

Due to the unwanted side effects of Condition 1, the most interesting comparisons are those between Conditions 2 and 3. It is answers to these questions that will ultimately settle many of the long-standing debates in language teaching. Whatever the outcomes, they will have major implications for SLA theory and for instructional design.”

hi Geoff
wanted to pick up on that definition of explicit knowledge (EK) shown in the graphic – where EK is knowledge which “can be drawn on when there is time for controlled processing” and implicit knowledge (IK) is “automatized knowledge”

first, does this overlook the main difference, which is awareness/consciousness of knowledge? and are time-dependent differences less important?

for example DeKeyser & co have the construct of (no, partial or full) automatized explicit knowledge, which is rapid to access and which learners are consciously aware of, vs implicit knowledge, which is fast but unconscious?
ta
mura

forgot to add ref for that automatized EK construct The Interface of Explicit and Implicit Knowledge in a Second Language: Insights From Individual Differences in Cognitive Aptitudes [http://onlinelibrary.wiley.com/doi/10.1111/lang.12241/abstract]

I didn’t look at the graphic carefully enough! First, we need to distinguish between explicit / implicit knowledge, learning, and instruction. Then we need to see the different positions that different scholars take on them, though in SLA research, it’s the differences between views on explicit / implicit knowledge and learning that are most important. There are, as you know, strong interface, weak interface and no interface positions taken on the interaction between the two. When it comes to explicit / implicit knowledge, as you say there are two components: awareness and time. Awareness is the usual criterion for explicitness, and can be measured by things like verbal report. Whereas explicit knowledge is seen to guide intentional actions, implicit knowledge is deployed automatically. How you identify automaticity is an issue, but a speed diagnostic is often used, since it relates to fluency, and fluency is seen as the result of implicit knowledge. I think Williams, J.N. (2009) Implicit Learning. In Ritchie & Bhatia (eds) The New Handbook of SLA. Bingley, Emerald Group Publishers offers a good discussion of all this.

While I’m at it, you and I have already commented on the VanPatten (2017) paper, where he criticises Schmidt’s Noticing Hypothesis. “The Noticing Hypothesis is agnostic, at best, on the nature of language, ignores the ordered development of language acquisition, and ignores the nature of internal constraints on acquisition” (p. 54). Let’s see what responses there are to this, but meanwhile we can note that, as I say in Jordan (2004)

“Schmidt examines three senses of the term “consciousness”: consciousness as awareness, consciousness as intention, and consciousness as knowledge.

Consciousness and awareness are often equated, but Schmidt distinguishes between three levels: Perception, Noticing and Understanding. The second level, Noticing, is the key to Schmidt’s eventual hypothesis.

Noticing is focal awareness.

“When reading, for example, we are normally aware of (notice) the content of what we are reading, rather than the syntactic peculiarities of the writer’s style, the style of type in which the text is set, music playing on a radio in the next room, or background noise outside a window. However, we still perceive these competing stimuli and may pay attention to them if we choose” (Schmidt, 1990: 132).

Noticing refers to a private experience, but it can be operationally defined as availability for verbal report, and

“When problems of memory and metalanguage can be avoided, verbal reports can be used to both verify and falsify claims concerning the role of noticing in cognition” (Schmidt, 1990: 132).

Consciousness as intention is used to distinguish between awareness and intention. “He did it consciously”, in this second sense, means “He did it intentionally.”

The third sense of the term – consciousness as knowledge – is the one that often causes problems in attempts to explain the SLA process. Schmidt cites White (1982) who argued that “experiential consciousness and knowledge are not at all the same thing”, and warned that

“the contrast between conscious and unconscious knowledge is conceptually unclear when different authors are compared, since the ambiguities are combined with those of knowledge, equally difficult in psychological terms” (Schmidt, 1990: 133).

Schmidt comments:

“It is unfortunate that most discussion of the role of consciousness in language has focused on distinctions between conscious and unconscious knowledge, because the confusion warned against by White is apparent” (Schmidt, 1990:133).

Schmidt suggests that the ambiguities of “conscious” and “unconscious” can be tackled by recognising that it refers not to a single question but to six different contrasts:

1. Unconscious learning refers to unawareness of having learned something.
2. Conscious learning refers to awareness at the level of noticing and unconscious learning to picking up stretches of speech without noticing them. Schmidt calls this the “subliminal” learning question: is it possible to learn aspects of a second language that are not consciously noticed?
3. Conscious learning refers to intention and effort. This is the incidental learning question: if noticing is required, must learners consciously pay attention?
4. Conscious learning is understanding principles of the language, and unconscious learning is the induction of such principles. This is the implicit learning question: can second language learners acquire rules without any conscious understanding of them?
5. Conscious learning is a deliberate plan involving study and other intentional learning strategies, unconscious learning is an unintended by-product of communicative interaction.
6. Conscious learning allows the learner to say what they appear to “know”.

While, according to Schmidt, most of the literature has been concerned with the last two issues, Schmidt considers the issues of subliminal, incidental, and implicit learning more important”.

This, I think, shows that Schmidt was aware of the problems involved in talking about implicit and explicit learning, and that he went to some trouble to carefully define the construct of “Noticing”. Its use by Dellar particularly (as evidenced by all his published work in written texts, talks and presentations) shows what happens when you fail to appreciate the subtleties of Schmidt’s construct and use it in an unexamined way to justify your own “passionately held beliefs” about the best way to approach ELT.

I think VanPatten cited Truscott for arguments against the Noticing Hypothesis (NH) – I don’t think that Truscott included the charge that Schmidt had not thought about the relation of noticing to awareness/consciousness?

re implicit/explicit I think VanPatten’s position is that his generative description of language defines itself as implicit, hence many questions which invoke implicit/explicit are not relevant

he teases out the question of looking at whether +explicit processes+ are involved in processing input; he seems to equate explicit processes with +conscious strategies+ learners use to make meaning but they cannot access the (lexicon) features that are to do with formal properties of language

i think VanPatten’s critique of language learning as rule acquisition is quite helpful to describe your characterisation of Walkley’s assumption – re “that explicit knowledge can be converted into implicit knowledge through practice”; that is, Walkley’s position seems to view +learning rules+ as the content of the language knowledge?

First, you’re right: Truscott didn’t charge Schmidt with not thinking about the relation of noticing to awareness/consciousness.

In the 2017 article, VanPatten elaborates on four facts about SLA which are certainly informed by a generative perspective. He says:

“Regardless of one’s theory about language acquisition (e.g. generative, emergentist/usage-based, processability), a common underlying aspect of language acquisition is that learners build internal linguistic systems (mental representations) that are implicit. By implicit I mean they exist outside of awareness. What is more, they are abstract and complex in nature such that they bear little to no resemblance to rules as traditionally conceived. They certainly bear no resemblance to rules in textbooks (see, for example, the discussion in VanPatten and Rothman 2014).”

“…the underlying mental representation of language is constrained and pushed in certain directions by something internal to the learner.”

This has little in common with what Walkley says in reply to my criticisms of coursebooks, or with what Dellar & Walkley say in their book Teaching Lexically. While Walkley & Dellar sing from the same hymn sheet in their promotion of an approach to ELT that places particular emphasis on teaching lexical chunks, I’ve never seen either of them coherently discuss the research findings in SLA that are mentioned in this post. My reply to Walkley suggests that his view of ELT is blinkered. He sees ELT as teaching students a collection of “items” through a six-step process involving (1) understanding the meaning, (2) hearing/seeing an example in context, (3) approximating the sound, (4) paying attention to the item and noticing its features, (5) doing something with it – using it in some way, and (6) then repeating these steps over time. Where “learning rules” comes into this seems to be through “grammaring words” and a bit of old-fashioned grammar teaching now and then. Not exactly what VanPatten has in mind, I suspect.

hmm i was not being clear there, sorry.
what i was trying to say is that VanPatten highlights a pervasive assumption that language is essentially rules in the head that got there from rules in a book.

so Walkley could similarly see language as an object to be acquired via explicit teaching, objects acquired from out in the world and then internalised in the head? and that general cognitive mechanisms work on the external language data to internalise it into a connectionist store of linguistic knowledge?

maybe what i am saying also is that there could be many assumptions underlying Walkley’s view of the merits of coursebooks?

As you say, VanPatten insists that the rules in your head are nothing like the rules in the book, and that the mental representations that make up a language aren’t best seen as objects acquired through explicit instruction. Walkley and Dellar say that they rely on Hoey’s theory of lexical priming to explain SLA. To confuse things a bit, Hoey said in his IATEFL 2014 plenary that Krashen’s Monitor Model is “true”, and Walkley has never suggested, as far as I know, that the result of concentrating on teaching lexical chunks is a connectionist store of linguistic knowledge.

In my opinion, as you suggest, this should be the goal of SLA teachers:

” access to implicit knowledge is automatic and fast, and is what underlies listening comprehension, spontaneous speech, and fluency. It is the result of deeper processing and is more durable as a result, and it obviates the need for explicit knowledge, freeing up attentional resources for a speaker to focus on message content”

I’m no erudite linguist, as you and the previous commenter are, but I am a language teacher. I have been teaching ESL and Spanish to foreigners for many years now.
One of my premises has always been to move away from textbook examples and to INSIST my students come up with their OWN, REAL-LIFE examples to practise in class.
Why learn about John and Mary from London (WHO THEY KNOW ARE FICTIONAL CHARACTERS IN A BOOK) when they need to talk about THEMSELVES or people they know?
I say to them: each one needs to make their OWN textbook.

I agree strongly that students should be involved with texts that are not obviously artificial or remote from their own experiences, and that they should be encouraged to engage with these texts in genuinely communicative tasks. Encouraging students to create their own textbooks seems to me like a fine idea.

Thanks Geoff, I’m trying to encourage my readers to do exactly that!
I’m going to have to really hone my IT skills to try and produce some sort of template that could fill or fit that need.
It could work!
Thanks for replying!
Regards. Marie.

I’m personally interested in the applicability of your position to lower-proficiency non-native English teachers working in state sector schools.

I agree with your statement that a TBLT syllabus with an emphasis on tasks which involve negotiating meaning is more effective than using a coursebook with a synthetic syllabus and its far more explicit focus on form.
But I wonder if implicit instruction is really achievable for the majority of the English teachers in state schools who may have a B1-B2 level of English themselves and lack the knowledge and language awareness to identify pedagogic tasks to supplement the ministry of education syllabus which they have been given.
In these cases, how would you wean teachers off coursebooks and introduce a task based syllabus?
I’d really appreciate your comments

In such a very common context – poor proficiency levels of English among teachers and a state-imposed, grammar-based, synthetic syllabus – I think you need to chip away at the problem by helping the teachers themselves to improve their English and by organising training sessions where you encourage them to use texts and tasks that you give them, as a first step in more collaborative TT where they gradually learn to work together on their own materials and procedures. You also need to work with your bosses in order to improve pay and conditions, and to push educational change. Easily said, I know. Maybe the first practical step would be to organise a TBLT workshop for the teachers.

As regards Rob’s question above: it’s surely the case that many countries can’t afford coursebooks for their teachers – so a TBLT approach would solve two problems in one go. (Didn’t Prabhu – one of the originators of TBLT – work in India?)

Weaning teachers off coursebooks would mean a re-professionalisation of the profession overall. Practically, it would mean teachers taking more of a role in the planning of schemes of work and developing frameworks for their own contexts rather than importing them. Not impossible.

Good point! Apart from being inefficient and boring, coursebooks are expensive and they can dumb down teaching. There are good reasons to argue that doing without coursebooks would raise standards in the ELT profession, and as you say, life without coursebooks isn’t impossible – or even difficult.

I agree with this, but am seeing first hand how utterly dependent many teachers have become on coursebooks, even to the extent that some will sit for hours wondering what to do about the page on present perfect that they’re meant to be teaching next day (but the new guy just covered present perfect) rather than, you know, teach something else.

There seem to be so many threads that lead to an arduous, complicated task of weaning teachers off coursebooks and convincing schools that teachers might, given time, perform better without coursebooks.

But then the teachers are gone come summer and new ones have to be retrained.

So why invest in CPD and top quality teaching when you can just grab a book from the shelf?

Perhaps we should treat it like an addiction. A social addiction suffered by the ELT community. There’s no point getting individual teachers to ‘stop using’ coursebooks. There has to be a broad cultural shift, if you like.

You have to create an environment in which the use of coursebooks becomes senseless, or irrelevant. And that kind of project takes time because it’s about a lot more than ELT.

And that’s also part of the problem: ELT itself. Who’s allowed to make knowledge? Who’s allowed to make truth claims? Who gets to feel good?

Yes indeed – we need a broad cultural shift and an environment in which the use of coursebooks becomes senseless and irrelevant. And that’s unlikely to happen without broad economic/social/political change. But we can make a start. We can raise the issues surrounding coursebook-driven ELT, we can offer alternatives in the form of more interest in how people learn languages, better syllabuses, better assessment procedures, better conferences, better teacher training, and, of course, better worker representation and organisation so as to improve pay and conditions. That way, we at least challenge those who are allowed to make knowledge and truth claims, and maybe we even get to feel good.

It’s depressing to hear that rapid turnover of teaching staff blights your attempts to ditch coursebooks. Is that a common feature of ELT schools and institutes? The fast-increasing number of autonomous teachers gives us another “avenue” to explore: cooperatives of autonomous workers can help sustain alternative ways of doing ELT, as I’m finding out through my involvement with the SLB group. http://www.slb.coop/?lang=es

Hi there,
Actually, in some places ministries of education hand out textbooks to state-run schools for free. Gratis.
I am picking up your comment below:
“There’s no point getting individual teachers to ‘stop using’ coursebooks. There has to be a broad cultural shift, if you like. You have to create an environment in which the use of coursebooks becomes senseless, or irrelevant. And that kind of project takes time because it’s about a lot more than ELT. And that’s also part of the problem: ELT itself. Who’s allowed to make knowledge? Who’s allowed to make truth claims? Who gets to feel good?”

When you say “there has to be…” and “that kind of project…”, I wonder what “that kind” would be. And “creating an environment”… what could that be? Things should be different? I think things are the way they are because people are fond of their beliefs, and each one has his or her own. Just take the SLA debates witnessed here. The current way of truth telling and knowledge making seems to be by way of getting degrees and peer debates. I don’t think there is a better way. And if that will not do, and changes are not coming along, what else can you do? Everybody can speak his or her mind while trying to stay decent. Which should make us feel good.