Output

(I looked for an image for upset Spanish teacher and this was all I got)

Here is a comment from the SPANISH TEACHERS IN THE US page on Facebook, in which Dan brings up a classic argument between a more traditional language teacher and a C.I. practitioner.

Here is a response to a discussion about whether or not C.I. delivers better results than the textbook does:

My first question to Dan’s interlocutor– the teacher who has inherited some C.I.-taught kids who can’t conjugate saber— is, what do you mean by “conjugate?”

If we mean, can we tell the kid “conjugate the verb saber in the present indicative first-person (yo) form” and can they do it, the answer might well be no. This is because consciously knowing

what an infinitive is

what conjugating is

what first person is

the rule

is what we would call conscious knowledge– Bill VanPatten calls it “explicit knowledge” and Krashen “Monitor awareness.” Neither of these has anything to do with the subconscious linguistic system where language is acquired, processed and stored. We can successfully use a variety of grammar “rules”– such as saying “I am” instead of “I are,” or “I enjoy running” instead of “I enjoy to run”– without knowing (or even having been taught) the rule.

As Bob Patrick says, conjugate the verb to run in the pluperfect passive third person progressive. Can you do this? Really? You mean you can’t say the race had been being run on demand?

Knowing the “rules,” and how and when and where to apply them, does not guarantee successful production of language.

As Jason Rothman (2008) writes, “Variation in language use is simply a fact of all output, native and non-native. As a result, any given linguistic performance does not always accurately represent underlying competence.”

My second question to Dan’s interlocutor is: do they do it without being asked to do it, i.e. in real-time, unrehearsed communication? If my experience of 12 years with the text is a guide, no, absolutely not, and the same holds true for writing. Kids taught with textbooks and a focus on grammar rules memorise dialogues, and they do not produce very much (never mind very much good) written language spontaneously. Here is an example of just how grammatically accurate kids taught with C.I. can be.

My third question to Dan’s interlocutor is, what cost does an obsession with perfect grammatical output carry? If Johnny’s Spanish teacher gets the kids to obsess over verb tables, that means they won’t be “practising” other grammar, or– worse– getting input. There will also be a cost to students’ enjoyment of Spanish: reading/watching good stories is way more fun than doing tedious grammar stuff, correcting one’s writing, etc. And this means that students who end up in grammar and textbook programs drop out more, as Grant Boulanger has thoroughly documented. It also means that, in the long run, students will not do as well in a textbook/grammar program as they will in a C.I. program (see Part Two of Boulanger’s work here).

My fourth question to Dan’s interlocutor is, if you put a C.I.-taught kid on the spot and get them to meaningfully communicate, can they do it well? My answer: generally– if the task is developmentally appropriate— yes, they can. We have to be realistic about what we can get done in a language class. Babies get 4,000-5,000 hours of input before they start saying single words; at age 6 (after ~14,000 hrs of input), kids are still making errors with irregular past-tense verbs in English. They are, however, communicating just fine.

My fifth question to Dan’s interlocutor: when C.I.-taught kids use sabo instead of sé, how much of a problem is that? My answer: a Mexican or a Spaniard who hears a kid say “yo no sabo dónde está el baño” is going to know exactly what the kid is trying to say. This is like a Chinese kid asking you “where bathroom?” Mandarin doesn’t have “to be” the way English does, and the Chinese kid obviously hasn’t “studied hard enough,” as a grammarian would say, but we get it. When a Mexican asks, “did he went to the bathroom?,” we understand just fine.

Finally, I’d point one thing out to Dan’s interlocutor: When Johnny gets to Spain or Bolivia, he is going to hear more– and better– Spanish in 6 days than he will in class in one year. Input will ramp up so much that Johnny’s errors will inevitably get corrected by the epic amounts of Spanish he is hearing.

In a 2017 paper, Schenker and Kraemer argue that iPad use helps develop oral fluency. Specifically, they found that iPad app users after “speaking practice” were able to say more in German, and were more fluent– rapid and seamless– in saying it than were controls who had not “practiced” speaking.
So, prima facie, the authors can claim that focused speaking practice helps develop fluency.

Q: Does this claim hold up?

A: Not according to their evidence.

Let’s start with the method. Kraemer and Schenker took English L1 students of second-year German, divided them into two groups, and gave one batch iPads. The iPad group had to use Adobe Voice to record three tasks per week, which had to be posted to a group blog. In addition, each iPad user had to respond verbally to some other students’ posted responses to the tasks.

The tasks included things such as “describe your room” and “recommend a movie to a friend.”

The control group did nothing outside class other than their usual homework, and the iPad group had their other homework (which the authors do not detail, but describe as work involving “vocabulary and grammar knowledge”) slightly reduced in quantity.

In terms of results, the iPad group during oral testing on average said more, and was more fluent (using language “seamlessly”) than the control. The authors thereby claim that “practice speaking” boosted oral competence.

However, there are a number of study design flaws which render the authors’ conclusions problematic.

First, the study compares apples and oranges. The speaking group practised, well, speaking, while the controls did not. The speaking group also had more total time with German (class, plus speaking, plus whatever they did to prepare their recordings, plus listening and responding to others’ posted task responses) than did the controls (class, plus “vocabulary and grammar” homework).

This is akin to studying physical fitness by comparing people who work out with those who are couch potatoes, or by comparing people who do two hours a week of working out with those who do four.

Second, the study does not compare speaking development-focused methods. One group “practiced speaking,” while the other did “vocabulary and grammar” homework.
This is like comparing strength gains between a group of people who only run two hours a week with another group that runs two hours a week and lifts weights. Yes, both will get fitter, and both will be able to lift more weights and run a bit faster (overall fitness provides some strength gains, and vice-versa).

However, what should have been compared here are different ways of developing oral fluency. (We should note that fluency first requires broad comprehension, because you cannot respond to what you don’t understand).

Schenker and Kraemer’s “practice speaking” will help (at least in the short term). So could input-based activities such as listening and reading. One could also in theory mix all of these, as a typical class does.

Schenker and Kraemer, however, compare one approach to developing speaking with an approach that does nothing at all to address speaking.

A more persuasive study design would have had three groups: a control, and two different “speaking development” groups– for example, Schenker & Kraemer’s “practice talking” compared with, say, listening to speech, or reading, or watching subtitled film (or a mix). One group would spend 60 min per week recording German (and listening to 50-75-second German recordings made by their peers). The other would spend 60 min per week, say, listening to German. At the end, control, speakers and listeners would all be tested and compared.

Third, the study does not control for the role of aural (or other) input. For one thing, the iPad group had to come up with their own ideas. Since relatively novice learners by definition cannot come up with much on their own, they must have gotten language somewhere (Kraemer and Schenker do not discuss what the students did before recording their German). My guess is, the speakers used dictionaries, Google Translate, reading, grammar charts, things they heard on YouTube, anything they remembered/wrote down from class, possibly Duolingo, etc., to “figure out” what to say and how to say it. If you were recording work, being marked on it, and having it responded to by strangers, you would surely make it sound as good as you could…and that (in a language class) could only mean getting extra input. So did the speaking group get better at speaking because they “practiced speaking,” because they (probably) got help pre-recording, or both?

Which leads us to the next problem, namely, that the iPad group got aural input which the control group did not. Recall that the iPad group not only had to post their recordings, they also had to listen and respond to these recordings. So, again, did the iPad group get better because they talked, or because they also listened to others’ recordings of German?

Finally, there was no delayed post-test to see if the results “stuck.” Even if the design had shown the effectiveness of speaking “practice” (which in my view it did not), no delayed post-test = no real results.

The upshot is this: the iPad group got more input, spent more time listening, spent more total time with German, and spent more time preparing, than did the controls. This looks (to me) like a problematic study design. Ideally, both groups would have had the same input, the same amount of listening, etc, with the only difference being that the iPad group recorded their tasks.

Anyway, the skill-builders’ quest continues for the Holy Grail of evidence that talking, in and of itself, helps us learn to talk.

The implications for classroom teachers are (in my view) that this is waaaay too much work for too few results. The teacher has to set the tasks (and the blog, iPad apps, etc) up, then check to make sure students are doing the work, and then test them. Sounds like a lot of work!

Better practice– if one feels one must assign homework– would be to have students listen to a story, or watch a video in the T.L., and answer some basic questions about that. This way people are focused on processing input, which the research clearly says drives acquisition.

On a personal note, I’m too lazy to plan and assess this sort of thing. My homework is whatever we don’t get done in class, and always involves reading.

So this was asked on a forum recently and, as usual, it got me thinking.

This is a question about “El Internado,” but, really, it applies to anything we do in a language class. We read/ask a story/do a Movietalk or Picturetalk, etc, and then we want to assess speaking, comprehension, etc.

My response to this question is don’t bother assessing speaking.

But first, a qualifier: if our Board/school/dept. etc says we absolutely MUST assess speaking, well, then, go for it. We do what we have to do to keep our job. But if we don’t have to assess speaking, don’t. Here is why.

The info we gain from this cannot generally guide instruction, which is the point of any assessment (other than at the very end of the course). The reason for this is very simple: what will we do if what we learn from assessment varies wildly (which it almost certainly will)? If Samba has problems with the pretérito verb tense, Max doesn’t understand questions with pronouns, and Sky can fluidly ask and answer anything, how are we going to design future instruction around that info? How are we going to “customise” reading/stories, etc to give 30 different kids the input they need? Answer: we can’t.

This takes forever. If we have 30 kids in our class, and we can assess them in three minutes each (which is tough), we are spending 90 min alone on speech assessment. That’s a period and a half! During this time, we have to design something else for them to do…and good luck having 29 kids– whose teacher is “distracted” by sitting in the corner assessing speech– staying on task for 60 minutes.

We already know how well they speak. If we are doing regular PQA– personalised questions and answers (basically, asking the class members the same questions we are asking the actors)– we know exactly how well each kid can talk. So why waste time with a formal assessment? In my Spanish 1 right now, Ronnie can only do y/n answers to questions, while Emma Watson (aka Kauthr) speaks fluid sentences, and so does Riya, while Sadhna mixes up present and past tense in her output (but understands tense differences in questions) etc.
Indeed, this is where feedback to the teacher is useful. If—in the PQA moment—I see that Sadhna mixes up past and present in answers, I can guide PQA around that right then and there.

In terms of bang-for-buck, we are going to get way more results from more input than from assessing speech. We acquire language not by practising talking etc, but by processing input, as Bill VanPatten endlessly reminds us. I used to do regular “speaking tests” and they did nothing and the info was useless. Now, I never test speaking until the end of the course, and the kids speak better, mostly because the wasted time now goes into input.

A question that comes up here, regarding assessing speech post-Internado, is: what are we testing the kids on? Are they expected to remember content– names, events, “facts” etc– from the show? Or are we assessing speech generally? In my opinion, “content” should be off-limits: we are building language ability, not recall. In terms of language ability, one of the problems with assessing right after specific content (e.g. some of El Internado) is that, since this input is generally not very targeted, we don’t have much of a guarantee that the kids are getting enough exposure (in a period or two) to “master” or acquire anything new. This is to say, while an episode may be 90% or even 100% comprehensible, thanks to the teacher’s guidance etc, it almost never focuses on a specific vocab set. In a classic T.P.R.S. story, the teacher makes sure to restrict (shelter) the vocab used in order to maximise the number of times each word/phrase/etc is used.

This is true whether s/he has a plan or, as in totally “untargeted” story creation à la Ben Slavic, the kids are totally driving the bus. As a result, the odds of the kids picking up specific “stuff” from a story– in the short term, which is the focus of the question– are greater (and greater still if the asked story is followed by reading, Movietalk and Picturetalk) than if the input is familiar but untargeted.

What about the kid who missed some of (in this case) El Internado? If the speaking assessment focuses on Internado-specific vocab, it would (in my opinion) be unfair to ask Johnny who was there for all three periods and Maninder, who missed two of three periods, to do the same thing with the “language content” of the episodes.

Kids hate speaking and tests. Anything I can do to avoid tests, or putting people on the spot– which a one-on-one test does– I do. This is what Johnny looks like when you tell him, speaking test tomorrow:
(image: Youtube)

“Authentic content,” e.g. El Internado, has lots of low-frequency vocabulary. Sure, the teacher can keep things comprehensible, but some of the kids’ mental bandwidth inevitably goes into processing low-frequency vocab…which is exactly what kids don’t need in a speaking assessment, where you want high-frequency vocabulary that is easy to recall and applicable to lots of topics.

Anyway…this is why I save speaking assessment until the end of the course: I know how well my kids can speak, I can adjust aural input where it matters– right now–, I don’t want assessment to detract from input, and speaking assessment doesn’t really help me or my kids.

The Accelerated Integrative Method– AIM– is a comprehensible-input second-language method which was developed by Wendy Maxwell in Canada. I haven’t used AIM (but have posted some comments about it from practitioners here). AIM is better than any standard text: it uses stories, lots of repeated (and sheltered) vocab, etc, which are practices in line with what we know about what the brain needs to acquire languages.

AIM makes some claims about TPRS here, claims which I don’t think are always accurate. Mainly I want to clarify TPRS (as I understand it). I’ll quote AIM’s claims about TPRS and then clarify each in turn. What is in the text boxes is all AIM’s words.

Claim:

AIM

TPRS

Students speak primarily in sentences.

Students respond primarily with one-word responses.

Reality: in TPRS, students say whatever they are developmentally ready to say. In a beginner class, students’ initial output will be one-word and yes/no responses to questions. As input builds mental representation of language, their output grows longer and more complex. TPRS is built on research, which shows that forcing output beyond what students are developmentally ready for does nothing for acquisition and makes many students uncomfortable.

Claim:

AIM

The teacher uses a variety of strategies when students don’t understand.

TPRS

Translation is the primary method used when students don’t understand.

Reality: a TPRS practitioner will establish meaning using direct translation, and use translation to clarify, but will also use gestures, props, actors etc to clarify what is happening. What TPRS does not do: make students guess (or, in edubabble, “use metacognitive strategies to decode meaning”). Why? Because there is no substantiation in research that language acquisition gets easier and/or speeds up when people have to guess at meaning, and because how effective decoding strategies are depends on how much the learner already knows (and on the language being taught– good luck using cognates and “sounding out” when acquiring Mandarin). While babies and first language learners must guess, they have unlimited time to do so, while a classroom teacher has about 100 hrs/year max.

Claim:

AIM

Offers a full online teacher training and certification program.

TPRS

Offers webinars online.

Reality: both AIM and TPRS offer live training, and both offer online training, DVDs, etc.

Claim:

AIM

Supported by a variety of research. (See attached)

TPRS

Based on research of comprehensible input (CI) by Krashen.

Reality: the research into language acquisition supporting what TPRS does has been done by Krashen, Bill VanPatten, Ashley Hastings, Wynne Wong, James Asher, Beniko Mason and many others. See this for a summary. A.I.M. is built around most of the same ideas.

There is some good data from the Netherlands which suggests that A.I.M. works somewhat better than a traditional “skill-building” approach. However, most of what is on the research portion of their page does not qualify as good science: small sample sizes, lack of control groups, etc, mean that AIM claims must be taken with a grain of salt.

Claim:

AIM

Yes/no questions are rarely used. The teacher focuses on total and partial questions with complete sentence answers.

TPRS

Questioning is done by circling (asking the same question in many ways) that includes yes/no questions, QT and QP as well as PQA (personalized questions and answers). Answers are usually one word.

PQA = teacher talk

Reality:

PQA is not teacher talk. It is teacher-initiated and teacher-guided, because the teacher is the one who knows the target language.

Answers are whatever the student is developmentally ready for. For beginners, this means one-word and/or y/n answers. Later, output will become more complex and longer. We know from research that asking people to output beyond what they can do– eg complete sentences for beginners– is not really language use; it is memorised performance.

Not all questioning is circling. In reality, TPRS practitioners circle some new vocabulary, but prefer to use parallel characters (or students) for vocab repetition rather than focusing on questioning one sentence (though one-sentence focus is appropriate at times).

Claim:

AIM

The students and teacher write very long, detailed stories together, which are generally based on the play being studied. This happens twice as a whole class activity and twice as a partner activity per 50 hours of instruction. The play, vocabulary and language manipulation activities/creative writing are systematically integrated for success, predictability

TPRS

The student and teacher build a series of short stories (including 3 new words or phrases) called PMS (personalized mini-situation) by having the teacher “ask” the story. This oral activity happens frequently. Written exercises become more of a focus in the 3rd and 4th year.

Reality: TPRS includes writing right from the get-go. However, writing (and speech) in TPRS are indicators, not causes, of acquisition. In TPRS, students begin simple re-writes of stories after first co-creating one, and then reading various versions of it.

TPRS uses minimally-targeted (focused or chosen) vocabulary to build stories. Aside from a few basic verbs, nouns etc, the stories go more or less in the direction that students want them to.

TPRS stories vary in length, generally getting longer as students acquire more L2. Student written output (at the end of say Level 1) will be 600-1,000 words in one hour.

Claim:

AIM

Believe in a balanced literacy approach.

TPRS

High emphasis on the importance of reading (every second day) for language development. Students read early on. Students translate all readings out loud in a whole-class setting

Reality:

I have no idea what a “balanced literacy approach” is.

No, TPRS practitioners don’t necessarily translate all readings out loud, OR in a whole class setting. Sometimes…but we do partner translation, story illustration (comics), free voluntary reading, etc as well.

Claim:

AIM

The number of structures per lesson varies significantly.

TPRS

In a typical lesson, the teacher introduces and focuses on three target language structures.

Reality:

There is no pre-set number of structures in TPRS. An initial story will use a lot (because you need the “super 7” verbs to start storyasking with beginners). Later ones will use more, or fewer.

Claim:

AIM

All words and grammatical structures are associated with a gesture. The gestures are standardized. Gestures accelerate comprehension – no need to translate – the gestures allow the teacher to teach words as each represents clearly [sic] the meaning

TPRS

Gestures are sometimes used in conjunction with new vocabulary, however teacher and/or students can create his/her own gestures. Gestures or a physical response (TPR) from the body (limits to imperative form) and are used mostly with younger students (under Gr. 5) when needed only.

Reality:

In TPRS, TPR is not limited to third-person imperative. As a matter of fact, Ray and Seely (2015) advocate using third-person singular (and other) forms when doing TPR.

TPR is suggested for younger learners, but also works (albeit with limited effectiveness) for older learners.

Claim:

TPRS has a “Five-day lesson plan which includes only three activities: PMS or mini-story, reading the extension, timed free writing and reading”

Reality: umm…TPRS practitioners also do any of a number of other activities: Movietalk, Picturetalk, free voluntary reading, partner translation, story illustration (comics), etc.

Claim:

AIM

Teachers are encouraged to “flood” the student with vocabulary in the target language.

TPRS

Teachers are encouraged to limit the amount of vocabulary introduced at one time.

Reality: This is true. Why do TPRS practitioners carefully restrict vocabulary? Because of the “bandwidth” issue, or what Bill VanPatten calls “working memory constraints.” Basically, the less variety of info the brain has to process, the more in-depth the processing of each item (and the sounds, grammar “rules,” etc with which it is implicitly associated) can be. If we can recycle a limited vocab set over and over, the vocab will be easy to pick up. In addition, when we have limited vocab– and so are not constantly guessing at/trying to recall meaning, because the working mind can have about 7 items in its awareness at a time– our brain can devote mental energy to soaking up grammar, pronunciation and other properties.

In TPRS, we “practice” language– by processing input– much like musicians practice pieces they are learning: we go over limited parts of tunes/songs to really nail them, rather than trying to soak up an entire piece in one go.

Claim:

AIM

Provides everything for the teacher in terms of outlining in detail and with scripted teacher talk for teachers to model what they might say during whole-class activities.

TPRS

The teacher asks many questions using the new vocabulary (5-6 questions) being taught. These questions are created ‘on the spot’. No teacher’s guide is provided since questions depend on student answers and reactions. A PMS (personalized mini-situation) is created by the teacher with the help of students, but all of this depends highly on teacher’s knowledge of the L2.

Reality: this is one of the alleged strengths (and, to my mind, weaknesses) of AIM. The AIM curriculum is massively structured, which means that– provided they know the routines– any teacher can, in theory, start AIM with very little planning. However, the rigid structure– this is what your play will be, these are your questions and answers– will inhibit personalisation possibilities, and also raises the question, what if the students do not find the story interesting?

Claim:

AIM

All students participate by speaking chorally, gesturing or reading the gestures. There is never silence in an AIM classroom – all students speak 30 minutes of a 30 minute class

TPRS

One or a few students are responding to commands at once. The teacher does most of the speaking. Students only start producing the L2 when enough comprehensible input has been provided (called the silent period – several hours to several weeks)

Reality:

Nobody at AIM has ever explained why it is necessary for students to speak. We know from research that input, not output, drives acquisition, and that forced output is not language, but what VanPatten calls “language-like behavior” which does not develop acquisition.

TPRS– outside of during bursts of TPR– does not use “commands.”

Students produce developmentally-appropriate L2 from Day 1. Initially, this will be y/n and then one-word answers, and later sentences.

Claim:

AIM

Syntax and grammar are visualized, produced and embedded kinesthetically in this multi-modal approach

TPRS

Teacher uses translation to clarify grammar and structures. They use pop-up grammar and one-second grammar explanations. For example, during the translation of a reading it is used every 20 second or so and always in the L1.

Reality: there is no need to “visualize” syntax or grammar. Since acquisition of L1 (and L2, L3 etc) follow the same processes, and since nobody “teaches” their own kids grammar, vocab etc, it is not clear why one must “visualize” syntax. If one understands the input, the brain will build mental representation of grammar. This is not a problem in AIM, however– there is nothing wrong with grammar visuals– but they are unnecessary.

TPRS uses direct translation in order to waste as little time as possible and to stay in L2 as much as possible.

Claim:

AIM

Specific language manipulation activities to scaffold the ability for language use

TPRS

Does not contain specific language manipulation activities to scaffold the ability for language use

Reality:

“Manipulation” of language is not necessary to acquire it. As Bill VanPatten notes, processing of comprehensible input alone “appears to be sufficient” to develop mental representation of L2. In other words, reading and listening to what students understand is all they need to acquire the language.

Learners inevitably produce junky output, which becomes junky input for other learners. If we acquire language through input, the purpose of generating bad output and having that bad output become bad input is, well, something I have not heard explained by AIM.

Learners need only comprehensible input to acquire a language. If they want to talk, great…but they don’t have to talk, and the lack of forced output means many kids are more comfortable in class.

Claim:

AIM

Carefully sequenced partner/group activities

TPRS

Various random activities for ‘partner vocabulary practice’

Reality:

TPRS does not require or suggest that teachers do “partner vocabulary practice.” What “vocabulary practice” would be is not mentioned. I am not sure where AIM got this idea.

Claim:

AIM

Each activity of one type lasts a maximum of ten minutes to ensure the highest level of focus and learning potential

TPRS

One mini-story/PMS is taught per 50-minute daily class

Reality:

There is no defined max/min length for any TPRS story. Blaine Ray has famously told of spending four months on one story. Sometimes a story doesn’t work, so a TPRS practitioner ends it quickly and moves on to other activities. Some TPRS practitioners advocate what Mike Peto and Ben Slavic have called “quick takeoffs and landings,” i.e. stories that last 25-40 min.

How long an activity in a TPRS class lasts depends on how interesting the students find it.

Claim:

AIM

Students visualize every single word as the teacher gestures delaying showing the written word.

TPRS

Students visualize the written word/translated written word very early on…

Reality: there is no requirement/suggestion that students in a TPRS class “visualize” the written word. A TPRS practitioner will write whatever words are used (with translation) on the board. This is to help “anchor” and clarify the meaning of words, as we know that comprehensible– and not ambiguous– input is what leads to acquisition.

Anyway, that’s what AIM claims and what (my understanding of) TPRS actually is. It would be good to hear from AIM what they think, or if they can clarify. It would also be nice to hear from TPRS practitioners re: what they think.

This post comes from Carol Gaab. She is an author, teacher and San Francisco Giants language coach, as well as a presenter and all-around thinker. Gaab has one of the most critical minds I have ever run into, and likes to dismantle misconceptions almost as much as she likes to show us interesting and effective ways to teach languages.

So here she is, responding to myths like “we must use authentic documents” and “we must practice speaking,” etc. A fascinating read, and great if you are having discussions with colleagues who embrace older methods. Thanks, Carol!

“Techmology,” as Ali G. says, “is everywhere,” and we feel forced to use it. E-learning! i-Tech! Online portfoli-oli-olios! Quizkamodo! Boogle! Anyway, the litmus test for tech in the language classroom is the same as it is for anything else: does it deliver compelling, vocab-restricted comprehensible input?

Today, a look at two ways to play with tech.

Here is a recent #langchat tweet from a new-waveish language teacher:
What’s the problem here?

1. As Alfie Kohn has noted, using rewards to encourage ____ behaviour turns teaching & learning into a payment & reward system: kids buy a pizza by doing ____. But we really want to get kids to acquire languages because the process itself is interesting. If we have to pizza-bribe kids, we are doing something wrong.

2. The kids get a pizza party…during class time? Is this a good way to deliver the target language to kids? What about the kids who don’t write une carte? Do they not get to be part of the pizza party? Do they sit there and do worksheets or CPAs or whatever while their peers gleefully yap in English, chat and cram junk food into their mouths? What if kids are good at French, but can’t be bothered to write “une carte”? What if they are working, or lack digital access?

3. Output, as the research shows, does not improve acquisition…unless it provokes a TON of target-language response which meets all the following criteria:

it’s comprehensible

it’s quality language, not student-made (i.e. impoverished) output

it actually gets read/listened to

So if the teacher responds, and if the student reads/listens to the response…it might help.

4. Workload. Kids don’t benefit from creating output. The teacher also has to spend time wading through bad voicemails, tweets and what have you. Do you want to spend another 30 minutes/day looking at well-intentioned– though bad– “homework” that doesn’t do much good?

5. What do kids do when they compete? They try to win. So the kid who really wants pizza is going to do the simplest, easiest thing in French every day just so s/he can get the pizza.

Now, while the “tweet/talk for pizza” idea is a non-starter, there are much better uses for tech out there. Here is one, from powerhouse Spanish teacher Meredith McDonald White.

The Señora uses every tech platform I’ve ever heard of, among them Snapchat (a free smartphone app). You get it, make a user profile, and add people à la Facebook. Once people “follow” you, you can exchange images and short video with text added, and you can do hilarious things with images (eg face swap, add extra eyeballs, etc).

Her idea is simple and awesome:

She sends her followers (students) a sentence from a story or from PQA.

The kids create or find an image for which the sentence becomes a caption.

They send her the captioned image.

She uses these by projecting them and then doing Picturetalk about them.

You can also do the same thing with a meme-generator program (loads of free ones online): write the sentence on the board, the kids copy it, and they email you their captioned pics.

Here is a crude example:

1. Teacher sends out/writes on board a line from a story, e.g. La chica tiene un gran problema (the girl has a big problem).

2. Kids use the sentence as a caption & send it back to the teacher, e.g.

3. This serves for Picturetalk: Is there a girl/boy? Does she have a problem? What problem? What is her hair like? Is she happy? Why is she unhappy? Where is she? What is her name? etc…there are a hundred questions you can ask about this.

Not all the kids will chat/email back, and not all images will work, but over a few months they should all come up with some cool stuff. You can get them illustrating stories (4-6 images) using memes…

This is excellent practice (for outside class). Why? Because the kids are

getting quality comprehensible input

personalising the input without having to make or process junky language

building a community of their own ideas/images

generating kid-interesting stuff which becomes an in-class platform for generating more comprehensible input

And– equally importantly– the teacher can read these things in like 3 seconds each, and they are fun to read. #eduwin, or what?

A teacher asked on the Facebook group, “How do you assess speaking?” The responses were basically “try one of various apps” (e.g. Google Classroom, KaBlaBla, etc.). Lots of people want to use tech to do it.

Speaking does not improve language acquisition. The act of talking is not like practicing music or baseball. The real driver of speech is aural (and written) input.

Teachers need a life. I for one refuse to spend an hour per class listening to students’ prepared recordings of prepared questions. The kids have better things to do, and so do we.

The only speech we should assess if we want to see what the kids have acquired is spontaneous and in-the-moment. If you want people to learn a language, then by all means let them plan, rehearse, etc…but don’t confuse this with acquisition, where we see what is “wired in” and gut-level, below– and beyond– the conscious mind. Most of the ed apps I’ve seen are similar: the teacher records a question or prompt; the kid listens, decodes it, and records an answer for the teacher to mark. This kind of “planned” or “reflected-on” communication doesn’t really assess what students have acquired.

Feedback doesn’t work. You can explain, correct, suggest, etc till the cows come home and it won’t make a difference in how well the kids speak. Only input can really change that.

So how do I assess speaking?

First, every time a kid opens their mouth and uses the target language in class– to answer a question, to add to a story, etc– you are getting perfect feedback about how well they speak.

So with my 2s…Aashir can say– and understand– a word at a time, max. Simrowdy can answer any question and talk at length about anything. Sadjad extemporaneously comes up with good entire sentences when adding to a story. Janelle is like Simrowdy. Daniel will– and does– say anything, but has some issues with verbs and the like. Kevin never talks, but when he does, it’s perfect. I could go on.

Second, the point– to me– of assessing speaking (as with anything else) in class is to see what the kids do not understand and where they need more input. This is why we track barometer kids and choral responses.

Third, I don’t play “gotcha.” I test what I teach. I use vocab they know, and when using objects, pictures or people, I make sure the kids have the vocab to describe them.

Fourth, I don’t assess speaking for Level One students. It makes them anxious, and it is time taken away from input. I assess– i.e. attach a number to– kids once, at the end of Level Two. I do only what you would do speaking in real life:

ask them questions and have them answer

have them ask me questions (and I answer).

describe something– a photo, an object, another kid in the class

No presentations, storytelling, memorisation, etc.

Here’s my rubric:

For a mark of 3:

I can discuss myself, my social and family circle and my activities and interests in detail, and I can describe things.

I make minor mistakes that do not affect meaning, and I can speak fluidly.

I understand all questions and I come up with my own.

I can fix conversational problems, or I don’t have any conversational problems.

For a mark of 2:

I can discuss myself, my social and family circle and my activities and interests, and I can describe things. There are some gaps in what I can say, and sometimes I provide little detail.

I make enough mistakes that meaning occasionally breaks down, and I can speak, but not quickly and fluently.

I understand most questions and I come up with some of my own.

I sometimes fix conversational problems.

For a mark of 1:

I can discuss myself, my social and family circle and my activities and interests, and I can describe things, but I can’t do so with much or any relevant detail.

My mistakes affect meaning, and I generally don’t use sentences.

I don’t understand all questions and I have trouble coming up with my own.

I either don’t know when there are conversational problems, or I don’t bother fixing them.

“Conversational problems” means not understanding, and “fixing them” means asking for a repeat, etc (i.e. not just bobble-heading along).

This rubric will generate numbers from four to twelve out of twelve. I’ll generally show them a pic on the iPad at some point and have them describe that, or have them describe a kid in class. It takes about 4 minutes per kid to do this.
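The arithmetic behind that four-to-twelve range is simply four criteria, each marked from 1 to 3. A minimal sketch for anyone tallying marks on a spreadsheet or in a gradebook script (the function and criterion names are mine, not part of the rubric):

```python
def rubric_total(detail, accuracy, questions, repair):
    """Sum the four speaking-rubric criteria, each marked 1-3.

    The criterion names (detail, accuracy, questions, repair) are
    shorthand for the four bullet points of the rubric above.
    """
    marks = (detail, accuracy, questions, repair)
    if any(m not in (1, 2, 3) for m in marks):
        raise ValueError("each criterion gets a mark of 1, 2 or 3")
    return sum(marks)

# The lowest possible total is 4/12, the highest 12/12:
print(rubric_total(1, 1, 1, 1))  # 4
print(rubric_total(3, 3, 3, 3))  # 12
```

A kid who, say, gives solid detail (3) but has occasional breakdowns (2), asks few questions (1) and sometimes repairs (2) would score 8/12.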

So you are teaching with your text, and in Year One the kids “learn” first how to say “I like” in Spanish– me gusta– and then how to conjugate regular present-tense verbs. And suddenly they are saying *yo gusto no trabajo. Then in Level 2 you “teach” them the past tense, e.g. that “she ran” is corrió. And suddenly they are saying *los lunes corrió a la escuela. This is a lot like how kids pick up their L1: they acquire Daddy went to the store and then later say Daddy goed yesterday.

This is “rule overgeneralisation:” a new “rule” shows up and suddenly it gets applied everywhere, inappropriately.

Kids pull out of this very quickly, mostly because of the masses of input they get from L1 parents and other adults. But what can we do about this in the language classroom?

So some random notes:

1. Avoiding conscious learning is the first key. If you have to consciously learn AND remember AND apply “rules” in real time– i.e. during oral production– you will naturally default to the most recently-learned rule. So all that hard work on the present tense seems to go out the window when the passé composé gets introduced. This is not because kids are dumb, lazy, etc.; it is a brain-structure and bandwidth problem: you have a limited amount of conscious brainpower, and forcing it to “learn,” then remember, then apply “grammar rules” (and the brain, as Bill VanPatten reminds us, doesn’t even actually use what we teachers call “grammar rules” in the first place) is too much. Too many mental balls to juggle. TPRS or AIM-style stories, Movietalk, Picturetalk, novels, etc.– i.e. interesting comprehensible input– will take care of a bunch of this.

2. Unsequenced or “unsheltered” grammar is second. Blaine Ray and Susan Gross pioneered using “unsheltered” grammar– all verb tenses, pronouns, verb numbers, etc.– from Day 1. If the input is “modeling” L2 in all its diversity, the brain won’t default to conscious or recently-“learned” rules. Yes, beginners can easily cope with sentences like El chico quería un mono que bailara (the boy wanted a monkey who might dance). There you have the imperfect, a subordinate clause and the past subjunctive all in one sentence.

This way, the brain has “everything” coming in at once, and it is getting the “mental spaces” for the different “rules” built, ground up, from Day 1. The kids won’t substitute trabajaba for trabajó because they have been hearing and reading them– mixed together, naturally– from the beginning.

(There is, btw, another argument for the use of unsheltered grammar: frequency. A glance at any word-frequency list shows us that the 250 most-used words (i.e. what Level 1 of any language class should teach) include verbs in five tenses and the subjunctive mood. And it’s not like Mexican moms or French dads delay speaking the subjunctive (or whatever) till their kids are ten years old!)

3. Avoiding “grammar practice” is the third key. The problems with any output activity where we “practice” grammar are numerous:

How do we expect people to do what they are trying to learn to do? Are we not putting the cart before the horse here?

If we acquire languages via input, what good does output do? “Little or nothing” is Steve Krashen and Bill VanPatten’s answer.

This will inevitably be accompanied by tons of English or other L1 discussion. Even the eager beavers will be saying “is it the thingy, the subtunction? Is that like you put an -a on it? No wait that’s an -e. OMG this Snapchat. Shut up I don’t like her, OK it’s *ella trabajió.“

It’s boring. Generating sentences such as “the girl wants her cousin to cook” or “I want my friend to run” is not fun. I’ve tried everything–everything– and believe me, I can get kids to listen to a fun story that has [whatever grammar] in it, but I cannot get 90% of kids to “practice grammar” or “practice speaking” in any meaningful way.

4. Remember that “errors” do not exist, from the learner’s point of view. If somebody “screws up” in writing or speech, they quite simply have not acquired what they need to produce the language properly. They are being asked to do something they quite literally cannot do. There’s an entire Tea With BVP devoted to this question. So, rule overgeneralisation– like any error– has more to do with what teachers want than how “good” students are.

5. We have to remember that acquisition is non-linear. We can minimise problems such as rule overgeneralisation, but we can’t get rid of them. Check out this mama bear and her cub going rock climbing.

They test pawholds. They back down. They try the sequence differently. They don’t get there in one fast line.

Teachers are mama bear and students the cubs, if you will. They’ll do the moves…when they are ready.

Finally, we need to up the input. Students only acquire via input. Yes, it may seem like they are learning from doing worksheets, or using the subjunctive chart above, or practicing dialogues. But such “learning” is incidental, and as we see from research, much less effective than lots of good input. If you keep hearing “j’allais à l’école hier” or “yo gusto hamburguesas,” the students need to hear (and read) more je suis allé and me gustan las hamburguesas. In the long run, that’s the only thing that is going to work.

I was at Steve and Kim’s last Saturday, and when their kids’ bedtime came, Uncle Stolzie got the chance to read to Jasper, 4, from his new book, while the parents put Calder (20 months) to bed.

So we snuggled up on the couch and I started reading the book. I’m a pretty good reader: I can do different voices and accents, and I’m verbally quick. I would read a paragraph or two, and Jasper would ask questions about the pictures. He liked the reading. After about twenty minutes, Jasper was sleepyheaded and off to bed.

And then I realised that I had no idea what I’d just read. I was so focused on the reading, voices, dialogue, going slow, etc, that the story itself eluded me. I know there was a squirrel and a toad, and that was about it.

So it made me think about language performance. If we make kids read aloud, how much do they actually understand? Can you speak a foreign language– in my case, a totally new book– and know what you are saying? Can you read and speak well, and sound good, and not know what you’re doing? Does output help us learn things? When we “get through” a performance, have we experienced something like what a reader or viewer has?

This made me think of music. I’ve been playing Irish music (and old-time) for ten years now. So how do you learn? Well, primarily you listen. Irish music is played in sets. A tune will have an A part (played twice) and a B part (ditto). The whole thing is played three times, then you jump directly into the next tune, then another, etc. The music repeats a fair bit, so you have many chances to pick it up.

When I go to sessions or festivals, I see people hear a tune (from teacher or session group), use Tunepal or Shazam to identify it, then look up the sheet music, and then start playing along. I wonder why. Until you know the tune– i.e. you can hum or whistle it– there is very little point in playing. And the only way you can really learn a tune is by listening. Yes, you have to practice, because making music with mouth and fingers, unlike speech, is not something the brain is prewired to do.

Learning tunes by playing is like learning a language by talking: sure, you’ll pick something up. But it will be slow, and you’ll be so busy working on sounds and notes that you won’t really process what you’re hearing.

I have documented TPRS kids’ success in the past (see this) but today we are in for a different kind of treat: we are going to look first at what top students can do with traditional methods (forced output, grammar practice, word lists, memorisation, etc) and then with comprehensible input.

Today, totally by accident, I found my old Spanish 2 binder from when I was a traditional methods teacher using the ¡Juntos Dos! program. One of my old Level 2 final projects was to create a children’s book. The kids generally used themselves as characters. This story was written by Nuvjit S.

Nuvjit was a keen language learner in high school, and has since then acquired Japanese. She was the top student in Spanish in her year. For this project, the kids got editing help from me, they could use dictionaries, etc. Here is Nuvjit’s children’s book. This was the best project of its kind that I got that year. So take a look at what I was able to get done with traditional methods. This is second year Spanish.

Now, let’s take a look at what a kid taught with only comprehensible input methods can do.

This is Neha D.’s story. She is one of the top five or six students from this year. This was done today, in 50 minutes, with no notes or dictionary. First draft. No editing. Neha is Nuvjit, ten years later, with Spanish teaching based on what we know the brain needs to acquire language: tons of compelling comprehensible input, in aural and written form.

Neha has never seen a grammar worksheet, a verb conjugation table or an explanation of how the pretérito differs from the imperfecto. She has never had her work corrected, and she has never “reflected on her learning,” or fiddled with a portfolio. She probably can’t even tell you what a verb is and she has never heard the word “conjugate.”

This is first year Spanish.

So…it’s pretty obvious which method works better…for me, and for these students. Your mileage may vary.

Now let me also be clear here: I was a pretty bad communicative teacher. I didn’t get good results (well, I couldn’t get my kids to have awesome results). There were– and are– loads of people better than me in that tradition. So I am pretty sure that any number of people could have gotten better results. I’m also at best a slowly-improving T.P.R.S. practitioner, and there are loads of people who get better results than me.

This however is also my post’s silver lining: if I was a bad “communicative” teacher and I’m a marginal (but improving) T.P.R.S. practitioner, my kids are getting more out of the class with T.P.R.S.

At bottom, I don’t attribute Neha’s success to me being smart or a good teacher, or to how funny I am– err, try to be– etc. Neha and her classmates’ success ultimately stems from T.P.R.S., Movietalk, etc, allowing us to remain comprehensibly in the target language for huuuuuge amounts of time.