I'm a language grad and a student language teacher, and in my spare time I learn languages. I have a special interest in minority languages and as a former IT professional I am particularly interested in where human and computer meet.

31 October 2012

Recently, several of the other blogs I've been reading have mentioned a new book, Le Québécois en 10 Leçons, self-published on Lulu.com by Alexandre Coutu, known on the How-To-Learn-Any-Language (HTLAL) forums as Arrekusu. Now I'm pretty sure I'll be buying this book myself very soon, but right now I'm trying to stop myself, as I've got enough on my plate what with a busy teaching schedule and trying to pick up Corsican.

The forum thread discussing the book became somewhat derailed when another user, s_allard, objected to the title, suggesting that it should be "le québécois populaire" or "le québécois de rue" or somesuch.

s_allard's position can be summarised thus:
By using the term "québécois" to describe low-status speech, Coutu makes "québécois" a low-status term by association. Instead, s_allard wants to use "québécois" inclusively, covering the "standard" language used in higher-register situations. Besides, many of the features Coutu highlights are going out of fashion.

It's a valid viewpoint, but it's one that I don't personally hold. Rather, I would say that s_allard's position, while not in itself malicious, maintains an unfair distinction where one's intelligence is determined by the language one speaks. It has been pointed out that middle-class children generally do best in school the world over, but that this is simply a language bias -- school-teachers are middle-class, therefore the school lect is the middle-class sociolect.

We have a choice of action, therefore: change the lect of the children to match that of the school, or change the lect of the school to match that of the children. Academically, it's mostly agreed that the latter is the option that has proven most effective time and again, but s_allard's standpoint better matches popular opinion, whether you're talking about India, where many children are taught through the foreign medium of English; Scotland, where many say Gaelic shouldn't be taught "because many children can't even speak English properly" (for some undefined notion of "properly"); or s_allard's own Canada.

Personally, it's an attitude I'd want to challenge, not least because many of the notions of "proper" English and "proper" French have been well and truly disproven by corpus studies of the language. (E.g. the statistical norm in French is to drop the "ne" particle in the negative, and the statistical norm in English is to say "can", not "may", when asking permission.)

Moreover, there's the issue of words and phrases going out of fashion. I personally believe the examples s_allard gave are pretty misleading. He picked English slang words with no real history, that were invented by one generation and dropped by the next. But the "québécois" that he objects to is a historically attested form, and one that is being lost not simply due to natural language change, but by the constant imposition of the Standard French norm.

One of the books that I'm using to help me learn Corsican, le corse en 23 lettres, puts it very clearly. The author, Ghjaseppiu Gaggioli, is a descriptivist grammarian, and in the introduction states that he doesn't want anyone to interpret his work as authoritative. Instead, he wants to inform the reader so that the reader can make an educated choice. Because, he says, all languages change, but for a language to stay healthy, that change needs to come from within the boundaries of the language itself. Much of the change in Corsican today is the borrowing of feature after feature from French into the language. Similarly, much of the change in Scottish Gaelic is the borrowing of feature after feature from English.

And of course, much of the change in Quebec French is the borrowing of feature after feature from School French*. s_allard's approach leads to us defining "québécois" as something that every day becomes more like School French. He wants to differentiate the language name, while differentiating the language less and less.

Coutu's approach is more like that of Gaggioli. He wants to bring people's attention to the features that exist in their local tongues, features that they are not themselves aware of. He mentions in the HTLAL thread that he gets people telling him "I don't talk like that," only to use the exact same word or phrase a sentence or two later.

This is completely and utterly normal, and anyone who has studied linguistics in a modern setting will have experienced a lesson where the teacher will tell them that "everybody really says X" and the student doesn't believe it. Over the next couple of days/weeks, the student simply can't stop hearing the phrase.

A couple of years ago, I was telling my manager about how us Scottish people hardly ever say "please" -- we go into a shop and say "I'll have a ..., thanks." That's "thanks", not "please". He was having none of it. He always said please... Well, that same day he came in from lunch looking shocked. "You're right," he said, "I just asked for a sandwich, and I didn't say please."

No-one can protect their own language until they recognise it for what it is....

* I'm giving up on the term "Standard X" unless it's a statistically-defined norm-reference. A standard isn't a standard just because a minority of people say it should be. "School X" is a far more accurate term.

29 October 2012

Have you heard of the Common European Framework of Reference for Languages? I have, and I don't particularly like it, which is an opinion I'm maybe too quick to express. I think it's worth me giving a more in-depth and complete critique here. This is part I...

What is the CEFR?
For those of you not yet familiar with it, the CEFR was an idea conceived by the Council of Europe in the 90s as a solution to the problem of differing language standards across Europe. Some bodies would use terms like "Beginner", "Intermediate" and "Advanced", or their local equivalents, and these didn't always match up, and some would insert additional intermediate grades. With EFL, for example, the scale is normally "beginner", "post-beginner", "lower intermediate", "intermediate", "upper intermediate", "advanced". Others would have numbered grades, others still lettered grades. When assessing CVs (en-US: "resume"), this made it very difficult to compare candidates' language levels.

The Council of Europe's scheme took the basic "beginner-intermediate-advanced" scheme and relabelled it as "basic-independent-proficient", then transferred it to a lettered form, with A as basic, B as independent and C as proficient. They subdivided each of these into two bands, giving 6 levels in total: A1, A2, B1, B2, C1, C2.

It was adopted by the European Union in 2001 as an official recommendation to all member states. In practical terms this means that it's quite hard to get European funding for a language teaching initiative if you can't align it to the CEFR.

Why I don't like the CEFR
The CEFR is, I feel, just as vague as the grading systems that preceded it. The only benefit to the learner, teacher or employer is that it results in everyone using the same terminology, which makes it easier to discuss language proficiency internationally. But while we can discuss it more easily, we're still not able to discuss it precisely, because the CEFR is still far from precise.

Now, when I say this, the response is normally to point me towards such things as the language portfolio. The initial level descriptions are necessarily quite vague, and while there's a whole host of documents surrounding them, in reality what we have is a devolution of vagueness to secondary sources. Much of the "detail" added is actually false detail.

The very first checklist item is an A1 objective: "I can understand when someone speaks very slowly to me and articulates carefully, with long pauses for me to assimilate meaning."

What does it mean to "understand"? We're looking here at someone who has barely started learning a language, and there will be very little content he is able to understand. Technically, I could mark this as "no" for all my languages: even if someone speaks slowly, I won't know all the words they are saying.

Moreover, the ideas of "speaking very slowly", "articulating carefully" and "with long pauses" are all inherently vague. Am I allowed to slow them down as much as I like? Can the pauses be long enough for me to consult a dictionary or a grammar book?

Another problem is a pervasive tendency towards tautology:
A1
"I can ask and answer simple questions, initiate and respond to simple statements in areas of immediate need or on very familiar topics. "
"I can understand phrases, words and expressions related to areas of most immediate priority"

Familiar subjects are subjects which the learner has become familiar with. The first sentence therefore is logically equivalent to the statement "I can discuss stuff I'm able to discuss."

The second is potentially just as tautologous. Consider that the teacher will be teaching language to a certain priority. So this is effectively "I can understand phrases, words and expressions that I have been taught".

A characteristic of the CEFR's descriptors is that this idea of subject-specificity carries through all levels, moving through talking about your area of work, technical documents etc., but always in a way that really is self-defined.

The CEFR dictates methodology

The people behind the CEFR deny it, stating that the CEFR only dictates content, not method, but have a look at this from the Swiss guidelines:
A2
"I can make myself understood using memorised phrases and single expressions. "
The CEFR itself doesn't state this, but the very same subject-specificity I highlighted above forces the issue. If you're measuring people on dealing with topics and situations, you have to train them with topic-specific language.

But that's not what language is -- the core of all language is topic independent. Our basic grammar -- tenses, prepositions, word order etc -- is the same in all fields.

By breaking into topic-specific material right from level A1, the CEFR actually dictates material that the student isn't ready to learn yet, so they have to memorise phrases. This imposes a certain view of teaching on the course writer. It is a view of teaching that is held by the majority of teachers, but it is not universally accepted, and I personally believe it is the wrong way round.

If the grammar is taught first, all that subject-specific stuff will fall into place with ease, but the goals for A1 steal time from the teacher, so it can't be done.

Furthermore...

Not all languages are equal

The CEFR is a single framework for all languages and all students. Essentially it suggests an order of learning: survival -> career-related -> general. Even if a fixed order is a good idea, is that the right order?

In the Romance languages, career-related stuff is simplicity itself... depending on your career.

I used to work in IT, while studying language. With practically zero effort, I learned to discuss linguistics in Catalan. I can sort of get by discussing computers, but it's a struggle. I couldn't ask for directions to the railway station.

So my natural order of learning was specialist -> career -> survival.

This order of difficulty is a bit different from what the CEFR would predict. Why?

Well, linguistics is mostly Latin (even the word linguistics!), so translation into Catalan is simply a matter of applying regular transformations to known English words. Computing, on the other hand, uses mostly English words, so the translations into Catalan can be very difficult to predict and have to be learned individually. Thus the CEFR would have some people stuck at a certain level for longer simply because they need more words than others -- even though that vocabulary isn't prerequisite knowledge for the later levels.
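To illustrate what I mean by "regular transformations", here's a toy sketch in Python. The suffix rules below are my own simplified examples, not a complete account -- real borrowings often need further spelling adjustments too:

```python
# Toy illustration: regular English -> Catalan suffix correspondences.
# (Simplified example rules -- real words often need extra spelling tweaks.)
SUFFIX_RULES = [
    ("ation", "ació"),  # formation -> formació
    ("ity", "itat"),    # creativity -> creativitat
    ("ism", "isme"),    # organism -> organisme
]

def latinate_to_catalan(word):
    """Apply the first matching suffix rule, or return the word unchanged."""
    for eng, cat in SUFFIX_RULES:
        if word.endswith(eng):
            return word[:-len(eng)] + cat
    return word

print(latinate_to_catalan("formation"))   # formació
print(latinate_to_catalan("creativity"))  # creativitat
```

A handful of rules like these covers a surprising amount of Latinate vocabulary, which is exactly why talking linguistics in Catalan cost me practically zero effort.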

But both are still easier than getting to the railway station. Why?

Any technical field is well-defined and logically organised. Translations are very often one-to-one and quite literal. Technical stuff is (contrary to expectations) easy. And more to the point, it is far, far easier to integrate it.

Survival language taught early, on the other hand, is a closed set of phrases. They combine in a highly constrained way, and they can't be integrated with the other general language that you might be taught in the weeks before or after.

But late introduction of survival language is effortless: once you know the grammar of a language, it is very easy to build some of these survival phrases independently; even when you can't, it's still a lot easier to memorise them once you've learnt all the basic building blocks.

Take "what is your name?"

It's a totally regular question, and the following learning path leads a learner to be able to produce it independently:
"is" statements -> inverted order in simple questions -> question words

And yet it isn't uncommon for a teacher to ask the question on the very first day, before even the verb "to be" has been mentioned.

But the CEFR asks us to do this. It tells us which way we should be teaching. And I think it's wrong.

And finally: no-one actually teaches to the CEFR anyway

It is very rare that you'll find a course that genuinely teaches you how to discuss your specific profession in your target language. Even if you do take "English for Specific Purposes", it's still going to be at a very high, abstract level, like "English for Business", not "English for Operations Managers in Logistics SMEs"; or "English for IT", not "English for Object-Oriented Database Admins in the Public Sector".

And even if you do take the course, most of the exams that you can take from members of the Association of Language Testers in Europe (ALTE) do not include a professional component. And yet they all offer exams that they class as B2, the level that is explicitly defined as language relating to your area of work or study.

27 October 2012

While I was living on Skye, I made a few quid by running a few night classes in Spanish to speakers of Gaelic. It was a very interesting experience, as I was juggling three languages in the classroom -- I used both English and Gaelic for instruction, as each had similarities to Spanish that the other didn't, and even when neither was like the Spanish, the difference between the Spanish and English at least gave justification for why the Spanish was completely different.

I only had four people in the class (well, Spanish for bilingual speakers of Gaelic and English is a fairly limited market, isn't it? Particularly in a remote corner of a sparsely populated island) but it went well.
But one thing I was acutely aware of was the problem of forgetting between lessons if you're only in the class once a week. The solution to that was, unsurprisingly, to set them homework. But for the first couple of weeks, we did no writing, so what do you do? I sat down with a Zoom field recorder and a list of prompts and made a series of short MP3 files, each 3-6 minutes long containing prompts and responses that used the language we'd covered in class.

I never assumed I was the first to do it, and with software like Audio Lesson Studio out there, it's clear that it's occurred to others before me. One such person is Ravi Purushotma, who complained in his 2006 Masters thesis and elsewhere that homework sheets have stayed the same over the years, even as in-class teaching has changed with the latest teaching fashions (i.e. homework is the exception to the pattern in Decoo's lecture On the mortality of language learning methods). In the thesis itself, Purushotma proposes the usual handwavery of web 2.0 (use Twitter, write a blog, make a YouTube video etc.) without giving clear directions on the how, why or when; but in another article he specifically recommends Pimsleur-like content as a method of setting homework. I certainly can't find fault with that, as long as it's covering the class material properly, or alternatively being used to teach stuff that isn't an effective use of time in class.

Personally, I've got a fair bit of use out of the software Gradint, an audio-only spaced repetition system (Wikipedia definition) created by a partially-sighted learner to make up for the lack of resources for the visually impaired. As it stands, this is only really suited to the independent learner, although with a bit of tweaking, it could be made into a really handy little homework generator. And it just so happens that it's open source, and I'm going to be trying to learn Python programming properly over the next wee while....
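For the curious, the core scheduling idea behind tools like this can be sketched in a few lines of Python. This is a generic Leitner-style scheme of my own, not Gradint's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str        # e.g. the L1 cue played to the learner
    response: str      # the target-language answer
    interval: int = 1  # days until the next review
    due: int = 0       # day number on which the item is next due

def review(item, day, recalled):
    """Double the interval on a successful recall, reset it on a failure."""
    item.interval = item.interval * 2 if recalled else 1
    item.due = day + item.interval

def due_items(items, day):
    """Everything scheduled for today or earlier."""
    return [i for i in items if i.due <= day]
```

Items the learner gets right drift out to longer and longer gaps; items they get wrong come straight back -- which is exactly the behaviour you want from a weekly-homework generator.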

23 October 2012

OK, so this isn't strictly about language, but I've been following Udacity's course on web app development, as I've been working on designing a language learning app for a while now, and I'm really not too hot on web technologies at the moment (and where would you put a language learning app other than on the web these days?).

A lot of commenters suggested that the problems identified were unique to that particular course, but it was with those criticisms in the back of my head that I've spent several hours over the last couple of weeks rattling through this course, and I have to say that I have very similar concerns to Delta over at AngryMath.

To summarise, Delta picked out a “top 10” of problems:

1. Lack of planning
2. Sloppy writing
3. Quiz regime
4. Population and sample
5. Normal curve calculations
6. Central Limit Theorem not explained
7. Bipolar difficulty
8. Final exam certification
9. Hucksterism
10. Lack of updates?

Everything there matches my own observations with the web development course, except the final exam (which I haven't reached yet – I'm on unit 6 of 7) and the stats-specific items (4, 5, 6) – although there are problems with Steve Huffman's course that are analogous to these.

1. Lack of planning

It is not uncommon to hear Huffman change his mind halfway through a unit, or even after giving a quiz. Mostly, this is because he uncovers another quirk in Google App Engine or one of the Python code libraries that affects the outcome. OK, we can forgive the guy for not being an expert on a relatively new technology, but why didn't he take a couple of hours to check all these things before starting filming?

In video 4.38 he even says "One final password thing. I know I promised the last one was the final one but..." Now he really should have known he was going to say that when he filmed the previous segment, and if he really wanted to change it, he could have gone back and reshot part of the earlier section in order to edit it out (or even just redubbed the section in question).

If he can't plan an hour or two ahead, it throws his whole schedule into doubt.

2. Sloppy writing

Huffman makes several spelling errors during the course on some pretty fundamental computing terms, talking about “algorithims” (ouch) or a database being “comitted” (yuck). After having “protol buffers” on screen for half a minute, he spots it and corrects it to “protocol buffers” (5.16).

His handwriting becomes progressively more crooked, moving across the screen at an angle, and he consistently and clearly writes his quote marks as open and close quotes on the whiteboard, even though most computers make no distinction (and Python, along with most programming languages, definitely doesn't).

This is core stuff he's dealing with, and he's failing to be precise.

3. Quiz regime

The quizzes seem just as forced as Delta found in the stats course, ranging from annoyingly simple ones, through difficult ones that require you to remember an exact command you've seen once, to ones that suffer from a rather odd sense of humour. I was not familiar with the “hunter2” meme, and the constant reference to that value forced me to go and look it up. Not particularly interesting. As an inside joke, using it as the default password example was sufficient – giving it as an incorrect option in several multiple-choice quizzes was unnecessary and distracting.

But the other thing I really noticed about the quizzes in this course is more serious: they just didn't feel like an integral part of the lesson. Most of them started with a dedicated video, rather than just being asked at the end of a video. This inserted a little pause as the next video loaded. You'd sit there waiting as Huffman unnecessarily read out the answers (I can read, as you may have noticed). That wasn't the worst of it, though. Huffman insisted on constantly telling you you were about to have a quiz. Why? Isn't it enough to ask the question?

Worse, this kills one of the clearest pedagogical rules: don't overwrite useful information in working memory – take full advantage of the "echo effect". I found myself lost on several occasions, because after giving me new information, Huffman would wipe the “echo” from my working memory by telling me “I think it's time for a quick quiz”. There'd then be a pause while the next video loaded, during which the only thing repeating in my head was the fact that there was going to be a quiz – the information I needed to actually complete the quiz was gone. I skipped the quiz and went straight to the answer, because I didn't know it, and there was no scaffolding or structured guidance in the question.

And then, of course, whether I got the answer right or wrong (or didn't even try), Huffman decides to explain why all the answers are right or wrong anyway. No attempt was made to focus on my specific misunderstandings, and when you're giving a course to thousands of people, wouldn't it make sense to take a little extra time and include a few extra video snippets to match the different answer combinations to the quizzes? A couple of hours of your time to save 10-20 minutes each for thousands of people is a good trade-off (and what you might consider being a “good citizen”, Huffman, as your own course proposes we all should be).

4. Population and sample / ACID

Delta complains that Thrun's course doesn't present a clear distinction between two fundamental statistical concepts – I would say that Huffman's course similarly fails when it touches on databases. It's not as serious a problem, as this isn't a database course, but if you're going to teach something, for pity's sake, teach it right. ACID stands for Atomicity, Consistency, Isolation and Durability. Huffman's explanation in unit 3 fails to fully define consistency, leaving it difficult to see the difference between atomicity and consistency. The confusion is compounded by the fact that the whole definition of ACID relies on the idea of a database “transaction”, which Huffman readily admits to not having talked about before. (So I could actually add this into “poor planning” above if I wanted to.)
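To make the distinction concrete (my own example, not anything from the course): in the sketch below, atomicity means the two UPDATEs commit together or not at all, while consistency means no committed state can ever violate the CHECK constraint.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY,"
             " balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, frm, to, amount):
    """Move money inside one transaction: both UPDATEs succeed, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ?"
                         " WHERE name = ?", (amount, frm))
            conn.execute("UPDATE accounts SET balance = balance + ?"
                         " WHERE name = ?", (amount, to))
        return True
    except sqlite3.IntegrityError:
        return False  # CHECK violated: the whole transfer was rolled back
```

A transfer(conn, 'alice', 'bob', 200) fails the CHECK constraint, and crucially the credit to bob is undone too – that all-or-nothing behaviour is atomicity; the impossibility of ever committing a negative balance is consistency.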

5. Normal curve calculations / multiple frameworks and libraries

There's nothing missing from this course quite as fundamental as the normal curve, but the end result – something “magical” happening (i.e. powerful, important, and not understood) – is present. By jumping about from framework to framework and library to library, Huffman keeps introducing stuff that we, as learners, just aren't going to understand. To me, that decreases my confidence: I like to understand (which is why I'm taking the course).

6. Central Limit Theorem not explained

No real equivalent, I suppose.

7. Bipolar difficulty

The difficulty problem in Thrun's stats course is slightly different from the problem here. Thrun asked questions that he didn't expect the student to know the answer to (oddly), but here Huffman expects you to know the answer... except that he has a very odd set of assumed prior knowledge.

For example, he starts with the assumption that you have never encountered HTML before, but HTML is extremely well known now, even among non-geeks. But then he assumes you know Python. Python is a fairly popular programming language at the moment, but really – not everyone knows it. I'm also willing to wager a fair chunk of cash that most Python scripters are very comfortable indeed with HTML, but that the converse is not true.

Now, I might be doing him a disservice – his assumption no doubt comes about because Udacity's own Computer Science 101 course teaches Python, but then again the course prerequisites don't mention either Udacity CS101 or Python specifically:

See? No mention of Python. Now I've got a degree in Computer Science, so I've got what I thought was a “moderate amount” of experience. But as soon as he asked a code-based question, I was stuck. Not only did I not know the appropriate syntax, but often I had no idea of the type of structure required.

You see, Python is a very sophisticated, very high-level language that does lots of clever things that a lot of the lower-level languages don't. It has very useful and flexible tools for manipulating strings and data sets, and even allows you to build “dictionaries” of key/value pairs. A great many of the tasks presented in the course were easy if you were familiar with the structures. If you weren't, you wouldn't A) know how to write the code or B) know where to look for the answer, or what it would be called. OK, so the answer to B is “the course forums,” I suppose, but that's hardly adequate, surely? Audience participation is great and all, but shouldn't good teaching prevent these blockages, these obstacles to the learner?
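For readers who haven't met these structures, here's the sort of thing I mean – a word-frequency tally, which is a few lines of Python but a page of fiddly code in many lower-level languages (a generic example of mine, not a task from the course):

```python
# Split a string into words and tally them in a dictionary of key/value pairs.
text = "the cat sat on the mat"
counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

print(counts)  # {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```

If you've never heard of dictionaries, you don't even know that `get` with a default value is the thing to search for – which is exactly the kind of blockage I'm describing.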

8. Final exam certification

As I said, I haven't got that far yet. I suspect retaking will be less of an issue, as a lot more of the material will be practical, and you can't expect to pass a coding exam by trial and error.

9. Hucksterism

Huffman doesn't seem to be as evangelistic as Thrun, but he still talks a bit too positively after some of the quizzes (despite not knowing whether I got the answer right or wrong), and he does say from time to time that now we “know how to” do something. Are you sure? I've followed a fairly tightly defined process – take away the scaffolding, and could I repeat it? That's not guaranteed.

10. Lack of updates?

The grating positivity does seem to die down during the course, so there's some evidence of responding to feedback, but the course first went out months ago, and despite presumably thousands of completed courses, there's no evidence of them going back to attempt to fix any problems in the earlier videos. As I stated in my previous post on MOOCs: any conscientious teacher reconsiders his material after any class, which means an update for every 20-30 students – this course has had a lot more students than that, so where are the updates?

My own evaluation

So the above was recreating Delta's complaints, with the specific purpose of defending him/her against those who claim that the AngryMath article was unfair as it focused on a sample size of one. But I'd also like to post my views in their own terms.

Because to me, the big problem wasn't one that appeared in Delta's top 10; it was that the course is not what I would consider a university-level course. Or at least, not a complete university-level course. What I have experienced so far feels a little too blinkered and focused on one project. I don't remember any course at any of the three universities I've studied at where the teaching drove so clearly towards one single end-of-course task. Each of the end-of-unit programming tasks brings you closer to that final task, and the result feels lacking in breadth. As I went through my programming tasks as a student at Edinburgh, we were dealing with incrementally increasing code complexity, but on an exponentially increasing problem base – no more than two homework tasks would be as closely linked as all the tasks here are. In essence, what we're doing is more like a “class project” than a full “class”. Most courses in Edinburgh would change the programming tasks substantially from year to year (certain courses excepted – my hardware design and compiler classes were fairly specialised), but Udacity simply cannot do this, as the tasks are fundamental to the syllabus structure.

And Huffman, we're told, “teaches from experience” – which basically translates to “he is not a teacher” in layman's terms. He does an admirable job for someone who hasn't been trained in pedagogy, but really, seriously, would it kill them to get an actual teacher to teach the course? Huffman's awkwardness and uncertainty about the format is the reason he keeps killing the echo effect – he hasn't developed the instinct for how much time and space we need to process an idea. At times, he gives a reasonably broad view of the topic, but at others, he just splurges onto the page what is needed for the task at hand. There's no progressive differentiation of concepts, and he doesn't use any advance organisers to help the learner understand new concepts.

Case in point: introducing ROT13/the Caesar cypher without once demonstrating or even describing a codewheel – a video demonstration of the codewheel would have been easy, cheap and clear. His demonstration with lines on the virtual whiteboard was not. Even if you don't use a codewheel, you can always use the parallel-alphabets method:

So, yeah, I can see that Thrun really, genuinely believes that the educational establishment doesn't “get it” when it comes to new education, but he's throwing the baby out with the bathwater if that means getting rid of educationalists altogether.

Technologies change quickly. While savvy companies are quick to adapt to these changes, universities are sometimes slower to react. This discrepancy can lead to a growing gap between the skills graduates have and the skills employers need. So how do you figure out exactly what skills employers are looking for? Our thinking: work with industry leaders to teach those skills!

It's the old “academic” vs “vocational” debate once again, and just as many universities are sacrificing their academic credentials by providing more and more courses that are mere “training courses” for a given technology, that's what Udacity is becoming. Forthcoming courses from Udacity are pretty specific:

Mobile Applications Development with Android

Applications Development with Windows 8

Data Visualization with Mathematica

Thrun keeps talking himself up as an alternative to university, but he's starting to repaint his site as something that's more an alternative to a Sams/Teach Yourself/for Dummies coursebook. Because as they say:

We are working with leading academic researchers and collaborating with Google, NVIDIA, Microsoft, Autodesk, Cadence, and Wolfram to teach knowledge and skills that students will be able to put to use immediately, either in personal projects or as an employee at one of the many companies where these skills are sought after.

That's not what university is about. So Thrun doesn't like university. Fine. But plenty of us do. Stop criticising universities for being universities. If you want to be a vendor-specific bootcamp, knock yourself out, but please don't criticise universities for teaching us how to think instead of leading us by the nose through writing a SharePoint site.

The UK used to have a strong distinction between vocational institutions (known as “colleges of further education”) and academic institutions (universities, higher education). It's a useful distinction, and we should have both – it's not an “either/or” question.

On the other hand, I suppose Thrun's worked out the answer to how to fund MOOCs: sell out to big business. I hope they're paying you well enough.

17 October 2012

Normally, I do pretty well at speaking a language in a native-like accent, but my French accent had kind of slipped due to lack of use. I figured it would come back once I was here, but it never happened. I've found myself sounding more like my brother than a French speaker.

This frustrated me. This annoyed me. I tried a little bit, but I could only improve my accent while consciously working at it -- as soon as I stopped thinking about it, it got worse again.

The frustration is compounded when I say something in Corsican: my accent there is far from good, but then I'm still new to the language, and in terms of "time on task", my Corsican accent is much more promising than my French one.

My first theory was that it was down to "speaking with learners" mode. After all, most of my French recently was spoken with other Scottish people. Given that I'm teaching English all day, could it be that I was simply slipping unconsciously into assuming that I was supposed to be speaking in a way that was easy for learners to understand?

But that didn't really sit right with me... it wasn't that.

So a few days ago I was thinking about it, and I was talking in my head in a fairly good accent -- the accent I think I used to have. I tried speaking, and while it was better than normal, it wasn't as good as it was in my head. Then it struck me: my accent was based on the north -- Paris, Lille and the like; people don't speak like that in Corsica.

And yet I'm not picking up the Corsican accent. My brain seems to have filtered it off as incorrect, and left me in a bit of a limbo. I'm not speaking with my old northern French accent, and I'm not picking up a Corsican accent. I'm speaking a weird accent that muddles up vowels that I never muddled up before.

It's weird. It's frustrating. It's confusing. And yet I don't think there's anything I really can do about it. Should I write this one off as a lost cause? Shrug my shoulders and just get on with other, more pressing, matters?

Well, for now: yes. I've got a lot of lessons coming up, and not a lot of time to prep for them.

Maybe I'll work out how to deal with it, and maybe that will give me some extra insights into how to help my students, but it's not something I can really do much about right now.

14 October 2012

My English classes at the moment are
mostly in mixed-ability sets, and the level of English varies
dramatically between one student and the next. It has long been my
policy to blame the teacher for the students' failings, not the
student, and I have been taking that to mean that previous teachers
have failed them, and that it's my job to make up for it, not theirs.

I've tried to reassure them that it's
OK not to understand, and that they should tell me when there's a
problem.

Standard practice in the monolingual
language classroom is to head to the weakest students shortly after
setting the task to re-explain. But before I get there, someone has
translated the task. And as the lesson continues, the weak students
carry out the actual task in French. They receive the task in
French, they carry out the task in French, and then they wonder why
they aren't learning any English. Friday's lesson wasn't about
practising interrogation techniques (I doubt any of my biology
students are going to become detectives) but about asking questions
in English.

I'm now fighting with myself over
whether I actually can blame
the students this time. Is it a lack of explanation, or a lack of
teacher effort in making them feel comfortable, that is to blame here?
But it's hard not to blame the students if they say things like
“bonjour”, “ça va”, “merci” and “au revoir” to their
English teacher – honestly, a little bit of effort would be nice.

I am
reminded of the difficulty I have in saying “thank you” in
certain circumstances. If I'm holding a conversation in a foreign
language when I buy something in a shop, or get on a bus, I find it
very, very difficult to thank the shopkeeper (or driver) in English.
When I start a new language, I make the resolution to say the simple
things in that language all the time.
When I started taking short courses at the Gaelic college, I would
thank the kitchen staff in Gaelic, even though I couldn't order the
food in Gaelic. I can exchange pleasantries in a handful of
languages I don't speak, because I forced myself to do it.

Because
if you're not going to do the easy stuff, how the hell are you going
to learn the hard stuff?

Anyhow,
last night I was at a friend's leaving party. (Yeah, I've hardly
been here two minutes and already one of the few friends I've made is
leaving. Murphy's bloody law.) Everybody else was Corsican or
French, so all native French speakers. I managed to keep up with the
conversation... more or less. The “more or less” might be very
important here, because there were definitely things that I didn't
understand, but I pretended to understand in order to keep the
conversation flowing.

The
experience was rather similar to my experience as a Spanish learner
in the world's best city for learning Spanish: Edinburgh. I have
lost count of the number of times I found myself in parties where the
Spanish speakers outnumbered every other demographic in the room.
Naturally we spoke more Spanish than English, and I sometimes got a
bit lost, but I didn't let every missed word derail the conversation
– I kept it going until I could get back on track. This normally
worked, and over time my Spanish improved to the point where I could
hang about at these parties and get mistaken for a native (a fact I
sometimes get too smug about – pride comes before a fall, and all
that).

I'm
not a fan of theories of “silent periods” or “assimilation”,
but I know there comes a point where you have to accept your limits
and put up with them. If you can always fall back on your native
language when things get tough, why should your brain ever see the
need to use the new language?

This
reminds me of another anecdote I've probably mentioned here several
times: Gaelic song concerts. As a learner, I went to lots of them,
and over time I found I was listening less and less to the Gaelic, to
the point where I eventually stopped trying to understand it
altogether. Everything – everything – was said twice: Gaelic
first, then English. My brain worked out that the path of least
resistance was to wait for the English, and my Gaelic was in no way
improved by the experience.

But
how do you get that across to a bunch of university students? How do
you get them comfortable enough in not understanding everything that
they become functionally capable in English? Is it too late for the
final year students who still say “merci” every time I hand them
a worksheet? And, perhaps most importantly, to me at least: whose fault is it
really...?

06 October 2012

Since I moved here, I've been frequently corrected on an annoying little error in my French: I keep saying J'habite en Corté instead of J'habite à Corté. It's a tiny little thing, but it indicates a fundamental flaw in my internal model of the language.

But earlier today, as I came down off a mountain, I was thinking about this. OK, so the simple explanation is interference from Spanish (vivo en Corte/Corti). But didn't we do this in high school? Didn't we do this lots in high school? When I thought about the home towns of my classmates and me -- Stirling, Denny, Banknock, Alloa etc -- yes, I thought of the sentence correctly: à Stirling, à Denny, etc.

Why were these ones correct in my head, but not Corte? If the Spanish interference was overwriting my French, why hadn't it made *j'habite en Stirling* sound right to me?

This would have to indicate a failure to generalise on my part -- that I had learned by rote, not by meaning. This was the first phrase I was ever taught with the word à in it, so there simply wasn't the support for me to understand the whole structure, so I memorised it as sounds, just as I do the words to songs in languages I don't speak.

But something else nagged at the back of my mind. Maybe the reason I was confused was the exceptions, like... aha! Countries. There are no exceptions at the town level -- it's a stable rule. But in my school, we got towns and countries thrown at us at the same time, which made the stable rule seem unstable and arbitrary, leading to a failure to generalise.

Or, to put it another way, perhaps I generalised that the preposition was arbitrary...?

04 October 2012

I've just had an email from the organiser of a free online course that I took (but I never watched or read any of the materials at all). It was borderline spam: it was an advert for the book he was about to launch.

Is that the future of the MOOC, then? Someone using it simply to get a mailing list for his next publication? If it is, I'm not sure how I feel about that. I'm not convinced that a marketing campaign is the correct motivation for someone to write a coherent and (crucially) academically rigorous higher education course.

The warnings were there earlier, though. Coursera (before they were so-named) were at one point advertising an entrepreneurship course called Lean LaunchPad, but these guys eventually jumped ship and climbed aboard with Thrun's Udacity... presumably for commercial reasons. Yes, the course started out as a Stanford course, but the name strikes me as more than a little... trademarky. Isn't real higher education supposed to be generic? Aren't we supposed to present a moderately broad and balanced view of the whole area of study, and not home in on one specific methodology to the exclusion of all others?

To me, that looks like education taking one more step towards being a simple packager of vendor-specific training courses. It's cheap, but efficiency isn't much good when you sacrifice education in the process.

So what can we do?

Muvaffak commented on my earlier post, saying that courses need to be self-financing, but the big question is how to do that without affecting openness. No, $10 isn't much to me, but there are places where it is a hell of a lot. Simple fees aren't practical.

The solution normally kicked about is "something or other... certification". No, not very specific. The idea is usually that the course -- the "education" -- is free, but testing (and therefore certification) will be paid for. But that threatens to bring us back to an inequitable state, because we're still establishing a two-tier system. Rich people in rich countries get certified, poor people in poor countries don't.

So there is still the very real issue of openness at the commercial level. The internet makes the obvious answer difficult to see, or possibly just difficult to swallow: different prices in different places. If A Book On C isn't affordable in India at the US and European retail price, print it locally cheaper. But every couple of years, someone in the US makes a big thing about being "ripped off" by US prices, or someone gets taken to court for importing unlicensed copies of books.

So while people rave about the potential for free education to improve the lot of the poor, as soon as you start talking about offering them the same thing at a different price, you're no longer seen as helping the poor, you're now ripping off the pretty well-off (even if you're miles cheaper than the alternative).

Realistically, I'd say the fair and equitable way to fund MOOCs is through proctored exams with differential pricing. Institutions in various countries act as agents for the exam, and pay commission to the course writers. Make that commission a percentage, and the local market will determine local pricing.
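A quick back-of-the-envelope sketch of how a percentage commission would work (the cities, fees and commission rate below are all invented figures, purely for illustration):

```python
# Illustrative only: fees and the commission rate are invented figures,
# not real exam prices.
COMMISSION_RATE = 0.30  # share of each exam fee paid to the course writers

# Hypothetical local exam fees, set by each market (in a common currency)
local_fees = {"London": 200.0, "Rome": 150.0, "Ouagadougou": 15.0}

for city, fee in local_fees.items():
    commission = fee * COMMISSION_RATE  # writers' cut scales with the local price
    retained = fee - commission         # stays with the local exam centre
    print(f"{city}: fee {fee:.2f}, commission {commission:.2f}, "
          f"centre keeps {retained:.2f}")
```

Because the commission is a percentage rather than a flat fee, a centre charging a fifteenth of the London price simply remits a fifteenth of the money; nobody has to negotiate country-by-country rates.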

No major exams at the moment really have this local pricing though -- the biggest example of inequity would have to be a certain internationally recognised English exam, which costs several hundred pounds wherever you sit it. A reasonable chunk of cash for a European student, but a heck of a lot of money for someone from South America. The reason? The papers all go back to a rich country, where they're marked by people who demand pretty high wages (in global terms).

In order to allow differential pricing, then, we're going to have to allow the distribution of marking duties. The institutions taking the students' fees are going to have to be hiring their own markers.

BUT...

Having a competitive market for examination centres is very dangerous -- just see how the multiple exam boards for England and Wales became mired in controversy a few years back, with claims that one group of trainers were giving teachers advance warning of exam questions. Certain "bad apples" were effectively trying to get the pass mark up in order to make the exams more appealing to schools.

So the marking load has to be split, but you can't be marked by your own institution (so no way for them to game the system).

So what are you left with?

Well, say I sit the exam in Rome; I should now have no idea where my exam will be marked. Say it ends up in Ouagadougou. And a paper from Jean in Ouagadougou ends up in London. So I've paid much more money than Jean. But Jean's marker gets paid more than Jean paid for the entire exam, and my marker gets paid a tiny fraction of what I paid.
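Mechanically, that reassignment is trivial to sketch (the centre and candidate names here are just illustrative): shuffle the list of centres and send each one's papers one step along the cycle, so no centre ever marks its own.

```python
import random

# Hypothetical centres and the candidates who sat the exam at each one.
papers = {"Rome": ["me"], "Ouagadougou": ["Jean"], "London": ["Alice"]}

def reassign(centres, seed=None):
    """Map each centre to a *different* centre that will mark its papers.

    Rotating one step around a shuffled list guarantees no centre marks
    its own papers. Illustrative only, not a real exam-board protocol.
    """
    order = list(centres)
    random.Random(seed).shuffle(order)  # candidates can't predict their marker
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

assignment = reassign(papers)
for origin, marker in assignment.items():
    print(f"Papers from {origin} go to {marker} for marking")
```

The shuffle keeps the destination unpredictable; the one-step rotation is what enforces the "never your own institution" rule.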

While the system would be entirely equitable -- we all get out the same, and we put pretty much equivalent amounts into the system -- it looks unfair, because people just aren't used to a barter economy.

So the most workable solution for funding these things will never happen.

So from now on, I'd expect to see more and more tie-ins to books and proprietary methodologies, because the only guys who'll be able to afford to do this are the people with something to sell.....