G is for Granularity

Granular is a buzzword in the discourse of publishing these days. With its vaguely breakfast-cereal connotations, it conjures up an image of learning content made palatable and wholesome.

For example, Knewton, the company that specializes in adaptive learning software, features a short video clip on its website, in which the presenter advises us that

“Publishers need to be looking at producing granular content. … no longer in the form of a big-package textbook, but broken down into small chunks that teachers, students, administrators can choose to use in combination or in a blend with any other content that they choose to use”.

Grains – chunks – blends: it’s making my mouth water.

Elsewhere on the Knewton site, we get this heady, but somewhat less appetizing stuff:

Within the adaptive learning industry, a shared infrastructure can benefit all existing educational apps by providing them with unlimited back-end content, granular and highly accurate student proficiency data, robust analytics, and more.

And

Differentiated learning can help each student maximize their potential by shaping the curriculum so that each student understands their proficiencies at a granular level and is given a direct path to improving them.

In a recent blog, they even show us what the granules (aka taxons) of second language acquisition look like:


But there are at least four major flaws in the way language learning has been granularized. These flaws long pre-date data analytics, but by bringing the power of industrial-scale computing to bear on data collection and analysis, companies like Knewton (and the publishers who enlist their services) are magnifying these flaws exponentially.

The first flaw – let’s call it the taxon fallacy – is that they have got their granules wrong. Notice that the so-called taxons in the Knewton graphic are the traditional ‘tenses and conjugations’ (present continuous, past perfect etc) – the same ‘tenses and conjugations’ that have been passed on like a bad gene from one generation to the next ever since the dawn of recorded time (or ever since the teaching of Latin) but which have little or nothing to do with how the English language is either used or internalized.

The units of language acquisition are not ‘tenses and conjugations’ (English has no conjugations, for a start). The units of language acquisition are words and constructions. Construction is a general term for any form-meaning association — whether a single word, a phrase, or a more abstract pattern — that has become conventionalized by the speakers of a language (see this related post). Constructions are more than just ‘lexical chunks’ – they can also include morpheme combinations (e.g. verb + -ing) and syntactic patterns (e.g. verbs with two objects) – and they are much, much more than ‘tenses and conjugations’. They are not easily located in the syllabus of a standard coursebook – the type of syllabus which is still the default setting for data analysts such as Knewton.

The second fallacy – I’ll call it the proceduralization fallacy – is another legacy of a long tradition of transmissive teaching: it is the belief that declarative knowledge (e.g. knowing that the past of ‘go’ is ‘went’) automatically converts to procedural knowledge, i.e. that it is available for use in real-time communication. Hence, the assumption is that, if the learner is tested on their knowledge of an item (or granule) and found to know it, it follows that they will be able to use it. As teachers we know this is nonsense. Researchers concur: Schmidt’s (1983: 172) long-term case study of a Japanese speaker of English led him to conclude that ‘grammatical competence derived through formal training is not a good predictor of communicative skills.’ Counting the granules tells you very little about a learner’s communicative capacity.

Related to this fallacy is what is known as the accumulated entities fallacy, described by Rutherford (1988: 4) as the view that ‘language learning … entails the successive mastery of steadily accumulating structural entities, and language teaching brings the entities to the learner’s attention’. Since at least the 1980s we have known that, as Ellis (2008: 863) puts it, ‘grammar instruction may prove powerless to alter the natural sequence of acquisition of developmental structures.’ And Diane Larsen-Freeman (1997: 151), coming from a dynamic systems perspective, reminds us that

Learning linguistic items is not a linear process – learners do not master one item and then move on to another. In fact, the learning curve for a single item is not linear either. The curve is filled with peaks and valleys, progress and backslidings.

Unless a granular approach to data collection and analysis factors in these ‘peaks and valleys’, it will have nothing very interesting to say about a learner’s progress.

Finally, there is the homogenization fallacy: the view that all learners are the same, have the same needs, and follow the same learning trajectory to the same ultimate goals. This quaint belief explains why the designers of adaptive learning software think that it is possible to calibrate any single learner’s diet of granules on the basis of how 50,000, or indeed 50 million, other learners consumed their granules. Although software designers using data analytics pay lip-service to ‘differentiation’ and ‘personalization’, essentially they have a battery chicken view of language learning, i.e. that the same grains are good for everyone, even if they are meted out in slightly different quantities and at slightly different rates.

Contrast that view with the sociolinguistic one that no two people speak the ‘same language’: ‘You and I may both be speakers of language X but your grammar and mine at the descriptive level will not be identical … We both appeal to different sets of rules’ (Davies 1991: 40). Or, as Blommaert (2010: 103) writes, ‘Our real “language” is very much a biographical given, the structure of which reflects our own histories and those of the communities in which we spent our lives.’ It does not exist in someone else’s database, much less in granular form.

In the end, as Brumfit (1979: 190) memorably put it, ‘language teaching is not packaged for learners, it is made by them. Language is whole people’.

45 responses

Thanks for this critique, Scott. I am faced with the same problem using a structure/lexical/functional coursebook of McNuggets, which we get parents to buy for teachers and learners to work through and which largely forms the syllabus. It dismays me to see some students still ‘failing’, despite the coursebook’s apparent concern to manage learning from a teenager’s perspective. I know that no matter how well I teach, I cannot guarantee my students will actually learn, but I can at least aim to create conditions in which learning may hope to take place. For me with my students, this consists of working a lot on their metacognition, giving them some choices (as far as the confines of the coursebook allow) regarding what and how they learn, individualising challenge as far as is possible, using assessment as a starting point where the real learning goes on, creating a cooperative ‘all for one, one for all’ learning atmosphere, and with that, positively engaging everyone at all times, with hopefully a lot of good humour. In other words, my psych/socio attention to learning is what hopefully pushes the linguistic input and output.

I think that while it’s clear that students aren’t all “the same, have the same needs, and follow the same learning trajectory to the same ultimate goals”, they roughly have the same things to learn (the K1 word list, for example) and they all learn in the same ways (despite what LS advocates might suggest). This comment probably needs some explanation. All students obviously have different preferences about how they like to learn (I prefer reading in the bath, for example) but learning, the process in the brain, must be the same for everyone. To quote Nuthall: “We all have different food preferences…[but this] does not mean that the metabolic processes by which we digest and use food are different” (Nuthall, 2007: 34) http://www.nzcer.org.nz/nzcerpress/hidden-lives-learners

Thanks for the comment Russ. I’m not saying that ‘how’ we learn is different, but ‘what’. It’s a safe bet that the K1 word list will address most learners’ basic lexical needs, and in the absence of anything better, overall frequency is probably as good a guide as any. But once you start to go beyond this, or once you start to factor in specific needs, it all goes haywire. The first word I learned in Maltese, for example, was the word for ‘comb’. Why? Go figure. It’s not a word that appears in K1 or K2, I’d hazard. Extend that argument to grammar – reconfigured as constructions – and you’re in the same place. An air-traffic controller needs a very different grammar from a tour guide at the Great Pyramid. Arguably, there’s not even a ‘common core’. Adaptive learning software, to be doing its job properly, would need a database of air-traffic controller talk, on the one hand, and tour guide at the Great Pyramid talk, on the other. And a third for the day when the tour guide became an air traffic controller.

To quote Blommaert again:

‘No one knows all of language. That counts for our so-called mother tongues and, of course, also for the other ‘languages’ we acquire in our life time. Native speakers are not perfect speakers. […] There is nothing wrong with that phenomenon of partial competence: no one needs all the resources that a language potentially provides’ (2010: p.103)

Thanks, Julie – I’ll add the proper attributions. Actually I chose these pics because they illustrate how argan oil is made – from (granular) argan nuts. A friend regularly brings a bottle or two from Morocco every summer. If you haven’t tried it, do.

Indeed they are Scott! A golden treasure. I thoroughly enjoyed my visit to this charity association which helps women earn a living by tirelessly cracking these golden nuts with a couple of rocks, and stirring for what seems interminable hours to produce that precious liquid. It’s interesting you made the connection between that process and the approach you describe so well. I have yet to read up on everything you mention to make full sense of it. Thank you again for a most informative post.

Congratulations on this excellent, well-organised, very well-written post. Your splendidly succinct review of “the Knewton approach” leads to one of the best brief critiques of current ELT practice I’ve ever read. Brilliant!

Granularisation: what fresh hell is this? Oh yes, it’s the same hell the Latin grammars have been forcing on us of old. The difference seems to lie only in the extent to which what is forced on us can be quantified and monetized – taxon by billable taxon!

Thank you for the wealth of information stated in the post. I do agree that the business of teaching English, or an aspect of it, be it grammar or vocabulary for example, needs a whole industry – definitely beyond the walls of simple granularity. Language cannot be understood to be grammar rules only; it is a whole system of continuously interacting elements.

If we speak the language of food and cooks, then a recipe is a failure when the ingredients are not well integrated or well picked to suit the dish. So it is with language: granularity does not suit grammar per se, and is the wrong spice to season a language course. The fallacies stated above analyse with precision the shortcomings of granularity.

Thomas, the ‘fallacy’ trope (if you like) was influenced by the term ‘linguistic fallacy’ in a paper by Peter Skehan, where he glosses it as the belief “that there is a straightforward relationship between how grammatical systems are described, and how they should be used practically” (Skehan 1994: 181). I mentioned this in the comments to G is for Grammar(s). The full reference is Skehan, P. 1994 ‘Second language acquisition strategies, interlanguage development and task-based learning’, in Bygate, M., Tonkyn, A., & Williams, E. (eds.) Grammar and the language teacher. London: Prentice Hall.

Other writers, such as Keith Folse, have organized their arguments around various ‘myths’ – which is also a useful device. So, no, it’s not really a new idea, nor is it ‘new’ to challenge the Knewton/Pearson approach, since Philip Kerr has been doing this (with a great deal more rigour than I have) in his excellent blog, referenced at the end of my post above.

For a fascinating, deep analysis of all these issues, people may like taking a look at Diane Larsen-Freeman’s (2013). Transfer of learning transformed. Language Learning, Volume 63, Special Issue, 107-129. Thanks, Scott, for starting another very interesting and timely discussion.

Thanks Elka, I will ferret that one out. I was actually thinking of calling the proceduralization fallacy the transfer fallacy, since it assumes that language knowledge is automatically transferred into use – but then ‘transfer’ has other senses, as in L1 transfer. I wonder what sense Diane uses it?

Thanks Scott for motivating me with this post. I’d paused a project I was working on – creating a more “natural” syllabus and this has got me back on track.

I agree with most of the fallacies but at the same time do think there is a bright future for “nuggets” and granularity: creating authentic content which teachers can pick and choose from and drop into their lessons, or have students use for self-study through a blended approach. That’s the way to go, and it provides for better teacher development and more “freedom” in their teaching. Technology can now provide us with “educational” authentic content.

But I have a quibble and disagree with your statement that “The units of language acquisition are words and constructions.”

In many cases yes but every learner learns differently and not all build through words/expressions. I think you are being too narrow and absolute with that statement.

If you go deeper, the building blocks are sound and meaning (or, to throw in Faulkner, sound and fury). I’m a bit of an iconoclast in the sense that I don’t believe sound and meaning are arbitrary at all in language. I really think we do build meaning through sound and its production/use. But also, as a die-hard CLTer, I do believe that meaning (i+1 sort of thing) is the building block of acquisition. We learn through metaphor, making things stick by taking the unknown and sticking it to the known. I do hope we get more research in the future into “stickiness” – why certain forms get remembered and others just fade away. What is the best way to teach so language “sticks” and students acquire it? At the end of the day that is what good teaching is about – an experience. It’s more complex than just putting short-term items of working memory into our longer-term boxes. IMHO.

Fascinating, Mura – although trying to follow the argument made me cross-eyed. Nevertheless the conclusion (that the way forward is ‘the student-directed constructivist use of small learning objects’) is very suggestive. My take on that would be, for example: collaboratively create and perform your own text, using [x number] of these [y number] constructions. (Am I getting warm?)

Groups like Mozilla I think are showing one way in this sense if you take a look at their web-maker tools.

One thing ELT publishers could do along this idea of remix is say make listening transcripts available (maybe current coursebooks already do that?). I am thinking here along the lines of using the transcripts to get the audio examples to exploit with students, and so save me from the laborious current method of copy-pasting and cleaning up someone’s “illegal” scans found on the net.

The notion of students taking control of content is very appealing, maybe in the way you suggest.

Thanks David – and just to pick up on your last sentence (since it captures the gist of your argument) i.e. ‘It’s more complex than just putting short term items of working memory into our longer term boxes’: I would agree, and when I say that constructions are the units of acquisition, I don’t necessarily mean that they are learned as unanalyzed chunks. In fact, the more abstract ones, which have empty slots, can’t really be learned that way, unless specific exemplars are chosen. I’m thinking about something like ‘verb + possessive adjective + way + prepositional phrase’ as in ‘They pushed their way to the front’, or ‘She made her way through the crowd’ etc. There’s no reason why these couldn’t be taught explicitly, as a pattern with various exemplars, just the way (old-fashioned) grammar is taught.

So, a construction grammar approach is not the same as the lexical approach, although it shares with the latter a much broader view of what linguistic competence consists of than does standard pedagogical grammar.

I really like this post of yours. I think it identifies key issues in a most efficacious way (see how they get into your brain, these people!). There is nothing here I want to argue about or criticise (and much of what you say links in to your previous post about power: power = syllabus; power = tests that bow down before that syllabus; power = the teacher’s knowledge of that syllabus and the declarative knowledge that most students can’t access).

But the question that interests me (no, obsesses me, I think would be a better way of putting it) is how then can we organise learning events and progress(es) if we were to abandon the grammarisation (note: not L-F grammaring!) of the language learning world (and of course we have 2000 years of history to swim against)? I understand the attraction (and efficacy (!) in certain circumstances) of unplugged teaching, learner texts etc. But that doesn’t address the system’s need for some kind of order. I don’t like the system, either, but I want to have a plan that has a chance of success!!

Thanks, Jeremy – difficult question to answer in as many words, but for a start I would be wary of assuming that ‘the units of acquisition’ should automatically become ‘the units of teaching’ – on the grounds of what I quote above, i.e. Skehan’s ‘linguistic fallacy’. Nor, as I mentioned to David, am I arguing for resurrecting either a functional-notional or a lexical syllabus (of chunks) that are taught and learned as unanalyzed strings, in the (faint?) hope that they will ‘release’ their grammar over time. Even Nick Ellis seems to accept that, in SLA, you can’t leave the ‘abstracting’ of patterns from the data up to chance. Life is too short.

But I do think that the canonical grammar syllabus that has been handed down from father to son uncritically for hundreds of years now, and is the legacy of misguided attempts to knock the round peg of classical languages into the square hole of modern ones, needs a massive review – a review, moreover, informed by corpus linguists and by researchers into SLA, especially those in the ‘usage-based’ camp.

I look forward to the day when the intermediate coursebook is no longer organized around ‘tenses and conjugations’!

You’re not arguing for a lexical syllabus (of chunks) that’s taught and learned as unanalyzed strings, because the hope that they will ‘release’ their grammar over time is faint and life is too short to leave the ‘abstracting’ of patterns from the data to chance. Is this consistent with the view expressed in your 2009 article “Slow Release Grammar” which suggests that lexicalised chunks – memorised initially as unanalysed wholes – slowly release their internal structure like slow-release pain-killers release aspirin, with the result that language emerges as “grammar for free”? It may well be, but from where I’m sitting (in the dark, down here in my bunker) it’s not entirely clear to me.

Hi Geoff… thanks for pointing out the inconsistency. Let’s say I wanted to believe that chunks release their grammar, but it may not be the case in SLA (even if it is FLA): 6 years down the line the research evidence is still lacking.

‘A key question raised by Hakuta (1976) is: ‘To what extent do these routines and patterns facilitate or hinder the acquisition of TL grammar?’ Krashen and Scarcella (1978) argue that formulaic speech and rule-created speech are unrelated but the prevailing view today is that learners unpack the parts that comprise the sequence and, in this way, discover the L2 grammar. In other words, formulaic sequences serve as a kind of starter pack from which grammar is generated. Some of the clearest evidence of this comes from a longitudinal case study of five Spanish-speaking children learning L2 English (Wong Fillmore, 1976) [….] Myles (2004) also argued that formulas constitute an important starting point. In her study, those classroom learners who failed to acquire a set of formulas showed very little development.’

(My voice recognition program is clearly incapable of analyzing chunks correctly: it glossed Wong Fillmore as ‘warm and feel more’).

Doesn’t the idea that unanalysed wholes might somehow give rise to grammar acquisition ‘for free’ suffer from the poverty of stimulus problem? Exposure to language can only provide the learner with information about what constructions are well formed, not about which ones are not. No amount of exposure to lexicalised chunks will inform the learner that there is something wrong with constructions like ‘I wasted at university four years doing a course I wasn’t interested in.’

Patrick, re the so-called poverty of stimulus problem, there is no shortage of stimuli (in English) in the form of high-frequency higher-order constructions like SVOA, i.e. subject + verb + object + adverbial (e.g. ‘I cooked pasta last night’), as opposed to the infrequent to non-existent SVAO construction to which your example (‘I wasted at university four years doing a course I wasn’t interested in’) belongs. The reason a learner might come out with this (or ‘I cooked last night pasta’) is because of the priming effect of their (equally non-impoverished) L1. What they need is corrective feedback, along with lots more exposure to the very rich stimulus that constitutes authentic text in the target language.

Simon Greenall:

You’re right, Jeremy, but I think we also know that success doesn’t lie in this kind of granularity of grammar either. The main problem for me, having been recently involved in a K1 – K12 curriculum project, is that it reduces the vast battery of activity types and the rich, creative methodology of post-1976 communicative teaching to click-and-drag, matching and gapfill and … um, well, that’s it.

There’s a huge reliance on the grammatical system, on lexis, and very little substantive help in language skills, and especially integrated skills. A holistic approach to language learning (as the alternative to a granular one), which we’d all probably advocate, simply has no place in a digital course, and adaptive learning will still focus on the individual’s cumulative language knowledge rather than their language skills in interacting with others.

And yes, like you, I’m obsessive about systems too, even if they end up being knowingly abandoned in the interest of effective learning. But writers and teachers don’t get that discretionary choice if they’re following a course using an overwhelmingly granular analysis of language knowledge.

It’s a system of assessing only what can be assessed with the digital tools available, and abandoning other features of language acquisition which a learner may have acquired elsewhere.

I agree with most of what has been said above; however, I wonder if the answer to Jeremy’s concern of “a plan that works” is hindered by how those of us involved in EFL/ESL view our own field of practice. Mura, I agree “companies like Knewton are simply ignoring any current SLA theory”, but I also believe what they are not ignoring is the fact that, by and large, as ESL/EFL teachers (and apologies for the generalization) we gain our legitimacy from who we are, not what we know. And with the assistance of corpus linguistics we now know there is an awful lot to know about the patterns in language and associated meanings. As a teacher I feel this type of knowledge of language is invaluable when working in ways which treat language as an emergent phenomenon. Perhaps we need to start valuing this type of knowledge of language as much as we value the social and psychological aspects of what we do, and in so doing move ourselves higher up Maton’s (2013) epistemic plane.

“… the view that all learners are the same, have the same needs, and follow the same learning trajectory to the same ultimate goals. This quaint belief explains why the designers of adaptive learning software think that it is possible to calibrate any single learner’s diet of granules on the basis of how 50,000, or indeed 50 million, other learners consumed their granules.”

That doesn’t mean, all things being equal, that there isn’t a good vs poor way to do teaching, given the obvious limitations on time and resources. For all it really matters, we are all the same, and the differences are highly superficial and mostly based on cognitive biases. Learned behaviour after all… I’d suggest it’s rather eccentric and defeatist to assume that we are substantially different, and the idea that we should attempt to play to all strengths is at best multiple-intelligence hokum and at worst elitism. Ability shouldn’t make a big difference to an honest learner or teacher because the ultimate truth is that everyone can learn if they follow the model. Everything gets ironed out through practice, and you have to see it that way if you’re serious about getting ahead in the language. How efficient you can make that practice is the question these publishers are flailing around with, and not answering.

Models are multifarious things and indeed granular. As I stated before, models are role models, models are teachers in many ways, models are lessons, lesson plans and examples of something to aspire to. Models are examples of ways to improve, a model is a tried and tested system that works. If you model something that works you generally see better outcomes in your own work. That’s a given. Models are doors to development. A model is a text or piece of language, method or heuristic that influences learning to a greater degree than directionless meandering, although that’s needed, especially to find good models.

Re learned behaviour: it’s cognitive biases, learning styles and the ways we think. These peculiarities are mainly ingrained from childhood and the ways our parents treated us. The point is that cognitive styles are something that can be worked with and changed, whereas genetics are a fixed quantity. Under similar conditions we all behave in similar ways; the real differences are down to the opportunities we’ve been given and the cultural pressures we’ve lived under. And these can, in the context of learning languages well, be overcome. It’s not behaviourism – perhaps it’s more Hobbesian – but it’s knowing that we are all basically the same and the differences between us are slight, in the context of how we perceive the world, how we feel, and the existence of cultural universals.

Great! As usual, clear and to the point. This article should become required reading for all teachers and, especially, school managers and the like – provided they are interested in language learning, not just the material outcomes (€€€€) of effective school admin, which, I know, is a lot to presume…

Come to think of it: why should we hope for chunks to release their grammar in the first place (assuming there is such a thing, and that we know it when we see it)? What is the abstracting good for? I work with a text that hopes that students will figure out the grammar. Why would we think inducing some apparent structure does any good to students? What does the aha-experience, when students can express some insight about how the language seems to fall into place, add to the learning process? I am not sure. Could it be that we think that from the releasing moment on the “grammar” starts its generative influence? I am not sure. If grammar emerges, what makes it emerge (or be acquired implicitly)? Wouldn’t the cause of emergence be the active ingredient in language acquisition? Wouldn’t it be better to replace the grammar ghost with memory processes? I mean, the fact that with time, i.e. repeated use and/or exposure, certain combinations start to ring true is a result of associations being validated, reinforced, and accessible for recall.

Why should we hope for chunks to release their grammar? Because as humans we are innately primed to scan ‘noise’ in search of patterns. And to re-use these patterns creatively. This is the faculty which allows good chess players to beat their opponents, jazz musicians to riff on a theme, and everyone to learn their L1.

Dear Scott,
Wholeheartedly agree with your criticism of that CALL product, and actually can you really expect anything better from A MACHINE? It is only a ‘live’ teacher who is able to serve the most appetizing freshly-cooked dish with all its smells and sounds at that very moment when the student needs it. Those breakfast cereal granules are hardly edible unless they put lots of flavour enhancers in them🙂. Ergo, the language unit for the student should be an utterance, i.e., a context-bound stretch of language aimed to express one’s meaning potential, provide a minimum of mutual understanding to overcome contradiction and otherness in order to hold a dialogue (adapted from M. M. Bakhtin, 1975). That utterance can be further broken into form-meaning composites of all kinds, such as phonemes, morphemes, lexemes, textemes, or constructions of some kind, purely for pedagogical purposes to help the student notice and hopefully internalize through practice. But mind you, once it has been cut up, it becomes “dead” and that’s all that a computer made of plastic and metal can cook for the student.

All of those abstract form-meaning composites serve as mediators. Cuisenaire rods from the Silent Way would do the same job – they make up a tool for storing declarative knowledge and looking at language as an object for cognizing. Cogito ergo sum. The truth is, that declarative knowledge (a semiotic system made of signs) seems easier to “transmit” to the student (beware of age limitations) and it is less time-consuming to form than procedural knowledge (N. P. Branscomb, 1988). Ergo, it is doomed to prevail as a teaching aim in educational systems ;-(. Yet as long as these pieces of frozen language are able to release vitamins and reinforce the formation of the skill (procedural knowledge) they should be welcomed in the classroom. Long live freshly-frozen composites squeezed out of live utterances! Down with hardly edible last millennium’s granules!

A very fruitful debate, indeed. I think that language teachers and researchers owe a lot to the grammar teaching methods over history; the arguments against these methods have brought many new inspirations to the field of language teaching.
But I do not see the usefulness of continuous argumentation against grammar instructional methods, since no best alternative has yet been provided. I do not think that a good teaching method is one which prioritizes a language element over other elements (grammar in the grammar-translation method, lexis in the lexical approach, etc.), but what is the best alternative? Integration? Eclecticism? How can we truly overcome these fallacies? As a foreign language teacher, I am always puzzled by the wealth of teaching methodologies, and the absence of consensus.

I enjoyed reading this post, and I agree with the concerns you raised. Adaptive learning is something I’ve been thinking about recently – the possible ways (if any) that such tools might contribute to a learner’s wider development of communicative ability, and what the ‘grains’ of such a product could usefully consist of. For example, it’s tempting to say that lexical items could be ‘grains’, and a learner could use an adaptive tool to reinforce a vocabulary acquisition process based on meaningful communication. But in my own experience as a language learner I’ve found that there’s not always any correlation between my ‘mastery’ of a vocabulary item on a vocab app and my ability to understand the same item in a text, much less use it meaningfully.