Our linguistic and cultural landscapes are becoming ever more complex, and teachers are challenged to prepare their students for immensely complex and diverse communication processes. The New London Group (1996) already sketched these developments, but the challenges are now more pressing than ever.

At the same time, we are confronted with a growing awareness of the diversity of learner groups. Most likely, learner groups ARE becoming more and more diverse, and the needs of these diverse learners have to be met in current and future classrooms.

If we take both challenges together, teachers today are asked to prepare increasingly diverse learner groups for increasingly complex and diverse communicative landscapes in their future lives. If that doesn’t call for a conference dealing with these questions!

The “New Horizons” conference brings together scholars from most languages taught at German schools. It has a focus on the old (Latin) and new (English) linguae francae, but also hosts scholars from German and Romance Studies, and even from Music!

In April 2017, I “officially” challenged myself to a new long-term research project. In this project, I investigate linguistic action patterns that arise in the context of cooperative learning in English language classrooms. I have been intrigued by the question of how teachers manage these highly emergent, hard-to-plan cooperative learning phases, i.e. how they introduce the complex tasks and, most importantly, provide help and motivation along the way.

My article “Brain book buddy boss” published here is the first publication from that project, sketching out some of the basic ideas. Another article is on its way 🙂 Still, I am currently in the challenging phase of collecting classroom data. So far, 15 lessons have been video- and audiotaped (and transcribed), but more are still to come…

A more detailed description of the project can be found here (never mind the acronym, I’m still looking for a nice one!).

10plus1 | Issue #3 | The Linguistics of Politics | Out Now!

After finishing the volume “Communication Forms and Communicative Practices” (I’ll post about this once it has been officially published; currently we’re still waiting for the Library of Congress ID) with Peter Lang Verlag, my colleagues Alexander Brock (Halle), Jana Pflaeging (Bremen / Salzburg) and I have set out on another project: a collected volume on “Genre Emergence”. You can find the Call for Book Chapters here 🙂

This is a quick update to the previous post in which I announced that I had won the second prize in the Stifterverband’s essay competition “Bildung heute – Bildungsideal einer digitalen Zeit”. The essays (1st, 2nd, 3rd) are now available online as audio readings and as PDF files (here’s mine).

After having the pleasure of listening to Maria reading her winning essay, I can only recommend her text. I have seldom come across a text with such depth, clarity and elegance at the same time. Congratulations!

I have been thinking about these comments ever since, trying to find arguments for not extending the corpus. What I found, however, were quite weak excuses. What is more, I started wondering how I could justify a particular number of texts for a given period at all. I came up with the following line of reasoning:

I work with both qualitative and quantitative methods, even though my general focus lies on the qualitative end of the continuum. Text numbers, therefore, have to be justified both from a qualitative and a quantitative point of view.

The qualitative framework of my thesis is heavily inspired by Grounded Theory (e.g. following Glaser & Holton 2004). Grounded Theory features a process called “theoretical sampling”, which combines data collection, coding and analysis. The basic idea is that data collection is guided by the emerging theory and strives for theoretical saturation. In other words: if nothing new is found, i.e. no conflicting cases and no cases challenging the categories established so far, the analyst has come close enough to theoretical saturation to stop collecting samples. (Footnote: the analyst might, of course, have become blind to new phenomena through excessive preceding analysis; in that case, however, collecting further samples would not help the research project either.) So that’s exactly my qualitative part of the argumentation: collecting text samples until nothing new or challenging is discovered. This point had almost been reached after collecting and analysing 80 to 90 texts for the periods II.A to II.C, but it was good to put my categories to the test by collecting more texts and assimilating them into my theory.
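This stopping rule can be sketched schematically. Everything here is my own illustration, not part of the thesis: the function names are invented, and an “analysis” is reduced to the set of categories a text yields, which is a strong simplification of actual Grounded Theory coding.

```python
# Schematic sketch of theoretical sampling: collect and analyse texts
# until a round yields nothing new or challenging.

def theoretical_sampling(texts, analyse):
    """Collect texts until an analysis round adds no new category --
    a rough stand-in for approaching theoretical saturation."""
    categories = set()   # categories established so far
    collected = []       # the growing corpus
    for text in texts:
        collected.append(text)
        new = analyse(text) - categories
        if not new:      # nothing new, nothing conflicting: stop
            break
        categories |= new
    return collected, categories

# Toy run: the third text adds no new category, so collection stops there.
corpus, cats = theoretical_sampling(
    [{"Update"}, {"Filter"}, {"Update"}, {"Sharing Experience"}],
    analyse=lambda text: text,
)
print(len(corpus), sorted(cats))  # 3 ['Filter', 'Update']
```

Of course, in real analysis “nothing new” is a judgement call spread over many coding sessions, not a single boolean check.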

From a quantitative point of view, a researcher has to make an informed guess about how many cases will probably suffice for statistically sound statements. One formula suggested by Raithel (2008: 62) uses the number of variables V to be joined in one analytical step (e.g. a correlation study of two variables) and the number of associated features K (e.g. two features for the variable “gender”); this value is multiplied by 10: n ≥ 10 · K^V. As I try to trace the change within several variables that are investigated separately, my analytical steps quite often contain only one variable with a particular number of features. The variable with the highest number of features at present is the textual function, with about ten distinct features (e.g. Update, Filter, Sharing Experience, as outlined in my last post). Consequently, about 100 texts per period (10 · 10^1) are roughly enough according to this formula. This is quite a tight budget: if I want to correlate the variable “textual function” with the variable “gender of author”, I have to point out that the results give some hint at a possible statistical connection but have to be taken with a pinch of salt.
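The rule of thumb can be written up as a tiny helper. The generalisation to variables with different feature counts (at least 10 cases per cell of the cross-tabulation, which reduces to 10 · K^V when all V variables have K features) is my own reading of the formula, and the function name is invented for illustration:

```python
from math import prod

def min_sample_size(feature_counts):
    """Rule of thumb after Raithel (2008: 62): at least 10 cases per
    cell of the cross-tabulation, i.e. n >= 10 * K^V for V variables
    with K features each."""
    return 10 * prod(feature_counts)

# One variable ("textual function") with ~10 features:
print(min_sample_size([10]))      # 100 texts per period

# Crossing it with "gender of author" (2 features) would already call for:
print(min_sample_size([10, 2]))   # 200 texts -- hence the pinch of salt
```

The second call shows why correlating two variables on a 100-text budget only hints at a connection rather than establishing one.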

I think that both arguments taken together form a fairly stable basis for the justification of the number of cases. I guess 100 texts in the periods II.A, II.B and II.C are also a good compromise between striving for ever higher case numbers and the feasibility of qualitatively and thoroughly analysing, say, 500 texts in each period.

So, after the extension phase, which took me a bit more than a week of searching for texts, coding, basically repeating all analytical steps I had done before and updating the numbers in my thesis, the corpus looks like this now (snapshot from my screen, sorry for the quality):