I am organizing the '1st Colloquium on Multimodal Literacies' as part of the 28th European Systemic Functional Linguistics Conference
2018 in Pavia, Italy, and invite researchers who share an interest in exploring multimodal literacy practices by looking at the interplay of media affordances, semiotic resources and
discourse communities. I welcome both theoretical and empirical contributions on a wide range of topics, including but not limited to the sub-themes indicated below.

How can we define and measure multimodal literacy?

How can SFL and social semiotics help to unpack the concept of multimodal literacy?

What is a mode and how can we examine its stratified systems of meaning?

What is the meaning-making potential of sound/music, image, text and other semiotic resources?

How can different modes/semiotic resources be integrated into a cohesive and coherent whole, e.g. a song, text, film, blog/vlog, meme, text message, etc.?

How does a given media environment enable and constrain multimodal meaning-making? In what ways do individual users and discourse communities adapt to the semiotic affordances of different media
environments?

What skills and competencies are required for a critical and fruitful engagement with multimodal texts and their underlying media technologies, e.g. television, film, music, surfing the
Internet, social networking, talking on a cell phone, texting, magazines, newspapers, fiction and video games?

As part of my Applied Media Linguistics class at AAU Klagenfurt, students learned how to create mobile applications with the help of the Ionic software development kit. Check out the website and
use the barcodes below to install the mobile apps created by the course participants.

The app module.org is based on my research on mobile learning and digital literacies and was developed in collaboration with the higher education
technology company xStudy SE. module.org enables innovative peer-advising practices in academic decision-making and course planning.