July 2, 2016

Randomised controlled trials (RCTs) of interventions are a gold standard source of evidence in medicine, as people like Ben Goldacre have argued repeatedly. As people are allocated to receiving the intervention at random, this should eliminate many of the biases that come from people self-selecting for interventions they would like.

But RCTs are vulnerable to another sort of bias – that of deciding whether to take part in the trial at all. The study I am discussing here, by Rogers and collaborators, takes a very thorough look at why older people decided to take part in a trial that tested an intervention which is designed to get older people to walk more.

Study participants were recruited from three general practices in relatively affluent parts of England (Oxfordshire and Berkshire). Potential participants were identified from the general practitioners’ records. The general practitioners filtered out those whom they deemed unsuitable for the intervention, and invitations to the remaining people were then sent out through the practices.

This means that while the researchers did not see the names and addresses of the non-participants, they still had access to some basic demographic information which allowed them to compare who did and did not show interest in the trial. This information included age, gender, whether they were invited on their own or as a couple, and the socioeconomic deprivation index of the area they lived in – but not the area itself.

988 people were contacted initially. Everyone had three options: take part in the trial, complete a survey about why they chose not to take part, or not respond at all. 298 people (30.2%, or roughly one in three) agreed to participate, and 690 were not interested. Of those 690, 183 (26.5%, or roughly one in four) returned the survey, and 77 of the 183 (42%) agreed to be contacted further about their reasons for not participating. Rogers then interviewed 15 of these people herself; the interviews stopped after 15 because no new insights emerged.
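The recruitment funnel is easy to check in a few lines. This is just the arithmetic behind the percentages quoted above, recomputed for readers who like to verify; it is not part of the original paper:

```python
# Recruitment funnel for the walking trial, using the counts reported above.
invited = 988
participated = 298
declined = invited - participated   # 690 people were not interested
survey_returned = 183               # of the 690 who declined
agreed_followup = 77                # of the 183 who returned the survey

print(f"Participation rate:  {participated / invited:.1%}")         # 30.2%
print(f"Survey return rate:  {survey_returned / declined:.1%}")     # 26.5%
print(f"Agreed to follow-up: {agreed_followup / survey_returned:.1%}")  # 42.1%
```

The paper rounds the last figure to 42%; the exact value is 42.1%.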

Instead of discussing the complex pattern of results that emerged from the study, I would like to highlight two findings that I consider to be the most interesting.

Finding 1: The people who don’t respond at all are very different from the people who will return your non-participation survey.

Table 1 of the paper shows the overall demographic differences between participants and non-participants, while Table 2 looks at the demographic differences between participants and the non-participants who returned the survey. The pattern that emerges from Table 1 is that people are less likely to take part if they are male and if they live in a deprived area. Age and whether they were invited as a couple did not matter. Table 2, on the other hand, shows no difference at all on any of these four metrics.

Finding 2: Taking part in a trial is hard work for participants.

While the most common reason people cited for not taking part was that they were already physically active (67.3% of the 183 who returned the non-participation survey), the second most important reason was that they just didn’t have the time (44%).

The qualitative interviews provide an insight into the demands that taking part in the trial would place on participants. They would have to

find time in lives that were already full of family commitments and activities

stay with the trial for three months

walk regularly in the dark winter

look after an accelerometer device to measure physical activity

walk regardless of other health issues such as chronic pain, depression, or knee problems

change their existing habits and routines

Conclusion

The people who ended up taking part in the trial were not only wealthier and more likely to be female, but also more likely to be able to organize their lives around increased physical activity.

What does that mean for clinical practice? While it appears to be very easy to tell people to just be more active, the recruitment patterns for this trial indicate that those who might need help the most don’t necessarily contribute to the evidence base that doctors are told to rely on.

Living with dementia can be hard for the person with dementia and for the people who care for them. Good support can make life a lot easier, and create space for moments of contentment, joy, and happiness.

In the past decades, assistive technology developers have sought to provide part of this support through specialized technology. Just as a prosthetic leg can help amputees walk, developers have created prosthetic memory solutions that fill the gaps in a person’s own memory.

However, finding the package of solutions and services that is best for the person with dementia and their carers can be very difficult. Often, these services are put together by occupational therapists.

To give you an idea of the complexity of these solutions, the assistive technology introduced, which is documented in Table 1 of the paper, ranged from a simple digital calendar to a package of four devices, and included specialized solutions such as a reminder connected to the timer of the coffee machine and non-technological solutions such as whiteboards.

Methodology

For their study, the researchers used a longitudinal, qualitative design. This means that they followed 12 people and their carers for a year after the first home visit, when their needs were assessed. They talked to their participants every three months, reviewing issues that had come up at past interviews and exploring new issues that had arisen. They also took notes of their own observations.

Qualitative data such as interviews are notoriously difficult to analyse. Each analyst approaches the text with their own preconceptions and ideas. Therefore, texts are often analysed by two to three people over several iterations, comparing and contrasting their findings, to ensure that their interpretation is grounded in what the participants told them.

As a result of analysis, patterns and themes emerge as well as individual experiences that highlight wider issues. There are many ways to ensure that these findings can be useful in different contexts. In this study, Arntzen, Holthe, and Jentoft interpreted their findings in light of a particular theory of lived experience, phenomenology.

Findings

Arntzen, Holthe, and Jentoft identified five elements of a successful assistive technology. As I list them, I will comment on each from my own experience of working in this field.

1 The technology has to address an actual need, which can be practical, emotional, or about the way people relate to each other.

Comment: This means that careful initial assessment is important, and default packages are likely to fail.

2 The technology has to fit in with people’s established habits and problem solving strategies, because they reflect how a person thinks about and relates to the world.

Comment: If a piece of assistive technology is introduced because it provides useful data to central services, even though it would require people with dementia and their carers to rethink the way they organize their lives, it is highly likely to fail.

3 The technology needs to be reliable and trustworthy, and people need to feel good about it.

Comment: This means that assistive technology needs to be well designed and tested, requiring high quality software engineering, and supported by qualified engineers who can intervene quickly in case of malfunctions.

4 The technology needs to be user friendly, adaptable, and easy to manage.

Comment: Ideally, all technology would be like that, but all too often, technology is designed primarily to provide data, to enforce standard procedures, and to impose a predictable life with no room for spontaneity. This has been a problem for a long time, and it is typical of the conflicts between stakeholders (people with dementia who want to live their own lives as they used to; carers; social care; health care; policy).

5 The technology needs to interest and engage the family carer. It is likely that the family carer will be the one to look after the technology, keep it up to date, make sure it works, and use the more complex functionalities. If they like it, and if it engages them, it is more likely to be used. A case in point was the digital calendar, which proved very popular with the carers, less so with the people with dementia.

Comment: Family carers are sometimes overlooked in the work on supporting people with dementia to live in the community, as there is a strong emphasis on helping those who do not have family live independently – those are the people who require more social care time. Carers are also often assumed to be children, and spouses are assumed to be technophobic, as they are older. However, all of the people with dementia in this study were cared for by their spouse, who was in a similar age group. This means that we need to make sure technology for people with dementia is also accessible to older people without cognitive impairment.

Conclusion

While much of what Arntzen, Holthe, and Jentoft found in their paper will not be new to people who work in the field, I still think that their paper is a salutary reminder of just how important adaptable, flexible technology solutions are. Fixed packages of standard technology may be easier to maintain and to prescribe, but will they pay for themselves in actual daily use? If Arntzen, Holthe, and Jentoft are right, then this is highly unlikely.

November 27, 2015

Reminders only work if you can hear them – as I found out to my cost this morning. I had been looking forward to a scrumptious Yorkshire breakfast, served from 7am to 10am, only to wake up at 10.17am.

Why did I sleep through my trusty phone alarm? Because my phone hadn’t been charging; I had forgotten to switch on the socket into which I had plugged it. (In the UK, we need to switch on sockets before they will provide electricity).

Now imagine that you can no longer hear the alarms you set not because you failed to charge your phone, but because your hearing is going. What do you do?

I discuss a few strategies that I have discovered when working with older people as part of my research into human-computer interaction.

All of these ideas are inspired by what older people have told me and my colleagues, or by what we have seen them do. This is perhaps the most important point of my talk. People are experts in what works for them. Very often, all it takes is a bit of active listening to uncover a solution that builds on their existing habits, their routines, and the layout of the spaces and places where they live.

This is really the most important trick – make the action to be remembered as natural and habitual as possible.

Once you have ensured that, the rest is icing on the cake:

ensure that people choose reminder sounds that they will actually hear. (That includes reminders which are so irritating that you just have to get out of bed to silence them.)

ensure that people can understand what the reminder is all about. Again, you can take advantage of associations people already have. For example, people may choose a snippet from their favorite love song to remind them to take their heart medications.

ensure that the reminders are not stigmatizing. It can be hard to admit that one’s memory is going, that one is no longer coping. Having one’s style cramped is even harder.

If you would like personalized advice or would like to talk further, please do not hesitate to contact me via email (maria dot wolters at ed dot ac dot uk) or on Twitter (@mariawolters).

November 21, 2015

Just as writing was thought to be the death of memory back before the Common Era, when Real Poets memorized their work, technology is now deemed to be the death of memory, because people can have information at their fingertips and don’t need to remember it anymore.

But actually, people appear to use this new ability strategically and judiciously, based on their assessment of their own memory (or metamemory, as it’s called in the psychological literature).

In this post, I want to highlight two relevant papers I heard at the Annual Meeting of the Psychonomic Society in Chicago, one about remembering information (retrospective memory), and one about remembering to do something (prospective memory).

Saving some information frees capacity to remember

When we save information in a file on a computer, we’re more likely to forget it. But this forgetting has a function – it frees resources for remembering other information. Storm and Stone asked people to type a set of words into a file, which they then saved or did not save. Next, they were asked to memorize a second set of words, and finally, they were asked to recall the first set and the second set. If they had saved the first set, they were able to study those words again before they had to recall them.

If people had been able to save the first set of words, they were much better at remembering the second set of words – less so when they hadn’t been able to save it, and had to keep both in memory.

Next, Storm and Stone repeated the study with a twist – for half the participants, saving worked every time, for the other half, it was unreliable. The people who couldn’t rely on the first set being saved started to keep it in memory, too – so the effect of saving disappeared.

So what happened was that saving the first set of words for later study helped people use their memory more efficiently.

Whether people set reminders is determined by how they rate their own memory

Another aspect of metamemory is how confident you are in your ability to remember. In a series of two elegant studies, Sam Gilbert of University College London showed that two aspects influence whether people will set a reminder:

how complex the task is that they need to remember

their own confidence in their abilities (regardless of task difficulty)

People were asked to remember to do two separate tasks while performing a background task (moving numbers across a screen), one that was simple and one that was more complicated. When participants were able to set reminders (by arranging the numbers to hint at what needed to be done with them), they performed well; when they were unable to do so, performance plummeted, in particular on the complex task.

The second study involved a task that could be adjusted so that it was equally difficult for all participants. In that case, participants who had less confidence in their memory set more reminders than those who were more confident.

Metacognition matters

These studies show that memory is not automatic. People make judgements and assess tradeoffs – they harness technology (and external memory aids) to support them whenever they feel they need the support.

We need to bear this in mind when we design systems that help people remember – if they feel they don’t need these reminder systems, providing one will jar painfully with their own assessment of their abilities. Depending on how they react to such challenges to their self perception, this might lead them to be more despondent and dependent, instead of more independent.

At the moment, I am at the Annual Conference of the Psychonomic Society. Psychonomics is a conference that encompasses all aspects of psychology, in particular cognition and language. And to be there as a computer scientist / linguist / human factors specialist is hugely inspiring. I keep spotting research that has direct implications for the kind of work I do with older people, designing reminders, creating environments that help people thrive, writing messages that people can understand.

In the next few days, I will post a few impressions from the oral and poster sessions. I livetweeted 1.5 oral sessions, one on statistics and one on autobiographical memory, but haven’t talked about the posters yet.

What is so special about Psychonomics is that it’s not archival, so many people will use it to present more or less fully formed work that is being written up as a paper or is in the process of being published in a journal. Sometimes, it is like a technicolor advance table of contents, with lots of juicy research results to look forward to. I hope to share a few of them with you in the coming weeks.

August 9, 2015

If you are an author presenting a paper at ICPhS, you will have received detailed instructions on what to do. In this post, I want to give some of the rationale behind the requests. Most of these remarks are aimed at first-time presenters or presenters who feel relatively inexperienced, but experienced presenters might find some interesting nuggets, too.

First of all, poster presenters are restricted to A1 portrait, no landscape. This is very tight, and it is much easier to tell a visually beautiful and complete story to an audience if you have A0 landscape.

However, it allows us to leave your poster up for longer. That makes life a lot less hectic for you. No rushing out of the last oral session before your poster to put it up, no missing the start of the next session before your poster needs to be taken down.

In order to make the poster sessions less cramped, we are also alternating posters between morning and afternoon, so that when you are at your poster to present, you essentially have double the space.

Best of all, it gives your poster a much bigger audience. Whenever attendees have a spare half hour, when they are having coffee or lunch, when they are unfortunate enough to miss one of the breathtakingly amazing oral sessions, they can wander around the posters and absorb your poster in peace.

Finally, if you have been assigned a poster, but were hoping for a paper, you will hopefully be pleasantly surprised at how deep and useful discussions at posters can be. In paper sessions, time for discussion is necessarily limited, and people need to rush off afterwards or are not necessarily comfortable making their comment in front of a crowd of their peers.

The paper presenters appear to be similarly restricted at first: 10 minutes for presentation, 3 minutes for discussion, and 2 minutes for changing over.

However, what these restrictions do is ensure that everybody gets a fair hearing. Imagine having travelled halfway around the world to present your paper to an audience who wants to hear what you have to say. You are the last speaker in a three-paper session. But then, the first speaker overruns. And the second speaker not only overruns, but sparks a heated ten minute discussion. At the end, all that is left for your carefully rehearsed talk is 5 minutes, no discussion, because people are heading to the next session.

While 10 minutes is not enough to present your work in detail, it is more than enough to tell people why your work is interesting, why it matters, and what your main findings are (and by the middle of Day 2, your audience won’t be able to absorb much more information, anyway).

Strict timekeeping also makes it easier for the audience to switch between sessions. This is particularly important in a large multi-session conference such as this, where sessions are compiled according to many different criteria, and people are likely to pick and choose where they go.

Finally, the two-minute change-over time allows us sufficient time to deal with the vagaries of technology, especially when talks rely on the sound system working.

Discussant Sessions are an innovation that has become quite common in conferences that deal with speech and speaking. The papers in each session are hand-picked by a senior, highly respected researcher, who provides an introduction and facilitates discussion. These sessions replace Special Sessions, which typically have dedicated calls for papers.

Bert Remijsen and Pavel Iosad organised the Discussant Sessions at ICPhS. Ten outstanding scholars agreed to be Discussants, covering a range of basic and applied phonetics. Once the original acceptance notices had gone out on April 1, these discussants had three to four weeks to put together their sessions from all those papers whose authors had indicated they would like to be considered for these sessions – an extremely challenging task. Discussants had access to both abstracts and full papers.

Scheduling these sessions was governed by several constraints. First and foremost, Discussant Sessions are longer than normal sessions, to allow for the discussant’s introduction and a subsequent discussion, after all papers have been given. This means Discussant sessions had to be scheduled as far as possible in parallel, while making enough space in the programme for 72 additional oral sessions and leaving the now traditional Wednesday afternoon free for workshops, sightseeing, and recovery. As a result, Discussant sessions are in two blocks, one on Monday, and one on Friday.

This decision also allows us to have Discussant Sessions in rooms with sufficient capacity.

The next constraint was speaker availability. Some discussants were not available on Monday, some speakers were not available on one of those days, and some lucky people were coauthors on papers that had been selected for two different Discussant Sessions – and we wanted them to be able to attend both sessions.

Finally, we attempted to separate Discussant Sessions that were of interest to the same group of people, but that proved next to impossible while making sure that speakers (and authors) could attend the sessions where their papers would be given.

We are hoping to at least partially address this scheduling issue by the fashionable remedy of crowdsourcing.

So if you are on social media and in a discussant session, please tweet, facebook, and blog it – share the findings, share the excitement, and maybe we can even get some discussion going across sessions!

The 18th International Congress of Phonetic Sciences (ICPhS) is easily the largest yet, with over 770 oral and poster presentations, if we count the plenaries as well. All of those papers were accepted based on a full, four-page paper that represents a substantial piece of completed work and can be 1500-2500 words long, as long as a brief communication in a journal. This is very different from conferences in medicine, psychology, or the life sciences, where authors merely submit a 300-word abstract.

So, how can you as authors ensure that your papers are seen and heard?

In a sense, you have already completed the most important steps, which are to choose appropriate subject areas, create a good title and write a suitable abstract for your paper. The PDF version of the abstract book is easy to search, and we would like to encourage all attendees to use the search function liberally.

Remember that many criteria were used to create sessions: Not all ultrasound papers are grouped together, not all papers that deal with voice onset time are in the same sessions, and not all papers that address bilingualism are in dedicated bilingualism sessions.

If you are on social media (or know somebody who is on social media), we would like to encourage you or your colleague / friend / marketing accomplice to tweet your paper and session. For example, if your paper is an ultrasound study of consonants that involve a complete break in airflow (stop consonants) in people who are fluent in two languages (bilinguals), you may want to tweet it in exactly those plain-language terms.

Finally, if somebody really should have been at your paper, and wanted to be at your paper, but missed it – they have the contact email of the corresponding author right next to your abstract in the abstract book.

In this post, we will look at the way in which the oral programme was assembled. This is not just a peek behind the scenes, but should also go a long way to explain why your paper (of all papers) got stuck in that particular session.

First of all, ICPhS is much bigger than it used to be, which also makes it more tricky to organise and schedule. For example, the 16th ICPhS in Saarbrücken, Germany (2007) featured around 450 oral and poster presentations. At this ICPhS, the attendees have the choice of around 750 papers, split almost equally between oral and poster sessions.

Phonetics has also become more diverse, with specialisations upon specialisations. This is particularly true for the prosody community (or should I say avalanche?), where every single aspect of rhythm, stress, and intonation will be discussed in great detail. Methods range from corpus-based studies (i.e., you speak, we record and annotate) to intricate perception experiments.

The prosody avalanche is almost matched in sheer impact by the language acquisition (in particular second language acquisition) tsunami. Pretty much every oral session features either papers or full sessions on bilingualism or second language acquisition.

Needless to say, this made the task of putting together the programme a challenge, and some of the resulting sessions are best approached with a spirit of discovery.

(After all, each paper has to fit somewhere, and if it fits, it sits. Even if one has to be a little creative sometimes.)

In order to help with this process, we relied on the people who know their papers best – the authors. On submitting a paper to ICPhS, each author (or author team, in most cases) was asked to categorise their paper using a list of 27 scientific areas. Authors could specify up to three areas for their paper. All papers also had keywords describing key aspects of the content, a meaningful title, and an abstract, which could be consulted in case of confusion (or despair).

All oral papers were first grouped by the scientific areas that the authors had indicated. After some checking, we found that papers were described best by the combination of areas specified, and took this as the starting point for the next step.

The initial grouping yielded around 40 groups of oral papers. Some of them fell neatly into sessions, and there was much rejoicing. Others were more complex. For these papers, keywords were consulted. Sometimes, frequently used keywords suggested themes (such as rhotics). If that approach was not fruitful, groups of papers were inspected for meaningful clusters.

The overall approach was what we computer scientists would call greedy – coherent sessions emerged first, and were fixed in the structure. The remaining papers were then grouped into sessions that were as coherent as possible, until all papers had been assigned to one of 72 sessions.
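As an illustration, the greedy approach described above might be sketched like this. This is a toy version with invented names and a fixed session size; the real process involved human judgement, keywords, and titles at every step:

```python
from collections import defaultdict

def greedy_sessions(papers, session_size=5):
    """Greedily group papers into sessions.

    papers: list of (paper_id, frozenset_of_areas) tuples.
    Coherent groups (papers sharing the same combination of
    subject areas) are fixed first; leftovers are then merged
    into sessions that are as homogeneous as the data allows.
    """
    by_areas = defaultdict(list)
    for paper_id, areas in papers:
        by_areas[areas].append(paper_id)

    sessions, leftovers = [], []
    # Pass 1: area combinations with enough papers become sessions.
    for group in by_areas.values():
        while len(group) >= session_size:
            sessions.append(group[:session_size])
            group = group[session_size:]
        leftovers.extend(group)

    # Pass 2: remaining papers are grouped as coherently as possible
    # (here simply in order; in reality, keywords and titles were used).
    for i in range(0, len(leftovers), session_size):
        sessions.append(leftovers[i:i + session_size])
    return sessions
```

Because the coherent sessions are fixed first and never revisited, this is greedy in the textbook sense: each step makes the locally best grouping without backtracking.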

As a result, sessions can be grouped by topic (coronals), method (ultrasound investigations of speech), area of phonetics (speech perception), or language group (Arabic Phonetics), and therefore, one paper can easily fit into several different sessions.

In this post, we will talk about the way the poster sessions were assembled.

This task was somewhat more straightforward than the oral sessions, because each poster session could hold up to 60 papers (a little more if the other poster session of the day was below 60 papers).

Authors had assigned one to three subject areas to their paper, and we used the main subject area to group papers initially. We then created subgroups for all of the larger subject areas, so that posters in an area were spread over several days. This gives people who are interested in an area more time to look at the posters carefully and talk to presenters; it also makes poster sessions more diverse and interesting for those attendees who like to browse and who prefer variety.
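The day-spreading idea can be sketched roughly as follows. This is a simplified illustration, assuming a fixed session capacity and a round-robin assignment, neither of which is how the real schedule was finalised:

```python
from collections import defaultdict
from itertools import cycle

def spread_posters(papers, days, capacity=60):
    """Spread posters from each main subject area across days.

    papers: list of (paper_id, main_area) tuples.
    Posters sharing a main area are dealt out round-robin over
    the days, so each day's session mixes several areas.
    Assumes the total number of papers fits within
    len(days) * capacity; otherwise the day search never ends.
    """
    by_area = defaultdict(list)
    for paper_id, area in papers:
        by_area[area].append(paper_id)

    schedule = {day: [] for day in days}
    day_cycle = cycle(days)
    for area_papers in by_area.values():
        for paper_id in area_papers:
            day = next(day_cycle)
            # Skip days whose session is already full.
            while len(schedule[day]) >= capacity:
                day = next(day_cycle)
            schedule[day].append(paper_id)
    return schedule
```

The round-robin deal guarantees the property mentioned above: no single day ends up with all of the posters from a large area.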

When we saw clear thematic links, subgroups were named (for example Speech Perception), when the group was very mixed, subgroups were just numbered (for example Phonetic Psycholinguistics and Neurolinguistics).

When assigning poster sessions to specific slots, we worked around the following constraints:

timing of relevant plenaries, such as Simon King’s plenary on speech technology

timing requests by attendees that reached us in the first few weeks after acceptance

the original position of discussant sessions, which shifted slightly as additional scheduling constraints became clear

ensuring that different sessions from the same subject area were on different days