Simply adding up followers doesn’t make sense. This new method is better, I reckon, and can be used as a proxy to assess scientists’ relative social media impact.

I’ve been wanting to do this for a long time: find a method that can compare follower numbers across the social media platforms most used by scientists and researchers, namely Twitter and LinkedIn.

Living and working with science communication in the Copenhagen area, I also wanted to use the Danish metropolis as a first test case, to find out which researchers and scientists would make it onto, say, a top 50 ranking.

For good or for worse, scientists and researchers are assessed and evaluated by their ‘impact’. Most often, the number of scholarly citations is considered the most important measure of this. Google Scholar, for example, shows researchers’ h-index, which is based on a scientist’s most cited papers and the number of citations they have received in other scholarly work.

Each scientist is represented by a dot. On the x-axis is the base 10 logarithm of their number of Twitter followers; on the y-axis, the base 10 logarithm of their number of LinkedIn followers.

At the same time, many scientists have embraced the use of social media to disseminate their research and to network with other scientists. They are often supported in this by their universities, which can strengthen their brand among stakeholders, other scientists, and the wider public when their own researchers are active on social media.

In all spheres of life nowadays, the number of social media followers correlates with real, or certainly perceived, status. Academia is no exception, and follower numbers seem to correlate well with other measures of success.

The number of social media followers is visible, and important. If for nothing else, then certainly as a ‘vanity metric’ that flatters researchers’ and their institutions’ egos.

A well-known metric that calculates social media impact for researchers is the so-called Kardashian Index (K-Index), named after the celebrity Kim Kardashian. It is a measure of the discrepancy between a scientist’s social media profile and their publication record: it compares the number of followers a researcher has on Twitter to the number of citations they have for their peer-reviewed work.

The trouble with the Kardashian Index, which was invented by Neil Hall, is that the index itself is a criticism of the idea that scientists should have a social media impact at all.

The TwiLi Index is, I feel, a better, more honest, way to calculate the numbers. A high social media following for a scientist is, everything being equal, surely a good thing. But apart from that, I take no stand on the deeper issue of whether activity on social media, or (heaven forbid!) competition on social media is a good thing for science in general.

The method

You could just count followers. But the trouble with just counting followers is that social media followings are susceptible to a runaway ‘Matthew effect’. To slightly misquote the Good Book, ‘for every one who has followers, more will be given…’ People are more likely to follow accounts that already have many followers. And this is exacerbated by accumulation: the longer an account holder is active, the more likely he or she is to have many followers. This favours long-serving, distinguished researchers who have been at it a long time and are more likely to have a successful academic career behind them: successful professors who have many followers tend to gain many, many more.

So if we are to compare social media followings in any meaningful way, we need to reduce this effect. My first intervention is therefore to use the base 10 logarithm of the follower number to counteract it.
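To illustrate what the logarithm does to runaway numbers, here is a minimal sketch in Python (the follower counts are made up):

```python
import math

# Raw follower counts span orders of magnitude;
# a base 10 logarithm compresses the runaway gaps.
followers = [100, 1_000, 10_000, 100_000]
logs = [math.log10(n) for n in followers]
print(logs)  # a 1,000-fold raw gap shrinks to a difference of 3
```

A researcher with a thousand times the followers of a colleague is thus only three points ahead on the log scale, not a thousand.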

At the same time, the number of followers on one platform, say Twitter, correlates quite well with the number of followers on another platform, say LinkedIn (see graph above).

As far as I can see, there are three reasons for this.

First, and most important: if you have many followers on one platform, you likely already have the status and leverage that attract followers on another.

Second, if you are very active/successful on Twitter, you are also likely quite active/successful on LinkedIn and vice versa, as you are predisposed to social media activity.

Third, the posts and information shared on one platform can be leveraged with success on the other, making that platform more popular too.

From this I deduce that just adding the base 10 log of Twitter followers to the base 10 log of LinkedIn followers does not capture this effect: adding logarithms is, after all, equivalent to ranking by the product of the raw follower counts. That is why I propose multiplying the two numbers instead.
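A quick sanity check on that point, as a minimal sketch in Python (the follower counts are hypothetical):

```python
import math

a, b = 5_000, 200  # hypothetical Twitter and LinkedIn follower counts

# log(a) + log(b) == log(a * b): the additive version is just the raw
# product in disguise, so it would preserve the runaway effect,
# whereas multiplying the logs rewards balance across both platforms.
assert math.isclose(math.log10(a) + math.log10(b), math.log10(a * b))
```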

Finally, I add 2 to each follower number. This ensures that an account with zero followers, or only one follower, on one of the platforms still gets a meaningful TwiLi number: adding only 1 would give log(1) = 0 for a zero-follower account, which would zero out the entire product.

The resulting formula is the following.

TwiLi Index = log10(2 + A) x log10(2 + B)

Where A = number of Twitter followers and B = number of LinkedIn followers.
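In code, the formula looks like this. This is a minimal sketch in Python; the function name and the example follower counts are my own:

```python
import math

def twili_index(twitter_followers: int, linkedin_followers: int) -> float:
    """TwiLi Index: the product of base 10 logs, with +2 added so that
    zero-follower accounts still yield a meaningful, non-zero score."""
    return math.log10(2 + twitter_followers) * math.log10(2 + linkedin_followers)

# A researcher with 5,000 Twitter and 3,000 LinkedIn followers:
# log10(5002) * log10(3002) = 3.70 * 3.48, roughly 12.86
```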

I think it works pretty well: Runaway numbers on one platform don’t give you an unfair advantage. Your index is higher if you have good numbers on both platforms, rather than low numbers on one and excellent numbers on the other. And the TwiLi Index ranks researchers in a different way than simply adding or even multiplying follower numbers.
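To see the ‘balance’ property in action, here is a minimal sketch (the follower counts are made up):

```python
import math

def twili_index(a: int, b: int) -> float:
    return math.log10(2 + a) * math.log10(2 + b)

balanced = twili_index(10_000, 10_000)  # roughly 4.0 * 4.0 = 16
skewed = twili_index(100_000, 100)      # roughly 5.0 * 2.0 = 10

# The skewed account has five times the raw followers in total,
# yet the balanced profile scores higher on the index.
assert balanced > skewed
```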

Gathering the data

So how did I proceed in this, my first, tentative, case?

The first, and most difficult, part is to find all the scientists in the Greater Copenhagen / Øresund area who have large follower numbers.

Luckily my own @MikeYoungAcademy Twitter account, which I have been running for several years and which posts Copenhagen seminars and lectures, consistently follows and is followed by Copenhagen scientists and researchers. So I used the group I was following as a starting dataset that I could augment as I went along.

I got some help at this point from my good friend and data scientist Lasse Hjort Madsen, who helped me with some ideas and with extracting good data from my Twitter account into a workable spreadsheet.

Then, whenever I stumbled across a Twitter account that identified itself as a scientist in the Copenhagen area, I simply added them to the spreadsheet.

I ended up with a master list of upwards of 150 researchers. My daughter Atlanta Young, a historian by training and yet another data ‘ninja’, helped me at this point. She spent some time cross-checking all of the researchers’ LinkedIn follower numbers. For larger datasets this process could probably be automated, but we did it manually.

Note here that we looked for LinkedIn followers, not connections. Connections are, by default, followers, but not every follower is a connection. For most people the two numbers are nearly the same; for top scientists, however, the follower number can be considerably higher. The follower metric is also more directly comparable to Twitter’s.

Some scientists were excluded from the list at this point if they did not meet our inclusion criteria (see box above).

Now it was time to apply the formula to the data, and to extract the index. Here I was helped by Andreas Junge, a maths whiz who is the CEO of Methodica Ventures when he is not helping his friends with their pet data projects.

My dataset now consists of more than 120 scientists and researchers, but I am adding new ones to the spreadsheet all the time.

My hope is that next summer, I will be able to redo the ranking based on the same methodology. Hopefully with a more comprehensive and accurate dataset.

I am also hoping that scientists and researchers who are interested in this ranking – and this goes both for those who made the top 50, and for those who didn’t make it this time round – will give me feedback on my methodology.

I appreciate any help! Feel free to leave any feedback in the comments below, or write to me at mike@mikeyoungacademy.dk if you know someone who should be on the ranking whom I have missed.

If #twitter followers correlates with #LinkedIn followers as claimed just use twitter. Why? Because sane folk like me gave up LinkedIn when it started spamming me. It provides ZERO benefits to me. I have an H-index of 107 on GS and 86 I think at WoK. How do your top 50 fare?

Thanks for your comment, Douglas. I don’t agree. I think LinkedIn offers benefits, both on its own and in conjunction with Twitter. The search functionality is very powerful and has many research networking applications. And the platform is ideal for forging relationships with people outside your own niche field, and outside academia. As for how my top 50 fares on h-index: I looked into it, but haven’t compared my top 50 systematically with another sample of researchers. I would leave the Kardashian formula for that! Also: the trouble with the h-index for this sample is that it is so biased against the humanities and social sciences. So I deliberately delimited this ranking to social media following. Best, Mike
