How many hours of sleep does the average person require? The American Academy of Sleep Medicine and the Sleep Research Society recently convened an expert panel that reviewed over 5,000 scientific articles and determined that sleeping less than 7 hours a night in adults (ages 18-60) was associated with worse health outcomes, such as increased rates of obesity and diabetes, higher blood pressure, and an increased risk of stroke and heart disease. In addition to raising the risk of illness, inadequate sleep is also linked to impaired general functioning, as evidenced by suppressed immune function, deficits in attention and memory, and a higher rate of errors and accidents. Since at least one third of adults report that they sleep less than 7 hours a day (as assessed by the Centers for Disease Control and Prevention in a survey of 444,306 adults), one can legitimately refer to insufficient sleep as a major public health issue. Even though insufficient sleep and other sleep disorders have reached epidemic-like proportions, affecting hundreds of millions of adults worldwide, they are not diagnosed and treated as rigorously as other medical risk factors and conditions. For example, in most industrialized countries, primary care physicians check blood pressure and cholesterol levels annually but do not routinely monitor the sleep duration and quality of their patients.
One reason for this may be the complexity of assessing sleep. Checking the blood cholesterol level is quite straightforward and provides a reasonably objective value which is either below or above the recommended thresholds. However, when it comes to sleep, matters become more complicated. The above-mentioned expert panel acknowledged that there can be significant differences in the sleep requirements of individuals. Those who suffer from illnesses or have incurred “sleep debt” may require up to 9 hours of sleep, and there are also significant environmental and genetic factors that help determine an individual’s sleep needs. The average healthy person may need at least 7 hours of sleep, but there are probably groups of individuals who can function well with merely 6 hours while others may need 9 hours. Then there is also the issue of sleep quality. Sleeping for seven hours between 10 pm and 5 am provides higher-quality sleep than sleeping between 6 am and 1 pm, because the latter is associated with many more spontaneous awakenings and interruptions as well as less slow-wave sleep (a form of “deep sleep” characterized by classical slow-wave patterns on an EEG recording of the brain during sleep). Unlike the objective cholesterol blood test, a true assessment of sleep would require an extensive questionnaire asking for details about sleep history and perhaps even recording sleep with activity monitors or EEGs.

Another reason why insufficient sleep is not treated like other risk factors such as cholesterol and blood pressure is that there aren’t any easy fixes for poor sleep, and the science of how poor sleep leads to cognitive deficits, diabetes and heart disease is still very much a topic of investigation.

In the case of cholesterol, numerous studies have shown that cholesterol levels can be effectively lowered by taking a daily medication such as a statin and that this intervention clearly lowers the risk of heart attacks and stroke. Furthermore, the science of how cholesterol causes stroke and heart disease has been worked out quite well by identifying the molecular mechanisms of how cholesterol contributes to the build-up of plaque in the arteries which can then lead to heart attacks and stroke. When it comes to sleep, on the other hand, multi-faceted interventions are required to restore healthy sleep levels. Medications to help patients sleep can be used in certain circumstances for a limited time but they are not a long-term solution. Instead, improving sleep requires individualized solutions such as developing a sleep schedule of fixed bed-times, minimizing the use of digital screens in the bedroom, and avoiding caffeine, large meals, nicotine or alcohol just before bedtime. The complexity of assessing and treating insufficient sleep also makes it very difficult to prove the efficacy of interventions. Controlled clinical studies can demonstrate that a cholesterol-lowering medication reduces the risk of heart attacks by treating thousands of patients with the active medication when compared to thousands of patients who receive a placebo, but how do you test the efficacy of individualized sleep interventions in thousands of patients?

Understanding the precise mechanisms by which insufficient sleep impairs our functioning and health has therefore become a major topic of research, with significant advances made in the past decades. Correlative studies which link poor sleep to worse health cannot prove that it is the inadequate sleep that caused the problems, but studies in which human subjects undergo well-defined sleep deprivation for a defined number of hours, coupled with EEGs, brain imaging studies and cognitive assessments, are providing important insights into how poor sleep can affect brain function. The sleep researcher Matthew Walker at the University of California, Berkeley and his colleagues recently reviewed some of the key studies in sleep research and identified some of the major categories of brain function impairment that result from sleep deprivation:

1. Attention:

Several studies of human subjects have consistently shown that sleep deprivation leads to a significant decrease in the ability to pay attention to tasks. Some studies have kept subjects awake for 24 hours at a stretch, whereas other studies merely restricted sleep to a few hours a night and monitored performance. Importantly, one study that restricted sleep to less than 3 hours a night for one week showed that the attentiveness and performance of subjects recovered rapidly once the sleep-deprived subjects were allowed to sleep for 8 hours, but still did not return to the levels of those without sleep deprivation. This means that the after-effects of sleep deprivation can linger for days even after we start sleeping normally again.

2. Memory:

The impairment of working memory (the temporary memory we use to make decisions and complete tasks) is another key feature of sleep deprivation. Brain imaging studies have been able to identify specific abnormalities in certain areas of the brain that are critical for the “working memory” function such as the dorsolateral prefrontal cortex and thus provide somewhat objective measures of cognitive impairment. Interestingly, placing magnetic coils around the head of sleep-deprived subjects to initiate TMS (transcranial magnetic stimulation) has been reported to help restore some of the loss of visual memory, however, Walker and colleagues note that the benefits of TMS in sleep deprivation are not always consistent and reproducible.

3. Responding to negative stimuli:

Sleep deprivation increases responses to negative stimuli such as fear. For example, when subjects who had one night of sleep deprivation were shown images of weapons, snakes or mutilations, their aversion responses were much stronger than those of control subjects. Hyper-responsiveness of the amygdala, the part of the brain which processes emotional reactions, is thought to be one major element in these exaggerated responses of sleep-deprived subjects.

Walker and colleagues note that not all changes seen in the brain imaging studies are necessarily detrimental. In fact, some of these changes may be adaptations that have evolved to help our brains cope with the stress of sleep deprivation. Even though significant progress has been made in sleep deprivation research, major challenges remain: understanding how and why individuals respond differently to sleep deprivation, distinguishing the mechanisms of beneficial adaptations in brain function from detrimental responses, and developing studies of chronic sleep deprivation – sleep loss that occurs over weeks and months and thus mimics real-life sleep deprivation – instead of the short-term acute sleep deprivation protocols currently performed in the laboratory. Hopefully, advances in sleep research will lead to a better understanding of sleep health and ultimately also translate into sleep becoming an integral part of medical exams, in order to address this burgeoning public health problem.

Would you rather receive $100 today or wait for a year and then receive $150? The ability to delay immediate gratification for a potentially greater payout in the future is associated with greater wealth. Several studies have shown that the poor tend to opt for immediate rewards even if they are lower, whereas the wealthy are willing to wait for greater rewards. One obvious reason for this difference is the immediate need for money. If food has to be purchased and electricity or water bills have to be paid, then the instant “reward” is a matter of necessity. Wealthier people can easily delay the reward because their basic needs for food, shelter and clothing are already met.
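The trade-off in that opening question can be framed as a simple present-value calculation: choosing $100 today over $150 in a year implies discounting the future at more than 50% per year. Here is a minimal sketch of that arithmetic (the dollar amounts come from the example above; the function names are mine, and the formula is just standard compound discounting):

```python
def implied_annual_rate(now_amount, future_amount, years=1.0):
    """Annual discount rate at which a decision-maker is indifferent
    between `now_amount` today and `future_amount` after `years`."""
    return (future_amount / now_amount) ** (1.0 / years) - 1.0

def present_value(future_amount, rate, years=1.0):
    """Value today of `future_amount` received after `years`,
    discounted at an annual `rate`."""
    return future_amount / (1.0 + rate) ** years

# The example from the text: $100 today vs. $150 in one year.
breakeven = implied_annual_rate(100, 150)  # 0.5, i.e. 50% per year

# Anyone who discounts the future more steeply than 50% per year
# rationally prefers the $100 today:
print(present_value(150, 0.60))  # less than 100
```

Seen this way, preferring the immediate reward is not irrational per se; it simply reveals a very high personal discount rate, which is exactly what pressing material needs produce.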

Unfortunately, escaping from poverty often requires the ability to delay gratification for a greater payout in the future. Classic examples are the pursuit of higher education and the acquisition of specialized professional skills which can lead to better-paying jobs in the future. Attending vocational school, trade school or college paves the way for higher future wages, but one has to forego income during the educational period and may even incur additional debt by taking out educational loans. Another example of delayed gratification is investing capital – whether by purchasing a farming tool that increases productivity or by investing in the stock market – which in turn can yield a greater payout. However, if the poor are unable to pursue more education or make other investments that will increase their income, they remain stuck in a vicious cycle of increasing poverty.

Understanding the precise reasons why people living in poverty often make decisions that seem short-sighted, such as foregoing more education or taking on high-interest short-term loans, is the first step toward helping them escape poverty. The obvious common-sense fix is to ensure that the basic needs of all citizens – food, shelter, clothing, health and personal safety – are met, so that they no longer have to use all new funds for immediate survival. This is easier in the developed world, but it is not a trivial matter considering that the USA – supposedly the richest country in the world – has an alarmingly high poverty rate. It is estimated that more than 40 million people in the US live in poverty, fearing hunger and eviction from their homes. But just taking care of these basic needs may not be enough to help citizens escape poverty. A recent research study by Jon Jachimowicz at Columbia University and his colleagues investigated “myopic” (short-sighted) decision-making of people with lower income and identified an important new factor: community trust.

The researchers first used an online questionnaire (647 participants) in which participants chose between a smaller payoff in the near future and a larger payoff in the distant future. They also measured community trust by asking participants to agree or disagree with statements such as “There are advantages to living in my neighborhood” or “I would like my child(ren) to be raised in the neighborhood I currently live in”. They found that lower-income participants were more likely to act in a short-sighted manner if they had low levels of trust in their communities. In a second online experiment, the researchers recruited roughly 100 participants from each state in the US and assessed their community trust levels. They then obtained real-world data for each state on payday loans – a sign of very short-sighted financial decision-making, because borrowers take out cash advances at extraordinarily high interest rates that have to be paid back when they receive their paycheck. They found that the average community trust for each state was related to the use of payday loans: in states with high average community trust ratings, people were less likely to take out payday loans, and this trend remained even when the researchers took into account unemployment rates and savings rates for each state.
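Those “extraordinarily high interest rates” become concrete once a typical payday-loan fee is annualized. The figures below are illustrative industry-typical terms (a $15 fee per $100 borrowed for a two-week term), not numbers from the study itself:

```python
def payday_apr(fee, principal, term_days):
    """Annualize a flat payday-loan fee into a simple APR.
    The fee is charged once per loan term, so the per-term rate
    is scaled up by the number of terms in a 365-day year."""
    per_term_rate = fee / principal          # e.g. 15/100 = 15% per term
    return per_term_rate * (365 / term_days)  # terms per year

# Illustrative terms: $15 fee per $100 borrowed, due in 14 days.
apr = payday_apr(fee=15, principal=100, term_days=14)
print(f"{apr:.0%}")  # roughly 391% per year
```

A fee that sounds modest per paycheck thus corresponds to an annualized rate of several hundred percent, which is why payday borrowing serves as a good proxy for short-sighted financial decision-making.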

Even though these findings all pointed to a clear relationship between community trust and sound financial decision-making, the results did not prove that increased community trust is an underlying cause that helps improve the soundness of financial decisions. To test this relationship in a real-world setting, the researchers conducted a study in rural Bangladesh by collaborating with an international development organization based in Bangladesh. The vast majority of participants in this study were poor even by Bangladeshi standards, earning less than $1/day per household member. The researchers adapted the community trust questionnaire and the assessment of financial decision-making for the rural population, with live interviewers asking the questions and filling out the responses for the participants. After assessing community trust and the willingness to delay financial rewards for greater payouts in the future, half of the participants received a two-year intervention designed to increase community trust. This intervention involved volunteers from the community who acted as intermediaries between the local government and the rural population, providing input into local governance and community-level decisions (for example, in the distribution of social benefits and the allocation of funds for development projects).

At the end of the two-year period, participants who had received the community intervention showed significant increases in their community trust levels, and their financial decision-making also improved. They were more likely to forego immediate lower financial rewards in favor of greater future rewards when compared to the villagers who did not receive any special intervention.

By combining correlational data from the United States with an actual real-world intervention to build community trust, the researchers show how important building trust is when we want to help fellow humans escape the “poverty trap”. This is just an initial study, with a limited group of participants and a narrow intervention, and it needs to be replicated in other societies with long-term follow-up to see how persistent the effects are. But the results should make all of us realize that just creating “jobs, jobs, jobs” is not enough. We need to invest in the infrastructure of communities and help citizens realize that they are respected members of society with a voice. Empowering individuals and ensuring their safety, dignity and human rights are necessary steps if we are serious about battling poverty.

What is the externalization society? According to Lessenich, this expression describes how developed countries such as the United States, Japan and Germany transfer or externalize risks and burdens to developing countries in South America, Africa and Asia. The Bento Rodrigues disaster – the 2015 collapse of an iron ore mining dam in Brazil that buried the village of Bento Rodrigues in toxic sludge – is an example of externalized environmental risk. Extracting metals that are predominantly used by technology-hungry consumers in developed countries invariably generates toxic waste which poses a great risk for the indigenous population of many developing countries. The externalized environmental risks are not limited to those associated with mining raw materials. The developed world is also increasingly exporting its trash to the third world.

The US, for example, is the world’s largest exporter of paper trash, exporting scrap paper worth US$ 3.1 billion each year. The US is also the largest producer of electronic waste (E-Waste), estimated at more than 7 million tons per year. Every new smartphone or tablet release generates mountains of E-Waste as consumers discard their older devices. Recycling the older devices sounds like a reasonable solution, but true recycling and re-using of electronic components is quite costly and time-consuming, and it is often not clear which electronic components of devices actually get recycled. To track the fate of discarded electronic devices, Jim Puckett from the Basel Action Network and his colleagues placed GPS trackers in old electronics dropped off at US-based recycling centers. They found that a third of the “recycled” electronics were shipped overseas to countries such as Mexico, Taiwan, China, Pakistan, Thailand and Kenya. Puckett used the GPS signals to identify the sites where the E-Waste ended up and visited one such location in Hong Kong, where he found the “recycled” electronics being dismantled in junkyards by migrant workers from mainland China, who wore no protective clothing to shield them from the hazardous materials released during the extraction of salvageable E-Waste components. There are many regulations that restrict the trading of E-Waste, but the United Nations Environment Programme (UNEP) estimates that up to 90% of the world’s E-Waste is traded or dumped illegally. This means that even though dropping off old devices at a recycling center may alleviate the conscience of consumers, a significant number of these devices will not be re-used but instead shipped off to junkyards in other countries – without appropriate monitoring of how these electronic waste products will affect the local environment and the health of the population.

Exporting environmental risks to developing countries by either outsourcing high-risk extraction of raw materials or simply dumping waste is just one example of externalization according to Lessenich. Externalizing occupational health risks and poverty by severely under-paying workers are other examples. Bangladesh has emerged as one of the world’s largest manufacturers of clothing because of its cheap labor. In 2011, the typical monthly wage of a garment industry worker in Bangladesh was estimated at $91 per month – roughly three dollars per day! In addition to this dismally low pay, garment factory workers in Bangladesh also face terrible occupational risks. The collapse of the Rana Plaza garment factory building in 2013, which killed over 1,100 people and injured more than 2,500, is just one example of the occupational risks faced by these workers.

Lessenich’s concept of the externalization society isn’t just another critique of the global inequality that we so often hear about. The fundamental principle of the externalization society put forth by Lessenich is the interdependence between the “imperial lifestyle” of wealth and comfort in the developed world and the “wretched lifestyle” of poverty and hardship in the developing world. If those of us who live in the developed world want the convenience of upgrading our smartphones every few years or buying cheap cotton t-shirts, then we need those who manufacture these products in the developing world to be paid lousy wages. If those workers were paid humane wages and their employers instituted appropriate occupational safety measures, as well as health and disability insurance plans that are routine in most parts of the developed world, then the cost of the products would be incompatible with our current economy and lifestyle which are fueled by consumerism and the capitalist imperative of incessant growth.

The pillars of the externalization society are indifference and ignorance. We are indifferent because we see the differential in lifestyle as a Selbstverständlichkeit – a German word for obviousness or taken-for-grantedness. They were born in developing countries, so of course they have to struggle – tough luck, they ended up with the wrong lottery tickets. This Selbstverständlichkeit also extends to the limited mobility of people born in the developing world. They lack the birthright of developed world citizens, whose passports allow them either to travel visa-free or to obtain a visa to nearly any country in the world with minimal effort. This veneer of Selbstverständlichkeit is easiest to maintain if “they” and “their” problems remain invisible, allowing us to ignore the interdependence between our good fortune and their misery. We might see images of the toxic flood in Brazil, but few, if any, members of the externalization society will link the mining of cheap iron in Brazil to the utensils they use in their everyday life.

A decade ago, disposable single-use coffee pods such as the Keurig K-cups or Nespresso pods were extremely rare, but by 2014, K-cup manufacturers sold a mind-boggling 9 billion K-cups! A new need for disposable products arose – a need that had previously been met by standard coffee machines – without any consideration of its environmental and global impact. In theory, K-cups are recyclable, but this would require carefully separating the paper, the plastic and the aluminum top. It is not clear how many K-cups are properly recycled, and the E-Waste example shows that even if items are transported to recycling centers, that does not necessarily mean they will be successfully recycled. Prior to the advent of coffee pods, our coffee demands had been easily met without generating additional mountains of disposable plastic and aluminum trash. Out of nowhere, there arose a new need for aluminum, which is extracted from the ore bauxite – another process that generates toxic waste. Instead of feeling a sense of absolution when we drop a disposable item into a recycling bin, we should simply curtail unnecessary consumption of products in disposable containers.

How do we overcome the externalization society? We can make concerted efforts through advocacy, education and regulation: restricting the export of environmental waste, improving health and safety conditions for workers in the developing world, and curbing our consumerist excesses by clarifying the interdependence between the wealth of the externalization society and the poverty of the developing world, as well as the moral imperative to end this inequality and asymmetry. Numerous advocates have attempted this approach over the past decades with limited success. Instead of appealing to the ethics of interdependence, a more effective approach may be to educate each other about the consequences of the interdependence. When millions of refugees show up at the doorstep of the externalization society, “they” are no longer invisible. One can blame wars, religious extremism and political ideologies for the misery of the refugees, but it becomes harder to ignore the extent and central role of the underlying inequality. Creating humane working and living conditions for people in the developing world is perhaps the most effective way to stop the so-called “refugee crisis”.

Global climate change is another threat to the externalization society – a threat of its own making. Transferring carbon footprints and pollution to other countries does not change the fact that the whole planet suffers the consequences of climate change. Political leaders of the externalization society often demand closed borders, walls and expanded armed forces so that they are less likely to have to confront the victims of their externalization, but no army or wall is strong enough to lower rising water levels or stabilize the climate. The externalization society will end not because of a crisis of conscience but because its excesses are undermining its own existence.

The Affordable Care Act, also known as the “Patient Protection and Affordable Care Act”, “Obamacare” or the ACA, is a comprehensive healthcare reform law enacted in March 2010 which profoundly changed healthcare in the United States. The reform allowed millions of previously uninsured Americans to gain health insurance through several new measures: expanding the federal Medicaid health insurance coverage program, introducing the rule that patients with pre-existing illnesses could no longer be rejected or overcharged by health insurance companies, and allowing dependents to remain on their parents’ health insurance plan until the age of 26. The widespread increase in health insurance coverage – especially for vulnerable Americans who were unemployed, underemployed or worked for employers that did not provide health insurance benefits – was accompanied by new regulations targeting the healthcare system itself. Healthcare providers and hospitals received financial incentives to introduce electronic medical records and healthcare quality metrics.

As someone who grew up in Germany, where health insurance coverage is guaranteed for everyone, I assumed that over time the vast majority of Americans would come to appreciate the benefits of universal coverage. One no longer has to fear financial bankruptcy as a consequence of a major illness, and government-backed health insurance also provides peace of mind when changing jobs. Instead of accepting employment primarily because it offers health benefits, one can choose a job based on the nature of the work. But I was surprised to see the profound antipathy towards the new law, especially among Americans who identified as conservatives or Republicans, even when they were potential beneficiaries of the reform. Was the hatred of progressive-liberal views, the Democrats and President Obama, under whom the ACA was passed, so intense among Republicans that they were willing to relinquish the benefits of universal health coverage for the sake of their political ideology? Or were they simply not aware of the actual content of the law and opposed it purely for political reasons?

A recent study published by a team of researchers led by Sarah Gollust at the University of Minnesota may shed some light on this question. Gollust and her colleagues analyzed 1,569 local evening television news stories related to the ACA that aired in the United States during the early months of the health care reform’s rollout (between October 1, 2013, and April 19, 2014). They focused on local television news broadcasts because these continue to constitute the primary source of news for Americans, especially those aged 50 and older. A recent Pew survey showed that 57% of all U.S. adults rely on television for their news, and among this group, local TV news (46%) is a more common source than cable news (31%) or network news (30%).

Gollust and colleagues found that 55% of the news stories either focused on the politics of the ACA such as political disagreements over its implementation (26.5%) or combined information regarding its politics with information on how it would affect healthcare insurance options (28.6%). Only 45% of the news stories focused exclusively on the healthcare insurance options provided by the law. The politics-focused news stories were also more likely to refer to the law as “Obamacare” whereas healthcare insurance focused news segments used the official name “Affordable Care Act” or “ACA”. Surprisingly, the expansion of Medicaid, which was one of the cornerstones of the ACA because it would increase the potential access to health insurance for millions of Americans, was often ignored. Only 7.4% of news stories mentioned Medicaid at all, and only 5% had a Medicaid focus.

What were the sources of information used for the news stories? President Obama was cited in nearly 40% of the stories, whereas other sources included White House staff or other federal executive agencies (28.7%), Republican (22.3%) or Democratic (15.9%) politicians and officials. Researchers, academics or members of think tanks and foundations were cited in only 3.9% of the news stories about the ACA even though they could have provided important scholarly insights about the ACA and its consequences for individual healthcare as well as the healthcare system in general.

The study by Gollust and colleagues has its limitations. It did not analyze TV network news, cable news, or online news outlets, which have significantly gained in importance as news sources during the past decade. The researchers also did not analyze news stories aired after April 2014, which may have better reflected the initial experiences of previously uninsured individuals who signed up for health insurance through the mechanisms provided by the ACA. Despite these limitations, the study suggests that one major reason for the strong opposition among Republicans against the ACA may have been that news coverage often framed the law in a political context and understated the profound effects that the ACA had on access to healthcare and on the reform of the healthcare system itself.

During the 2016 election campaign, many Republican politicians used the idea of “repealing” the ACA to energize their voters, without necessarily clarifying what exactly they wanted to repeal. Should all aspects of the ACA – from the Medicaid expansion to the new healthcare quality metrics in hospitals – be repealed? If voters relied on local television news to learn about the ACA, and if this coverage – as Gollust’s study suggests – framed the ACA predominantly as a political entity, then it is not surprising that voters failed to demand nuanced positions from politicians who vowed to repeal the law. The research also highlights the important role that television reporting plays in framing the debate about healthcare reform. By emphasizing the actual content of the healthcare reform and its medical implications, and by using more scholars instead of politicians as information sources, these media outlets could educate the public about the law.

There are many legitimate debates about the pros and cons of the healthcare reform that are not rooted in politics. For example, electronic medical records allow healthcare providers to easily monitor the results of laboratory tests and avoid wasting patients’ time and money on unnecessary tests that may have already been ordered by another provider. However, physicians who continuously stare at their screens to scroll through test results may not be able to form the interpersonal bond that is critical for the patient-doctor relationship. One could consider modifying the requirements and developing better record-keeping measures to ensure a balance between adequate documentation and sufficient face-to-face doctor-patient time. The ACA’s push to track the quality of healthcare delivery and penalize hospitals or providers who deliver suboptimal care could significantly improve adherence to guidelines based on sound science. On the other hand, one cannot demand robot-like adherence to guidelines, especially when treating severely ill, complex patients who require highly individualized care. These content-driven discussions are more productive than wholesale political endorsements or rejections of the healthcare reform.

Healthcare will always be a political issue, but all of us – engaged citizens, patients, healthcare providers and journalists – need to do our part to ensure that the debate about an issue which directly impacts millions of lives is driven primarily by objective information and not by political ideologies.

Words are routinely abused by those in power to manipulate us, but we should be most vigilant when we encounter a new class of “plastic words”. What are these plastic words? In 1988, the German linguist Uwe Pörksen published his landmark book “Plastikwörter: Die Sprache einer internationalen Diktatur” (literal English translation: “Plastic words: The language of an international dictatorship”), in which he describes the emergence and steady expansion during the latter half of the 20th century of selected words that are incredibly malleable yet empty of actual meaning. Plastic words have surreptitiously seeped into our everyday language and dictate how we think. They have been imported from the languages of science, technology and mathematics, and thus appear to be imbued with their authority. When used in a scientific or technological context, these words have precise and narrow definitions; however, this precision and definability is lost once they become widely used. Pörksen’s use of “plastic” refers to the pliability with which these words can be used and abused, but he also points out their similarity to plastic Lego bricks, which act as modular elements to construct larger composites. The German language makes it very easy to create new composite words by combining two words, but analogous composites can be created in English by stringing together multiple words. This is especially important for one of Pörksen’s key characteristics of plastic words: they have become part of an international vocabulary with cognate words in numerous languages.

Here are some examples of “plastic words” (German originals are listed in parentheses next to the English translations) – see if you recognize them and if you can give a precise definition of what they mean:

exchange (Austausch)

information (Information)

communication (Kommunikation)

process (Prozess)

resource (Ressource)

strategy (Strategie)

structure (Struktur)

relationship (Beziehung)

substance (Substanz)

progress (Fortschritt)

model (Modell)

development (Entwicklung)

value (Wert)

system (System)

function (Funktion)

growth (Wachstum)

supply (Versorgung)

quality (Qualität)

welfare (Wohlfahrt)

planning (Planung)

Even though these words are very difficult to pin down in terms of their actual meaning, they are used with a sense of authority that mandates their acceptance and necessity. They are abstract expressions that imply the need for expertise to understand and implement what they connote. Their implicit authority dissuades us from questioning the appropriateness of their usage and displaces more precise or meaningful synonyms. Their modular, Lego-like nature allows them to be strung together with each other or with additional words to expand their authority; for example, “resource development”, “information society”, “strategic relationship” or “communication process”.

How about the word “love”? Love is also very difficult to define, but when we use it, we are quite aware that it carries many different nuances. We tend to ask questions such as “What kind of love? Erotic, parental, romantic, spiritual? Who is in love, and is it truly love?” On the other hand, when we hear “resource development”, we may just nod our heads in agreement. Of course resources need to be developed!

Pörksen published his book during the pre-internet, Cold War era, and new families of plastic words could perhaps be added to the list in the 21st century. For one, there is the jargon of Silicon Valley used by proponents of internet-centrism. Words such as digital, cyber, internet, online, data or web have entered everyday language, but we rarely think about their actual meaning. The word internet, for example, technically refers to a collection of servers, input devices and screens connected by cables and routers, but it has taken on a much broader cultural and societal significance. An expression such as internet economy should elicit the important question of who is part of the “internet economy” and who is left out. The elderly and the poor have limited access to the internet in many countries of the world, but we may gloss over this fact when we speak of the internet. The words innovation, integration, global and security/safety have also become key plastic words in the 21st century.

How do these plastic words become vehicles for the imposition of rigid views and tyranny? Two recent examples exemplify this danger.

In a speech, the British Prime Minister Theresa May justified Britain’s decision to leave the European Union, reached after a campaign characterized by anti-immigrant prejudice and nationalism, by invoking Britain’s new global role:

“I want us to be a truly Global Britain – the best friend and neighbour to our European partners, but a country that reaches beyond the borders of Europe too. A country that goes out into the world to build relationships with old friends and new allies alike.”

It is difficult to argue with the positive connotation of a Global Britain. Global evokes images of the whole planet Earth, and why shouldn’t Britain forge new relationships with all the people and countries on our planet? However, the nationalist and racist sentiments that prompted the vote to leave the European Union surely did not mean that Britain would welcome people from all over the globe. In fact, the plastic words global and relationships allow the British government to arbitrarily define the precise nature of these relationships, likely focused on maximizing trade and profits for British corporations while ignoring the poorer nations of our globe.

Similarly, an executive order issued by the new American president Donald Trump within a week of his inauguration banned the entry into the USA of all foreigners hailing from a selected list of Muslim-majority countries, citing concerns about the security, safety and welfare of the American people. As with many plastic words, achieving security, safety and welfare sounds like an important and laudable goal, but these words also allow the US government to arbitrarily define what exactly constitutes the security, safety and welfare of the American people. One of the leading enforcement agencies of the totalitarian East German state was the Stasi (Ministerium für Staatssicherheit – Ministry for State Security). It allowed the East German government to arrest and imprison any citizen deemed to threaten the state’s security – as defined by the Stasi.

How do we respond to the expanding use of plastic words? We should be aware of the danger inherent in using these words because they allow people in power – corporations, authorities or government agencies – to define their meanings. When we hear plastic words, we need to ask about the context of how and why they are used, and replace them with more precise synonyms. Resist the tyranny of plastic words by asking critical questions.

Competition for government research grants to fund scientific research remains fierce in the United States. The budget of the National Institutes of Health (NIH), which constitutes the major source of funding for US biological and medical research, has increased only modestly during the past decade and is not even keeping up with inflation. This problem is compounded by the fact that more scientists are applying for grants now than one or two decades ago, forcing the NIH to enforce strict cut-offs and only fund the top 10-20% of all submitted research proposals. Such competition ought to be good for the field because it could theoretically improve the quality of science. Unfortunately, it is nearly impossible to discern differences between excellent research grants. For example, if an institute of the NIH has a cut-off at the 13th percentile, then a grant proposal judged to be in the top 10% would receive funding, but a proposal in the top 15% would end up not being funded. In an era when universities are also scaling back their financial support for research, an unfunded proposal could ultimately lead to the closure of a research laboratory and the dismissal of several members of a research team. Since the prospective assessment of a research proposal’s scientific merits is somewhat subjective, it is quite possible that the budget constraints are creating cemeteries of brilliant ideas and concepts, a world of scientific what-ifs that are forever lost.


How do we scientists deal with these scenarios? Some of us keep soldiering on, writing one grant after the other. Others change and broaden the direction of their research, hoping that research proposals in other areas may be more likely to receive the elusive scores that qualify for funding. Yet another approach is to submit research proposals to philanthropic foundations or non-profit organizations, but most of these organizations tend to focus on research which directly impacts human health. Receiving a foundation grant to study the fundamental mechanisms by which the internal clocks of plants coordinate external timing cues such as sunlight, food and temperature, for example, would be quite challenging. One alternate source of research funding that is now emerging is “scientific crowdfunding”, in which scientists use web platforms to present their proposed research projects to the public and thus attract donations from a large number of supporters. The basic underlying idea is that instead of receiving a $50,000 research grant from one foundation or government agency, researchers may receive smaller donations from 10, 50 or even 100 supporters and thus finance their project.

How can scientists get involved in scientific crowdfunding? Julien Vachelard and colleagues recently published an excellent overview of scientific crowdfunding. They analyzed the projects funded on experiment.com and found that projects which successfully achieved their funding goal tend to have 30-40 backers. The total amount of funds raised for most projects ranged from about $3,000 to $5,000. While these amounts are impressive, they are still far lower than a standard foundation or government agency grant in biomedical research. These smaller amounts could pay for materials to expand ongoing projects, but they are not sufficient to carry out standard biomedical research projects, which must cover the salaries and stipends of the researchers. The annual stipends for postdoctoral research fellows alone run in the $40,000 – $55,000 range.
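A quick back-of-the-envelope calculation using the figures above (the totals and backer counts come from the cited analysis, not from new data) shows what these numbers imply about the typical donation:

```python
# Rough average donation per backer, based on the ranges quoted above:
# successful projects raise about $3,000-$5,000 from roughly 30-40 backers.
total_low, total_high = 3_000, 5_000
backers_low, backers_high = 30, 40

avg_low = total_low / backers_high    # smallest total spread over the most backers
avg_high = total_high / backers_low   # largest total from the fewest backers

print(f"average donation per backer: ${avg_low:.0f} to ${avg_high:.0f}")
# → average donation per backer: $75 to $167
```

In other words, a typical backer contributes on the order of $100, which puts the gap between crowdfunded totals and conventional grant budgets into perspective.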

Vachelard and colleagues also provide great advice for how scientists can increase the likelihood of funding. Attention spans are limited on the internet, so researchers need to convey the key message of their research proposal in a clear, succinct and engaging manner. It is best to use powerful images and videos, set realistic goals (such as $3,000 to $5,000), articulate what the funds will be used for, participate in discussions to answer questions and also update backers with results as they emerge. Presenting research on a crowdfunding platform is an opportunity to educate the public and thus advance science, forcing scientists to develop better communication skills. These collateral benefits to the scientific enterprise extend beyond the actual amount of funding that is solicited.

One of the concerns voiced about scientific crowdfunding is that it may only work for “panda bear science”, i.e. scientific research involving popular themes such as cute and cuddly animals or studying life on other planets. However, a study of what actually gets funded in scientific crowdfunding campaigns revealed that the subject matter was not as important as how well the researchers communicated with their audience. A bigger challenge for the long-term success of scientific crowdfunding may be that the limited amounts raised only cover the cost of small sub-projects and are neither sufficient to embark on exciting new, independent ideas nor to offset salary and personnel costs. Donating $20 or $50 to a project is very different from donating $1,000, because the latter requires not only the necessary financial resources but also represents a major personal investment in the success of the research project. To initiate an exciting new biomedical research project in the $50,000 or $100,000 range, one needs several backers who are willing to donate $1,000 or more.

Perhaps one solution could be to move from a crowdfunding model towards a tribefunding model. Crowds consist of a mass of anonymous people, mostly strangers in a confined space who do not engage each other. Tribes, on the other hand, are characterized by individuals who experience a sense of belonging and fellowship; they share with and take responsibility for each other. The “tribes” in scientific tribefunding would consist of science supporters or enthusiasts who recognize the importance of the scientific work and also actively participate in discussions, not just with the scientists but also with each other. Members of a paleontology tribe could include specialists and non-specialists who are willing to put in the required time to study the scientific background of a proposed paleontology research project, understand how it would advance the field, and see how even negative results (which are quite common in science) could be meaningful.

Tribefunding in higher education and science may sound like a novel concept but certain aspects of tribefunding are already common practice in the United States, albeit under different names. When wealthy alumni establish endowments for student scholarships, fellowship programs or research centers at their alma mater, it is in part because they feel a tribe-like loyalty towards the institutions that laid the cornerstones of their future success. The students and scholars who will benefit from these endowments are members of the same academic institution or tribe. The difference between the currently practiced form of philanthropic funding and the proposed tribefunding model is that tribe identity would not be defined by where one graduated from but instead by scientific interests.

Tribefunding could also impact the review process of scientific proposals. Currently, peer reviewers who assess the quality of scientific proposals for government agencies spend a substantial amount of time assessing the strengths and limitations of each proposal, and then convene either in person or via conference calls to arrive at a consensus regarding the merits of a proposal. Researchers often invest months of effort when they prepare research proposals, which is why peer reviewers take their work very seriously and devote the required time to review each proposal carefully. Although the peer review system for grant proposals is often criticized because reviewers can make errors when they assess the quality of proposals, there are no established alternatives for how to assess research proposals. Most peer reviewers also realize that they are part of a “tribe”, with the common interest of selecting the best science. However, the definition of a “peer” is usually limited to other scientists, most of whom are tenured professors at academic institutions, and does not really solicit input from non-academic science supporters. In a tribefunding model, the definition of a “peer” would be expanded to include professional scientists as well as science supporters for any given area of science. All members of the tribe could participate in the review and selection of the best projects as well as throughout the funding period of the research projects that receive the support.

Merging the grassroots character and public outreach of crowdfunding with the sense of fellowship and active dialogue in a “scientific tribe” could take scientific crowdfunding to the next level. A comment section on a webpage is not sufficient to develop such a “tribe” affiliation but regular face-to-face meetings or conventional telephone/Skype conference calls involving several backers (independent of whether they can donate $50 or $5,000) may be more suitable. Developing a sense of ownership through this kind of communication would mean that every member of the science “tribe” realizes that they are a stakeholder. This sense of project ownership may not only increase donations, but could also create a grassroots synergy between laboratory and tribe, allowing for meaningful education and intellectual exchange.

Less than one fifth of PhD students in the United States will be able to pursue tenure-track academic faculty careers once they graduate from their programs. Reduced federal funding for research and dwindling institutional support for tenure-track faculty are some of the major reasons why there is such an imbalance between the large number of PhD graduates and the limited availability of academic positions. Upon completing their programs, PhD graduates have to consider non-academic job opportunities such as in industry, government agencies and non-profit foundations, but not every doctoral program is equally well-suited to prepare its graduates for such alternate careers. It is therefore essential for prospective students to carefully assess the doctoral program they want to enroll in and the primary mentor they would work with. The best approach is to proactively contact prospective mentors, meet with them and learn about the research opportunities in their groups, but also discuss how completing the doctoral program would prepare them for their future careers.

The vast majority of professors will gladly meet a prospective graduate student and discuss research opportunities as well as long-term career options, especially if the student requesting the meeting clarifies its goal. However, there are cases when students wait in vain for a response. Did the email never reach the professor because it got lost in the internet ether or a spam folder? Was the professor simply too busy to respond? A research study headed by Katherine Milkman from the University of Pennsylvania suggests that the lack of response may in part be influenced by the perceived race or gender of the student.

Milkman and her colleagues conducted a field experiment in which 6,548 professors at leading US academic institutions (covering 89 disciplines) were contacted via email to meet with a prospective graduate student. Here is the text of the email that was sent to each professor.

Subject Line: Prospective Doctoral Student (On Campus Next Monday)

Dear Professor [surname of professor inserted here],

I am writing you because I am a prospective doctoral student with considerable interest in your research. My plan is to apply to doctoral programs this coming Fall, and I am eager to learn as much as I can about research opportunities in the meantime.

I will be on campus next Monday, and although I know it is short notice, I was wondering if you might have 10 minutes when you would be willing to meet with me to briefly talk about your work and any possible opportunities for me to get involved in your research. Any time that would be convenient for you would be fine with me, as meeting with you is my first priority during this campus visit.

Thank you in advance for your consideration.

Sincerely,

[Student’s full name inserted here]

As a professor who frequently receives emails from people who want to work in my laboratory, I feel that the email used in the research study was extremely well-crafted. The student only wants a brief meeting to explore potential opportunities without trying to extract any specific commitment from the professor. The email clearly states the long-term goal – applying to doctoral programs. The tone is very polite, and the student expresses a willingness to adapt to the professor’s schedule. Each email was also personally addressed with the name of the contacted faculty member.

Milkman’s research team then assessed whether the willingness of the professors to respond depended on the gender or ethnicity of the prospective student. Since this was an experiment, the emails and student names were all fictional but the researchers generated names which most readers would clearly associate with a specific gender and ethnicity.

Here is a list of the names they used:

White male names: Brad Anderson, Steven Smith

White female names: Meredith Roberts, Claire Smith

Black male names: Lamar Washington, Terell Jones

Black female names: Keisha Thomas, Latoya Brown

Hispanic male names: Carlos Lopez, Juan Gonzalez

Hispanic female names: Gabriella Rodriguez, Juanita Martinez

Indian male names: Raj Singh, Deepak Patel

Indian female names: Sonali Desai, Indira Shah

Chinese male names: Chang Huang, Dong Lin

Chinese female names: Mei Chen, Ling Wong

The researchers assessed whether the professors responded (either by agreeing to meet or providing a reason for why they could not meet) at all or whether they simply ignored the email and whether the rate of response depended on the ethnicity/gender of the student.

The overall response rate of the professors ranged from about 60% to 80%, depending on the research discipline as well as the perceived ethnicity and gender of the prospective student. When the emails were signed with names suggesting a white male background of the student, professors were far less likely to ignore the email when compared to those signed with female names or names indicating an ethnic minority background. Professors in the business sciences showed the strongest discrimination in their response rates. They ignored only 18% of emails when it appeared that they had been written by a white male and ignored 38% of the emails if they were signed with names indicating a female gender or ethnic minority background. Professors in the education disciplines ignored 21% of emails with white male names versus 35% with female or minority names. The discrimination gaps in the health sciences (33% vs 43%) and life sciences (32% vs 39%) were smaller but still significant, whereas there was no statistical difference in the humanities professor response rates. Doctoral programs in the fine arts were an interesting exception where emails from apparent white male students were more likely to be ignored (26%) than those of female or minority candidates (only 10%).
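The discipline-by-discipline gaps can be tallied with a few lines of arithmetic; the percentages below are the ones quoted from the study above, arranged here purely for illustration:

```python
# Percentage of emails ignored by professors, by discipline, as quoted above:
# (white-male names, female/minority names)
ignore_rates = {
    "business":        (18, 38),
    "education":       (21, 35),
    "health sciences": (33, 43),
    "life sciences":   (32, 39),
    "fine arts":       (26, 10),  # the one field favoring female/minority names
}

# A positive gap means female/minority names were ignored more often.
for field, (white_male, female_minority) in ignore_rates.items():
    gap = female_minority - white_male
    print(f"{field}: {gap:+d} percentage points")
```

Laid out this way, the business disciplines show the widest gap (20 percentage points) and the fine arts a reversed one (minus 16 points), matching the pattern described in the study.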

The discrimination primarily occurred at the initial response stage. When professors did respond, there was no difference in terms of whether they were able to make time for the student. The researchers also noted that responsiveness discrimination in any discipline was not restricted to one gender or ethnicity. In business doctoral programs, for example, professors were most likely to ignore emails with black female names and Indian male names. Significant discrimination against white female names (when compared to white male names) predicted an increase in discrimination against other ethnic minorities. Surprisingly, the researchers found that a higher representation of female and minority faculty at an institution did not necessarily improve the responsiveness towards requests from potential female or minority students.

This carefully designed study with a large sample size of over 6,500 professors reveals the prevalence of bias against women and ethnic minorities at the top US institutions. This bias may be so entrenched and subconscious that it cannot be remedied by simply increasing the percentage of female or ethnic minority professors in academia. Instead, it is important that professors understand that they may harbor these biases even if they are not aware of them. Something as simple as deleting an email from a prospective student because we think we are too busy to respond may be indicative of an insidious gender or racial bias that we need to understand and confront. Increased awareness and introspection, as well as targeted measures by institutions, are the important first steps to ensure that students receive the guidance and mentorship they need, independent of their gender or ethnic background.