Defined narrowly, epistemology is the study of knowledge and justified belief. As the study of knowledge, epistemology is concerned with the following questions: What are the necessary and sufficient conditions of knowledge? What are its sources? What is its structure, and what are its limits? As the study of justified belief, epistemology aims to answer questions such as: How are we to understand the concept of justification? What makes justified beliefs justified? Is justification internal or external to one's own mind? Understood more broadly, epistemology is about issues having to do with the creation and dissemination of knowledge in particular areas of inquiry...
When we conceive of epistemology as including knowledge and justified belief as they are positioned within a particular social and historical context, epistemology becomes social epistemology. How to pursue social epistemology is a matter of controversy. According to some, it is an extension and reorientation of traditional epistemology with the aim of correcting its overly individualistic orientation.

The Earth is far more alive than previously thought, according to “deep life” studies that reveal a rich ecosystem beneath our feet that is almost twice the size of that found in all the world’s oceans.

Despite extreme heat, no light, minuscule nutrition and intense pressure, scientists estimate this subterranean biosphere is teeming with between 15bn and 23bn tonnes of micro-organisms, hundreds of times the combined weight of every human on the planet.

Researchers at the Deep Carbon Observatory say the diversity of underworld species bears comparison to the Amazon or the Galápagos Islands, but unlike those places the environment is still largely pristine because people have yet to probe most of the subsurface.

“If aliens ever visit Earth, what else would we talk about other than physics?” said Schlamminger. “If we want to talk about physics we have to agree on a set of units, but if we say our unit of mass is based on a lump of metal we keep in Paris, we’ll be the laughing stock of the universe.”

If the vote proceeds as expected, Earth will be spared such galactic shame. Since 1983, the metre has been derived from the speed of light in a vacuum. The kilogram makeover will derive mass from the Planck constant, a number deeply rooted in the quantum world. It describes the size of bundles of energy, known as quanta, which pour out of a hot oven, for example.

However, the entire paper is based largely on a false premise: the idea that it is the “introduction of a deluge of new open-access online journals” which creates this reliability problem. This is hardly the case. The difficulty in identifying poor articles is not the deluge of open access journals, nor is it predatory publishing. The growth in the volume of publications is not particularly related to open access, and predatory publishing can be easily identified (with a little bit of common sense and a few pointers). The abstract (and to a lesser extent the talk) also conflates the evaluation of the reliability of a journal (an impossible task if you ask me) and the reliability of an article (an extremely onerous task if you ask me, but more on this later). Do I need to comment on the “rule of thumbs”?

I do teach third year undergraduate students on a similar topic. I ask them this same question: “how can you evaluate the validity of a scientific article?”. I write their answers on the whiteboard; in whatever order, I get: the prestige of the University/Authors/Journal, the impact factor, the quality (?) of the references… I then cross it all out.

Saddle height influences cycling performance and would be expected to influence cyclists physically, perceptually, and emotionally. We investigated how different saddle positions and cadences might affect cyclists’ torque, heart rate, rate of perceived exertion (RPE), and affective responses (Feeling scale). Nine male recreational cyclists underwent cycling sessions on different days under different conditions with a constant load. On Day 1, the saddle was at the reference position (109% of the distance from the pubic symphysis to the ground), and on Days 2 and 3, the saddle was in the “upward position” (reference + 2.5%) and “downward position” (reference − 2.5%) in random order. Each session lasted 30 minutes and was divided into three cadence-varied 10-minute stages without interruption: (a) freely chosen cadence (FCC), (b) FCC − 20%, and (c) FCC + 20%. We assessed all dependent measures at the end of each 10-minute stage. While there was no significant interaction (Saddle × Cadence) for any of the analyzed variables, torque values were higher at lower cadences in all saddle configurations, and the FCC + 20% cadence was associated with faster heart rate, higher RPE, and lower affect compared with FCC and FCC − 20% in all saddle positions. At all cadences, the saddle at “downward position” generated a higher RPE compared with “reference position” and “upward position.” The affective response was lower in the “downward position” compared with the “reference position.” Thus, while cyclists perceived the downward (versus reference) saddle position as greater exercise effort, they also associated it with unpleasant affect.

German automakers had financed the experiment in an attempt to prove that diesel vehicles with the latest technology were cleaner than the smoky models of old. But the American scientists conducting the test were unaware of one critical fact: The Beetle provided by Volkswagen had been rigged to produce pollution levels that were far less harmful in the lab than they were on the road.

The results were being deliberately manipulated.

The Albuquerque monkey research, which has not been previously reported, is a new dimension in a global emissions scandal that has already forced Volkswagen to plead guilty to federal fraud and conspiracy charges in the United States and to pay more than $26 billion in fines.

Random allocation of patients to receive some of the limited supply of streptomycin was an equitable way of distributing the drug. It was also the way to find out more about the magnitude of streptomycin’s beneficial effects in a form of TB from which many people recover spontaneously, and about the drug’s unwanted effects, including the development of drug-resistant forms of TB. The first patients entered the trial in 1947.

Orwell’s hospital was not one of the hospitals in the trial: in fact, no Scottish hospital was included (MRC 1948b). That didn’t make any difference for Orwell – he wouldn’t have been eligible to participate in the study for several reasons, including his age (he was too old).

However, even with narrow entry criteria, the trial did help many people. Instead of languishing for months on a waiting list, being chosen for the trial meant that people were admitted to hospital within a week, even if they weren’t going to end up in the group of patients randomized to receive the drug.

Because of the complexity of the climate system and limitation of computing power, a model cannot possibly calculate all of these processes for every cubic metre of the climate system. Instead, a climate model divides up the Earth into a series of boxes or “grid cells”. A global model can have dozens of layers across the height and depth of the atmosphere and oceans.
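As a rough back-of-envelope sketch of that discretisation (the resolution and layer counts below are hypothetical, not taken from any particular model):

```python
def grid_cell_count(lat_step_deg, lon_step_deg, atmos_layers, ocean_layers):
    """Approximate number of grid cells in a latitude-longitude climate model
    with the given horizontal resolution and vertical layer counts."""
    n_lat = round(180 / lat_step_deg)   # latitude bands, pole to pole
    n_lon = round(360 / lon_step_deg)   # longitude bands around the globe
    return n_lat * n_lon * (atmos_layers + ocean_layers)

# A hypothetical 1° x 1° grid with 30 atmosphere and 40 ocean layers:
print(f"{grid_cell_count(1.0, 1.0, 30, 40):,} cells")  # 4,536,000 cells
```

Halving the horizontal step quadruples the cell count, which is why model resolution is ultimately a computing-power budget.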

The solution to these methodological problems is a randomized controlled trial, but randomization to breastfeeding vs artificial feeding is infeasible and probably unethical. It is, however, both feasible and ethical to randomize the participants to a breastfeeding promotion intervention. One strategy would be to promote breastfeeding initiation, but most women decide whether to breastfeed early in or even before pregnancy and such a strategy is therefore difficult with regard to both timing and logistics. An alternative and more feasible strategy is to promote breastfeeding exclusivity and duration among those mothers who have already decided to initiate breastfeeding, with analysis by intention to treat. This is the strategy we used in the Promotion of Breastfeeding Intervention Trial (PROBIT), a cluster-randomized trial in the Republic of Belarus [12]. In this article, we describe measures of cognitive development among children enrolled in this trial and followed up at age 6.5 years.

Cajal added several levels of preparation and made other refinements as the debate over the true structure of the central nervous system was intensifying. While no one had yet seen an entire nerve cell, or could tell whether it was independent or just part of a larger structure, some scientists already questioned the old “single network” theory. Fridtjof Nansen, better known today for his Arctic explorations, had joined several others in theorizing that nerve cells were independent, basic structures. Still, almost everyone else, including Golgi and Cajal, believed in the network structure.

In 1887, Cajal became chair of Normal and Pathological Histology at the university in Barcelona. His most consuming work, however, was slicing, soaking, staining and affixing to glass slides slivers of the cerebellum of the embryo of a small bird. Then he carefully drew what he saw under the microscope. He became an ardent convert to the independent-cell camp.

Out of NHTSA’s full 2015 dataset, only 448 deaths were linked to mobile phones—that’s just 1.4 percent of all traffic fatalities. By that measure, drunk driving is 23 times more deadly than using a phone while driving, though studies have shown that both activities behind the wheel constitute (on average) a similar level of impairment. NHTSA has yet to fully crunch its 2016 data, but the agency said deaths tied to distraction actually declined last year.

There are many reasons to believe mobile phones are far deadlier than NHTSA spreadsheets suggest. Some of the biggest indicators are within the data itself. In more than half of 2015 fatal crashes, motorists were simply going straight down the road—no crossing traffic, rainstorms, or blowouts. Meanwhile, drivers involved in accidents increasingly mowed down things smaller than a Honda Accord, such as pedestrians or cyclists, many of whom occupy the side of the road or the sidewalk next to it. Fatalities increased inordinately among motorcyclists (up 6.2 percent in 2016) and pedestrians (up 9 percent).

Open Knowledge is developing OpenTrials, an open, online database of information about the world’s clinical research trials. We are funded by The Laura and John Arnold Foundation through the Center for Open Science. The project, which is designed to increase transparency and improve access to research, will be directed by Dr. Ben Goldacre, an internationally known leader on clinical transparency.

OpenTrials is building a collaborative and open linked database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial:

The intention is to create an open, freely re-usable index of all such information, to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive standards around open data in evidence-based medicine.

Within the United States, Davis, California is generally recognized as having the most elaborate system of cycling facilities of any American city. It also has, by far, the highest bicycling modal split share (22%), and a very low fatality and accident rate, among the lowest in California. If Forester were correct that separate facilities are so dangerous, one would certainly expect Davis to be overwhelmed by all the resulting bicycling injuries and deaths. Yet cycling in Davis is extraordinarily safe.

It’s his conviction that we are in the midst of a “catastrophic sleep-loss epidemic”, the consequences of which are far graver than any of us could imagine. This situation, he believes, is only likely to change if government gets involved.

Walker has spent the last four and a half years writing Why We Sleep, a complex but urgent book that examines the effects of this epidemic close up, the idea being that once people know of the powerful links between sleep loss and, among other things, Alzheimer’s disease, cancer, diabetes, obesity and poor mental health, they will try harder to get the recommended eight hours a night (sleep deprivation, amazing as this may sound to Donald Trump types, constitutes anything less than seven hours).

The concept that all riders should be using the same length crank has the strength of being the status quo, but I fail to see any logic whatsoever behind it. Some have suggested that all cranks should be the same length because of something to do with the nature of the sport, the same way that baseball players all use the same length bat and tennis players all use the same length racket. But those are artificial limitations intended to force all players to compete on equal terms, not personal fit issues. Golfers use different length clubs depending on how tall they are, and runners use shoes that fit their feet -- and take whatever length stride suits them, with taller runners generally taking longer strides than shorter runners. Cyclists choose shoes, frames, and clothing that fits their bodies -- why not cranks as well?

If anyone can provide rational arguments that all cyclists should be using the same length cranks, I'd love to hear them.

one could begin this experiment by testing the subject with just three different length cranks. On this and all the following protocols, it would be desirable to do each and every test when the rider is completely rested so that only accurate and repeatable results are generated. An initial protocol would be to test the subject at 85% of his VO2 max with cranks that are 170 mm long and then two totally 'crazy' crank lengths: a test with ultra-short 50 mm cranks, and a test with ultra-long 300 mm cranks. In each test, the ergometer's seat height will be adjusted so that it remains constant relative to the pedal position at bottom dead center (as measured parallel to and along the actual or virtual seat tube). The subject will be allowed to self-select his own pedalling cadence, but during the test protocol, the subject will have a chance to try a wide range of normal cadences. The cadences that result in the highest sustained wattages (at 85% of his VO2 max) will be identified and the test rider will be encouraged to use the higher wattage cadences.
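The seat-height rule in this protocol is simple arithmetic: if the saddle-to-pedal distance at bottom dead center is held fixed, the saddle must come down by exactly as much as the crank lengthens. A minimal sketch, using a hypothetical 900 mm saddle-to-pedal distance:

```python
def saddle_to_bb_mm(saddle_to_pedal_bdc_mm, crank_length_mm):
    """Saddle-to-bottom-bracket distance (along the seat tube) that keeps
    the saddle-to-pedal distance at bottom dead center constant."""
    return saddle_to_pedal_bdc_mm - crank_length_mm

# The three crank lengths from the protocol above:
for crank in (50, 170, 300):
    print(f"{crank} mm crank -> saddle {saddle_to_bb_mm(900, crank)} mm above BB")
```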

I could go on for hours about how no decent crank length test has ever been done by anyone and why it is actually virtually impossible to do a definitive one. Rider adaptation to the crank over time is required for each crank length to ensure optimal performance, but, if adaptation is allowed, it then becomes impossible to ensure that the subjects have exactly the same fitness level for the testing of each crank length. And a proper scientific test is double-blind, so the subject doesn’t know what they are testing. But with crank length, the subjects can feel the difference, and the bike has to be set up differently and the cadences used have to be adjusted for the different crank lengths. So the test riders are tipped off, which can also skew the results and make it not stand up to scientific scrutiny.

So a clearly-explained logical examination of crank length and its relation to rider size may actually be about as good as can be done. There simply will never be a definitive test that will tell riders exactly what crank length will make them fastest. After doing lots of crank-length testing over a few decades, in and out of labs, the best thing I’ve settled on is simply to run numerous full-out climbing tests on the same approximately 30-minute climb, year after year.

The core of Lave and Wenger’s argument is that to understand learning we have to see the learner as a whole person in a web of relationships. They propose that a particularly fruitful lens for analysing these relationships is to see learning as “legitimate peripheral participation” in a “community of practice”. That is, that productive learning occurs when a learner is licensed to be a participant in a community, and through that participation they become more fully engaged in its practices. They use case studies from models of apprenticeship as their material...
In Ravetz’s (1971) Scientific Knowledge and its Social Problems, Ravetz argues that the most reliable scientific facts are ones that have been transferred and abstracted across multiple communities. In particular he discusses the process by which scientific claims that arise in specific communities are transmuted into the ahistorical stories that are used in standardised school curricula. The key point I took away was that teaching was a remarkably productive site to test candidate “facts” for their comprehensibility and salience to new community members. That is, the production of knowledge occurs at the boundaries of communities and is highly productive when it engages legitimate peripheral participants.

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins.

the Telegraph ran the headline “Wind farms blamed for stranding of whales”. “Offshore wind farms are one of the main reasons why whales strand themselves on beaches, according to scientists studying the problem”, it continued. Baroness Warsi even cited it as a fact on BBC Question Time this week, arguing against wind farms.

But anyone who read the open access academic paper in PLoS One, titled “Beaked Whales respond to simulated and actual navy sonar”, would see that the study looked at sonar, and didn’t mention wind farms at all. At our most generous, the Telegraph story was a spectacular and bizarre exaggeration of a brief contextual aside about general levels of manmade sound in the ocean by one author at the end of the press release (titled “Whales scared by sonars”). Now, I have higher expectations of academic institutions than media ones, but this release didn’t mention wind farms, certainly didn’t say they were “one of the main reasons why whales strand themselves on beaches”, and anyone reading the press release could see that the study was about naval sonar.

The Telegraph article was a distortion (now deleted, with a miserly correction), perhaps driven by their odder editorial lines on the environment, but my point is this: if we had a culture of linking to primary sources, if they were a click away, then any sensible journalist would have been too embarrassed to see this article go online. Distortions like this are only possible, or plausible, or worth risking, in an environment where the reader is actively deprived of information.

Publishing, scientific publishing I mean, is simply irrelevant at this point. The strong part of Open Science, the new, original idea it brings forth is validation.

Sci-Hub acted as the great leveler, as concerns scientific publication. No interested reader cares, at this point, if an article is held hostage behind a paywall or if the author of the article paid money for nothing to a Gold OA publisher.

Scientific publishing is finished. You have to be realistic about this thing.

***

Transparency is superior to trust—as long as some relevant person(s) actually exploit(s) the transparency. Look at how long that SSL flaw hung about in Debian, for example: https://pinboard.in/u:juliusbeezer/t:security/t:opensource/
That was all open code, utterly vital to the security of hordes of crucial servers run by the world's top-most geeks, and therefore, every internet user. But the problem sat there for two years, apparently.
That's an extreme example that did get fixed. Transparency is necessary yes, but unless it's actually backed by readers/critics/reviewers/coders/experts actually looking through the windowpane afforded by it, its value is only rhetorical.
It does mean that the guards can guard the guards and we can watch the guards guarding the guards though. Or maybe McGregor–Mayweather.

“Faster riders generate more drag because drag is proportional to the square of velocity,” Swiss Side’s Jean-Paul Ballard told us. “But faster riders are also on the course for less time, and experience a narrower range of yaw angles. Through our simulations, we see that slower riders actually save more absolute time. They’re out on the road for longer and can therefore benefit from the aero gains for longer."

Don’t believe him? Plug your figures into an online power/speed scenario calculator like CyclingPowerLab’s and see how much difference dropping 1kg, say, makes to your speed at a given wattage. As well as altering weight you can play around with the average gradient and wind speed, and also with CdA (drag coefficient multiplied by frontal area). You might well be surprised at just how little you’ll gain in most circumstances by dropping bike weight.
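The arithmetic behind such calculators is straightforward to sketch. The snippet below is a minimal version of the standard road-cycling power model (aerodynamic drag plus rolling resistance plus climbing), with made-up rider numbers; it is not CyclingPowerLab's actual implementation:

```python
G = 9.81      # gravity, m/s^2
RHO = 1.225   # air density at sea level, kg/m^3

def power_required(v, mass_kg, cda, crr=0.005, gradient=0.0):
    """Power (W) needed to hold speed v (m/s): aero + rolling + climbing."""
    aero = 0.5 * RHO * cda * v ** 3
    rolling = crr * mass_kg * G * v
    climbing = mass_kg * G * gradient * v
    return aero + rolling + climbing

def speed_at_power(power_w, mass_kg, cda, **kw):
    """Invert power_required by bisection (power rises monotonically with speed)."""
    lo, hi = 0.1, 30.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_required(mid, mass_kg, cda, **kw) < power_w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# On flat ground at 250 W, dropping 1 kg barely moves the needle:
v_heavy = speed_at_power(250, 85, 0.32)
v_light = speed_at_power(250, 84, 0.32)
print(f"gain: {(v_light - v_heavy) * 3.6:.3f} km/h")
```

On a steep gradient the climbing term dominates and the same kilogram buys noticeably more speed, which is the calculator's whole point.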

Wattbike’s Pedalling Effectiveness Score is inspired by the index of force effectiveness (IFE), which is an existing way of expressing mechanical efficiency during pedalling. IFE compares the gross force – the total force applied to the pedal – and the net force – the component of force that is tangential to the crank. In other words, it gives the proportion of force you put out that actually goes towards creating torque and turning the chainrings.

Pedalling Effectiveness Score is calculated from Wattbike’s 100Hz force data as you ride. After measuring your net force and predicting your gross force, the Pedalling Effectiveness Score function displays a real-time pedal stroke graphic alongside a target score graphic, including a colour-coded breakdown for each leg. This information is intended to provide the basis for adjusting your pedal technique until you’re cycling efficiently.
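A minimal sketch of the IFE calculation, assuming pedal-force samples already resolved into a tangential component (the part that turns the crank) and a radial component (the wasted part); Wattbike's production algorithm is certainly more involved:

```python
import math

def index_of_force_effectiveness(tangential_n, radial_n):
    """IFE: net (torque-producing) force as a share of gross applied force,
    summed over one pedal stroke's samples (forces in newtons)."""
    gross = sum(math.hypot(t, r) for t, r in zip(tangential_n, radial_n))
    net = sum(tangential_n)
    return net / gross

# Hypothetical stroke: a steady 100 N tangential push with 50 N wasted radially.
print(round(index_of_force_effectiveness([100.0] * 8, [50.0] * 8), 3))  # 0.894
```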

We published “The Uninhabitable Earth” on Sunday night, and the response since has been extraordinary — both in volume (it is already the most-read article in New York Magazine’s history) and in kind. Within hours, the article spawned a fleet of commentary across newspapers, magazines, blogs, and Twitter, much of which came from climate scientists and the journalists who cover them.

Some of this conversation has been about the factual basis for various claims that appear in the article. To address those questions, and to give all readers more context for how the article was reported and what further reading is available, we are publishing here a version of the article filled with research annotations. They include quotations from scientists I spoke with throughout the reporting process; citations to scientific papers, articles, and books I drew from; additional research provided by my colleague Julia Mead; and context surrounding some of the more contested claims. Since the article was published, we have made four corrections and adjustments, which are noted in the annotations (as well as at the end of the original version).

"It can be hard for local politicians to understand why a bumpy cycle path isn't liked and doesn't get used," Jiménez told me, "but when we show them a graph of the bumpiness, next to a graph of the smooth road next to it, it's a light-bulb moment for them. It then becomes easier to push for high-quality infrastructure."

Fietsersbond of the Netherlands also has a measuring bike, but the Belgian one is more advanced, can measure a greater variety of parameters, and has been constantly improved since Jiménez joined Fietsersbond in 2014. (He has an academic background in mobility sciences and, along with Bruno Coessens, is part of a two-man "vélo-mesureur" team at the Brussels-based organisation.)

Originally conceived as a tool for the Fietsersbond alone, the meetfiets is now hired in by municipalities – the dense data it can provide is used to improve not just the width of cycleways but also their surfaces.

"Comfort is a more important indicator of cycle-friendliness than many people imagine," said Jiménez.

In this study, to explore the potential of crowdsourced geographic information in active travel and health research, we used Strava Metro data and GIS technologies to assess air pollution exposure in Glasgow, UK. In particular, we incorporated the time of each trip to assess the average inhaled dose of pollutant during a single cycling or pedestrian trip. Empirical results demonstrate that Strava Metro data provide an opportunity to assess average air pollution exposure during active travel. Additionally, to demonstrate the potential of Strava Metro data in policy-making, we explored the spatial association between air pollution concentration and active travel.
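The average-dose calculation the abstract describes reduces to concentration times breathing rate times trip duration. A minimal sketch with hypothetical numbers (the paper's actual ventilation-rate assumptions may differ):

```python
def inhaled_dose_ug(concentration_ug_m3, ventilation_m3_min, duration_min):
    """Inhaled pollutant dose for one trip: ambient concentration
    x minute ventilation x time spent travelling."""
    return concentration_ug_m3 * ventilation_m3_min * duration_min

# Hypothetical 20-minute cycle through 15 ug/m3 PM2.5 at a cycling
# ventilation rate of 0.05 m3/min, vs. a 30-minute walk at 0.025 m3/min:
print(round(inhaled_dose_ug(15, 0.05, 20), 2))   # cycling trip dose, ug
print(round(inhaled_dose_ug(15, 0.025, 30), 2))  # walking trip dose, ug
```

The faster trip inhales more per minute but spends fewer minutes exposed, which is exactly why trip time has to enter the calculation.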

While the political scientist in me as a rule stops listening when I hear someone is an “anarchist”, the use of the word in this case carries far different baggage. That said, here’s the quote from his introduction, page 2:
In cases where the scientists’ work affects the public it even should participate: first, because it is a concerned party (many scientific decisions affect public life); secondly, because such participation is the best scientific education the public can get–a full democratization of science (which includes the protection of minorities such as scientists) is not in conflict with science.

a warning not to confuse political strategy with winning a war. Winning requires true understanding of your opponents, their resources and capabilities, and especially their motives and objectives.

What appears to be a war on science by the current Congress and president is, in fact, no such thing. Fundamentally, it is a war on government. To be more specific, it is a war on a form of government with which science has become deeply aligned and allied over the past century. To the disparate wings of the conservative movement that believe that US strength lies in its economic freedoms, its individual liberties, and its business enterprises, one truth binds them all: the federal government has become far too powerful.

Science is, for today’s conservatives, an instrument of federal power. They attack science’s forms of truth-making, its databases, and its budgets not out of a rejection of either science or truth, but as part of a coherent strategy to weaken the power of the federal agencies that rely on them. Put simply, they war on science to sap the legitimacy of the federal government. Mistaking this for a war on science could lead to bad tactics, bad strategy, and potentially disastrous outcomes for both science and democracy.

In Bollen’s system, scientists no longer have to apply; instead, they all receive an equal share of the funding budget annually—some €30,000 in the Netherlands, and $100,000 in the United States—but they have to donate a fixed percentage to other scientists whose work they respect and find important. “Our system is not based on committees’ judgments, but on the wisdom of the crowd,” Scheffer told the meeting.

Bollen and his colleagues have tested their idea in computer simulations. If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do, they showed in a paper last year—but at much lower overhead costs.
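A toy version of such a simulation (my own sketch, not Bollen's actual model) shows how equal base shares plus fixed-percentage donations redistribute funds toward widely cited colleagues:

```python
def redistribute(base_grant, donation_rate, weights, rounds=100):
    """Iterate the donate-and-receive cycle: everyone starts with an equal
    grant, then each round donates `donation_rate` of their funds to peers
    in proportion to `weights[i][j]` (scientist i's esteem for j)."""
    n = len(weights)
    funds = [float(base_grant)] * n
    for _ in range(rounds):
        donations = [f * donation_rate for f in funds]
        funds = [f - d for f, d in zip(funds, donations)]
        for i, d in enumerate(donations):
            total = sum(weights[i]) or 1
            for j, w in enumerate(weights[i]):
                funds[j] += d * w / total
    return funds

# Three hypothetical scientists; scientist 0 is the one everyone cites:
w = [[0, 1, 1],   # 0 splits donations between 1 and 2
     [1, 0, 0],   # 1 donates everything to 0
     [1, 0, 0]]   # 2 donates everything to 0
print([round(f) for f in redistribute(100_000, 0.5, w)])  # [150000, 75000, 75000]
```

Total funds are conserved; only their distribution shifts, with no grant committees in the loop.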

The other study worth noting found that beans' bad reputation may be undeserved: fewer than half of people noticed an increase in flatulence from eating pinto or baked beans; only 19 percent had more gas with black-eyed peas. Amusingly, people on control diets also reported a 3 percent to 11 percent increase in farting. "People's concerns about excessive flatulence from eating beans may be exaggerated," the authors write helpfully. "It is important to recognize there is individual variation in response to different bean types."

Organic farming can help to both feed the world and preserve wildland. In a study published this year, researchers modeled 500 food production scenarios to see if we can feed an estimated world population of 9.6 billion people in 2050 without expanding the area of farmland we already use. They found that enough food could be produced with lower-yielding organic farming, if people become vegetarians or eat a more plant-based diet with lower meat consumption. The existing farmland can feed that many people if they are all vegan, with a 94% success rate if they are vegetarian, 39% with a completely organic diet, and 15% with the Western-style diet based on meat.

This resulted in an extraordinary response from doctors and scientists, initiated by Bonnie Liebman, director of nutrition at a Washington-based pressure group, the Center for Science in the Public Interest (4). In an email, she claimed that the article was “full of errors” and asked respondents to sign a letter to The BMJ demanding retraction of the Teicholz article.

Search in an environment with an uncertain distribution of resources involves a trade-off between exploitation of past discoveries and further exploration. This extends to information foraging, where a knowledge-seeker shifts between reading in depth and studying new domains. To study this decision-making process, we examine the reading choices made by one of the most celebrated scientists of the modern era: Charles Darwin. From the full-text of books listed in his chronologically-organized reading journals, we generate topic models to quantify his local (text-to-text) and global (text-to-past) reading decisions using Kullback–Leibler Divergence, a cognitively-validated, information-theoretic measure of relative surprise. Rather than a pattern of surprise-minimization, corresponding to a pure exploitation strategy, Darwin’s behavior shifts from early exploitation to later exploration, seeking unusually high levels of cognitive surprise relative to previous eras. These shifts, detected by an unsupervised Bayesian model, correlate with major intellectual epochs of his career as identified both by qualitative scholarship and Darwin’s own self-commentary. Our methods allow us to compare his consumption of texts with their publication order. We find Darwin’s consumption more exploratory than the culture’s production, suggesting that underneath gradual societal changes are the explorations of individual synthesis and discovery. Our quantitative methods advance the study of cognitive search through a framework for testing interactions between individual and collective behavior and between short- and long-term consumption choices. This novel application of topic modeling to characterize individual reading complements widespread studies of collective scientific behavior.
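For readers unfamiliar with the measure, Kullback–Leibler divergence between two topic mixtures is a short computation; the distributions below are hypothetical three-topic examples, not the paper's actual topic models:

```python
import math

def kl_divergence_bits(p, q):
    """D(p || q): how surprising distribution p is when q was expected.
    Assumes q[i] > 0 wherever p[i] > 0; zero only when p == q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

new_text = [0.7, 0.2, 0.1]       # topic mixture of a newly read book
past_reading = [0.4, 0.4, 0.2]   # topic mixture of everything read before
print(round(kl_divergence_bits(new_text, past_reading), 3))  # 0.265
```

In the paper's terms, a high value marks an exploratory reading choice and a low value an exploitative one.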

The secret of Sky's success lies in the phenomenally effective approach of team manager Sir David Brailsford, which has become known as "marginal gains". The principle is to break down every aspect of an activity and try to do it 1 per cent better, with a significant increase in performance when you put all these improvements together.
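The arithmetic of marginal gains compounds multiplicatively, which is why many tiny improvements amount to more than their simple sum. A minimal sketch, assuming the improvements are independent:

```python
def aggregate_gain(per_area_gain, n_areas):
    """Overall performance multiplier from improving n independent
    areas by the same small fraction each."""
    return (1 + per_area_gain) ** n_areas

# Ten areas each improved by 1%:
total = (aggregate_gain(0.01, 10) - 1) * 100
print(f"{total:.1f}% overall")  # 10.5% overall, not 10%
```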

In cycling, this ranges across everything from how athletes wash their hands, to the perfect positioning of their head to reduce wind resistance and installing tyres perfectly straight to the wheel rim. It also extends to areas previously not associated with cycling performance, as Brailsford explained in a recent interview with Freakonomics Radio, when he discussed the daily routine for Team Sky while competing:

"The hotel is given to you by the organisation, you can't change it, you don't know what the mattress is going to be like, you don't know what the room is going to be like. So we have a forward team that go into the hotels and they have a room protocol. Basically, they lift the bed up, they Hoover under the bed...”

And even truth-seeking journalists could easily be pressured into inadvertently or even intentionally covering stories in order to satisfy a false or imaginary sense of balance. You can’t blame them. The concept of “balance” – or, as its critics call it, “false equivalence” – has long been a key precept of journalism. It epitomises the idealistic notion that journalists ought to be fair to all, so that, whenever they write a story, they give equal weight to both sides of the argument.

But, especially in our new “post-truth” era, this doesn’t always work to the benefit of the public good. Here are some examples of where balance doesn’t necessarily work...

But you can’t help but have some sympathy for Jacob Weisberg of Slate magazine, quoted in Spayd’s article, who said that journalists used to covering candidates who were like “apples and oranges” were presented with a candidate, Trump, who was like “rancid meat”.

Nowhere in the memorial issue of the journal, however, is there any discussion of Bell’s exemplary reports of randomized trials of pertussis (whooping cough) vaccines in the 1940s, the decade during which randomized trials can be said to have been born. This is particularly surprising given that Bell’s reports of his randomized trials were not published in obscure places, but in mainstream journals. Yet, as far as I am aware, none of the many people who have written about the history of randomized trials have referred to the remarkable report that Bell published in 1941, seven years earlier than the now iconic report of the randomized trial of streptomycin in pulmonary tuberculosis conducted under the aegis of the Medical Research Council (MRC 1948).

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

I blame publishers. In the good old days of print journals, each edition only held a finite amount of information, so paper lengths were limited. Although you may have needed 30 pages of close-spaced text to describe how you accomplished some arcane scientific feat, some journals only gave you half a column. Any scientific results that could not be communicated properly in a short format ended up in another journal that could accommodate them.

This sometimes led to double publications: a short description of your results was published in one journal, while the extensive explanation of what you did appeared in a more technical publication.

Over time, short, direct articles have become more prestigious. Since university administrators are all about prestige, scientists now face increasing pressure to publish shortened forms of their research. The publishing houses, many of whom benefit from this pressure, are happy to accommodate.

To keep papers short, many journals emphasize results and conclusions at the expense of methods, often by moving them to the end and printing them in a font that requires a microscope. When I tried to report on a paper about adiabatic quantum computing recently published in Nature, I was dismayed to discover that all the useful information on methods wasn't in Nature at all, but in a separate document called supplementary information.

Given the limits in estimating the resisting forces accurately, you might think that it's pointless to try to calculate the relationship between power and speed. Take heart! There is a wonderful adage that applies: "All engineering models are wrong, but some are useful." Check out the Examples page.

Back to the details: Once you know the three forces you need to resist, you can easily compute the power required to maintain a certain speed. Power is force multiplied by speed - pretty simple.

If you want to calculate speed from power, however, you run into the general difficulty of solving a non-linear equation. As I noted above, air resistance depends on the square of speed. Power is this resistance times speed, so overall, the power required to overcome air resistance is related to the third power of speed - a cubic relationship. The full expression involves a squared term (quadratic) as well, due to the head wind component. Solving a cubic equation "backwards" as you must do to figure speed from power is not straightforward. Bike Calculator uses Newton's Method - a fancy trial-and-error method that does the easier "frontwards" calculation typically about eight times before reaching the displayed precision.
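The "backwards" solve described above can be sketched with Newton's method. This is an illustrative model, not Bike Calculator's actual code: it lumps rolling resistance and gravity into one constant force, and all parameter values (drag area, air density, initial guess) are assumptions for demonstration.

```python
def speed_from_power(target_watts, other_forces=3.0, cda=0.35,
                     rho=1.2, headwind=0.0, tol=1e-6):
    """Solve P(v) = target_watts for speed v (m/s) with Newton's method.

    Model: P(v) = (F_other + 0.5 * rho * CdA * (v + headwind)^2) * v
    where F_other lumps rolling resistance and gravity (newtons).
    All default values are illustrative, not calibrated."""
    def power(v):
        return (other_forces + 0.5 * rho * cda * (v + headwind) ** 2) * v

    def dpower(v):
        # Analytic derivative dP/dv (product rule on the drag term).
        return other_forces + 0.5 * rho * cda * (
            (v + headwind) ** 2 + 2 * v * (v + headwind))

    v = 10.0  # initial guess in m/s
    for _ in range(50):
        step = (power(v) - target_watts) / dpower(v)
        v -= step
        if abs(step) < tol:
            break
    return v

print(f"{speed_from_power(200):.2f} m/s at 200 W")
```

Because the power curve is smooth and monotonically increasing, Newton's method converges in a handful of "frontwards" evaluations, consistent with the roughly eight iterations mentioned above.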

Monsanto argued vigorously that unlike with other herbicides, resistance to glyphosate was unlikely, and—contrary to good practice—encouraged farmers to spray it to their hearts’ content. The company even placed advertisements in the farm press to this effect. All this was severely criticized and opposed by academic weed scientists, who, needless to say, were spot on in their criticism.

As a result, we are now facing an epidemic of glyphosate resistant weeds that have led to increased herbicide use and a new generation of crops engineered to be immune to older herbicides like dicamba. Use of dicamba on these new GE crops is extensively damaging nearby crops that are not engineered to be resistant to it. It will also lead to increased industry sales of seed, drastically more herbicide use, and will foster more resistant weeds.

I put up a note on my blog offering physics consultation, including help with theory development: ‘Talk to a physicist. Call me on Skype. $50 per 20 minutes.’

A week passed with nothing but jokes from colleagues, most of whom thought my post was a satire. No, no, I assured them, I’m totally serious; send me your crackpots, they’re welcome. In the second week I got two enquiries and, a little nervous, I took on my first customer. Then came a second. A third. And they kept coming.

My callers fall into two very different categories. Some of them cherish the opportunity to talk to a physicist because one-to-one conversation is simply more efficient than Google. They can shoot up to 20 questions a minute, everything from: ‘How do we know quarks exist?’ to ‘Can atoms contain tiny universes?’ They’re normally young or middle-aged men who want to understand all the nerdy stuff but have no time to lose. That’s the minority.

The majority of my callers are the ones who seek advice for an idea they’ve tried to formalise, unsuccessfully, often for a long time. Many of them are retired or near retirement, typically with a background in engineering or a related industry. All of them are men.

In a much-discussed article at Slate, social psychologist Michael Inzlicht told a reporter, “Meta-analyses are fucked” (Engber, 2016). What does it mean, in science, for something to be fucked? Fucked needs to mean more than that something is complicated or must be undertaken with thought and care, as that would be trivially true of everything in science. In this class we will go a step further and say that something is fucked if it presents hard conceptual challenges to which implementable, real-world solutions for working scientists are either not available or routinely ignored in practice.

The format of this seminar is as follows: Each week we will read and discuss 1-2 papers that raise the question of whether something is fucked.

Rolling resistance: Friction between your tires and the road surface slows you down. The bumpier the road, the more friction you'll experience; the higher quality your tires and tube, the less friction you'll experience. As well, the heavier you and your bike are, the more friction you'll experience. There is a dimensionless parameter, called the coefficient of rolling resistance, or Crr, that captures the bumpiness of the road and the quality of your tires.
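The Crr relationship above reduces to a one-line formula: the resisting force is Crr times the total weight, and the power lost is that force times speed. A minimal sketch, with illustrative Crr values (not measurements):

```python
G = 9.81  # gravitational acceleration, m/s^2

def rolling_resistance_watts(total_mass_kg, crr, speed_ms):
    """Power (watts) lost to rolling resistance on level ground.
    F = Crr * m * g, P = F * v.
    Illustrative Crr values: ~0.004 for good road tires on smooth
    pavement, ~0.008 or more for rough surfaces."""
    return crr * total_mass_kg * G * speed_ms

# e.g. 85 kg of rider + bike, Crr = 0.005, at 10 m/s (36 km/h):
print(round(rolling_resistance_watts(85, 0.005, 10), 1))  # ≈ 41.7 W
```

Note that mass enters linearly here, which is why heavier riders feel more rolling friction even on a perfectly smooth road.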

On June 4, the satirical news site the Science Post published a block of "lorem ipsum" text under a frightening headline: "Study: 70% of Facebook users only read the headline of science stories before commenting."

Nearly 46,000 people shared the post, some of them quite earnestly — an inadvertent example, perhaps, of life imitating comedy.

Now, as if it needed further proof, the satirical headline's been validated once again: According to a new study by computer scientists at Columbia University and the French National Institute, 59 percent of links shared on social media have never actually been clicked. In other words, most people appear to retweet news without ever reading it.

The CDC conducted gun violence research in the 1980s and 1990s, but it abruptly ended in 1996 when the National Rifle Association lobbied Congress to cut the CDC's budget by the exact amount it had allocated to gun violence research.

"It's worth pointing out that the language never specifically forbade the CDC from conducting the research," Wintemute said.

The 1997 appropriations bill stated, "None of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control." Congress also threatened more funding cuts if the gun research continued.

Rebutting bad science may not be effective, but asserting the true facts of good science is. And including the narrative that explains them is even better. You don’t focus on what’s wrong with the vaccine myths, for instance. Instead, you point out: giving children vaccines has proved far safer than not. How do we know? ...

The other important thing is to expose the bad science tactics that are being used to mislead people. Bad science has a pattern, and helping people recognize the pattern arms them to come to more scientific beliefs themselves. Having a scientific understanding of the world is fundamentally about how you judge which information to trust. It doesn’t mean poring through the evidence on every question yourself. You can’t. Knowledge has become too vast and complex for any one person, scientist or otherwise, to convincingly master more than corners of it...

Few working scientists can give a ground-up explanation of the phenomenon they study; they rely on information and techniques borrowed from other scientists. Knowledge and the virtues of the scientific orientation live far more in the community than the individual. When we talk of a “scientific community,” we are pointing to something critical: that advanced science is a social enterprise, characterized by an intricate division of cognitive labor...

The mistake is to believe that educational credentials... give you any special authority on truth. What you have gained is far more important: an understanding of what real truth-seeking looks like

Wilson was specifically asked what contribution the new laboratory would make to national defense. He replied in words that should be etched on the foundation stone of every center of basic research. The research, he said, had no direct bearing on national defense. Instead,

"It has only to do with the respect with which we regard one another, the dignity of men, our love of culture. It has to do with: Are we good painters, good sculptors, great poets? I mean all the things we really venerate in our country and are patriotic about. It has nothing to do directly with defending our country except to make it worth defending".

Turned on to this guy by some random Facebook meme (which was nice enough, but the reference to the title whence it came seems to have been wrong. And now I can't find *that* again. Bloody Facebook. Text as .gif is so stupid-making. Ho hum!)

I am no longer employed by a university, but while I left academia, I certainly did not leave science! I am still very interested in the pursuit of knowledge. The way I initially planned things was to swap my weekend and weekday pursuits: in Australia I had already taught scuba on the weekends, something I really enjoyed. My plan was to make this hobby my main source of income, and then do science – real science, not university administration & grant writing – on my evenings and days off. I was going to be an independent scientist (“gentleman scientist” in the words of my friend John J. – who immediately after saying that felt the need to qualify that he didn’t really think I was much of a gentleman)...
Very often, in order to participate in the more formal exchanges of scientific ideas, you need an affiliation. That is, some institute, university or museum where your academic home is. It would come across as odd to just write your home address in the author affiliation line of a scientific paper. I have joined the Neurolinx Institute in La Jolla, CA, founded by my mate Jay Coggan, as a home for independent scientists. I think institutions like Neurolinx will have an important role to play in a future with more “gentleman scientists”.

Last week I discussed how drugs get their International Nonproprietary Names (INNs). The World Health Organization’s expert panel that assigns INNs has nine principles to guide its decisions, two primary and seven secondary. Here they are in abbreviated form:

1. The names should be distinctive in sound and spelling. They should not be inconveniently long and should not be liable to confusion with names in common use.

Over thirty years ago, Alex P. Schmid, former Officer-in-Charge of the UN’s Terrorism Prevention Branch, and Albert Jongman of Leiden University’s PIOOM Foundation (Interdisciplinary Research Programme on Root Causes of Human Rights Violations) reviewed over 6,000 academic studies of terrorism published between 1968 and 1988. Shockingly, as they explained in their seminal book Political Terrorism, they found that “perhaps as much as 80 percent of the literature is not research-based in any rigorous sense.”

Of course, that’s a very polite, typically academic way of putting it.

When I say bullshit, I mean arguments, data, publications, or even the official policies of scientific organizations that give every impression of being perfectly reasonable — of being well-supported by the highest quality of evidence, and so forth — but which don’t hold up when you scrutinize the details. Bullshit has the veneer of truth-like plausibility. It looks good. It sounds right. But when you get right down to it, it stinks.

There are many ways to produce scientific bullshit. One way is to assert that something has been “proven,” “shown,” or “found” and then cite, in support of this assertion, a study that has actually been heavily critiqued (fairly and in good faith, let us say, although that is not always the case, as we soon shall see) without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.

Another way is to refer to evidence as being of “high quality” simply because it comes from an in-principle relatively strong study design, like a randomized control trial, without checking the specific materials that were used in the study to confirm that they were fit for purpose...
As the programmer Alberto Brandolini is reputed to have said: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.” This is the unbearable asymmetry of bullshit I mentioned in my title, and it poses a serious problem for research integrity. Developing a strategy for overcoming it, I suggest, should be a top priority for publication ethics.

Much like the trade and traits of bubbles in financial markets, similar bubbles appear on the science market. When economic bubbles burst, the drop in prices causes the crash of unsustainable investments leading to an investor confidence crisis possibly followed by a financial panic. But when bubbles appear in science, truth and reliability are the first victims. This paper explores how fashions in research funding and research management may turn science into something like a bubble economy.

The pope’s contribution to the climate debate builds on the words of his predecessors—in the first few pages he quotes from John XXIII, Paul VI, John Paul II, and Benedict XVI—but clearly for those prelates ecological questions were secondary...

It is, therefore, remarkable to actually read the whole document and realize that it is far more important even than that. In fact, it is entirely different from what the media reports might lead one to believe. Instead of a narrow and focused contribution to the climate debate, it turns out to be nothing less than a sweeping, radical, and highly persuasive critique of how we inhabit this planet—an ecological critique, yes, but also a moral, social, economic, and spiritual commentary. In scope and tone it reminded me instantly of E.F. Schumacher’s Small Is Beautiful (1973), and of the essays of the great American writer Wendell Berry. As with those writers, it’s no use trying to categorize the text as liberal or conservative; there’s some of each, but it goes far deeper than our political labels allow. It’s both caustic and tender, and it should unsettle every nonpoor reader who opens its pages.

The ecological problems we face are not, in their origin, technological, says Francis. Instead, “a certain way of understanding human life and activity has gone awry, to the serious detriment of the world around us.” He is no Luddite (“who can deny the beauty of an aircraft or a skyscraper?”) but he insists that we have succumbed to a “technocratic paradigm,” which leads us to believe that “every increase in power means ‘an increase of “progress” itself’…as if reality, goodness and truth automatically flow from technological and economic power as such.”

As Aneta Pavlenko wrote in an earlier post, the early findings "captured our hearts and minds" and were a change from concerns about the disadvantages of bilingualism found in the literature in the first half of the last century (see here). But she asked whether the pendulum had swung too far in favor of bilinguals and she reported on a heated debate that had started on this issue. Basically, many research teams, working with both children and adults, could not replicate the effect and doubted its veracity.

In the middle of last year, researchers Kenneth Paap, Hunter Johnson and Oliver Sawi published a very critical review paper of the field for the prestigious brain sciences journal, Cortex. In it, they question the very existence of the bilingual advantage and summarize their findings in the following way: "It is likely that bilingual advantages in EF (executive functions) do not exist. If they do exist they are restricted to specific aspects of bilingual experience that enhance only specific components of EF. Such constraints, if they exist, have yet to be determined."

Instead of simply publishing the paper, and letting it have the life of an ordinary article, the editors of Cortex asked 21 research teams in the area to write comments on it in a "Bilingualism forum". The short texts which have just appeared make for interesting reading and show how complex the debate really is.

“What is a theory?” Gross began by noticing that philosophy and physics have, ahem, “grown apart” over the years — citing the now classic quote by Richard Feynman about philosophy, birds, and ornithology. Gross himself said, however, that he envies the pioneers of quantum mechanics and relativity, who were well versed in philosophy, and he still thinks there is much the two fields can say to each other...
For Gross, experiments are “usually evidently real,” while theory must await experimental confirmation — which is why Nobel prizes for theory are given much later than those for experimental results. Another difference: experiments are expensive, theory is cheap… The scientific method is “undeniably” based on the thesis that the final authority as to scientific truth is observation and experiment.

Gross proposed to distinguish among frameworks, theories, and models. Classical mechanics, quantum mechanics and string “theory” are not theories, but rather frameworks. Theories are something like Newton’s or Einstein’s theory of gravity, or the unfortunately named Standard “Model.” Theories can be tested, frameworks not so much. Models include the BCS model of superconductivity, or BSM (Beyond Standard Model) models.

In our letter (November 2015), we urged the Society’s boards and senior committees to respond to the very serious problems of replicating psychological research that were revealed by the meagre 36 per cent success rate of the Reproducibility Project’s report of 100 attempted replications. In reply, Professor Andy Tolmie commented that ‘low n research may be a more endemic part of the problem than any deliberate attempts at massaging data’. However, low ns were not the problem for the Reproducibility Project because a priori power analyses for the replications indicated that a 92 per cent replication rate was predicted based on the originally reported effect sizes.

The Project’s report (Open Science Collaboration, 2015) noted that the best predictor of replication success was the effect size observed in the replication, which is independent of sample size. Sadly, the average effect size for the replications was less than half of that for the original studies. The report described the original studies as having ‘upwardly biased effect sizes’. It seems likely that the psychology literature reflects questionable research practices that can inflate effect sizes, such as: p-hacking, unreported removal of troublesome data, and capitalising on chance through selective publishing after adjusting a paradigm to produce significant results or reporting a ‘successful’ dependent variable but not those showing smaller effects.
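The arithmetic behind "upwardly biased effect sizes" can be illustrated with a back-of-the-envelope power calculation. This is a hedged sketch using a normal approximation to a two-sample test, with made-up numbers (d = 0.6 reported, d = 0.3 true, n = 64 per group), not figures from the Reproducibility Project itself:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample test under a
    normal approximation: noncentrality = d * sqrt(n/2)."""
    delta = d * math.sqrt(n_per_group / 2)
    return 1 - norm_cdf(z_crit - delta)

n = 64  # hypothetical replication sample size per group
print(round(power_two_sample(0.6, n), 2))  # ~0.92 if the reported d = 0.6 were real
print(round(power_two_sample(0.3, n), 2))  # ~0.40 if the true d is only 0.3
```

The point: a replication powered at over 90 per cent against an inflated effect size can still have well under 50 per cent power against the true effect, so widespread replication "failure" is exactly what inflated original effects predict.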

The médialab at Sciences Po has started work to extract and classify the texts of past negotiations leading up to the Paris talks. This has culminated in the release of the climate negotiations browser ahead of the talks. From this we can start to put the current talks into context – including looking at the rise and fall of different topics (such as climate financing and assessing vulnerability) and the respective positions and priorities of different countries in the discussions.

Pre-registering a study consists of leaving a written record of how it will be conducted and analyzed. Very few researchers currently pre-register their studies. Maybe it’s because pre-registering is annoying. Maybe it’s because researchers don’t want to tie their own hands. Or maybe it’s because researchers see no benefit to pre-registering. This post addresses these three possible causes. First, we introduce AsPredicted.org, a new website that makes pre-registration as simple as possible. We then show that pre-registrations don’t actually tie researchers’ hands, they tie reviewers’ hands, providing selfish benefits to authors who pre-register.

Pope Francis’s encyclical rather accurately depicts the current reality of climate change. While it does contain a few minor scientific inaccuracies, and could be interpreted as understating the degree of certainty scientists have in understanding climate change impacts, the encyclical fairly represents the present concerns raised by the scientific community.

What difference would one hundred grams (from an 8kg bike to an 8.1kg bike, with a 75kg rider) do to the ride time?

Well, it would increase it.

By three seconds.

Adding weight to a rider going that fast, over that terrain, makes precious little difference, really. 100g is 1.25% of the wheel weight; even at four times that, the penalty is just 17 seconds.

Changing that 1.25% weight penalty to an aero penalty - upping the overall drag of the bike by the same percentage – gives a 22-second penalty, and quadrupling the drag penalty pretty much does the same to the time lost: 87 seconds at 5%.

Now these still aren't big numbers: just under a minute and a half in four hours of riding. But the difference is certainly significant: aero gains are worth six times what weight gains are, and a fair conclusion from Swiss Side's stats would be that on rolling terrain it's worth going heavier and more aero.
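The weight-versus-aero comparison above can be reproduced in spirit with a toy steady-state model. This is a sketch under stated assumptions (flat road, no wind, illustrative Crr, CdA, and rider mass), not Swiss Side's methodology; on flat ground it will understate the weight penalty relative to rolling terrain, but the aero-dominance conclusion survives.

```python
def ride_time(distance_m, watts, mass_kg, cda, crr=0.005, rho=1.2, g=9.81):
    """Seconds to cover distance_m at the steady speed v satisfying
    (crr * m * g + 0.5 * rho * CdA * v^2) * v = watts.
    Solved by bisection; flat road, no wind, illustrative parameters."""
    def power(v):
        return (crr * mass_kg * g + 0.5 * rho * cda * v ** 2) * v
    lo, hi = 0.1, 30.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if power(mid) < watts else (lo, mid)
    return distance_m / ((lo + hi) / 2)

base     = ride_time(40_000, 200, 83, 0.32)
heavier  = ride_time(40_000, 200, 83 * 1.0125, 0.32)   # +1.25% mass
draggier = ride_time(40_000, 200, 83, 0.32 * 1.0125)   # +1.25% drag area
# The drag penalty comes out several times the mass penalty.
print(round(heavier - base, 1), round(draggier - base, 1))
```

Even in this crude model, the same percentage change costs far more when applied to drag than to mass, because drag power grows with the cube of speed while rolling resistance grows only linearly.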

Survation was approached by the Sun because the paper’s regular pollsters, YouGov, “didn’t want to do the poll”. YouGov said it did not want to carry out the study because it could not be confident that it could accurately represent the British Muslim population within the timeframe and budget set by the paper.

A spokesperson said: “To survey Britain’s Muslim population, particularly at a time of such heightened sensitivities, requires the kind of time, care, and therefore cost, that is beyond a newspaper’s budget.”

Other pollsters told the Guardian that it could require tens of thousands of phone calls at a cost of tens of thousands of pounds to generate a statistically representative sample of the 2.7 million Muslims who live in the UK.

It cannot be determined how representative the Survation sample is because of a lack of various socioeconomic and demographic details.

It was bollocks. There was indeed a poll carried out on behalf of the Sun by Survation, but it did not reveal any particular level of sympathy for “jihadis”, which the paper wants us to believe means ISIS, or whatever they’re calling themselves this week. Fortunately, the poll detail has been made available to view (HERE).

Humans can’t digest soluble fiber, so we enlist microbes to dismantle it for us, sopping up their metabolites. The Burkina Faso microbiota produced about twice as much of these fermentation by-products, called short-chain fatty acids, as the Florentine. That gave a strong indication that fiber, the raw material solely fermented by microbes, was somehow boosting microbial diversity in the Africans.
Indeed, when Sonnenburg fed mice plenty of fiber, microbes that specialized in breaking it down bloomed, and the ecosystem became more diverse overall. When he fed mice a fiber-poor, sugary, Western-like diet, diversity plummeted. (Fiber-starved mice were also meaner and more difficult to handle.) But the losses weren’t permanent. Even after weeks on this junk food-like diet, an animal’s microbial diversity would mostly recover if it began consuming fiber again.

The surprising finding was that as the grain size got smaller, the hydrates first got stronger, able to tolerate both compressive and tensile stress--but only until they reached a certain grain size. If the researchers conducted simulations on grain sizes smaller than those identified as the turning point, the hydrate actually got weaker.

The maximum capacity of the hydrates appears when the grain size is around 15 to 20 nm. This resembles the behaviour of polycrystalline metals, such as copper. However, this is the first time that researchers have seen this type of behaviour in methane hydrates as a material. The grain size-dependent strength and maximum capacity that the researchers found can be used in predicting and preventing the failure of hydrates in the future.

Instability can be triggered

This unexpected rapid weakening of the crystal structure as the grain size gets smaller has important implications for any work in areas where hydrates are found.

The researchers reported that the dissociation of methane hydrates can be triggered by the ground deformation caused by "earthquakes, storms, sea-level fluctuations or man-made disturbances (including well drilling and gas production from hydrate reservoirs)."

I immediately asked my supervisor where I’d gone wrong. Experiment conducted carefully? Tick. No major flaws? Tick. Filled a gap in the specialist literature? Tick. Surely it should be published even if the results were a bit dull? His answer taught me a lesson that is (sadly) important for all life scientists. “You have to build a narrative out of your results”, he said. “You’ve got to give them a story”. It was a bombshell. “But the results are the results!” I shouted over my coffee. “Shouldn’t we just let the data tell their own story?” A patient smile. “That’s just not how science works, Chris.”
He was right, of course, but perhaps it’s the way science should work.

None of us in the reproducibility community would dispute that the overselling of results in service of high-profile publications is problematic, and I doubt that Chambers really believes that our papers should just be data dumps presented without context or explanation. But by likening the creation of a compelling narrative about one's results to "selling cheap cars", this piece goes too far. Great science is not just about generating reproducible results and "letting the data tell their own story"; it should also give us deeper insights into how the world works, and those insights are fundamentally built around and expressed through narratives, because humans are story-telling animals.

In my work I distinguish between three styles of speech: the Greenhouse, the Garden, and the Jungle. The Greenhouse is the domain of the citation form, where each word is presented in isolation, with all its features perfectly represented, un-interfered with by other words. The Garden is the domain of the rules of connected speech, where words are in orderly and pleasing arrangements and where they glide into each other, with genteel touches (handshakes) and make slight changes in sound shapes at their boundaries. Words behave politely, in a way that is appropriate for those genteel occasions when you are having tea on the lawn (‘Would you like another cup of tea dear?’ becomes ‘Wu jew lie ka cuppa tea dear?’). The Greenhouse and the Garden are useful for teaching pronunciation, and clear intelligible speech. The Jungle is real life speech, where words are mangled, crushed, bashed in a disorderly mess – speed and lack of clarity are the order of the day (‘July annuvver cuffer tea pop?’). The Jungle is where we need to go if we are to improve the teaching of listening.

On a street with a stop sign every 300 feet, calculations predict that the average speed of a 150-pound rider putting out 100 watts of power will diminish by about forty percent. If the bicyclist wants to maintain her average speed of 12.5 mph while still coming to a complete stop at each sign, she has to increase her output power to almost 500 watts. This is well beyond the ability of all but the most fit cyclists.
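A crude lower bound on the stop-sign cost can be sketched by counting only the kinetic energy thrown away at each full stop. This is a hedged back-of-envelope estimate with assumed figures (rider plus bike ≈ 77 kg), not the article's full calculation, which is far more punishing because holding the *average* speed fixed forces much higher sprint speeds between signs:

```python
def stop_and_go_extra_watts(mass_kg, cruise_ms, stop_spacing_m):
    """Lower-bound extra average power needed just to replace the
    kinetic energy discarded at each complete stop:
    P_extra = (1/2 * m * v^2) * (v / spacing)."""
    ke_per_stop = 0.5 * mass_kg * cruise_ms ** 2
    stops_per_second = cruise_ms / stop_spacing_m
    return ke_per_stop * stops_per_second

# 150 lb rider + ~20 lb bike ≈ 77 kg; 12.5 mph ≈ 5.6 m/s; 300 ft ≈ 91 m
print(round(stop_and_go_extra_watts(77, 5.6, 91)))
```

Even this floor comes out at several tens of watts on top of the baseline 100 W; accounting for the stopped time and the re-acceleration needed to hold the average speed is what drives the article's figure toward 500 W.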

Canada has a new underground of scientists and statisticians and wonks who've founded a movement called LOCKSS -- "Lots of copies, keep stuff safe" -- who make their own archives of disappeared data, from the libraries of one-of-a-kind docs that have been literally incinerated or sent to dumpsters to the websites that vanish without notice. There's an election this October -- perhaps we can call on them then to restore the country's lost memory.