One of my major issues with modern broadcast journalism is its normalisation of a one-dimensional view of accuracy. When called out over a questionable story, the retreat is mostly to the “facts” within the story itself. Solace is found in the precise sourcing of a story, even if that isn’t the way knowledge actually works.

Rarely are other dimensions questioned, such as whether the story’s very existence is misleading or lends undue credence or salience. Because that, also, is inaccurate. The five stories every day on petty crimes may be exact in recounting the details (as far as we can ever know), but is the presence of five stories an accurate portrayal of the magnitude of the problem?

Is this conflating precision with accuracy?

“…precision can mask inaccuracy by giving us a false sense of certainty, either inadvertently or quite deliberately.”

This is from Naked Statistics by Charles Wheelan. I’m about halfway through and haven’t come across much that would be surprising to anyone who has done an intro statistics course. But Wheelan has an interesting way of theorising what are otherwise mundane concepts.

Consider his framing of “precision” and “accuracy” (forgive the long quote):

These words are not interchangeable. Precision reflects the exactitude with which we can express something. In a description of the length of your commute, “41.6 miles” is more precise than “about 40 miles,” which is more precise than “a long f——ing way.” If you ask me how far it is to the nearest gas station, and I tell you that it’s 1.265 miles to the east, that’s a precise answer. Here is the problem: That answer may be entirely inaccurate if the gas station happens to be in the other direction. On the other hand, if I tell you, “Drive ten minutes or so until you see a hot dog stand. The gas station will be a couple hundred yards after that on the right. If you pass the Hooters, you’ve gone too far,” my answer is less precise than “1.265 miles to the east” but significantly better because I am sending you in the direction of the gas station. Accuracy is a measure of whether a figure is broadly consistent with the truth—hence the danger of confusing precision with accuracy. If an answer is accurate, then more precision is usually better. But no amount of precision can make up for inaccuracy.
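Wheelan’s distinction can be simulated directly. The sketch below is my own illustration, not his: it borrows his 1.265-mile figure and invents a “true” distance of 1.0 miles, then compares a precise-but-biased estimator with a noisy-but-accurate one.

```python
import random

random.seed(0)

TRUE_DISTANCE = 1.0  # invented "true" distance, for illustration only

# Precise but inaccurate: readings cluster tightly around the wrong value.
precise_biased = [1.265 + random.gauss(0, 0.001) for _ in range(1000)]

# Imprecise but accurate: noisy readings centred on the truth.
noisy_accurate = [TRUE_DISTANCE + random.gauss(0, 0.2) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"precise but biased:  mean = {mean(precise_biased):.3f}")
print(f"noisy but accurate:  mean = {mean(noisy_accurate):.3f}")
```

No number of extra decimal places rescues the first estimator; only the second is, on average, telling you the truth.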

Bringing this back to journalism, this highlights the fallacy of retreating to the details rather than the bigger picture. If a portrayal of the world is an accurate one, then precision is laudable. But you can’t sacrifice accuracy for precision. And by no means conflate one with the other.

If the audience walks away with all the details of the criminals but a misleading impression of the likelihood of becoming a victim themselves, that’s a failure. And it’s one we all eventually pay for through public policy.

This is a rabbit hole I’ve wandered down many a time when thinking about journalism and the possibility of representing truth. Whether achievable or not, truth definitely isn’t entirely in the details.

We’re pretty well into the internet age. But how much do our perceptions, selection and judgement reflect that?

Is the smartest person you know the one with the deepest personal repository of knowledge? Or the one with the widest knowledge, armed with the tools and skills to find out anything?

Are there many pub trivia nights that arm patrons with the web to hunt down obscure clues or answers?

I’ve been thinking of this as I get stuck into my latest coding textbook, the Python Data Science Handbook. Early on, author Jake VanderPlas writes:

When a technologically minded person is asked to help a friend, family member, or colleague with a computer problem, most of the time it’s less a matter of knowing the answer as much as knowing how to quickly find an unknown answer. In data science it’s the same: searchable web resources such as online documentation, mailing-list threads, and Stack Overflow answers contain a wealth of information, even (especially?) if it is a topic you’ve found yourself searching before. Being an effective practitioner of data science is less about memorizing the tool or command you should use for every possible situation, and more about learning to effectively find the information you don’t know, whether through a web search engine or another means.

Surely this goes for most things. Knowledge itself isn’t redundant, obviously. It’s experience, information and skills that inform how and where you search, and what for.
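In Python, at least, some of that “finding out” doesn’t even require leaving the interpreter. A small sketch of the built-in introspection tools (my examples, not VanderPlas’s):

```python
# help() prints the documentation for any object -- no web search needed.
help(len)

# dir() lists what an object offers: a quick way to discover methods
# you didn't know existed.
public_str_methods = [name for name in dir(str) if not name.startswith("_")]
print(public_str_methods)

# Docstrings travel with the objects themselves.
print(str.split.__doc__.splitlines()[0])
```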

But, at the same time, it feels like we’re still living with an outdated perception of intelligence. Intelligence as a kind of isolated store of information that can’t be updated or augmented mid-problem.

I’m waiting to see a job ad that’s looking for a candidate who isn’t just qualified, but who has the skills to locate, store and retrieve appropriate information. Better yet, one that emphasises that.

I’ve been struggling with the notion of beliefs as the output of a transitory and “swirling mass” of complexity inside each of us. Myriad tiny influences we can neither observe nor prise apart.

But, assuming this theory is valid, the logical conclusion is that the quality of the information you allow in is incredibly important. If that which inspires your System 1 thinking is rubbish, so is your thinking.

Asked to account for our beliefs and choices, how often would we say it was an unknown nudge from the flotsam of incidentals? ‘I probably only believe this because I pulled the equivalent of a red sock from my mental laundry under the influence of the last thing I heard in a bar. . .’

If the incidentals are so important, unpredictable and seemingly invisible, it’s not enough to swamp the bad with good. The only answer is to be incredibly vigilant about what gets in.

Perhaps this will prove easier than culling the poorly informed and conscious conspirators from our social networks.

We asked our volunteers to choose their political priorities on a scale of 1 to 10. For example, what would you do if it came to a choice whether the country should spend more on state-provided healthcare, or spend less and cut taxes (where 1 was definitely spend more and 10 was definitely cut tax)?…A short while later, we went back to talk over with our volunteers what they’d written and why. But we cheated. We left their original answer sheet as it was–written in their own hand with their names at the top to help convince them nothing fishy was going on. But where their answers were anywhere from 3 to 7–so not a definite 1 or a definite, uncompromising 10–we flipped the question around.

These are quite long quotes, but bear with me.

The partially handwritten page in front of them was evidence of what they believed–or so they thought. And it was this (doctored) opinion that they now defended. I sat down with a man who originally said that tax cuts were more important than more spending on state healthcare–and listened as he now explained why the opposite was true. His explanation was earnest, intelligent, clear, without hesitation. He wasn’t confused. He accepted this new position as a legitimate summary of his beliefs and didn’t miss a beat in justifying them.

I’ve read of studies where people surrender their opinion in the face of a majority or authority figure.

But that we are so intellectually supplicant that an unrecorded belief is essentially meaningless has quite thrown me. And that we could be dictated to by a recorded belief – even a false one – even more so.

To a certain extent this merely lines up with previous arguments in the book about complexity and simplification. But the lack of stability in the “lens” we use to understand the world – that I can’t feed you similar information over and over and expect a somewhat predictable response – has huge implications for discourse and institutions.

Let me end with a concluding remark from this section of the book:

…the ideal of holding a complete picture in our heads damns our capabilities with an impossible aspiration. The world, quite simply, is too complicated, too big, too messy, to frame in one go. The fact that we observe it in often contradictory fragments is also a measure of the enormity of the perceptual ask.

My coding odyssey continues and as a result I stumbled across the “How do I ask a good question?” page on Stack Overflow. It’s a site for people to ask questions of a large community of coders (I shan’t share what led me to browse the help centre of a help centre 😇).

While much of the page is understandably specific to questions about coding, reading it gave me several thoughts for some universal guidance for good questions.

• Pretend you’re talking to a busy colleague and have to sum up your entire question in one sentence: what details can you include that will help someone identify and solve your problem? Include any error messages, key APIs, or unusual circumstances that make your question different from similar questions already on the site…

• If you’re having trouble summarizing the problem, write the title last – sometimes writing the rest of the question first can make it easier to describe the problem.

So often people come with a simple question or problem that they have buried in so much story and minutiae as to make it boring or unintelligible. But if you really want to find an answer, and quickly, it’s probably best to approach it like clickbait.

What are the details that will hook me into your question? Can you summarise it so I can quickly confirm whether I even know the answer?

The busy colleague is a good device. When I first started as a journalist my producer told me something similar – I should pitch ideas to him as if he was a stranger in a pub who would leave or find me boring if given a long preamble.

Then:

In the body of your question, start by expanding on the summary you put in the title. Explain how you encountered the problem you’re trying to solve, and any difficulties that have prevented you from solving it yourself. The first paragraph in your question is the second thing most readers will see, so make it as engaging and informative as possible.

Help others reproduce the problem

Not all questions benefit from including code. But if your problem is with code you’ve written, you should include some. But don’t just copy in your entire program! Not only is this likely to get you in trouble if you’re posting your employer’s code, it likely includes a lot of irrelevant details that readers will need to ignore when trying to reproduce the problem. Here are some guidelines:

• Include just enough code to allow others to reproduce the problem. For help with this, read How to create a Minimal, Complete, and Verifiable example.
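To make the “minimal” idea concrete, here’s a hypothetical example of what such a snippet might look like (the bug and the question are invented for illustration):

```python
# Hypothetical question: "Why does removing items from a list while
# iterating over it skip elements?"
# Minimal, complete, verifiable: the whole problem in a few runnable lines.

items = [1, 2, 2, 3]
for x in items:
    if x == 2:
        items.remove(x)

print(items)  # expected [1, 3], but the second 2 survives: [1, 2, 3]
```

A reader can paste those lines, run them, and see the problem immediately: no employer code, no irrelevant detail to wade through.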

Good questions contain context and are rarely just one question (more reason press conferences and panels are bad).

When I’m really trying to pick someone’s brain or find an answer, I often break questions down into their component parts. If it’s code, we need to establish we’re all using the same version. This applies to basically everything.
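Establishing the version takes only a couple of lines. A sketch of the kind of environment report I’d ask for first (the third-party line is commented out so the snippet stays self-contained):

```python
import platform
import sys

# A version mismatch is a classic source of "works on my machine".
print("Python:", sys.version.split()[0])
print("OS:", platform.system(), platform.release())

# Third-party packages usually expose __version__ by convention, e.g.:
# import numpy; print("numpy:", numpy.__version__)
```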

Questions can go awry when we’re each making assumptions about intentions, definitions, steps etc. So if you’re trying to find something out it’s often best to start small, at first principles.

You can then walk through the problem with the person, just as the respondents on Stack will try and replicate problems. Sometimes you may discover you’re asking the wrong question. Are the assumptions baked into the question the real answer?

Obviously this only works in a medium where you can go back and forth.

Lastly there needs to be some amount of good faith. This is why I like anonymous questions delivered by a moderator at events, and why short interviews make little sense. Is the question a genuine attempt at knowledge or is it trying to convey something else?