Saturday, July 01, 2017

Working from home, I sometimes have the TV on during the day. So every once in a while, I see programming interrupted with "This is a test of the Alert Ready system!": an intrusive beep sounds, the screen turns red, and an announcement says they're testing the system. It's similar to this YouTube video.

What I want to know: what are they testing for? To see if it shows up? Does someone have to look at all the TV channels to see if it's working? And is that why it takes so long? Or is there more to it than that? What things could possibly go wrong that this test could detect?

Tuesday, June 28, 2016

This post is about the information that reaches people (including me) organically, without them making any effort to find it, as opposed to the full set of all information available. While reading this, you may find yourself thinking "But you don't have all the information! You're just talking about the subset of information that reached you organically!" Yes, and that is exactly what this post is about.

In the wake of Brexit, my twitter feed has been showing me examples of people who voted Leave but were unaware of the consequences. I was rather surprised by this, because I was aware of those same consequences, and I haven't even been actively following the issue! The information reached me with no effort on my part (and, in fact, despite my having mentally categorized it as To Disregard), but it didn't reach people who actually got to vote in this referendum, and would have voted differently if they'd had this information.

Someone should do research and/or journalism about these people. What did they think was going to happen? Where did they get that idea from? Were they given incorrect information, or just not given all the correct information they needed? Why didn't the information they missed reach them?

And, perhaps most importantly, how close did the information get to reaching them? Was a friend of a friend on a social network posting the information they needed? Was it in the newspaper they read, but on a boring page they just skimmed over? Or were they nowhere near it, and would they have needed to drastically revamp their media consumption practices and/or voting research to have reached it?

After interviewing as many of the people who didn't see it coming as possible, the researchers/journalists should publish the results, highlighting any patterns they noticed. This would serve two purposes: helping regular people see information consumption patterns that correlate with being less informed than one would like, and helping people who are trying to spread information or raise awareness see how to reach the people who would like to be more informed but don't even know it yet.

As a random made-up example, suppose 68% of the people who were misinformed got their incorrect information from their hairdresser. Then people would know that you should question/snopes/factcheck political information provided by your hairdresser, no matter how brilliant she is about doing your hair. Or, suppose 68% of people who didn't get the information they wanted were two degrees of social media separation from that information. Knowing that, people might retweet links to political information that they normally wouldn't retweet because they think it's glaringly obvious.

And this isn't just a Brexit thing. Similar postmortems should be conducted for all elections, and for any other undertaking where they can find a significant number of people who didn't see it coming. For Brexit we're hearing the morning after about the people who didn't see it coming, but the turnaround isn't always this fast. They should follow up after six months or a year, find people who didn't see it coming, and figure out why.

There's something wrong when the desired information doesn't reach people who will be voting in a referendum, even though that same information organically reached a random foreigner who is deliberately disregarding information on the issue. Investigating exactly how this happened is probably the first step to making the problem go away.

My question: who are these people? This is really a unique convergence of factors. They are people to whom it occurs to write a letter to the editor, they are savvy enough to write a letter to the editor that gets selected for publication, they are completely unaware of the fact that a letter to the editor would become googleable, and they are affected by the fact that their letter (and the opinions contained therein) are googleable.

How do all these factors manage to converge? The combination of inclination to write a letter to the editor and unawareness of how googleability works makes me think of people who are very old and technologically illiterate, but would these people be affected by the googleability of their letter? I mean, my own parents are senior citizens and they know how googleability works, so those who are unaware of it would be, like, octogenarians and above, most of whom aren't in the workforce or any other situation where the googleability of their opinions would have any impact.

Also, they don't print truly extremist positions in letters to the editor. If someone wrote in with hate speech or something, it wouldn't get printed. But one of the reasons cited for requesting a letter to be unpublished is professional repercussions for the political views expressed. Jobs where people would suffer repercussions for political views sufficiently benign to be printed in a letter to the editor are generally the sort of job that requires some degree of savvy and nuance - the sort of thing where you'd think people would need to know how googleability works in order to function properly at their job. So how did they get there?

I really want the newspaper to interview these people (even if anonymously) and tell us their stories. How did all these factors converge?

How many students are actually getting a mark of under 35%? How many of them are going to be able to pull their mark up to a pass by the end of the semester? And why 35%, of all numbers?

Also, when I was in school at least, teachers entered the mark for each assignment into a spreadsheet, which weighted them accordingly and calculated the student's overall mark. The overall mark was not subjective; it was the mathematical result of the mark on each test and assignment. Because of this, you could figure out how many points you needed to get on an assignment or exam or during the rest of the semester to reach a certain grade. (During bouts of senioritis, this was also used to calculate where you could slack off.)
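To make the arithmetic concrete, here's a minimal sketch of that spreadsheet logic and the reverse calculation students would do. All the function names, weights, and marks here are made up for illustration; real gradebooks weight things in all sorts of ways.

```python
# Sketch of a gradebook's weighted-average logic, plus the reverse
# calculation: what mark is needed on the remaining work to hit a target.
# All numbers are hypothetical.

def overall_mark(marks, weights):
    """Weighted average of marks (percentages) by assignment weights."""
    total_weight = sum(weights)
    return sum(m * w for m, w in zip(marks, weights)) / total_weight

def needed_on_rest(current_mark, weight_done, target, weight_total=100):
    """Mark needed on the remaining weight to finish at `target` overall."""
    weight_left = weight_total - weight_done
    return (target * weight_total - current_mark * weight_done) / weight_left

# A student really sitting at 30% with half the course weight completed
# needs 70% on the second half to pass at 50% overall:
print(needed_on_rest(30, 50, 50))  # 70.0

# But if the report card shows that 30% as 35%, the student calculates
# a lower requirement, and comes up short at the end of the year:
print(needed_on_rest(35, 50, 50))  # 65.0
```

The gap between the two answers is exactly the problem described below: the student plans around the inflated report-card number, but the spreadsheet keeps using the real one.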

So if a student's real total is under 35% but their report card shows 35%, they might use the 35% to calculate how well they need to do in the second semester to pass the whole course. But if they really have some unknown number less than 35%, they won't get the mark they expect when all the numbers are plugged into the teacher's spreadsheet. Is there some mechanism in place to address this problem?

I'm labelling this post "journalism wanted" because, even though the situation has nothing to do with me, I left the article with way more questions than I went in with. And if I have all these questions, surely the people affected have even more.

In looking at all of this, I have to wonder why the Star published this at all — especially at this sensitive time in public health. If there is no proof that any of the young women’s illnesses, or the 60 adverse reactions in the database, were caused by the vaccine, then what is the story?

In that same column, she says:

To be fair, in the Gardasil investigation, reporters David Bruser and Jesse McLean absolutely do not conclude or state that the vaccine caused any of the suspected side effects the young women talk about. The article was written carefully to try to impart to readers the message that there was no conclusive evidence.

"We failed in this case. We let down. And it was in the management of the story at the top."

What I want to know: how did the front page layout and presentation and tone of the story turn out sensationalist if the public editor and the publisher both think this is inappropriate and it's not consistent with the reporters' stated intentions?

I know the writers don't write the headlines and aren't necessarily involved in layout, and I know that senior editors might not necessarily vet every single page layout in the whole newspaper every single day. But you'd think they'd approve the front page! You'd think they'd edit an article extra-carefully if it's going to be the first thing people see, and you'd think they'd look at the big, front-page, above-the-fold headline and make sure it reflects the writers' intended thesis.

A story about how this sort of thing comes about would be informative to readers.

Thursday, February 05, 2015

As they mention in the subheadline (with some weird conjunction use), they found 60 people who reported illnesses, out of hundreds of thousands who have received the vaccine.

The problem: they don't mention the statistics for these kinds of illnesses occurring in similar populations who have not recently been vaccinated. We're talking tens of cases among a sample size of hundreds of thousands, which is hundredths of a percent. It is certainly plausible that the number of illnesses reported is consistent with what would happen ordinarily in the general population.

Back when I did my research before getting Gardasil, my research found just that: the number of reported conditions in the sample group was consistent with the number in the general population. That could certainly be the case here. But the Star doesn't provide the numbers!
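A back-of-envelope version of the comparison the article should have included looks like this. The vaccinated population size and the background illness rate below are hypothetical placeholders (the article only says "hundreds of thousands"); the point is the shape of the calculation, not the specific numbers.

```python
# Compare the reported rate among the vaccinated to a background rate
# in an unvaccinated population. All numbers are hypothetical.

def rate_per_100k(cases, population):
    """Incidence expressed per 100,000 people."""
    return cases / population * 100_000

vaccinated_cases = 60        # adverse reports in the database
vaccinated_pop = 400_000     # hypothetical stand-in for "hundreds of thousands"

reported_rate = rate_per_100k(vaccinated_cases, vaccinated_pop)
print(f"Reported rate: {reported_rate:.0f} per 100,000")

# Suppose the background rate of similar illnesses among unvaccinated
# young women were, hypothetically, 20 per 100,000. Then in a group
# this size you'd expect this many cases by chance alone:
background_rate = 20
expected_cases = background_rate / 100_000 * vaccinated_pop
print(f"Expected by chance alone: {expected_cases:.0f} cases")
```

With these made-up numbers, the 60 reports would actually be fewer than the 80 expected by chance; with different real numbers the conclusion could flip. That's exactly why the article needed to publish the base rate.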

If the number of illnesses found in this investigation is significantly higher than what would have occurred in the control group, then that is important information that supports the Star's thesis and they should include it.

But if it is not, then this is an irresponsible piece of journalism.

By failing to include these numbers, they've made the article non-credible in the eyes of the most-informed audience who will read it critically, while sensationalistically creating paranoia among the least-informed audience who will only skim the headlines.

The article ends with one of the interviewees saying “I am not against the vaccine, I want people to be responsible about Gardasil. I am trying to inform people.”

In order to inform people so that they can make responsible decisions about Gardasil, you need to include control group numbers!

All this coverage would have benefited from an interview with the doctor in question, or others like him, shedding light on their internal reasoning for choosing this medical specialty.

As we've discussed before, approximately one third of all Canadians use prescription contraception. That means that any given doctor working in family practice or a clinic can expect one third of all their patients to come in at least once a year asking for contraception.

If you're morally opposed to providing contraception, why would you pursue a line of work where one third of your clientele is going to ask for something you're morally opposed to?

There are many fields of medicine where contraception is not going to come up at all. Gerontology, podiatry, oncology, pediatrics, palliative care, otolaryngology, gastroenterology, cardiology, pulmonology, hematology, and I'm sure many other kinds of medicine whose existence I've never thought about. Contraception is only going to come up in general practice, walk-in clinics, and gynecology/urology, with occasional appearances in emergency medicine, dermatology, and possibly endocrinology.

Why don't this doctor and others like him choose one of the many other fields of medicine, or work in a children's hospital or a long-term care home or somewhere similar where they simply won't be called upon to provide contraception?

I also wonder if medical schools and colleges of physicians and whatever other organizations might be involved take any measures to discourage future doctors from studying to practise in fields in which they're morally opposed to very common and medically-accepted treatments.

Meanwhile, workers report that, after finally restoring power in many neighbourhoods, they are being forced to disconnect some houses because of damage done to stand pipes, the hollow masts usually mounted on rooftops that serve as a conduit for power cables to enter a dwelling. A bent or broken stand pipe poses a risk of fire, and it’s the homeowner’s responsibility to have it fixed by a qualified electrician.

Hydro workers are not electricians.

(My emphasis.)

So why aren't Hydro workers electricians? They're working with electricity. They're connecting bigger wires than electricians usually work with, so it seems like they should be able to be electricians. Are they actually unable to do the work of electricians? Or is this merely a certification issue? Or is it a jurisdiction issue?

What would it take for Hydro workers to be electricians? Would they have to learn new skills? Or just get an additional certification?

I hate it when I walk away from a newspaper article with more questions than I went in with.