-What is an authoritative website? It's a lot different from what we are teaching students.

-We need to teach kids to be careful drivers, but not in a “rote” manner.

-Some people see "autopilot" thinking as a solution to evaluation.

A recent NYTimes article reported that edX, the Harvard and MIT venture, is looking at automated software for grading. Doug Downs's presentation at HETL on automated scoring: the first step is to get humans to read like a computer (human inter-rater reliability). The formulas for computerized grading are similar to those used for online grading of standardized tests, and Debbie Abilock predicts this autograding is going to have a big impact in the next year.
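The "human inter-rater reliability" mentioned above is usually quantified with a statistic like Cohen's kappa, which measures how often two graders agree beyond what chance would produce. Here is a minimal sketch; the two graders and their essay scores are made up for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected (chance) agreement, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two hypothetical graders scoring the same 10 essays (A/B/C grades)
grader1 = ["A", "B", "B", "C", "A", "A", "B", "C", "B", "A"]
grader2 = ["A", "B", "C", "C", "A", "B", "B", "C", "B", "A"]
print(round(cohens_kappa(grader1, grader2), 2))  # → 0.7
```

A kappa near 1 means the graders read alike; autograding research pushes human scoring toward that kind of formulaic agreement before training software on it.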

However, software algorithms may fail when human behavior changes. For example, maps of how flu spreads can be generated from Google searches. But this year, because of the media frenzy, people started searching before they had the flu, which skewed the information.

We need to give kids a rule of thumb–BIG DATA/Software don’t produce knowledge.

Weinberger: "We can now see every idiotic idea put forward seriously and every serious idea treated idiotically" (Too Big To Know).

Real evaluation of a source is a DYNAMIC process. Reader has to see the information, has to use their own rules of thumb for the given situation, has to look at credibility in context. Everyone has a different context of understanding that they bring to judging a source.

Unfortunately, the way we are teaching students to do it is a process learned by "rote." We are boiling it down too much. For example, the statement "databases are authoritative" is misleading: articles in databases aren't all "pristine." LexisNexis carries Stephen Glass's completely fabricated articles, and there is research showing that academic writers have suppressed information that contradicts their findings.

Students rush through research and sometimes don't think. Abilock thinks we need to add a POINT OF FRICTION, in the spirit of Carol Kuhlthau's "moments of intervention": at critical junctures in the research process, we add points of friction.

If you tell students that certain sites are credible (like CNN or a newspaper), you are ignoring the fact that CNN carries eyewitness citizen journalism, editorials, etc. And there is "auto-evaluation" software that is misleading as well.

"Skepticism takes effort." But that can be something that intrigues kids. Kids can fall into categories like skeptics, intuitives, and sociable kids (who rely on others).

Abilock shared a video about coconut oil as an Alzheimer's cure–the kind of video that gets emailed around. It "looks" accurate, and when students Google it, the first hits will all be echoes of that video. How do we teach students to look for that?

Our students are capable of understanding flexible evaluation criteria based on the discipline.

For example in science, the scientific paper is the core–teach students to read the abstracts, and check citations. Is there media coverage or coverage in other journals or in social media?

For example, in this infographic from GOOD, they mix up epidemics, pandemics, and other illnesses–but it looks good. How the data is presented matters: "Information in a visual is not yet knowledge." Visualized data can be manipulated just like any other information, and how it is portrayed affects how we read the results.

U.S. stereotypes of honesty/ethics: nurses are at the top; members of Congress and car salespeople are at the bottom (annual Gallup poll).

–Is crowdsourcing a way to find out what is credible? It will work if the crowd is very diverse, disinterested in the results, and aggregated by software rather than by people. (Debbie Abilock says this crowdsourcing works well on a site like Flickr, where individuals are just tagging their own photos.) The same goes for Netflix–people don't get an advantage from starring movies they dislike, so they are disinterested, and it works as a recommendation system.
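The "aggregated by software" condition above is the classic wisdom-of-crowds idea: when raters are independent and disinterested, their individual errors tend to cancel under simple averaging. A toy sketch (the ratings and the "true quality" value are invented, and this is averaging in general, not Netflix's actual algorithm):

```python
import statistics

def aggregate_ratings(ratings):
    """Aggregate independent ratings by simple averaging (software, not people)."""
    return statistics.mean(ratings)

# Hypothetical: a movie's "true" quality is 4.0 stars, and each
# disinterested rater errs independently in either direction.
true_quality = 4.0
ratings = [3.5, 4.5, 4.0, 3.0, 5.0, 4.0, 4.5, 3.5]

crowd_estimate = aggregate_ratings(ratings)          # 4.0
crowd_error = abs(crowd_estimate - true_quality)     # 0.0
worst_single_error = max(abs(r - true_quality) for r in ratings)  # 1.0

print(crowd_estimate, crowd_error < worst_single_error)
```

The same averaging breaks down when the conditions fail: if raters copy each other (not diverse) or stand to gain from a particular score (not disinterested), their errors correlate instead of canceling.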

How do you evaluate a tweet? Look at who tweeted it, their profile, etc. Facebook tries to write ads that are indistinguishable from posts.

Currently we are teaching students a "checklist" for web evaluation. But that checklist is too simple–it doesn't work and doesn't distinguish by subject area. Sometimes date/currency matters and sometimes it doesn't; a source can be too recent or too delayed. Portrayals of groups change over time.

Rule of Thumb– Date matters, sometimes

Rules like "Wikipedia isn't accurate" are misleading to students. Some articles, like the one on the Rosetta Stone, are written by experts at the British Museum. We have to help them examine how the "pedia" sites work–how wikis work. They can examine the authors or editors of the articles.

WikiDashboard lets you search a Wikipedia article and drill down to get information on its editors. WikiTrust (a Firefox add-on) computes the "reputation" of an article and then color-codes the text by what is more and less trustworthy.

Another moment of friction: teach students to read closely and skeptically. Teach them to summarize their notes, and to tag their notes somehow.

Big picture:

Novices overestimate how good they are, and experienced web users don’t evaluate information well either. We have to partner with teachers to help them understand this. Every time you work with kids, ask them a question about evaluation…ask them to propose rules of thumb.

The right checklist at the right time–she recommends the book The Checklist Manifesto.

Corroborate–use different genres and different search engines. “Corroborate like a historian.”


About me

I'm a high school and district librarian at a suburban high school who loves to explore the intersection of technology, libraries, and schools. Named White House Champion of Change for Connected Learning and all about library design!