Humans can’t expect AI to just fight fake news for them

Here’s some news that’s not fake: Not everything you read on the internet is true. Trouble is, it can be hard to tell truths from untruths, and there’s evidence untruths travel faster. Many hands have been wrung in recent months over what to do about made-up news stories created to convert social media shares into page views, ad dollars, and perhaps even political traction. The modest first results from an effort to crowdsource machine learning technology to help stem the flood of falsity are a reminder that machines may help us grapple with fake news—but only if humans take the lead.

Late last year, Facebook’s director of AI research, Yann LeCun, told journalists that machine learning technology that could squash fake news “either exists or can be developed.” The company has since said it tweaked the News Feed to suppress fake news, although it’s unclear to what effect. Not long after LeCun’s comment, a group of academics, tech industry insiders, and journalists launched their own project, called the Fake News Challenge, to try to get fake news-detecting algorithms built out in the open.

The first results from that effort were released this morning. The algorithms the winning teams created might help rein in online misinformation, but as tools to speed up humans working on the problem, not autonomous fake news killbots.

Published by lcdrain

About

Action for Media Education (AME) is a non-profit organization. We’ve been trailblazers in the development of media literacy programs since our incorporation in 1991. Our team includes parents and experts in education, journalism, mass communications, and community health.
We see media education as a vital element of literacy due to the barrage of media messages aimed at us every day. See where we’ve been, who we are, and how we can work together.
