We are pleased to announce that the Internet Archive and OCLC have agreed to synchronize the metadata describing our digital books with OCLC’s WorldCat. WorldCat is a union catalog that itemizes the collections of thousands of libraries in more than 120 countries that participate in the OCLC global cooperative.

What does this mean for readers? When the synchronization work is complete, library patrons will be able to discover the Internet Archive’s collection of 2.5 million digitized monographs through the libraries around the world that use OCLC’s bibliographic services. Readers searching for a particular volume will know that a digital version of the book exists in our collection. With just one click, readers will be taken to archive.org to examine and possibly borrow the digital version of that book. In turn, readers who find a digital book at archive.org will be able, with one click, to discover the nearest library where they can borrow the hard copy.

There are additional benefits: in the process of the synchronization, OCLC databases will be enriched with records describing books that may not yet be represented in WorldCat.

“This work strengthens the Archive’s connection to the library community around the world. It advances our goal of universal access by making our collections much more widely discoverable. It will benefit library users around the globe by giving them the opportunity to borrow digital books that might not otherwise be available to them,” said Brewster Kahle, Founder and Digital Librarian of the Internet Archive. “We’re glad to partner with OCLC to make this possible and look forward to other opportunities this synchronization will present.”

“OCLC is always looking for opportunities to work with partners who share goals and objectives that can benefit libraries and library users,” said Chip Nilges, OCLC Vice President, Business Development. “We’re excited to be working with Internet Archive, and to make this valuable content discoverable through WorldCat. This partnership will add value to WorldCat, expand the collections of member libraries, and extend the reach of Internet Archive content to library users everywhere.”

We believe this partnership will be a win-win-win for libraries and for learners around the globe.

Today, the Boston Public Library announced the transfer of significant holdings from its Sound Archives Collection to the Internet Archive, which will digitize, preserve and make these recordings accessible to the public. The Boston Public Library (BPL) sound collection includes hundreds of thousands of audio recordings in a variety of historical formats, including wax cylinders, 78 rpm discs, and LPs. The recordings span many genres, including classical, pop, rock, jazz, and opera – from 78s produced in the early 1900s to LPs from the 1980s. These recordings have never been circulated and were in storage for several decades, uncataloged and inaccessible to the public. By collaborating with the Internet Archive, the Boston Public Library’s audio collection can be heard by new audiences of scholars, researchers and music lovers worldwide.

Some of the thousands of 20th century recordings in the Boston Public Library’s Sound Archives Collection.

“Through this innovative collaboration, the Internet Archive will bring significant portions of these sound archives online and to life in a way that we couldn’t do alone, and we are thrilled to have this historic collection curated and cared for by our longtime partners for all to enjoy going forward,” said David Leonard, President of the Boston Public Library.

Listening to the 78 rpm recording of “Please Pass the Biscuits, Pappy,” by W. Lee O’Daniel and his Hillbilly Boys from the BPL Sound Archive, what do you hear? Internet Archive Founder, Brewster Kahle, hears part of a soundscape of America in 1938. That’s why he believes Boston Public Library’s transfer is so significant.

“Boston Public Library is once again leading in providing public access to their holdings. Their Sound Archive Collection includes hillbilly music, early brass bands and accordion recordings from the turn of the last century, offering an authentic audio portrait of how America sounded a century ago.” says Brewster Kahle, Internet Archive’s Digital Librarian. “Every time I walk through Boston Public Library’s doors, I’m inspired to read what is carved above it: ‘Free to All.’”

The 78 rpm records from the BPL’s Sound Archives Collection fit into the Internet Archive’s larger initiative called The Great 78 Project. This community effort seeks to digitize all the 78 rpm records ever produced, supporting their preservation, research and discovery. From about 1898 to the 1950s, an estimated 3 million sides were published on 78 rpm discs. While commercially viable recordings have been restored or remastered onto LPs or CDs, there is significant research value in the remaining artifacts, which include many rare 78 rpm recordings.

“The simple fact of the matter is most audiovisual recordings will be lost,” says George Blood, an internationally renowned expert on audio preservation. “These 78s are disappearing right and left. It is important that we do a good job preserving what we can get to, because there won’t be a second chance.”

The Internet Archive began working with the Boston Public Library in 2007, and our scanning center is housed at its Central Library in Copley Square. There, as a digital-partner-in-residence, the Internet Archive is scanning bound materials for Boston Public Library, including the John Adams Library, one of the BPL’s Collections of Distinction.

To honor Boston Public Library’s long legacy and pioneering role in making its valuable holdings available to an ever wider public online, we will be awarding the 2017 Internet Archive Hero Award to David Leonard, the President of BPL, at a public celebration tonight at the Internet Archive headquarters in San Francisco.

The Internet Archive is now leveraging a little known, and perhaps never used, provision of US copyright law, Section 108(h), which allows libraries to scan and make available materials published from 1923 to 1941 if they are not being actively sold. Elizabeth Townsend Gard, a copyright scholar at Tulane University, calls this “Library Public Domain.” She and her students helped bring the first scanned books of this era online in a collection named for the author of the bill that made this necessary: the Sonny Bono Memorial Collection. Thousands more books will be added in the near future as we automate. We hope this will encourage libraries that have been hesitant to scan beyond 1923 to start mass scanning their books and other works, at least up to 1942.

While this is good news, it is unfortunate that using this provision is necessary at all.

Trend of Maximum U.S. General Copyright Term by Tom W Bell

If the Founding Fathers had their way, almost all works from the 20th century would be public domain by now (14-year copyright term, renewable once if you took extra actions).

Some corporations saw adding works to the public domain to be a problem, and when Sonny Bono got elected to the House of Representatives, representing Riverside County, near Los Angeles, he helped push through a law extending copyright’s duration another 20 years to keep things locked up back to 1923. This has been called the Mickey Mouse Protection Act due to one of the motivators behind the law, but it was also a result of Europe extending copyright terms an additional twenty years first. If not for this law, works from 1923 and beyond would have been in the public domain decades ago.

Lawrence Lessig

Creative Commons founder Larry Lessig fought the new law in court as unreasonable, unneeded, and ridiculous. In support of Lessig’s fight, the Internet Archive built an Internet bookmobile to celebrate what could be done with the public domain. We drove the bookmobile across the country to the Supreme Court to make books during the hearing of the case. Alas, we lost.

Internet Archive Bookmobile in front of Carnegie Library in Pittsburgh: “Free to the People”

There is, however, an exemption from this extension of copyright, but only for libraries and only for works that are not actively for sale: we can scan them and make them available. Professor Townsend Gard had two legal interns work with the Internet Archive last summer to figure out how we can automate the identification of scanned books that qualify, and they hand-vetted the first books for the collection. Professor Townsend Gard has just released an in-depth paper giving libraries guidance on how to implement Section 108(h) based on her work with the Archive and other libraries. Together, we have called these “Last Twenty” Collections, as libraries and archives can copy and distribute to the general public qualified works in the last twenty years of their copyright.

Today we announce the “Sonny Bono Memorial Collection,” containing the first books to be liberated. Anyone can download, read, and enjoy these works that have been long out of print. We will add another 10,000 books and other works in the near future. “Working with the Internet Archive has allowed us to do the work to make this part of the law usable,” reflected Professor Townsend Gard. “Hopefully, this will be the first of many ‘Last Twenty’ Collections around the country.”

Now is the chance for libraries and citizens who have been hesitant to scan works published after 1923 to push forward to 1941, and the Internet Archive will host them. “I’ve always said that the silver lining of the unfortunate Eldred v. Ashcroft decision was the response from people to do something, to actively begin to limit the power of the copyright monopoly through action that promoted open access and CC licensing,” says Carrie Russell, Director of ALA’s Program of Public Access to Information. “As a result, the academy and the general public has rediscovered the value of the public domain. The Last Twenty project joins the Internet Archive, the HathiTrust copyright review project, and the Creative Commons in amassing our public domain to further new scholarship, creativity, and learning.”

We thank and congratulate Team Durationator and Professor Townsend Gard for all the hard work that went into making this new collection possible. Professor Townsend Gard, along with her husband, Dr. Ron Gard, have started a company, Limited Times, to assist libraries, archives, and museums in implementing Section 108(h), “Last Twenty” collections, and other aspects of copyright law.

Prof. Elizabeth Townsend Gard

Tomi Aina, Law Student

Stan Sater, Law Student

Hundreds of thousands of books can now be liberated. Let’s bring the 20th century to 21st-century citizens. Everyone, rev your cameras!

—
Limited tickets left for 20th Century Time Machine — the Internet Archive’s Annual Bash – happening this Wednesday at the Internet Archive from 5pm-9:30pm. In case you missed it, here’s our original announcement.

Which recent hurricane got the least amount of attention from TV news broadcasters?

Irma

Maria

Harvey

Thomas Jefferson said, “Government that governs least governs best.”

True

False

Mitch McConnell shows up most on which cable TV news channel?

CNN

Fox News

MSNBC

Answers at end of post.

The Internet Archive’s TV News Archive, our constantly growing online, free library of TV news broadcasts, contains 1.4 million shows, some dating back to 2009, searchable by closed captioning. History is happening, and we preserve how broadcast news filters it to us, the audience, whether it’s through CNN’s Jake Tapper, Fox’s Bill O’Reilly, MSNBC’s Rachel Maddow or others. This archive becomes a rich resource for journalists, academics, and the general public to explore the biases embedded in news coverage and to hold public officials accountable.

Last October we wrote how the Internet Archive’s TV News Archive was “hacking the election,” then 13 days away. In the year since, we’ve been applying our experience using machine learning to track political ads and TV news coverage in the 2016 elections to experiment with new collaborations and tools to create more ways to analyze the news.

Helping fact-checkers

Since we launched our Trump Archive in January 2017, and followed in August with the four congressional leaders, Democrat and Republican, as well as key executive branch figures, we’ve collected some 4,534 hours of curated programming and more than 1,300 fact-checks of material on subjects ranging from immigration to the environment to elections.

We’re also proud to be part of the Duke Reporter’s Lab’s new Tech & Check collaborative, where we’re working with journalists and computer scientists to develop ways to automate parts of the fact-checking process. For example, we’re creating processes to help identify important factual claims within TV news broadcasts to help guide fact-checkers where to concentrate their efforts. The initiative received $1.2 million from the John S. and James L. Knight Foundation, the Facebook Journalism Project and the Craig Newmark Foundation.

The work of TV architect Tracey Jaquith, our Third Eye project scans the lower thirds of TV screens, using OCR, or optical character recognition, to turn these fleeting missives into downloadable data ripe for analysis. Launched in September 2017, Third Eye tracks BBC News, CNN, Fox News, and MSNBC; it has collected more than four million chyrons in just over two weeks, and counting.

Vox news reporter Alvin Chang used the Third Eye chyron data to report how Fox News paid less attention to Hurricane Maria’s destruction in Puerto Rico than it did to Hurricanes Irma and Harvey, which battered Florida and Texas. Chang’s work followed a similar piece by Dhrumil Mehta for FiveThirtyEight, which used Television Explorer, a tool developed by data scientist Kalev Leetaru to search and visualize closed captioning on the TV News Archive.

CNN’s Brian Stelter followed up with a similar analysis on “Reliable Sources” October 1.

We’re also working with academics who are using our tools to unlock new insights. For example, Schultz and Jaquith are working with Bryce Dietrich at the University of Iowa to apply the Duplitron, the audio fingerprinting tool that fueled our political ad airing data, to analyze floor speeches of members of Congress. The study identifies which floor speeches were aired on cable news programs and explores the reasons why those particular clips were selected for airing. A draft of the paper was presented at the 2017 Polinfomatics Workshop in Seattle and will begin review for publication in the coming months.

What’s next? Our plans include making more than a million hours of TV news available to researchers from both private and public institutions via a digital public library branch of the Internet Archive’s TV News Archive. These branches would be housed in computing environments, where networked computers provide the processing power needed to analyze large amounts of data. Researchers will be able to conduct their own experiments using machine learning to extract metadata from TV news. Such metadata could include, for example, speaker identification–a way to identify not just when a speaker appears on a screen, but when she or he is talking. Metadata generated through these experiments would then be used to enrich the TV News Archive, so that any member of the public could do increasingly sophisticated searches.

Going global

We live in an interdependent world, but we often lack understanding about how other cultures perceive us. Collecting global TV could open a new window for journalists and researchers seeking to understand how political and policy messages are reported and spread across the globe. The same tools we’ve developed to track political ads, faces, chyrons, and captions can help us put news coverage from around the globe into perspective.

We’re beginning work to expand our TV collection to include more channels from around the globe. We’ve added the BBC and recently began collecting Deutsche Welle from Germany and the English-language Al Jazeera. We’re talking to potential partners and developing strategy about where it’s important to collect TV and how we can do so efficiently.

History is happening, but we’re not just watching. We’re collecting, making it accessible, and working with others to find new ways to understand it. Stay tuned. Email us at tvnews@archive.org. Follow us @tvnewsarchive, and subscribe to our weekly newsletter here.

A weekly round-up of what’s happening and what we’re seeing at the TV News Archive, by Katie Dahl and Nancy Watzman. Additional research by Robin Chin.

In an era when social media algorithms skew what people see online, the Internet Archive TV News Archive’s collections of on-the-record statements by top political figures serve as a powerful model for how preservation can provide a deep resource for who really said what, when, and where.

Since we launched our Trump Archive in January 2017, and followed in August with the four congressional leaders, Democrat and Republican, as well as key executive branch figures, we’ve collected some 4,534 hours of curated programming and more than 1,300 fact-checks of material on subjects ranging from immigration to the environment to elections.

As a library, we’re dedicated to providing a record – sometimes literally, as in the case of 78s! – that can help researchers, journalists, and the public find trustworthy sources for our collective history. These clip collections, along with fact-checks, now largely hand-curated, provide a quick way to find public statements made by elected officials.

The big picture

Given his position at the helm of the government, it is not surprising that Trump garners most of the fact-checking attention. Three out of four, or 1008 of the fact-checks, focus on Trump’s statements. Another 192 relate to the four congressional leaders: Senate Majority Leader Mitch McConnell, R., Ky.; Senate Minority Leader Chuck Schumer, D., N.Y.; House Speaker Paul Ryan, R., Wis.; and House Minority Leader Nancy Pelosi, D., Calif. We’ve also logged 140 fact-checks related to key administration figures such as Sean Spicer, Jeff Sessions, and Mike Pence.

The topics

The topics covered by fact-checkers run the gamut of national and global policy issues, history, and everything in between. For example, the debate on tax reform is grounded with fact-checks of the historical and global context posited by the president. Fact-checkers have also examined his aides’ claims on the impact of the current reform proposal on the wealthy and on the deficit. They’ve also followed the claims made by House Speaker Paul Ryan, R., Wis., the leading GOP policy voice on tax reform.

Another large set of fact-checks cover health care, going back as far as this claim made in 2010 by Pelosi about job creation under healthcare reform (PolitiFact rated it “Half True.”) The most recent example is the Graham-Cassidy bill that aimed to repeal much of Obamacare. One of the most sharply contested debates about that legislation was whether or not it would require coverage of people with pre-existing conditions. Fact-checkers parsed the he-said he-said debate as it unfolded on TV news, for example examining dueling claims by Schumer and Trump.

The old stuff

The collection of Trump fact-checks includes a few dating back to 2011, long before his successful presidential campaign. Here he is at the CPAC conference that year claiming no one remembered now-former President Barack Obama from school, part of his campaign to question Obama’s citizenship. (PolitiFact rated: “Pants on Fire!”) And here he is with what FactCheck.org called a “100 percent wrong” claim about the Egyptian people voting to overturn a treaty with Israel.

This fact-check of McConnell dates back to 2009, when PolitiFact rated “false” his claim of how much federal spending occurred under Obama’s watch: “In just one month, the Democrats have spent more than President Bush spent in seven years on the war in Iraq, the war in Afghanistan and Hurricane Katrina combined.”

Meanwhile, this 2010 statement by Schumer, rated “mostly false” by PolitiFact, asserted that the U.S. Supreme Court “decided to overrule the 100-year-old ban on corporate expenditures.” The ban on giving directly to candidates is still in place; however, corporations are free to spend unlimited funds on elections providing they do so separate from a candidate’s official campaign.

The repetition

Twenty-four million people will be forced off their health insurance, young farmers have to sell the farm to pay estate tax, NATO members owe the United States money, millions of women turn to Planned Parenthood for mammograms, and sanctuary cities lead to higher crime. These are all examples of claims found to be inaccurate or misleading, but that continued or continue to be repeated by public officials.

The unexpected

Whether you lean one political direction or another, there are always surprises from the fact-checkers that can keep all our assumptions in check. For example, if you’re opposed to building a wall on the southern border to keep people from crossing into the U.S., you might guess Trump’s claim that people use catapults to toss drugs over current walls is an exaggeration. In fact, that statement was rated “mostly true” by PolitiFact. Or if you’re conservative, you might be surprised to learn an often repeated quote ascribed to Thomas Jefferson, in this case by Vice President Mike Pence, is in fact falsely attributed to him.

How to find

If you’re looking for the most recent TV news statements with fact-checks, you can see the latest offerings on the TV Archive’s homepage by scrolling down.

You can review whole speeches, scanning for just the fact-checked claims by looking for the fact-check icon on a program timeline. For example, starting in the Trump Archive, you can choose a speech or interview and see if and how many of the statements were checked by reporters.

You can also find the fact-checks in the growing table, also available to download, which includes details on the official making the claim, the topic(s) covered, the url for the corresponding TV news clip, and the link to the fact-checking article.

The Wayback Machine has an exciting new feature: it can list the dates and times, the Timestamps, of all page elements compared to the date and time of the base URL of a page. This means that users can see, for instance, that an image displayed on a page was captured X days before the URL of the page or Y hours after it. Timestamps are available via the “About this capture” link on the right side of the Wayback Toolbar. Here is an example:

The Timestamps list includes the URLs and the date and time difference, compared to the current page, for the following page elements: images, scripts, CSS and frames. Elements are presented in descending order. If you put your cursor over a list element on the page, it will be highlighted, and if you click on it you will be shown a playback of just that element.

Under the hood

Web pages are usually a composition of multiple elements such as images, scripts and CSS. The Wayback Machine tries to archive and playback web pages in the best possible manner, including all their original elements. Each web page element has its own URL and Timestamp, indicating the exact date and time it was archived. Page elements may have similar Timestamps but they could also vary significantly for various reasons which depend on the web crawling process. By using the new Timestamps feature, users can easily learn the archive date and time for each element of a page.
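To make the comparison concrete, here is a minimal sketch, not the Wayback Machine’s actual code, of how the difference between an element’s capture time and the base page’s capture time can be computed from the 14-digit timestamps (YYYYMMDDhhmmss) that appear in Wayback Machine URLs. The function name and output wording are illustrative assumptions.

```python
from datetime import datetime

def capture_delta(base_ts: str, element_ts: str) -> str:
    """Compare two 14-digit Wayback-style timestamps (YYYYMMDDhhmmss)
    and describe when the element was captured relative to the base page."""
    fmt = "%Y%m%d%H%M%S"
    base = datetime.strptime(base_ts, fmt)
    elem = datetime.strptime(element_ts, fmt)
    if elem == base:
        return "captured at the same time as the page"
    seconds = abs((elem - base).total_seconds())
    days, rem = divmod(int(seconds), 86400)
    hours = rem // 3600
    when = "after" if elem > base else "before"
    return f"captured {days} days, {hours} hours {when} the page"

# An image archived roughly three days after the base page capture
print(capture_delta("20170926120000", "20170929150000"))
# captured 3 days, 3 hours after the page
```

The same arithmetic underlies any per-element comparison: each element carries its own capture timestamp, and the display is simply the signed difference from the base URL’s timestamp.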

Why this is important

The Wayback Machine is increasingly used in critical procedures such as legal evidence or political debate material. It is important that what is presented is clear and transparent, even in the light of a web that was not designed to be archived. One of the ways a web archive could be confusing is via anachronisms, displaying content from different dates and times than the user expects. For example, when an archived page is played back, it could include some images from the current web, making it look like the image came from the past when it did not. We implemented Timestamps to provide users with more context about, and in turn hopefully greater confidence in, what they are seeing.

The Internet Archive’s Wayback Machine has preserved President Donald Trump’s deleted tweets praising failed GOP Alabama U.S. Senate candidate Luther Strange following his defeat by Roy Moore on September 26. So has the Pulitzer Prize-winning investigative journalism site ProPublica, through its Politwoops project.

The story of Trump’s deleted tweets about Strange was reported far and wide, including this segment on MSNBC’s “Deadline: White House” that aired on September 27.

In a fact-check on the legality of a president deleting tweets, linked in the TV News Archive clip above, John Kruzel reports for PolitiFact that the law is murky and still being fleshed out:

Experts were split over how much enforcement power courts have in the arena of presidential record-keeping, though most seemed to agree the president has the upper hand.

“One of the problems with the Presidential Records Act is that it does not have a lot of teeth,” said Douglas Cox, a professor at the City University of New York School of Law. “The courts have held that the president has wide and almost unreviewable discretion to interpret the Presidential Records Act.”

That said, many of the experts we spoke to are closely monitoring how the court responds to the litigation around Trump administration record-keeping.

He also provides background on that litigation, a lawsuit brought by Citizens for Responsibility and Ethics in Washington. The case is broadly about requirements for preserving presidential records, and a previous set of deleted presidential tweets is a part of it.

Fact Check: NFL attendance and ratings are way down because people love their country (Mostly false)

Speaking of Trump’s tweets, the president ignited an explosion of coverage with an early morning tweet on Sunday, Sept. 24, ahead of a long day of football games: “NFL attendance and ratings are WAY DOWN. Boring games yes, but many stay away because they love our country.”

Manuela Tobias of PolitiFact rated this claim as “mostly false,” reporting, “Ratings were down 8 percent in 2016, but experts said the drop was modest and in line with general ratings for the sports industry. The NFL remains the most watched televised sports event in the United States.” “As for political motivation, there’s little evidence to suggest people are boycotting the NFL. Most of the professional sports franchises are dealing with declines in popularity.”

How did different cable TV news networks cover the NFL protests?

We first used the Television Explorer tool to see where there was a spike in the use of the word “NFL” near the word “Trump.” It looked like Sunday showed the most use of these words. After a closer look, we saw MSNBC, Fox News, and CNN all showed the highest mentions of these terms around 2 pm Pacific.

Spike at 2 pm (PST) for CNN, MSNBC, and Fox News

Then we downloaded data from the new Third Eye project, which turns TV News chyrons into data, filtering for that date and hour. We were able to see how the three cable news networks were summarizing the news at that particular point in time.

At about 2:02, CNN broadcast this chyron: “NFL teams kneel, link arms in defiance of Trump.”

Screen grab of chyron caught by Third Eye from 2:02 pm 9/24/17 on CNN

Fox News chose the following, also seen below tweeted from one of the Third Eye twitter bots: “Some NFL owners criticize Trump’s statements on player protests, link arms with players”

Writing for FiveThirtyEight.com, Dhrumil Mehta demonstrated that both online news sites and TV news broadcasters paid less attention to Puerto Rico’s Hurricane Maria than to Hurricanes Harvey and Irma, which hit the U.S. mainland, primarily in Texas and Florida. Mehta used TV News Archive data via Television Explorer, as well as data from Media Cloud on online news coverage, to help make his case:

While Puerto Rico suffers after Hurricane Maria, much of the U.S. media (FiveThirtyEight not excepted) has been occupied with other things: a health care bill that failed to pass, a primary election in Alabama, and a spat between the president and sports players, just to name a few. Last Sunday alone, after President Trump’s tweets about the NFL, the phrase “national anthem” was said in more sentences on TV news than “Puerto Rico” and “Hurricane Maria” combined.

Join us this Saturday, September 23 @ 10:30am PT for our Experiments Day Hackathon

It’s almost that time again — October 11 — the day the Internet Archive invites you to celebrate another year of preserving our cultural heritage and the progress our community has made towards building tools that facilitate universal access to all knowledge.

Making these collections as discoverable and accessible as possible is a huge task, and we need your help! It’s often our community members who bring our items to life.

Now’s your chance!

Champions of open access, unite: This Saturday, September 23 @ 10:30am PT, join us in person at the Internet Archive HQ or join us remotely online for an Experiments Day Hackathon: a day of camaraderie and civic action fueled by fresh-ground coffee and abundant amounts of pizza.

Let’s team up to prototype experimental interfaces, remix content, and build tools to make knowledge more accessible to those who need it most.

Today the Internet Archive’s TV News Archive announces a new way to plumb our TV news collections to see how news stories are reported: data feeds for the news that appears as chyrons on the lower thirds of TV screens. Our Third Eye project scans the lower thirds of TV screens, using OCR, or optical character recognition, to turn these fleeting missives into downloadable data ripe for analysis. At launch, Third Eye tracks BBC News, CNN, Fox News, and MSNBC, and contains more than four million chyrons captured in just over two weeks.

Breaking news often appears as chyrons on TV before newscasters begin reporting or video is available, whether the subject is a hurricane or a breaking political story. Which chyrons a TV news network chooses to display often reveals editorial decisions that can demonstrate a particular slant on the news. With Third Eye data, journalists, fact-checkers, and researchers can explore how messages are delivered to the public in near real-time.

Third Eye on Twitter tweets the clearest, most representative chyron from a one-minute period on a particular TV news channel. This can serve as an alert system, showing how TV networks are reporting the news.

For example, on September 6, 2017, in the midst of a heavy news day featuring Hurricane Irma, the debate over a deal on immigration, and other stories, TV news cable networks began to show the breaking news that Facebook had turned over information about $100,000 in ads purchased by Russian sources during the 2016 elections to Robert S. Mueller III, the special counsel investigating ties between the Trump campaign and Russia. Our Third Eye CNN Twitter bot tweeted out this chyron recorded at 2:38 pm Pacific Standard Time.

However, our data do not show Fox News running any chyrons on the Facebook ad news that day. To cross-check, we used Television Explorer, a tool for searching TV News Archive closed captions. (Captions differ from chyrons; captions capture what news anchors are actually saying, as opposed to chyrons, which feature text chosen by the TV channel to run at the bottom of the screen.) Television Explorer shows CNN and MSNBC covering the story on September 6, but not Fox News.

However, the Facebook ad story did make it on to the Fox News website during the 2 p.m. hour, as this search on the Wayback Machine shows.

This is just one example of the way that researchers might use Third Eye chyron data in conjunction with other tools to explore how a particular story is portrayed on TV news. We’d love for others to dig in, explore, and give us feedback on this new public data source.

More on Third Eye data

Third Eye, the work of the Internet Archive’s TV architect Tracey Jaquith, applies OCR to the “lower thirds” of TV cable news screens to capture the text that appears there. Chyrons are not captions, which provide the text of what people are saying on screen; rather, they are narrative display text that accompanies news broadcasts.

Created in real-time by TV news editors, chyrons sometimes include misspellings. The OCR process also frequently misreads text, leading to entries that may be garbled. To make sense out of the noise, Jaquith applies algorithms that choose the most representative chyrons from each channel collected over 60-second increments. This cleaned-up feed is what fuels the Twitter bots that post which chyrons are appearing on TV news screens.
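Third Eye’s actual selection algorithm isn’t published here, but the idea of picking the most representative chyron from a noisy 60-second window can be sketched with a simple similarity “medoid” heuristic. This is an illustrative assumption, not the project’s real code, and the function name and sample data are made up.

```python
from difflib import SequenceMatcher

def most_representative(chyrons):
    """Return the chyron most similar, on average, to all the others in
    the window (a simple medoid heuristic). Garbled OCR variants agree
    poorly with the rest of the window, so they score low."""
    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    best, best_score = None, -1.0
    for i, candidate in enumerate(chyrons):
        # Total similarity of this candidate to every other reading
        score = sum(similarity(candidate, other)
                    for j, other in enumerate(chyrons) if j != i)
        if score > best_score:
            best, best_score = candidate, score
    return best

# One minute of OCR'ed lower-thirds, with typical recognition noise
window = [
    "NFL TEAMS KNEEL, LINK ARMS IN DEFIANCE OF TRUMP",
    "NFL TEAMS KNEEL. LINK ARMS IN DEF1ANCE OF TRUMP",  # OCR misreads
    "NFL TEAMS KNEEL, LINK ARMS IN DEFIANCE OF TRUMP",
    "NF TEAMS KNEL LINK AR",                            # truncated read
]
print(most_representative(window))
```

The design intuition: a clean reading tends to recur across frames within the minute, so it agrees with more of the window than any one-off OCR error does.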

We provide options to download this filtered feed and/or the raw feed nearly as soon as it appears on the TV screen. Both may be useful depending on the type of project. In addition, the Twitter feed itself is a good source to see what the filtered feed looks like.

Some notes:

Chyrons are derived in near real-time from the TV News Archive’s collection of TV news. The constantly updating public collection contains 1.4 million TV news shows, some dating back to 2009.

Data can be affected by temporary collection outages, which typically can last minutes or hours, but rarely more. If you are concerned about a specific time gap in a feed and would like to know if it’s the result of an outage, please inquire at tvnews@archive.org.

The “raw feed” option provides all of the OCR’ed text from chyrons at the rate of approximately one entry per second. The “filtered tweets feed” provides the data that fuels our Twitter bots; this has been filtered to find the most representative, clearest chyrons from a 60-second period, with no more than one entry/tweet per minute (though the duration may be shorter than 60 seconds). The filtered feed relies on algorithms that are a work in progress; we invite you to share your ideas on how to effectively filter the noise from the raw data.

We want to hear from you! Please contact us with questions, feedback, concerns – and also to tell us what project you’ve done with the TV News Archive’s Third Eye project: tvnews@archive.org. Follow us @tvnewsarchive, and subscribe to our weekly newsletter here.