Group 5 – OpenNews: Critical Response

In this update, we discuss and address some of the concerns raised by critics of our system. We have chosen to omit generally positive comments and focus on areas that need improvement or better definition.

Notable Responses

Reviewer: Kari

Process and Methods: 5 / 5. “Lots of supporting documentation/research. I wish they approached them with a more [unknown word] eye.”
Response: That is true; during our research we may have been looking for evidence that supports trends rather than evidence that contradicts them.

Quality of Proposed System: 3 / 5. “I’m concerned about the desire to eliminate bias from news, as we discussed during the feedback session. The aims are [unknown], just I’m not sure that the prototype will accomplish them.”
Response: The reviewer is pointing out that we may have trouble eliminating bias in our prototype, or that the prototype will not be robust enough to demonstrate our aims. We will address this more closely in the coming weeks as we flesh out our presentation ideas. Ultimately, we realize that our idea is too ambitious to implement in full; however, we believe we can implement aspects of it (including a rudimentary classification system) that will make a compelling argument for the complete system.
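To make "rudimentary classification system" concrete, the sketch below shows one minimal way such a prototype could start: a bag-of-words naive Bayes classifier in plain Python. The training snippets, labels, and word choices here are hypothetical placeholders for illustration only, not part of our actual design or data.

```python
import math
from collections import Counter

# Hypothetical hand-labeled snippets; a real system would use
# crowd-sourced labels rather than these placeholders.
LABELED = [
    ("the senator's disastrous reckless plan", "biased"),
    ("critics slammed the outrageous decision", "biased"),
    ("the bill passed by a vote of 52 to 48", "unbiased"),
    ("the report was released on tuesday", "unbiased"),
]

def train(examples):
    # Per-class word counts and per-class total word counts.
    counts = {"biased": Counter(), "unbiased": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    # Log-probability under each class with add-one smoothing;
    # return the class with the higher score.
    vocab_size = len({w for c in counts.values() for w in c})
    best = None
    for label in counts:
        score = sum(
            math.log((counts[label][word] + 1) / (totals[label] + vocab_size))
            for word in text.lower().split()
        )
        if best is None or score > best[1]:
            best = (label, score)
    return best[0]

counts, totals = train(LABELED)
print(classify("a reckless and disastrous proposal", counts, totals))  # biased
```

Even a toy model like this would let us demonstrate the flag-the-bias workflow end to end in a presentation, while making the reviewer's point obvious: the classifier can only reflect whatever biases exist in its labeled training data.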

Reviewer: Ellis

Process and Methods: 4 / 5. “I was curious about related work. I saw a lot of what was wrong but not of anything with similar solution”.
Response: This comment highlights the importance of understanding our system and its roots. That is to say, we should have done, and still should do, more research into similar existing systems such as Wikinews and Politifact in order to clearly demonstrate how we will address their weak points and problems. Generally, we believe our system gets its strength and distinction from an intuitive, clean user experience, a robust natural language processing backend, and a strong crowd-driven experience with defined checks and balances.

Reviewer: Wisnioski

Presentation and Communication: 3 / 5. “Strong desire to tackle key issue. Is objectivity possible in a media environment?”
Response: We need to research and analyze whether objectivity really is possible in a media environment and present that in a clear manner. We believe this really comes down to better defining how we are quantifying the quality of news and preventing the introduction of slow-moving, hard-to-see algorithmic biases. We will begin to address this more closely in the coming weeks as we begin to think about our prototype more.

Process and Methods: 3.5 / 5. “Lots of exciting literature on this. What systems currently exist? (re: politifact)”.
Response: This question was addressed in our response to Ellis above. The fact that it came up again indicates that we should prioritize this discussion.

Quality of Proposed System: 3.5 / 5. “Important domain space, I suggest focusing on an element of “news” that especially fits your model”.
Response: We should identify the elements of news that best fit our model and focus on those. We likely do not yet know enough about news itself.

Reviewer: Zach Duer

Process and Methods: 4 / 5. Also commented next to the bullet point of “is there an appropriate review of related work and existing projects?” with “not enough” then commented “I’m deeply concerned about the idea that NLP can be trained on unbiased vs biased articles, and that it wouldn’t understand bias-by-omissions for example, and would reflect the bias of the people labeling articles as biased/unbiased for training”.
Response: We need to present more clearly our solution for avoiding bias in labeling: crowdsourcing labels from people of all demographics and requiring each article to reach a certain percent agreement on whether or not it is biased. Bias by omission is a strong concern, but we hope the open-source aspect of OpenNews will encourage contributors to add omitted details and refine the algorithms.
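The percent-agreement rule described above could be as simple as the following sketch. The threshold and minimum vote count are hypothetical placeholders, not decided values:

```python
from collections import Counter

AGREEMENT_THRESHOLD = 0.8  # hypothetical: 80% of reviewers must agree
MIN_VOTES = 5              # hypothetical minimum number of reviewers

def consensus_label(votes):
    """votes: list of 'biased'/'unbiased' strings from reviewers.
    Returns the agreed label, or None if there is no consensus yet."""
    if len(votes) < MIN_VOTES:
        return None  # not enough reviewers yet
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= AGREEMENT_THRESHOLD:
        return label
    return None  # reviewers are split; keep collecting votes

print(consensus_label(["biased"] * 4 + ["unbiased"]))  # 'biased' (4/5 = 80%)
print(consensus_label(["biased", "unbiased"] * 3))     # None (50/50 split)
```

A rule like this only addresses labeler disagreement, not the deeper concern that a demographically skewed reviewer pool would agree on skewed labels, which is why recruiting across demographics matters.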

Quality of Proposed System: 5 / 5. “Yes, great idea… is an AI for automatic first-layer WikiNews editing, makes total sense”.
Response: Let’s ensure we continue to focus on our AI and continue defining it. This is after all what makes our system unique.

Key Notes and Details

We need to explore and elaborate on our NLP ideas more: how exactly do we quantify the quality of news? We don’t want to focus on eliminating bias, but rather on making it clear when bias exists.

How do we keep our NLP from being trained improperly, such that biases are introduced through less obvious avenues (e.g., bias by omission)?

How bad is bias? Is purely factual, unbiased news worth anything? We should mock up examples of what we consider ideal and non-ideal articles.

There were two comments regarding pre-existing systems, notably Wikinews and Politifact; we should explore and understand exactly what these systems did wrong and how ours improves on them.

We need to better define what “news” is to OpenNews

Progress Update

We’ve started mocking up some designs for OpenNews; these were shown to critics as part of the review process this post covers. The mockups help us understand what news looks like and what information is important to a reader.

Article View Mockup

Homepage View Mockup

This class, we’ve also determined the materials needed for our final project. So far, this includes: