
GUEST: Recently I interviewed Clare Gollnick, CTO of Terbium Labs, on the reproducibility crisis in science and its implications for data scientists. Judging by the number of comments we’ve received via the show notes page and Twitter, the podcast really resonated with listeners, for several reasons.

Humans play an indispensable role in many modern AI-enabled services – not just as consumers of the service, but as the actual intelligence behind the artificial intelligence. From news portals to e-commerce websites, it is people’s ratings, clicks, and other interactions that provide the teaching signal the underlying intelligent systems learn from. While these human-in-the-loop systems improve through user interaction over time, they must also provide enough short-term benefit to be helpful to people. Continue reading “Improving AI Systems with Human Feedback and no Heartburn”
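The feedback loop described above can be sketched as a toy epsilon-greedy bandit that learns which item users prefer from their clicks. This is only an illustration under simplifying assumptions; the item names, click rates, and reward model are all hypothetical, not drawn from any particular service.

```python
import random

random.seed(0)  # fixed seed so the toy simulation is reproducible

class ClickFeedbackRanker:
    """Toy human-in-the-loop recommender: an epsilon-greedy bandit
    that learns item quality from user clicks. Illustrative only."""

    def __init__(self, items, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {item: 0 for item in items}
        self.shows = {item: 0 for item in items}

    def rate(self, item):
        # Click-through estimate with a +1/+2 prior so unseen items get tried.
        return (self.clicks[item] + 1) / (self.shows[item] + 2)

    def choose(self):
        # Mostly exploit the best-known item; occasionally explore at random.
        if random.random() < self.epsilon:
            return random.choice(list(self.clicks))
        return max(self.clicks, key=self.rate)

    def record(self, item, clicked):
        self.shows[item] += 1
        if clicked:
            self.clicks[item] += 1

ranker = ClickFeedbackRanker(["article_a", "article_b"])
for _ in range(1000):
    item = ranker.choose()
    # Simulated users click article_b more often (60% vs 20%).
    ranker.record(item, random.random() < (0.6 if item == "article_b" else 0.2))
```

After enough interactions, the bandit's click-rate estimates converge toward the simulated preferences, and it shows the preferred item most of the time – the short-term benefit and long-term learning the paragraph above describes.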

Big data is a big deal, and if you follow the popular technical press, you’ll have heard all the metaphors: data is the new oil, the new bacon, the new currency, the new electricity. It’s even been called the new black. While data may not actually be any of these things, we can say this: in today’s networked world, data is increasingly valuable, and it is essential to research, both basic and applied. Continue reading “Getting Linked In to Data Science with Dr. Igor Perisic”

Artificial Intelligence (AI) systems can perform amazing feats of problem-solving. But no matter how accurate AI solutions are, they won’t be relevant, insightful, or adopted by people without great design work.

The practice of design is about problem solving. It starts long before the visual look and feel is created and continues long afterward. It creates a vital connection between humans and machines that allows AI systems to perform at their best.

Developers are working on tools that can help spot suspect stories and call them out, but it may be the beginning of an automated arms race.

“Testing a demo version of the AdVerif.ai, the AI recognized the Onion as satire (which has fooled many people in the past). Breitbart stories were classified as “unreliable, right, political, bias,” while Cosmopolitan was considered “left.” It could tell when a Twitter account was using a logo but the links weren’t associated with the brand it was portraying. AdVerif.ai not only found that a story on Natural News with the headline “Evidence points to Bitcoin being an NSA-engineered psyop to roll out one-world digital currency” was from a blacklisted site, but identified it as a fake news story popping up on other blacklisted sites without any references in legitimate news organizations.”
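One check the quoted passage describes is comparing a story's source domain against a blacklist and looking for corroborating references from legitimate outlets. A minimal sketch of that idea follows; the domain names, the blacklist, and the `flag_story` function are made up for illustration and are not AdVerif.ai's actual logic.

```python
from urllib.parse import urlparse

# Hypothetical blacklist of known fake-news domains (illustrative only).
BLACKLIST = {"naturalnews.example", "fakesite.example"}

def flag_story(url, cross_references=0):
    """Flag a story when its domain is blacklisted; escalate to
    'likely fake' when no legitimate outlet references it.
    A toy version of the check described in the quote above."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in BLACKLIST and cross_references == 0:
        return "likely fake"
    if domain in BLACKLIST:
        return "blacklisted source"
    return "no flag"
```

For example, a blacklisted story with zero references in legitimate news organizations would be flagged as likely fake, while the same domain with corroboration would only be marked as a blacklisted source.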

John McCarthy coined the term “artificial intelligence” (AI) in 1956 at a conference at Dartmouth College. AI means that machines can mirror the functions of the human brain in applications such as problem-solving in mathematics. Since then, interest in AI has grown exponentially. Recently, Apple published an AI paper of its…

In early October, two scientists shared a software program that detects incorrect gene sequences in already-published research experiments. Using the program, the duo identified experimental flaws in more than 60 papers within cancer research alone. Scientists Jennifer Byrne and Cyril Labbé combined their expertise in cancer research and computer science to introduce the software “Seek &…

Tech giants such as Google and Baidu spent between $20 billion and $30 billion on AI last year, according to a recent McKinsey Global Institute study. Of that spending, 90 percent fueled R&D and deployment, and 10 percent went toward AI acquisitions.

Research plays a crucial role in the AI movement, and tech giants must do everything in their power to remain credible to the AI community. AI advances largely through published research and state-of-the-art technology that moves very quickly. As a result, there is little business case for closed infrastructure solutions: within a few months, everything will look completely different.

A paper published on arXiv by researchers from Stanford describes a deep neural network that can look at a patient’s records and estimate the probability of death within the next three to 12 months. The team found that this is a good way to identify patients who could benefit from palliative care. Importantly, the algorithm also generates reports that explain its predictions to doctors.
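The idea of scoring a patient record and explaining the score can be sketched with a simple logistic model. To be clear, this is not the Stanford team's network: the features, weights, and helper functions below are hypothetical, chosen only to show how a risk score and a per-feature explanation might be produced.

```python
import math

# Hypothetical feature weights; NOT the parameters of the Stanford model.
WEIGHTS = {"age": 0.03, "admissions_last_year": 0.4, "icu_stays": 0.6}
BIAS = -4.0

def mortality_risk(record):
    """Logistic score in (0, 1) for 3-12 month mortality risk,
    computed from a dict of patient features. Illustrative only."""
    z = BIAS + sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(record):
    """Per-feature contributions to the score, largest first -
    a toy analogue of the explanatory reports mentioned above."""
    contributions = ((k, WEIGHTS[k] * record.get(k, 0)) for k in WEIGHTS)
    return sorted(contributions, key=lambda kv: -kv[1])
```

A record with more admissions and ICU stays scores higher than a healthier baseline, and `explain` surfaces which features drove the score, mirroring the paper's emphasis on making predictions interpretable for doctors.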