FLI August, 2019 Newsletter

AI in China & More

Discussions of Chinese artificial intelligence frequently center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond the arms race narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. Listen here.

Coming Soon: Not Cool, a Climate Podcast

We’re thrilled to announce the upcoming launch of our new podcast series, Not Cool with Ariel Conn, which will go live on Tuesday, September 3. Through interviews with climate experts from around the world, Not Cool will dive deep into the climate crisis, exploring its causes and impacts and examining solutions.

We’ll talk about what we know and what’s still uncertain. We’ll break down some of the basic science behind climate change, from the carbon cycle to tipping points and extreme weather events. We’ll look at the challenges facing us, and at the impacts on human health, national security, biodiversity, and more. And we’ll learn about carbon finance, geoengineering, adaptation, and how individuals and local communities can take action.

Let’s talk about what’s happening. Let’s build momentum. And let’s make change. Because climate change is so not cool.

Podcast Survey

How can we make our podcasts a better resource for you? Please take this short survey and let us know what works, what doesn’t, and what you’d like to see in the future!

Recent Articles

As we move towards a more automated world, tech companies are increasingly faced with decisions about how they want — and don’t want — their products to be used. Perhaps most critically, the sector is in the process of negotiating its relationship to the military, and to the development of lethal autonomous weapons in particular. Some companies, including industry leaders like Google, have committed to abstaining from building weapons technologies; others have wholeheartedly embraced military collaboration. In a new report titled “Don’t Be Evil,” Dutch advocacy group PAX evaluated the involvement of 50 leading tech companies in the development of military technology. Read our summary of their findings here.

As data-driven learning systems continue to advance, it would be easy enough to define “success” according to technical improvements, such as increasing the amount of data algorithms can synthesize and, thereby, improving the efficacy of their pattern identifications. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly. Read our overview of current value alignment research trends here.

What We’ve Been Up to This Month

Max Tegmark was a keynote panelist at a Johns Hopkins University Applied Physics Laboratory event entitled The Future of Humans and Machines: Assuring Artificial Intelligence. Ariel Conn and Jared Brown collaborated with APL/JHU on planning and also attended.

Richard Mallah co-chaired and presented on the AI safety landscape at AISafety 2019, a workshop at the International Joint Conference on AI (IJCAI) in Macau.
