Audio & Video

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday. AI with AI is mixed and edited by John Stimpson.

Season 3

Episode 3.31

June 5, 2020

In COVID-related AI news, Andy and Dave discuss work from Mount Sinai researchers, who have created an AI system that uses CT scans to diagnose patients with COVID-19. MIT and IBM Watson announce plans to fund 10 AI research projects to fight COVID-19. The National Security Commission on AI releases its second COVID-19 white paper, on mitigating the economic impacts of the pandemic and preserving US strategic competitiveness in AI. In non-COVID AI news, DARPA’s Gamebreaker project holds a virtual kickoff meeting of its program, which seeks to model and then break game balance. The United Nations Secretary-General releases a report on the protection of civilians in armed conflict. And the JAIC unveils its "business process transformation" initiative. In research, Hong Kong University of Science and Technology publishes research on EC-Eye, an artificial eye that "sees" like a human eye. Other research demonstrates that neural networks trained for prediction mimic the diverse features of biological neurons and perception. And NVIDIA, the University of Toronto, the Vector Institute, and MIT publish GameGAN, a generative model that learns to visually imitate a desired video game (in this case, observing and replicating the gameplay of Pac-Man). The report of the week comes from NATO, which publishes a look at S&T Trends 2020-2040. Wolfgang Ertel pens the book of the week, with Introduction to Artificial Intelligence (2nd Edition), free through Springer. The University of Southern California hosts a virtual symposium, AI for COVID-19 in LA. And a collaboration between Google and the Getty Museum produces Art Transfer, which transforms photos using the styles of different artists.

Video of the Week

AI For COVID-19 in LA: A Virtual Symposium

Fun Site of the Week

Episode 3.30

May 29, 2020

In COVID-19-related AI news, Andy and Dave discuss the Novel Coronavirus Research Compendium, a curated resource from Kate Grabowski, an epidemiologist at Johns Hopkins, with the goal of providing a smaller but higher-quality data set on coronavirus research. Primer AI uses natural language processing to summarize the latest information on COVID-19. C3.ai provides a COVID-19 data lake with accompanying knowledge graphs, ready for use with R or Python. On 1 June, the Stanford Institute for Human-Centered AI will host a free virtual conference on the way ahead for AI as the world recovers from COVID-19. And Singapore hires Boston Dynamics’ Spot to roam a public park (controlled by a park ranger) and play a recorded message to encourage social distancing. In non-COVID news, Sony unveils the first line of cameras with a built-in image-classifying AI. Researchers at UCLA and Baylor demonstrate the ability to dynamically stimulate the brain cortex to mirror the motion of writing. Booz Allen wins an $800M contract to support the Joint AI Center with AI services. And Thomas Dimson uses GPT-2 to create and define words that don’t exist.

And August Cole, best-selling author, lecturer, and consultant on national security issues, joins for a discussion on fiction intelligence (FICINT), the role that it plays in thinking about possible futures, and how those ideas play out in his latest book, Burn-In.

NATO: Science and Technology Trends 2020-2040

Book of the Week

Introduction to Artificial Intelligence: Second Edition

Interview: August Cole

Episode 3.29

May 22, 2020

In COVID-related AI news, Andy and Dave discuss an approach from FiveThirtyEight that uses a mini-model-ensemble to predict possible trajectories for the COVID-19 death toll. MIT Tech Review has released a tracker for COVID-19 tracing trackers, which includes information on how they work and what policies they have in place. In non-COVID-related AI news, DIU releases a solicitation for Vigilante Keeper, an AI solution for detecting behavioral changes that might indicate increased vulnerability. OpenAI releases an analysis showing that the amount of computation needed to train an ImageNet classifier decreases by a factor of 2 every 16 months, which suggests algorithmic progress has produced more gains than improved hardware efficiency. The Library of Congress is using machine learning to digitize and organize photos from old newspapers. Microsoft unveils a new tool in Word that makes sentence-level suggestions. And MIT Tech Review Insights publishes an examination of Asia’s advantage in AI, with a look at the Asia-Pacific region in the Global AI Agenda. In research, Andy and Dave discuss RTFM (Read to Fight Monsters) from Facebook, which uses roguelike procedural generation to dynamically create goals, monsters, and other attributes, which agents then attempt to fight. The book of the week comes from Miroslav Kubat, with the second edition of An Introduction to Machine Learning. The Australian Defence College has announced the winners of its 2020 Sci-Fi Writing Competition. The full documentary AlphaGo – The Movie is now available on YouTube. The proceedings are now available from a federal health virtual forum on AI for COVID-19 Response. CSET will host a discussion on 27 May on lessons learned for Algorithmic Warfare in the DoD. And a LessWrong post by Stuart Armstrong takes a look at Kurzweil's predictions (from 1999) about 2019.
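As a rough sanity check on that OpenAI trend (an illustrative back-of-the-envelope sketch only; the function name is ours, not OpenAI's): a halving of required compute every 16 months compounds as 2^(months/16), so the roughly 84 months from 2012 to 2019 imply about a 38x efficiency gain, the same order as the ~44x OpenAI reports.

```python
# Hypothetical back-of-the-envelope calculation (not OpenAI's code):
# if the compute needed to reach AlexNet-level ImageNet accuracy halves
# every 16 months, the implied efficiency multiple over a span of months
# is 2 ** (months / 16).

def efficiency_gain(months, doubling_period_months=16):
    """Implied training-efficiency multiple after `months` of progress."""
    return 2 ** (months / doubling_period_months)

# 2012-2019 is roughly 84 months:
print(round(efficiency_gain(84), 1))  # → 38.1
```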

Algorithmic Warfare in the DoD: Lessons Learned

Fun Site of the Week

Episode 3.28

May 15, 2020

Andy and Dave discuss a white paper from the National Security Commission on AI on Privacy and Ethics Recommendations for Computing Applications Developed to Mitigate COVID-19. The Office of the Under Secretary of State for Arms Control and International Security issues a second point paper on lethal autonomous weapons systems, AI, Human-Machine Interaction, and Autonomous Weapons. DARPA announces its Air Space Total Awareness for Rapid Tactical Execution (ASTARTE) program, which aims to use low-cost sensors to create a better common operating picture. The Joint AI Center establishes a Data Governance Council to create an enterprise-wide data governance framework. The JAIC also releases an AI Primer for DoD officials. The U.S. Patent and Trademark Office denies patents on behalf of AI systems. And Google Health describes the challenges in transitioning a system designed to detect diabetic eye disease to clinical environments. In research from the University of Bordeaux, researchers demonstrate the ability to give algorithms intrinsically motivated goal exploration, enabling them to search out interesting patterns in Lenia, a continuous analog of Conway's Game of Life. A review paper provides an overview of how neural networks sometimes take "shortcuts" in learning. Peters, Janzing, and Schölkopf, with MIT Press, make Elements of Causal Inference freely available. The International Conference on Learning Representations makes its 2020 session available through a slick interface covering its nearly 700 papers. And OpenAI releases Jukebox, an attempt to create music of a specified style when given lyrics.
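For context on Lenia: it generalizes Conway's Game of Life from binary cells and a fixed eight-cell neighborhood to continuous states, smooth convolution kernels, and a continuous growth rule. As a point of reference only (an illustrative sketch, not code from the Bordeaux paper), one step of the discrete Game of Life that Lenia generalizes can be written as:

```python
from collections import Counter

def life_step(live):
    """One Game of Life step; `live` is a set of (x, y) live-cell coordinates."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbors,
    # or with 2 live neighbors if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Lenia replaces the neighbor count with a smooth kernel convolution and the alive/dead rule with a continuous growth function, which is what makes its "lifeforms" so organic-looking.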

Fun Site of the Week

OpenAI Releases Jukebox

Episode 3.27

May 8, 2020

In COVID-related AI news, hospitals across the US are using an AI system called the Deterioration Index to provide a snapshot of patients’ risks, even though the software has not yet been validated as effective for those with COVID-19. Meanwhile, Qure.ai has retooled its qXR system, designed for chest x-rays, to detect COVID-induced pneumonia; a preliminary validation study with 11,000 images found 95% accuracy in distinguishing patients with and without COVID-19. The Digital Ethics Lab at the University of Oxford has provided a set of ethical guidelines (16 yes/no questions) for those making COVID-19 Digital Tracking and Tracing (DTT) systems. And Carnegie Mellon provides five interactive maps for COVID-related issues in the US. The Joint AI Center unveils Salus, a prototype AI tool for examining where COVID-19 might impact logistics and supply chains. And Reuters takes the time to debunk a false claim about the relation of AI to COVID-19. In regular AI news, Washington State passes major facial recognition legislation, defining how state and local government may use facial recognition. DARPA selects Georgia Tech and Intel to lead its Guaranteeing AI Robustness against Deception (GARD) program. And the Association for the Understanding of AI launches AIhub.org, to connect the public and the AI community. In research, two German institutes investigate the roles of different neurons in neural networks and find populations that serve different functions; moreover, these populations can be extracted into a new network without having to train that network on the same knowledge. Research from Bar-Ilan University demonstrates human brain learning mechanisms that outperform common AI learning algorithms; for example, observing the same image 10 times in one second proved more effective than observing the same image 1,000 times over one month. The book of the week comes from Matthieu Thiboust, with Insights from the Brain, which aims to provide "neuroscience chunks of information related to AI." And CBS News 60 Minutes has a report on BlueDot, the company that warned its clients about the COVID-19 outbreak a week before the CDC.

Shortcut Learning in Deep Neural Networks

Book of the Week

Insights From the Brain: The Road Towards Machine Intelligence

Video of the Week

AI for Full-Self Driving

Resource of the Week

Episode 3.26

May 1, 2020

Andy and Dave discuss the initial results from King’s College London’s COVID Symptom Tracker, which found fatigue, loss of taste and smell, and cough to be the most common symptoms. MIT’s CSAIL and a clinical team at Heritage Assisted Living announce Emerald, a wi-fi box that uses machine learning to analyze wireless signals and (non-invasively) record a person’s vital signs. Landing AI has developed a tool that monitors the distance between people and can send an alert when they get too close. And Johns Hopkins University updates its COVID tracker to provide greater levels of detail on information in the US. In non-COVID news, OpenAI releases Microscope, which contains visualizations of the layers and neurons of eight vision systems (such as AlexNet). The JAIC announces its “Responsible AI Champions” for AI Ethics Principles, and also issues a new RFI for new testing and evaluation technologies. In research, Udrescu and Tegmark publish AI Feynman, an improved algorithm that can find symbolic expressions matching data from an unknown function; they apply the method to 100 equations from Feynman’s Lectures on Physics, and it rediscovers all of them. The report of the week comes from nearly 60 authors across 30 organizations, a publication on Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. The review paper of the week provides an overview of the State of the Art on Neural Rendering. The book of the week takes a look at the history of DARPA, in Transformative Technologies: Perspectives on DARPA. Stuart Kauffman gives his thoughts on complexity science and prediction as they relate to COVID-19. The ELLIS society holds its second online workshop on COVID-19 on 15 April. Matt Reed creates Zoombot, a personalized chatbot to take your place in Zoom meetings. And Ali Aliev creates Avatarify, which makes you look like somebody else in real time for your next Zoom meeting.

Fun Sites of the Week

Zoombot – Automated Video Meeting AI

Avatarify face-swaps your own face with a celebrity in live video calls

Episode 3.25

April 24, 2020

In COVID-related AI research, Andy and Dave discuss the joint announcement from Apple and Google on creating a voluntary COVID-19 tracing system that makes use of Bluetooth and anonymous crypto keys. A report in the BMJ screened 27 recent studies describing 31 COVID prediction models and found that all of the studies had a high risk of bias and that the reported performance of the models was probably optimistic. The Allen Institute for AI has updated its COVID-19 Open Research Dataset (CORD-19) to include “CoViz,” an AI-powered graph visualization tool. And mathematician John Conway, creator of the Game of Life, died at 82 from complications due to COVID-19. In non-COVID AI news, the National Security Commission on AI releases its 1st Quarter recommendations to Congress. Google Brain applies a deep RL algorithm to the placement optimization problem for computer chip design. And MIT has provided a hub for AI learning for K-12 students. In research, Facebook AI, Oregon State, and the Georgia Institute of Technology describe efforts at combining vision and language representation learning with ViLBERT (vision-and-language BERT), resulting in a single model that can perform multiple tasks, and even leads to improvements in single-task performance. The United Nations Institute for Disarmament Research releases its report on Swarm Robotics. A research paper from Princeton shows that prediction of life outcomes (e.g., likelihood of layoff, material hardship, GPA, etc.) is still really hard. Joseph Blitzstein and Jessica Hwang provide their 2014 edition of Introduction to Probability for free. The Marine Corps University Press freely releases its 2019 Destination Unknown, a collection of short stories written and illustrated by Marines. And the New York Times publishes a Special Report on AI.

Episode 3.24

April 17, 2020

In COVID-related AI topics, Andy and Dave discuss an emerging crop (no fewer than three!) of COVID-19 cough detectors that attempt to diagnose the presence of COVID by various voice measurements. In a similar vein, but for different purposes, the U.S. drone maker Draganfly announces it is working with the Australian Department of Defence to produce “pandemic drones,” which can detect coughing, sneezing, and respiratory rate at a distance. Folding@home has shifted its crowdsourced computational power toward the COVID-19 problem set. In non-COVID news items, researchers at the University of California San Francisco have used deep learning algorithms to translate human brain signals for a set of 250 unique words, by recording brain signals for sentences as patients read them. In research, Uber AI and OpenAI announce their Enhanced POET (Paired Open-Ended Trailblazer), which uses a procedural environment to create problems (gaps, stumps, stairs) that the agent then learns to solve, producing a diverse range of sophisticated behaviors. DeepMind reveals Agent57, the first reinforcement learning agent capable of surpassing the human benchmark for all 57 Atari games (though it still must be trained on each individually), using Never Give Up (NGU) memory to identify new environments, as well as components that encourage exploration. The Survey of the Week takes a look at the development of deep learning for scientific discovery. A report from the BMJ suggests that studies claiming that AI outperforms doctors are “arguably exaggerated,” with a high risk of bias identified in 58 out of 81 studies. A New Conception of War, by Ian Brown, makes the Free Book of the Week, coming from the Marine Corps University Press; among many important concepts, it stresses the importance of debate and intellectual exploration among professional warfighters. Johns Hopkins APL is hosting a virtual event on Operationalizing AI in Health on 21 April. And Intelligent Health Inspired! seeks to hold the largest summit on the use of AI in medicine, with particular focus on COVID-19, on 25-27 May.

Free Non-Technical Book of the Week

A New Conception of War

Virtual Conferences of the Week

Episode 3.23

April 10, 2020

Jvion has provided an online mapping tool, a “COVID Vulnerability Map,” to view regions of the United States and see the areas most vulnerable to issues related to COVID. A video clip from Tectonix uses anonymized crowdsourced data to show how Spring Breakers at one Fort Lauderdale beach spread back across the United States, demonstrating the ease with which a virus *could* spread. A new initiative from Boston Children’s Hospital and Harvard Medical School, “COVID Near You,” seeks to create a real-time way to get crowdsourced inputs on potential COVID infections. Kinsa, maker of smart thermometers, uses its information in an attempt to show county-level spread of COVID-19. On 23 March, CIFAR convened an International Roundtable on AI and COVID-19, which had over 60 participants; among other points, the group noted the stark gap between the data available to governments and what is available to epidemiologists and modelers. The C3.ai Digital Transformation Institute, a newly formed research consortium dedicated to accelerating applications of AI, seeks research proposals for AI tools to help curb the effects of the coronavirus. The European Commission is seeking ideas for AI and robotic solutions to help combat COVID-19. The New York Times builds the first U.S. county-level COVID-19 database. The Complexity Science Hub Vienna compiles a dataset of country- and U.S. state-level policy changes related to COVID-19. The Stanford Institute for Human-Centered AI convenes a virtual conference on 1 April on COVID-19 and AI. And the ELLIS Society sponsors an online workshop on COVID-19 and AI. Finally, AI with AI producer John Stimpson interviews Dr. Alex Wong, co-founder of DarwinAI and Euclid Labs, on COVID-Net, an open-sourced convolutional neural network for detecting COVID-19 in chest x-rays.

Virtual Conferences of the Week

COVID-19 and AI: A Virtual Conference

Online Workshop on COVID-19@ELLIS

Episode 3.22

April 4, 2020

In COVID-related news, Andy and Dave discuss ClosedLoop.ai and its release of an open-source toolkit for identifying people vulnerable to COVID-19. A Korean biotech company, Seegene, announces that it has used AI to create a coronavirus test. DarwinAI and researchers at the University of Waterloo announce COVID-Net, a convolutional neural network for detecting COVID-19 in chest x-rays. In non-COVID news, the White House releases its first annual report on AI. The U.S. intelligence community describes its interest in using explainable and interpretable AI. Microsoft introduces a checklist that attempts to bridge the gap between the AI-ethics community and ML practitioners. And House Science Committee members introduce the National AI Initiative Act, which aims to accelerate and coordinate federal investments in AI. In research, the NIH monitors brains replaying memories in real time, by examining neuron firing patterns for word-pair associations (such as camel and lime). Facebook AI Research announces Rewarding Impact-Driven Exploration (RIDE), in which agents are encouraged to take actions that have significant impact on the environment state. Researchers from the WHO and other institutions examine the landscape of AI applications against COVID-19. Andrea Gilli publishes The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction, a collection of papers on the topic. And David Foster’s book on Generative Deep Learning becomes available for free.

Survey of the Week

Mapping the Landscape of AI Applications Against COVID-19

Episode 3.21

March 27, 2020

Not surprisingly, COVID-19 has taken over the news section, but still as it all relates to AI and machine learning. Andy and Dave discuss the COVID-19 Open Research Dataset, a free resource of over 29,000 scholarly articles on the coronavirus family, made available by the Allen Institute, CSET, CZI, Microsoft Research, NIH, and the White House OSTP. In similar news, over 100 organizations have signed a Wellcome-coordinated statement to make COVID-19 research and data open for access. The New England Complex Systems Institute provides a host of pandemic resources online. The CDC is using machine learning to forecast COVID-19 (adapting its efforts in forecasting influenza outbreaks). And Anodot launches a public machine learning-driven service to track COVID-19. In research, somehow not COVID-19 related, Google Brain and Google Research demonstrate AutoML-Zero, which discovers complete machine learning algorithms by using basic mathematical functions as building blocks. The report of the week comes from the Complex Multilayer Networks Lab, along with Harvard, which provides a COVID-19 Infodemics Observatory, processing more than 100M tweets to quantify various sentiments as well as the reliability of information from around the globe (with Singapore topping the list for most reliable information). David Barber provides Bayesian Reasoning and Machine Learning for free. And the Bipartisan Commission on Biodefense and Max Brooks provide Germ Warfare: A Very Graphic History (published in 2019).

Free Technical Book of the Week

Bayesian Reasoning and Machine Learning

Free Non-Technical Book of the Week

Germ Warfare: A Very Graphic History – the last part is very timely!

Episode 3.20

March 20, 2020

In news items, Andy and Dave discuss an effort by Boston Children’s Hospital to use machine learning to help track the spread of COVID-19. Meanwhile, a proposal from researchers wants to use mobile phones to track the virus’s spread. Fifty-two organizations have come together to develop the “first-ever industry-led” standard for AI in healthcare. The National Oceanic and Atmospheric Administration (NOAA) announces its AI strategy. And IBM and Promare begin sea trials for Mayflower, an autonomous ship that, later this year, will make the reverse of the 1620 Mayflower transit, completely unmanned. In research, Google and Columbia University enable a robot to teach itself how to walk with minimal human intervention (by bounding the terrain and making the robot’s trial movements more cautious). Researchers at Harvard, MIT CSAIL, the MIT-IBM Watson AI Lab, and DeepMind introduce CLEVRER (Collision Events for Video Representation and Reasoning), a diagnostic video dataset for the evaluation of models on a wide range of reasoning tasks. And DeepMind proposes a new reinforcement learning technique that models human behavior, using a gifting game in which agents learn to trust each other. The Berkman Klein Center at Harvard updates its data map of Ethical and Rights-based approaches to Principles for AI. The Center for the Study of the Drone releases likely its last paper, Unarmed and Dangerous, which looks at how non-weaponized drones can still have lethal effects. Cansu Canca has provided a database and interface that looks at the global dynamics of AI principles. Mario Alemi provides the book of the week, with The Amazing Journey of Reason: From DNA to AI. And the livestream talks from the 34th AAAI Conference are now available online.

Book of the Week

Videos of the Week

Episode 3.19

March 13, 2020

In news, Andy and Dave discuss announcements from two Chinese firms that have developed AI that can identify COVID-19 infections with high accuracy. And the Francis Crick Institute makes DeepMind's AlphaFold data on COVID-19 available for free access to researchers. Scientists at the University of Southampton and the University of Padova demonstrate that artificial and biological neurons can communicate over the internet (using memristors). Researchers at the University of Miguel Hernandez develop a new brain implant that bypasses the eye and optic nerve and sends visual signals straight to the brain's visual cortex. DARPA announces the winners of the second circuit of its Subterranean Challenge (with CoSTAR taking the honors). And DARPA also kicks off its ASIST (Artificial Social Intelligence for Successful Teams) program. In other news, Freeman Dyson has passed away at the age of 96; Andy recommends Dyson's 2014 talk "Are Brains Analogue or Digital?" among many other works by the late physicist. In research, UC Berkeley demonstrates that deep reinforcement learning algorithms can be attacked and made to malfunction via adversarial attacks on the policies that govern their overall behavior. Researchers at Northwestern University create the first decentralized algorithm with collision-free (and deadlock-free) movement for a swarm of agents (over 1,000 robots virtually, and 100 real robots in a lab). A report from the Stanford Law School and the NYU School of Law examines the use of AI across all U.S. federal administrative agencies. Frontiers in Robotics and AI provides a review and discussion of the challenges in successfully developing swarms of Micro Air Vehicles (MAVs). The Army Futures Command publishes Non-simplicity: The Warrior's Way. And Georgia Tech shines the spotlight on its music-playing and improvising robot, Shimon.

Book of the Week

Non-simplicity: The Warrior's Way

Fun Site of the Week

Episode 3.18

March 6, 2020

The U.S. Department of Defense Chief Information Officer formally announces that DoD will adopt the Defense Innovation Board’s recommendations on five principles for AI. MIT researchers have used machine learning to discover a new antibiotic, which they named halicin. Researchers develop a quantum-dot nanoscale device that acts like the brain’s visual cortex to “see” things in its path. The Creative Commons submits its comments to the World Intellectual Property Organization, suggesting that copyright is fundamentally centered on human creativity, and that new rights for AI-generated content would be inappropriate. Researchers at Leiden University construct a Hazardous Object Identifier and identify 11 asteroids that could potentially hit Earth. And an analyst suggests creating AI versions of the U.S.’s founding fathers to gain their views on current issues. In research, Google and the Allen Discovery Center publish research on neural cellular automata, which demonstrate the ability to maintain the shape and structure of a greater “organism.” CSBA takes a look at exploiting AI and autonomous systems in Mosaic Warfare. MIT Press releases one of the first books on cellular automata, Cellular Automata Machines, by Toffoli and Margolus. A new open-access journal comes online: Human-Machine Communication. Gary Marcus publishes a paper that looks ahead to the next decade in AI and identifies four steps toward “robust” AI. “The Brains Behind AI” provides 2-minute snapshots of Canada’s AI researchers. And an artist uses 99 phones to trick Google Maps into a traffic jam alert; both Andy and Dave can’t quite get the Star Trek quote correct, which is “the more they overthink the plumbing, the easier it is to stop up the drain.”

The Next Decade in AI: Four Steps Towards Robust AI

Video of the Week

Fun Site of the Week

Episode 3.17

February 28, 2020

Andy and Dave discuss the U.S. Department of Defense’s recent announcement that it will adopt the Defense Innovation Board’s detailed principles for using AI. The European Commission releases its white paper on AI. The University at Buffalo’s AI Institute receives a grant to study gamers’ brains in order to build AI military robots. Microsoft announces Turing-NLG, a 17-billion-parameter language model. MIT’s CSAIL demonstrates TextFooler, which makes synonym-like substitutions of words that can severely degrade the accuracy of NLP classifiers. Researchers from McAfee show simple tricks to fool Tesla’s Mobileye EyeQ3 camera. And Andy and Dave conclude with a discussion with Professor Josh Bongard, from the University of Vermont, on his recent “xenobots” research.

Episode 3.16

February 21, 2020

Andy and Dave discuss the President’s 2021 Budget Request, which increases funding for AI but decreases funding for science in general. Google’s Jigsaw unit releases Assembler, a tool to spot faked and doctored images, though not for the public. DARPA announces its NOMARS (No Manning Required Ship) program. Google creates an ML “fairness gym” to let researchers explore the long-term effects of AI’s decisions. The U.S. Army introduces Aided-Threat Recognition from Mobile Cooperative and Autonomous Sensors (ATR-MCAS), to assist soldiers in using suites of sensors on the battlefield. The Army also issues an RFI for a Sense-Through-the-Wall System. In research, Facebook AI demonstrates the ability to use “radioactive data” to detect whether a data set was used to train a particular classifier. PLOS ONE and the University of Liege in Belgium graft a neuromodulation capability onto deep neural networks as a way to learn adaptive behaviors. Marek Rei has collected a database of ML and NLP publication statistics with an interactive interface. MIT’s Lincoln Lab disseminates AI: A Short History, Present Developments, and Future Outlook (originally published, but only internally disseminated, a year ago). An Introduction to Machine Learning Interpretability, by Hall and colleagues, is the book of the week. Mary “Missy” Cummings pens a thought piece on rethinking the maturity of AI in safety-critical settings. The 34th AAAI publishes a video of the winners of the ACM 2018 Turing Award: LeCun, Hinton, and Bengio. And Denis Shiryaev uses a variety of techniques to upscale and colorize an 1896 short film to 4K and 60 frames per second.

***By coincidence, an unrelated paper on the same topic appeared on 8 February; it describes a temporal neural-network model called DeepRemaster that identifies and corrects defects such as noise and flicker, and colorizes vintage videos!

Episode 3.15

February 14, 2020

Andy and Dave discuss an announcement from Exscientia and Sumitomo that the first entirely AI-developed drug is now entering clinical trials. The Director of the Joint Artificial Intelligence Center, Lt. Gen. Jack Shanahan, has announced his retirement. Senator Michael Bennet sends a scathing letter to the U.S. Chief Technology Officer on the administration's recent AI principles for regulation. DARPA's Habitus program seeks to automate the process of revealing and using local information, to enhance stability operations in under-governed regions. And the Washington state legislature has at least one facial recognition bill under consideration. Google Research announces its Meena chatbot, which it claims scores higher on Sensibleness and Specificity (a new metric that it developed) than the award-winning Mitsuku, though it required 30 days of training on 2,048 tensor processing units. Researchers at the Max Planck Institute for Intelligent Systems and the University of Florence announce a method for fusing deep learning with combinatorial solvers to create a neural network for combinatorial problems. RAND releases a report on Deterrence in the Age of Thinking Machines. Sejnowski pens thoughts on "the unreasonable effectiveness of deep learning in AI." Taleb takes a detailed look at the statistical consequences of fat-tailed distributions. Maj. Gen. Mick Ryan pens the final (?) part of his trilogy, AugoStrat Awakenings. Fortune publishes a special magazine on the topic of AI. And Andrew Ng and Geoffrey Hinton sit down for a 40-minute chat on deep learning.

On-line magazine of the week

Video of the Week

Andrew Ng Interview with Father of Deep Learning, Geoffrey Hinton

Episode 3.14

February 7, 2020

Happy Pi-cast! Andy and Dave discuss some of the stories that have followed the New York Times articles on Clearview AI, including Twitter telling the company to stop using its photos, and a consortium of 40 organizations calling on the U.S. government to ban facial recognition systems until more is known about the technology. Meanwhile, London’s Metropolitan Police is rolling out live facial recognition technology. BlueDot says that it used AI and its epidemiologists to send a warning about the Wuhan virus on 31 December 2019, a full week before the US CDC announcement on 6 January 2020. Google releases the largest high-resolution map of the fruit fly’s brain, with 25,000 neurons. DARPA’s Gremlins (X-61A) drone system makes its first test flight. And the Guinness Book of World Records recognizes Stephen Worswick as the most frequent winner (5 times) of the Loebner Prize, for his Mitsuku chatbot. In research, Facebook AI achieves near-perfect (99.9%) navigation without needing a map, testing its algorithm in its AI Habitat. Robert J. Marks makes The Case *for* Killer Robots. The Brookings Institution’s Indermit Gill predicts that the AI leader in 2030 will “rule the planet” until at least 2100. The ACT-IAC releases an AI Playbook, with step-by-step guidance for assessment, readiness, selection, implementation, and integration. Jessica Flack examines the Collective Computation of Reality in Nature and Society. Google’s Dataset Search is out of beta. And DoD will hold its East Coast AI Symposium and Exposition on 29 and 30 April in Crystal City.

The Collective Computation of Reality in Nature and Society

Useful Site of the Week

Google’s Dataset Search Out of Beta

Conference of the Week

Episode 3.13

January 31, 2020

In a string of related news items on facial recognition, Andy and Dave discuss San Diego’s reported experiences with facial recognition over the last 7 years (coming to an end on 1 January 2020 with the enactment of California’s ban on facial recognition for law enforcement). Across the Atlantic, the European Union is considering a ban on facial recognition in public spaces for 5 years while it determines the broader implications. And the New York Times puts the spotlight on Clearview AI, a company that claims to have billions of photos of people scraped from the web, and that can identify people (and the sources of the photos, including profiles and other information about the individuals) within seconds. In other news, the JAIC is looking for public input on an upcoming AI study, and it is also looking for help in applying machine learning to humanitarian assistance and disaster relief efforts. In research, Google announces that it has developed a “physics-free” model for short-term local precipitation forecasting. And researchers at DeepMind and Harvard find experimental evidence that dopamine neurons in the brain may predict rewards in a distributional way (with insight gained from efforts in optimizing reinforcement-learning algorithms). Nature Communications examines the role of AI, whether positive or negative, in achieving the United Nations’ Sustainable Development Goals. The U.S. National Science Board releases its biennial report on Science and Engineering Indicators. The MIT Deep Learning Series has Lex Fridman speaking on Deep Learning State of the Art (and as a bonus, Andy recommends a video of Fridman interviewing Daniel Kahneman, author of “Thinking, Fast and Slow”). GPT-2 wields its sword and dashes bravely into the realm of Dungeons and Dragons. And GPT-2 tries its hand at chess, knowing nothing about the rules, with surprising results.

Fridman interviews Daniel Kahneman (“Thinking, Fast and Slow”)

Fun Stuff of the Week

Episode 3.12

January 24, 2020

The U.S. Government announces restrictions on the sale of AI for satellite image analysis outside the U.S. Baidu beats out Google and Microsoft for language “understanding” with its model ERNIE, which uses a technique it developed specifically for the Chinese language. Samsung unveils NEON, its humanoid AI avatars. The U.S. Department of Defense stands up a counter-unmanned aerial systems office. And Google AI publishes an AI system for breast cancer screening, but meets with some Twitter (and Wired) backlash for solving the “wrong problem.” Researchers at the University of Vermont, the Allen Discovery Center at Tufts, and the Wyss Institute at Harvard introduce the world’s “first living robots,” xenobots, constructed from the skin and muscle cells of frogs (from designs made with evolutionary algorithms). RAND releases a report assessing the DoD’s posture for AI, with recommendations. A survey of AI for social good (AI4SG) reviews research and publications on beneficial applications of AI. Daniel Dennett explores the question of whether HAL committed murder, in a classic 1996 essay. From the Bengio and Marcus debate, both reference Daniel Kahneman’s “Thinking, Fast and Slow.” And Robert Downey Jr. hosts a YouTube series on The Age of AI.

Classic Essay of the Week

Did HAL Commit Murder?

Video(s) of the Week

The Age of AI – hosted by Robert Downey Jr.

Episode 3.11

January 17, 2020

Andy and Dave discuss a new White House proposal on Principles for AI Regulation. A NIST study examines the effects of race, age, and sex on facial recognition software and identifies a variety of troubling issues. Facebook removes hundreds of accounts with AI-generated fake profile photos, and Facebook also bans the posting of deepfake videos (with some caveats). And Finland is making its online AI course available to the rest of the world. In research, Uber AI Labs offers a novel approach to accelerating neural architecture search by learning to generate synthetic training data, but the scientific community doesn’t think the findings are quite ready for publication. Researchers at Korea University create an Evolvable Neural Unit (ENU) as a way to approximate the function of an individual neuron and synapse. And researchers at Charité in Berlin show that a single human biological neuron can compute XOR, previously thought not possible. Human-Centered AI at Stanford University releases the 2019 annual report on its AI Index, examining various trends and research in AI in 2019. The Center for a New American Security releases its full report on A Blueprint for Action in AI. Rafael Irizarry provides an Introduction to Data Science. And the video of the week is the debate between Yoshua Bengio and Gary Marcus on the current and future state of research in AI.
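For context on why the Charité result is notable: a single classical artificial neuron (a linear threshold unit) provably cannot compute XOR, whereas one extra nonlinear interaction term suffices. A toy sketch of that gap (our illustration, not the paper's biophysical model; the weights are hand-picked):

```python
# A purely linear threshold unit cannot separate XOR, but a single unit
# with one extra nonlinear "dendritic" term (here, the product x1*x2) can.

def linear_unit(x1, x2, w1, w2, b):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def dendritic_unit(x1, x2):
    # Hand-picked weights; the x1*x2 term stands in for a local
    # dendritic nonlinearity.
    return 1 if x1 + x2 - 2 * x1 * x2 - 0.5 > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Exhaustively scan a coarse weight grid: no linear unit matches XOR
# (and none exists for any real weights, since XOR is not linearly separable).
grid = [i / 2 for i in range(-8, 9)]
linear_solves_xor = any(
    all(linear_unit(x1, x2, w1, w2, b) == y for (x1, x2), y in XOR.items())
    for w1 in grid for w2 in grid for b in grid
)
print(linear_solves_xor)                                      # False
print(all(dendritic_unit(*k) == v for k, v in XOR.items()))   # True
```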

Video of the Week

Episode 3.10B

January 10, 2020

In research, Andy and Dave discuss a new idea from Schmidhuber, who introduces Upside-Down Reinforcement Learning, in which no value functions or policy search are necessary, essentially transforming reinforcement learning into a form of supervised learning. Research from OpenAI demonstrates a “double descent” phenomenon inherent in deep learning tasks, where performance first gets worse and then gets better as the model increases in size. Tortoise Media provides yet-another-AI-index, but with a nifty GUI for exploration. August Cole explores a future conflict with Arctic Night. And Richard Feynman provides thoughts (from 1985) on whether machines will be able to think.
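Double descent can be reproduced in miniature without deep networks. The sketch below uses random ReLU features and the minimum-norm least-squares fit as an illustrative stand-in for OpenAI's deep-learning experiments (all sizes, seeds, and data are arbitrary assumptions); test error typically spikes as the feature count p approaches the number of training points n, then falls again as p grows past it:

```python
# Minimal double-descent sketch: fit noisy data with random ReLU features
# and the minimum-norm least-squares solution, sweeping the number of
# features p past the number of training points n.
import numpy as np

rng = np.random.default_rng(0)
n, n_test = 20, 200
x = rng.uniform(-1, 1, n)
x_test = rng.uniform(-1, 1, n_test)
f = lambda t: np.sin(2 * np.pi * t)
y = f(x) + 0.1 * rng.normal(size=n)
y_test = f(x_test)

def relu_features(t, w, b):
    # Random ReLU features: max(0, w*t + b) for fixed random w, b.
    return np.maximum(0.0, np.outer(t, w) + b)

test_errors = {}
for p in [2, 5, 10, 20, 40, 200, 1000]:
    w, b = rng.normal(size=p), rng.normal(size=p)
    Phi, Phi_test = relu_features(x, w, b), relu_features(x_test, w, b)
    # lstsq returns the minimum-norm solution when p > n (interpolation).
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    test_errors[p] = float(np.mean((Phi_test @ theta - y_test) ** 2))

for p, err in test_errors.items():
    print(p, err)
```

With this setup the worst test error usually lands near the interpolation threshold p = n = 20, the "first descent, peak, second descent" shape OpenAI describes at much larger scale.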

Story of the Week

Video of the Week

Episode 3.10A

January 3, 2020

Andy and Dave discuss Lee Sedol’s announcement that he is quitting playing Go because AI “cannot be defeated.” Facebook’s Head of AI says the field will soon “hit the wall” (or does he?). A human beats an AI-powered drone during the Drone Racing League’s latest competition. A Boston Dynamics robot dog has joined a Massachusetts bomb squad. And a new US federal bill would constrain some police use of facial recognition tools. A report from CNAS on the American AI Century provides a Blueprint for Action on how to achieve national AI strategy objectives.

Report of the Week

The American AI Century: A Blueprint for Action

Episode 3.9

December 20, 2019

Andy and Dave discuss OpenAI’s update to an earlier summary of how computational resources have increased to reach each new AI breakthrough. The National Transportation Safety Board releases its report on the deadly 2018 Uber self-driving vehicle crash. Nasdaq has enlisted the aid of machine learning to provide additional security for stock trades. Researchers use a GAN to GANalyze the aspects of “memorable” pictures, while other researchers use a GAN (SinGAN) to generate new pictures from a single image. Over 20 authors come together to publish a paper on tackling climate change with machine learning. Francois Chollet publishes The Measure of Intelligence. Horace He provides OpenReviewExplorer, updated to include the International Conference on Learning Representations (ICLR) 2020. And FRONTLINE examines the promise and perils of AI.

Video of the Week

Artificial Intelligence: A Guide for Thinking Humans

(Fun) Resource Site of the Week

Episode 3.8

December 13, 2019

Andy and Dave discuss OpenAI’s update to an earlier summary of how computational resources have increased to reach each new AI breakthrough. The National Transportation Safety Board releases its report on the deadly 2018 Uber self-driving vehicle crash. Nasdaq has enlisted the aid of machine learning to provide additional security for stock trades. Researchers use a GAN to GANalyze the aspects of “memorable” pictures, while other researchers use a GAN (SinGAN) to generate new pictures from a single image. Over 20 authors come together to publish a paper on tackling climate change with machine learning. Francois Chollet publishes The Measure of Intelligence. Horace He provides OpenReviewExplorer, updated to include the International Conference on Learning Representations (ICLR) 2020. And FRONTLINE examines the promise and perils of AI.

Resource of the Week

Videos of the Week

FRONTLINE investigates promise and perils of AI

Episode 3.7

December 6, 2019

Andy and Dave discuss the full release of the algorithm that originally had to be locked up for the safety of humanity (GPT-2). NATO releases its final reports on the implications of AI for NATO’s Armed Forces. The US Army Research Lab wraps up a series of events on its efforts in robotics collaborative technology. The UAE announces the world’s first graduate-level AI university, opening in September 2020. And John Carmack announces he will step down as CTO of Oculus to tackle the challenge of artificial general intelligence, as a Victorian Gentleman Scientist. In research, two independent research groups introduce adversarial T-shirts. A report examines a taxonomy of real faults in deep learning systems. Krohn, Beyleveld, and Bassens publish Deep Learning Illustrated. The Nov/Dec issue of MIT Technology Review features a variety of AI and related stories. And Manuel Blum of CMU discusses Towards a Conscious AI: A Computer Architecture Inspired by Neuroscience.

Towards a Conscious AI: A Computer Architecture Inspired by Neuroscience

Episode 3.6

November 29, 2019

In news, the Defense Innovation Board releases AI Principles: Recommendations on the Ethical Use of AI by the Department of Defense. The National Institute of Standards and Technology’s National Cybersecurity Center of Excellence releases a draft for public comment on adversarial machine learning, which includes an in-depth taxonomy of the possibilities. Google adds BERT to its search algorithm, with its capability for bidirectional representations, in an attempt to “let go of some of your keyword-ese.” In research, Stanford University and Google demonstrate a method for explaining how image classifiers make their decisions, with Automatic Concept-based Explanations (ACE) that extract visual concepts such as colors and textures, or objects and parts. And GoogleAI, Stanford, and Columbia researchers teach a robot arm the concept of assembling objects with Form2Fit, which is also capable of generalizing its learning to new objects and tasks. Danielle Tarraf pens the latest response to the National Security Commission on AI’s call for ideas, with Our Future Lies in Making AI Robust and Verifiable. Jure Leskovec, Anand Rajaraman, and Jeff Ullman make the second edition of Mining of Massive Datasets available. The Defense Innovation Board posts a video of its public meeting on 31 October at Georgetown University. And Maciej Ceglowski’s “Superintelligence: the idea that eats smart people” takes a look at the arguments against superintelligence as a risk to humanity.

Defense Innovation Board Public Meeting

Fun Site of the Week

Season 3

Episode 3.5

November 22, 2019

In the news, Andy and Dave discuss the interim report from the National Security Commission on AI. DARPA’s new OFFensive Swarm-Enabled Tactics (OFFSET) program takes a look at swarm behavior. And DARPA picks the teams for the virtual air-combat competition under its Air Combat Evolution (ACE) program. In research, DeepMind’s AlphaStar ranks above 99.8% of human players at StarCraft II. A report on Mosaic Warfare looks at restoring the military competitiveness of US forces. Daniel Egel and Eric Robinson pen the latest response to the NSCAI call for ideas, examining the likely evolution, not revolution, of AI in irregular warfare. The Promise of Artificial Intelligence: Reckoning and Judgment, by Brian Cantwell Smith, rounds out Andy’s pick for a trio of recent, interesting books on AI, taking a philosophical look at the topic. And mosaic warfare and multi-domain battle make the video of the week.

Book of the Week

Video of the Week

Mosaic Warfare and Multi-Domain Battle

Episode 3.4

November 15, 2019

Facebook announces the Deepfake Detection Challenge, a rolling contest to develop technology to detect deepfakes. The US Senate passes the Deepfake Report Act, bipartisan legislation to understand the risks posed by deepfake videos. And US Representatives Hurd and Kelly announce a new initiative to develop a bipartisan national AI strategy with the Bipartisan Policy Center. In research, AI allows a paralyzed person to “handwrite” using his mind. From the University of Grenoble, a paralyzed man is able to walk using a brain-controlled exoskeleton. From the Moscow Institute of Physics and Technology, researchers use a neural network to reconstruct human thoughts from brain waves in real time using electroencephalography. A report from Elsa Kania and Sam Bendett looks at technology collaborations between Russia and China in A New Sino-Russian High-Tech Partnership. In another response to the National Security Commission on AI, Margarita Konaev publishes With AI, We’ll See Faster Fights, But Longer Wars on War on the Rocks. James, Witten, Hastie, and Tibshirani release An Introduction to Statistical Learning. The Open Science Framework makes THINGS available, an object concept and object image database of nearly 14 GB, with over 1,800 object concepts and more than 26,000 naturalistic object images. And finally, Janelle Shane explains why The Danger of AI Is Weirder Than You Think.

THINGS object concept and object image database

Videos of the Week

The Danger of AI Is Weirder Than You Think - TED Talk

Episode 3.3

November 8, 2019

In news items, Microsoft wins the bid for the Pentagon’s $10B Joint Enterprise Defense Infrastructure (JEDI) contract. DARPA’s Spectrum Collaboration Challenge (SC2), which aimed to create devices that work together to optimize spectrum use, names GatorWings (from the University of Florida) as the winner. A report from the Stanford University Institute for Human-Centered AI calls for the US Government to invest $120B in the nation’s AI ecosystem over the next 10 years. And CSET provides a translation of Russia’s National AI Strategy. In research, Google announces quantum supremacy: it performed a calculation with Sycamore, its 53-qubit computer, in 200 seconds that a classical computer “cannot” perform (saying it would take 10,000 years). In response, IBM posits that a classical computer could take advantage of hard-drive space to do the calculation in a couple of days. In reports, the Center for Security and Emerging Technology (CSET) publishes an examination of China’s Access to Foreign AI Technology, particularly noting that China’s “copycat” reputation oversimplifies its indigenous science and technology capacity and ability to innovate. Geist and Blumenthal from RAND pen “Military Deception: AI’s Killer App?” for War on the Rocks, in response to the National Security Commission on AI’s call for ideas. Stuart Russell releases Human Compatible, in which he describes his approach to avoiding the threat of superhuman AI destroying civilization, which includes building machines with inherent uncertainty about the human preferences they are required to satisfy. For resources, Nikola Plesa provides a centralized list of the biggest datasets available for machine learning. And “Bosstown Dynamics” by Corridor Digital provides a humorous look at military robots.

Episode 3.2

November 1, 2019

Andy and Dave discuss the AI-related supplemental report to the President’s Budget Request. The California governor signs a bill banning facial recognition use by the state’s law enforcement agencies. The 2019 Association of the US Army meeting focuses on AI. A DoD panel discussion explores the Promise and Risk of the AI Revolution. And the 3rd Annual DoD AI Industry Day will be held 13 November in Silver Spring, MD. Researchers at the University of Edinburgh, the University of Cambridge, and Leiden University announce using a deep neural network to solve the chaotic 3-body problem, providing accurate solutions up to 100 million times faster than a state-of-the-art solver. Research from MIT uses a convolutional neural network to recover probable ensembles of dimensionally collapsed information (for example, recovering a video that has been collapsed into a single image). Kate Crawford and Meredith Whittaker take a look at 2019 and the Growing Pushback Against Harmful AI. Air University Press releases AI, China, Russia, and the Global Order, edited by Nicholas Wright, with contributions from numerous authors, including Elsa Kania and Sam Bendett. Michael Stumborg from CNA pens a response to the National Security Commission’s request for ideas, on AI’s Long Data Tail. Deisenroth, Faisal, and Ong make their Mathematics for Machine Learning available. Melanie Mitchell pens AI: A Guide for Thinking Humans. An article in the New Yorker by John Seabrook examines the role of AI/ML in writing, with The Next Word. And the Allen Institute for AI updates its Semantic Scholar, which now includes more than 175 million scientific papers across even more fields of research.

Mainstream Article of the Week

Resources of the Week

Episode 3.1

October 25, 2019

Welcome to Season 3.0! Andy and Dave discuss the AI in Advancement Advisory Council’s State of AI Advancement report, which takes a look at the impact of AI on roles within advancement. Researchers at Fudan University and the Changchun Institute of Optics announce a 500 MP camera (with associated cloud-powered AI) capable of identifying a face among tens of thousands. The U.S. National Science Foundation announces the National AI Research Institutes program, which anticipates awarding $120M in grants next year. A recent solicitation from the Defense Innovation Unit seeks to understand trends in world events. And the JAIC has a new website. In research, OpenAI announces Dactyl, a robot hand capable of solving a Rubik’s cube, as part of an effort to build a general-purpose robot (transferring learning from simulation to the real world) that is robust to perturbations such as broken fingers or intrusions by plush giraffes. Research accepted to ICLR 2020 demonstrates the application of deep learning to symbolic mathematics. Dan Gettinger of Bard College publishes The Drone Databook, cataloging the drones of 101 countries. The Carnegie Endowment for International Peace takes a look at the origins of AI surveillance technology in use around the globe. The Oliver Wyman Forum measures global cities’ AI readiness, and Oxford Insights updates its Government AI Readiness Index. Arthur I. Miller publishes The Artist in the Machine, while Marcus du Sautoy takes a look at The Creativity Code: Art and Innovation in the Age of AI. Lex Fridman and Gary Marcus have a discussion on AI. And Alexa will soon channel the voice of Samuel L. Jackson.

Fun Fact of the Week

Samuel L. Jackson to Invade Your Home As the New Voice of Amazon’s Alexa

Season 2

Episode 2.43B

October 18, 2019

This week, Microsoft Research and the University of Montreal show that machines can learn through interactive language by answering questions (question answering with interactive text, or QAit). The Allen Institute for AI’s Aristo system, a suite of eight solvers, can pass (90%+) the New York 8th Grade Regents science exams (for non-diagram, multiple-choice questions), and can exceed 83% on the 12th grade exam, though Melanie Mitchell suggests the achievement may not be as profound as it seems. A “meta-research” paper from Milan and Klagenfurt takes a broader look at neural network research and highlights concerns about reproducibility (or the lack thereof) as well as utility (or the lack thereof, where simple heuristic methods can outperform the neural networks). From a workshop organized by Max Tegmark and Emilia Javorsky, a group of diverse authors produce a “possibility of a middle road” look at roadmapping a way ahead for autonomous weapon systems. An opinion piece from Zachary Kallenborn on War on the Rocks looks at What If the US Military Neglects AI? A paper in Nature provides an overview of open-ended evolution, as a part of artificial life. Gary Marcus and Ernest Davis publish a book on Rebooting AI: Building AI We Can Trust. The 57th Annual Meeting of the Association for Computational Linguistics occurred at the end of July, and Kate Koidan provides a summary of the top trends. IEEE ranks robot creepiness with the top 100 creepy robots. Booz Allen releases a documentary on the Dawn of Generation AI. And the Naval Facilities Engineering and Expeditionary Warfare Center (NAVFAC EXWC) will host an industry day conference on cyber, control systems, and machine learning in December.

Video of the Week

The Dawn of Generation AI

Upcoming Conferences

Episode 2.43A

October 11, 2019

Andy and Dave discuss the U.S. Air Force’s recently released AI strategy. NATO releases a draft report on the implications of AI for NATO forces. A report collects 2,602 uses of AI for social good. And the California legislature bans facial recognition for police body cameras. In research, OpenAI takes a multi-agent game of hide-and-seek to 11, and discovers emergent tool use as the hiders and seekers try to gain advantages. Research from the Freie Universität Berlin samples equilibrium states of many-body systems using deep learning to speed up sampling calculations.

Episode 2.42

October 4, 2019

Two special guests join Andy and Dave for a discussion about research in AI and autonomy. First, Dr. Andrea Gilli is a researcher at the NATO Defense College in Rome, where he works on defense innovation, military transformation, and armed forces modernization. And second, Ms. Zoe Stanley-Lockman is a fellow at the Maritime Security Programme of the Institute of Defence and Strategic Studies at the S. Rajaratnam School of International Studies in Singapore, where she is researching, among other things, the role of ethics in AI.

Biographies

“Andrea Gilli is an affiliate at CISAC and a Researcher at the NATO Defense College in Rome where he works on defense innovation, military transformation and armed forces modernization. Andrea holds a PhD in Social and Political Science from the European University Institute (EUI) in Florence. In 2015 he was awarded the European Defence Agency and Egmont Institute’s bi-annual prize for the best dissertation on European defense, security and strategy. Andrea has provided consulting services to both private and public organizations, including the EU Military Committee and the U.S. Department of Defence's Office of Net Assessment, and worked and conducted research for or been associated with several institutions, including the Royal United Services Institute, the European Union Institute for Security Studies, the Saltzman Institute for War and Peace Studies at Columbia University in New York, the Center for International Security and Cooperation at Stanford University and the Belfer Center for Science and International Affairs at the John F. Kennedy School of Government of Harvard University. Andrea’s research has been published or is forthcoming in International Security, Security Studies, The RUSI Journal, and Washington Post’s Monkey Cage.”

“Zoe Stanley-Lockman is an Associate Research Fellow in the Maritime Security Programme of the Institute of Defence and Strategic Studies (IDSS) at the S. Rajaratnam School of International Studies (RSIS). Previously she was a Visiting Fellow in the Military Transformation Programme at the RSIS. Zoe holds a Master’s degree in International Security with a concentration in Defence Economics from Sciences Po Paris and a Bachelor’s degree from Johns Hopkins University. Prior to joining the RSIS, she spent two years at the European Union Institute for Security Studies (EUISS), first as a Junior Analyst and then as the Institute’s Defence Data Research Assistant, researching defence-industrial issues, arms exports, innovation, and military capability development. Throughout her studies, Zoe’s practical experience included working on dual-use export controls with the US government and consulting for defence contractors.”

Episode 2.41

September 27, 2019

Andy and Dave discuss research from DeepMind, University College London, and Oxford showing that human neural replay spontaneously reorganizes experience according to structure implied by abstract knowledge, and further suggesting that AI could use this approach to learn and improve. In other research, adversarial triggers cause natural language processing algorithms (such as GPT-2) to generate incorrect sentiment analysis, or to generate racist output (even in non-racial contexts). And researchers from Dalian, Peng Cheng, and the City University of Hong Kong create a segmentation method for visual classifiers to identify and process mirrors and reflective surfaces, which may otherwise cause confusing results. FutureGrasp provides a report with an overview of state initiatives in AI. An article in Nature examines the global landscape of AI ethics guidelines. Patrick Walker pens War Without Oversight: Challenges to the Deployment of Autonomous Weapon Systems. Springer Nature publishes “the first research book generated using machine learning,” on lithium-ion batteries. Henrik Saetra publishes The Ghost in the Machine, on what it means to be human in the age of AI/ML. The Alife 2019 conference provides open access to its proceedings. And Mackmyra Whisky announces the world’s first AI-created whisky.

Alife 2019 Proceedings

Not-Entirely-Silly-AI-Silliness and Video of the Week

The world’s first AI-created whisky

Episode 2.40

September 20, 2019

Andy and Dave discuss the establishment of the Artificial Intelligence and Technology Office under the U.S. Department of Energy. DARPA announces Context Reasoning for Autonomous Teaming (CREATE), a new program to investigate teaming between groups of systems that have limited centralized coordination. Defense One and Nextgov sponsor a one-day “Genius Machines” conference in Hawaii, where it is revealed that AI is being developed to predict Chinese and Russian movements in the Pacific. MIT Lincoln Lab releases a large dataset for public safety, which includes images of flooding and other disasters. And a video appears to show a Tesla driver asleep in a moving car. Finally, Russia expert Sam Bendett joins Andy and Dave to discuss his latest article in Defense One, on the draft Russian AI strategy.

Sam Bendett’s Interview

Episode 2.39

September 13, 2019

Andy and Dave discuss the Joint Artificial Intelligence Center's efforts to tackle deepfakes through DARPA's Media Forensics program, as well as the announcement that the JAIC's biggest project for FY20 will include "AI for maneuver and fires." Intel reveals its first AI chips, in the Nervana Neural Network Processor line, with one to train AI systems and another to handle inference. Cerebras Systems announces the world's largest chip, with 1.2 trillion transistors and 400,000 cores. A Russian Soyuz spacecraft docks with the International Space Station, carrying Roscosmos's Skybot F-850 humanoid robot. Researchers at Hong Kong University of S&T demonstrate an all-optical neural network for deep learning. Researchers at MIT and Tübingen identify four types of neuronal cells based on their electrical spiking activity. And a larger team of researchers, primarily based in China, unveils the Tianjic chip, a hybrid that combines computer science (with a binary focus) and neuroscience (with a neural burst and spike focus) on one chip. For the book of the week, K. Eric Drexler of Oxford publishes a large report on Reframing Superintelligence. An article from Melanie Mitchell in Popular Computing in 1985 seems hardly out of place in 2019 with its look at what people were predicting for the future. A report from PAX surveys the tech sector's stance on lethal autonomous weapons. The Intelligence Community Studies Board releases the proceedings of a workshop on Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies. Jonathan Clifford pens a piece in War on the Rocks on how "AI will change war, but not in the way you think." In a video, Elon Musk and Jack Ma discuss AI at the World AI Conference in Shanghai. And the Australian Defence College will host a seminar on Science Fiction and the Future of War on 3 October 2019.

Video of the Week

Elon Musk and Jack Ma discuss AI at the World Artificial Intelligence Conference in Shanghai

Upcoming Conference

Science Fiction and the Future of War Seminar

Episode 2.38

September 6, 2019

Happy 100th Episode to AI with AI! Andy and Dave celebrate the 100th episode of the AI with AI podcast, starting with a new theme song, inspired by the Mega Man series of games. Andy and Dave take the time to look at the past two years of covering AI news and research, including at how the podcast has grown from the first season to the second season. They also take a look back at some of the recurring themes and favorite topics, including GPT2 and the Lottery Ticket hypothesis, among many others; they also look forward to (hopefully!) all the latest and greatest news to come. Throughout this episode, we hear from listeners, supporters, and colleagues who have appeared on the podcast. Here’s to another 100, and thanks for listening!

Pushing the “envelope” of basic theory of ML/AI

The “Lottery Ticket” hypothesis: only certain “winning” subnetworks are necessary for training a neural network; researchers may have been wasting processing power training neural networks that are ten times too big! (Podcast 2.29)

GPT-2

Episode 2.37B

August 30, 2019

Researchers at Berkeley, Washington, and Chicago identify “natural adversarial” examples that cause classifier accuracy to degrade significantly, likely due to an over-reliance on color, texture, and background cues. Andy and Dave then discuss a series of events following a Nature paper on the application of deep learning to aftershock patterns of earthquakes, wherein other researchers raised questions about the research (one demonstrating that a simple logistic regression does better, and another showing that the original researchers included their test data set in their training data set). A new study by the Insurance Institute for Highway Safety shows that drivers overestimate the capability of vehicle automated systems, with Tesla’s Autopilot leading the rest in overestimation. Goodfellow, Bengio, and Courville publish their 800-page tome on Deep Learning. The Classic Paper of the Week comes from Pattie Maes and Rodney Brooks, who published Learning to Coordinate Behaviors in 1990. The video presentation of the octopus research makes the video of the week. And OUTERHELIOS, a neural network trained on Coltrane, streams non-stop free jazz 24/7 (though the feed may now be “static only”).

Episode 2.37A

August 24, 2019

The National Security Commission on AI solicits creative and original ideas to challenge the status quo assumptions on maintaining US global leadership in AI. Researchers at MIT and Colgate publish an engineering *concept* that would use superconducting nanowires to mimic artificial neurons in a way that would theoretically match the energy efficiency of brains. Microsoft invests $1B in OpenAI to create brain-like machines. A proposed bill would prohibit the use of facial recognition technology for all public housing units that receive funding from the Department of Housing and Urban Development. Researchers at the University of Washington, Seattle demonstrate that octopuses' arms are capable of making decisions without input from their brains, with more than 350 million of the octopus's 500 million neurons located in its arms. Google DeepMind uses a generative adversarial model, DVD-GAN, to generate synthetic videos.

Efficient Video Generation on Complex Datasets

Episode 2.36B

August 16, 2019

The University of Singapore creates an artificial skin that can sense temperature, pressure, and humidity. The International Center for Ethics in the Sciences and Humanities releases its Evaluation of (AI) Guidelines. A report from FutureGrasp takes a global look at the AI initiatives (or lack thereof) of States. Hayden Klok and Yoni Nazarathy release a draft of Statistics with Julia. Metacademy provides learning plans and resources for learning about topics, from beginner to advanced. Claude Shannon’s 1948 paper “A Mathematical Theory of Communication” makes Andy’s Classic Paper for the week. Stephen Wolfram’s testimony on AI before the US Senate Commerce Committee becomes available, including his blog write-up about the testimony. And Fedor Kitashov publishes an essay on using AI to restore and colorize photos.

Interesting Site of the Week

A Technical Look at Creating an AI to Restore and Colorize Photos

Episode 2.36A

August 9, 2019

Andy and Dave discuss the Digital Modernization Strategy that the US Department of Defense released on 12 July 2019. Todd Austin at the University of Michigan presents research at a conference on Morpheus, a project to create a chip that randomizes elements of its code, in an attempt to slow would-be hackers. Also in chip-related news, Intel introduces Pohoiki Beach, a new 8 million-neuron neuromorphic system with 64 Loihi research chips, with expectations of producing a system capable of simulating 100 million neurons by the end of 2019. Baylor College of Medicine, in collaboration with the University of California and Second Sight Medical Products, announces Project Orion, an implant that transmits video images directly to the visual cortex, bypassing the eye and optic nerve. And the Naval Information Warfare Systems Command and PEO C4I announce the AI Applications to Autonomous Cybersecurity (AI ATAC) contest, for using AI/ML to bolster network security operations. Research from the University of Wisconsin, Madison, demonstrates that optical waves passing through a nanophotonic medium can perform artificial neural computing – here, that a sheet of glass can identify numbers by “looking,” or in this case, by making use of bubbles and other impurities in the glass to function as a neural processor. Research from Stanford creates a convolutional neural network that can play Go without game tree search, more closely mimicking a human-like understanding and approach.

Playing Go without Game Tree Search Using Convolutional Neural Networks

Episode 2.35B

August 2, 2019

Continuing in research, Andy and Dave discuss research from Imperial College and the Samsung AI Centre, which can take a single image of any face and create realistic speech-driven facial animations, using a GAN. From the Conference on Computer Vision and Pattern Recognition, researchers create an algorithm that can learn individual styles of conversational gesture, and then produce plausible gestures to accompany other audio input. And research in Nature examines 3.3 million material-science abstracts with unsupervised word embeddings to capture “latent knowledge.” The survey paper of the week looks at the reproducibility of machine learning in health-related fields, and finds health consistently lags behind other subfields of machine learning. Safety First for Automated Driving identifies the guiding principles for autonomous cars to be safe, with input from 11 authors; among other information, the report finds that verification and validation of these systems is still lacking in the existing literature. The Berkman Klein Center at Harvard compiles an infographic on all of the published AI “principles” from governments, industry, and other organizations. The “classic paper” of the week comes from Alan Turing’s 1948 paper on “Intelligent Machinery.” The 36th International Conference on Machine Learning releases over 150 videos from its June session. CognitionX 2019 releases a video on managing security in an insecure world. Manlio de Domenico and Hiroki Sayama (and many others!) provide an interactive site for explaining and exploring complexity. Wendy Anderson and August Cole explore what war in the late 2020s might look like for the Secretary of Defense, in The Secretary of Hyperwar. And for click-bait of the week, astrophysicists get “baffled” by their simulation of the universe using AI.

“Click bait” of the Week – But that also links to interesting work!

“Astrophysicists baffled by their own AI simulation of the universe”

Episode 2.35A

July 26, 2019

Andy and Dave discuss a scathing report on Scotland Yard’s facial recognition software, which researchers at the University of Essex found to have an 81% error rate (but which the Met Police say has an error rate of 0.1%). In related news, Axon announced that it will ban the use of facial recognition systems on its devices; Axon supplies 47 of the 69 largest police agencies in the U.S. with body cameras and software. DARPA announces the Intent-Defined Adaptive Software (IDAS) program, in an attempt to reduce the need for manual software modifications. NIST posts the first draft guideline for developing AI technical standards. Elon Musk says that Neuralink is almost ready for its first human volunteers; Neuralink uses ultrafine threads that can be implanted into the brain to detect the activity of neurons. And the Bank of England announced that Alan Turing will be on the new fifty-pound note. In research, Andy and Dave discuss Pluribus, the latest AI for multiplayer poker from CMU and Facebook AI, which won during a 12-day poker marathon in 6-player no-limit Texas hold’em; the AI runs on two Intel processors and a “modest” 128 GB of memory during play.

Alan Turing’s Portrait to be Featured on Bank of England’s £50 note

Research

Superhuman AI for multiplayer poker: Pluribus

Episode 2.34B

July 19, 2019

More research from Berkeley and the University of Southern California creates a method to “protect” world leaders against deep fakes, by identifying, among other things, 17 Facial Action Units (such as subtle movements of eyebrows, cheeks, nose, etc., during speech). And research from MIT can take an audio clip and convert it to a generic human face. A report from RAND looks at Ethics in Scientific Research. Deakin University and Harvard provide a survey of deep reinforcement learning in cyber security. Another survey, from Dublin University and Intel Labs, looks at Generative Adversarial Networks and their taxonomy. Vishal Maini and Samer Sabri provide Machine Learning for Humans. Andy recommends Ludwig von Bertalanffy’s General System Theory from 1968. Matt Turek takes a look at the history of media forensics. The House Homeland Security Subcommittee on Intelligence and Counterterrorism holds a hearing on AI and Counterterrorism. And the Computer Vision and Pattern Recognition 2019 conference begins to post its tutorials, workshops, and its 80-page program guide.

Conference of the Week

Computer Vision and Pattern Recognition (CVPR) – 2019

Episode 2.34A

July 12, 2019

Andy and Dave discuss the update to the US National AI Research and Development Strategic Plan, which establishes 8 objectives for federally funded AI research. Meanwhile, the European Commission starts its pilot phase for ethics guidelines for trustworthy AI, with the first AI Alliance Assembly meeting in Brussels and the High-Level Expert Group on AI (AI HLEG). The Joint AI Center, in conjunction with CMU, CrowdAI, and DIU, plans to make available xBD (x-Building-Damage), an open-source labeled data set of satellite imagery of some of the largest natural disasters in the past decade; it will contain ~700k building annotations across over 5,000 km^2 of imagery from 15 countries. The JAIC also announced a partnership with Singapore’s Defence Science and Technology Agency to collaborate on AI in humanitarian assistance and disaster relief. A white paper by Pactera suggests that 85% of AI projects fail. A new DARPA program, Virtual Intelligence Processing (VIP), aims to explore “brain-inspired” methods for dealing with incomplete, sparse, and noisy data. Facebook releases AI Habitat, an open source environment for training and testing AI agents. And NIST’s RFI on AI Standards receives nearly 100 responses. Researchers at Adobe Research and Berkeley use AI to detect facial image manipulations that were done by Photoshop’s “Face Aware Liquify” feature; while humans were able to identify an altered face 53% of the time, the convolutional neural network tool achieved results as high as 99%.

Research

Episode 2.33

July 5, 2019

Russia expert Sam Bendett joins Andy and Dave for a discussion and update on Russia’s latest developments and efforts in AI and autonomy. The group discusses a 30 May meeting, in which Russian President Vladimir Putin outlined the national AI priorities; the Russian AI strategy, originally expected in June, is now expected in the June-to-October timeframe. They also discuss the growing AI infrastructure, and the opening of AI centers across the country, with a mindset similar to a “startup culture,” with Russian AI developers getting international recognition. The group touches on relations between Russia and China, particularly in the wake of the Huawei issues. The “Army-2019” military expo in June should also provide useful insights about the Russian military development and employment of AI and related capabilities.

Episode 2.32B

June 28, 2019

Researchers at the University of Tubingen demonstrate that virtual neurons spontaneously develop a “number sense” when assessing the number of visual items (such as dots) in a set. The Allen Institute for AI creates Grover, a neural network that can generate fake news, but that can also detect NN-generated fake news; Grover uses the same architecture as GPT-2 (the previous “unreleasable for the safety of humanity” algorithm), but these researchers highlight the importance of making such generators available. In related news, Witness Media Lab releases a report on the current state of deepfake tech; a CNN report looks at how Finland is fighting fake news; and a NY Times article examines the “weaponization” of AI-generated disinformation. A Mashable article from Marcus Gilmer looks at the state of software that attempts to identify deepfakes. The International Committee of the Red Cross releases a report on a “human-centered approach” to AI and machine learning in armed conflict. A paper from Springer-Verlag provides a history and references for the “neural-symbolic debate.” Hiroki Sayama at SUNY Binghamton makes available “Introduction to the Modeling and Analysis of Complex Systems.” The US-China Commission releases testimony from a day-long session, with testimony from experts on three topics, including the US-China Competition in AI. The Allen Institute makes its brain atlases available for exploring online. The 36th International Conference on Machine Learning meets in Long Beach, CA, with over 6,000 participants. Meanwhile, CogX meets in King’s Cross, London. And former Secretary of Defense Ash Carter pens a “letter to a young Googler” on the morality of defending America.

36th International Conference on Machine Learning (ICML)

CogX 2019

“Opinion” of the Week

Episode 2.32A

June 21, 2019

Andy and Dave discuss early thoughts from the House Intelligence Committee hearing on deep fakes, manipulated media, and AI; artists take a shot at Mark Zuckerberg to demonstrate the power of fake videos; the House Armed Services Committee doubles Joint AI funding; Google AI releases the Google Research Football Environment; a study examines the amount of CO2 released when training AI models; Microsoft provides an AI curriculum for government decision-makers; Microsoft also removes access to a database with 10 million “celebrity” images; and Rodney Brooks and Gary Marcus launch startup Robust.AI, which aims to build the first industrial-grade cognitive platform for robots. Research from CMU, Google AI, and Stanford “peeks into the future” by predicting the future activities and locations of people in videos.

Research

Peeking into the Future: Predicting Future Person Activities and Locations in Videos

Episode 2.31

June 14, 2019

In news items, Andy and Dave discuss China’s call for international cooperation on a code of ethics for AI. The Organisation for Economic Co-operation and Development (OECD) unveils the first intergovernmental standards for AI policies, with support from 42 countries. The US Army has invited the design of prototypes for the Next-Generation Squad Weapon, which may include wind-sensing and even facial-recognition technology. DARPA’s Spectrum Collaboration Challenge (SC2) presents an essay at IEEE Spectrum, which describes the challenges of making the most out of an increasingly crowded electromagnetic spectrum, including running contests for better spectrum management, and using Colosseum as the test ground. Google announces the ‘AI Workshop,’ which offers early access to AI capabilities and experiments. In research, Google DeepMind announces an AI that has achieved human-level performance in Quake III Arena Capture the Flag mode; among other things, human players rated the AI as “more collaborative than other humans” (though they had mixed reactions to the AI as their teammates). Google Research presents HOList, an environment for machine learning of higher-order theorem proving. Research from Oxford University creates a model for human-like machine thinking by mimicking the prefrontal cortex for language-guided imagination. A paper from Jeff Clune at Uber AI Labs suggests a different approach to Artificial General Intelligence, by means of AI-generating algorithms that learn how to produce AGI. MacroPolo produces a series of 6 charts on Chinese AI talent. CBInsights compiles the views of 52 “experts” on “How AI Will Go Out of Control.” Blum, Hopcroft, Kannan, and Microsoft release Foundations of Data Science; Hutter, Kotthoff, Vanschoren, and Springer-Verlag make Automated Machine Learning available. The Purdue Symposium on Ethics, Technology, and the Future of War and Security releases a video on the Ethical, Legal, and Social Implications of Autonomy and AI in Warfare.
The University of Colorado Boulder creates an Index of Complex Networks (ICON). And Alexander Reben creates a repository of 1 million fake AI-generated faces.

1 million fake AI generated faces for anyone to download at 1024x1024 resolution

Episode 2.30b

June 7, 2019

Continuing in research topics, Andy and Dave discuss research from MIT that treats image classification adversarial examples not as bugs, but as features – and intentionally mislabeled pictures; the approach adds robustness to vulnerability, and provides evidence that adversarial vulnerability is caused by non-robust features and is not inherently tied to the standard training framework. The Bulletin of the Atomic Scientists releases The Global Competition for AI Dominance in its May 2019 issue. Isaac Godfrie provides a summary of “few shot” learning papers that were presented at ICLR 2019. A research paper shows the interface between machine learning and the physical sciences. A new survey from Alegion and Dimensional Research examines the data issues impacting AI/ML research (for example, 96% of companies surveyed said they ran into problems with data quality). Georgios Mastorakis examines issues that arise from taking a human-like approach to training algorithms. Mohri, Rostamizadeh, and Talwalkar release a graduate-level book on Foundations of Machine Learning through MIT Press. CollegeHumor produces “A Computer Co-Wrote this Sketch,” in which the characters appear to become aware of their situation. And finally, the Genetic and Evolutionary Computation Conference is scheduled for 13-17 July 2019 in Prague, Czech Republic.

Upcoming Conferences

Genetic and Evolutionary Computation Conference (GECCO)

Episode 2.30a

May 31, 2019

Andy and Dave discuss a new IARPA program, Camera Network Research Data Collection, which intends to identify and track subjects across areas as large as six miles via security camera footage of varying type and quality. DARPA announces the recipients of its Next-Generation Non-Surgical Neurotechnology (N3) program, which includes efforts to read from and write to the brain. The Joint Artificial Intelligence Center adds two new areas of focus: cybersecurity and robotic process automation. Roborder, a provider of autonomous swarms of heterogeneous robots for border surveillance, will be running three pilot programs in Europe. Ford announced a team-up with Agility Robotics to launch a self-driving vehicle service by 2021, using Digit to deliver packages to doorsteps. The Computing Community Consortium and the Association for the Advancement of AI have made a request for comments on a draft of a “20-Year Community Roadmap for AI Research in the US.” In research items, Facebook AI, UT Austin, and UC Berkeley announced research that uses “active observation completion” to demonstrate the emergence of look-around behaviors. And other research from UC Berkeley explores the benefits of self-driving vehicles using “social perception” of nearby drivers in order to gain additional information.

Research

Episode 2.29

May 24, 2019

Andy and Dave take a look at the reintroduction of the "AI in Government Act," a bill that intends to get more AI technical experts into the US Government. San Francisco bans facial recognition software (but leaves the door open for the future), while Moscow announces plans to weave AI facial recognition into its urban surveillance net. Facebook opens up its data to academic researchers for analysis. DARPA announces the Air Combat Evolution (ACE) program, to automate air-to-air combat; DARPA also announces Teaching AI to Leverage Overlooked Residuals (TAILOR), to make soldiers fitter, happier, and more productive. And IARPA announces Trojans in AI (TrojAI), an effort to inspect AI for malicious code. In research, Andy and Dave discuss research from Frankle at MIT that proposes a "Lottery Ticket" hypothesis, which suggests only certain "winning combinations" are necessary for training a neural network, and that researchers have been training neural networks that are much larger than they need to be, to increase the chances of including one of these winning combinations. Leon Bottou at Facebook AI proposes a method for using AI to identify causal relationships in data (and which goes against the common modern practice of combining data sets into one giant dataset). And research from Cambridge, Georgia Tech, and the University of Pennsylvania demonstrates that Magic: the Gathering is officially the world’s most complicated game (and is Turing complete). In reports of the week, the Stockholm International Peace Research Institute releases the Impact of AI on Strategic Stability and Nuclear Risk. IKV and Pax Christi release The State of AI. Analytics Vidhya has compiled a list of 25 open datasets for deep learning. Benedek Rozemberczki has curated a list of decision tree research papers. The IEEE Spectrum releases a report on Accelerating Autonomous Vehicle Technology. The May 2019 issue of The Scientist contains 15 articles on how Biology is tackling AI.
David Kriesel provides A Brief Introduction to Neural Networks. COL Jasper Jeffers wins the 2019 Sci-Fi Writing Contest with AN41. The ICLR 2019 provides video on four talks, including Frankle’s Lottery Ticket hypothesis and Bottou’s Causal Invariance. Melanie Mitchell gives a TED Talk on the Collapse of AI and the possibility of an AI winter. And the National Academies-Royal Society Public Symposium will be meeting in DC on 24 May for an International Dialogue on AI.

Videos of the Week

The Collapse of Artificial Intelligence

(Upcoming) Conference of the Week

Episode 2.28

May 17, 2019

“Bots” reign supreme in this week’s episode, though Andy and Dave start the discussion with NIST’s RFI on the development of technical standards for AI. A Harvard Medical School project demonstrates a catheter that can autonomously move inside a live, beating pig’s heart. Zipline uses medical delivery drones in Rwanda. University of Maryland researchers demonstrate drone delivery of a kidney for transplant. NASA tests a CICADA swarm, and is also investigating Marsbees. And Starship robo-couriers deliver food to students at GMU. In research from Berkeley, a robot learns to use improvised tools to complete tasks, including those with physical cause-and-effect relationships. Researchers at MIT, MIT-IBM Watson, and DeepMind create the Neuro-Symbolic Concept Learner (NSCL), which uses a hybrid connectionist/symbolic approach, and seems to be a “true” AI implementation of Winograd’s SHRDLU system from the ’60s. Research from Tsinghua University and Google demonstrates Neural Logic Machines, a neural-symbolic architecture for both inductive learning and logic reasoning. Two papers compare logistic regression with machine learning methods for clinical predictions; one shows no benefit of one method over the other, while the other claims better performance with neural network methods (although Andy and Dave wonder whether this claim is true, given the error bars in the results). Algorithm Watch publishes a Global Inventory of AI Ethics Guidelines. Times Higher Education (THE) and Microsoft release a survey on AI of more than 100 AI experts and university leaders. The Department of Information Technology at Uppsala University in Sweden has made its lecture notes for a statistical machine learning course available. The Santa Fe Institute reprints a classic collection of essays from its Founding Workshops. Robert Kranekg pens a story about an Angry Engineer. And the OpenAI Robotics Symposium 2019 releases the full video proceedings online.

(Upcoming) Conference of the Week

Episode 2.27

May 10, 2019

Professor Jennifer McArdle, Assistant Professor of Cyber Defense at Salve Regina University, joins Andy and Dave for a discussion on AI and machine learning. Jenny is leading a group of graduate students who are working on creating a strategic-level primer on AI, particularly aimed at those who may be less familiar with the technical aspects, as well as a War on the Rocks article on AI in training and synthetic environments. Her students are studying in a variety of areas, including cyber defense and digital forensics, cyber and synthetic training, cyber intelligence, healthcare and healthcare administration, and administrative justice. Graduate students Mackenzie Mandile and Saurav Chatterjee also join for a discussion on their research topics. In the photo (from left to right): Maria Hendrickson, Gabrielle Cusano, Abigail Verille, Erin Rorke, (John Cleese), Saurav Chatterjee, Allegra Graziano, Santiago Durango, Eric Baucke, Mackenzie Mandile, Dave Broyles, Jennifer McArdle, Andy Ilachinski, John Crooks, (Getafix), and Lt. Col. David Lyle.

Overview of a New Field of Research: Machine Behavior

AI Pioneer Nils Nilsson Passes Away

Episode 2.25

April 26, 2019

Andy and Dave discuss the Department of Energy’s attempt to create the world’s longest acronym, with DIFFERENTIATE (Design Intelligence for Formidable Energy Reduction Engendering Numerous Totally Impactful Advanced Technology Enhancements), and to accelerate incorporation of ML into energy technology and product design. Google cancels its AI ethics board after thousands of employees sign a petition calling for the removal of one member with anti-LGBTQ and anti-immigrant views. NASA unveils the Astrobees, one-foot cube robots that will work autonomously on the International Space Station to check inventory and monitor noise levels, among other things. And Microsoft partners with French online education platform OpenClassrooms to train and recruit promising students in AI. Research from Eindhoven University of Technology and the University of Trento takes a biologically “inspired” approach to neural net learning, through Neuron Elevation Traces (NATs), which allow additional data storage in each synapse; the result appears to increase the plasticity of the synapses. A mathematical reasoning model from DeepMind can solve some arithmetic, algebra, and probability problems, though sometimes gets simple calculations incorrect (such as 1 + 1 + … + 1, for n>=7). And research creates a musculoskeletal system that can use muscle activation to simulate movement and control. A report from Element AI examines the global AI talent distributions in 2019, to include (perhaps not surprisingly) the observation that the supply of top-tier AI talent does not meet the demand. A paper in Nature Reviews Physics surveys the physics of brain network structure, function, and control. A short sci-fi story from Jeffrey Ford describes The Seventh Expression of the Robot General. And Andy highlights a video from 1961 on The Thinking Machine.

Short Story of the Week

Video of the Week

Episode 2.24

April 19, 2019

Andy and Dave discuss the first image of a black hole, and its link to machine learning -- with research from Katie Bouman while she was at MIT, developing Continuous High-resolution Image Reconstruction using Patch priors (CHIRP), as a way to stitch together different sources to create a continuous whole. Next, Andy and Dave discuss research from the Sorbonne and IST Austria that tries to deduce the reward function of a recurrent neural network by assuming the neurons are agents. And research from Hopfield and Krotov examines a way to approach neural network learning in a more “plausible” biological fashion, with a more physically local method of plasticity. In reports, the European Commission releases its 41-page report on Ethics Guidelines for Trustworthy AI. Elizabeth Holm publishes a short paper in defense of the black box. A paper in IEEE Spectrum examines the actual health care products (compared to the partnerships and promises) of IBM Watson. Sean Luke publishes the second edition of The Essentials of Metaheuristics. And the video of the week is a 2016 TED Talk by Katie Bouman on the development of the software that combines the data collected by individual telescopes.

Book of the Week

Video of the Week

Discusses the development of the software used to combine the “images” collected by individual telescopes.

Episode 2.23

April 12, 2019

Andy and Dave discuss Simulated Policy Learning (SimPLe), from Google Brain, which attempts to help reinforcement learning methods learn effective policies for complex tasks, such as Atari games (using the Atari Learning Environment, ALE); the method trains a policy in a simulated environment so that it achieves good performance in the original environment. From Google and Princeton University, the TossingBot learns to throw arbitrary objects into bins; researchers use “residual physics” to provide a baseline knowledge of the world (e.g., ballistics) to further improve tossing accuracy. Researchers at Rutgers demonstrate a probabilistic approach for reasoning about the 3D shapes of unknown objects, as a robot manipulates its environment. DeepMind publishes results that use the AI itself to figure out where the AI will fail. And research from Northwestern, the University of Chicago, and the Santa Fe Institute examines the dynamics of failure across science, startups, and security efforts. In clickbait-y news, scientists create an AI that can predict when a person will die (when in actuality, they used machine learning methods to examine prediction of premature death, and compared them with standard epidemiological approaches). Researchers create a memristor-based hybrid analog-digital computing platform to demonstrate deep-Q reinforcement learning. Microsoft demonstrates end-to-end automation of DNA data storage (21 hours to encode the word “hello”). The US Air Force is exploring AI-powered autonomous drones in its Skyborg program. Keen Security Lab of Tencent reports vulnerabilities of Tesla Autopilot, to include inducing the vehicle to switch lanes. A paper in the Springer AI Review Journal provides a survey of ML and DL frameworks and libraries for large-scale data mining. Los Alamos Labs publishes a survey of quantum algorithm implementations. Scott Cunningham publishes Causal Inference. Yaneer Bar-Yam makes a 2003 work, Dynamics of Complex Systems, available.
Easley and Kleinberg publish Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Andy highlights a sci-fi story from 2008 from Elizabeth Bear, Tideline. Paul Oh pens a fictional story of the Army’s C2 AI program, Project AlphaWare. The National Academies-Royal Society Public Symposium will hold a discussion on 24 May, AI: An International Dialogue. More videos appear from DARPA’s AI Colloquium. A website compiles datasets for machine learning. And Stephen Jordan provides a comprehensive catalog of quantum algorithms.

Episode 2.22

April 5, 2019

The Institute of Electrical and Electronics Engineers (IEEE) has released the first edition of Ethically Aligned Design (EAD1e), a nearly 300-page report involving thousands of global experts; the report covers 8 major principles including transparency, accountability, and awareness of misuse. DARPA announces the Artificial Social Intelligence for Successful Teams program, which will attempt to help AI build shared mental models and understand the intentions, expectations, and emotions of its human counterparts. DARPA also announced a program to design chips for Real Time Machine Learning (RTML), which will generate optimized hardware design configurations and standard code, based on the objectives of the specific ML algorithms and systems. The U.S. Army awarded a $152M contract to QinetiQ North America for producing “backpack-sized” robots; the Common Robotic System (Individual), or CRS(I), is a remotely operated, unmanned ground vehicle. The White House has launched a site to highlight AI initiatives. Anduril Industries gets a Project MAVEN contract to support the Joint AI Center. And the 2019 Turing Award goes to neural network pioneers Hinton, LeCun, and Bengio. Researchers at Johns Hopkins demonstrate that humans can decipher adversarial images; that is, they can “think like machines” and anticipate how image classifiers will incorrectly identify unrecognizable images. A group of researchers at MIT, Columbia, Cornell, and Harvard demonstrate “particle robots” inspired by biological cells; these robots can’t move individually, but can pulsate from a size of 6 in to about 9 in, and as a collective they can demonstrate movement and other collective behavior (even with a 20% failure of the components). Researchers at the Harbin Institute of Technology and Michigan State University control a swarm of “microbots” (here, single grains of hematite) through application of different magnetic fields.
And researchers use honey bees (in Austria) and zebrafish (in Switzerland) to influence each other’s collective behavior through robotic mediation. The Interregional Crime and Justice Research Institute releases a report on AI in law enforcement, from a recent meeting organized by INTERPOL. DefenseOne publishes a report from Tucker, Glass, and Bendett, on how the U.S. military services are using AI. An e-book from Frontiers in Robotics and AI collects 13 papers on the topic of “Consciousness in Humanoid Robots.” Andy highlights a book from 2007, “Artificial General Intelligence,” which claims to be the first to codify the use of AGI as a term-of-art. MIT Tech Review’s EnTech Digital 2019 has released the videos from its 25-26 March event. And DARPA has released more videos from its AI Colloquium. The U.N. Group of Governmental Experts is meeting in Geneva to discuss lethal autonomous weapons systems (LAWS). A short story from Husain and Cole describes a hypothetical future war in Europe between Russian and NATO forces. And Ian McDonald pens a story that captures the life of military drone pilots in Sanjeev and Robotwallah.

Episode 2.21

March 29, 2019

Andy and Dave begin with an AI-generated podcast, using the “dumbed down” GPT-2 with the repository of podcast notes; GPT-2 ends the faux podcast with a video called “The World Ends with Robots,” and Dave later discovers that a Google search on the title brings up zero hits. Ominous! Andy and Dave continue with a discussion of the Boeing 737 MAX crashes and the implications for autonomous systems. Stanford University launches the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which seeks to advance AI research to improve the human condition. Ahead of the Convention on Certain Conventional Weapons in Geneva, Japan announces its intention to submit a plan for maintaining control over lethal autonomous weapons systems. A new report from Hal Hodson at the Economist reveals that, should DeepMind successfully create artificial general intelligence, its Ethics Board will have legal “control” of the entity. And Steve Walker and Vint Cerf discuss other US Department of Defense projects that Google is working on, including the identification of deep fakes, and exploring new architectures to create more computing power. NVidia announces a $99 AI development kit, the AI Playground, and GauGAN. In research topics, Google explores whether neural networks show gestalt phenomena, looking specifically at the law of closure. Researchers with IBM Watson and Oxford examine supervised learning with quantum-enhanced feature spaces. Shashua and co-workers explore quantum entanglement in deep learning architectures. Dan Falk takes a look at how AI is changing science. And researchers at Facebook AI and Google AI examine the pitfalls of measuring emergent communication between agents. The World Intellectual Property Organization releases its 2019 trends in AI. A report takes a survey of the European Union’s AI ecosystem, while another paper surveys the field of robotic construction. Kieran Healy releases a book on Data Visualization.
Allen Downey publishes Think Bayes: Bayesian Statistics Made Simple. The Defense Innovation Board releases a video from its public listening session on AI ethics at CMU from 14 March. The 2019 Human-Centered AI Institute Symposium releases a video. And Irina Raicu compiles a list of readings about AI ethics.

Episode 2.20

March 22, 2019

Andy and Dave discuss “activation atlases,” recent work from OpenAI and Google that offers a new technique for visualizing interactions between the neurons in an image-classifying deep neural network. The UCLA Center for Vision, Cognition, Learning, and Autonomy, together with the International Center for AI and Robot Autonomy, publishes work on RAVEN – a dataset for Relational and Analogical Visual rEasoNing, which uses John Raven’s Progressive Matrices for testing joint spatial-temporal reasoning; in combination with a dynamic residual tree method, they see improvement over other methods, but still short of human performance. Research from the University of New South Wales uses machine learning to predict which of two patterns a subject will choose, before the subject is aware which one they have chosen. And Google Brain publishes research that demonstrates BigGAN generating high-fidelity images using only 10-20% of the usual labeled data. In announcements, DARPA holds its AI Colloquium on 6-7 March; the US Army is investing $72M into CMU for AI research; OpenAI launches OpenAI LP, a new company for funding safe artificial *general* intelligence; and the IEEE is set to release on 29 March the first edition of its Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. In reports of the week, the Allen Institute for AI examines the quality of AI papers and predicts that China will soon overtake the US in quality AI research; MMC publishes an examination of the State of AI in Europe; a paper looks at predicting research trends in the publications on Arxiv; and another paper surveys deep learning advances on different 3D data representations. Dive into Deep Learning is the book of the week, available online. The University of Vermont uses an AI and Project Gutenberg stories to identify six main arcs of storytelling. Dear Machine, by Greg Kieser, is the AI sci-fi story of the week.
John Sunda Hsia’s website compiles the “ultimate guide” to all of the upcoming AI and ML conferences. And the Allen Institute releases a “dumbed down” version of OpenAI’s GPT-2, with some resulting humorous reflections.

Episode 2.19

March 15, 2019

Andy and Dave discuss research from Neil Johnson, who looked to the movements of fly larvae to model financial systems, where a collection of agents share a common goal but have no way to communicate and coordinate their activities (a memory of five past events ends up being the ideal balance). Researchers at Carnegie Mellon demonstrate that random search with early stopping is a competitive Neural Architecture Search baseline, performing at least as well as “Efficient” NAS. Unrelated, but near-simultaneously published, research from AI Lab Swisscom shows that random search outperforms state-of-the-art NAS algorithms. Researchers at DeepMind investigate the possibility of creating an agent that can discover its world, and introduce NDIGO (Neural Differential Information Gain Optimization), designed to be “information seeking.” And the Electronics and Telecommunications Research Institute in South Korea creates SC-FEGAN, a face-editing GAN that builds off of a user’s sketches and other information. Georgetown University announces a $55M grant to create the Center for Security and Emerging Technology (CSET). Microsoft workers call on the company to cancel its military contract with the U.S. Army. DeepMind uses machine learning to predict wind turbine energy production. Australia’s Defence Department invests ~$5M to study how to make autonomous weapons behave ethically. And the U.K. government invests in its people and funds AI university courses with £115M. Reports suggest that U.S. police departments are using biased data to train crime-predicting algorithms. A thesis on Neural Reading Comprehension and Beyond by Danqi Chen becomes highly read. A report looks at the evaluation of citation graphs in AI research; and researchers provide a survey of deep learning for image super-resolution. Byron Reese blogs that we need new words to adjust to AI (to which Dave adds “AI-chemy” to the list).
In Point and Counterpoint, David Silver argues that AlphaZero exhibits the “essence of creativity,” while Sean Dorrance Kelly argues that AI can’t be an artist. Interpretable Machine Learning by Christoph Molnar hits version 1.0, and Andy highlights Asimov’s classic short story, The Machine that Won the War. And finally, a symposium at the Institute for Advanced Study in Princeton examines deep learning – alchemy or science?

Episode 2.18

March 8, 2019

OpenAI has trained an unsupervised language model that can perform basic reading comprehension, summarize text, answer questions, and generate coherent paragraphs; as Andy and Dave discuss, the bigger news came from OpenAI's decision to release a less-capable version of the GPT-2 model, "for the good of humanity," as one news site claimed. IBM's Project Debater lost a debate with champion debater Harish Natarajan, but more of the audience said Project Debater better enriched their knowledge on the topic. Princeton and Microsoft announce NAIL, an agent for playing general interactive fiction (such as the Zork series), consisting of multiple Decision Modules for performing various tasks. Columbia University takes a step toward reconstructing speech directly from the brain's auditory cortex, by temporarily placing electrodes in patients and having them listen to spoken numbers. DARPA announces SAIL-ON, the Science of Artificial Intelligence and Learning for Open-world Novelty, in an attempt to help AI adapt to constantly changing conditions. DARPA's Systematizing Confidence in Open Research and Evidence (SCORE) promises $7.6M to the Center for Open Science, for leading the charge on reproducibility. The Animal-AI Olympics hopes to create a survival-of-the-fittest competition that tests AI against tasks drawn from the animal kingdom. Facebook releases ELF OpenGo, an open source implementation of DeepMind's AlphaZero. Neuroscientists from Case Western Reserve discover an entirely new form of neural communication that works through electrical fields and can function over gaps in severed tissues. The Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence release a report on the Ethical and Societal Implications of Algorithms, Data, and AI. Technology for Global Security and the Center for Global Security Research join forces to understand and manage risks to international security and warfare posed by AI-related tech.
A short review in Science looks at brain circuitry and learning, and Andy pulls DeepMind's Neuroscience-inspired AI paper from 2017. Research examines an engineering-based design methodology for embedding ethics in autonomous robots, while another paper assesses the local interpretability of machine learning methods. Jeff Erickson releases a textbook on Algorithms; Daniel Shiffman publishes The Nature of Code; and Jason Brownlee offers up Clever Algorithms – Nature-Inspired Programming Recipes. A video from This Week in Machine Learning and AI dissects the controversy surrounding OpenAI's GPT-2 model. And finally, two websites offer up faces of fictional people.

Episode 2.17

February 22, 2019

Andy and Dave discuss a series of announcements: President Trump signs an Executive Order to prioritize and promote AI; the U.S. Department of Defense releases its 2019 AI Strategy; DARPA announces an Intelligent Neural Interface program focused on improving neurotechnology, and DARPA announces Guaranteeing AI Robustness against Deception (GARD), intended as an almost immune-system-like approach to increase the resistance of ML models to deception; Securities and Exchange Commission filings from both Google and Microsoft disclose in “risk factors” that products with AI and ML may not work as intended, and may exacerbate a variety of problems, which could adversely affect the companies’ branding and reputation; and Uber AI releases Ludwig, an open source deep learning toolbox that allows users to train and test deep learning models without writing code. In research topics, DeepMind sets its sights on using ML to conquer Hanabi, a cooperative game with imperfect information that requires a “theory of mind.” The Allen Institute for AI releases “Iconary,” a game of Pictionary with an AI partner. Research from Expedia Group uses an attentional convolutional network for facial expression recognition. IBM publishes research on a neuro-inspired “creativity” decoder. IBM Research AI and Arizona State University examine when AI bots might lie (in the context of “acceptable” social white lies). And research from Munich demonstrates that humans are less likely to hurt or sacrifice a robot if it is more human-like. In reports, the McKinsey Global Institute examines Europe’s Gap in Digital and AI. In papers, Johns Hopkins University publishes an opinion paper on the strengths and weaknesses of deep nets for vision, and the Centre of AI in Australia and the University of Illinois at Chicago publish a comprehensive survey on graph neural networks. John Brockman will be releasing a new book, Possible Minds: 25 Ways of Looking at AI.
A TED Talk from Hugh Herr looks at bionics’ ability to extend human potential. And registration is now open for the Sackler Colloquium on the science of Deep Learning at the National Academy of Sciences.

Episode 2.16

February 15, 2019

For research topics, Andy and Dave discuss the task-agnostic self-modeling machine from Columbia University, a robotic arm that learns to build an approximate model of itself and then interact with the world; they also discuss the over-hyped reporting of the research. Much less hyped, but possibly more groundbreaking, research from MIT results in a robot that can play the tower-block game Jenga, using multisensory fusion to do so. More research from MIT attempts to synthesize probabilistic programs for automatic data modeling. Research from the University of Tübingen shows that approximating convolutional neural nets with bag-of-local-features modeling yields decent results with ImageNet. And the University of Washington and the Allen Institute for AI announce the Atlas of Machine Commonsense (ATOMIC), a collection of 877k textual descriptions of inferential knowledge, which allows more accurate inference for previously unseen events. In announcements of the week, DARPA announces the Competency-Aware Machine Learning (CAML) program for ML systems to assess their own performance, and Measuring Biological Aptitude (MBA), which attempts to link genotype to phenotype in order to improve recruiting, training, and other aspects. The U.S. Navy’s Sea Hunter drone ship completes an autonomous trip from San Diego to Hawaii and back. The "Papers with Code" archive attempts to collect and link ML-related papers, code, and evaluation tables. The U.S. Army activates its AI Task Force at Carnegie Mellon. And the International Conference on Learning Representations (ICLR) 2019 has been announced for 6-9 May 2019.
In media of the week, the World Intellectual Property Organization releases its report on the Technology Trends of 2019; the AMA Journal of Ethics publishes an entire (open-access) issue devoted to AI in health care; the Congressional Research Service updates its report on AI and National Security; Dan Simon provides a hefty tome on Evolutionary Optimization Algorithms; and Julian Togelius publishes a book on Playing Smart. Wake Word is the Game of the Week, and in videos, Super Bowl ads provided a variety of glimpses into life with robots.

Episode 2.15

February 8, 2019

In recent announcements, Andy and Dave discuss the National Endowment for Science, Technology, and the Arts (Nesta) launch of a project that is ‘Mapping AI Governance;’ MIT Tech Review’s survey of AI and ML research suggests that the era of deep learning may be coming to an end (or does it?); a December 2018 survey shows strong opposition to “killer robots;” China has (internally) released a report on its view of the “State of AI in China;” and DARPA wants to build conscious robots using insect brains, announcing its μBRAIN program. In research topics, Andy and Dave discuss the recent competition between DeepMind’s AlphaStar and human professional gamers in playing StarCraft II. MIT and Microsoft have created a model that can identify instances where autonomous systems have learned from training examples that don’t match what’s happening in the real world, thus creating blind spots. Boston University publishes research that allows an ordinary camera to “see” around corners using shadow projection, in essence turning a wall into a mirror – and doing so without any AI or ML techniques. In papers and reports, the Office of the Director of National Intelligence releases its AIM Initiative – a strategy for augmenting intelligence using machines; a report provides a survey of the state of self-driving cars; and another report surveys the state of AI/ML in medicine. Game Changer takes a look at AlphaZero’s chess strategies, while The Hundred-Page Machine Learning Book offers a condensed overview of ML. The Association for the Advancement of AI conference (27 Jan – 1 Feb) begins to release videos of the conference, including an Oxford-style debate on the Future of AI. And finally, Andy and Dave conclude with a “hype teaser” for next week – with SELF-AWARE robots!

Episode 2.14

February 1, 2019

CNA’s Center for Autonomy and Artificial Intelligence kicks off its first panel for 2019 with a live recording of AI with AI! Andy and Dave take a step back and look at the broader trends of research and announcements involving AI and machine learning, including: a summary of historical events and issues; the myths and hype, looking at expectations, buzzwords, and reality; hits and misses (and more hype!); and some of the many challenges of why AI is far from a panacea.

Episode 2.13

January 25, 2019

Andy and Dave discuss Microsoft’s $1.76B five-year service deal with the Department of Defense, US Coast Guard, and the intelligence community; the US Defense Innovation Board announces its first "public listening session" on AI principles; Finland announces an AI experiment to teach 1% of its population the basics of AI; a report from the Center for the Governance of AI and the Future of Humanity Institute examines American attitudes and trends toward AI; and the Reuters Institute for the Study of Journalism examines UK media coverage of AI. In research news, MIT and the IBM Watson AI Lab dissect a GAN to visualize and understand its inner workings, identifying clusters of neurons that represent concepts; they also created GAN Paint, which lets a user add or subtract elements from a photo. Research from NYU and Columbia trained a single network model to perform 20 cognitive tasks, and discovered that this learning gives rise to compositionality of task representations, where one task can be performed by recombining representations from other tasks. Researchers at the University of Waterloo, Princeton University, and Tel Aviv University demonstrate that a type of machine learning can be undecidable, that is, unsolvable. Jeff Huang at Brown University has compiled a list of the best papers at computer science conferences since 1996; McGill and Google Brain offer a condensed Introduction to Deep Reinforcement Learning; Nature launches the inaugural issue of Nature Machine Intelligence; and a paper explores designing neural networks through neuroevolution. Major General Mick Ryan debuts a sci-fi story, “AugoStrat Awakenings;” NeurIPS 2018 makes all videos and slides available; and USNI’s Proceedings publishes an essay from CAPT Sharif Calfee, “The Navy Needs an Autonomy Project Office.”

Episode 2.12

January 18, 2019

Anna Williams joins Andy and Dave as CNA’s Russia AI and Autonomy expert Sam Bendett returns to discuss the latest news and developments from Russia. Sam describes the progress that the Russian Ministry of Defense has made in implementing AI since its announcement of an AI Roadmap in March 2018, including some of the organizations involved and their advances. The group also discusses developments in the Russian civilian AI sector, as well as Russia’s intent to publish a civilian AI Roadmap by mid-year. Sam also describes some of the recent AI research and announcements (into which Andy and Dave note less visibility in English venues), and the group wraps up with a discussion on the latest developments in Russian military unmanned systems.

Episode 2.11

January 11, 2019

Andy and Dave discuss Rodney Brooks' predictions on AI from early 2018, and his (on-going) review of those predictions. The European Commission releases a report on AI and Ethics, a framework for "Trustworthy AI." DARPA announces the Knowledge-directed AI Reasoning over Schemas (KAIROS) program, aimed at understanding "complex events." The Standardized Project Gutenberg Corpus attempts to provide researchers broader data across the project's complete data holdings. And MORS announces a special meeting on AI and Autonomy at JHU/APL in February. In research, Andy and Dave discuss work from Keio University, which shows that slime mold can approximate solutions to NP-hard problems in linear time (and differently from other known approximations). Researchers in Spain, the UK, and the Netherlands demonstrate that kilobots (small 3 cm robots) with basic communication rule-sets will self-organize. Research from UCLA and Stanford creates an AI system that mimics how humans visualize and identify objects by feeding the system many pieces of an object, called "viewlets." NVIDIA shows off its latest GAN, which can generate fictional human faces that are essentially indistinguishable from real ones; further, they structure the generator to provide more control over various properties of the latent space (such as pose, hair, and face shape). Other research attempts to judge a paper on how good it looks. And in the "click-bait" of the week, Andy and Dave discuss an article from TechCrunch, which misrepresented bona fide (and dated) AI research from Google and Stanford. Two surveys provide overviews on different topics: one on the safety and trustworthiness of deep neural networks, and the other on mini-UAV-based remote sensing. A report from CIFAR summarizes national and regional AI strategies (minus the US and Russia).
In books of the week, Miguel Hernán and James Robins are working on a Causal Inference book, and Michael Nielsen has provided a book on Neural Networks and Deep Learning. CW3 Jesse R. Crifasi provides a fictional peek into a combat scenario involving AI. And Samim Winiger has started a mini documentary series, "LIFE," on the intersection of humans and machines.

Episode 2.10

January 4, 2019

In shorter news items, Andy and Dave discuss the announcement that the Allen Institute for Artificial Intelligence is partnering with Microsoft Research to connect AI2’s Semantic Scholar academic search engine with Microsoft’s Academic Graph. The University of Pavia in Italy demonstrates an artificial neuron (a perceptron) on an actual quantum processor. Another Tesla on Autopilot has an accident; and Waymo demonstrates that pure imitation learning (with 30 million examples) is not sufficient for teaching a model to drive a car. And Tumblr implements a porn-detecting AI. In research topics, researchers with Facebook AI, MIT, and UC Berkeley demonstrate “dataset distillation,” compressing 60,000 MNIST images into 10 synthetic images. Researchers at the University of Maryland demonstrate the ability to hide adversarial attacks from network interpretation, so that a network which visually highlights the item it identifies will point to the “original” item instead of the adversarial one. Adobe and Auburn show that neural networks fail miserably for “out-of-distribution” inputs (or, “strange poses of familiar objects”), and they probe deeper into the parameters that cause the misbehavior. In other news, the AI Narratives Report explores how AI is portrayed and perceived. The AI Index releases its 2018 version. AI researchers have a spirited debate on Twitter about deep learning and symbol manipulation. Quantum Computing: Progress and Prospects provides a deeper look at this nascent technology. And Juergen Schmidhuber gives a TEDx talk on how “true AI” will change everything.

Episode 2.9

December 21, 2018

The Joint Artificial Intelligence Center is up and running, and Andy and Dave discuss some of the newly revealed details. And the rebranded NeurIPS (originally NIPS), the largest machine learning conference of the year, holds its 32nd annual conference in Montreal, Canada, with a keynote discussion on “What Bodies Think About” by Michael Levin. And a group of graduate students have created a community-driven database to provide links to tasks, data, metrics, and results on the “state of the art” for AI. In other news, one of the “best paper” awards at NeurIPS goes to Neural Ordinary Differential Equations, research from the University of Toronto that replaces the nodes and connections of typical neural networks with one continuous computation of differential equations. DeepMind publishes its paper on AlphaZero, which details the announcements made last year on the ability of the neural network to play chess, shogi, and Go “from scratch.” And AlphaFold from DeepMind brings machine learning methods to a protein folding competition. In reports of the week, the AI Now Institute at New York University releases its 3rd annual report on understanding the social implications of AI. With a blend of technology and philosophy, Arsiwalla and co-workers break up the complex “morphospace” of consciousness into three categories: computational, autonomy, and social; and they map various examples to this space. For interactive fun generating images with a GAN, check out the “Ganbreeder,” though maybe not before going to sleep. In videos of the week, “Earworm” tells the tale of an AI that deleted a century; and CIMON, the ISS robot, interacts with the space crew. And finally, Russia24 joins a long history of people dressing up and pretending to be robots.

Episode 2.8

December 14, 2018

This week, Andy and Dave discuss the US Department of Commerce’s announcement that it will consider regulating AI as an export; counter to that idea, Amazon makes freely available 45+ hours of training materials on machine learning, with tailored learning paths; Oren Etzioni proposes ideas for broader regulation of AI research that attempt to balance the benefits with the potential harms; DARPA tests its CODE program for autonomous drone operations in the presence of GPS and communications jamming; a Chinese researcher announces the use of CRISPR to produce the first gene-edited babies; and the 2018 ACM Gordon Bell Prize goes to Lawrence Berkeley National Lab for achieving the first exa-scale (10^18) application, running on over 27,000 NVIDIA GPUs. Uber AI announces advances in exploration and curiosity that help an algorithm “win” Montezuma’s Revenge. Research from Facebook AI suggests that pre-training convolutional neural nets may provide fewer benefits over random initialization than previously thought. Google Brain examines how well ImageNet architectures transfer to other tasks. A paper from INDOPACOM describes the exploitation of big data for special operations forces. Yuxi Li publishes a technical paper on deep reinforcement learning. And a recent paper explores self-organized criticality as a fundamental property of neural systems. Christopher Bishop’s Pattern Recognition and Machine Learning is available online, and Architects of Intelligence provides one-on-one conversations with 23 AI researchers. Maxim Pozdorovkin releases “The Truth about Killer Robots” on HBO, and finally, a Financial Times article over-hypes (anti-hypes?) a questionable graph on Chinese AI investments.

Episode 2.7

December 7, 2018

In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT’s Wu and Tegmark, which attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggests themes of the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 “Is an AI Winter On Its Way?,” in which he reviews cracks appearing in the AI façade, with particular focus on the arena of self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the Importance of the AI Ecosystem. And another paper takes insights from the social sciences to provide insight into AI.
Finally, MIT Press has updated one of the major sources on Reinforcement Learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics.

Episode 2.6

November 30, 2018

Andy and Dave discuss research from Hasani and colleagues that uses a natural method for growing a neural network, which they use to demonstrate that a 12-neuron network can be trained to steer and park a rover robot to a given spot. Jeff Hawkins and co-workers describe a new theory of intelligence, positing that every part of the human neocortex learns complete models of objects and concepts, resulting in a "thousand brains theory of intelligence." The UK publishes a 2000+ page report on the state of the AI industry in the UK. A technical paper asks whether multiagent deep reinforcement learning is the answer or the question. The books of the week include Sejnowski’s The Deep Learning Revolution, and Gerrish’s How Smart Machines Think. And the videos of the week include the Deep Learning Summer School series and the Reinforcement Learning Summer School series.

Episode 2.5

November 23, 2018

In the latest news, Andy and Dave discuss Microsoft’s announcement that it will sell artificial intelligence and other advanced technology to the Pentagon; Google is giving $25M to projects that use artificial intelligence for humanitarian purposes; Stanford announces the Human-Centered AI initiative; AdaNet offers fast and flexible AutoML with “learning guarantees;” and a “human brain” supercomputer (using neuromorphic computing) with 1 million processors is switched on for the first time. In other stories, Andy and Dave discuss the AI-generated portrait that sold at a Christie’s auction for $432,500. MIT Media Lab announces the results of its “Moral Machine” experiment, which asked people around the globe to choose how a self-driving vehicle should behave in different moral dilemmas. And Google AI describes its “fluid annotation” method, an exploratory machine learning-powered interface for faster image annotation.

Episode 2.4

November 16, 2018

Deep generative models can generate “spurious” samples (i.e., errors). Researchers from Université Paris-Saclay and PSL Research University explore a basic question: “Is it possible to get rid of all spurious samples [in deep generative models] without sacrificing coverage of a model?” Their research suggests a “Heisenberg Uncertainty”-like tradeoff between full coverage and spurious objects. DeepMind announces large-scale GAN training for natural image synthesis with high fidelity. And Andy discusses Topaz’s “AI Gigapixel,” an AI-driven software capability that intelligently adds information to photos to increase their resolution/size. In the paper of the week, researchers flip the Turing Test and ask humans what one word they would use to convince a human judge that they’re alive; the results are crappy. On a related note, Andy recalls Brian Christian’s achievement of being The Most Human Human. For books of the week, the UK’s Development, Concepts, and Doctrine Centre publishes the 6th edition of Global Strategic Trends; papers from the 3rd conference on the Philosophy and Theory of AI are available in a single publication; and Minsky’s The Society of Mind gets a free hyperlinked online version (with the classic illustrations). In the video of the week, the Center for Technology Innovation asks “Who should answer the ethical questions surrounding AI?” And in the “silliness of the week,” a robot appears at a UK parliamentary meeting and “talks” to MPs about the future of AI in the classroom.

Episode 2.3

November 9, 2018

Andy and Dave discuss the latest corporate buzz on the Department of Defense’s JEDI contract, in which Microsoft employees publish an open letter and accuse the company of straying from its AI principles; a new DARPA program seeks to codify humans’ basic common sense through computational models and repositories; MIT establishes the Stephen A. Schwarzman College of Computing, a $1B initiative and the single largest by an American academic institution; MIT also announces an Autonomous Vehicle Technology study, a data-driven effort for “safe and enjoyable” human-AI interaction in driving; Wired takes a look at initial data on accidents involving self-driving vehicles; and researchers (at least 23!) publish a complete electron microscopy volume of the brain of the fruit fly. In deeper topics, Andy and Dave discuss research from the University of Louisville that shows the failure of neural networks to understand optical illusions. Researchers from UPenn, ARL, and NYU demonstrate a drone that can be controlled by your eyes. Stocco and colleagues demonstrate BrainNet, a “social network” that allows three people to transmit “thoughts” to each other. And researchers at École Centrale de Lyon have created a new framework that may allow robots to autonomously optimize their own hyper-parameters – about which Dave tries to look on the bright side.

Episode 2.2

November 2, 2018

Andy and Dave focus on a variety of big news items, including: Google bows out of the bidding for the Pentagon’s “JEDI” cloud contract, valued at $10 billion; the Government Accountability Office releases a 50-page report on the poor state of the cybersecurity of U.S. weapons systems; “The Big Hack” makes big news, with Bloomberg reporting that China inserted a tiny chip on hardware in order to infiltrate U.S. networks; the U.S. Department of Transportation looks to rewrite safety rules in order to accommodate fully driverless vehicles on public roads; two pioneers in collaborative and social robots (Rethink and Jibo) close their doors; and DeepMind announces efforts to address “technical AI safety,” including the areas of specification (true intentions), robustness (safety upon perturbation), and assurance (understanding and control). The latter topic launches further discussion into ethics-related efforts for AI, including the UK Machine Intelligence Garage Ethics Committee; a paper on the motivations and risks of machine ethics; and research from North Carolina State University showing that the Association for Computing Machinery’s code of ethics does not appear to affect the decisions made by software developers. All the excitement somehow causes Dave to invoke Jean Valjean when he means to say Javert. C’est la vie! Finally, Andy describes a couple of motherlodes of papers; Biostorm by Anthony DeCapite makes the story of the week; ZDNet ranks 36 of the best movies on AI; the AutoML.org group is prepping an open-access book on AutoML; and Dave goes fanboy over the Automata web series from Penny Arcade.

Episode 2.1

October 26, 2018

Welcome to Version 2.0 of AI with AI! Dave starts off by trying to explain the weird podcast titles, and he plugs Andy’s (@ai_ilachinski) and his (@crypticnarwhal) Twitter accounts. Andy and Dave then get down to business discussing Britain’s “successful” trials of using AI (“SAPIENT”) in urban battlefield scanning to identify enemy movements; the IEEE launches an ethics certification program for autonomous and intelligent systems; the U.S. Department of Energy invests $218M in Quantum Information Science; and DARPA announces the Subterranean Challenge, for technologies to augment underground operations, wherein Dave makes a dire prediction of Tolkien proportions! Andy and Dave then delve greedily and deeply into a series of topics on counter-AI. They start by discussing Dedrone, which has developed a capability to detect and track swarms (of robots/drones). Researchers in Korea use an AI-enabled drone to herd flocks of birds (diverting them from designated airspace). Researchers at the University at Albany, with GE, demonstrate the ability to attack object detectors (Faster Regional Convolutional Neural Networks) using imperceptible patches on the background; and researchers at the Georgia Institute of Technology, with Intel, announce ShapeShifter, a targeted physical attack on the Faster R-CNN object detectors used in “state-of-the-art” systems (such as the current generation of self-driving vehicles).
On the other side, Luca de Alfaro at the University of California, Santa Cruz, publishes research into creating neural networks with built-in resistance to adversarial attacks by reducing the networks’ “local linearity.” After a quick touch on work from Google Research on simplifying and compacting neural networks (for resource-constrained devices) without floating point operations or multiplications, Andy recommends a paper on learning causality; August Cole’s Angry Trident makes the story of the week; Interpretable Machine Learning (by Molnar) is the book of the week, along with Pattern Classification by Duda, Hart, and Stork; and Cristopher Moore explores the limits of computation in a two-part video series.

Season 1

Episode 50

October 19, 2018

Andy and Dave discuss the “Transparency by Design Network” (TbD-net), research from MIT Lincoln Lab that uses a collection of modular neural nets to perform specific image identification subtasks. The resulting output places heat-map blobs over objects in an image, which allows a human analyst to see how a module is interpreting the image (and to use that information to further improve the model’s accuracy). In research from DeepMind and the University of Oxford, researchers attempt to solve the problem that neural nets have in not manipulating numerical information well outside of the range of values encountered during training. Researchers created a Neural Accumulator and a Neural Arithmetic Logic Unit (in essence, representing numerical quantities as individual neurons without a nonlinearity) to allow a system to learn to represent and manipulate numbers in a systematic way. Georgia Tech has developed a machine learning-based method to automate the generation of novel video games, using Super Mario Bros, Mega Man, and Kirby’s Adventure as inputs. And Kate Crawford and Vladan Joler have created a massive visualization of the many processes that make an Amazon Echo work, in the “Anatomy of an AI system.” DARPA celebrates its 60th anniversary with a 184-page paper that highlights its research over the last 60 years; Google launches a “What-If Tool” for probing datasets at a non-coding level; Neural Networks and Learning Machines (3rd Edition) by Simon Haykin is available for free. Robin R. Murphy curates information on “Robotics Through Science Fiction” (and more); all of the keynotes and presentations from the Joint Multi-Conference on Human-Level Artificial Intelligence are available online, likely requiring a week of vacation to view them all; and the 11th International Conference on Swarm Intelligence will be in Rome at the end of October 2018.

Episode 49

October 12, 2018

Andy and Dave discuss an online essay by Tim Dutton, which summarizes the AI strategies that nations have published over the last year and a half. Sentient Investment Management announces plans to liquidate its hedge fund that used AI to forecast investment strategies. IBM spearheads an effort to create standards for AI developers to demonstrate the fairness of their AI algorithms, through a Supplier’s Declaration of Conformity. Google announces an Unrestricted Adversarial Examples Challenge, with “birds versus bicycles,” where applicants can either submit a defender (an image classifier that will resist adversarial attacks) or an attacker (an adversarial attack that attempts to make the defender declare a confident, incorrect answer). The Drone Racing League announces a new competition for teams developing AI pilots for drone racing. And DARPA announces research that has allowed a paralyzed man to send (and receive) signals for three drones simultaneously, through a surgically implanted microchip in the brain.

Episode 48

September 28, 2018

Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).

Episode 47

September 14, 2018

Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN), for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”

Episode 46

September 7, 2018

Andy and Dave discuss the latest developments in OpenAI’s AI team that competed against human players in Dota 2, a team-based tower defense game. Researchers published a method for probing Atari agents to understand where the agents were focusing when learning to play games (and to understand why they are good at games like Space Invaders, but not at Ms. Pac-Man). A DeepMind AI can match health experts in spotting eye diseases from optical coherence tomography (OCT) scans; it uses two networks to segment the problems, which also gives the AI a way to indicate which portion of the scans prompted the diagnosis. Research from Germany and the UK showed that children may be especially vulnerable to peer pressure from robots; the experiments replicated Asch’s conformity experiments from the 1950s, but interestingly, adults did not show the same vulnerability to robot peer pressure. Research from Rosenfeld, Zemel, and Tsotsos showed that “minor” perturbations in images (such as shifting the location of an elephant) can cause misclassifications to occur, again highlighting the potential for failures in image classifiers. Andy recommends “The Seven Tools of Causal Inference with Reflections on Machine Learning” by Pearl; Algorithms for Reinforcement Learning by Szepesvari is available online; Robin Sloan has a novel, Sourdough, with much use of AI and robots; Wolfram has an interview on the computational universe; a new documentary on AI looks at the life and role of Geoffrey Hinton; and Josh Tenenbaum examines the issues of “Growing a Mind in a Machine.”

Episode 45

August 31, 2018

In breaking news, Andy and Dave discuss the Convention on Conventional Weapons meeting on lethal autonomous weapons systems (LAWS) at the United Nations, where more than 70 countries are participating in the sixth meeting since 2014. Highlights include the priorities for discussion, as well as the UK delegation's role and position. The Pentagon’s AI programs get a boost in the defense budget. DARPA announces the Automating Scientific Knowledge Extraction (ASKE) project, with the lofty goal of building an AI tool that can automatically generate, test, and refine its own scientific hypotheses. Google employees react to and protest the company’s secret, censored search engine (Dragonfly) for China. The Electronic Frontier Foundation releases a white paper on Mitigating the Risks of Military AI, which includes applications outside of the “kill chain.” And Brookings releases the results of a survey that asks people whether AI technologies should be developed for warfare.

Episode 44

August 24, 2018

The Director for CNA’s Center for Autonomy and AI, Dr. Larry Lewis, joins Dave for a discussion on understanding and mitigating the risks of using autonomy and AI in war. They discuss some of the commonly voiced risks of autonomy and AI, in application for war, but also in general application, which include: AI will destroy the world; AI and lethal autonomy are unethical; lack of accountability; and lack of discrimination. Having examined the underpinnings of these commonly voiced risks, Larry and Dave move on to practical descriptions and identifications of risks for use of AI and autonomy in war, including the context of military operations, the supporting institutional development (including materiel, training, and test & evaluation), as well as the law and policy that govern their use. They wrap up with a discussion about the current status of organizations and thought leaders in the Department of Defense and the Department of the Navy.

Episode 43

August 17, 2018

In breaking news, Andy and Dave discuss the Dota 2 competition between the OpenAI Five team of AIs and a top (99.95th percentile) human team, where the humans won one game in a series of three; the Pentagon signs an $885M AI contract with Booz Allen; MIT builds Cheetah 3, a “blind” robot that has no visual sensors but can climb stairs and maneuver in a space with obstacles; Tencent Machine Learning trains AlexNet in just 4 minutes on ImageNet (breaking the previous record of 11 minutes); researchers at MIT Media Lab have developed a machine-learning model to perceive human emotions; and the 2018 Conference on Uncertainty in AI (UAI) may have been held 7-10 August in Monterey, CA – we’re not certain (but what is certain is that Dave will never tire of these jokes). In other news, IBM Watson reportedly recommended cancer treatments that were “unsafe and incorrect,” and Amazon’s Rekognition software incorrectly identifies 28 lawmakers as crime suspects, about which Andy and Dave yet again highlight the dangerous gap in AI between expectations and reality. Lipton (CMU) and Steinhardt (Stanford) identify “troubling trends” in machine learning research and scientific scholarship. The Institute for Theoretical Physics in Zurich describes SciNet, a neural network that can discover physical concepts (such as the motion of a damped pendulum). A paper by Kott and Perconti makes an empirical assessment of forecasting military technology on the 20-30 year horizon, and finds the forecasts are surprisingly accurate (65-87%). “The Elements of Statistical Learning: Data Mining, Inference, and Prediction” is available online. Andy recommends the Ellison classic story, I Have No Mouth, and I Must Scream, and finally, a video by Percy Liang of Stanford discusses ways of evaluating machine learning for AI.

Episode 42

August 10, 2018

Continuing the discussion of recent topics, Andy and Dave discuss research from Johns Hopkins University, which used supervised machine learning to predict the toxicity of chemicals (the results of which beat animal tests). DeepMind probes toward general AI by exploring AI’s abstract reasoning capability; in their tests, they found that systems did OK (75% correct) when problems used the same abstract factors, but fared very poorly if the testing differed from the training set (even in minor variations, such as using dark-colored objects instead of light-colored objects) – in a sense, suggesting that deep neural nets cannot “understand” problems they have not been explicitly trained to solve. Research from Spyros Makridakis demonstrated that existing traditional statistical methods outperform a variety of popular machine-learning methods (better accuracy, lower computation requirements), suggesting the need for better benchmarks and standards when discussing the performance of machine learning methods. Andy and Dave then turn to two reports from the Center for a New American Security, on Technology Roulette and on Strategic Competition in an Era of AI, the latter of which highlights that the U.S. has not yet experienced a true “Sputnik moment.” Research from MIT, McGill, and the Masdar Institute defines and visualizes the skill sets required for various occupations, and how these contribute to a growing disparity between high- and low-wage occupations. The conference proceedings of ALIFE 2018 (nearly 700 pages) are available for the 23-27 July event. The Art of Future Warfare project features a collection of “war stories from the future,” and over 50 videos are available from the 2018 International Joint Conference on AI.

Episode 41

August 3, 2018

In breaking news, Andy and Dave discuss the "Future of Life" pledge that various AI tech leaders have signed, promising not to develop lethal autonomous weapons; DARPA announces its Artificial Intelligence Exploration (AIE) program, to provide "unique funding opportunities;" DARPA also announces a Short-Range Independent Microrobotic Platform (SHRIMP) program, which seeks to develop multi-functional tiny robots for use in natural and critical disaster scenarios; GoodAI announces the finalists in the "General AI Challenge," which produced a series of conceptual papers; and a report from the UK's Parliament examines the issues surrounding the government’s use of drones. Then, in deeper topics, Andy and Dave discuss various attempts to use AI to predict the FIFA World Cup 2018 champion (all of which failed), which includes a discussion on the appropriate types of questions to which AI is amenable, and also includes an obligatory Star Trek reference. Baidu announces ClariNet, which performs text-to-speech synthesis within one neural network (as opposed to multiple networks).

Episode 40

July 27, 2018

CNA’s expert on Russian AI and autonomous systems, Samuel Bendett, joins temporary host Larry Lewis (again filling in for Dave and Andy) to discuss Russia’s pursuit of the militarization of AI and autonomy. The Russian Ministry of Defense (MOD) has made no secret of its desire to achieve technological breakthroughs in IT and especially artificial intelligence, marshalling extensive resources for a more organized and streamlined approach to information technology R&D. The MOD is overseeing a significant public-private partnership effort, calling for its military and civilian sectors to work together on information technologies, while hosting high-profile events aimed at fostering dialogue between its uniformed and civilian technologists. For example, the Russian state corporation Russian Technologies (Rostec), with extensive ties to the nation’s military-industrial complex, has overseen the creation of a company with the ominous name Kryptonite. The company’s name – the one vulnerability of a superhero – was unlikely to have been picked by accident. Russia’s government is working hard to ensure that the Russian technology sector can compete with American, Western, and Asian hi-tech leaders. This technology race is only expected to accelerate, and Russian achievements merit close attention.

Episode 39

July 20, 2018

This week Andy and Dave take a respite from the world of AI. In the meantime, Larry Lewis hosts Shawn Steene from the Office of the Secretary of Defense. Shawn manages DOD Directive 3000.09 – US military policy on autonomous weapons – and is a member of the US delegation to the UN’s CCW meetings on Lethal Autonomous Weapon Systems (LAWS). Shawn and Larry discuss U.S. policy, what DOD Directive 3000.09 actually means, and how the future of AI could more closely resemble the android Data than SKYNET from the Terminator movies. That leads to a discussion of some common misconceptions about artificial intelligence and autonomy in military applications, and how these misconceptions can manifest themselves in the UN talks. With Data having single-handedly saved the day in the eighth and tenth Star Trek movies (First Contact and Nemesis, respectively), perhaps Star Trek should be required viewing for the next UN meeting in Geneva.

Episode 38

July 13, 2018

In the second part of this epic podcast, Andy and Dave continue their discussion with research from MIT, Vienna University of Technology, and Boston University, which uses human brainwaves and hand gestures to instantly correct robot mistakes. The research combines electroencephalogram (EEG, brain signals) and electromyogram (EMG, muscle signals) to allow a human (without training) to provide corrective input to a robot while it performs tasks. On a related topic, MIT’s Picower Institute for Learning and Memory demonstrated the rules for human brain plasticity, by showing that when one synapse connection strengthens, the immediately neighboring synapses weaken; while suspected for some time, this research showed for the first time how this balance works. Then, research from Stanford and Berkeley introduces Taskonomy, a system for disentangling task transfer learning. This structured approach maps out 25 different visual tasks to identify the conditions under which transfer learning works from one task to another; such a structure would allow data in some dimensions to compensate for the lack of data in other dimensions. Next up, OpenAI has developed an AI tool for spotting photoshopped photos, by examining three types of manipulation techniques (splicing, copy-move, and removal), along with local noise features. Researchers at Stanford have used machine learning to recreate the periodic table of elements after providing the system with a database of chemical formulae. And finally, Andy and Dave wrap up with a selection of papers and other media, including CNAS’s AI: What Every Policymaker Needs to Know; a beautifully done tutorial on machine learning; The Quest for Artificial Intelligence by Nilsson; Non Serviam by Lem; IPI’s Governing AI; the US Congressional Hearing on the Power of AI; and Twitch Plays Robotics.

Episode 37

July 6, 2018

In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity; MIT researchers unveil the Navion chip, which measures only 20 square millimeters, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail; the Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convenes a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launch the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity;” and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson that engaged in a live, public debate with humans on 18 June. IBM spent six years developing Project Debater’s capabilities; backed by over 30 technical papers and benchmark datasets, Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of 5 AI algorithms trained to take on a human team in the tower defense game Dota 2; Andy and Dave discuss the reasons for the impressive achievement, including that the 5 AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and beaten) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.

Episode 36

June 29, 2018

In breaking news, Andy and Dave discuss the recently unveiled Wolfram Neural Net Repository with 70 neural net models (as of the podcast recording) accessible in the Wolfram Language; Carnegie Mellon and STRUDEL announce the Code/Natural Language (CoNaLa) Challenge with a focus on Python; Amazon releases its DeepLens video camera that enables deep learning tools; and the Computer Vision and Pattern Recognition 2018 conference convenes in Salt Lake City. Then, Andy and Dave discuss DeepMind’s Generative Query Network, a framework where machines learn to turn 2D scenes into 3D views, using only their own sensors. MIT’s RF-Pose trains a deep neural net to “see” people through walls by measuring radio frequencies from WiFi devices. Research at the University of Bonn is attempting to train an AI to predict future results based on current observations (with the goal of “seeing” 5 minutes into the future), and a healthcare group at Google Brain has been developing an AI to predict when a patient will die, based on a swath of historical and current medical data. The University of Wyoming announced DeepCube, an “autodidactic iteration” method from McAleer that allows solving a Rubik’s Cube without human knowledge. And finally, Andy and Dave discuss a variety of books and videos, including The Next Step: Exponential Life, The Machine Stops, and a TED Talk from Max Tegmark on getting empowered, not overpowered, by AI.

Episode 35

June 22, 2018

In recent news, Andy and Dave discuss a recent Brookings report on the view of AI and robots based on internet search data; a Chatham House report on AI anticipates disruption; Microsoft computes the future with its vision and principles on AI; the first major AI patent filings from DeepMind are revealed; biomimicry returns, with IBM using "analog" synapses to improve neural net implementation, and Stanford researchers develop an artificial sensory nervous system; and Berkeley DeepDrive provides the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From Cognition Expo 18, work from Martinez-Plumed attempts to "Forecast AI," but largely highlights the challenges in making comparisons due to the neglected, or un-reported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data, and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a more sporty application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his playing. Finally, Andy recommends an NSF workshop report; the book Artificial Intelligence: Foundations of Computational Agents; Permutation City; and over 100 video hours of the CogX 2018 conference.

Episode 34

June 15, 2018

In breaking news, Andy and Dave discuss Google’s decision not to renew the contract for Project Maven, as well as its AI Principles; the Royal Australian Air Force holds its biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a Defense Conference on AI in cybersecurity; and Nvidia’s new Xavier chip packs $10k worth of power into a $1299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods: a “privacy filter” for photos that is designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to “learn” how a nanoparticle’s structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI Winter continues to percolate with a viral blog from Piekniewski, leading into a paper from Berkeley/MIT that discovers a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (bringing into doubt the robustness of these systems). Andy shares a possibly groundbreaking paper on “graph networks” that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and Swarm by Frank Schatzing.

Episode 33

June 8, 2018

Andy and Dave didn’t have time to do a short podcast this week, so they did a long one instead. In breaking news, they discuss the establishment of the Joint Artificial Intelligence Center (JAIC), yet another Tesla autopilot crash, Geurts defending the decision to dissolve the Navy’s Unmanned Systems Office, and Germany publishing a paper that describes its stance on autonomy in weapon systems. Then, Andy and Dave discuss DeepMind’s approach to using YouTube videos to train an AI to learn “hard exploration games” (with sparse rewards). In another “centaur” example, facial recognition experts perform best when combined with an AI. University of Manchester researchers announce a new footstep-recognition AI system, but Dave pulls a Linus and has a fit of “footstep awareness.” In other recent reports, Andy and Dave discuss another example of biomimicry, where researchers at ETH Zurich have modeled the schooling behavior of fish. And in brain-computer interface research, a noninvasive BCI system co-trained with tetraplegics to control avatars in a racing game. Finally, they round out the discussion with a mention of ZAC Inc. and its purported general AI, a book on How People and Machines are Smarter Together, and a video on deep reinforcement learning.

Episode 32

June 1, 2018

In breaking news, Andy and Dave discuss a few cracks that seem to be appearing in Google's Duplex demonstration; more examples of the breaking of Moore's Law; a Princeton effort to advance the dialogue on AI and ethics; India joins the global AI sabre-rattling; the UK Ministry of Defence launches an AI hub/lab; and the U.S. Navy dissolves its secretary-level unmanned systems office. Andy and Dave then discuss a demonstration of "zero-shot" learning, by which a robot learns to do a task by watching a human perform it once. The work reminds Andy of the early natural language "virtual block world" SHRDLU, from the 1970s. In other news, the research team that designed Libratus (a world-class poker-playing AI) announced they had developed a better AI that, more importantly, is also computationally orders of magnitude less expensive (using a 4-core CPU with 16 GB of memory). Next, researchers at Intel and the University of Illinois Urbana-Champaign have developed a convolutional neural net to significantly improve low-ISO image quality while shooting at faster shutter speeds; Andy and Dave both found the results for improving low-light images to be quite stunning. Finally, after yet another round of a generative adversarial example (in which Dave predicts the creation of a new field), Andy closes with some recommendations on papers, books, and videos, including Galatea 2.2 and The Space of Possible Minds.

Videos

Episode 31

May 25, 2018

In a review of the latest news, Andy and Dave discuss: the White House’s “plan” for AI, the departure of employees from Google due to Project Maven, another Tesla crash, the first AI degree for undergraduates at CMU, and Boston Dynamics’ jumping and climbing robots. Next, two AI research topics have implications for neuroscience. First, Andy and Dave discuss AI research at DeepMind, which showed that an AI trained to navigate between two points developed “grid cells,” very similar to those found in the mammalian brain. And second, another finding from DeepMind on “meta-learning” suggests that dopamine in the human brain may have a more integral role in meta-learning than previously thought. In another example of “AI-chemy,” Andy and Dave discuss the looming problem of (lack of) explainability in health care (with implications for many other areas, such as DoD), and they also discuss some recent research on adding an option for an AI to defer a decision with “I Don’t Know” (IDK). After a quick romp through the halls of AI-generated DOOM, the two discuss a recent proof that reveals the fundamental limits of scientific knowledge (so much for super-AIs). And finally, they close with a few media recommendations, including “The Book of Why: The New Science of Cause and Effect.”

Videos

Episode 30

May 18, 2018

In a review of the most recent news, Andy and Dave discuss the latest information on the fatal self-driving Uber accident, the AI community's (poor) reaction to Nature's announcement of a new closed-access section on machine learning, on-demand self-driving cars coming soon to North Dallas, and the Chinese government adding AI to the high school curriculum with a mandated textbook. For more in-depth topics, Andy and Dave discuss the latest information from DARPA's Lifelong Learning Machines (L2M) project, which has announced its initial teams and topics that seek to generate "paradigm-changing approaches" as opposed to incremental improvements. Next, they discuss an experiment from OpenAI that provides visibility into dialogue between two AIs on a topic, one of which is lying. This discussion segues into recent comparisons of the field of machine learning to the ancient art of alchemy. Dave avoids using the word "alcheneering," but thinks that "AI-chemy" might be worth considering. Finally, after a discussion on a couple of photography-related developments, they close with a discussion on some papers and videos of interest, including the splash of Google's new "Turing-test-beating" Duplex assistant for conducting natural conversations over the phone.

Episode 29

May 11, 2018

Andy and Dave discuss a couple of recent reports and events on AI, including the Sixth International Conference on Learning Representations (ICLR). Next, Edward Ott and fellow researchers have applied machine learning to replicate chaotic attractors, using "reservoir computing." Andy describes the reasons for his excitement in seeing how far out this technique is able to predict a 4th order nonlinear partial differential equation. Next, Andy and Dave discuss a few adversarial attack-related topics: a single-pixel attack for fooling deep neural network (DNN) image classifiers; an Adversarial Robustness Toolbox from IBM Research Ireland, which provides an open-source software library to help researchers defend DNNs against adversarial attacks; and the susceptibility of the medical field to fraudulent attacks. The BAYOU project takes another step toward giving AI the ability to program new methods for implementing tasks. And Uber Labs releases source code that can train a DNN to play Atari games in about 4 hours on a *single* 48-core modern desktop! Finally, after a review of a few books and videos, including Paul Scharre's new book "Army of None," Andy and Dave conclude with a discussion on potatoes.

Episode 28

May 4, 2018

This week, Andy, Larry, and Dave welcome Major General Mick Ryan, Commander of the Australian Defence College. Mick has recently published a report on Human-Machine Teaming for Future Ground Forces, in which he identifies key areas for human-machine teams, as well as challenges that military forces will have in incorporating these new capabilities. The group discusses some of these issues, and some of the broader challenges in both the near- and far-term.

Guest

Episode 27

April 27, 2018

Andy and Dave start this week's podcast with a review of some of the latest announcements: the latest meeting of the UN Convention on Certain Conventional Weapons, SecDef Mattis's announcement of a new joint program office for AI, a declaration of cooperation on AI by 25 European countries, and a UK Parliament report on AI. They then discuss the latest Center for the Study of the Drone report, which compares U.S. Dept of Defense drone spending for FY19 with FY18. The MIT-IBM Watson AI Lab has launched a "Moments in Time" dataset, the first steps toward building a large and robust set of short videos for action classification purposes. Google has increased the quality of its AI in picking voices out of a noisy room, by making use of additional information (here, video). And Google has introduced a way to "talk to books;" Andy and Dave were a bit underwhelmed, but check it out and judge for yourself. Finally, Andy and Dave close with a selection of whimsical comments from the news, and a selection of videos.

Video

Episode 26

April 20, 2018

Anna Williams joins Dave in welcoming CAPT Sharif Calfee for a two-part discussion on unmanned systems and artificial intelligence. As part of his fellowship research, CAPT Calfee has been speaking with organizations and subject matter experts across the U.S. Navy, the U.S. Government, Federally Funded Research and Development Centers, University Affiliated Research Centers, and Industry, in order to understand the broader efforts involving unmanned systems, autonomy, and artificial intelligence. In the first part of their discussion, the group discusses the progress and the challenges that the CAPT has observed in his engagements. In the second part, the group discusses various steps that the U.S. Navy can take to move forward more deliberately, to include the consideration for a new Naval Reactors-like office to oversee AI.

Episode 25

April 13, 2018

Andy and Dave cover a wide variety of topics this week, starting with two prominent examples of employees and researchers objecting to certain uses of AI technology. Andy and Dave then discuss a recent GAO report on AI, as well as France’s announcement to invest in AI. They also discuss AI in designing chemical synthesis pathways, AI in reading echocardiograms, meta-learning (learning how to learn in unsupervised learning), helping robots express themselves when they fail, and a collection of papers, graphic novels, and videos. By the end, Dave’s arms are flailing wildly!

Episode 24

April 6, 2018

Dave starts with a shocking revelation! Can you pass the test?? Andy and Dave then discuss MIT Tech Review’s EmTech Digital Conference, which highlighted the latest in AI research. Next, Andy and Dave discuss the rapid expansion of newly reported AI models, including the “GAN Zoo.” Venture capital funding in the U.S. suggests that the AI market may be cooling. Andy describes new insight into brain function that will likely lead to further AI breakthroughs. And after a discussion of an AI playing Battlefield 1, Andy and Dave close with a look at AIs learning in electric dreams, and a GAN that can lip sync a face to an audio-video clip.

Audio

Videos

We Are Here To Create (40 min) A Conversation with Kai-Fu Lee, author of forthcoming book AI Superpowers: China, Silicon Valley, and the New World Order

Episode 23

March 30, 2018

With the news of the first death at the digital hands of a driverless vehicle, Andy and Dave discuss some of the broader issues surrounding the understanding and implementation of AI technology. In other news, they discuss the creation of a digital version of yeast (DCell) as a way to provide insight into the otherwise “black box” of AI. Then, after describing DeepMind’s efforts into using evolutionary Auto Machine Learning to discover neural network architectures, Andy and Dave discuss an example of how background knowledge (“priors”) transfers to the world of games, and how that compares with AI.

Videos

Episode 22

March 23, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, again sits in for Dave this week. He and Andy discuss: the recent passing of physicist Stephen Hawking (along with his "cautionary" views on AI); CNAS’s recent launch of a new Task Force on AI and National Security, Microsoft’s AI breakthrough in matching human performance translating news from Chinese to English; a report that looks at China’s "AI Dream" (and introduces an "AI Potential Index" to assess China’s AI capabilities compared to other nations); a second index, from a separate report, called the "Government AI Readiness Index," which inexplicably excludes China from the top 35 ranked nations; and the issue of legal liability of AI systems. They conclude with call outs to a fun-to-read crowd-sourced paper written by researchers in artificial life, evolutionary computation, and AI that tells stories about the surprising creativity of digital evolution, and three videos: a free BBC-produced documentary on Stephen Hawking, a technical talk on deep learning, and a Q&A session with Elon Musk (that includes an exchange on AI).

Videos

Episode 21

March 16, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, sits in for Dave this week, as he and Andy discuss: a recent report that not all Google employees are happy with Google’s partnership with DoD (in developing a drone-footage-analyzing AI); research efforts designed to lift the lid, just a bit, on the so-called “black box” reasoning of neural-net-based AIs; some novel ways of getting robots/AIs to teach themselves; and an arcade-playing AI that has essentially “discovered” that if you can’t win at the game, it is best to either kill yourself or cheat. The podcast ends with a nod to a new free online AI resource offered by Google, another open access book (this time on the subject of Robotics), and a fascinating video of Stephen Wolfram of Mathematica fame, lecturing about artificial general intelligence and the “computational universe” to a computer science class at MIT.

Book

Videos

Episode 20

March 9, 2018

Andy and Dave discuss a recently released report on the Malicious Use of AI: Forecasting, Prevention, and Mitigation, which describes scenarios where AI might have devious applications (hint: there’s a lot). They also discuss a recent report that describes the extent of missing data in AI studies, which makes it difficult to reproduce published results. Andy then describes research that looks into ways to alter information (in this case, classification of an image) to fool both AI and humans. Dave has to repeat the research in order to understand the sheer depth of the terror that could be lurking below. Then Andy and Dave quickly discuss a new algorithm that can mimic any voice with just a few snippets of audio. The only non-terrifying topic they discuss involves an attempt to make Alexa more chatty. Even then, Dave decides that this effort will only result in a more-empty wallet.

Episode 19

March 2, 2018

Andy and Dave welcome Sam Bendett, a research analyst for CNA's Center for Strategic Studies, where he is a member of the Russia Studies Program. His work involves Russian defense and security technology and developments, Russian geopolitical influence in the former Soviet states, as well as Russian unmanned systems development, Russian naval capabilities and Russian decision-making calculus during military crises. Sam is in our studio to discuss recent Russian developments in AI and unmanned systems, and to preview an upcoming Defense One summit called "Genius Machines," which he will be speaking at on March 7.

Upcoming Event

Episode 18

Feb 23, 2018

In another smattering of topics, Andy and Dave discuss the latest insight into the dispersion of global AI start-ups, as well as AI talent. They also describe a commercially available drone that can navigate landscapes and obstacles as it tracks a target. And they discuss an AI algorithm with “social skills” that can teach humans how to collaborate. After chat bots and Deep TAMER, Andy and Dave discuss a few recent videos, including one about door-opening dogs; and, Dave has a meltdown as he fails to recall The Day the Earth Stood Still, but instead substitutes a different celestial body. Klaatu barada nikto.

Episode 17

Feb 16, 2018

Andy and Dave start this week’s episode with a superconducting ‘synapse’ that could enable powerful future neuromorphic supercomputers. They discuss an attempt to use AI to decode the mysterious Voynich manuscript, and then move on to Hofstadter’s take on the shallowness of Google Translate (with mention of the ELIZA effect). After discussing DroNet’s drones that can learn to fly by watching a driving video, and updating the Domain-Adaptive Meta-Learning discussion where a robot can learn a task by watching a video, they close with some recommendations of videos and books, including Lem’s ‘Golem XIV.’

Video

Episode 16a & 16b

Feb 9, 2018

Andy and Dave welcome back Larry Lewis, the Director for CNA's Center for Autonomy and Artificial Intelligence, and welcome Merel Ekelhof, a Ph.D. candidate at VU University Amsterdam and visiting scholar at Harvard Law School. Over the course of this two-part series, the group discusses the idea of "meaningful human control" in the context of the military targeting process, the increasing role of autonomous technologies (and that autonomy is not simply an issue "at the boom"), and the potential directions for future meetings of the U.N. Convention on Certain Conventional Weapons.

Episode 15

Feb 2, 2018

Andy and Dave discuss two recent AI announcements that employ generative adversarial networks: an AI algorithm that can crack classic encryption ciphers (without prior knowledge of English), and an AI algorithm that can "draw" (generate) an image based on simple text instructions. They start, however, with a discussion on the recent rash of autonomous (and semi-autonomous) vehicle incidents, and they also discuss "brain-on-a-chip" hardware, as well as a robot that can learn to do tasks by watching video.

Episode 14

Jan 26, 2018

Andy and Dave cover a series of topics that connect with the broader "meta" questions about the role and nature of AI. They begin with Google's Cloud AutoML announcement, which offers ways to more easily build your own AI. They discuss the announcement of AIs that "defeated" humans on a Stanford University reading comprehension test, and the misrepresentation of that achievement. They discuss deep image reconstruction, with a neural net that "read minds" by piecing together images from a human's visual cortex. And they close with discussions about Gary Marcus's recent article, which offers a critical appraisal of Deep Learning, and a recent paper that suggests that convolutional neural nets may not be as good at "grasping" higher-level abstract concepts as is typically believed.

Video

Episode 13

Jan 19, 2018

Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.

The documentary about Google DeepMind's 'AlphaGo' algorithm is now available on Netflix

Episode 12

Jan 12, 2018

Andy and Dave discuss “Tacotron 2,” the latest text-to-speech capability from Google that produces results nearly indistinguishable from human speech. They also discuss efforts at Google to create a Neural Image Assessment (NIMA), that not only can evaluate the quality of an image, but can also be trained to rate the aesthetics (as defined by the user) of an image. And after a look at some of the AI predictions for 2018, they play a musical game with two pieces of music – can Andy guess which piece Dave wrote, and which the AI composer AIVA, the Artificial Intelligence Virtual Artist, wrote?

Books

Episode 11

Jan 5, 2018

It’s a smorgasbord of topics, as Andy and Dave discuss: the “AI 100” top companies report; the implications of Google’s new AI Research Center in Beijing; a workshop from the National Academy of Science and the Intelligence Community Studies Board on the challenges of machine generation of analytic products from multi-source data; Ethically Aligned Design and the IEEE; Quantum Computing; and finally, some Kasparov-related materials.

Related: IBM announces 50-qubit quantum computer on 10 Nov; caveat (as for all state-of-the-art quantum computers): the quantum state is preserved for 90 microseconds, a record for the industry, but still an extremely short period of time. IBM Raises the Bar with a 50-Qubit Quantum Computer

Book/Video

Episode 10

Dec 29, 2017

Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.

Episode 9

Dec 22, 2017

After some brief speculation on the announcement from NASA (which was being held at the same time as this podcast was recorded), and a quick review of AlphaGo Teach, Andy and Dave discuss the 31st Annual Conference on Neural Information Processing Systems (NIPS). With over 8,000 attendees, 7 invited speakers, seminar and poster sessions, NIPS provides insight into the latest and greatest developments in deep learning, neural nets, and related fields.

Episode 8

Dec 15, 2017

Andy and Dave discuss how DeepMind's AI continues to bust through the record books while AlphaZero takes one step closer to world domination (of all board games). After a brief discussion on protein folding, they discuss the "AI Index," which seeks to measure the evolution and advances in AI over time.

Episode 7

Dec 8, 2017

Andy and Dave discuss a market analysis report that identifies where the Department of Defense is spending money in artificial intelligence, big data, and the cloud. They also elaborate on the challenge of "catastrophic forgetting," and a 4-year program at DARPA that seeks to develop "Lifelong Learning Machines," which can continuously apply the results of past experiences. After a conversation about SquishedNets, they cover a Harvard research paper that asserts the need for AI to have explanatory capabilities and accountability.

Video

Episode 6a & 6b

Nov 24, 2017

Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, the group's schedule and discussions, the mood and reaction of various parts of the group, and what the next steps might be.

Topics

November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries)

22 countries now support a prohibition with Brazil, Iraq and Uganda joining the list of ban endorsers during the GGE meeting. Cuba, Egypt, Pakistan and other states that support the call to ban fully autonomous weapons also forcefully reiterated the urgent need for a prohibition.

States will take a final decision on the CCW’s future on this challenge, including 2018 meeting duration/dates, at the CCW’s annual meeting on Friday, 24 November.

“The vast majority of CCW high contracting parties participating in this meeting do want concrete action. The majority of those want a legally binding instrument, while others prefer—at least for now—a political declaration or other voluntary arrangements. However, China, Japan, Latvia, Republic of Korea, Russia, and the United States made it clear that they do not want to consider tangible outcomes at this time.”

Video

Related: In Aug 2017, Elon Musk led 116 AI experts in an open letter calling for a ban on killer robots. Read.

Episode 5

Nov 17, 2017

Andy and Dave discuss the recent Geneva meeting of the Convention on Conventional Weapons, which convened to lay the groundwork for discussing the role of lethal autonomous weapons. They also discuss a new technique, called Capsule Networks, that aims to improve recognition of an object despite changes in its spatial orientation. Andy and Dave conclude with a discussion of why fruit flies are so awesome.

Video

Episode 4

Nov 10, 2017

Andy and Dave discuss MIT efforts to create a tool to train AIs, in this case, using another AI to provide the training. They discuss efforts to crack the "cocktail party" dilemma of picking out individual voices in a noisy room, as well as an AI that can "upres" photographs with remarkable use of texture (that is, taking a lower resolution photo and making it larger in a realistic way). Finally, they discuss the latest MIT Tech Review magazine, which focused on AI.

Magazine

Video

Episode 3

Nov 10, 2017

Andy and Dave follow up on the discussion of AlphaGo Zero and the never-before-seen patterns of play that the AI discovered, and the implications of such discoveries (which seem to be the "norm" for AI). They also discuss Google's AutoML project, which applies machine learning to help improve machine learning.

Video

Episode 2

Nov 3, 2017

Andy and Dave discuss the late-breaking news of AlphaGo Zero, a new iteration of the Go playing AI, which surpassed its predecessor AI in about 3 days of learning, using only the basic rules of Go (as opposed to the 6+ months of the original, using thousands of games as examples).

Topics

AlphaGo Zero beats AlphaGo 100-0 after 3 days of training (compared to several months for the original AlphaGo) and without any human intervention/human-game-playing-data! Read: Technology Review and Nature

Video

Episode 1

Nov 3, 2017

In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.

"The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts." - analogy to renormalization (as used in statistical physics), may lead to better understanding and new architectures

An AI developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records performed well, at first, but it was discovered that the AI "learned" to associate patients with confirmed cases with a specific clinic to which they were sent.