Tsinghua AI Summit at Sanya

2019-03-22 ~ 2019-03-24

General Information

Welcome to Sanya Summit

The Tsinghua AI Summit at Sanya is intended to provide a major international summit for exchanging novel research ideas and significant practical results in Artificial Intelligence. After more than 60 years of development and accumulation, artificial intelligence technology has exerted a growing influence, changing the way people live and work. Meanwhile, the development of artificial intelligence also faces new challenges. The summit will target not only new progress in AI research and development, but also the mathematical theories of artificial intelligence and big data.

The first Tsinghua AI Summit will be held from March 22 to March 24 in Sanya, a beautiful city in Hainan Province, China.

Keynote Speakers

Leonidas Guibas
Paul Pigott Professor of Computer Science, Stanford University

Shape Differences and Variability

Abstract: The world of both natural and man-made objects provides us with an abundance of geometric forms obtained through evolution or design. The shapes of humans, animals, vehicles, furniture, clothes, and organs have been forged by a myriad of forces or considerations related to their functionality, structure, aesthetics, tradition, or materials. Computational disciplines that deal with models of 3D geometry, including computer graphics, computer vision, and robotics, have studied shape representations and used them to develop various notions of shape similarity, as needed by particular applications such as shape search. So far, though, relatively little effort has been spent on trying to quantify differences between the shapes of 3D forms, beyond various notions of distance or "dissimilarity". This makes it difficult to navigate the shape space around a given shape -- as, for example, when one wants to refine a search in specific ways (e.g., "shoes like these, but with higher heels"). It is especially interesting to try to develop low-dimensional parametrizations of shape differences and variability that reflect the underlying semantics of the shapes.

This talk will present some mathematical and algorithmic efforts in this direction, trying to address the many challenges that arise from the vast diversity of ways shapes can be compared, in both their geometry and their structure. For example, shape differences can be continuous (e.g., changes in size) or discrete (e.g., addition/removal of parts). In fact, beyond comparing just two shapes, one may want to understand variability across an entire shape collection, or to compare two collections. It can be important to separate different kinds of variability, since some may be a nuisance while others matter (e.g., normal human organ variability vs. various pathologies). Furthermore, variability of form is interesting not only across shapes but even within a single shape, to capture articulations or deformations, often related to shape function. Finally, the correlation between geometry and language when it comes to describing variability is an interesting object of study in its own right. The talk, through small vignettes, will aim to shed some light on these topics.

Short Bio:

Leonidas Guibas is the Paul Pigott Professor of Computer Science (and by courtesy, Electrical Engineering) at Stanford University, where he heads the Geometric Computation group. Prof. Guibas obtained his Ph.D. from Stanford University under the supervision of Donald Knuth. His main subsequent employers were Xerox PARC, DEC/SRC, MIT, and Stanford. He is a member and past acting director of the Stanford Artificial Intelligence Laboratory and a member of the Computer Graphics Laboratory, the Institute for Computational and Mathematical Engineering (iCME) and the Bio-X program. Professor Guibas has been elected to the US National Academy of Engineering and the American Academy of Arts and Sciences, and is an ACM Fellow, an IEEE Fellow and winner of the ACM Allen Newell award and the ICCV Helmholtz prize.

Hsiao-Wuen Hon
Corporate Vice President of Microsoft; Managing Director of Microsoft Research Asia

A Brief History of Intelligence

Abstract: Intelligence is the deciding factor in how human beings became the dominant life form on Earth. Throughout history, human beings have developed tools and technologies that help civilizations evolve and grow. Computers, and by extension artificial intelligence (AI), have played important roles in that continuum of technologies. Recently, artificial intelligence has garnered much interest and discussion. As AI systems are tools that can enhance human capability, a sound understanding of what the technology can and cannot do is necessary to ensure their appropriate use. While developing artificial intelligence, we have also found that the definition and understanding of our own human intelligence continue to evolve, and debate over the race between human and artificial intelligence keeps growing. In this talk, I will describe the history of both artificial intelligence and human intelligence (HI). Drawing on these historical perspectives, I will illustrate how AI and HI will co-evolve with each other, and project the future of both.

Short Bio:

Dr. Hsiao-Wuen Hon is corporate vice president of Microsoft, chairman of Microsoft's Asia-Pacific R&D Group, and managing director of Microsoft Research Asia. He drives Microsoft's strategy for research and development activities in the Asia-Pacific region, as well as collaborations with academia.

Dr. Hon received a PhD in Computer Science from Carnegie Mellon University. He has been with Microsoft since 1995. He joined Microsoft Research Asia in 2004 as deputy managing director, stepping into the role of managing director in 2007. An IEEE Fellow and a distinguished scientist of Microsoft, Dr. Hon is an internationally recognized expert in speech technology who has published more than 100 technical papers in international journals and at conferences. He co-authored Spoken Language Processing, a graduate-level textbook and reference in speech technology used at universities around the world.

Eric P. Xing
Professor of Computer Science, Carnegie Mellon University

SysML: On System and Algorithm Co-design for Practical Machine Learning

ABSTRACT: The rise of Big Data and AI computing has created new demands for machine learning systems that learn complex models with millions to billions of parameters, promising adequate capacity to digest massive datasets and offer powerful, real-time predictive analytics. In this talk, I discuss a recent trend toward building new distributed frameworks for AI at massive scale known as "system and ML algorithm co-design", or SysML -- system designs are tailored to the unique properties of ML algorithms, and algorithms are re-designed to better fit into the system architecture. I show how one can exploit the statistical and algorithmic characteristics that are unique to ML programs, but not typical of traditional computer programs, in designing system architectures that achieve significant, universal, and theoretically sound speed-ups of ML programs across the board. I also present a brief introduction to the Petuum system, which is built on such interdisciplinary innovations and intends to dramatically improve the adoption of AI solutions by lowering the barrier of entry to AI technologies via automatic machine learning. I show how, through automatable, product-grade, hardware-agnostic, standardized building blocks that can be assembled and customized, AI users can free themselves from the demanding work of algorithm programming and system tuning, and easily experiment with different AI methods, parameters, and speed/resource trade-offs, either by themselves or automatically.
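One canonical instance of such co-design is exploiting the tolerance of iterative ML algorithms to slightly stale parameters, so the system can relax synchronization. The sketch below is purely illustrative (a single-process simulation, not Petuum's implementation; the staleness bound, data, and all names are invented for the example): workers train a shared linear-regression model, refreshing their cached parameter copies only when a staleness bound is reached.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear-regression data, sharded across simulated "workers".
d, n_workers = 4, 3
w_true = rng.normal(size=d)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(100, d))
    shards.append((X, X @ w_true + 0.01 * rng.normal(size=100)))

STALENESS, LR, STEPS = 3, 0.02, 300
w_server = np.zeros(d)                      # shared parameter-server state
cached = [w_server.copy() for _ in range(n_workers)]
age = [0] * n_workers                       # staleness of each worker's copy

for t in range(STEPS):
    for k, (X, y) in enumerate(shards):
        # A worker refreshes its local copy only when the staleness bound
        # is reached: the system saves synchronization cost because the
        # algorithm tolerates slightly stale parameters.
        if age[k] >= STALENESS:
            cached[k], age[k] = w_server.copy(), 0
        i = rng.integers(len(y), size=10)   # minibatch indices
        grad = X[i].T @ (X[i] @ cached[k] - y[i]) / 10
        w_server -= LR * grad               # asynchronous server update
        age[k] += 1

err = float(np.linalg.norm(w_server - w_true))
print(f"parameter error after stale-synchronous training: {err:.3f}")
```

Despite every gradient being computed on a stale copy of the parameters, the model still converges close to the ground truth, which is the statistical property the system design exploits.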

To put this in a broader context, recent discussions about AI in both the research community and the general public have championed a novelistic view of AI -- that AI can mimic, surpass, threaten, or even destroy mankind. Such discussions are fueled mainly by recent advances in deep learning experimentation and applications, which are often plagued by ad hoc craftsmanship, poor interpretability, and poor generalizability. I will discuss a different view of AI as a rigorous engineering discipline and as a commodity, where standardization, modularity, repeatability, reusability, and transparency are commonly expected, just as in civil engineering, where builders apply principles and techniques from all the sciences to build reliable constructions. I will discuss how such a view sets a different focus, approach, metric, and expectation for AI research and engineering, which we have practiced in our SysML work.

Short Bio:

Eric P. Xing is a Professor of Computer Science at Carnegie Mellon University, and the Founder, CEO, and Chief Scientist of Petuum Inc., a 2018 World Economic Forum Technology Pioneer company that builds a standardized artificial intelligence development platform and operating system for broad industrial AI applications. He completed his undergraduate study at Tsinghua University, and holds a PhD in Molecular Biology and Biochemistry from Rutgers, the State University of New Jersey, and a PhD in Computer Science from the University of California, Berkeley. His main research interests are the development of machine learning and statistical methodology, and large-scale computational systems and architectures, for solving problems involving automated learning, reasoning, and decision-making in high-dimensional, multimodal, and dynamic possible worlds in artificial, biological, and social systems.

Prof. Xing currently serves or has served in the following roles: associate editor of the Journal of the American Statistical Association (JASA), the Annals of Applied Statistics (AOAS), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), and PLOS Computational Biology; action editor of the Machine Learning Journal (MLJ) and the Journal of Machine Learning Research (JMLR); and member of the United States Department of Defense Advanced Research Projects Agency (DARPA) Information Science and Technology (ISAT) advisory group. He is a recipient of the National Science Foundation (NSF) CAREER Award, the Alfred P. Sloan Research Fellowship in Computer Science, the United States Air Force Office of Scientific Research Young Investigator Award, and the IBM Open Collaborative Research Faculty Award, as well as several best paper awards. Prof. Xing is a board member of the International Machine Learning Society; he has served as Program Chair (2014) and General Chair (2019) of the International Conference on Machine Learning (ICML); he is also the Associate Department Head of the Machine Learning Department and founding director of the Center for Machine Learning and Health at Carnegie Mellon University.

Songchun Zhu
Professor of Statistics and Computer Science, UCLA

Explainable AI: How Machines Gain Justified Trust from Humans

Abstract: Recent progress in computer vision, machine learning, and AI in general has produced machines for a broad range of applications; however, some key underlying representations, especially neural networks, remain opaque black boxes. This has generated renewed interest in studying representations and algorithms that are interpretable, and in developing systems that can explain their behaviors and decisions to human users. In this talk, I will introduce our work on explainable AI: how machines gain justified trust from humans. The objective is to let human users understand how an AI system works, and when and why it will succeed or fail, so that human and machine can collaborate more effectively on various tasks. We propose a framework called X-ToM: Explanation with Theory of Minds, which poses explanation as an iterative dialogue process between human and AI system. In this process, human and machine learn each other's mental representations to establish better understanding. Our human-subject experiments show that X-ToM gains justified trust and reliance from users over time.

Short Bio:

Songchun Zhu received his Ph.D. degree from Harvard University. He is currently a professor of Statistics and Computer Science at UCLA, where he heads the Center for Vision, Cognition, Learning, and Autonomy (VCLA). His research interests include vision, statistical modeling, learning, cognition, situated dialogues, robot autonomy, and AI. He serves on the editorial board of IJCV, and was an associate editor of IEEE TPAMI and general chair of CVPR 2012 and CVPR 2019. He has received a number of honors, including the Helmholtz Test-of-Time Award at ICCV 2013, the Aggarwal Prize from the IAPR in 2008, the David Marr Prize in 2003 (with Z. Tu et al.) for image parsing, and two Marr Prize honorable mentions (with Y. Wu et al.), in 1999 for texture modeling and in 2007 for object modeling. He received the Sloan Fellowship, a US NSF CAREER Award, and a US ONR Young Investigator Award, all in 2001. He has been an IEEE Fellow since 2011.

Zhengyou Zhang
Director of Tencent AI Lab and Tencent Robotics X Lab

Neural Network: Deep Supervision and Sensitivity Analysis

Abstract: In this talk, we describe two pieces of work: deep supervision for training a neural network, and sensitivity analysis for using a neural network. Our proposed Deeply-Supervised Nets (DSN) aim to boost classification performance through a new formulation in deep networks. Our method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent. We introduce a "companion objective" at the individual hidden layers, in addition to the overall objective at the output layer (a strategy distinct from layer-wise pre-training). The advantage of our method is evident, and our experimental results on benchmark datasets show significant performance gains over existing methods. We also conduct sensitivity analysis of a trained network to study how important a feature or a scale is to the task at hand. This is done by accumulating the gradients of the objective function with respect to the feature of interest via the chain rule. The sensitivity analysis reveals many insights and has been applied to facial expression recognition.
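The two ideas above can be illustrated with a minimal NumPy sketch (this is an invented toy example, not the authors' DSN implementation; the architecture, data, and hyperparameters are all illustrative). A companion cross-entropy objective is attached to the hidden layer alongside the output objective, and sensitivity analysis then accumulates the gradient of the loss with respect to each input feature via the chain rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy 2-class data: only feature 0 is informative.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
onehot = np.eye(2)[y]

D, H, C = 5, 8, 2
W1 = rng.normal(scale=0.5, size=(D, H))
W2 = rng.normal(scale=0.5, size=(H, C))   # classifier at the output layer
Wc = rng.normal(scale=0.5, size=(H, C))   # companion classifier on the hidden layer
lam, lr = 0.3, 0.5

for step in range(400):
    h = np.tanh(X @ W1)
    g_out = (softmax(h @ W2) - onehot) / len(y)   # grad of mean CE at output
    g_cmp = (softmax(h @ Wc) - onehot) / len(y)   # grad of companion objective
    # The hidden layer receives direct supervision from both objectives.
    gh = g_out @ W2.T + lam * (g_cmp @ Wc.T)
    W2 -= lr * (h.T @ g_out)
    Wc -= lr * (h.T @ g_cmp)
    W1 -= lr * (X.T @ (gh * (1 - h**2)))          # tanh' = 1 - tanh^2

h = np.tanh(X @ W1)
acc = float((softmax(h @ W2).argmax(1) == y).mean())

# Sensitivity analysis: accumulate d(loss)/d(input) via the chain rule,
# averaged over the data; the informative feature 0 should dominate.
g_out = (softmax(h @ W2) - onehot) / len(y)
gX = ((g_out @ W2.T) * (1 - h**2)) @ W1.T
sensitivity = np.abs(gX).mean(axis=0)
print(f"train accuracy={acc:.2f}, most sensitive feature={int(sensitivity.argmax())}")
```

In this toy setting the hidden layer is trained against both objectives at once, and the accumulated input gradients correctly single out the one feature the labels depend on.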

Short Bio:

Zhengyou Zhang has been the Director of Tencent AI Lab and Tencent Robotics X Lab since March 2018. He is an ACM Fellow and an IEEE Fellow. He was a Partner Research Manager with Microsoft Research, Redmond, WA, USA, for 20 years. Before joining Microsoft Research in March 1998, he was a Senior Research Scientist with INRIA (the French National Institute for Research in Computer Science and Control) for 11 years. In 1996-1997, he spent a one-year sabbatical as an Invited Researcher at the Advanced Telecommunications Research Institute International (ATR) in Kyoto, Japan. He received the IEEE Helmholtz Test of Time Award in 2013 for his work, published in 1999, on camera calibration, now known as Zhang's method.

Besides the keynote speakers, the Sanya AI Summit invited the following distinguished scientists to give invited talks in sessions on machine learning, computer vision, multimedia computing, and AI applications, as well as in the workshop on machine learning in graphics and media computing.