We propose characterizing blockchain/distributed ledger (DL) systems by three simultaneous requirements: organizational and technical decentralization; tamper-proof recording of events, including their evidence; and guaranteed resource (= asset) preservation. This encompasses a wide design space for blockchain/DL systems, including but not limited to current permissioned and nonpermissioned systems. Including evidence recording (whether provided by IoT devices or human agents) extends blockchain/DL systems to serving as digital twins for physical processes and to managing contracts involving real-world resources, including the tracking and tracing of goods and the reverse flows of payments. Resources can be of arbitrarily many types; their guaranteed preservation is decoupled from credit-limit enforcement, which generalizes the usual "no-double-spend" property to admit approved overspending. Enforcing credit limits turns out to be the only problem requiring event-order consensus across multiple (smart) contracts. This naturally gives rise to a highly parallel, lightweight architecture for permissioned blockchain/DL systems in which almost all interactions between contract parties are private ('off-chain'), resource transfers may involve both blockchain/DL systems and ordinary databases, and high throughputs are achievable by exploiting the fact that most transactions naturally commute: they tend to have nothing to do with each other, so their order is irrelevant.
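As a toy illustration of the commutativity point (our own Python sketch, not the speaker's system; the account names and the zero credit limit are invented), unchecked resource transfers preserve the total by construction and commute with each other, while credit-limit enforcement is precisely the operation for which order becomes observable:

```python
# Illustrative sketch: resource transfers preserve the total and commute;
# credit-limit checks ("no overdraft") are the order-sensitive part.
from typing import Dict

Ledger = Dict[str, int]

def transfer(ledger: Ledger, src: str, dst: str, amount: int) -> Ledger:
    """Unchecked transfer: preserves the total, and any two transfers commute."""
    new = dict(ledger)
    new[src] -= amount
    new[dst] += amount
    return new

def transfer_checked(ledger: Ledger, src: str, dst: str, amount: int) -> Ledger:
    """Transfer with a credit limit of 0: rejects overdrafts, so order matters."""
    if ledger[src] < amount:
        raise ValueError(f"{src} would exceed the credit limit")
    return transfer(ledger, src, dst, amount)

ledger = {"alice": 10, "bob": 5, "dave": 0}

# Unchecked transfers commute: both orders yield the same ledger.
t1, t2 = ("alice", "bob", 3), ("bob", "dave", 2)
assert transfer(transfer(ledger, *t1), *t2) == transfer(transfer(ledger, *t2), *t1)
# The total resource amount is preserved by every transfer.
assert sum(transfer(ledger, *t1).values()) == sum(ledger.values())

# With credit limits the order is observable: dave can only pass on funds
# he has already received, so one order succeeds and the other fails.
ok = transfer_checked(transfer_checked(ledger, "alice", "dave", 2), "dave", "bob", 1)
try:
    transfer_checked(transfer_checked(ledger, "dave", "bob", 1), "alice", "dave", 2)
    failed = False
except ValueError:
    failed = True
assert failed
```

This is why, in the abstract's terms, only credit-limit enforcement requires event-order consensus, and everything else can run in parallel.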

Bio:

Fritz Henglein's research interests are in semantic, logical and algorithmic aspects of programming languages, specifically type inference, type-based program analysis, algorithmic functional programming and domain-specific languages, and the application of programming language technology, most recently in distributed ledger technology, high-performance stream processing, data-parallel programming, e-health, and enterprise systems. He is presently Professor of programming languages and systems at DIKU, the Department of Computer Science at the University of Copenhagen (UCPH), and Chief Science Officer at Deon Digital AG (Zürich and Copenhagen), with previous positions at Rutgers University (where he received his Ph.D. in computer science), IBM Research, New York University, Utrecht University, Hafnium ApS, the IT University of Copenhagen and, as guest professor, at the University of New South Wales, Cornell University and Kellogg College at Oxford University. He heads the Decentralized Systems and the Functional Technology for High-performance Architectures research groups at DIKU. His professional activities include being an editor of the Journal of Functional Programming, a member of IFIP Working Groups 2.1 and 2.8, initiator of topical SIGPLAN workshop series (SPACE, FHPC), local/general chair of various conferences including FPCA/PEPM '93, ICFP 2013, and POPL 1997 and 2019, and PC member of a cross section of PL conferences including POPL, PLDI and ICFP.

Title: Dynamic Typing Reloaded: A View of Gradual Typing from the 90s

Abstract:
The Dynamic Typing Calculus (DTC) is a conservative extension of the Simply Typed Lambda Calculus with tagging and untagging coercions, introduced in the early 1990s and based on Scott's embedding of the (untyped) lambda calculus into the simply typed lambda calculus via embedding/projection pairs. It was motivated by and used for compile-time elimination of tagged-data overhead and run-time checks in languages such as Scheme, including fast polymorphic type and coercion inference algorithms. We review central concepts, techniques and theorems from DTC and view them from a gradual typing point of view. We investigate how they relate to blame assignment, gradual guarantees, and parametric polymorphism. This is ongoing (and unfinished) work. It is part of research on developing a purely semantic basis for contracts and deriving notions such as instrumented execution with blame labels, rather than taking those as the definitive point of departure.
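To make the coercion vocabulary concrete, here is a toy sketch (our own, not DTC's actual formalism) of tagging/untagging coercions as embedding/projection pairs on a universal "dynamic" type:

```python
# Toy illustration: tagging coercions embed typed values into a universal
# dynamic type; untagging coercions project them back, checking the run-time
# tag. Coercion inference eliminates redundant pairs like untag(tag(x)).
class Dyn:
    """A universal value: a run-time tag paired with the underlying payload."""
    def __init__(self, tag: str, payload):
        self.tag, self.payload = tag, payload

def tag_int(n: int) -> Dyn:          # embedding: Int -> Dyn
    return Dyn("int", n)

def untag_int(d: Dyn) -> int:        # projection: Dyn -> Int, may fail
    if d.tag != "int":
        raise TypeError(f"expected int, got {d.tag}")
    return d.payload

def tag_fun(f) -> Dyn:               # embedding: (Dyn -> Dyn) -> Dyn
    return Dyn("fun", f)

def untag_fun(d: Dyn):               # projection: Dyn -> (Dyn -> Dyn), may fail
    if d.tag != "fun":
        raise TypeError(f"expected fun, got {d.tag}")
    return d.payload

# "Untyped" application checks tags at every step ...
succ = tag_fun(lambda d: tag_int(untag_int(d) + 1))
assert untag_int(untag_fun(succ)(tag_int(41))) == 42
# ... whereas untag_int(tag_int(x)) is the identity: exactly the kind of
# coercion pair that compile-time coercion inference cancels out.
assert untag_int(tag_int(7)) == 7
```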

Bio:
Fritz Henglein (see bio above).

The KAIST School of Computing is excited to announce the AI+X Forum, a series of talks and discussions designed to ask how AI can impact various aspects of our society, including but not limited to politics, policy, education, law, labor, life, and art. Our second event of the series will be on AI+Education, with four distinguished speakers and a panel discussion. Please refer to the poster below for details. The talks and discussion will be in Korean this time.

Real-world entities such as people, organizations and countries play a critical role in text. Reading offers rich explicit and implicit information about these entities, such as the categories they belong to, relationships they have with other entities, and events they participate in. In this talk, we introduce approaches to infer implied information about entities, and to automatically query such information in an interactive setting. We expand the scope of information that can be learned from text for a range of tasks, including sentiment extraction, entity typing and question answering. To this end, we introduce new ideas for how to find effective training data, including crowdsourcing and large-scale naturally occurring weak supervision data. We also describe new computational models that represent rich social and conversation contexts to tackle these tasks. Together, these advances significantly expand the scope of information that can be incorporated into the next generation of machine reading systems.

Bio:

Eunsol Choi is a Ph.D. candidate at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on natural language processing, specifically applying machine learning to recover semantics from text. She completed a B.A. in Computer Science and Mathematics at Cornell University, and is a recipient of the Facebook fellowship.

Title:
Overlapping Community Detection in Massive Social Networks

Abstract:
Massive social networks have become increasingly popular in recent years. Community detection is one of the most important techniques for the analysis of such complex networks. A community is a set of cohesive vertices that has more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this talk, I will introduce scalable overlapping community detection algorithms that effectively identify high quality overlapping communities in various real-world networks.

I will first talk about an efficient overlapping community detection algorithm using a seed set expansion approach. The key idea of this algorithm is to find good seeds and then greedily expand these seeds using a personalized PageRank clustering scheme. Experimental results show that our algorithm significantly outperforms other state-of-the-art overlapping community detection methods in terms of run time, cohesiveness of communities, and ground-truth accuracy. To develop more principled methods, we formulate the overlapping community detection problem as a non-exhaustive, overlapping graph clustering problem where clusters are allowed to overlap with each other, and some nodes are allowed to be outside of any cluster. To tackle this non-exhaustive, overlapping clustering problem, we propose a simple and intuitive objective function that captures the issues of overlap and non-exhaustiveness in a unified manner. To optimize the objective, we develop not only fast iterative algorithms but also more sophisticated algorithms using a low-rank semidefinite programming technique. Our experimental results show that the new objective and the algorithms are effective in finding ground-truth clusterings that have varied overlap and non-exhaustiveness.
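For readers unfamiliar with the seed-expansion pattern, the following is a minimal, generic sketch (our own, not the paper's implementation): approximate personalized PageRank from a seed set, then sweep the degree-normalized ordering for a low-conductance cut.

```python
# Minimal seed-set expansion sketch: PPR scores + conductance sweep.
def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    """Approximate PPR by power iteration; adj maps node -> list of neighbors."""
    p = {v: 0.0 for v in adj}
    restart = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in adj}
    for _ in range(iters):
        nxt = {v: alpha * restart[v] for v in adj}
        for u in adj:
            share = (1 - alpha) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        p = nxt
    return p

def sweep_cut(adj, p):
    """Return the prefix of the degree-normalized PPR ordering minimizing conductance."""
    order = sorted(adj, key=lambda v: p[v] / len(adj[v]), reverse=True)
    vol_total = sum(len(adj[v]) for v in adj)
    best, best_phi, inside = None, float("inf"), set()
    cut = vol = 0
    for v in order[:-1]:
        inside.add(v)
        vol += len(adj[v])
        # Edges from v to outside raise the cut; edges to inside were
        # previously cut edges and are now internal, so they lower it.
        cut += sum(1 if u not in inside else -1 for u in adj[v])
        phi = cut / min(vol, vol_total - vol)
        if phi < best_phi:
            best, best_phi = set(inside), phi
    return best, best_phi

# Two triangles joined by one bridge edge: expansion from seed "a"
# recovers the triangle {a, b, c}.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
       "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
community, phi = sweep_cut(adj, personalized_pagerank(adj, {"a"}))
assert community == {"a", "b", "c"}
```

The scalable algorithms in the talk replace the dense power iteration with local push updates, so only the neighborhood of the seed is ever touched.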

Biography:
Joyce Jiyoung Whang is an assistant professor of Computer Science and Engineering at Sungkyunkwan University. She received her B.S. degree in Computer Science and Engineering from Ewha Womans University, and Ph.D. in Computer Science from the University of Texas at Austin. Her main research interests are in big data, data mining, machine learning, and social network analysis with specific interests in community detection, overlapping clustering, and graph partitioning.

김동선 (Dongsun Kim), Research Associate, University of Luxembourg, Luxembourg

Title: Making Programs Understand Programs

Abstract:
Traditional software maintenance relies on developers' effort to understand programs, which is costly. As software systems grow larger and more complex, however, this practice is overwhelmed by the sheer size of the software. Current program characterization techniques focus on features manually designed by researchers. Unfortunately, such features are subjective and do not guarantee precise program understanding. Recent advances in deep neural networks (DNNs) offer new opportunities for program understanding: autoencoders implemented on DNNs can automatically identify features of given data.
This talk presents novel approaches to understanding programs without manually designed features. The approaches first transform program entities into a simplified form acceptable to autoencoders, which then identify features. The features can represent any program entity. The talk introduces two applications of this program representation. First, it can detect inconsistencies between function names and bodies, based on the assumption that similar bodies should have similar names. The second application is mining common fix patterns to improve automated program repair. Since patches are also program entities, the same idea (i.e., feature identification) applies. The talk demonstrates how to use autoencoders to extract fix patterns from human-written patches and how to repair real-world bugs with those patterns.
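As a schematic of the feature-identification idea (a deliberately simplified stand-in: the talk's work uses deep autoencoders over program entities, and the function bodies and names below are invented), one can encode entities as bag-of-token vectors, compress them to a low-dimensional code, and compare entities in code space. We use the closed-form optimum of a linear autoencoder (the principal subspace, via SVD) rather than iterative training:

```python
# Schematic: bag-of-token encoding + linear-autoencoder features, so that
# similar function bodies get similar feature codes.
import numpy as np

def bag_of_tokens(body, vocab):
    return np.array([body.count(t) for t in vocab], dtype=float)

# Hypothetical function bodies as token lists.
bodies = {
    "sort_items": ["for", "if", "swap", "len"],
    "order_list": ["for", "if", "swap"],
    "read_file":  ["open", "read", "close"],
}
vocab = ["for", "if", "swap", "len", "open", "read", "close"]
X = np.stack([bag_of_tokens(b, vocab) for b in bodies.values()])

# The optimal linear autoencoder projects onto the top principal directions;
# we take that closed form directly from the SVD of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                                  # encoder: tokens -> 2-d code
codes = {name: Xc[i] @ W for i, name in enumerate(bodies)}

def similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Similar bodies yield similar codes: the two sorting-like functions are
# closer to each other than either is to the file-reading function.
assert similarity(codes["sort_items"], codes["order_list"]) > \
       similarity(codes["sort_items"], codes["read_file"])
```

Under this view, "name/body inconsistency" amounts to a body whose code is far from the codes of bodies sharing its name.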

Biography

Dongsun Kim is a Research Associate at the University of Luxembourg. He was formerly a post-doctoral fellow at the Hong Kong University of Science and Technology. His research interests include automatic patch generation, fault localization, static analysis, and search-based software engineering (SBSE). In particular, automated debugging is his current focus. His recent work has been recognized by several awards, including a featured article in the IEEE Transactions on Software Engineering (TSE) and an ACM SIGSOFT Distinguished Paper award at the International Conference on Software Engineering (ICSE). He leads the FIXPATTERN project funded by the FNR (Luxembourg National Research Fund) CORE programme.

한동균 (Donggyun Han), PhD Candidate, University College London, UK

Title: Supporting Modern Code Review

Abstract:
Modern code review is a lightweight and asynchronous process of auditing code changes by a reviewer other than the author of the change. Code review is widely adopted in both open-source and industrial projects because of its diverse benefits, including defect detection, code improvement, and knowledge transfer.
This thesis presents three research results on code review. First, we conduct a large-scale developer survey: we sent a questionnaire to open-source project developers to understand how they conduct code review, what difficulties they face, and how proprietary and open-source projects differ. Furthermore, we reproduce questions from a previous survey to broaden the empirical knowledge in the code review research community. Second, we conduct an in-depth investigation of coding conventions during code review. Coding conventions guide developers to write source code in a consistent format. We investigated how many coding convention violations are introduced, removed, and addressed based on review comments during code review. The results show that developers usually introduce more convention violations by the end of a code review than before they make code changes, and that they spend considerable time checking convention violations even though diverse convention-checking tools are available. Third, we propose a technique that automatically recommends related code review requests. When a new request is submitted for code review, our technique compares it with previously reviewed requests that contain meaningful discussions that may help developers review the new request.
Based on the two empirical studies and the automated technique for recommending related code reviews, this thesis broadens the empirical knowledge available to code review practitioners and provides a useful technique that helps developers reduce their review effort.

Biography

DongGyun is a PhD student at the Centre for Research on Evolution, Search and Testing (CREST) in the Software Systems Engineering Group, Department of Computer Science, University College London. He is currently working under the supervision of Dr. Jens Krinke, Prof. Mark Harman, and Dr. Federica Sarro. He was a researcher in the knowledge convergence team at the KAIST Institute for IT Convergence. He received his MPhil from the Department of Computer Science & Engineering at the Hong Kong University of Science and Technology under the supervision of Dr. Sunghun Kim, and his B.Eng. from the Computer Engineering Department at Jeju National University under the supervision of Prof. Hoyoung Kwak. His research area is software engineering, mainly focusing on modern code review and empirical studies.

The KAIST School of Computing is excited to announce the AI+X Forum, a series of talks and discussions designed to ask how AI can impact various aspects of our society, including but not limited to politics, policy, education, law, labor, life, and art. Our first event will be on AI+Politics, with two distinguished speakers. Please refer to the poster below for details. Refreshments will be provided from 9:30. The talks and discussion will be in English.

Date & Time: 10:00-12:00, Jan. 17, 2019 (Thu)

Location: N1-201

Speakers:

- Co-designing Policy and Technology for AI (Prof. So Young Kim, KAIST)

- Mapping Political Communities: A Statistical Analysis of Lobbying Networks in Legislative Politics (Prof. In Song Kim, MIT)

"The purpose of computing is insight, not numbers," said R.W. Hamming, a former president of the ACM. This has certainly been the driving force for most visualization systems to date, which focus on exploring data and discovering the unknown. However, these systems typically have complex designs that are unintuitive and cumbersome for non-expert users. On the other hand, visualizations are increasingly being used to communicate data and messages to a general audience. In this talk, I will reexamine the role of visualization beyond data exploration. I will use concrete examples from my own work to illustrate how we might enable individuals to design expressive visualizations for communication, author engaging visual stories about data, and evaluate the cognitive aspects of visualizations and stories. I will conclude by outlining future research opportunities for deepening our understanding of visual data stories and designing better visualization systems for interacting with data.

Bio

Nam Wook Kim is a Ph.D. student in Computer Science at Harvard University, working with Hanspeter Pfister and Krzysztof Gajos. His research in visualization and human-computer interaction focuses on lowering the barriers for a general audience to understand and communicate data. He has collaborated with MIT, Microsoft Research, Disney Research, and Adobe Research. His work on Data-Driven Guides received a Rising Star award from the Kantar Information is Beautiful Awards. His graduate studies are supported by the Kwanjeong Educational Foundation and the Siebel Scholars Foundation. Prior to Harvard, he worked at KAIST, Samsung, and LG. He received an M.S. from Stanford and a B.S. from Stony Brook and Ajou University.

Please RSVP so that we can get an accurate headcount for lunch preparation.

My research vision is to enable experts and non-experts to successfully make sense of complex real-world problems. As a Human-Computer Interaction researcher, I iteratively study how sensemaking is performed to identify challenges in collaborative data analytics, design tools using computational techniques that overcome these challenges, and evaluate my designs with human participants to inform subsequent designs. Solving crimes correctly is one such critical and life-altering problem. The National Registry of Exonerations at the University of Michigan reports that in 2016 alone, almost 175 wrongfully convicted people were exonerated after spending a non-trivial part of their lives in prison for crimes they did not commit. This is four times the number from 10 years ago, and the trend continues upward. In my work, I have found that sharing information socially, succumbing to cognitive biases, and the lack of support afforded by changing interaction paradigms are key challenges in collaborative data analytics. To overcome these challenges, I have iteratively developed multiple tools, including SAVANT, REFLECTIVA, CROWDS4ANALYTICS, TEMPORA, and RAMPARTS. My approach establishes a research framework for creating rich collaborative data analytic systems by: (1) utilizing human-generated analytic artifacts to inform and design interactions, (2) creatively leveraging "off-the-shelf" natural language processing, sensors, and crowds to design intelligent data analytic tools, and (3) evaluating the effect of these designs in controlled settings to identify the costs and benefits of each design decision.

BIO:

Tesh (Nitesh) Goyal is a researcher at Google, where his collaborative sensemaking research has been used in Google Maps and Web experiences. Tesh's research develops design approaches to build novel data analytics tools that enhance information sharing, reduce biases using visualizations, minimize distractions using physiological data, and support collaborative problem-solving with crowds. His research has also contributed to the theory of sensemaking by introducing Sensemaking Translucence as a design metaphor for a mirror that enables self-reflection. He received his MSc in Computer Science from the University of California, Berkeley and RWTH Aachen, advised by Prof. John Canny, prior to receiving his PhD in Information Science from Cornell University, where he was advised by Prof. Susan R. Fussell. His research has been supported by a German government fellowship, the National Science Foundation, and a MacArthur Genius Grant. Frequently collaborating with industry (Google Research, Yahoo Labs, HP Labs, Bloomberg Labs), he has published 10 first-author papers in top-tier HCI conferences and journals (CHI, CSCW, JASIST, ICTD, ICIC and Ubicomp/IMWUT) and has received two best paper honorable mention awards.

- When: 12pm-1pm, December 21st, 2018 (Fri)

- Where: Room 201, Building N1

- Host: Juho Kim

Please RSVP so that we can get an accurate headcount for lunch preparation.

Toward human-centered Artificial Intelligence: New opportunities for designers in the age of machine intelligence

ABSTRACT

As Artificial Intelligence has become the engine of the information society, more and more challenges are found at the intersection of people and AI. However, most AI-powered systems are still developed by small groups of computer scientists who may not have a thorough understanding of human-AI interaction. According to a report by McKinsey & Company, only 10% of AI projects are eventually shipped to users. In this talk, I argue first that the members of an AI project must have multi-disciplinary knowledge to improve this success rate. For instance, Human-Computer Interaction knowledge can improve data collection; rapid prototyping skills enable exploring alternative ways to help end users even before AI models are built; and GDPR (General Data Protection Regulation) compliance is not optional for commercial AI products. Second, we need a common language and infrastructure to make the jargon around AI technology accessible to non-technical people. Finally, I argue that designers must develop a profound (but non-technical) sense of AI, as they did with the mechanical parts of modern product design.

BIO

I'm a research scientist in the Systems Technology Lab at Adobe. My work at Adobe is to build interactive systems that help marketers efficiently monitor links in marketing emails.

I received my Ph.D. in Computer Science from the University of Maryland, College Park. I also have an MSc in Design for Interaction from Delft University of Technology, and a BSc in Industrial Design (major) and Computer Science (minor) from the Korea Advanced Institute of Science and Technology. During my graduate studies, I was fortunate to intern at a number of great research labs, including the IBM Watson Lab and Adobe Research.

My research focuses on designing, building, and evaluating interactive technology that bridges the gap between artificial intelligence and people. My work often involves the entire spectrum of data flow: extracting data from people's behavior to feed AI; making AI more understandable and trustworthy to people; and building positive feedback loops for evolving AI. The eventual goal of my research is to develop a multi-disciplinary, human-centered process for designing AI systems.

TITLE: Test Diversity as a General Driver for Test Automation

BIOGRAPHY

Prof. Feldt is a full professor of software engineering at Chalmers University of Technology, Sweden, and at Blekinge Institute of Technology, Sweden. He has also worked as an IT and software consultant for more than 20 years. His research interests include human-centered software engineering, software testing and verification and validation, automated software engineering, requirements engineering, and user experience. Most of his research is of an empirical nature and conducted in close collaboration with industry partners. He received a Ph.D. (Techn. Dr.) in computer engineering from Chalmers University of Technology in 2002. He is currently co-editor-in-chief of the Journal of Empirical Software Engineering, and was a co-program chair of the IEEE International Conference on Software Testing, Verification and Validation in 2018.

Speaker:

BongShin Lee, Microsoft Research

Title:

Data-Driven Storytelling with Expressive Visualization

Abstract:

Practitioners are increasingly using visualizations to tell compelling stories supported by data, and continually developing novel techniques that integrate data visualization into a cohesive narrative. In response, those of us in the visualization research community have set to identify and refine design principles and to develop innovative techniques and tools. In this talk, I will present my recent research on data-driven storytelling, which focuses on empowering people to easily create data-driven stories leveraging expressive visualizations without the need for programming. I will also briefly discuss future research directions in this exciting field.

Bio:

Bongshin Lee is a Senior Researcher at Microsoft Research. She explores innovative ways to enable people to create visualizations, interact with their data, and share data-driven stories. She has recently been focusing on helping people collect & explore data about themselves and share insights with others by leveraging visualizations. Bongshin currently serves as General Co-Chair for ACM ISS 2019 and Associate Editor for IEEE TVCG. She has served as General Co-Chair for IEEE PacificVis 2017 and Papers Co-Chair for IEEE InfoVis 2015 & 2016 and IEEE PacificVis 2018. She earned her MS and PhD in Computer Science from the University of Maryland at College Park in 2002 and 2006, respectively.

Title: Blockchain Use Cases and Challenges to Mass Adoption

Speaker: Jason Han, CEO of Ground X

Abstract:
This talk will briefly introduce the principles and key features of blockchain and discuss the current status and problems of the blockchain industry. The most important issue is proving the usability of blockchain by building real use cases and then achieving mass adoption. The talk will cover potential use cases and the challenges to mass adoption of blockchain.

Bio:
Jason is a serial entrepreneur and CEO of Ground X, a blockchain subsidiary of Kakao. He received his Ph.D. from the School of EECS at KAIST in 2005; his research topics were P2P algorithms such as DHTs, and distributed systems. In 2007, he founded NexR, the first big-data and cloud-computing tech startup in Korea, which was acquired by KT four years later. After that, he co-founded FuturePlay and served as its CTO. FuturePlay is a tech-centric accelerator and investor focusing on tech startups in APAC; he invested in dozens of startups and gave them technical mentoring. He founded Ground X with Kakao in 2018. He also taught innovative business models and IT trends in the KAIST MBA program as an adjunct professor for seven years, starting in 2007.

HCI@KAIST is organizing a seminar with Prof. Jeongmi Lee of the Graduate School of Culture Technology at KAIST.

- When: 12:00-1:00 pm, November 30th, 2018 (Fri)

- Where: Room 102, Building N1

- Host: Geehyuk Lee

Lunch will be provided. Please RSVP so that we can get an accurate headcount for lunch preparation.

In the face of a massive influx of sensory stimulation, humans are confronted with the critical problem of selecting a subset of information, making the best use of limited cognitive capacity. Attention is the cognitive mechanism that solves this selection problem, allowing enhancement of currently relevant information while inhibiting irrelevant information. The efficiency of attentional control, however, fluctuates within and across individuals due to many factors. In this talk, I will present my previous research, focusing specifically on how distinct attributes of sensory input (value, relevance, salience, context, etc.) are integrated to optimally guide attentional deployment, and on what factors determine the variability in attentional performance within and across individuals.

Bio

Jeongmi Lee is an assistant professor at the Graduate School of Culture Technology at KAIST, where she directs the Visual Cognition Lab. She earned her B.A. and M.A. degrees in Psychology at Seoul National University, and her Ph.D. in Cognitive Neuroscience at George Washington University (2013). She was a postdoctoral researcher at UC Davis before joining KAIST (2018). Her research interests concern human visual attention and perception, focusing on the general principles of attentional guidance and the factors that determine variability in attentional performance. She utilizes converging methodologies, including behavioral experiments and neuroimaging.