3D Touch Point Detection on Load Sensitive Surface Based on Continuous Fluctuation of a User Hand

Abstract: Expanding the concept of tangible interaction from designed interfaces to the everyday objects around us, we are working on a 3D touch detection project that uses a deployable load-sensitive surface to capture tangible interaction. Building on the key insight that the human hand moves continuously and never stops completely, our mathematical approach based on the pseudo-inverse matrix method enables the system to understand three-dimensional touch interaction between the user and objects above a 2D surface with embedded load cells.
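As a rough illustration of the pseudo-inverse idea, the contact point on a rigid plate supported by load cells can be recovered from a moment balance. The four-corner cell layout and the 2D formulation below are illustrative assumptions, not the authors' hardware or method:

```python
import numpy as np

# Hypothetical layout: four load cells at the corners of a 0.4 m square
# surface (the poster's actual cell count and placement are not given).
sensor_xy = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])

def touch_point(forces):
    """Estimate the 2D contact point from per-cell normal forces via the
    moment balance F * p = sum_i f_i * r_i, solved with a pseudo-inverse."""
    F = forces.sum()                  # total normal force
    A = F * np.eye(2)                 # coefficient matrix of the balance
    b = forces @ sensor_xy            # sum_i f_i * r_i
    return np.linalg.pinv(A) @ b      # least-squares contact point

# A press loading all four cells equally resolves to the surface center:
print(touch_point(np.array([1.0, 1.0, 1.0, 1.0])))  # -> [0.2 0.2]
```

Using `pinv` rather than a plain inverse keeps the solve well defined even when the system is degenerate (e.g. no contact force at all).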

Authors/Presenter(s): Takatoshi Yoshida, MIT Media Lab, United States of America; Xiaoyan Shen, MIT Art Culture & Technology, United States of America; Tal Achituv, MIT Media Lab, United States of America; Hiroshi Ishii, MIT Media Lab, United States of America

A Device for Reconstructing Light Field Data as 3D Aerial Image by Retro-reflection (Topic: Virtual Reality)

Abstract: The image reconstructed by the light field display is projected in the air by a Fresnel lens and aerial imaging by retro-reflection, so the viewer experiences a greater sense of reality.

Abstract: We present a novel motion retargeting system using a deep autoencoder. We conduct several experiments, and our system achieves higher accuracy and a lower computational burden than other approaches.

Adaptation of Manga Face Representation for Accurate Clustering

Abstract: To accurately cluster faces within an individual manga, we propose a method to adapt manga face representations to an individual manga.

Authors/Presenter(s): Koki Tsubota, The University of Tokyo, Japan; Toru Ogawa, The University of Tokyo, Japan; Toshihiko Yamasaki, The University of Tokyo, Japan; Kiyoharu Aizawa, The University of Tokyo, Japan

Abstract: The poster discusses the challenges, opportunities, and lessons gained from working with students from different cultures, inspiring novel ways in which animation education in Asia can be conducted.

An Interface for Post-Match Play-by-Play Analysis of a Fighting Game Based on the Two Players' Eye Movements (Topic: Video Gaming)

Abstract: We propose an interface to support post-match play-by-play analysis of a hand-to-hand fighting game based on the two players' eye movements. An e-Sports match, like a professional chess match, is followed by analysis and commentary about the performance of the players. In this study, we constructed an interface for visualizing information about the match based on the players' eye movements to facilitate post-match play-by-play analysis and commentary. Our interface highlights commonalities and differences in the areas on the screen where the players focus their attention, as well as commonalities and differences in the direction of their eye movements.

Authors/Presenter(s): Ryohei Oda, Okayama University of Science, Japan; Yuto Mizumatsu, Okayama University of Science, Japan; Tomoki Kajinami, Okayama University of Science, Japan

Abstract: We propose a semi-automatic system to calculate the image registration of projections for leaves and to interactively track the projection area. We describe our results with some animated effects.

Abstract: Dancheong is designed to decorate various parts of wooden buildings with beautiful and majestic colors. The painting process involves a stage called Cheoncho, in which a craftsman punches holes one by one into paper with a needle, repeating this action millions of times. To reduce this time-consuming work, we propose a system that automatically performs the Cheoncho process, assisting a craftsman in copying the desired pattern onto the target building part in an easy and accurate manner.

Authors/Presenter(s): Yoon-Seok Choi, Electronics and Telecommunications Research Institute (ETRI), South Korea; Soonchul Jung, Electronics and Telecommunications Research Institute (ETRI), South Korea; In-Su Jang, Electronics and Telecommunications Research Institute (ETRI), South Korea; TaeWon Choi, Electronics and Telecommunications Research Institute (ETRI), South Korea; Jin-Seo Kim, Electronics and Telecommunications Research Institute (ETRI), South Korea

Authors/Presenter(s): Shih-Hao Liu, National Taipei University of Technology, Taiwan; Tung-Ju Hsieh, National Taipei University of Technology, Taiwan

Color Enhancement Factors to Control Spectral Power Distribution of Illumination (Topic: Computer Vision and Image Understanding)

Abstract: We introduced color enhancement factors to control the spectral power distribution of illumination, which enabled us to enhance one or more colors at once while retaining the color appearance of white.

Abstract: A wide-area mixed reality multiplayer game system with a virtual-real registration game environment.

Authors/Presenter(s): Yihua Bao, AICFVE Beijing Film Academy, China; Dong Li, School of Optics and Photonics, Beijing Institute of Technology, China; Dongdong Weng, School of Optics and Photonics, Beijing Institute of Technology, China; Mo Su, School of Optics and Photonics, Beijing Institute of Technology, China

Abstract: This paper presents a digital twin musical instrument (DMI) system for the zheng. We build the whole interactive experience: interactive playing on the DMI, sound generation, and the immersive virtual environment.

Authors/Presenter(s): Ning Xie, School of Computer Science and Engineering, University of Electronic Science and Technology of China; Center for Future Media, University of Electronic Science and Technology of China, China; Xinrui Cai, University of Electronic Science and Technology of China, China; Sipei Li, University of Electronic Science and Technology of China, China; Yifan Lu, Nanchang Hangkong University, China; Kai Tan, University of Electronic Science and Technology of China, China; Mingyue Lou, University of Electronic Science and Technology of China, China; Heng Tao Shen, School of Computer Science and Engineering, University of Electronic Science and Technology of China; Center for Future Media, University of Electronic Science and Technology of China, China

Abstract: We introduce two simple techniques to enhance visualization of human anatomy: one is a post-process for rendering, and the other is a simple trick for geometry processing.

Authors/Presenter(s): Hirofumi Seo, The University of Tokyo, Japan; Takeo Igarashi, The University of Tokyo, Japan

Evaluation of Reducing Three-Dimensionality of Movement to Make 3DCG Animation Look More Like 2D Animation (Topic: Animation and Visual Effects)

Abstract: We evaluated 3DCG movement with the aim of giving 3DCG a 2D animation style. We conducted a subjective evaluation of 3DCG movement when the frame rate and projection method were changed. As a result, we were able to make the movement of 3DCG more similar to that of 2D animation.

FingertipCubes: An Inexpensive D.I.Y Wearable for 6-DoF per Fingertip Pose Estimation using a Single RGB Camera (Topic: Computer Vision and Image Understanding)

Abstract: We present a 1 USD Do-It-Yourself wearable which, coupled with a webcam, provides 6-DoF per-fingertip tracking in real time. Applications include in-air writing and input for games and 3D UIs.

Abstract: We present a drone with a stereoscopic camera based on human binocular vision, which offers a novel mixed-reality environment with a bidirectional interaction between the real and virtual worlds.

Fukushima Nuclear Plant as a Synthetic Learning Environment

Abstract: Education-specific 3D virtual worlds used as simulations, termed Synthetic Learning Environments (SLEs), are ideal for 21st-century learning. An SLE project of the Fukushima Dai'ichi nuclear power plant was designed, modeled, programmed, and implemented for educational purposes, motivated by the fact that the disaster of March 2011 revealed much about Japan's lack of preparedness for nuclear accidents. An iterative process of design, make, share, and reflect was adopted by the student developers. In Japan, this creative process is termed TKF: Tsukutte つくって; Katatte かたって; Furikaeru ふりかえる.

Abstract: GoThro is a capturing system, composed of a camera and optical systems, that exceeds the physical limits imposed by the camera body.

Authors/Presenter(s): Yudai Niwa, The University of Tokyo, Japan; Hajime Kajita, The University of Tokyo, Japan; Naoya Koizumi, The University of Electro-Communications, JST PRESTO, Japan; Takeshi Naemura, The University of Tokyo, Japan

Hair Modeling from a Single Anime-Style Image (Topic: Geometry and Modeling)

Abstract: We present a technique for reconstructing three-dimensional cartoon hair from a single anime-style portrait image.

Historical Streetscape Simulation System that Reflects Changes in Weather, Time, and Seasons (Topic: Computer-Aided Design)

Abstract: We developed a streetscape simulation system for the post-station town of Fujisawa-juku on the Old Tokaido for inheriting its historical culture. This system can reflect different weather, time, and seasons.

Abstract: We propose a novel system that enables a user to see stereoscopic 3DCG images in mid-air and interact with them directly. The system displays 3DCG objects with motion parallax in mid-air, so the user can observe them while feeling a stereoscopic effect created by the motion parallax. It is also possible to interact with the mid-air 3DCG objects with the fingers: the user can move, deform, and draw 3DCG objects as if they were there.

Interaction System with Mid-Air CG Character that Has Its Own Eyes (Topic: Virtual Reality)

Abstract: This study allows users to interact with a mid-air CG character by displaying mid-air images and capturing the user from that viewpoint through optical transfer of the camera viewpoint. The system enables an interaction in which the user and the mid-air CG character stare at each other.

Authors/Presenter(s): Kei Tsuchiya, The University of Electro-Communications, Japan; Ayaka Sano, The University of Electro-Communications, Japan; Naoya Koizumi, The University of Electro-Communications, JST PRESTO, Japan

Interactive Modeling for Craft Band Design (Topic: Geometry and Modeling)

Abstract: We propose a support system with which a novice can easily create a craft band work with the desired design.

Abstract: In this paper, a gaze navigation method for an interactive visual narrative application is proposed, and a prototype system, developed for touchscreen computer devices, such as the iPad, is described.

Authors/Presenter(s): Negar Kaghazchi, The University of Electro-Communications, Japan; Sachiko Kodama, The University of Electro-Communications, Japan; Masakatsu Kaneko, The University of Electro-Communications, Japan

Life Sciences in Virtual Reality: First-Year Students Learning as Creators (Topic: Information Visualization and Scientific Visualization)

Abstract: We present learning activities where first-year biology students learn about complex biological structures by creating their own VR experiences, rather than using VR as a learning tool.

Authors/Presenter(s): Christopher Hammang, The University of Sydney, Australia; Phillip Gough, The University of Sydney, Australia; Weber Liu, The University of Sydney, Australia; Eric Jiang, The University of Sydney, Australia; Jim Cook, The University of Sydney, Australia; Pauline Ross, The University of Sydney, Australia; Philip Poronnik, The University of Sydney, Australia

Abstract: Our system quickly and easily generates koi animation in the style of Chinese painting, based on 3D models and other dynamic Chinese-painting elements, according to user inputs.

Authors/Presenter(s): Rina Savista Halim, National Taiwan University of Science and Technology, Taiwan; Phillip Pan, National Taiwan University of Science and Technology, Taiwan; Kuo Wei Chen, National Taiwan University of Science and Technology, Taiwan; Chih-Yuan Yao, National Taiwan University of Science and Technology, Taiwan; Tong-Yee Lee, National Cheng Kung University, Taiwan

Abstract: We propose a deformation structure that expands along the x, y, and z axes simultaneously using the Ron Resch pattern. This structure is an integrated design, which enables us to design it according to the application.

Abstract: We propose a novel foveated super-resolution convolutional neural network (SRCNN) for HMDs that uses an object tracking algorithm to reduce the computational load of rendering high-resolution images. We implement object tracking on the region to compensate for the frame processing speed of eye-tracking devices, which is relatively slow for applying the resolution conversion. The SRCNN is applied to cognitive regions, and typical interpolation is applied to other regions to reduce the rendering cost. As a result, computation is decreased by 90.4059%, and the PSNR is higher than that of the conventional foveated rendering algorithm.
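A minimal sketch of the cost argument: only pixels inside the foveal region go through the expensive super-resolution path, the rest through cheap interpolation. The circular region, its radius, the frame size, and the `foveal_mask` helper are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def foveal_mask(h, w, center, radius):
    """Boolean mask of the pixels inside a circular foveal region."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2

# Hypothetical 1080p frame with a 120-pixel foveal radius around the
# tracked gaze point: only the masked region would be fed to an SRCNN;
# the periphery is filled by ordinary interpolation.
mask = foveal_mask(1080, 1920, center=(540, 960), radius=120)
srcnn_fraction = mask.mean()   # share of pixels on the expensive path
print(f"{(1 - srcnn_fraction) * 100:.1f}% of pixels skip the SRCNN")
```

With these made-up numbers the expensive path covers only a few percent of the frame, which is the source of the large reduction in rendering computation.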

OpenISS Depth Camera as a Near-Realtime Broadcast Service for Performing Arts and Beyond (Topic: Computer Vision and Image Understanding)

Abstract: OpenISS is an open-source software suite that exhibits a multimodal interaction architecture whose individual components provide a platform, also offered as SaaS, for artists to augment art performances, entertainment, and technology. It is positioned to be the backend core for ISSv2 and similar systems.

Pre- and Post-Processes for Automatic Colorization using a Fully Convolutional Network

Abstract: Automatic colorization is a significant task, especially for the anime industry. An original trace image to be colorized contains not only outlines but also the boundary contour lines of shadows and highlight areas. Unfortunately, these lines tend to decrease the consistency among images. Thus, this paper provides a method for a cleaning pre-process of an anime dataset to improve the prediction quality of a fully convolutional network, and a refinement post-process to enhance the output of the network.

Abstract: We propose a system for real-time measurement and visualization of a broad sound field by using an optical see-through head-mounted display (OSTHMD) with simultaneous localization and mapping (SLAM). The proposed system can superimpose the measurement results directly on the real space through the OSTHMD and SLAM, and achieves free movement of measurement positions in a broad area without multiple AR markers. Visualizing the 3D sound intensity map of an entire room, a stationary vector field representing the energy flow of sound, helps us to design the sound field within a space.

Real-Virtual Bridge: A Modular Mechanism to Mediate Between Real and Virtual Objects

Abstract: We propose the real-virtual bridge, a conceptual model that can be used to integrate real objects and virtual objects when constructing virtual reality application systems. We introduce the concept and architecture of the real-virtual bridge and describe two implementations of it, on smartphones and on microscopes.

Realistic AR Makeup over Diverse Skin Tones on Mobile

Abstract: We propose a novel approach to applying realistic makeup over a diverse set of skin tones on mobile phones using augmented reality.

Authors/Presenter(s): Bruno Evangelista, Instagram, United States of America; Houman Meshkin, Instagram, United States of America; Helen Kim, Instagram, United States of America; Anaelisa Aburto, Instagram, United States of America; Ben Max Rubinstein, Instagram, United States of America; Andrea Ho, Instagram, United States of America

Research and Development of Augmented FPV Drone Racing System (Topic: Animation and Visual Effects)

Abstract: Augmented FPV Drone Racing is a system that allows spectators to easily understand the situation of drone races by using augmented feedback techniques, including projection mapping and autonomous commentaries. In this project, we have been developing visualization solutions for FPV (first-person view) drone racing that allow spectators and pilots to understand the race situation easily and intuitively.

Retinal HDR: HDR Image Projection Method onto Retina

Abstract: There are multiple problems in the area of near-eye displays: field of view, vergence-accommodation conflict, image quality, and dynamic range. Studies have been conducted to solve these problems; however, compared with other near-eye display tasks, few have targeted HDR near-eye displays. We propose an HDR representation method for near-eye displays that combines Dihedral Corner Reflector Array (DCRA)-based retinal projection with a high-contrast projector.

Abstract: We developed a novel graph visualization technique designed specifically for virtual reality. Furthermore, we conducted a user study that compared our novel ring visualization to a traditional node-based graph visualization.

Authors/Presenter(s): Raghav Gupta, University of Maryland, College Park, United States of America; Alex Busch, University of Maryland, College Park, United States of America; Brian Russin, Expert Consultants, Inc., United States of America; Samir Khuller, University of Maryland, College Park, United States of America; Celeste Lyn Paul, U.S. Department of Defense, United States of America; Mikhail Sorokin, University of Maryland, College Park; MPLEXVR, United States of America; Galen Stetsyuk, University of Maryland, College Park, United States of America

Scientific and Visual Effects Software Integration for the Visualization of a Chromatophore

Abstract: A custom software integration and rendering pipeline combines scientific and visual effects tools to cinematically visualize a supercomputer simulation of a photosynthetic organelle called a chromatophore.

Authors/Presenter(s): Kalina M. Borkiewicz, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America; AJ Christensen, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America; Stuart A. Levy, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America; Robert M. Patterson, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America; Donna J. Cox, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America; Jeff D. Carpenter, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States of America

Shape and Texture Reconstruction for Insects by using X-ray CT and Focus Stack Imaging (Topic: Geometry and Modeling)

Abstract: Reconstructing textured three-dimensional (3D) models of insect specimens is important, since the digital format has various advantages such as high space efficiency, high accessibility, and freedom from degradation. This poster presents a technique for reconstructing the 3D shape and texture of an insect by using X-ray CT measurement and focus stack imaging. We construct a 3D shape by segmenting the CT volume and obtain a texture from focus stack images taken from multiple viewpoints. By combining the two, we reconstruct highly detailed shapes and textures of insect specimens.

Simulating Kimono Fabrication based on the Production Process of Yuki-tsumugi (Topic: Image and Video Processing Applications)

Abstract: Yuki-tsumugi is a traditional Japanese silk fabric. In its production, a splashed pattern based on a picture is first created on a sheet of dedicated grid paper. Second, piece goods are woven based on the pattern plan. Finally, a kimono is produced from the piece goods. However, estimating the appearance of the kimono during each step of production is difficult. Therefore, we propose a method for generating a kimono image of Yuki-tsumugi from a picture, based on the actual production process.

Simulation of Bubbles with Floating and Rupturing Effect for SPH (Topic: Animation and Visual Effects)

Abstract: Our method, based on an incompressible SPH and multi-fluid simulation framework, realizes various behaviors of bubbles; in particular, the floating and rupturing effects can be simulated physically.

Authors/Presenter(s): Hiroki Watanabe, University of Tsukuba, Japan; Makoto Fujisawa, University of Tsukuba, Japan; Masahiko Mikawa, University of Tsukuba, Japan

Abstract: This article explores the possibilities of a light and inexpensive way of providing haptic feedback through texture simulation using tactile vibration. With the popularization of virtual reality, the field of haptic feedback is rapidly evolving. The goal of this study is to present a moderately realistic but cheap way of simulating haptic feedback, including the texture of a surface, and to propose a system accessible to the great majority.

Abstract: We propose a method for efficiently rendering fluorescence under a global illumination environment by using importance sampling of wavelengths that considers both the spectra of fluorescent materials and those of light sources.
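The sampling idea can be sketched in a few lines: draw wavelengths with probability proportional to the product of the light-source spectrum and the material's response, so samples concentrate where both are strong. The Gaussian spectra and bin layout below are made-up illustrations, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabulated spectra over 400-700 nm (31 bins of 10 nm).
wavelengths = np.linspace(400.0, 700.0, 31)
light_spd  = np.exp(-((wavelengths - 550.0) / 80.0) ** 2)  # light source
absorption = np.exp(-((wavelengths - 450.0) / 30.0) ** 2)  # fluorescent material

# Importance-sample wavelengths with a pdf proportional to the product
# of the light spectrum and the material's absorption spectrum.
weights = light_spd * absorption
pdf = weights / weights.sum()
samples = rng.choice(wavelengths, size=1000, p=pdf)

# Each sample carries the Monte Carlo weight f(lambda) / pdf(lambda); when
# the integrand is exactly this product, the weights become constant.
```

Samples cluster in the overlap of the two spectra (here around 460 nm), which is where a uniform wavelength sampler would waste the fewest of its samples.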

Abstract: We analyze the relationship between the gaps of multiple process cycles and the tolerance to motion acceleration of the camera. We then verify the result on an actual moving-object detection system.

Tactile Microcosm of ALife: Interaction with Artificial Life by Aerial Mixed Reality Display

Abstract: A small holographic-looking fish floats in the water of a petri dish. Users can enjoy interacting with the simulated schooling, fish-like artificial life through aerial imaging and haptic feedback. The virtual creatures move autonomously through a combination of a predator-prey model and the BOIDs algorithm, implemented with a potential method.

Abstract: We introduce tangible interaction through multi-channel sensors into 3D-printed modular robots. A user study shows that young students can effectively improve their spatial-reasoning skills after interacting with these robots.

Abstract: The Tentacle Flora is a robotic sculpture inspired by a vision of a colony of sea anemones growing on coral. A shape-memory alloy actuator, composed of BioMetal Fiber so that it can bend in three directions, is used for the tentacles. The top of each actuator glows softly with a full-color LED, mimicking a bioluminescent organism. The Tentacle Flora evokes the beauty, wonder, and presence of living sea anemones in the depths of the ocean.

Abstract: Our mission is to transform medical education and training of surgeons, through innovative VR simulation and skill transfer from the virtual to the real operating room in a fail-safe environment.

Abstract: The Transitory project is an artistic and digital installation centered around artificial intelligence. It interacts directly with the audience, diffusing abstract audiovisual elements and expressing herself through poems.

Abstract: In this paper, we develop a web-based vector graphics editing system for interweaving and penetrating. We propose a data structure for dealing with interweaving and penetrating that allows users to assign a depth value to each edge of a polygon. As a result, when we click on a polygon and move it to interweave with another one, the intersecting edge is calculated using linear interpolation of the depth values. In contrast, the conventional SVG format must arrange layers to separate two polygons for interweaving and penetrating.
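The per-edge depth rule described above can be sketched as follows: find the crossing point of two edges, interpolate each edge's endpoint depths to that point, and let the deeper value decide which polygon is drawn in front. The function names and the "larger depth wins" convention are illustrative assumptions, not the paper's implementation:

```python
def edge_crossing(a1, a2, b1, b2):
    """Return (t, u), the intersection parameters along edges A and B,
    or None if the segments do not cross."""
    d = (a2[0]-a1[0])*(b2[1]-b1[1]) - (a2[1]-a1[1])*(b2[0]-b1[0])
    if d == 0:                        # parallel or degenerate edges
        return None
    t = ((b1[0]-a1[0])*(b2[1]-b1[1]) - (b1[1]-a1[1])*(b2[0]-b1[0])) / d
    u = ((b1[0]-a1[0])*(a2[1]-a1[1]) - (b1[1]-a1[1])*(a2[0]-a1[0])) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return t, u
    return None

def on_top(depth_a, depth_b, t, u):
    """Linearly interpolate the per-endpoint depths to the crossing and
    report which edge is drawn in front (larger depth wins here)."""
    da = depth_a[0] + t * (depth_a[1] - depth_a[0])
    db = depth_b[0] + u * (depth_b[1] - depth_b[0])
    return 'A' if da > db else 'B'

# Two edges crossing at their midpoints; edge A rises from depth 0 to 1,
# edge B stays at depth 0.2, so A is in front at the crossing.
t, u = edge_crossing((0, 0), (2, 2), (0, 2), (2, 0))
print(on_top((0.0, 1.0), (0.2, 0.2), t, u))  # -> A
```

Because the depth varies along each edge, one pair of polygons can weave over and under each other at successive crossings, which a single per-layer z-order (as in plain SVG) cannot express.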

Authors/Presenter(s): Yu-Lin Chao, National Taipei University of Technology, Taiwan; Tung-Ju Hsieh, National Taipei University of Technology, Taiwan