MatCam: A Camera that Sees Materials

Prof. Kristin Dana has been awarded a three-year NSF grant for the project MatCam: A Camera that Sees Materials. Rutgers is the lead institution on this $500K collaborative grant, with K. Dana as the Rutgers PI. Drexel University is the partner institution, with PI Ko Nishino.

The proposed research program will create the first material camera, or MatCam, that outputs a per-pixel label of object material and its properties for use in any visual computing task. The everyday real world contains a vast number of materials that are useful to discern, including concrete, metal, plastic, velvet, satin, a water layer on asphalt, carpet, tile, skin, hair, wood, and marble. A camera device for identifying these materials has important implications for developing new algorithms and new technologies across a broad set of application domains, including robotics, digital architecture, human-computer interaction, intelligent vehicles, and advanced manufacturing.

Abstract:

This project develops the first material camera, or MatCam, that outputs a per-pixel label of object material and its properties for use in visual computing tasks. The everyday real world contains a vast number of materials that are useful to discern, including concrete, metal, plastic, velvet, satin, a water layer on asphalt, carpet, tile, wood, and marble. A device for identifying materials has important implications for developing new technologies. For example, a mobile robot may use a MatCam to determine whether the terrain is grass, gravel, pavement, or snow in order to optimize mechanical control. In e-commerce, the material composition of objects can be tagged by a MatCam for advertising and inventory. The potential applications are limitless in areas such as robotics, digital architecture, human-computer interaction, intelligent vehicles, and advanced manufacturing. Furthermore, material maps have foundational importance in nearly all vision algorithms, including segmentation, feature matching, scene recognition, image-based rendering, context-based search, object recognition, and motion estimation. The camera brings material recognition to the broader scientific and engineering communities, much as depth cameras are currently used in many fields outside of computer vision.

This research brings high-accuracy material estimation out of the lab and into the real world, providing fast per-pixel material estimates. The program has three technical aims. First, a material appearance database is captured and stored using an exploration robot that views surfaces from multiple angles. Second, this large, structured, and actionable visual dataset is used to develop computational appearance models; a novel methodology based on angular reflectance gradients characterizes features of surface appearance, and, using the training data and statistical inference methods, these models are designed for hardware implementation. The final aim is the material camera itself: a near real-time prototype for point-and-shoot material acquisition that extends RGB-D cameras to RGB-DM cameras providing color, depth, and material. The hardware implementation of the material appearance models utilizes FPGA and SoC (system-on-chip) technology.
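To make the RGB-DM idea concrete, the sketch below shows one way the data flow might look in code. This is not the project's actual pipeline: the classifier here is a deliberately trivial placeholder (the real system would use learned appearance models trained on multi-angle reflectance data), and the `MATERIALS` label set and function names are illustrative assumptions. The point is only the shape of the output: a per-pixel material map stacked alongside color and depth.

```python
import numpy as np

# Hypothetical label set -- illustrative only, not the project's taxonomy.
MATERIALS = ["concrete", "metal", "grass", "snow"]

def classify_materials(rgb, depth):
    """Placeholder per-pixel material classifier.

    In a real MatCam this step would be a learned appearance model;
    here we simply bin pixel brightness into quartiles so that the
    surrounding data flow is runnable.
    """
    brightness = rgb.mean(axis=-1)                      # (H, W)
    bins = np.quantile(brightness, [0.25, 0.5, 0.75])   # 3 thresholds
    return np.digitize(brightness, bins)                # (H, W) ints in [0, 3]

def rgbd_to_rgbdm(rgb, depth):
    """Stack color, depth, and material into one (H, W, 5) RGB-DM frame."""
    material = classify_materials(rgb, depth)
    return np.dstack([rgb, depth, material])

# Example: a random 4x4 RGB-D frame becomes a 4x4x5 RGB-DM frame.
rgb = np.random.rand(4, 4, 3)
depth = np.random.rand(4, 4)
frame = rgbd_to_rgbdm(rgb, depth)
print(frame.shape)  # (4, 4, 5): 3 color channels + depth + material label
```

The last channel is an integer index into the material label set, which is what downstream tasks such as segmentation or terrain-aware robot control would consume.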