a solution for increased efficiency and value creation in the movie business

Unified 3D scenes

Public research dataset

The European Commission-funded IMPART project (Nov 2012 – Oct 2015) focused on 'big (multimodal) data' problems in the field of digital cinema production.

The tools produced have been integrated into the production software of Double Negative, and new products from FilmLight have resulted. Twenty journal papers and seventy conference publications document the research carried out. The project has created Open Data for research in this field and Open Source software (for acceleration and for the 3D web).

The multimodal data have been unified through a 3D paradigm, leading to tools that speed up 3D reconstruction by orders of magnitude and improve its results, and to tools for quality assessment (of capture environments, of 3D reconstructions, of in-focus areas, …). Video semantic analysis suitable for large-scale multi-view footage has been provided, as well as integrated 3D–2D (mainly web-based) visualizations.

This page showcases part of the results.

Data acquisition and registration

On-set capture for media production is challenging due to moving backgrounds, uncontrolled illumination and limited system support. It requires aligned background scene information as well as dynamic actions in the main capture volume, on-set system monitoring and assessment tools for unsecured capture environments, and accurate composition of footage from various capture devices.

Multi-modal data registration

2D and 3D footage acquired from various sensors is registered to a unified 3D space for efficient multi-source data management. 3D data from active sensors is registered directly to the reference coordinates through 3D feature detection and matching. 2D footage is registered via 3D reconstruction techniques such as stereo matching or structure-from-motion. Details of the pipeline and algorithms can be found in the project publications.
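Registering 3D data to a common reference frame ultimately requires estimating a rigid transform from matched 3D features. A minimal sketch of that step (the classic Kabsch/Umeyama least-squares alignment, not the IMPART pipeline itself) in plain NumPy:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch/Umeyama without scale), given matched 3D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation, det = +1
    t = mu_d - R @ mu_s
    return R, t

# toy example: transform a small cloud and recover the transform
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy real-world matches this estimate would typically be wrapped in an outlier-rejection loop such as RANSAC.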

Quality assurance tools

“If in doubt, reshoot” is an attitude born of the fact that media production generates simply too much data for manual review. Within IMPART, the University of Surrey developed a suite of tools for on-set quality assurance, monitoring and decision support, offering the following capabilities:

Prior to capture, ensuring that the cameras capture what the director has in mind.

During the capture, monitoring that the capture equipment operates correctly.

After the capture, validating the data, and if possible, correcting the existing issues without a reshoot.

Our work tackles the big data problem in media production by “preventing a multiplication of data beyond necessity”. The specific capabilities include synchronisation and coverage assessment for multi-camera networks, along with validation of calibration parameters. These are enabled by an ensemble of state-of-the-art computer vision techniques, specifically feature tracking and matching and robust geometry estimation.
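As a hedged sketch of the calibration-validation idea (not Surrey's actual tool): under a valid calibration, tracked correspondences must lie on each other's epipolar lines, so large point-to-line distances flag a drifted or invalid calibration. The camera parameters below are invented for a toy two-camera rig:

```python
import numpy as np

def fundamental_from_projections(P1, P2):
    """F with x2^T F x1 = 0, derived from two 3x4 projection matrices."""
    _, _, Vt = np.linalg.svd(P1)
    C = Vt[-1]                              # camera centre of P1 (null vector)
    e2 = P2 @ C                             # epipole in image 2
    e2x = np.array([[0, -e2[2], e2[1]],
                    [e2[2], 0, -e2[0]],
                    [-e2[1], e2[0], 0]])
    return e2x @ P2 @ np.linalg.pinv(P1)

def epipolar_residuals(F, x1, x2):
    """Perpendicular distances (pixels) of points x2 to epipolar lines F x1."""
    x1h = np.c_[x1, np.ones(len(x1))]
    x2h = np.c_[x2, np.ones(len(x2))]
    l2 = x1h @ F.T
    return np.abs(np.sum(x2h * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])

# toy rig: two cameras with a horizontal baseline
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
F = fundamental_from_projections(P1, P2)

rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))
Xh = np.c_[X, np.ones(100)]
x1 = Xh @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
x2 = Xh @ P2.T; x2 = x2[:, :2] / x2[:, 2:]

good = np.median(epipolar_residuals(F, x1, x2))
bad = np.median(epipolar_residuals(F, x1, x2 + [0.0, 3.0]))  # 3 px vertical drift
print(good < 1e-6, bad > 1.0)
```

In practice the correspondences themselves would come from feature tracking and matching, robustified against mismatches.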

3D reconstruction quality assessment

At Brno University of Technology, we developed algorithms for quality assessment of 3D scene reconstructions computed from stills with the Bundle Adjustment (BA) algorithm, taking advantage of novel data structures and accelerated algorithms developed in the IMPART project. A similar algorithm was also developed for quality assessment of LIDAR scans, a popular method of capturing 3D information in the digital cinema industry. Thanks to these algorithms, capture crews can now get feedback on set, which was previously not possible.
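One simple indicator of reconstruction quality (a minimal stand-in here, not the BUT algorithms themselves) is the root-mean-square reprojection error of the bundle-adjusted cameras and points; the toy camera below is made up for illustration:

```python
import numpy as np

def reprojection_rmse(P, X, obs):
    """Root-mean-square reprojection error (pixels) of a reconstruction.
    P: 3x4 camera matrix, X: (N, 3) points, obs: (N, 2) measured pixels."""
    Xh = np.c_[X, np.ones(len(X))]
    proj = Xh @ P.T
    uv = proj[:, :2] / proj[:, 2:]
    return float(np.sqrt(np.mean(np.sum((uv - obs) ** 2, axis=1))))

# toy camera and points
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(200, 3))
Xh = np.c_[X, np.ones(200)]
proj = Xh @ P.T
obs_exact = proj[:, :2] / proj[:, 2:]
obs_noisy = obs_exact + rng.normal(0, 1.0, size=obs_exact.shape)  # 1 px noise

clean = reprojection_rmse(P, X, obs_exact)   # consistent reconstruction
noisy = reprojection_rmse(P, X, obs_noisy)   # degraded measurements
print(clean, noisy)
```

On set, such a score can be reported per camera or per region to point the crew at the part of the capture that needs attention.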

In-focus area detection

At Brno University of Technology, we developed algorithms for in-focus detection and 2D image technical quality assessment. These differ from the autofocus algorithms employed in today's digital cameras, which work by comparing the high-frequency content of images of the same scene under different lens settings. Our novel algorithm requires no reference and can give an absolute focus estimate both for individual pixels of an image and for whole images. This gives our users power through metadata, as the information can easily be indexed by the FLUX system. As in Woody Allen's classic Hollywood Ending, now even a blind man can direct a movie.
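The classic no-reference sharpness measure, variance of the Laplacian, illustrates the principle (our actual algorithm differs): defocus suppresses high-frequency content, so the Laplacian response drops on blurred images, and its squared magnitude can likewise serve as a per-pixel focus map:

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian with edge replication."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4 * p[1:-1, 1:-1])

def sharpness(img):
    """No-reference focus score for a whole image: variance of the Laplacian."""
    return float(laplacian(img).var())

def box_blur(img, n=3):
    """Naive 3x3 box blur applied n times (simulates defocus)."""
    out = img.astype(float)
    for _ in range(n):
        p = np.pad(out, 1, mode="edge")
        out = sum(p[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return out

rng = np.random.default_rng(1)
sharp = rng.uniform(0, 255, size=(64, 64))   # stand-in for an in-focus frame
blurred = box_blur(sharp)
print(sharpness(sharp) > sharpness(blurred))  # blur suppresses high frequencies
```

A per-pixel focus map is simply `laplacian(img) ** 2`, optionally smoothed over a local window before thresholding.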

Semantic Video Analysis

Video content analysis and description

At AUTH, we have developed algorithms that analyze footage from multiple cameras in order to extract semantic information. We have improved the state of the art in human-centered video analysis, with neural network-based and fast (approximate) classification methods. IMPART solutions include algorithms and tools for human activity recognition, face recognition and shot type characterization. The extracted semantic information is stored in the AVDP Light XML format.

Semantic content summarization

At AUTH, we have also developed algorithms which perform temporal video segmentation of each take based on activity information. We have developed software that divides the videos into segments and stores the semantic information in an XML format. It permits the selection and browsing of specific segments of the camera recordings according to actions and actors, enabling the identification of similar scenes, groups of activities and groups of actors, or other summarizations for later use in post-production.

N. Kourous, A. Iosifidis, A. Tefas and I. Pitas, "Video Characterization based on Activity Clustering," in International Conference on Electrical and Computer Engineering (ICECE), Dhaka, Bangladesh, 2014
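The segmentation step can be sketched as run-length grouping of per-frame activity labels, written out as XML. The schema below is purely illustrative, not the actual AVDP Light format:

```python
import xml.etree.ElementTree as ET
from itertools import groupby

def segment_by_activity(labels, fps=25):
    """Collapse per-frame activity labels into (start_s, end_s, activity) runs."""
    segments, frame = [], 0
    for activity, run in groupby(labels):
        n = len(list(run))
        segments.append((frame / fps, (frame + n) / fps, activity))
        frame += n
    return segments

def segments_to_xml(segments, take_id="take01"):
    """Serialize segments under a <take> root (illustrative schema)."""
    root = ET.Element("take", id=take_id)
    for start, end, activity in segments:
        ET.SubElement(root, "segment", start=f"{start:.2f}",
                      end=f"{end:.2f}", activity=activity)
    return ET.tostring(root, encoding="unicode")

# toy per-frame labels: 2 s walking, 1 s running, 1 s walking at 25 fps
labels = ["walk"] * 50 + ["run"] * 25 + ["walk"] * 25
segs = segment_by_activity(labels)
xml_str = segments_to_xml(segs)
print(xml_str)
```

Segments indexed this way can then be filtered by action or actor when browsing the take.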

Interactive 3D Web

Universitat Pompeu Fabra explored visualization and annotation concepts through groundwork on web-based multi-platform tools, through 3D visualization of the structure of the dataset, and through simulation tools for dailies.

Progressive Point Cloud

We focused on the development of a prototype system for remote visualization of data recorded on-set, such as the LIDAR scan of the environment and the camera footage. The goal is for this visualisation to be integrated with the content analysis and camera coverage work, giving a holistic view of all the data recorded on-set. Click the following image or link to view the web demo.
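One common recipe for progressive remote visualization (a sketch of the general idea, not necessarily the prototype's exact method) is to reorder the point cloud so that every prefix of the stream is a uniform subsample: the client renders a coarse preview from the first chunk and densifies it as more data arrives.

```python
import numpy as np

def progressive_order(points, seed=0):
    """Shuffle points so every prefix is an unbiased uniform subsample:
    the first k points already form a coarse but representative preview."""
    rng = np.random.default_rng(seed)
    return points[rng.permutation(len(points))]

def stream_chunks(points, chunk=4096):
    """Yield successive chunks; the cumulative data is a progressively
    denser version of the full cloud."""
    ordered = progressive_order(points)
    for i in range(0, len(ordered), chunk):
        yield ordered[i:i + chunk]

# toy cloud: 10k points streamed in three chunks
cloud = np.random.default_rng(2).uniform(-1, 1, size=(10000, 3))
received = [part for part in stream_chunks(cloud)]
print(len(received), received[0].shape)
```

Octree- or level-of-detail-based orderings achieve the same effect with better spatial uniformity, at the cost of a preprocessing pass.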

Integrated tools

Many solutions were developed during the project. In order to test and prove their usability, they were integrated into Double Negative software.

IMPART tools in Double Negative's Jigsaw package

Jigsaw, a proprietary software package developed by Double Negative, allows users to efficiently manage and process various data, from digital photographs to 3D point clouds. IMPART technology has been fully integrated into Jigsaw and greatly enhances its capabilities.

Point cloud data from various sources (LIDAR, stereo Spheron imagery, photos via photogrammetry) can be registered into a common coordinate frame using tools from the University of Surrey. Surrey has also contributed a robust video stream alignment method that helps with the processing and conforming of witness camera data, an otherwise very time-consuming and manual task.

These algorithms have been significantly accelerated through contributions from BUT. Additionally, Brno University has also supplied a fast algorithm that visualises the estimated quality of point cloud registration, which is very useful during on-set data acquisition and can run on a laptop.

AUTH has contributed semantic video indexing and search technology, which is useful for artists in animation departments who often need to find very specific reference footage (e.g. walk cycles).

Last but not least, UPF has contributed fast and efficient methods to compress, stream and display meshes, point cloud data and registered videos in 3D in a normal web browser. This technology is very useful for communication over limited-bandwidth channels, a regular occurrence during on-set data capture.

Integrated visualization

At UPF, we used web technologies to display point cloud data and registered videos in the web browser. We developed progressive visualizations, mesh compression techniques and other methods for fast visualization. The unified 3D scene can be easily shared via the web. You can check out several scenarios here.
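A typical ingredient of such compression (shown here as a generic sketch, not UPF's actual codec) is quantizing float coordinates to 16-bit integers relative to the scene's bounding box, halving the payload of float32 data with sub-millimetre error on a room-scale scan:

```python
import numpy as np

def quantize16(points):
    """Quantize float xyz to uint16 relative to the bounding box."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)     # avoid division by zero
    q = np.round((points - lo) / scale * 65535).astype(np.uint16)
    return q, lo, scale

def dequantize16(q, lo, scale):
    """Recover approximate float coordinates from the quantized stream."""
    return q.astype(np.float32) / 65535 * scale + lo

# toy 10 m-wide scene
pts = np.random.default_rng(3).uniform(-5, 5, size=(1000, 3)).astype(np.float32)
q, lo, scale = quantize16(pts)
restored = dequantize16(q, lo, scale)
err = np.abs(restored - pts).max()
print(q.dtype, err < 1e-3)   # worst-case error is about range / 65535 / 2
```

The small header (`lo`, `scale`) plus the integer buffer is what actually travels over the wire; dequantization happens on the client before rendering.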

Industrial tools

The data recorded on-set typically measures in the terabytes and, thus, can be stored in multiple physical locations. FilmLight's research within IMPART aimed to create a homogeneous view of all data recorded on-set, regardless of its physical location, across a heterogeneous “cloud” of file systems. FilmLight addressed big data issues through three approaches:

Generate less data, by contextual monitoring with the FLIP on-set processor appliance.

Prune useless data early, by contextual review with the Baselight Dailies software.

Manage data efficiently through post, by metadata-driven file manipulation with the FLUX Manage software in conjunction with the FLUX+ indexing system.

The tools developed by FilmLight have been showcased at several events around the world, including Dimension 3 in Paris, June 2013; IBC in Amsterdam, September 2013 and 2014; NAB in Las Vegas, April 2014 and 2015; and Cine Gear in LA, June 2015.

FLIP

FLIP is a hardware device for on-set preview of live camera output with real-time application of looks. FLIP takes the guesswork out of digital cinematography, and enables the first thoughts of the DoP or the director to become the foundation for the final grade.

Daylight & Flux+

Daylight is a powerful dailies platform for shot management and high-performance transcoding. It is designed as a compact yet powerful grading decision tool to help DoPs and directors establish looks and visualise what they have shot, on set or on location, as well as meeting all of the sophisticated deliverables requirements, in one application.

FLUX comprises a post-production server and a management tool, which, when combined, provide a revolutionary new way to store image assets and build an image factory.

Open Source and Open Data

Since the middle of the first year of the project, IMPART partners Brno University of Technology and Universitat Pompeu Fabra have made software packages available as Open Source from university websites, SourceForge and GitHub.

Research dataset

In October 2014 the IMPART project publicly released a dataset for research into multimodal movie production, created by the University of Surrey and Double Negative. It consists of about 20 TB of 2D and 3D data and metadata captured during sessions at different locations at the University of Surrey and on Double Negative premises, in both indoor and outdoor environments. The dataset can be found here.

SLAM++

To support parallelisation and speed-up in an efficient way, Brno University of Technology developed and enhanced SLAM++, a high-performance nonlinear least-squares solver for graph problems, which outperforms existing implementations on large 3D reconstruction datasets, among other qualities. The software package already has more than 16,000 downloads and can be found here.
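To illustrate what a graph least-squares solver of this kind does (a toy 1D example, far from the real 6-DoF problems and the sparse data structures SLAM++ exploits): poses are variables, measurements are edges, and Gauss-Newton repeatedly solves the normal equations assembled from the graph.

```python
import numpy as np

# Toy graph least squares: 1D poses x0..x3 with relative measurements
# z_ij ~ x_j - x_i, plus a prior anchoring x0 at 0. The problem is
# linear here, so Gauss-Newton converges in a single step.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 3.1)]  # (i, j, z)
n = 4
x = np.zeros(n)

for _ in range(5):
    J = np.zeros((len(edges) + 1, n))
    r = np.zeros(len(edges) + 1)
    for k, (i, j, z) in enumerate(edges):
        r[k] = (x[j] - x[i]) - z        # residual of edge measurement
        J[k, i], J[k, j] = -1.0, 1.0
    r[-1] = x[0] - 0.0                   # prior: x0 = 0
    J[-1, 0] = 1.0
    # normal equations J^T J dx = -J^T r; real solvers exploit their sparsity
    dx = np.linalg.solve(J.T @ J, -J.T @ r)
    x += dx

print(np.round(x, 3))
```

The optimum spreads the inconsistent loop closure (0, 3, 3.1) evenly over the chain; in SLAM++ the same structure appears with pose matrices instead of scalars and incremental updates instead of batch solves.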

WebGLStudio

WebGLStudio is a set of web graphics libraries. The main application is a platform to create interactive 3D scenes directly in the browser. It lets you edit the scene visually, code behaviours and edit shaders, all directly from within the app. The libraries have over 1400 stars on GitHub and can be found here.

Publications

Credits

IMPART stands for Intelligent Management Platform for Advanced Real-Time media processes. It was a European Commission-funded project which started in November 2012 and finished in October 2015.

Its overall aim was to research, develop and evaluate information management solutions for 'big data' problems in the field of digital cinema production. It developed new ways of managing, visualising and analysing very large multimodal data sets so that creative personnel can review three-dimensional scene representations on set, understand the data, identify errors, evaluate the quality of the shot and take creative decisions in real time.

Its partners were:

Universitat Pompeu Fabra UPF

Aristotle University of Thessaloniki AUTH

Brno University of Technology BUT

University of Surrey UniS

Double Negative Visual Effects DNeg

FilmLight

Coordinator: Josep Blat (josep.blat@upf.edu)

IMPART is co-funded by the European Commission under the Seventh Framework Programme