Abstract: Embodiments can provide a strategy for controlling information flow both from known opacity regions to unknown regions and within the unknown region itself. This strategy is formulated through the use and refinement of various affinity definitions. As a result of this strategy, a final linear system can be obtained, which can be solved in closed form. One embodiment pertains to identifying opacity information flows. The opacity information flows may include one or more of: flows from pixels in the image that have colors similar to a target pixel, flows from pixels in the foreground and background to the target pixel, flows from pixels in the unknown opacity region of the image to the target pixel, flows from pixels immediately surrounding the target pixel in the image to the target pixel, and any other flow.
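The flows-to-linear-system idea above can be sketched in a few lines. The following is a minimal illustration, not the patent's exact formulation: known-opacity pixels act as soft constraints, color affinities define flows between pixels, and the resulting linear system yields every pixel's alpha in closed form. All names, the affinity kernel, and the constraint weight `lam` are illustrative assumptions.

```python
import numpy as np

def color_affinity(ci, cj, sigma=0.2):
    # Affinity is large when the two pixels have similar colors.
    return np.exp(-np.sum((ci - cj) ** 2) / (2 * sigma ** 2))

def solve_alpha(colors, known_alpha, lam=100.0):
    """colors: (n, 3) pixel colors; known_alpha: {pixel index: alpha}."""
    n = len(colors)
    W = np.array([[0.0 if i == j else color_affinity(colors[i], colors[j])
                   for j in range(n)] for i in range(n)])
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian of the flows
    C = np.zeros((n, n))
    b = np.zeros(n)
    for i, a in known_alpha.items():          # soft constraints on known pixels
        C[i, i] = lam
        b[i] = lam * a
    return np.linalg.solve(L + C, b)          # single closed-form solve

# Five pixels: dark background, bright foreground, ambiguous middle pixel.
colors = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.5, 0.5, 0.5],
                   [0.9, 1.0, 1.0], [1.0, 1.0, 1.0]])
alpha = solve_alpha(colors, {0: 0.0, 4: 1.0})
```

The ambiguous middle pixel receives an intermediate alpha through flows from both regions, while pixels with colors close to the known background or foreground are pulled toward 0 or 1.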

Abstract: The present disclosure relates to techniques for reconstructing an object in three dimensions that is captured in a set of two-dimensional images. The object is reconstructed in three dimensions by computing depth values for edges of the object in the set of two-dimensional images. The set of two-dimensional images may be samples of a light field surrounding the object. The depth values may be computed by exploiting local gradient information in the set of two-dimensional images. After computing the depth values for the edges, depth values between the edges may be determined by identifying types of the edges (e.g., a texture edge, a silhouette edge, or other type of edge). Then, the depth values from the set of two-dimensional images may be aggregated in a three-dimensional space using a voting scheme, allowing the reconstruction of the object in three dimensions.
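The aggregation step above can be illustrated with a toy voting scheme. This is a hypothetical sketch, not the patent's implementation: each view casts a vote for its depth estimate at an edge point, and the most-voted bin wins, so a single outlier view cannot corrupt the result.

```python
import numpy as np

def aggregate_depth_votes(depth_samples, bins, d_range):
    """Histogram-style voting: each view votes for its depth estimate;
    the bin with the most votes determines the aggregated depth."""
    hist, edges = np.histogram(depth_samples, bins=bins, range=d_range)
    best = np.argmax(hist)
    return 0.5 * (edges[best] + edges[best + 1])   # bin center

# Five views estimate the depth of one edge point; one view is an outlier.
votes = [2.1, 2.0, 2.05, 7.9, 1.95]
depth = aggregate_depth_votes(votes, bins=10, d_range=(0.0, 10.0))
```

In a full reconstruction the voting would happen per voxel in a 3D grid, but the outlier-suppression principle is the same.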

Abstract: Methods, systems, and computer-readable memory are provided for determining time-varying anatomical and physiological tissue characteristics of an animation rig. For example, shape and material properties are defined for a plurality of sample configurations of the animation rig. The shape and material properties are associated with the plurality of sample configurations. An animation of the animation rig is obtained, and one or more configurations of the animation rig are determined for one or more frames of the animation. The determined one or more configurations include shape and material properties, and are determined using one or more sample configurations of the animation rig. A simulation of the animation rig is performed using the determined one or more configurations. Performing the simulation includes computing physical effects for addition to the animation of the animation rig.
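One way to determine a frame's configuration from the sample configurations is to blend the stored properties. The sketch below is an illustrative assumption, not the patented method: shape/material properties defined at sample rig configurations are combined with inverse-distance weights in rig-parameter space.

```python
import numpy as np

def blend_properties(sample_params, sample_props, frame_params, eps=1e-8):
    """Blend properties stored at sample rig configurations to obtain
    properties for an animated frame's rig configuration."""
    d = np.linalg.norm(sample_params - frame_params, axis=1)
    w = 1.0 / (d + eps)            # closer samples contribute more
    w /= w.sum()
    return w @ sample_props

sample_params = np.array([[0.0], [1.0]])    # e.g. a joint angle per sample
sample_props = np.array([[10.0], [20.0]])   # e.g. a stiffness value per sample
props = blend_properties(sample_params, sample_props, np.array([0.5]))
```

A frame halfway between the two samples receives the average property value; a frame at a sample configuration recovers that sample's properties.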

Abstract: Enhanced removing of noise and outliers from one or more point sets generated by image-based 3D reconstruction techniques is provided. In accordance with the disclosure, input images and corresponding depth maps can be used to remove pixels that are geometrically and/or photometrically inconsistent with the colored surface implied by the input images. This allows standard surface reconstruction methods (such as Poisson surface reconstruction) to perform less smoothing and thus achieve higher quality surfaces with more features. In some implementations, the enhanced point-cloud noise removal in accordance with the disclosure can include computing per-view depth maps, and detecting and removing noisy points and outliers from each per-view point cloud by checking if points are consistent with the surface implied by the other input views.
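The consistency check described above can be sketched as follows. Names, the relative tolerance, and the agreement threshold are illustrative assumptions: a point survives only if enough other views report a depth that agrees with it, i.e. it is consistent with the surface implied by the other input views.

```python
import numpy as np

def filter_inconsistent_points(depths, other_view_depths, rel_tol=0.05,
                               min_agreeing_views=1):
    """depths: per-point depths in a reference view; other_view_depths:
    list of per-point depths reported by the other views."""
    keep = []
    for i, d in enumerate(depths):
        agree = sum(abs(d - other[i]) < rel_tol * d
                    for other in other_view_depths)
        keep.append(agree >= min_agreeing_views)
    return np.array(keep)

# Three points in a reference view; the third is a depth outlier.
ref = np.array([1.0, 2.0, 5.0])
others = [np.array([1.01, 2.02, 2.0]),
          np.array([0.99, 1.98, 2.1])]
mask = filter_inconsistent_points(ref, others)
```

The surviving points can then be passed to a standard method such as Poisson surface reconstruction, which needs less smoothing once the outliers are gone.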

Abstract: A method is provided for rendering a representation of and interacting with transmedia content on an electronic device. Transmedia content data is received at the electronic device. The transmedia content data comprises: a plurality of transmedia content data items; linking data which define time-ordered content links between the plurality of transmedia content data items, whereby the plurality of transmedia content data items are arranged into linked transmedia content subsets comprising different groups of the transmedia content data items and different content links therebetween; a visualisation model of the transmedia content data; and a hierarchical structure of the linked transmedia content subsets and clusters of linked transmedia content subsets.

Abstract: There is provided a system for linking transmedia content subsets. A memory stores a plurality of transmedia content data items and associated linking data which define time-ordered content links between the plurality of transmedia content data items. The plurality of transmedia content data items are arranged into linked transmedia content subsets comprising different groups of the transmedia content data items and different content links therebetween. The memory also stores a transmedia content model that represents the transmedia content data items as nodes and the content links between the transmedia content data items as edges in one or more time-varying graphs. A processor is configured to associate the transmedia content data items with the time-ordered content links and store the linking data in the memory. It assigns the transmedia content data items to nodes of a graph structure, assigns the time-ordered content links to edges of the graph structure, and stores them in the transmedia content model.

Abstract: A system and method are provided for non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient). A teeth statistical model defining individual teeth in a teeth row can be developed. The teeth statistical model can jointly describe shape and pose variations per tooth, as well as the placement of the individual teeth in the teeth row. In some embodiments, the teeth statistical model can be trained using teeth information from 3D scan data of different sample subjects. The 3D scan data can be used to establish a database of teeth of various shapes and poses. Geometry information regarding the individual teeth can be extracted from the 3D scan data, and the teeth statistical model can be trained using this geometry information.
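A common way to train such a statistical model from a database of scanned shapes is principal component analysis. The sketch below is a small PCA illustration over toy "tooth shape" vectors, an assumption on our part; the abstract's model also covers per-tooth pose and placement, which are omitted here.

```python
import numpy as np

def train_shape_model(shapes):
    """PCA over vectorized shapes: returns the mean shape and the
    principal directions of variation."""
    mean = shapes.mean(axis=0)
    _, _, basis = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, basis

def project(mean, basis, shape):
    return (shape - mean) @ basis.T    # shape -> model coefficients

def reconstruct(mean, basis, coeffs):
    return mean + coeffs @ basis       # coefficients -> shape

# Four toy "tooth shapes", each vectorized into 3 numbers.
shapes = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
mean, basis = train_shape_model(shapes)
coeffs = project(mean, basis, shapes[0])
recon = reconstruct(mean, basis, coeffs)
```

Fitting the model to photographs then amounts to searching for coefficients whose reconstructed shapes best explain the images.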

Abstract: A method of rendering content items on a display via an electronic device involves mapping linked content items to a three-dimensional object defined by layout data. The layout data is then transmitted to an electronic device for display.

Abstract: Techniques and systems are described for performing video segmentation using fully connected object proposals. For example, a number of object proposals for a video sequence are generated. A pruning step can be performed to retain high quality proposals that have sufficient discriminative power. A classifier can be used to provide a rough classification and subsampling of the data to reduce the size of the proposal space, while preserving a large pool of candidate proposals. A final labeling of the candidate proposals can then be determined, such as a foreground or background designation for each object proposal, by solving for the a posteriori probability of a fully connected conditional random field, over which an energy function can be defined and minimized.
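The final labeling step can be illustrated with a toy fully connected energy. This is a hypothetical sketch with made-up unary costs and pairwise similarities, minimized by brute force rather than the efficient inference a real system would use: a unary term encodes each proposal's classifier score, and a pairwise term makes similar proposals pay a cost for disagreeing.

```python
import itertools
import numpy as np

# Three object proposals, labels 0 = background, 1 = foreground.
unary = np.array([[0.1, 2.0],    # proposal 0 strongly prefers background
                  [1.5, 0.2],    # proposal 1 prefers foreground
                  [1.0, 0.9]])   # proposal 2 is ambiguous on its own
similarity = np.array([[0.0, 0.1, 0.1],
                       [0.1, 0.0, 2.0],   # proposals 1 and 2 are very similar
                       [0.1, 2.0, 0.0]])

def energy(labels):
    e = sum(unary[i, l] for i, l in enumerate(labels))
    # Fully connected pairwise term over every pair of proposals.
    for i in range(3):
        for j in range(i + 1, 3):
            if labels[i] != labels[j]:
                e += similarity[i, j]
    return e

best = min(itertools.product([0, 1], repeat=3), key=energy)
```

The ambiguous proposal 2 ends up labeled foreground because the pairwise term ties it to the confidently-foreground proposal 1.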

Abstract: Systems and techniques for generating a parametric eye model of one or more eyes are provided. The systems and techniques may include obtaining eye data from an eye model database. The eye data includes eyeball data and iris data corresponding to a plurality of eyes. The systems and techniques may further include generating an eyeball model using the eyeball data. Generating the eyeball model includes establishing correspondences among the plurality of eyes. The systems and techniques may further include generating an iris model using the iris data. Generating the iris model includes sampling one or more patches of one or more of the plurality of eyes using an iris control map and merging the one or more patches into a synthesized texture. The systems and techniques may further include generating the parametric eye model that includes the eyeball model and the iris model.

Abstract: Systems and techniques for reconstructing one or more eyes using a parametric eye model are provided. The systems and techniques may include obtaining one or more input images that include at least one eye. The systems and techniques may further include obtaining a parametric eye model including an eyeball model and an iris model. The systems and techniques may further include determining parameters of the parametric eye model from the one or more input images. The parameters can be determined to fit the parametric eye model to the at least one eye in the one or more input images. The parameters include a control map used by the iris model to synthesize an iris of the at least one eye. The systems and techniques may further include reconstructing the at least one eye using the parametric eye model with the determined parameters.

Abstract: Systems and methods for the reconstruction of an articulated object are disclosed herein. The articulated object can be reconstructed from image data collected by a moving camera over a period of time. A plurality of 2D feature points can be identified within the image data. These 2D feature points can be converted into three-dimensional space, the converted points being identified as 3D feature points. These 3D feature points can be used to identify one or several rigidity constraints and/or kinematic constraints. These rigidity and/or kinematic constraints can be applied to a model of the reconstructed articulated object.
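A rigidity constraint of the kind mentioned above can be checked very simply. The sketch below is illustrative, not the patent's method: two 3D feature points on the same rigid part must keep a constant distance across all frames, so a varying distance signals that the points span an articulation.

```python
import numpy as np

def rigidity_violated(traj_a, traj_b, tol=1e-2):
    """traj_a, traj_b: (num_frames, 3) trajectories of 3D feature points.
    Returns True when their distance varies more than tol across frames."""
    dist = np.linalg.norm(traj_a - traj_b, axis=1)
    return np.ptp(dist) > tol      # peak-to-peak variation of the distance

t = np.linspace(0.0, np.pi, 10)
a = np.stack([np.cos(t), np.sin(t), np.zeros(10)], axis=1)
b = 2.0 * a                        # b rotates rigidly together with a
c = np.stack([t, np.zeros(10), np.zeros(10)], axis=1)  # drifts away from a
```

Point pairs that pass the check can be grouped into rigid parts, and the pairs that fail suggest where kinematic (joint) constraints belong.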

Abstract: Techniques are disclosed for creating digital assets that can be used to personalize themed products. For example, a workflow and pipeline used to generate a 3D model from digital images of a person's face and to manufacture a personalized, physical figurine customized with the 3D model are disclosed. The 3D model of the person's face may be simplified to match a topology of a desired figurine. While the topology is deformed to match that of the figurine, the 3D model retains the geometry of the person's face. Simplifying the topology of the 3D model in this manner allows the mesh to be integrated with or attached to a mesh representing the desired figurine.

Abstract: Systems and techniques for reconstructing one or more surfaces of an object including one or more opaque surfaces behind one or more refractive surfaces are provided. The systems and techniques may include obtaining one or more images of the object including an opaque surface located behind a refractive surface and determining one or more refractive surface constraints using the one or more images. The one or more refractive surface constraints constrain one or more characteristics of the refractive surface. The systems and techniques may further include reconstructing an opaque surface representation or a refractive surface representation using the one or more refractive surface constraints, the opaque surface representation representing the opaque surface of the object, and the refractive surface representation representing the refractive surface of the object.
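A refractive surface constraint ultimately rests on Snell's law: a viewing ray bends at the refractive surface before reaching the opaque surface behind it. The sketch below is a standard vector form of that refraction, included only as an illustration of the physics the constraints encode; the indices of refraction and geometry are assumed values.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit ray d at a surface with unit normal n (facing the ray);
    eta = n1 / n2 is the ratio of refractive indices."""
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                           # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

n = np.array([0.0, 0.0, 1.0])                 # surface normal
d0 = np.array([0.0, 0.0, -1.0])               # ray at normal incidence
r0 = refract(d0, n, 1.0 / 1.5)                # air into glass-like medium
d1 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
r1 = refract(d1, n, 1.0 / 1.5)                # oblique ray bends toward n
```

Given observed images, such bent rays constrain where the opaque surface can lie and what shape the refractive surface can have, which is what the reconstruction exploits.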

Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization.
