PRN is a method for jointly regressing dense alignment and 3D face shape in an end-to-end manner. More examples on Multi-PIE and 300VW can be seen on YouTube.

The main features are:

End-to-End: our method directly regresses the 3D facial structure and dense alignment from a single image, bypassing 3DMM fitting.

Multi-task: By regressing a position map, the 3D geometry is obtained together with its semantic meaning. Thus, we can effortlessly complete the tasks of dense alignment, monocular 3D face reconstruction, pose estimation, etc.

Faster than real-time: The method runs at over 100 fps (on a GTX 1080) to regress a position map.
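As a rough illustration of why a single position map serves several tasks at once: every (u, v) pixel stores the 3D coordinates of a fixed face-surface point, so dense vertices and sparse 68-point landmarks are both plain array lookups. The shapes and index arrays below are hypothetical stand-ins for the network output and the fixed UV index files shipped with such a model.

```python
import numpy as np

# Stand-in for a regressed position map: at each (u, v) pixel it stores the
# (x, y, z) coordinates of the corresponding point on the face surface.
pos_map = np.random.rand(256, 256, 3).astype(np.float32)

# Hypothetical fixed UV indices: which pixels belong to the face region,
# and which 68 pixels correspond to the standard landmark set.
face_uv = np.random.randint(0, 256, size=(2, 45000))
kpt_uv = np.random.randint(0, 256, size=(2, 68))

# Dense 3D vertices: one lookup per valid UV location.
vertices = pos_map[face_uv[1], face_uv[0], :]   # shape (45000, 3)

# Sparse dense-alignment landmarks: the same lookup at 68 fixed UV positions.
landmarks = pos_map[kpt_uv[1], kpt_uv[0], :]    # shape (68, 3)
```

Since the UV indices are fixed, the semantic correspondence (which pixel is the nose tip, the eye corner, etc.) comes for free with the geometry.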

Applications

Basics (evaluated in paper)

Face Alignment

3D Face Reconstruction

Get the 3D vertices and corresponding colours from a single image. Save the result as mesh data (.obj), which can be opened with MeshLab or Microsoft 3D Builder. Note that the texture of non-visible areas is distorted due to self-occlusion.

New:

you can choose to output the mesh with its original pose (default) or with a front view (which means all output meshes are aligned)

the .obj file can now also be written with a texture map (of a specified size), and you can set non-visible texture regions to 0.
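For reference, per-vertex colours can be stored in an .obj file using the non-standard but MeshLab-readable `v x y z r g b` extension. This is a minimal sketch of that format, not the repository's actual export code:

```python
import numpy as np

def write_colored_obj(path, vertices, colors, triangles):
    """Write a mesh with per-vertex RGB colours.

    vertices:  (N, 3) float array of 3D positions.
    colors:    (N, 3) float array of RGB values in [0, 1].
    triangles: (M, 3) int array of zero-based vertex indices.
    """
    with open(path, 'w') as f:
        for v, c in zip(vertices, colors):
            # MeshLab extension: colour appended to the vertex line.
            f.write('v %f %f %f %f %f %f\n' % (v[0], v[1], v[2], c[0], c[1], c[2]))
        for t in triangles:
            # .obj face indices are 1-based.
            f.write('f %d %d %d\n' % (t[0] + 1, t[1] + 1, t[2] + 1))
```

The texture-mapped variant mentioned above would instead write `vt` texture coordinates and reference an image via a material file.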

More (to be added)

3D Pose Estimation

Rather than using only 68 key points to compute the camera matrix (which is easily affected by expression and pose), we use all vertices (more than 40K) to compute a more accurate pose.
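One way to see how using all vertices works: the rotation between the regressed vertices and a canonical (frontal, neutral) model of the same topology can be recovered with an orthogonal Procrustes (Kabsch) fit over every point. This is an illustrative sketch under that assumption, not necessarily the exact procedure used in the code:

```python
import numpy as np

def estimate_pose(vertices, canonical):
    """Estimate head rotation by aligning regressed vertices to a canonical
    frontal model with the same topology, using every vertex rather than
    68 key points (orthogonal Procrustes / Kabsch alignment)."""
    P = canonical - canonical.mean(axis=0)   # source point set, centred
    Q = vertices - vertices.mean(axis=0)     # target point set, centred
    H = P.T @ Q                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation with det(R) = +1
    # Euler angles (x-y-z convention), in degrees.
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, yaw, roll
```

Because the fit averages over 40K+ correspondences, a local error at any single vertex (e.g. from an extreme expression) barely moves the estimate, unlike a 68-point fit.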

Depth image

Texture Editing

Data Augmentation/Selfie Editing

modify specific parts of the input face, for example the eyes:
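Because every face's texture lives in the same UV space, editing a semantic region reduces to a masked per-pixel copy between texture maps. The array shapes and the eye region below are hypothetical placeholders:

```python
import numpy as np

# Stand-ins for two unwrapped UV texture maps of the same layout.
tex_src = np.random.rand(256, 256, 3)   # texture providing the new eyes
tex_dst = np.random.rand(256, 256, 3)   # texture being edited

# Hand-picked UV box covering the eye region (hypothetical coordinates);
# since the UV layout is fixed, the same mask works for every face.
eye_mask = np.zeros((256, 256, 1))
eye_mask[110:140, 60:200] = 1.0

# Masked blend: destination everywhere except inside the eye mask.
edited = tex_dst * (1.0 - eye_mask) + tex_src * eye_mask
```

A soft (feathered) mask instead of a binary one would hide the seam when the edited texture is rendered back onto the mesh.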

Face Swapping

Replace the texture with another one, then warp it to the original pose and use Poisson editing to blend the images.
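For the blending step, OpenCV's cv2.seamlessClone is a common ready-made choice. As a self-contained illustration of what Poisson editing does, here is a tiny grayscale blend solved by Gauss-Seidel iteration (the mask must not touch the image border; this is a teaching sketch, not production code):

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=300):
    """Minimal grayscale Poisson blend: inside the mask, solve the discrete
    Poisson equation using the source's gradients as the guidance field,
    with the destination's values fixed on the boundary."""
    out = dst.copy()
    ys, xs = np.where(mask > 0)
    for _ in range(iters):
        for y, x in zip(ys, xs):
            # 4-neighbour Laplacian of the source is the guidance field.
            lap = (4.0 * src[y, x] - src[y - 1, x] - src[y + 1, x]
                   - src[y, x - 1] - src[y, x + 1])
            # Gauss-Seidel update toward the Poisson solution.
            out[y, x] = (out[y - 1, x] + out[y + 1, x]
                         + out[y, x - 1] + out[y, x + 1] + lap) / 4.0
    return out
```

Copying gradients rather than pixel values is what removes the colour mismatch along the swap boundary.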

Getting Started

Prerequisite

Python 2.7 (numpy, skimage, scipy)

TensorFlow >= 1.4

Optional:

dlib (for face detection; not required if you can provide bounding box information)

opencv2 (for showing results)

A GPU is highly recommended. The run time is ~0.01 s with a GPU (GeForce GTX 1080) and ~0.2 s with a CPU (Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz).

Due to the restriction of the training data, the face reconstructed by this demo lacks fine detail. You can train the network with your own detailed data or apply post-processing such as shape-from-shading to add details.

Texture precision.

An option to specify the texture size has been added. When the texture size is larger than the face size in the original image, rendering a new facial image with texture mapping introduces little resampling error.

Changelog

2018/7/19 Added the training part. The resolution of the texture map can now be specified.