Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies

In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to 3D facial models. We use the Tufts Face Database to generate 3D faces with varying poses from a single frontal face image. The system first refines image quality through fusion-based preprocessing operations; the refined outputs have better contrast, reduced noise, and improved exposure of dark regions. In the next phase, the refined images are used to construct 3D facial geometry with an end-to-end Convolutional Neural Network (CNN). The same technique is applied to our local thermal face data, acquired with an uncooled prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, to generate synthetic 3D face data with varying yaw angles; finally, a facial depth map is produced.
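The fusion-based preprocessing step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple exposure-fusion scheme in which several gamma-adjusted copies of the thermal frame are blended with a per-pixel well-exposedness weight, which brightens dark regions while preserving highlights:

```python
import numpy as np

def fusion_enhance(img, gammas=(0.5, 1.0, 2.0)):
    """Fuse gamma-corrected 'exposures' of an 8-bit thermal image.

    Brightened copies (gamma < 1) expose dark regions, the darkened
    copy (gamma > 1) preserves highlights; a Gaussian well-exposedness
    weight blends them per pixel. Illustrative sketch only.
    """
    img = img.astype(np.float64) / 255.0          # normalise to [0, 1]
    exposures = [img ** g for g in gammas]        # synthetic exposure stack
    # Favour pixels near mid-grey (0.5), as in Mertens-style exposure fusion
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2)) for e in exposures]
    w_sum = np.sum(weights, axis=0) + 1e-12       # avoid division by zero
    fused = sum(w * e for w, e in zip(weights, exposures)) / w_sum
    return (fused * 255).clip(0, 255).astype(np.uint8)

# Example: a synthetic 8-bit frame containing a dark region
frame = np.tile(np.linspace(10, 240, 64, dtype=np.uint8), (64, 1))
out = fusion_enhance(frame)
```

In this sketch the darkest pixels are lifted toward mid-grey while the output stays in the valid 8-bit range; a production pipeline would typically add multi-scale (pyramid) blending and denoising on top of this per-pixel fusion.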