The present invention relates to a cross-spectral face recognition training method as well as a cross-spectral face recognition method based on a trained image set.
A cross-spectral face recognition method is disclosed in Zhang et al., "TV-GAN: Generative adversarial network based thermal to visible face recognition", International Conference on Biometrics, pages 174-181, 2018. Other publications are: Chen et al., "Matching thermal to visible face images using a semantic-guided generative adversarial network", IEEE International Conference on Automatic Face & Gesture Recognition, pages 1-8, 2019; Di et al., "Multi-scale thermal to visible face verification via attribute guided synthesis", IEEE Transactions on Biometrics, Behavior, and Identity Science, 3(2):266-280, 2021; Wang et al., "Thermal to visible facial image translation using generative adversarial networks", IEEE Signal Processing Letters, 25(8):1161-1165, 2018; Iranmanesh et al., "Coupled generative adversarial network for heterogeneous face recognition", Image and Vision Computing, 94:103861, 2020; Kezebou et al., "TR-GAN: Thermal to RGB face synthesis with generative adversarial network for cross-modal face recognition", Mobile Multimedia/Image Processing, Security, and Applications, volume 11399, pages 158-168, 2020; and Di et al., "Polarimetric thermal to visible face verification via self-attention guided synthesis", International Conference on Biometrics, pages 1-8, 2019.
Cross-spectral face recognition is more challenging than traditional FR for both human examiners and computer vision algorithms, due to the following three limitations. Firstly, there can be large intra-spectral variation: within the same spectrum, face samples of the same subject may exhibit larger variations in appearance than face samples of different subjects. Secondly, the appearance variation between two face samples of the same subject in different spectral bands can be larger than that between two samples belonging to two different subjects; this is referred to as the modality gap. Finally, the limited availability of training samples of cross-modality face image pairs can significantly impede learning-based schemes, including those based on deep learning models. Thermal sensors have been widely deployed in nighttime and low-light environments for security and surveillance applications, and some of them capture face images beyond the visible spectrum.
One of the main challenges in performing thermal-to-visible face recognition (FR) is preserving the identity across different spectral bands. In particular, there is considerable performance degradation when a direct matching is performed between thermal (THM) face images and visible (VIS) face images. This is mainly due to the change in identity determining features across the thermal and visible domains.
Based on the prior art, it is an object of the invention to provide a cross-spectral face recognition (CFR) method overcoming the cited problems. This is achieved by a CFR training method with the features of claim 1. A cross-spectral recognition method within a visual image database is disclosed in claim 2. For completeness, a cross-spectral recognition method within a thermal image database is disclosed in claim 3.
The present invention is based on the insight that a supervised learning framework addressing Cross-spectral Face Recognition (CFR), i.e., thermal-to-visible face recognition, can be improved if the encoded features are disentangled into style features, which relate solely to the spectral domain of the image, and identity features, which are present in both spectral versions of the image.
The present invention minimizes the spectral difference by synthesizing realistic visible faces from their thermal counterparts. In particular, it translates facial images from one spectrum to another while explicitly preserving the identity; in other words, it disentangles the identity from other confounding factors, and as a result the true appearance of the face is preserved during the spectral translation. In this context, an input image is explicitly decomposed into an identity code that is spectral-invariant and a style code that is spectral-dependent.
To enable thermal-to-visible translation and vice versa, the method according to the invention incorporates three networks per spectrum: (i) an identity encoder, (ii) a style encoder and (iii) a decoder. To translate an image from a source spectrum to a target spectrum, the identity code is combined with a style code denoting the target domain. By using such disentanglement, the identity is preserved during the spectral translation, and the identity preservation can be analyzed by interpreting and visualizing the identity code.
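A minimal sketch of this composition is given below, assuming a PyTorch implementation; the module and function names (SpectrumNetworks, translate) are hypothetical and serve only to illustrate the code-swapping translation, not the claimed embodiment.

import torch.nn as nn

class SpectrumNetworks(nn.Module):
    # One spectrum (visible or thermal): (i) identity encoder, (ii) style encoder, (iii) decoder.
    def __init__(self, identity_encoder: nn.Module, style_encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.identity_encoder = identity_encoder   # x_m -> id_m (spectral-invariant)
        self.style_encoder = style_encoder         # x_m -> s_m (spectral-dependent)
        self.decoder = decoder                     # (id, s) -> image in spectrum m

def translate(x_source, source_nets: SpectrumNetworks, target_nets: SpectrumNetworks, target_style):
    # Keep the identity code of the source image and decode it with a target-domain style code.
    id_code = source_nets.identity_encoder(x_source)
    return target_nets.decoder(id_code, target_style)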
As mentioned above, the method proposes a supervised learning framework for CFR that translates facial images from one spectrum to another while explicitly preserving the identity. This is done by introducing a latent space with identity and style codes. X. Huang et al. published a method for latent space decomposition in "Multimodal unsupervised image-to-image translation", European Conference on Computer Vision, 2018 (MUNIT), in a context not addressing the issues raised in connection with the problem of CFR.
Face recognition beyond the visible spectrum allows for increased robustness in the presence of different poses, illumination variations, noise, as well as occlusions. Further benefits include incorporating the absolute size of objects, as well as robustness to presentation attacks such as makeup and masks. Therefore, comparing RGB face images against those acquired beyond the visible spectrum is of particular pertinence in designing Face Recognition (FR) systems for defense, surveillance, and public safety and is referred to as Cross-spectral Face Recognition (CFR).
Four loss functions have been introduced in order to enhance both the image and the latent reconstructions.
The latent space has been analyzed and decomposed into a shared identity space and a spectrum-dependent style space, by visualizing the encoding using heatmaps.
The method has been evaluated on two benchmark multispectral face datasets and achieves improved results with respect to visual quality as well as face recognition matching scores.
Thus, one object of the invention is a cross-spectral face recognition training method using a visible light face image set comprising a number of visual images and an infrared face image set comprising a number of thermal images, both sets related to the identical group of persons, wherein each thermal image has a corresponding visual image of an identical person, characterized in that it includes:
Further embodiments of the invention are laid down in the dependent claims.
Preferred embodiments of the invention are described in the following with reference to the drawings, which are for the purpose of illustrating the present preferred embodiments of the invention and not for the purpose of limiting the same.
The following description, in connection with the drawings, explains the training method and the recognition method in detail.
The thermal image 10T is coded in two different ways. A style encoder 200T provides a style code 420T of the thermal image 10T, also denoted s_thm. An identity encoder 100T provides an identity code 410T of the thermal image 10T, also denoted id_thm.
The visual image 10V is also coded in two different ways, based on the same principles. A style encoder 200V provides a style code 420V of the visual image 10V, also denoted s_vis. An identity encoder 100V provides an identity code 410V of the visual image 10V, also denoted id_vis.
The two face images of the same person share a common part of identity features in the respective identity codes 410T and 410V, as indicated in the drawing.
On the left side, the handling of the visual image 10V of such a pair of visual/thermal training images is shown. The visual image 10V, also denoted x_vis, is style encoded in the style encoder 200V and identity encoded in the identity encoder 100V (also shown as Ev), generating the visual style code 420V and the visual identity code 410V, respectively, which build the visual latent space 400V. These codes are then decoded in the visual decoder 300V (also shown as Gv), generating the recreated visual image 10VR, also shown as x_vis^rec. The learning instance is shown by the arrow connection between the two images 10V and 10VR, provided as a loss function 20IR, comprising a loss function part L_rec and a loss function part L_I; these loss function parts relate to the identity and to the recreation of the visual image.
On the right side, the thermal image 10T is handled in the same manner: it is style encoded in the style encoder 200T and identity encoded in the identity encoder 100T, generating the thermal style code 420T and the thermal identity code 410T, which build the thermal latent space. These codes are then decoded in the thermal decoder, generating a recreated thermal image that is connected to the original thermal image 10T via a corresponding loss function.
Besides these learning steps, which are conducted separately on the thermal and visual image sets with possible interaction through the loss functions, the main part of the supervised learning method concerns the entanglement of thermal and visual images and their reconstruction, which is shown in the two middle parts of the drawing.
On the left side of the middle of the drawing, the visual identity code 410V is combined with a thermal style code and decoded to generate a simulated thermal image 10TF, also denoted x_thm^fake. This simulated thermal image 10TF is fed, together with a target thermal image 10TT (x_thm^target), to a thermal discriminator 50T, also denoted Dis_t, which recognizes the simulated thermal image 10TF as real or fake, i.e. makes a binary decision. The target thermal image 10TT can be the original thermal image 10T. The learning process is improved through the target thermal image 10TT being connected with the simulated thermal image 10TF via the loss function 20P, also denoted L_P.
On the right side of the middle of the drawing, the thermal identity code 410T is combined with a visual style code and decoded to generate a simulated visual image 10VF, also denoted x_vis^fake. This simulated visual image 10VF is fed, together with a target visual image 10VT (x_vis^target), to a visual discriminator 50V, also denoted Dis_v, which recognizes the simulated visual image 10VF as real or fake, i.e. makes a binary decision. The target visual image 10VT can be the original visual image 10V. The learning process is improved through the target visual image 10VT being connected with the simulated visual image 10VF via the loss function 20P, also denoted L_P.
The input is denoted x_input and is handled separately by the identity encoder 100 and the style encoder 200. The identity encoder 100 applies a downsampler 110 followed by a residual block unit 120, generating the identity code 410 as part of the latent space 400. On the other hand, the input data is downsampled in the downsampler 210 of the style encoder 200 and subsequently used as input to the global average pooling layer 220, followed by a last fully connected layer (FC) 230, generating the style code 420 as part of the latent space 400.
On the decoding side, the decoder 300 feeds the style code 420 into an MLP 340, which is followed by an AdaIN parameter storage 330. This result, together with the identity code 410, is fed to the residual block unit 320, which, after upsampling 310, generates the simulated or synthetic image, also referred to as the fake image.
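A simplified PyTorch sketch of this encoder/decoder structure is given below; the layer counts, channel sizes and the single AdaIN modulation step are assumptions made for illustration and do not reproduce the exact configuration of the embodiment.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class IdentityEncoder(nn.Module):  # 100: downsampler 110 followed by residual blocks 120
    def __init__(self, ch=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, 4 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.res = nn.Sequential(*[ResBlock(4 * ch) for _ in range(4)])
    def forward(self, x):
        return self.res(self.down(x))  # identity code 410: a spatial feature map

class StyleEncoder(nn.Module):  # 200: downsampler 210, global average pooling 220, FC 230
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, ch, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, 4 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(4 * ch, style_dim)
    def forward(self, x):
        return self.fc(self.gap(self.down(x)).flatten(1))  # style code 420: a small vector

class Decoder(nn.Module):  # 300: MLP 340 -> AdaIN parameters 330, residual blocks 320, upsampling 310
    def __init__(self, ch=256, style_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(style_dim, ch), nn.ReLU(inplace=True),
                                 nn.Linear(ch, 2 * ch))  # per-channel AdaIN scale and bias
        self.res = nn.Sequential(*[ResBlock(ch) for _ in range(4)])
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, ch // 2, 5, padding=2), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(ch // 2, ch // 4, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, 3, 7, padding=3), nn.Tanh())
    def forward(self, id_code, style_code):
        scale, bias = self.mlp(style_code).chunk(2, dim=1)
        h = F.instance_norm(id_code)                       # normalize, then re-style (simplified AdaIN)
        h = h * (1 + scale[..., None, None]) + bias[..., None, None]
        return self.up(self.res(h))                        # simulated ("fake") image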
The reference numerals are associated with mathematical notation. The following part of the specification develops this notation.
Let X_vis and X_thm be the visible and thermal domains. Let x_vis ∈ X_vis and x_thm ∈ X_thm be drawn from the marginal distributions x_vis ~ p(x_vis) and x_thm ~ p(x_thm), respectively. Thermal-to-visible face recognition based on GAN (generative adversarial network) synthesis aims to estimate the conditional distribution p(x_vis | x_thm), which involves the joint distribution p(x_vis, x_thm). As the joint distribution is not known, we adopt the assumption of a "partially shared latent space" from the above mentioned MUNIT publication as follows.
A pair (x_vis, x_thm) ~ p(x_vis, x_thm) of images, corresponding to the same face and drawn from the joint distribution, can be generated through the support of the partially shared latent space. Hence, the joint distribution is approximated via the latent space in the following two phases.
Firstly, the identity latent code and style latent code are extracted from the input images x_vis and x_thm. Then, given the embedding of Equation (2), the face is reconstructed via the generator in order to learn the latent space for the specific face. Here, m ∈ {vis, thm} represents the domain, E_m denotes the factorized identity-code and style-code auto-encoder, G_m is the underlying decoder, and x_vis^rec and x_thm^rec are the corresponding reconstructed images.
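In this notation, the two phases can be summarized as follows (a compact restatement consistent with the description; the numbered equations of the original are not reproduced verbatim):

(id_m, s_m) = E_m(x_m),        m ∈ {vis, thm},
x_m^rec = G_m(id_m, s_m).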
The objective of the present method is to learn the global image reconstruction mapping for a fixed m ∈ {vis, thm}, i.e., the reconstruction of x_m from its own encoding E_m(x_m) through the decoder G_m, while preserving facial identity features and allowing for a non-identity shift through latent reconstruction between (id_m, s_m,noise) and (id_m^rec, s_m^rec), where (id_m^rec, s_m^rec) is obtained by the extraction E_m(G_m(id_m, s_m,noise)).
In the domain translation phase, image-to-image translation is performed by swapping the encoder modality (i.e., spectrum) with the opposite modality of the input image and imposing an explicit supervision on the style domain transfer functions, which map x_thm to (id_thm, s_vis,noise) and x_vis to (id_vis, s_thm,noise), and then using the decoded outputs of (id_thm, s_vis,noise) and (id_vis, s_thm,noise) to produce the final output image in the target spectrum. This is formalized as follows.
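One way to write this formalization, consistent with the references to Equations (7) and (8) below (the exact form in the original may differ), is:

x_vis^fake = Θ_t→v(x_thm) = G_vis(id_thm, s_vis,noise),        (7)
x_thm^fake = Θ_v→t(x_vis) = G_thm(id_vis, s_thm,noise).        (8)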
Consequently, Θ_t→v and Θ_v→t are the functions that synthesize the corresponding visible (t→v) and thermal (v→t) faces. Finally, the present method learns the spectral conditional distributions p(x_vis^fake | x_thm) and p(x_thm^fake | x_vis) through a guided latent generation, where both these conditional distributions overcome the fact that we do not have access to the joint distribution p(x_vis, x_thm). Indeed, the method is able to generate, as an alternative, the joint distributions p(x_vis^rec, x_thm^fake) and p(x_vis^fake, x_thm^rec), respectively. The translation is learned using neural networks, as applied and shown in the drawings.
The following paragraphs relate to the loss functions as used in the framework of the invention.
The present method is trained with the help of objective functions that include an adversarial and a bi-directional reconstruction loss, as well as conditional, perceptual, identity, and semantic losses. Bi-directional refers to the reconstruction learning process between image→latent→image and latent→image→latent performed by the sub-networks depicted in the drawing.
1) Adversarial Loss: Images generated during the translation phase through Equations (7) and (8) must be realistic and not distinguishable from real images in the target domain. Therefore, the objective of the generators Θ is to maximize the probability of the discriminator Dis making incorrect decisions. The objective of the discriminator Dis, on the other hand, is to maximize the probability of making a correct decision, i.e., to effectively distinguish between real and fake (synthesized) images.
The adversarial loss is denoted as follows.
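A standard adversarial formulation consistent with this description (the exact Equation (9) of the original may differ in its details) covers both translation directions:

L_GAN = E[log Dis_v(x_vis)] + E[log(1 - Dis_v(Θ_t→v(x_thm)))]
      + E[log Dis_t(x_thm)] + E[log(1 - Dis_t(Θ_v→t(x_vis)))].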
2) Bi-directional Reconstruction Loss: Loss functions in the encoder-decoder network encourage the domain reconstruction with regard to both the image reconstruction and the latent space (identity + style) reconstruction.
The bi-directional reconstruction loss function is computed as follows:
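A formulation consistent with the image and latent reconstruction terms referenced later as Equations (10) and (11) (the exact original expressions may differ) is:

L_rec^image    = E[ || G_m(E_m(x_m)) - x_m ||_1 ],
L_rec^identity = E[ || id_m^rec - id_m ||_1 ],
L_rec^style    = E[ || s_m^rec - s_m,noise ||_1 ],

with the bi-directional reconstruction loss L_rec summing these terms over both domains m ∈ {vis, thm}.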
3) Conditional Loss: Imposing a condition on the spectral distribution provides an improvement and is a major difference from the baseline model. Indeed, this allows for a translation that is conditioned on the distribution of the target style code and, further, adds an explicit supervision on the final mappings Θ_t→v and Θ_v→t. The conditional loss L_cond is defined as follows.
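Because paired training images are available in the supervised setting, one plausible form of this supervision (an assumption for illustration; the exact Equation (14) of the original may be defined differently) penalizes the distance between the translated image and the paired target image in the other spectrum:

L_cond = E[ || Θ_t→v(x_thm) - x_vis^target ||_1 ] + E[ || Θ_v→t(x_vis) - x_thm^target ||_1 ].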
To improve the quality of the synthesized images and render them more realistic, three additional objective functions can be incorporated.
4) Perceptual Loss: The perceptual loss L_P affects the perceptive rendering of the image by measuring the high-level semantic difference between synthesized and target face images. It reduces artefacts and enables the reproduction of realistic details.
L_P is defined as follows:
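A typical perceptual-loss form consistent with this description (the choice of norm is an assumption) is:

L_P = E[ || ϕ_P(x^fake) - ϕ_P(x^target) ||_1 ],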
where ϕ_P represents features extracted by VGG-19, pretrained on ImageNet.
5) Identity Loss: The identity loss L_I is responsible for preserving identity-specific features during the image reconstruction phase and, therefore, encourages the translated image to preserve the identity content of the input image.
L_I is defined as follows:
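A form consistent with this description (again, the choice of norm is an assumption) is:

L_I = E[ || ϕ_I(x^fake) - ϕ_I(x^target) ||_1 ],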
where ϕ_I denotes the features extracted from the VGG-19 network pre-trained on the large-scale VGGFace2 dataset.
6) Semantic Loss: The semantic loss L_S guides the texture synthesis from the thermal to the visible domain and imparts attention to specific facial details. A parsing network (see https://github.com/zllrunning/face-parsing.PyTorch) is used to detect semantic labels and to classify them into 19 different classes, which correspond to the segmentation masks of facial attributes provided by CelebAMask-HQ, as introduced by C.-H. Lee, Z. Liu, L. Wu, and P. Luo in "MaskGAN: Towards diverse and interactive facial image manipulation", IEEE Conference on Computer Vision and Pattern Recognition, 2020. Face parsing is applied to the images in the datasets.
L_S is defined as follows.
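One plausible form consistent with this description (the exact Equation (17) may use a different distance between parsing outputs) is:

L_S = E[ || ϕ_S(x^fake) - ϕ_S(x^target) ||_1 ],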
where ϕ_S is the parsing network, providing the corresponding parsing class labels.
Total loss: The overall loss function for the present method is denoted as follows:
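Consistent with the loss weights listed in the implementation details below, the overall loss can be written as the weighted sum (the exact Equation (18) is not reproduced verbatim):

L_total = λ_GAN L_GAN + λ_rec L_rec + λ_cond L_cond + λ_P L_P + λ_I L_I + λ_S L_S.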
An embodiment of the invention is based on the following implementation details. The framework of the method is implemented in PyTorch by adapting the mentioned MUNIT package (see: https://github.com/nvlabs/MUNIT) and designing the architecture for the modality-translation task. It is noted that the implementation omits their proposed domain-invariant perceptual loss as well as the style-augmented cycle consistency. The model is trained until convergence. The initial learning rate for Adam optimization is 0.0001 with β1=0.5 and β2=0.999. For all experiments of the exemplified embodiment, the batch size is set to 1 and, based on empirical analysis, the loss weights are set to λ_GAN=1, λ_rec=10, λ_cond=35, λ_P=15, λ_I=20 and λ_S=10.
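A short PyTorch sketch of this training configuration is given below; the helper name make_optimizer and the dictionary layout are assumptions for illustration only.

import torch
import torch.nn as nn

def make_optimizer(generator: nn.Module) -> torch.optim.Adam:
    # Adam with the learning rate and betas stated above
    return torch.optim.Adam(generator.parameters(), lr=0.0001, betas=(0.5, 0.999))

# Loss weights of the exemplified embodiment (batch size 1)
LOSS_WEIGHTS = {"gan": 1.0, "rec": 10.0, "cond": 35.0, "perceptual": 15.0, "identity": 20.0, "semantic": 10.0}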
The experimental results are as follows:
1) ARL-MMFD dataset: The ARL MultiModal Face dataset, as published by S. Hu, N. J. Short, B. S. Riggan, C. Gordon, K. P. Gurton, M. Thielke, P. Gurram, and A. L. Chan in "A polarimetric thermal database for face recognition research", IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016 (in short, ARL-MMFD), contains visible, LWIR, and polarimetric face images of over 60 subjects and includes variations in both expression and standoff distance. The experiments use only visible and LWIR (i.e., thermal) images at one particular stand-off distance of 2.5 m. The first 30 subjects are used for testing and evaluation, and the remaining 30 subjects are used for training. The images in this dataset are already aligned and cropped.
2) ARL-VTF dataset: The ARL Visible-Thermal Face dataset, as published by D. Poster, M. Thielke, R. Nguyen, S. Rajaraman, X. Di, C. N. Fondje, V. M. Patel, N. J. Short, B. S. Riggan, N. M. Nasrabadi, and S. Hu in "A large-scale, time-synchronized visible and thermal face dataset", IEEE Winter Conference on Applications of Computer Vision, pages 1559-1568, 2021 (ARL-VTF), represents the largest collection of paired visible and thermal face images acquired in a time-synchronized manner. It contains data from 395 subjects with over 500,000 images captured with variations in expression, pose, and eyewear. The established evaluation protocol is followed, which assigns 295 subjects for training and 100 subjects for testing and evaluation. The baseline gallery and the probes of subjects without glasses, named G_VB0- and P_TB0-, respectively, were used. Furthermore, the images were aligned and processed based on the provided eye, nose and mouth landmarks.
1) Face recognition Matcher: The ArcFace matcher, published by J. Deng, J. Guo, N. Xue, and S. Zafeiriou in "ArcFace: Additive angular margin loss for deep face recognition", IEEE Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019, was trained on normalized face images of size 112×112 from the MS-Celeb-1M dataset, published by Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao in "MS-Celeb-1M: A dataset and benchmark for large-scale face recognition", European Conference on Computer Vision, 2016, with additive angular margin loss. ResNet-50 was used as the embedding network and the final embedded feature size was set to 512. l2 normalization was applied to the extracted feature vectors (embeddings) prior to the computation of cosine distance (match) scores.
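The matching step can be sketched as follows (an illustrative helper, not the ArcFace implementation itself): embeddings are l2-normalized, after which the cosine similarity reduces to a dot product.

import numpy as np

def match_score(probe_embedding: np.ndarray, gallery_embedding: np.ndarray) -> float:
    p = probe_embedding / np.linalg.norm(probe_embedding)
    g = gallery_embedding / np.linalg.norm(gallery_embedding)
    return float(np.dot(p, g))  # cosine similarity; higher means a better match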
Face verification experiments are conducted on the above mentioned ARL-MMFD and ARL-VTF datasets. Area Under the Curve (AUC) and Equal Error Rate (EER) metrics are computed using the ArcFace matcher. Table I reports face verification results on the ARL-MMFD and ARL-VTF datasets. A higher AUC indicates better performance, whereas a lower EER is better. It is observed that translating thermal face images into visible-like face images significantly boosts the verification performance. For example, the direct comparison approach, in which a thermal probe is directly compared to the visible gallery, yields the lowest AUC and highest EER scores, viz., 73.71% AUC and 32.73% EER on ARL-MMFD, and 54.80% AUC and 46.36% EER on ARL-VTF. When the baseline approach L_base is applied, incorporating the adversarial L_GAN (Equation (9)), bi-directional reconstruction L_rec (Equation (13)) and conditional L_cond (Equation (14)) loss functions, the performance improves to 79.33% AUC and 29.16% EER on ARL-MMFD, and 92.21% AUC and 15.88% EER on ARL-VTF. This confirms that image-to-image translation significantly reduces the modality gap. The proposed method including L_P + L_I + L_S (Equation (18)), built on the basis of L_base and improved by adding the perceptual L_P (Equation (15)), identity L_I (Equation (16)) and semantic L_S (Equation (17)) loss functions, exhibits the best performance of 93.99% AUC and 13.02% EER on ARL-MMFD, and 94.26% AUC and 12.99% EER on ARL-VTF.
In optimizing LG-GAN on the large-scale dataset ARL-VTF, the hyper-parameters (weights) are tuned, thereby enabling the objective functions to be combined in an effective manner. λ_rec=20 is set, placing emphasis on both the image reconstruction L_rec^image (Equation (10)) and the identity code reconstruction L_rec^identity (Equation (11)). On the other hand, λ_I=10 is adjusted towards improving the face verification accuracy to 96.96% AUC and 5.94% EER.
[Table I loss configurations: L_base; L_base + L_P; L_base + L_I; L_base + L_P + L_I; L_base + L_P + L_I + L_S = LG-GAN]
To illustrate the impact of loss functions included in the present method on visual quality, an ablation study is conducted using both ARL-MMFD and ARL-VTF datasets. The quality of generated images is evaluated by the structural similarity index measure (SSIM) introduced by Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli in “Image quality assessment: from error visibility to structural similarity” in IEEE Transactions on Image Processing, pages 600-612, 2004, where an SSIM score of 1 is the extreme case of comparing identical images. Table I reports average SSIM scores computed on both datasets under different experimental configurations.
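The SSIM evaluation can be reproduced, for instance, with scikit-image (a tooling assumption; any SSIM implementation following the cited definition is equivalent):

import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(generated: np.ndarray, target: np.ndarray) -> float:
    # Both inputs are HxWx3 uint8 images; a score of 1 corresponds to identical images.
    return float(structural_similarity(generated, target, channel_axis=-1))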
The first intuitive observation has to do with the low performance of direct matching between thermal and visible face images, which can be explained in terms of the low SSIM of 0.2899 and 0.3739, respectively, on the ARL-MMFD and ARL-VTF datasets. This illustrates once more the modality gap. In an attempt to overcome this modality gap, the first baseline experiment L_base, without visual quality optimization, boosts the SSIM score to 0.4409 and 0.6049, respectively. However, it is noted that the generated results are rather blurry. By adding additional loss functions to L_base, namely the perceptual loss L_P (Equation (15)) and the identity loss L_I (Equation (16)), the SSIM score only marginally increases. Jointly, L_P + L_I improves the SSIM score further. Finally, the method including all loss functions, referred to as LG-GAN, is able to generate more realistic images with fewer artefacts and with higher similarity to the visible ground-truth images. The resulting SSIM scores are 0.4652 and 0.6145, respectively. When LG-GAN is optimized on the large-scale ARL-VTF dataset, an SSIM score of 0.6787 is observed.
Besides the impact of individual and combined loss functions on the visual quality of images, their related impact on the face verification performance is also reported above.
Understanding the latent code is critical for the present method, as it aims to elicit identity-specific information while ignoring spectrum-induced information. A disentangled latent space is produced using an identity encoder and a style encoder that decompose the input image into an identity code and a style code. As discussed earlier, the style code represents the spectral information and drives the domain translation, without adversely affecting the identity information. However, it is explored here whether identity is explicitly encoded in the latent space. Towards this goal, the identity code id_m is directly visualized after the encoding step E_m(x_m). This is realized for the visual image 10V as well as for the thermal image 10T. Then, by up-scaling the code to the target image size, the pertinent pixels responsible for the identity information in the latent space are determined, once for the visual identity code 410V′ and once for the thermal identity code 410T′. This is visualized in the drawing.
Moreover, the identity codes id_vis and id_thm extracted from both spectra also highlight the same visual information.
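A sketch of this visualization step is given below (the function name and the channel-averaging choice are assumptions for illustration): the spatial identity code is reduced over its channels, up-scaled to the image size and normalized so that it can be overlaid as a heatmap.

import torch
import torch.nn.functional as F

def identity_heatmap(id_code: torch.Tensor, image_size: int) -> torch.Tensor:
    # id_code: (1, C, h, w) feature map produced by the identity encoder
    heat = id_code.abs().mean(dim=1, keepdim=True)                 # channel-wise magnitude
    heat = F.interpolate(heat, size=(image_size, image_size),
                         mode="bilinear", align_corners=False)     # up-scale to the image size
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # normalize to [0, 1]
    return heat[0, 0]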
The method comprises a latent-guided generative adversarial network (LG-GAN) that explicitly decomposes an input image into an identity code and a style code. The identity code is learned to encode spectral-invariant identity features between thermal and visible image domains in a supervised setting. In addition, the identity code offers useful insights in explaining salient facial structures that are essential to the synthesis of high-fidelity visible spectrum face images. Experiments on two datasets suggest that the present method achieves competitive thermal-to-visible cross-spectral face recognition accuracy, while enabling explanations on salient features used for thermal-to-visible image translation.
Priority application: EP 21306772.1, filed December 2021 (regional).
International application: PCT/EP2022/085482, filed December 13, 2022 (WO).