Implementation of Generative Adversarial Network to Generate Fake Face Image

Authors

  • Jasman Pardede, Department of Informatics, Faculty of Industry, Institut Teknologi Nasional Bandung, Indonesia
  • Anisa Putri Setyaningrum, Department of Informatics, Faculty of Industry, Institut Teknologi Nasional Bandung, Indonesia

DOI:

https://doi.org/10.15575/join.v8i1.790

Keywords:

Fake Image, LSGAN, Original Image, Supervised Contrastive Loss

Abstract

In recent years, many crimes have used technology to generate images of someone's face, with harmful consequences for that person. A generative adversarial network (GAN) is a method for generating fake images using a discriminator and a generator. A conventional GAN trains the discriminator with binary cross-entropy loss to distinguish original images from the dataset from fake images produced by the generator. However, binary cross-entropy loss cannot provide sufficient gradient information to the generator for creating convincing fake images: when the generator produces a fake image, the discriminator returns only weak feedback (gradient information) for updating the generator's model, so the generator takes a long time to improve. To address this problem, the Least Squares GAN (LSGAN) replaces this objective with a least squares loss, so the discriminator provides a strong gradient signal to the generator even when a generated image lies far from the decision boundary. In generating fake face images, the researchers used LSGAN and obtained a discriminator-1 loss of 0.0061, a discriminator-2 loss of 0.0036, and a generator loss of 0.575. With these small loss values for the three key components, the discriminator's classification accuracy reaches 95% for original images and 99% for fake images. Original and fake images in this study were then classified using a supervised contrastive loss classification model with an accuracy of 99.93%.
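For reference, a minimal sketch of the LSGAN objectives underlying this approach (the standard formulation of Mao et al. with the common 0-1 target coding; the exact coding used in this study is not stated on this page):

\min_D V_{\text{LSGAN}}(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[(D(x)-1)^2\right] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z))-0)^2\right]

\min_G V_{\text{LSGAN}}(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z))-1)^2\right]

Because the penalty grows quadratically with the distance of D(G(z)) from its target, the generator receives a usable gradient even for samples far from the decision boundary, which is exactly the behaviour the saturating binary cross-entropy loss fails to provide.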

References

S. Mahdizadehaghdam, A. Panahi, and H. Krim, “Sparse generative adversarial network,” Proc. - 2019 Int. Conf. Comput. Vis. Work. ICCVW 2019, pp. 3063–3071, 2019, doi: 10.1109/ICCVW.2019.00369.

Y. Chen, Y. K. Lai, and Y. J. Liu, “CartoonGAN: Generative Adversarial Networks for Photo Cartoonization,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 9465–9474, 2018, doi: 10.1109/CVPR.2018.00986.

H. Zhang et al., “StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 8, pp. 1947–1962, 2019, doi: 10.1109/TPAMI.2018.2856256.

Y. Choi, M. Choi, M. Kim, J. W. Ha, S. Kim, and J. Choo, “StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 8789–8797, 2018, doi: 10.1109/CVPR.2018.00916.

C. C. Hsu, C. Y. Lee, and Y. X. Zhuang, “Learning to detect fake face images in the wild,” Proc. - 2018 Int. Symp. Comput. Consum. Control. IS3C 2018, pp. 388–391, 2019, doi: 10.1109/IS3C.2018.00104.

L. Nataraj et al., “Detecting GAN generated fake images using Co-occurrence matrices,” arXiv, 2019.

C. C. Hsu, Y. X. Zhuang, and C. Y. Lee, “Deep fake image detection based on pairwise learning,” Appl. Sci., vol. 10, no. 1, pp. 1–10, 2020, doi: 10.3390/app10010370.

Y. Zhang, P. Lv, and X. Lu, “A Deep Learning Approach for Face Detection and Location on Highway,” IOP Conf. Ser. Mater. Sci. Eng., vol. 435, no. 1, 2018, doi: 10.1088/1757-899X/435/1/012004.

X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, “Least Squares Generative Adversarial Networks,” Proc. IEEE Int. Conf. Comput. Vis., vol. 2017-October, pp. 2813–2821, 2017, doi: 10.1109/ICCV.2017.304.

H. Thanh-Tung, T. Tran, and S. Venkatesh, “Improving generalization and stability of generative adversarial networks,” arXiv, pp. 1–18, 2019.

M. Elgendy, Human-in-the-Loop Machine Learning, MEAP ed. Manning Publications, 2019.

P. Khosla et al., “Supervised Contrastive Learning,” in Advances in Neural Information Processing Systems (NeurIPS), pp. 1–23, 2020. [Online]. Available: http://arxiv.org/abs/2004.11362.

Published

2023-06-28

Issue

Vol. 8 No. 1 (2023)

Section

Article
