A Comparison of YOLOv8 Series Performance in Student Facial Expressions Detection on Online Learning

Authors

  • Dewi Tresnawati Department of Computer Science, Institut Teknologi Garut, Garut, Indonesia
  • Shopi Nurhidayanti Department of Computer Science, Institut Teknologi Garut, Garut, Indonesia
  • Nina Lestari Department of Electrical Engineering, Universitas Sangga Buana, Bandung, Indonesia

DOI:

https://doi.org/10.15575/join.v10i1.1390

Keywords:

Emotion Recognition, Facial Expression Detection, Online Learning, Student Engagement, YOLOv8

Abstract

Student engagement in online learning is an important factor affecting learning outcomes, and facial expression is one indicator of engagement. However, research on facial expression detection in online learning environments remains limited, especially with the YOLOv8 algorithm. This study compares the performance of five YOLOv8 variants, namely YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, in recognizing six facial expressions: happy, sad, angry, surprised, afraid, and neutral. Student facial expression data were collected through the Moodle platform every 15 seconds during the learning process, and all models were trained on 640×640-pixel images for 100 epochs. The main contribution of this study is a comprehensive analysis of the effectiveness of YOLOv8 in detecting student facial expressions, which can be used to improve the online learning experience. The evaluation shows that YOLOv8s performed best, with the highest mAP of 0.840 and the fastest inference speed of 2.4 ms per image. YOLOv8m and YOLOv8x also performed well, with mAP of 0.816 and 0.815, respectively. Although YOLOv8x had the slowest inference speed, it was superior at detecting the fear, happiness, and sadness expressions, with mAP above 0.9 for those classes. YOLOv8n achieved an mAP of 0.636, while YOLOv8l achieved an mAP of 0.813 with an inference speed of 9.1 ms per image. These results indicate that the YOLOv8 algorithm, especially YOLOv8s, can be an effective solution for analyzing student engagement from facial expressions during online learning.
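The evaluation above is reported in terms of mean average precision (mAP). As a rough illustration of how this metric works, the sketch below computes per-class average precision as the area under a monotonically interpolated precision-recall curve and then averages across classes. The function names and toy numbers are illustrative only, not taken from the paper; in practice the detection toolchain computes this internally from the model's predictions.

```python
def average_precision(recalls, precisions):
    """AP for one class: area under the interpolated precision-recall curve.

    `recalls` and `precisions` are matched lists of operating points,
    with recall in increasing order.
    """
    # Add sentinel points at recall 0 and 1.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Interpolate: make precision non-increasing from right to left,
    # so each point uses the best precision at that recall or higher.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas over the recall steps.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))


def mean_average_precision(per_class_aps):
    """mAP: the mean of the per-class AP values (e.g. one per expression)."""
    return sum(per_class_aps) / len(per_class_aps)


# Toy example with two operating points for one class:
ap = average_precision([0.5, 1.0], [1.0, 0.5])   # 0.75
```

A reported mAP of 0.840, as for YOLOv8s here, would thus mean that the area under the precision-recall curve, averaged over the six expression classes, is 0.840.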

References

[1] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-Decem, pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.

[2] I. P. Sary, S. Andromeda, and E. U. Armin, “Performance Comparison of YOLOv5 and YOLOv8 Architectures in Human Detection using Aerial Images,” Ultim. Comput. J. Sist. Komput., vol. 15, no. 1, pp. 8–13, 2023, doi: 10.31937/sk.v15i1.3204.

[3] A. Ma’aruf and M. Hardjianto, “Application of the You Only Look Once Version 8 Algorithm for Indonesian Sign Language Alphabet,” Semin. Nas. Mhs. Fak. Teknol. Inf., vol. 2, no. September, pp. 567–576, 2023.

[4] D. Reis, J. Kupec, J. Hong, and A. Daoudi, “Real-Time Flying Object Detection with YOLOv8,” 2023, [Online]. Available: http://arxiv.org/abs/2305.09972

[5] M. Safran, A. Alajmi, and S. Alfarhood, “Efficient Multistage License Plate Detection and Recognition Using YOLOv8 and CNN for Smart Parking Systems,” J. Sensors, vol. 2024, pp. 1–18, 2024, doi: 10.1155/2024/4917097.

[6] A. L. Cîrneanu, D. Popescu, and D. Iordache, “New Trends in Emotion Recognition Using Image Analysis by Neural Networks, A Systematic Review,” Sensors, vol. 23, no. 16, 2023, doi: 10.3390/s23167092.

[7] P. Sharma, P. Sharma, V. Deep, and V. K. Shukla, “Facial Emotion Recognition Model,” Lect. Notes Mech. Eng., no. 1, pp. 751–761, 2021, doi: 10.1007/978-981-15-9956-9_73.

[8] Jian-Ming Sun, Xue-Sheng Pei, and Shi-Sheng Zhou, “Facial emotion recognition in modern distant education system using SVM,” in 2008 International Conference on Machine Learning and Cybernetics, IEEE, Jul. 2008, pp. 3545–3548. doi: 10.1109/ICMLC.2008.4621018.

[9] P. Ekman and H. Oster, “Facial Expressions of Emotion,” Annu. Rev. Psychol., vol. 30, no. 1, pp. 527–554, Jan. 1979, doi: 10.1146/annurev.ps.30.020179.002523.

[10] M. R. Reyes, M. A. Brackett, S. E. Rivers, M. White, and P. Salovey, “Classroom emotional climate, student engagement, and academic achievement.,” J. Educ. Psychol., vol. 104, no. 3, pp. 700–712, Aug. 2012, doi: 10.1037/a0027268.

[11] K. Seashore Louis, Cultivating Teacher Engagement: Breaking the Iron Law of Social Class, no. 7. 2020. doi: 10.4324/9780203012543-16.

[12] R. Pekrun, “The Control-Value Theory of Achievement Emotions: Assumptions, Corollaries, and Implications for Educational Research and Practice,” Educ. Psychol. Rev., vol. 18, no. 4, pp. 315–341, Nov. 2006, doi: 10.1007/s10648-006-9029-9.

[13] O. Stanley and G. Hansen, ABSTUDY: An Investment for Tomorrow’s Employment. A Review of ABSTUDY for the Aboriginal and Torres Strait Islander Commission. 1998.

[14] R. Pekrun et al., “Academic Emotions in Students’ Self-Regulated Learning and Achievement: A Program of Qualitative and Quantitative Research,” pp. 37–41, 2010, doi: 10.1207/S15326985EP3702.

[15] H. Gunes and M. Pantic, “Automatic, Dimensional and Continuous Emotion Recognition,” Int. J. Synth. Emot., vol. 1, no. 1, pp. 68–99, 2010, doi: 10.4018/jse.2010101605.

[16] S. Gupta, P. Kumar, and R. K. Tekchandani, “Facial emotion recognition based real-time learner engagement detection system in online learning context using deep learning models,” Multimed. Tools Appl., vol. 82, no. 8, pp. 11365–11394, 2023, doi: 10.1007/s11042-022-13558-9.

[17] A. Combs and M. Roman, Python Machine Learning Blueprints, Second. 2019.

[18] Z. He, L. Xie, X. Chen, Y. Zhang, Y. Wang, and Q. Tian, “Data Augmentation Revisited,” 2019.

[19] G. Jocher, A. Chaurasia, and J. Qiu, “YOLO by Ultralytics,” [Online]. Available: https://github.com/ultralytics/ultralytics

[20] P. Henderson and V. Ferrari, “End-to-end training of object class detectors for mean average precision,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 10115 LNCS, pp. 198–213, 2017, doi: 10.1007/978-3-319-54193-8_13.

[21] S. Yohananda, “What is Mean Average Precision (MAP) and how does it work,” xailient.com. Accessed: Mar. 12, 2024. [Online]. Available: https://xailient.com/blog/what-is-mean-average-precision-and-how-does-it-work/

[22] D. Rosmala and V. Setyaningsih, “Classroom English Learning Activities: Students’ Facial Expressions With a Focus on Interpersonal Meanings,” TLEMC (Teaching & Learning English in Multicultural Contexts), vol. 5, no. 2, 2021, [Online]. Available: http://jurnal.unsil.ac.id/index.php/tlemc/index

Published

2025-04-01

Section

Article
