A Comparison of YOLOv8 Series Performance in Student Facial Expressions Detection on Online Learning
DOI:
https://doi.org/10.15575/join.v10i1.1390
Keywords:
Emotion Recognition, Facial Expression Detection, Online Learning, Student Engagement, YOLOv8
License
Copyright (c) 2025 Dewi Tresnawati, Shopi Nurhidayanti, Nina Lestari

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.

The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
- You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
- No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.