Land Cover Classification in Mountainous Regions Using Multi-Scale Fusion and Convolutional Neural Networks: A Case Study on Mount Slamet

Authors

  • Yulis Rijal Fauzan, Master of Informatics, Faculty of Science and Technology, UIN Sunan Kalijaga, Yogyakarta, Indonesia
  • Shofwatul 'Uyun, Master of Informatics, Faculty of Science and Technology, UIN Sunan Kalijaga, Yogyakarta, Indonesia

DOI:

https://doi.org/10.15575/join.v10i2.1612

Keywords:

CNN, DenseNet121, Guided Filter, Land Cover Classification, MobileNetV2, Multi-Scale Fusion, VGG-16

Abstract

Mount Slamet, located in Central Java, Indonesia, is a high-risk volcanic region where accurate land cover classification is essential for disaster mitigation and sustainable land management. However, satellite imagery of this area often suffers from haze and cloud cover, posing challenges to reliable classification. This study aims to develop an effective land cover classification model using Sentinel-2 imagery by addressing these visual distortions. The specific goal is to classify land cover into five classes—Forest, Settlements, Summit, RiceField, and River—using enhanced satellite images. A total of 1101 labeled images were processed through dehazing with Multi-Scale Fusion (MSF) and smoothing with a Guided Filter to improve image quality. Classification was performed using three Convolutional Neural Network (CNN) architectures: VGG-16, MobileNetV2, and DenseNet121. The main contribution of this study is the integration of a tailored preprocessing pipeline with CNN-based modeling for haze-affected mountainous satellite imagery. Among the models tested, MobileNetV2 achieved the highest accuracy of 85.4%, outperforming DenseNet121 (83.8%) and VGG-16 (82.3%). The results demonstrate the effectiveness of combining image enhancement techniques with lightweight CNN architectures for land cover classification in challenging environments with a limited and imbalanced dataset.
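The Guided Filter smoothing stage named in the abstract follows He et al.'s formulation: the output is constrained to be locally linear in a guide image, which smooths noise while preserving edges. A minimal self-guided, single-channel NumPy sketch is shown below; the radius and regularization values are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edges handled by replicate padding."""
    k = 2 * r + 1
    pad = np.pad(x, r, mode="edge")
    s = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    s[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)  # integral image
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=2, eps=1e-2):
    """Guided Filter (He et al., 2013): smooth p using I as the guide.

    Output is q = a * I + b, with a and b solved per local window so that
    q stays locally linear in the guide (edge-preserving smoothing).
    """
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # eps damps a in flat (low-variance) regions
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

In the self-guided case (I = p) the filter acts as an edge-preserving smoother, which is the role it would play after MSF dehazing in the pipeline described above.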


Published

2025-08-17

Section

Article
