
Advances in Computer and Communication

Frequency: Quarterly | ISSN Online: 2767-2875 | CODEN: ACCDC3
Email: acc@hillpublisher.com

Article | Open Access | http://dx.doi.org/10.26855/acc.2025.07.006

Robustness Optimization Strategies for Mitigating Overfitting in Machine Learning Models

Jian Sun1,*, Yizheng Xu2, Yansong Li3

1Iowa State University, Ames, Iowa 50011, USA.

2University of Malaya, Kuala Lumpur 50603, Malaysia.

3Zhengzhou Police College, Zhengzhou 450000, Henan, China.

*Corresponding author: Jian Sun

Published: August 20, 2025

Abstract

Overfitting is one of the key factors limiting the generalization capability and practical performance of machine learning models: an overfitted model performs exceptionally well on training data but deteriorates significantly on unseen test data, leaving it insufficiently robust and ill-suited to complex, dynamic real-world environments. This paper systematically investigates the causes and impacts of overfitting and strategies for mitigating it through robustness optimization. It first analyzes the mechanisms behind overfitting along three dimensions: data, model architecture, and the training process. It then proposes a multi-level robustness optimization framework comprising data augmentation, improved regularization, dynamic network structure optimization, and adversarial training. Finally, experiments across multiple domains, including computer vision, natural language processing, and financial risk control, validate that the proposed strategies effectively enhance models' generalization ability and stability in noisy environments. The study provides novel solutions to the overfitting problem and offers theoretical and practical guidance for the robust application of machine learning models in real-world scenarios. Its contributions include a comprehensive analysis of overfitting mechanisms, innovative optimization techniques, and extensive experimental validation across diverse application fields, which collectively advance the understanding and mitigation of this critical challenge in model development and deployment.
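One of the strategies named above, improved regularization, can be illustrated with a minimal sketch. The example below is not code from the paper; it simply contrasts ordinary least squares with L2-penalized (ridge) regression on a small, noisy dataset where the number of samples barely exceeds the number of features, a setting prone to overfitting. All variable names and the penalty value `lam = 5.0` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Few samples relative to features: a setting where overfitting is likely.
n, d = 20, 15
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.0, 0.5]          # only three informative features
y = X @ true_w + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Solve (X^T X + lam * I) w = X^T y; lam = 0 recovers ordinary least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge_fit(X, y, lam=0.0)       # unregularized fit
w_ridge = ridge_fit(X, y, lam=5.0)     # L2-penalized fit

# The penalty shrinks the weight vector, trading a little training error
# for reduced sensitivity to noise in the training sample.
print("||w_ols|| =", np.linalg.norm(w_ols))
print("||w_ridge|| =", np.linalg.norm(w_ridge))
```

The unregularized solution exactly minimizes training error, so its training loss is always at least as low as the ridge solution's; the ridge penalty instead constrains the weight norm, which typically improves stability on unseen data.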

Keywords

Machine learning; Overfitting; Robustness optimization; Generalization capability; Regularization; Adversarial training


How to cite this paper


How to cite this paper: Jian Sun, Yizheng Xu, Yansong Li. (2025) Robustness Optimization Strategies for Mitigating Overfitting in Machine Learning Models. Advances in Computer and Communication, 6(3), 139-143.

DOI: http://dx.doi.org/10.26855/acc.2025.07.006