Enhancing Cybersecurity with Adversarial Defense: A Multi-Domain Machine Learning Perspective
Keywords:
Adversarial Machine Learning, Cyber Security, Defense mechanisms, Attack detection, DNS Tunneling

Abstract
Adversarial threats to machine learning models increasingly affect use cases, such as cybersecurity and predictive maintenance, where the cost of prediction failures is extremely high. This research investigates adversarial machine learning defenses in three significant areas of interest: DNS tunneling detection, vehicle platooning security, and Remaining Useful Life (RUL) estimation. The dataset contains benign and adversarially attacked data drawn from realistic systems for three application scenarios: (i) DNS tunneling, (ii) vehicle platooning, and (iii) RUL prediction for aircraft engines. This work applies four defense techniques, adversarial training, defensive distillation, input pre-processing, and ensembling, and evaluates them with metrics including accuracy, precision, recall, F1-score, AUC, false positive rate, and false negative rate. Random Forest achieved accuracies of 89.1%, 85.6%, and 87.8% on the DNS tunneling, vehicle platooning, and RUL estimation datasets, respectively. Statistical analysis shows that the performance difference between Random Forest and SVM is significant (p < 0.01). Among the attacks examined, the Carlini-Wagner attack achieves the highest empirical success rates. Ensemble methods strengthen the robustness of individual models, yielding considerable accuracy gains for Random Forests, Neural Networks, and SVMs. Feature importance analysis indicates that mDt is the most discriminative feature. Cross-domain evaluation shows performance drops ranging from 5.8% to 20.2% when models are transferred to different domains. This work also introduces a multi-domain performance evaluation framework, reveals the limitations of cross-domain transferability, and provides guidance on safeguarding critical infrastructure with adversarial machine learning defenses.
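To make the evaluation protocol concrete, the sketch below builds a soft-voting ensemble of the three model families named in the abstract (Random Forest, SVM, Neural Network) with scikit-learn and computes the listed metrics. The synthetic data, hyperparameters, and train/test split are placeholder assumptions rather than the authors' pipeline, and adversarial example generation (e.g., Carlini-Wagner) is omitted.

# Hypothetical sketch: ensemble defense (RF + SVM + NN) evaluated with the
# metrics listed in the abstract. Data loading and adversarial example
# generation are placeholders, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

# Placeholder data: substitute benign + adversarially perturbed samples for a
# given domain (e.g., DNS tunneling features); labels are binary
# (0 = benign, 1 = attack).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Soft-voting ensemble combining the three model families named in the study.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                             random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)

y_pred = ensemble.predict(X_test)
y_prob = ensemble.predict_proba(X_test)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

metrics = {
    "accuracy": accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
    "f1": f1_score(y_test, y_pred),
    "auc": roc_auc_score(y_test, y_prob),
    "fpr": fp / (fp + tn),   # false positive rate
    "fnr": fn / (fn + tp),   # false negative rate
}
print(metrics)

In this sketch, soft voting averages the predicted class probabilities of the three base models, so a perturbation that fools one model is less likely to flip the combined decision, which is the intuition behind the ensembling defense described above.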
License
Copyright (c) 2025 Well Testing Journal

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license requires that re-users give credit to the creator. It allows re-users to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only.

