For many years, executable packing has been used for a variety of applications, including software protection as well as malware obfuscation. Even today, this evasion technique remains an open issue, particularly in malware analysis. Numerous studies have proposed static detection techniques based on various algorithms and features, taking advantage of machine learning to build increasingly powerful models. These studies have focused in particular on supervised learning, while unsupervised learning remains relatively unexploited. Furthermore, most studies related to adversarial learning have focused on attacks in the feature space, while those targeting features identified as significant in supervised models are still rather limited. Such features may still be manipulated from the problem space to cause misclassification. The objective of this study is to apply alterations to packed samples based on realistic modifications and to visualize their effect using unsupervised learning. To this end, the Packing Box experimental toolkit is used to build a dataset, train models, apply alterations, retrain models, and then highlight the consequences of these alterations on the trained models.
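The abstract describes a pipeline of building a dataset, training models, applying alterations, and observing their effect on the trained models. The sketch below is only an illustration of the general idea of visualizing how feature-level alterations shift samples under an unsupervised model: it uses synthetic data and scikit-learn, and the feature choices, alteration, and magnitudes are assumptions, not the actual workflow of the Packing Box toolkit.

```python
# Illustrative sketch only: synthetic data, hypothetical features and alteration;
# this is not the paper's Packing Box pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset of not-packed vs. packed executables, described by
# three hypothetical static features (entropy-like score, section count, import count).
not_packed = rng.normal(loc=[5.0, 6.0, 120.0], scale=[0.8, 1.5, 30.0], size=(200, 3))
packed     = rng.normal(loc=[7.6, 3.0,  10.0], scale=[0.4, 1.0,  5.0], size=(200, 3))
X = np.vstack([not_packed, packed])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)

# Unsupervised model trained on the unaltered samples.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)
labels_before = km.labels_

# A problem-space alteration of packed samples would change their measurable features;
# here it is approximated by lowering the entropy-like feature (an assumed alteration).
X_altered = X.copy()
X_altered[200:, 0] -= 2.0
labels_after = km.predict(scaler.transform(X_altered))

moved = np.sum(labels_before[200:] != labels_after[200:])
print(f"packed samples that changed cluster after alteration: {moved}/200")

# 2-D projection to quantify how far the alteration moves the packed samples.
pca = PCA(n_components=2).fit(X_std)
proj_before = pca.transform(X_std[200:])
proj_after  = pca.transform(scaler.transform(X_altered[200:]))
print("mean PCA shift of packed samples:", np.round((proj_after - proj_before).mean(axis=0), 3))
```

In the same spirit as the study, the comparison of cluster assignments and projected positions before and after the alteration is what makes the impact of the modification visible without relying on supervised labels.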
D'Hondt, Alexandre; Bertrand Van Ouytsel, Charles-Henry; Legay, Axel; et al. Highlighting the Impact of Packed Executable Alterations with Unsupervised Learning. 19th International Conference on Risks and Security of Internet and Systems (CRiSIS 2024), France, from 26/11/2024 to 28/11/2024.