Feature Analysis for ML-based IIoT Intrusion Detection
Industrial Internet of Things (IIoT) networks have become an increasingly attractive target for cyberattacks. Powerful Machine Learning (ML) models have recently been adopted to implement Network Intrusion Detection Systems (NIDSs), which can protect IIoT networks. For the successful training of such ML models, it is important to select the right set of data features, which maximises both detection accuracy and computational efficiency. This paper provides an extensive analysis of optimal feature sets in terms of their importance and predictive power for detecting network attacks. Three feature selection algorithms (chi-square, information gain, and correlation) have been utilised to identify and rank data features. The features are fed into two ML classifiers (deep feed-forward and random forest) to measure their attack detection accuracy. The experimental evaluation considered three NIDS datasets: UNSW-NB15, CSE-CIC-IDS2018, and ToN-IoT in their proprietary flow format. In addition, the respective variants in NetFlow format were also considered, i.e., NF-UNSW-NB15, NF-CSE-CIC-IDS2018, and NF-ToN-IoT. The experimental evaluation explored the marginal benefit of adding features one-by-one. Our results show that the accuracy initially increases rapidly with the addition of features, but quickly converges to the maximum achievable detection accuracy. Our results demonstrate a significant potential for reducing the computational and storage cost of NIDSs while maintaining near-optimal detection accuracy. This has particular relevance in IIoT systems, which typically have limited computational and storage resources.
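The evaluation protocol described above can be sketched in a few lines: rank features with a chi-square test, then retrain a classifier as features are added one-by-one and record the accuracy curve. This is a minimal illustration, not the paper's actual pipeline; it assumes scikit-learn and substitutes a synthetic binary classification problem for the NIDS datasets (UNSW-NB15, etc.), which are not bundled here.

```python
# Hypothetical sketch of chi-square feature ranking plus incremental
# feature addition; synthetic data stands in for the NIDS datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import chi2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in: 20 features, of which only 5 are informative.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features by chi-square score, most important first.
scores, _ = chi2(X_tr, y_tr)
ranking = np.argsort(scores)[::-1]

# Add features one-by-one and record test accuracy at each step.
accuracies = []
for k in range(1, len(ranking) + 1):
    cols = ranking[:k]
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_tr[:, cols], y_tr)
    accuracies.append(clf.score(X_te[:, cols], y_te))
```

Plotting `accuracies` against `k` typically shows the pattern the paper reports: a steep initial rise followed by a plateau, suggesting most features beyond the top few can be dropped.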