Iannucci, S., Cardellini, V., Barba, O.D., Banicescu, I. (2020). A hybrid model-free approach for the near-optimal intrusion response control of non-stationary systems. Future Generation Computer Systems, 109, 111-124 [10.1016/j.future.2020.03.018].
A hybrid model-free approach for the near-optimal intrusion response control of non-stationary systems
Iannucci S.; Cardellini V.; Barba O.D.; Banicescu I.
2020-01-01
Abstract
Given the ever-increasing size of computer systems, manually protecting them against attacks is infeasible and error-prone. For this reason, several model-based Intrusion Response Systems (IRSs) have been proposed with the goal of reducing the workload of system administrators. However, since the most advanced IRSs adopt a stateful approach, they are subject to what Richard Bellman called the curse of dimensionality. Furthermore, modern computer systems are non-stationary, that is, their configuration and software base change frequently, which can make a model-based approach ineffective because the actual system behavior deviates from the model. In this paper we propose, to the best of our knowledge, the first approach based on deep reinforcement learning for the implementation of a hybrid model-free IRS. Experimental results show that the proposed IRS can cope with non-stationary systems while reducing the time needed to compute defense policies by orders of magnitude with respect to model-based approaches, and still provides near-optimal rewards.
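To make the model-free idea in the abstract more concrete, the following is a minimal sketch, not the authors' implementation, of how a deep-reinforcement-learning agent could learn an intrusion response policy without an explicit system model. The environment, state encoding, action set, reward shaping, and all names (e.g., ToyIntrusionEnv, train) are hypothetical placeholders assumed for illustration only.

```python
# Hypothetical sketch: a DQN-style agent that learns a defense policy from
# interaction alone (model-free), rather than from an explicit system model.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class ToyIntrusionEnv:
    """Toy environment (assumption, not from the paper): the state is a binary
    vector marking which components are compromised; actions are responses such
    as 'clean component i' or 'do nothing'."""

    def __init__(self, n_components=8):
        self.n = n_components
        self.reset()

    def reset(self):
        self.state = torch.zeros(self.n)
        return self.state.clone()

    def step(self, action):
        # Action self.n means "do nothing"; actions 0..n-1 clean that component.
        if action < self.n:
            self.state[action] = 0.0
        # The attacker compromises a random component; non-stationarity could be
        # emulated by changing these dynamics over time.
        self.state[random.randrange(self.n)] = 1.0
        reward = -float(self.state.sum())          # penalty per compromised component
        reward -= 0.1 if action < self.n else 0.0  # small cost for responding
        return self.state.clone(), reward


def train(episodes=50, steps=50, gamma=0.95, eps=0.1):
    env = ToyIntrusionEnv()
    n_actions = env.n + 1
    qnet = nn.Sequential(nn.Linear(env.n, 64), nn.ReLU(), nn.Linear(64, n_actions))
    opt = optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=5000)  # experience replay

    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    a = int(qnet(s).argmax())
            s2, r = env.step(a)
            buffer.append((s, a, r, s2))
            s = s2

            if len(buffer) >= 32:
                batch = random.sample(buffer, 32)
                bs = torch.stack([b[0] for b in batch])
                ba = torch.tensor([b[1] for b in batch])
                br = torch.tensor([b[2] for b in batch])
                bs2 = torch.stack([b[3] for b in batch])
                q = qnet(bs).gather(1, ba.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    target = br + gamma * qnet(bs2).max(1).values
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return qnet


if __name__ == "__main__":
    policy = train()
    print("Greedy action in the all-clean state:",
          int(policy(torch.zeros(8)).argmax()))
```

Because the policy is learned directly from observed transitions, such an agent can in principle keep adapting as the underlying system changes, which is the property the abstract highlights for non-stationary systems; the actual architecture and training procedure used in the paper may differ.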