
Iannucci, S., Casalicchio, E., Lucantonio, M. (2021). An Intrusion Response Approach for Elastic Applications Based on Reinforcement Learning. In 2021 IEEE Symposium Series on Computational Intelligence, SSCI 2021 - Proceedings (pp. 1-10). Institute of Electrical and Electronics Engineers Inc. [10.1109/SSCI50451.2021.9659882].

An Intrusion Response Approach for Elastic Applications Based on Reinforcement Learning

Iannucci, S.; Casalicchio, E.; Lucantonio, M.
2021-01-01

Abstract

Intrusion Response is a relatively new field of research. Several model-based techniques have been proposed, ranging from static mappings to complex stateful approaches. However, the main limitation they all share is that they do not account for the non-stationary behavior of the protected system, which, combined with long planning times, makes them infeasible for dynamic and large-scale systems. In this work, we propose an Intrusion Response controller based on deep reinforcement learning and transfer learning that automatically adapts to system changes. We empirically demonstrate its effectiveness and performance on Online Boutique, a cloud-based web application that Google uses to showcase its cloud technologies. We first carry out an extensive tuning of the hyper-parameters of the neural networks that implement our approach. We then empirically evaluate the effectiveness and performance of the resulting Intrusion Response controller in a typical cloud scenario, that is, when instances are added to or removed from the system. Experimental results show that proper hyper-parameter tuning can reduce the training time by up to 50%. Furthermore, transfer learning completely eliminates the transient adaptation stage when the number of replicas of a given service is reduced, and yields a 1.25x training speed-up during the transient stage when a replica is added. For reproducibility, the source code of the Intrusion Response System is released under the open-source Apache 2.0 license.
2021
978-1-7281-9048-8
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/402074
Citations
  • Scopus: 2