Multi-Cloud Security Orchestration Using Deep Reinforcement Learning
Vamshidhar Reddy Vemula
Independent researcher, Plano, Texas, USA
Abstract
In today’s digital landscape, multi-cloud environments have become essential for organizations seeking scalability, flexibility, and resilience. However, the adoption of multiple cloud providers introduces complex security challenges, including inconsistent policy enforcement, increased attack surfaces, and varying threat dynamics across platforms. This paper presents a novel framework for Multi-Cloud Security Orchestration using Deep Reinforcement Learning (DRL) to address these challenges. By leveraging a Proximal Policy Optimization (PPO) algorithm, our approach enables real-time, autonomous threat detection and response, dynamically adapting to evolving threats across heterogeneous cloud infrastructures. The proposed DRL model orchestrates security policies, optimizes resource allocation, and minimizes response latency through a feedback-driven learning loop.
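At the core of the proposed approach is PPO's clipped surrogate objective, which bounds how far the security policy can move in a single update. The sketch below illustrates that objective in isolation; the probability ratios, advantage estimates, and clipping parameter are illustrative values, not taken from the paper's experiments.

```python
# Minimal sketch of the PPO clipped surrogate objective (Schulman et al.).
# In the orchestration setting, each sample corresponds to a security
# action (e.g., a policy adjustment) taken under the previous policy.

def ppo_clipped_objective(ratios, advantages, epsilon=0.2):
    """Average clipped surrogate objective over a batch.

    ratios[i]     = pi_new(a_i | s_i) / pi_old(a_i | s_i)
    advantages[i] = estimated advantage of the action taken
    epsilon       = clipping range (0.2 is a common default)
    """
    total = 0.0
    for r, adv in zip(ratios, advantages):
        clipped = max(1.0 - epsilon, min(r, 1.0 + epsilon))
        # Taking the minimum penalizes overly large policy updates.
        total += min(r * adv, clipped * adv)
    return total / len(ratios)

# An aggressive update (ratio 1.5) is clipped to 1.2, limiting how far
# the orchestration policy can shift from one learning step to the next.
print(ppo_clipped_objective([1.5, 0.9], [1.0, -0.5]))  # prints 0.375
```

Clipping is what makes PPO attractive for a feedback-driven security loop: the policy adapts continuously to new threat observations, but each update is conservative enough to avoid destabilizing enforcement across the cloud providers.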
Keywords:
Deep Reinforcement Learning (DRL); Multi-Cloud Security Orchestration; Proximal Policy Optimization (PPO)
References
- D. Bernstein, "Containers and Cloud: From LXC to Docker to Kubernetes," IEEE Cloud Computing, vol. 1, no. 3, pp. 81–84, 2014.
- M. Jensen, N. Gruschka, and R. Herkenhoner, "A Survey of Attacks on Web Services," IEEE Transactions on Services Computing, vol. 4, no. 2, pp. 65–81, 2011.
- S. Subashini and V. Kavitha, "A Survey on Security Issues in Service Delivery Models of Cloud Computing," Journal of Network and Computer Applications, vol. 34, no. 1, pp. 1–11, 2011.
- Z. Wu, "Deep Reinforcement Learning in Network Security Applications," IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3035–3051, 2019.
- X. Liu, Y. Zhang, and D. Xu, "A Review on Multi-Cloud Security Management," IEEE Transactions on Cloud Computing, vol. 8, no. 2, pp. 345–358, 2020.
- H. Li, J. Zheng, and T. Chen, "Leveraging Reinforcement Learning for Automated Security in Cloud Systems," IEEE Security & Privacy, vol. 15, no. 6, pp. 70–78, 2019.
- Z. Lin, K. J. Miller, and M. T. Zhu, "Proximal Policy Optimization for Multi-Cloud Security Orchestration," IEEE Access, vol. 6, pp. 11250–11259, 2018.