Abstract
Cyberattacks are the “new normal” in the hyper-connected, all-digital modern world: breaches, denial-of-service, ransomware, and a myriad of other attacks occur every single day. As attacks and breaches increase in complexity, diversity, and frequency, cybersecurity actors (both ethical hackers and cybercriminals) turn to automating these attacks in various ways and for a variety of reasons, including the development of effective and superior cybersecurity defenses. In this chapter, we address innovations in machine learning, deep learning, and artificial intelligence within the offensive cybersecurity field. We structure this chapter in line with Lockheed Martin’s Cyber Kill Chain taxonomy in order to cover adequate ground on this broad topic, and occasionally refer to the more granular MITRE ATT&CK taxonomy where relevant.
Notes
1. Due to the breadth and depth of various definitions related to ML/DL/RL/AI and cybersecurity, throughout this chapter we will refer to these technologies simply as MLsec.
References
Abbate, P.: Internet Crime Report 2020. Tech. rep., Federal Bureau of Investigation (2021). https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf
Al-Hababi, A., Tokgoz, S.C.: Man-in-the-middle attacks to detect and identify services in encrypted network flows using machine learning. In: 3rd International Conference on Advanced Communication Technologies and Networking (CommNet). IEEE, Piscataway (2020)
Alhuzali, A., Gjomemo, R., Eshete, B., Venkatakrishnan, V.: NAVEX: precise and scalable exploit generation for dynamic web applications. In: 27th USENIX Security Symposium (2018)
Alqahtani, F.H., Alsulaiman, F.A.: Is image-based CAPTCHA secure against attacks based on machine learning? An experimental study. Comput. Secur. 88, 101635 (2020)
Anderson, H.S., Woodbridge, J., Filar, B.: DeepDGA: adversarially-tuned domain generation and detection. In: ACM Workshop on Artificial Intelligence and Security. ACM, New York (2016)
Antonakakis, M., et al.: Understanding the Mirai botnet. In: 26th USENIX Security Symposium. USENIX Association (2017)
Avgerinos, T., Cha, S.K., Rebert, A., Schwartz, E.J., Woo, M., Brumley, D.: Automatic exploit generation. Communications of the ACM 57(2), 74–84 (2014)
Bahnsen, A.C., Torroledo, I., Camacho, L.D., Villegas, S.: DeepPhish: simulating malicious AI. In: APWG Symposium on Electronic Crime Research (eCrime) (2018)
Behzadan, V., Munir, A.: Vulnerability of deep reinforcement learning to policy induction attacks (2017). https://arxiv.org/abs/1701.04143
Brewster, T.: Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find (2021). https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=1456cb187559
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., et al.: The malicious use of artificial intelligence: forecasting, prevention, and mitigation (2018). https://arxiv.org/abs/1802.07228
Chomiak-Orsa, I., Rot, A., Blaicke, B.: Artificial intelligence in cybersecurity: the use of AI along the cyber kill chain. In: International Conference on Computational Collective Intelligence. Springer, Berlin (2019)
Chung, H., Iorga, M., Voas, J., Lee, S.: Alexa, can I trust you? IEEE Comput. Mag. 50(9), 100–104 (2017)
Chung, K., Kalbarczyk, Z.T., Iyer, R.K.: Availability attacks on computing systems through alteration of environmental control: smart malware approach. In: 10th ACM/IEEE International Conference on Cyber-Physical Systems. ACM, New York (2019)
Conti, M., De Gaspari, F., Mancini, L.V.: Know your enemy: stealth configuration-information gathering in SDN. In: International Conference on Green, Pervasive, and Cloud Computing. Springer, Berlin (2017)
Costin, A.: Security of CCTV and video surveillance systems: threats, vulnerabilities, attacks, and mitigations. In: 6th International Workshop on Trustworthy Embedded Devices (TrustED) (2016)
Cruz-Perez, C., Starostenko, O., Uceda-Ponga, F., Alarcon-Aquino, V., Reyes-Cabrera, L.: Breaking reCAPTCHAs with unpredictable collapse: heuristic character segmentation and recognition. In: Mexican Conference on Pattern Recognition. Springer, Berlin (2012)
Dalvi, N., Domingos, P., Sanghai, S., Verma, D.: Adversarial classification. In: 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2004)
Darktrace: Darktrace Cyber AI Analyst: Autonomous Investigations (White Paper)
Descript: Lyrebird AI. https://www.descript.com/lyrebird
Fang, M., Damer, N., Kirchbuchner, F., Kuijper, A.: Real Masks and Fake Faces: On the Masked Face Presentation Attack Detection (2021). https://arxiv.org/abs/2103.01546
Fang, Y., Liu, Y., Huang, C., Liu, L.: FastEmbed: predicting vulnerability exploitation possibility based on ensemble machine learning algorithm. PLOS ONE 15(2), e0228439 (2020)
Flaticon.com (2021). https://www.flaticon.com/
Floridi, L., Chiriatti, M.: GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 30(4), 681–694 (2020)
Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015)
Gitlin, J.M.: Hacking street signs with stickers could confuse self-driving cars (2017). https://arstechnica.com/cars/2017/09/hacking-street-signs-with-stickers-could-confuse-self-driving-cars/
Gong, N.Z., Liu, B.: You are who you know and how you behave: attribute inference attacks via users’ social friends and behaviors. In: 25th USENIX Security Symposium (2016)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). https://arxiv.org/abs/1412.6572
Google: What is reCAPTCHA? https://www.google.com/recaptcha/about/
Gossweiler, R., Kamvar, M., Baluja, S.: What’s up captcha? A captcha based on image orientation. In: 18th International Conference on World Wide Web (2009)
Grieco, G., Grinblat, G.L., Uzal, L., Rawat, S., Feist, J., Mounier, L.: Toward large-scale vulnerability discovery using machine learning. In: 6th ACM Conference on Data and Application Security and Privacy (2016)
Gu, T., Dolan-Gavitt, B., Garg, S.: BadNets: Identifying vulnerabilities in the machine learning model supply chain (2017). https://arxiv.org/abs/1708.06733
Guri, M., Bykhovsky, D.: aIR-Jumper: covert air-gap exfiltration/infiltration via security cameras & infrared (IR). Comput. Secur. 82, 15–29 (2019)
Guri, M., Kachlon, A., Hasson, O., Kedma, G., Mirsky, Y., Elovici, Y.: GSMem: data exfiltration from air-gapped computers over GSM frequencies. In: 24th USENIX Security Symposium (2015)
Guri, M., Kedma, G., Kachlon, A., Elovici, Y.: AirHopper: bridging the air-gap between isolated networks and mobile phones using radio frequencies. In: 9th International Conference on Malicious and Unwanted Software: The Americas (MALWARE). IEEE, Piscataway (2014)
Guri, M., Zadov, B., Elovici, Y.: LED-it-GO: leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LED. In: International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, Berlin (2017)
HackTheBox: A Massive Hacking Playground. https://www.hackthebox.eu/
Hitaj, B., Gasti, P., Ateniese, G., Perez-Cruz, F.: PassGAN: a deep learning approach for password guessing. In: International Conference on Applied Cryptography and Network Security. Springer, Berlin (2019)
Hong, S., Kaya, Y., Modoranu, I.V., Dumitraş, T.: A Panda? No, It’s a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference (2020). https://arxiv.org/abs/2010.02432
Hu, W., Tan, Y.: Generating adversarial malware examples for black-box attacks based on GAN (2017). https://arxiv.org/abs/1702.05983
Huang, S.K., Huang, M.H., Huang, P.Y., Lai, C.W., Lu, H.L., Leong, W.M.: CRAX: software crash analysis for automatic exploit generation by modeling attacks as symbolic continuations. In: IEEE 6th International Conference on Software Security and Reliability. IEEE, Piscataway (2012)
Huang, S.K., Huang, M.H., Huang, P.Y., Lu, H.L., Lai, C.W.: Software crash analysis for automatic exploit generation on binary programs. IEEE Trans. Reliab. 63(1), 270–289 (2014)
Hutchins, E.M., Cloppert, M.J., Amin, R.M., et al.: Intelligence-driven computer network defense informed by analysis of adversary campaigns and intrusion kill chains. Leading Issues Inform. Warfare Secur. Res. 1, 80 (2011)
Kaloudi, N., Li, J.: The AI-based cyber threat landscape: a survey. ACM Comput. Surv. 53, 1–34 (2020)
Khan, S.A., Khan, W., Hussain, A.: Phishing attacks and websites classification using machine learning and multiple datasets (a comparative analysis). In: International Conference on Intelligent Computing. Springer, Berlin (2020). https://arxiv.org/abs/2101.02552
Khurana, N., Mittal, S., Piplai, A., Joshi, A.: Preventing poisoning attacks on AI based threat intelligence systems. In: IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, Piscataway (2019)
Knabl, G.: Machine Learning-Driven Password Lists. Ph.D. Thesis (2018)
Kos, J., Fischer, I., Song, D.: Adversarial examples for generative models (2017). https://arxiv.org/abs/1702.06832
Kotey, S.D., Tchao, E.T., Gadze, J.D.: On distributed denial of service current defense schemes. Technologies 7, 19 (2019)
Lee, K., Yim, K.: Cybersecurity threats based on machine learning-based offensive technique for password authentication. Appl. Sci. 10, 1286 (2020)
Li, J.H.: Cyber security meets artificial intelligence: a survey. Front. Inform. Technol. Electron. Eng. 19, 1462–1474 (2018)
Lin, Z., Shi, Y., Xue, Z.: IDSGAN: generative adversarial networks for attack generation against intrusion detection (2018). https://arxiv.org/abs/1809.02077
Liu, Y., Ma, S., Aafer, Y., Lee, W.C., Zhai, J., Wang, W., Zhang, X.: Trojaning attack on neural networks (2017)
Lockheed Martin Corporation: Gaining the advantage: applying cyber kill chain methodology to network defense. https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/Gaining_the_Advantage_Cyber_Kill_Chain.pdf
Maestre Vidal, J., Sotelo Monge, M.A.: Obfuscation of malicious behaviors for thwarting masquerade detection systems based on locality features. Sensors 20, 2084 (2020)
Manky, D.: Rise of the ‘Hivenet’: Botnets That Think for Themselves (2018). https://www.darkreading.com/vulnerabilities-threats/rise-of-the-hivenet-botnets-that-think-for-themselves
Martins, N., Cruz, J.M., Cruz, T., Abreu, P.H.: Adversarial machine learning applied to intrusion and malware scenarios: a systematic review. IEEE Access 8, 35403–35419 (2020)
Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D., Yang, L., Zhang, X., Lee, W., Elovici, Y., Biggio, B.: The Threat of Offensive AI to Organizations (2021). https://arxiv.org/abs/2106.15764
Narayanan, A., Shmatikov, V.: How to break anonymity of the Netflix Prize dataset (2006). https://arxiv.org/abs/cs/0610105
Novo, C., Morla, R.: Flow-based detection and proxy-based evasion of encrypted malware C2 traffic. In: 13th ACM Workshop on Artificial Intelligence and Security (2020)
Oh, S.J., Schiele, B., Fritz, M.: Towards reverse-engineering black-box neural networks. In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer, Berlin (2019)
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K.: WaveNet: A Generative Model for Raw Audio (2016). https://arxiv.org/abs/1609.03499
Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., Pérez-Cabo, D.: No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA generation. IEEE Trans. Inform. Forensics Secur. 12, 2640–2653 (2017)
Pacheco, F., Exposito, E., Gineste, M., Baudoin, C., Aguilar, J.: Towards the deployment of machine learning solutions in network traffic classification: a systematic survey. IEEE Commun. Surv. Tutorials 21, 1988–2014 (2018)
PimEyes: PimEyes: Face Recognition Search Engine and Reverse Image Search. https://pimeyes.com/en
Polyakov, A.: Machine learning for cybercriminals 101 (2018). https://towardsdatascience.com/machine-learning-for-cybercriminals-a46798a8c268
Polyakov, A.: AI security and adversarial machine learning 101 (2019). https://towardsdatascience.com/ai-and-ml-security-101-6af8026675ff
Ringberg, H., Soule, A., Rexford, J., Diot, C.: Sensitivity of PCA for traffic anomaly detection. In: ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (2007)
Rubinstein, B.I., Nelson, B., Huang, L., Joseph, A.D., Lau, S.-h., Rao, S., Taft, N., Tygar, J.: Stealthy poisoning attacks on PCA-based anomaly detectors. ACM SIGMETRICS Perform. Eval. Rev. 37, 73–74 (2009)
Schwarzschild, A., Goldblum, M., Gupta, A., Dickerson, J.P., Goldstein, T.: Just how toxic is data poisoning? A unified benchmark for backdoor and data poisoning attacks. In: International Conference on Machine Learning. PMLR (2021)
Seymour, J., Tully, P.: Weaponizing data science for social engineering: automated e2e spear phishing on twitter. BlackHat USA 37, 1–39 (2016)
Seymour, J., Tully, P.: Generative models for spear phishing posts on social media (2018). https://arxiv.org/abs/1802.05196
Shafiq, M., Yu, X., Laghari, A.A., Yao, L., Karn, N.K., Abdessamia, F.: Network traffic classification techniques and comparative analysis using machine learning algorithms. In: 2nd IEEE International Conference on Computer and Communications (ICCC). IEEE, Piscataway (2016)
Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In: ACM SIGSAC Conference on Computer and Communications Security (2016)
Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE, Piscataway (2017)
Shu, D., Leslie, N.O., Kamhoua, C.A., Tucker, C.S.: Generative adversarial attacks against intrusion detection systems using active learning. In: 2nd ACM Workshop on Wireless Security and Machine Learning (2020)
Sitawarin, C., Wagner, D.: On the robustness of deep k-nearest neighbors (2019). http://arxiv.org/abs/1903.08333v1
Sivakorn, S., Polakis, J., Keromytis, A.D.: I’m not a human: breaking the Google reCAPTCHA. BlackHat (2016)
Stoecklin, M.P.: Deeplocker: how AI can power a stealthy new breed of malware. Secur. Intell. 8 (2018)
Stone, G., Talbert, D., Eberle, W.: Using AI/machine learning for reconnaissance activities during network penetration testing. In: International Conference on Cyber Warfare and Security. Academic Conferences International Limited (2021)
Subramaniam, T., Jalab, H.A., Taqa, A.Y.: Overview of textual anti-spam filtering techniques. Int. J. Phys. Sci. 5, 1869–1882 (2010)
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks (2013). https://arxiv.org/abs/1312.6199
The MITRE Corporation: MITRE ATT&CK Matrix for Enterprise. https://attack.mitre.org/
Tong, L., Yu, S., Alfeld, S., Vorobeychik, Y.: Adversarial regression with multiple learners (2018). https://arxiv.org/abs/1806.02256
Valea, O., Oprişa, C.: Towards pentesting automation using the Metasploit framework. In: IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, Piscataway (2020)
Wang, M., Su, P., Li, Q., Ying, L., Yang, Y., Feng, D.: Automatic polymorphic exploit generation for software vulnerabilities. In: International Conference on Security and Privacy in Communication Systems. Springer, Berlin (2013)
Wichers, D., Williams, J.: OWASP TOP-10 2017. OWASP Foundation (2017)
Yadav, T., Rao, A.M.: Technical aspects of cyber kill chain. In: International Symposium on Security in Computing and Communication. Springer, Berlin (2015)
Yamin, M.M., Ullah, M., Ullah, H., Katt, B.: Weaponized AI for cyber attacks. J. Inform. Secur. Appl. 57, 102722 (2021)
Yim, K.: A new noise mingling approach to protect the authentication password. In: International Conference on Complex, Intelligent and Software Intensive Systems (2010)
Yu, N., Darling, K.: A low-cost approach to crack python CAPTCHAs using AI-based chosen-plaintext attack. Appl. Sci. 9, 2010 (2019)
Zargar, S.T., Joshi, J., Tipper, D.: A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Commun. Surv. Tutorials 15, 2046–2069 (2013)
Zhang, R., Chen, X., Lu, J., Wen, S., Nepal, S., Xiang, Y.: Using AI to hack IA: a new stealthy spyware against voice assistance functions in smart phones (2018). https://arxiv.org/abs/1805.06187
Acknowledgements
The authors would like to thank the technical team of Adversa.AI for valuable feedback and insights throughout the draft stages of this chapter. Hannu Turtiainen would like to thank the Finnish Cultural Foundation / Suomen Kulttuurirahasto (https://skr.fi/en) for supporting his Ph.D. dissertation work and research (grant decision 00211119), and the Faculty of Information Technology of the University of Jyväskylä (JYU), in particular Prof. Timo Hämäläinen, for partly supporting his Ph.D. supervision at JYU in 2021–2022.
The authors acknowledge icon authors Gregor Cresnar, Freepik, and Good Ware (courtesy of https://flaticon.com [23]) for their royalty-free icons that are used in Fig. 1.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Turtiainen, H., Costin, A., Polyakov, A., Hämäläinen, T. (2023). Offensive Machine Learning Methods and the Cyber Kill Chain. In: Sipola, T., Kokkonen, T., Karjalainen, M. (eds) Artificial Intelligence and Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-15030-2_6
Print ISBN: 978-3-031-15029-6
Online ISBN: 978-3-031-15030-2