Offensive Machine Learning Methods and the Cyber Kill Chain

Chapter in: Artificial Intelligence and Cybersecurity

Abstract

Cyberattacks are the “new normal” in the hyper-connected, all-digitized modern world: breaches, denial-of-service, ransomware, and a myriad of other attacks occur every single day. As attacks and breaches grow in complexity, diversity, and frequency, cybersecurity actors (both ethical and criminal) turn to automating these attacks in various ways and for a variety of reasons, including the development of effective and superior cybersecurity defenses. In this chapter, we address innovations in machine learning, deep learning, and artificial intelligence within the offensive cybersecurity fields. We structure this chapter in line with Lockheed Martin’s Cyber Kill Chain taxonomy in order to adequately cover this broad topic, and we occasionally refer to the more granular MITRE ATT&CK taxonomy whenever relevant.

Notes

  1. Due to the breadth and depth of various definitions related to ML/DL/RL/AI and cybersecurity, throughout this chapter we refer to these technologies simply as MLsec.

References

  1. Abbate, P.: Internet Crime Report 2020. Tech. rep., Federal Bureau of Investigation (2021). https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf

  2. Al-Hababi, A., Tokgoz, S.C.: Man-in-the-middle attacks to detect and identify services in encrypted network flows using machine learning. In: 3rd International Conference on Advanced Communication Technologies and Networking (CommNet). IEEE, Piscataway (2020)

  3. Alhuzali, A., Gjomemo, R., Eshete, B., Venkatakrishnan, V.: NAVEX: precise and scalable exploit generation for dynamic web applications. In: 27th USENIX Security Symposium (2018)

  4. Alqahtani, F.H., Alsulaiman, F.A.: Is image-based CAPTCHA secure against attacks based on machine learning? An experimental study. Comput. Secur. 88, 101635 (2020)

  5. Anderson, H.S., Woodbridge, J., Filar, B.: DeepDGA: adversarially-tuned domain generation and detection. In: ACM Workshop on Artificial Intelligence and Security. ACM, New York (2016)

  6. Antonakakis, M., et al.: Understanding the Mirai botnet. In: 26th USENIX Security Symposium. USENIX Association (2017)

  7. Avgerinos, T., Cha, S.K., Rebert, A., Schwartz, E.J., Woo, M., Brumley, D.: Automatic exploit generation. Commun. ACM 57(2), 74–84 (2014)

  8. Bahnsen, A.C., Torroledo, I., Camacho, L.D., Villegas, S.: DeepPhish: simulating malicious AI. In: APWG Symposium on Electronic Crime Research (eCrime) (2018)

  9. Behzadan, V., Munir, A.: Vulnerability of deep reinforcement learning to policy induction attacks (2017). https://arxiv.org/abs/1701.04143

  10. Brewster, T.: Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find (2021). https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=1456cb187559

  11. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., et al.: The malicious use of artificial intelligence: forecasting, prevention, and mitigation (2018). https://arxiv.org/abs/1802.07228

  12. Chomiak-Orsa, I., Rot, A., Blaicke, B.: Artificial intelligence in cybersecurity: the use of AI along the cyber kill chain. In: International Conference on Computational Collective Intelligence. Springer, Berlin (2019)

  13. Chung, H., Iorga, M., Voas, J., Lee, S.: Alexa, can I trust you? IEEE Comput. Mag. 50(9), 100–104 (2017)

  14. Chung, K., Kalbarczyk, Z.T., Iyer, R.K.: Availability attacks on computing systems through alteration of environmental control: smart malware approach. In: 10th ACM/IEEE International Conference on Cyber-Physical Systems. ACM, New York (2019)

  15. Conti, M., De Gaspari, F., Mancini, L.V.: Know your enemy: stealth configuration-information gathering in SDN. In: International Conference on Green, Pervasive, and Cloud Computing. Springer, Berlin (2017)

  16. Costin, A.: Security of CCTV and video surveillance systems: threats, vulnerabilities, attacks, and mitigations. In: 6th International Workshop on Trustworthy Embedded Devices (TrustED) (2016)

  17. Cruz-Perez, C., Starostenko, O., Uceda-Ponga, F., Alarcon-Aquino, V., Reyes-Cabrera, L.: Breaking reCAPTCHAs with unpredictable collapse: heuristic character segmentation and recognition. In: Mexican Conference on Pattern Recognition. Springer, Berlin (2012)

  18. Dalvi, N., Domingos, P., Sanghai, S., Verma, D.: Adversarial classification. In: 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2004)

  19. Darktrace: Darktrace Cyber AI Analyst: Autonomous Investigations (White Paper)

  20. Descript: Lyrebird AI. https://www.descript.com/lyrebird

  21. Fang, M., Damer, N., Kirchbuchner, F., Kuijper, A.: Real Masks and Fake Faces: On the Masked Face Presentation Attack Detection (2021). https://arxiv.org/abs/2103.01546

  22. Fang, Y., Liu, Y., Huang, C., Liu, L.: FastEmbed: predicting vulnerability exploitation possibility based on ensemble machine learning algorithm. PLOS ONE 15(2), e0228439 (2020)

  23. Flaticon.com (2021). https://www.flaticon.com/

  24. Floridi, L., Chiriatti, M.: GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 30(4), 681–694 (2020)

  25. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015)

  26. Gitlin, J.M.: Hacking street signs with stickers could confuse self-driving cars (2017). https://arstechnica.com/cars/2017/09/hacking-street-signs-with-stickers-could-confuse-self-driving-cars/

  27. Gong, N.Z., Liu, B.: You are who you know and how you behave: attribute inference attacks via users’ social friends and behaviors. In: 25th USENIX Security Symposium (2016)

  28. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). https://arxiv.org/abs/1412.6572

  29. Google: What is reCAPTCHA? https://www.google.com/recaptcha/about/

  30. Gossweiler, R., Kamvar, M., Baluja, S.: What’s up captcha? A captcha based on image orientation. In: 18th International Conference on World Wide Web (2009)

  31. Grieco, G., Grinblat, G.L., Uzal, L., Rawat, S., Feist, J., Mounier, L.: Toward large-scale vulnerability discovery using machine learning. In: 6th ACM Conference on Data and Application Security and Privacy (2016)

  32. Gu, T., Dolan-Gavitt, B., Garg, S.: BadNets: Identifying vulnerabilities in the machine learning model supply chain (2017). https://arxiv.org/abs/1708.06733

  33. Guri, M., Bykhovsky, D.: aIR-Jumper: covert air-gap exfiltration/infiltration via security cameras & infrared (IR). Comput. Secur. 82, 15–29 (2019)

  34. Guri, M., Kachlon, A., Hasson, O., Kedma, G., Mirsky, Y., Elovici, Y.: GSMem: data exfiltration from air-gapped computers over GSM frequencies. In: 24th USENIX Security Symposium (2015)

  35. Guri, M., Kedma, G., Kachlon, A., Elovici, Y.: AirHopper: bridging the air-gap between isolated networks and mobile phones using radio frequencies. In: 9th International Conference on Malicious and Unwanted Software: The Americas (MALWARE). IEEE, Piscataway (2014)

  36. Guri, M., Zadov, B., Elovici, Y.: LED-it-GO: leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LED. In: International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, Berlin (2017)

  37. HackTheBox: A Massive Hacking Playground. https://www.hackthebox.eu/

  38. Hitaj, B., Gasti, P., Ateniese, G., Perez-Cruz, F.: PassGAN: a deep learning approach for password guessing. In: International Conference on Applied Cryptography and Network Security. Springer, Berlin (2019)

  39. Hong, S., Kaya, Y., Modoranu, I.V., Dumitraş, T.: A Panda? No, It’s a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference (2020). https://arxiv.org/abs/2010.02432

  40. Hu, W., Tan, Y.: Generating adversarial malware examples for black-box attacks based on GAN (2017). https://arxiv.org/abs/1702.05983

  41. Huang, S.K., Huang, M.H., Huang, P.Y., Lai, C.W., Lu, H.L., Leong, W.M.: CRAX: software crash analysis for automatic exploit generation by modeling attacks as symbolic continuations. In: IEEE 6th International Conference on Software Security and Reliability. IEEE, Piscataway (2012)

  42. Huang, S.K., Huang, M.H., Huang, P.Y., Lu, H.L., Lai, C.W.: Software crash analysis for automatic exploit generation on binary programs. IEEE Trans. Reliab. 63(1), 270–289 (2014)

  43. Hutchins, E.M., Cloppert, M.J., Amin, R.M., et al.: Intelligence-driven computer network defense informed by analysis of adversary campaigns and intrusion kill chains. Leading Issues Inform. Warfare Secur. Res. 1, 80 (2011)

  44. Kaloudi, N., Li, J.: The AI-based cyber threat landscape: a survey. ACM Comput. Surv. 53, 1–34 (2020)

  45. Khan, S.A., Khan, W., Hussain, A.: Phishing attacks and websites classification using machine learning and multiple datasets (a comparative analysis). In: International Conference on Intelligent Computing. Springer, Berlin (2020). https://arxiv.org/abs/2101.02552

  46. Khurana, N., Mittal, S., Piplai, A., Joshi, A.: Preventing poisoning attacks on AI based threat intelligence systems. In: IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, Piscataway (2019)

  47. Knabl, G.: Machine Learning-Driven Password Lists. Ph.D. Thesis (2018)

  48. Kos, J., Fischer, I., Song, D.: Adversarial examples for generative models (2017). https://arxiv.org/abs/1702.06832

  49. Kotey, S.D., Tchao, E.T., Gadze, J.D.: On distributed denial of service current defense schemes. Technologies 7, 19 (2019)

  50. Lee, K., Yim, K.: Cybersecurity threats based on machine learning-based offensive technique for password authentication. Appl. Sci. 10, 1286 (2020)

  51. Li, J.H.: Cyber security meets artificial intelligence: a survey. Front. Inform. Technol. Electron. Eng. 19, 1462–1474 (2018)

  52. Lin, Z., Shi, Y., Xue, Z.: IDSGAN: generative adversarial networks for attack generation against intrusion detection (2018). https://arxiv.org/abs/1809.02077

  53. Liu, Y., Ma, S., Aafer, Y., Lee, W.C., Zhai, J., Wang, W., Zhang, X.: Trojaning attack on neural networks (2017)

  54. Lockheed Martin Corporation: Gaining the advantage: applying cyber kill chain methodology to network defense. https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/Gaining_the_Advantage_Cyber_Kill_Chain.pdf

  55. Maestre Vidal, J., Sotelo Monge, M.A.: Obfuscation of malicious behaviors for thwarting masquerade detection systems based on locality features. Sensors 20, 2084 (2020)

  56. Manky, D.: Rise of the ‘Hivenet’: Botnets That Think for Themselves (2018). https://www.darkreading.com/vulnerabilities-threats/rise-of-the-hivenet-botnets-that-think-for-themselves

  57. Martins, N., Cruz, J.M., Cruz, T., Abreu, P.H.: Adversarial machine learning applied to intrusion and malware scenarios: a systematic review. IEEE Access 8, 35403–35419 (2020)

  58. Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D., Yang, L., Zhang, X., Lee, W., Elovici, Y., Biggio, B.: The Threat of Offensive AI to Organizations (2021). https://arxiv.org/abs/2106.15764

  59. Narayanan, A., Shmatikov, V.: How to break anonymity of the Netflix Prize dataset (2006). https://arxiv.org/abs/cs/0610105

  60. Novo, C., Morla, R.: Flow-based detection and proxy-based evasion of encrypted malware C2 traffic. In: 13th ACM Workshop on Artificial Intelligence and Security (2020)

  61. Oh, S.J., Schiele, B., Fritz, M.: Towards reverse-engineering black-box neural networks. In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer, Berlin (2019)

  62. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K.: WaveNet: A Generative Model for Raw Audio (2016). https://arxiv.org/abs/1609.03499

  63. Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., Pérez-Cabo, D.: No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA generation. IEEE Trans. Inform. Forensics Secur. 12, 2640–2653 (2017)

  64. Pacheco, F., Exposito, E., Gineste, M., Baudoin, C., Aguilar, J.: Towards the deployment of machine learning solutions in network traffic classification: a systematic survey. IEEE Commun. Surv. Tutorials 21, 1988–2014 (2018)

  65. PimEyes: PimEyes: Face Recognition Search Engine and Reverse Image Search. https://pimeyes.com/en

  66. Polyakov, A.: Machine learning for cybercriminals 101 (2018). https://towardsdatascience.com/machine-learning-for-cybercriminals-a46798a8c268

  67. Polyakov, A.: AI security and adversarial machine learning 101 (2019). https://towardsdatascience.com/ai-and-ml-security-101-6af8026675ff

  68. Ringberg, H., Soule, A., Rexford, J., Diot, C.: Sensitivity of PCA for traffic anomaly detection. In: ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (2007)

  69. Rubinstein, B.I., Nelson, B., Huang, L., Joseph, A.D., Lau, S.H., Rao, S., Taft, N., Tygar, J.: Stealthy poisoning attacks on PCA-based anomaly detectors. ACM SIGMETRICS Perform. Eval. Rev. 37, 73–74 (2009)

  70. Schwarzschild, A., Goldblum, M., Gupta, A., Dickerson, J.P., Goldstein, T.: Just how toxic is data poisoning? A unified benchmark for backdoor and data poisoning attacks. In: International Conference on Machine Learning. PMLR (2021)

  71. Seymour, J., Tully, P.: Weaponizing data science for social engineering: automated E2E spear phishing on Twitter. BlackHat USA 37, 1–39 (2016)

  72. Seymour, J., Tully, P.: Generative models for spear phishing posts on social media (2018). https://arxiv.org/abs/1802.05196

  73. Shafiq, M., Yu, X., Laghari, A.A., Yao, L., Karn, N.K., Abdessamia, F.: Network traffic classification techniques and comparative analysis using machine learning algorithms. In: 2nd IEEE International Conference on Computer and Communications (ICCC). IEEE, Piscataway (2016)

  74. Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In: ACM SIGSAC Conference on Computer and Communications Security (2016)

  75. Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE, Piscataway (2017)

  76. Shu, D., Leslie, N.O., Kamhoua, C.A., Tucker, C.S.: Generative adversarial attacks against intrusion detection systems using active learning. In: 2nd ACM Workshop on Wireless Security and Machine Learning (2020)

  77. Sitawarin, C., Wagner, D.: On the robustness of deep k-nearest neighbors (2019). http://arxiv.org/abs/1903.08333v1

  78. Sivakorn, S., Polakis, J., Keromytis, A.D.: I’m not a human: breaking the Google reCAPTCHA. BlackHat (2016)

  79. Stoecklin, M.P.: DeepLocker: how AI can power a stealthy new breed of malware. Secur. Intell. 8 (2018)

  80. Stone, G., Talbert, D., Eberle, W.: Using AI/machine learning for reconnaissance activities during network penetration testing. In: International Conference on Cyber Warfare and Security. Academic Conferences International Limited (2021)

  81. Subramaniam, T., Jalab, H.A., Taqa, A.Y.: Overview of textual anti-spam filtering techniques. Int. J. Phys. Sci. 5, 1869–1882 (2010)

  82. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks (2013). https://arxiv.org/abs/1312.6199

  83. The MITRE Corporation: MITRE ATT&CK Matrix for Enterprise. https://attack.mitre.org/

  84. Tong, L., Yu, S., Alfeld, S., Vorobeychik, Y.: Adversarial regression with multiple learners (2018). https://arxiv.org/abs/1806.02256

  85. Valea, O., Oprişa, C.: Towards pentesting automation using the Metasploit framework. In: IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, Piscataway (2020)

  86. Wang, M., Su, P., Li, Q., Ying, L., Yang, Y., Feng, D.: Automatic polymorphic exploit generation for software vulnerabilities. In: International Conference on Security and Privacy in Communication Systems. Springer, Berlin (2013)

  87. Wichers, D., Williams, J.: OWASP TOP-10 2017. OWASP Foundation (2017)

  88. Yadav, T., Rao, A.M.: Technical aspects of cyber kill chain. In: International Symposium on Security in Computing and Communication. Springer, Berlin (2015)

  89. Yamin, M.M., Ullah, M., Ullah, H., Katt, B.: Weaponized AI for cyber attacks. J. Inform. Secur. Appl. 57, 102722 (2021)

  90. Yim, K.: A new noise mingling approach to protect the authentication password. In: International Conference on Complex, Intelligent and Software Intensive Systems (2010)

  91. Yu, N., Darling, K.: A low-cost approach to crack Python CAPTCHAs using AI-based chosen-plaintext attack. Appl. Sci. 9, 2010 (2019)

  92. Zargar, S.T., Joshi, J., Tipper, D.: A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Commun. Surv. Tutorials 15, 2046–2069 (2013)

  93. Zhang, R., Chen, X., Lu, J., Wen, S., Nepal, S., Xiang, Y.: Using AI to hack IA: a new stealthy spyware against voice assistance functions in smart phones (2018). https://arxiv.org/abs/1805.06187

Acknowledgements

The authors would like to thank the technical team of Adversa.AI for valuable feedback and insights throughout the draft stages of this chapter. Hannu Turtiainen would like to thank the Finnish Cultural Foundation / Suomen Kulttuurirahasto (https://skr.fi/en) for supporting his Ph.D. dissertation work and research (grant decision 00211119), and the Faculty of Information Technology of the University of Jyvaskyla (JYU), in particular Prof. Timo Hämäläinen, for partly supporting his Ph.D. supervision at JYU in 2021–2022.

The authors acknowledge icon authors Gregor Cresnar, Freepik, and Good Ware (courtesy of https://flaticon.com [23]) for their royalty-free icons that are used in Fig. 1.

Author information

Corresponding author

Correspondence to Andrei Costin.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Turtiainen, H., Costin, A., Polyakov, A., Hämäläinen, T. (2023). Offensive Machine Learning Methods and the Cyber Kill Chain. In: Sipola, T., Kokkonen, T., Karjalainen, M. (eds) Artificial Intelligence and Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-15030-2_6

  • DOI: https://doi.org/10.1007/978-3-031-15030-2_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15029-6

  • Online ISBN: 978-3-031-15030-2

  • eBook Packages: Computer Science, Computer Science (R0)