Description
Artificial Intelligence (AI) is transforming cybersecurity practice by providing sophisticated tools that enable the detection of, defence against, and response to cyber attacks at a scale and speed beyond human capability. By applying machine learning models and large language models (LLMs), AI can analyse enormous volumes of data to identify patterns, recognise anomalies, and initiate responses to newly emerging threats. Yet the same dual-use technology also allows attackers to mount sophisticated attacks, including adversarial AI and data poisoning, challenging well-established defence strategies.
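Anomaly detection over traffic telemetry is one common defensive application of the machine learning described above. The following is a minimal sketch, assuming scikit-learn; the flow features (bytes, duration, packet count) and all values are synthetic and purely illustrative, not drawn from the source.

```python
# Minimal sketch: unsupervised anomaly detection over synthetic network-flow
# features. Feature choices (bytes, duration, packets) are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" traffic: tightly clustered flow statistics.
normal = rng.normal(loc=[500, 1.0, 40], scale=[50, 0.2, 5], size=(1000, 3))
# Simulate a handful of anomalous flows: unusually large, long transfers.
anomalies = rng.normal(loc=[5000, 10.0, 400], scale=[500, 2.0, 50], size=(10, 3))

X = np.vstack([normal, anomalies])

# Isolation Forest flags points that are isolated by short random partitions.
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal

print(f"Flagged {int((labels == -1).sum())} of {len(X)} flows as anomalous")
```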
- The intersection of AI with cybersecurity has sector-specific consequences. In the BFSI (banking, financial services and insurance) sector, for example, AI-powered fraud detection systems face regulatory compliance and transparency challenges. Telecommunications networks are increasingly susceptible to deepfake fraud and AI-powered attacks on infrastructure. Digital Public Infrastructure (DPI) faces distinct threats across public and private deployments, ranging from citizen data governance concerns in government systems to accountability concerns in private sector deployments. These sectoral threats form the foundation for sector-specific approaches to containing AI-powered risks.
- The analysis draws on established taxonomies of cyber attacks and their corresponding mitigations, such as the standards published by the National Institute of Standards and Technology (NIST), in particular NIST AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. These taxonomies are used to categorise threats, such as evasion and poisoning attacks on predictive AI systems (a toy illustration of an evasion attack follows this list), and to suggest mitigations such as zero-trust frameworks and encryption.
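To make the evasion category concrete, below is a minimal, self-contained sketch of an FGSM-style evasion attack against a toy linear classifier. The weights, feature vector, perturbation budget, and 0.5 threshold are all hypothetical choices for illustration; they only demonstrate the mechanics of the attack class that NIST AI 100-2 names, not any particular deployed system.

```python
# Minimal sketch of an evasion (FGSM-style) attack on a linear classifier.
# All weights and inputs are synthetic; real attacks target deployed models.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "malware detector": logistic regression with fixed weights.
w = np.array([0.8, -0.5, 1.2, 0.3])
b = -0.1

x = np.array([1.0, 0.4, 0.9, 0.2])  # sample currently scored as malicious
p = sigmoid(w @ x + b)
print(f"Original score:    {p:.3f}")  # ~0.84, above the 0.5 threshold

# FGSM step: perturb each feature in the direction that most reduces the
# malicious score. For a linear model the gradient of the logit w.r.t. x
# is simply w, so the attack subtracts eps * sign(w).
eps = 0.7  # hypothetical perturbation budget
x_adv = x - eps * np.sign(w)

p_adv = sigmoid(w @ x_adv + b)
print(f"Adversarial score: {p_adv:.3f}")  # ~0.42, now below the threshold
```

The same sign-of-gradient logic underlies evasion attacks on deep models, where the gradient is obtained by backpropagation rather than read directly from the weights.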