Cybersecurity team monitors a global threat map in a data center, responding to AI-driven attacks.
Anthropic’s recent introduction of Claude Mythos Preview, an AI model with advanced coding and agentic capabilities, underscores both the potential and the risks of AI in cybersecurity. While it can autonomously identify vulnerabilities, it also amplifies the scale of cyberattacks, raising concerns about the security landscape.
For years, cybersecurity has evolved alongside infrastructure shifts. Now, AI is deeply embedded in enterprise workflows and data pipelines, creating new attack surfaces. Prompt injection and data poisoning are emerging threats, but the bigger concern is the vulnerability of AI infrastructure itself. Misconfigured access to tools like Google Gemini can expose sensitive enterprise data, and compromised APIs can propagate malicious data unknowingly.
Attackers are increasingly targeting the supply chains of AI systems, including third-party libraries, APIs, and data sources. A recent LiteLLM breach, where attackers compromised a dependency library used by an AI gateway, illustrates this risk. OpenAI also disclosed a supply chain incident involving a compromised Axios dependency in its macOS signing workflow.
AI is also changing the speed and scale of cyber threats. Attackers can now continuously scan systems, identify vulnerabilities, and exploit them without manual triggers. To counter this, some companies are moving towards continuous threat exposure management (CTEM) models.
Neeraj Chauhan, the global CISO of PayU, notes the growing asymmetry in the fintech space, where AI is improving fraud detection but also enabling attackers to evolve faster. He highlighted emerging gaps, including AI-generated deepfakes bypassing KYC systems and shrinking attack timelines.
One widely adopted approach to AI security is the use of guardrails, which are predefined rules to restrict model behavior. However, experts caution that attackers can often bypass these through alternative phrasing or contextual manipulation.
Enterprises are shifting towards more structural approaches:
- Least Privilege Access: Limiting what an AI system can see and do.
- Continuous Visibility: Understanding how attack paths evolve.
- Resilience-First Design: Building systems that can detect, respond, and recover quickly.
FYERS, an AI-first online trading platform, combines strong authentication, role-based access controls, and real-time behavioral monitoring with ML-driven detection systems to surface anomalies early. Similarly, Deep Algorithms’ systems continuously test themselves rather than relying on periodic checks.
As attack surfaces expand across systems, supply chains, and infrastructure, and threats become faster and harder to detect, organizations must rethink ways to become hypervigilant.