From Agent-on-Agent Fraud to Autonomous AI Attacks: The Next Escalation
When Your AI Agents Start Tricking Each Other

In 2024, the big worry was humans being fooled by AI. By 2026, a new threat is emerging: Agent-on-Agent (AoA) fraud. As more organizations rely on AI agents for procurement, treasury, and customer service, the traditional “perimeter of trust” is disappearing. Fraud isn’t just human-to-machine anymore. Now, it can happen machine-to-machine, where one autonomous AI agent tricks another.
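One way to restore a machine-to-machine perimeter of trust is to make agents verify each other's requests cryptographically instead of trusting fluent messages. The sketch below is illustrative only, not from the article: the function names, the shared key, and the payment payload are all assumptions, and it uses a pre-shared HMAC key for simplicity (a production setup would use per-agent credentials and key rotation).

```python
# Minimal sketch: an agent verifies that a request really came from a peer
# agent it trusts, using a pre-shared key and an HMAC signature.
# All names here (sign_request, verify_request, SHARED_KEY) are illustrative.
import hashlib
import hmac
import json

SHARED_KEY = b"example-key-provisioned-out-of-band"  # assumption: exchanged securely

def sign_request(payload: dict, key: bytes = SHARED_KEY) -> str:
    """Sign a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_request(payload: dict, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Reject any request whose signature does not match, however plausible it sounds."""
    expected = sign_request(payload, key)
    return hmac.compare_digest(expected, signature)

# A legitimate agent signs its instruction...
order = {"action": "pay_invoice", "amount": 9500, "vendor": "ACME"}
sig = sign_request(order)
assert verify_request(order, sig)

# ...while a tampered instruction from a rogue agent fails verification.
tampered = {**order, "vendor": "EvilCorp"}
assert not verify_request(tampered, sig)
```

The point of the sketch is that trust moves from the message content (which an attacking agent can imitate perfectly) to a credential the attacking agent does not hold.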
Vibe Coding: Hype, Reality, and What It Actually Means
Building AI Knowledge Bases That Actually Work
Most AI projects don’t fail because of the model — they fail because the knowledge base behind it is messy, outdated, or impossible to retrieve from. Before embeddings, vector databases, or fancy architectures, the real work starts with how your documents are written, structured, and indexed.
In this piece, I break down what it actually means to prepare a knowledge base for AI agents in security, compliance, and risk environments. We’ll look at when indexing really matters, how to structure documents so AI can retrieve answers faster, and why clear titles, meaningful tags, and proper chunking often outperform more complex solutions.
I also share practical examples, a simple readiness checklist, and lessons learned from building AI systems that had to stand up to audits and incident reviews — not just demos.
If you wouldn’t trust a document in an investigation, your AI shouldn’t either.
How AI Is Connecting Analysis, Threat Hunting, and Cloud Investigations
Security, fraud, and cybersecurity teams face a growing challenge: protecting increasingly complex environments shaped by cloud adoption, AI, and IoT, while threats evolve faster than ever. Bad actors are also leveraging AI to scale and accelerate their attacks.
AI is no longer optional; it has become essential. When used correctly, AI can improve security posture and operational efficiency, not by replacing people but by connecting critical security functions and breaking down silos.