Albert Planas

From Agent-on-Agent Fraud to Autonomous AI Attacks: The Next Escalation

In 2024, the big worry was humans being fooled by AI. By 2026, a new threat is emerging: Agent-on-Agent (AoA) fraud. As more organizations rely on AI agents for procurement, treasury, and customer service, the traditional “perimeter of trust” is disappearing. Fraud isn’t just human-to-machine anymore. Now, it can happen machine-to-machine, where one autonomous AI agent tricks another.

Read More
Albert Planas

When Your AI Agents Start Tricking Each Other


Read More
Albert Planas

Vibe Coding: Hype, Reality, and What It Actually Means

Most AI projects don’t fail because of the model. They fail because the knowledge base behind it is messy, outdated, or impossible to retrieve from. Before embeddings, vector databases, or fancy architectures, the real work starts with how your documents are written, structured, and indexed.

In this piece, I break down what it actually means to prepare a knowledge base for AI agents in security, compliance, and risk environments. We’ll look at when indexing really matters, how to structure documents so AI can retrieve answers faster, and why clear titles, meaningful tags, and proper chunking often outperform more complex solutions.

I also share practical examples, a simple readiness checklist, and lessons learned from building AI systems that had to stand up to audits and incident reviews, not just demos.

If you wouldn’t trust a document in an investigation, your AI shouldn’t either.

Read More