Researchers Are Right: AI Security Isn't Broken. It Was Never Designed for Intent.

ArmorIQ

license@armoriq.io

Jan 12, 2026
5 min read
AI security · prompt injection · intent enforcement · Business Insider · Sander Schulhoff

A recent Business Insider article highlighted research by Sander Schulhoff that should give every enterprise deploying AI pause. The research shows how easily modern AI systems can be redirected, manipulated, or coerced into doing things they were never meant to do. Clever prompts override safeguards. Context reshapes behavior. Systems follow instructions they should ignore. The article frames this as an AI security gap. That description is accurate, but incomplete.

What the research actually reveals is not that AI systems are insecure in the traditional sense. It reveals that AI systems are operating without a way to enforce intent. And without intent enforcement, security controls are always one step behind.

Why Prompt Injection Keeps Working

Prompt injection is often treated as a technical vulnerability. Patch the model. Improve filters. Add heuristics. Train better classifiers.

Schulhoff's research shows why this approach keeps failing. Prompt injection works because AI systems do exactly what they are designed to do. They interpret language, weigh context, and reason about what action to take next.

The system is not malfunctioning. It is following its reasoning. Existing defenses try to detect malicious inputs or anomalous outputs. But detection is probabilistic. Clever inputs blend in. Benign context becomes instruction. Models comply because nothing in the system definitively says, "This action does not belong to the task you were given." That is not a prompt problem. It is a control problem.

The Missing Question in Every AI System

Most security frameworks ask three questions:

  • Who is acting?
  • What can they access?
  • Where are they operating?

AI systems answer all three reasonably well. Identities exist. Permissions are scoped. Logs are generated. The question they cannot answer is the most important one.

Why is this system taking this action?

When an AI agent calls an API, modifies data, sends a message, or triggers a workflow, existing controls can confirm that it was allowed. They cannot confirm that it was appropriate. Prompt injection succeeds because the system has no ground truth for intent. It cannot distinguish between an action that belongs to the original task and one that emerged from adversarial influence or reasoning drift.
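To make the gap concrete, here is a minimal, hypothetical sketch (every name is invented for illustration): a conventional permission check passes an action because the agent holds the right scope, while an intent check rejects it because the action does not belong to the task the agent was actually given.

```python
# Hypothetical sketch of "authorized vs. appropriate". All names are
# illustrative; this is not ArmorIQ's API.

# Traditional control plane: what is this identity allowed to do at all?
AGENT_PERMISSIONS = {"support-agent": {"crm.read", "email.send"}}

# The task the agent was actually asked to perform.
TASK_INTENT = {"task": "summarize open tickets", "allowed_tools": {"crm.read"}}

def rbac_allows(agent: str, tool: str) -> bool:
    """Permission check: is the agent permitted to call this tool?"""
    return tool in AGENT_PERMISSIONS.get(agent, set())

def intent_allows(tool: str) -> bool:
    """Intent check: does this tool call belong to the approved task?"""
    return tool in TASK_INTENT["allowed_tools"]

# An injected instruction steers the agent toward sending an email.
action = "email.send"

print(rbac_allows("support-agent", action))  # True: technically authorized
print(intent_allows(action))                 # False: not part of the task
```

Both checks consult the same action; only the second one has a ground truth for why the system is acting, which is exactly what the paragraph above says existing controls lack.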

Why This Is an Enterprise Problem, Not an AI Bug

The Business Insider article emphasizes that many companies underestimate how easily AI systems can be manipulated. That underestimation is understandable. Traditional software does not reinterpret its instructions mid-execution. Autonomous AI does.

Enterprises are deploying systems that reason in natural language, adapt to context, and make decisions dynamically. They are doing so using control planes built for static logic and human actors.

The result is predictable. Actions are technically authorized but operationally wrong. Incidents are explainable only after the fact. Oversight becomes reactive. Teams compensate with more reviews, more monitoring, and more manual intervention.

This is why SREs, CISOs, compliance teams, and platform leaders all feel unease at the same time, even if they describe it differently.

Why Better Detection Won't Fix This

The instinctive response to prompt injection is better detection. More sophisticated filters. AI watching AI. Behavioral scoring. These approaches are useful, but they are not sufficient. Detection tells you that something suspicious happened. It does not prevent the action from executing. By the time an action is flagged, the damage may already be done.

Schulhoff's research implicitly points to this limitation. As long as systems rely on inference to decide whether an action is acceptable, attackers will find ways to steer that inference. Security cannot be built on guessing intent. It must be built on enforcing it.

What Changes When Intent Is Enforced

ArmorIQ approaches AI security from a different starting point.

Instead of asking whether an input looks malicious or an output looks wrong, the Intent Assurance Plane asks a simpler, more decisive question before execution.

Does this action belong to the task the system was authorized to perform?

In an intent-aware system, every AI-driven task begins with an explicit plan. That plan defines what the system is allowed to do, which tools it may use, and where the boundaries lie. The plan is cryptographically anchored and becomes the source of authority.

When the AI attempts an action, the system verifies that the action is part of the approved intent. If it is not, execution is blocked. Prompt injection attempts lose their power because they cannot expand authority. Reasoning drift cannot turn into side effects.

This does not require understanding or classifying the prompt. It does not depend on predicting attacker creativity. It simply enforces purpose.
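ArmorIQ does not publish its implementation, but the pattern described above can be sketched in a few lines (all names and the HMAC scheme are assumptions for illustration): the plan is canonicalized and signed once at task start, and every action is verified against that anchored plan before it executes.

```python
# Hypothetical sketch of intent enforcement, not ArmorIQ's implementation.
# The plan is signed once when the task is authorized; each action is
# checked against the signed plan before execution.
import hashlib
import hmac
import json

SECRET = b"control-plane-key"  # held by the control plane, never the agent

def anchor_plan(plan: dict) -> str:
    """Canonicalize and sign the plan so it cannot be altered mid-task."""
    payload = json.dumps(plan, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_action(plan: dict, signature: str, tool: str) -> bool:
    """Block any action whose plan was tampered with, or that falls
    outside the approved tool set. No prompt inspection required."""
    if not hmac.compare_digest(anchor_plan(plan), signature):
        return False  # plan was modified after approval
    return tool in plan["allowed_tools"]

plan = {"task": "refund order", "allowed_tools": ["orders.read", "payments.refund"]}
sig = anchor_plan(plan)

print(verify_action(plan, sig, "payments.refund"))  # True: within intent
print(verify_action(plan, sig, "email.send"))       # False: injected action blocked
```

Note what the check never looks at: the prompt. An injected instruction can change what the model wants to do, but it cannot change the signed plan, so it cannot expand authority.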

Why This Reframes the Research Warnings

The Business Insider article and Schulhoff's research are not arguing that AI is unsafe. They are arguing that enterprises are deploying AI without the right control model. Prompt injection is a symptom. The root cause is that intent remains implicit and unenforced.

Once intent becomes explicit and verifiable, many classes of attacks stop mattering. An AI can be exposed to untrusted input without being allowed to act on it. The system can reason freely without being granted new authority. That is the difference between intelligence and autonomy.

The Takeaway

The warning from AI security researchers is real, and enterprises should take it seriously. But the conclusion should not be panic or paralysis. It should be architecture. AI systems are not broken. They are behaving exactly as designed. What's missing is a control plane that matches how they behave.

ArmorIQ's Intent Assurance Plane fills that gap. It turns intent from an assumption into an enforceable contract. It allows enterprises to deploy AI systems that reason, adapt, and interact with the world, without accepting unlimited risk. Prompt injection will keep working as long as systems guess intent. It stops working when intent is enforced.

ArmorIQ

Security Expert at ArmorIQ