Global AI Agent Regulation Tracker
Governments worldwide are racing to regulate autonomous AI agents. NIST, Taiwan, the EU, South Korea, Japan — the frameworks being written right now will define what “securely deployed AI agent” means for the next decade. IBA is at the table.
NIST Launches AI Agent Standards Initiative — Pillar 3: “Identity and Authorization”
The Center for AI Standards and Innovation (CAISI) at NIST announced the AI Agent Standards Initiative — a federal program to make autonomous AI agents secure, trusted, and interoperable across the US economy. The third pillar directly addresses the architectural gap that caused the OpenClaw crisis: formal authorization for autonomous agent actions.
Docket NIST-2025-0035 · Submit at regulations.gov · Closes March 9, 2026 at 11:59pm ET
NIST’s CAISI launched a three-pillar AI Agent Standards Initiative on February 17, 2026, alongside an open Request for Information on securing AI agent systems. The companion ITL concept paper is explicitly titled “AI Agent Identity and Authorization.” Voluntary guidelines expected late 2026.
Taiwan’s Legislative Yuan passed the AI Basic Act on December 23, 2025 — the island’s first comprehensive AI governance framework. Designates the National Science and Technology Council (NSTC) as central authority. Mandates risk classification aligned with international standards, with MODA targeting Q1 2026 for the framework. Seven core principles include cybersecurity, human autonomy, and accountability.
The EU AI Act reached full enforcement on August 2, 2025. The world’s first comprehensive AI regulation imposes binding obligations on high-risk AI systems, requires conformity assessments, and mandates human oversight mechanisms. Agentic AI systems interacting with critical infrastructure face the highest compliance burden. Fines reach up to €35M or 7% of global turnover.
South Korea’s “Basic Act on the Development of AI and the Establishment of Foundation for Trustworthiness” came into effect January 1, 2026. Balances national AI competitiveness with risk mitigation — specifically targeting high-impact AI systems with regulatory provisions. Establishes a national AI committee and requires impact assessments for high-risk deployments.
Japan’s first AI-specific legislation passed the National Diet on May 28, 2025. Adopts a light-touch regulatory approach favoring governmental guidance over binding rules — consistent with Japan’s historic technology policy stance. Focuses on trustworthy AI and innovation promotion. Implementing guidelines expected through 2026.
Australia’s government is reviewing AI legislation after its earlier legislative push stalled. Canada is advancing its Artificial Intelligence and Data Act (AIDA). The UK’s pro-innovation approach is being formalized. India and Singapore are advancing sector-specific frameworks. The global regulatory wave is accelerating, driven by incidents like OpenClaw and EU compliance pressure on multinationals.
NIST AI Agent Standards Initiative Announced
CAISI launches three-pillar initiative. Pillar 3: AI agent identity and authorization. ITL concept paper titled “AI Agent Identity and Authorization.” IBA filing formal RFI response.
South Korea AI Basic Act Takes Effect
South Korea’s AI Basic Act (SKAIA) becomes operative. High-impact AI systems face mandatory risk assessments. National AI committee established to oversee implementation.
Taiwan AI Basic Act Passes Third Reading
Legislative Yuan passes landmark AI governance law. NSTC designated central authority. MODA developing risk classification framework targeting Q1 2026. Seven core principles including cybersecurity and accountability.
EU AI Act Full Enforcement Begins
World’s first comprehensive AI regulation enters full force. High-risk AI systems face binding conformity requirements. Agentic systems in critical infrastructure under highest scrutiny.
Japan AI Promotion Act Passes National Diet
Japan’s first AI-specific legislation. Light-touch, guidance-based approach. Implementing regulations developing through 2026.
NIST CAISI Issues RFI on AI Agent Security
Federal Register docket NIST-2025-0035. Seeking input on authorization, prompt injection defense, persistent memory risks, and deployment safeguards. IBA submitting framework and OpenClaw incident analysis.
Why Every Regulatory Framework Converges on Authorization
From NIST’s “Identity and Authorization” pillar to Taiwan’s cybersecurity principle to the EU AI Act’s human oversight requirements — every major AI governance framework independently arrives at the same fundamental requirement: autonomous agents must have a verifiable, auditable mechanism for authorization that operates independently of the model.
Intent-Bounded Authorization (IBA) is that mechanism. Patent GB2603013.0 describes a formally verifiable framework that satisfies the authorization requirements of every current and emerging regulatory regime simultaneously, because it addresses the architectural root cause rather than jurisdiction-specific symptoms.
- Model-agnostic — works above any LLM, any jurisdiction
- Formally verifiable — produces auditable authorization records
- Trajectory-aware — evaluates sequences, not just individual actions
- Temporally bounded — defeats memory-based attacks automatically
- EU AI Act compliant — satisfies human oversight requirements
- NIST-aligned — directly addresses Pillar 3 authorization gap
- Taiwan-ready — maps to cybersecurity & accountability principles
- Scales across tiers — consumer, enterprise, critical infrastructure
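The properties above can be illustrated with a minimal sketch. This is not the IBA reference implementation; every name here (`Intent`, `Authorizer`, the field names, the budget rule) is a hypothetical stand-in showing how an authorization layer can sit above the model, evaluate the trajectory of actions rather than single calls, enforce a temporal bound, and emit an auditable record of every decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Intent:
    """A user-stated goal: an allow-list of actions, an expiry, and a
    trajectory budget. All names are illustrative, not from the patent."""
    allowed_actions: frozenset
    expires_at: datetime
    max_actions: int

@dataclass
class Authorizer:
    intent: Intent
    trajectory: list = field(default_factory=list)  # actions approved so far
    audit_log: list = field(default_factory=list)   # verifiable decision record

    def authorize(self, action: str, now: datetime) -> bool:
        # Temporal bound: a stale intent (e.g. one planted in persistent
        # memory) is rejected regardless of what the model proposes.
        if now > self.intent.expires_at:
            decision, reason = False, "intent expired"
        # Trajectory check: the cumulative sequence, not just this action,
        # must stay within the stated intent's budget.
        elif len(self.trajectory) >= self.intent.max_actions:
            decision, reason = False, "trajectory budget exhausted"
        # Per-action check: the action itself must be within the intent.
        elif action not in self.intent.allowed_actions:
            decision, reason = False, "action outside intent"
        else:
            decision, reason = True, "within intent"
            self.trajectory.append(action)
        # Every decision, allowed or denied, lands in the audit record.
        self.audit_log.append((now.isoformat(), action, decision, reason))
        return decision
```

Because the check runs above the model and depends only on the declared intent, the clock, and the recorded trajectory, the same logic applies unchanged whichever LLM produced the proposed action.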
Submit to NIST RFI
Docket NIST-2025-0035 closes March 9. Your input shapes the standard.
regulations.gov →
Read the IBA Framework
Open-source reference implementation. Apache 2.0. Patent GB2603013.0.
IntentBound.com →