Profile:
UC Berkeley's Center for Long-Term Cybersecurity (CLTC), a research center focused on cybersecurity and AI risk, published the 67-page Agentic AI Risk-Management Standards Profile, Version 1.0 (Feb 2026). The voluntary profile extends the NIST (National Institute of Standards and Technology) AI Risk Management Framework with controls designed for autonomous, or "agentic," systems.
Read the full profile: Agentic AI Risk-Management Standards Profile (PDF)
Big idea:
Autonomy is not just on or off. The profile emphasizes practices that preserve meaningful human responsibility while allowing agents bounded autonomy within clearly defined limits.
What the tiers mean:
The profile's autonomy tiers, L0 through L5, give teams a simple way to describe how much decision-making power an agent has and which safeguards each level requires.
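As a rough sketch of how this looks in practice (the agent names, owners, and safeguards below are illustrative assumptions, not taken from the profile, and the tier definitions live in the PDF), a team might keep a simple registry recording each agent's tier and its safeguards:

```python
# Minimal sketch of an internal agent registry keyed to the profile's
# L0-L5 autonomy scale. Tier semantics are defined in the profile itself;
# the example entries here are hypothetical.
from dataclasses import dataclass, field
from enum import IntEnum


class AutonomyTier(IntEnum):
    """Autonomy tiers referenced by the profile; see the PDF for the
    authoritative definition of each level."""
    L0 = 0
    L1 = 1
    L2 = 2
    L3 = 3
    L4 = 4
    L5 = 5


@dataclass
class AgentRegistration:
    name: str
    tier: AutonomyTier
    owner: str                      # team accountable for the agent
    safeguards: list[str] = field(default_factory=list)


# Illustrative entries a team might keep for audit and review purposes.
REGISTRY = [
    AgentRegistration(
        name="campaign-optimizer",
        tier=AutonomyTier.L2,
        owner="ads-platform",
        safeguards=["human approval for budget changes", "immutable action log"],
    ),
]
```

A registry like this makes the tier assignment explicit and reviewable, so safeguards can be checked against the agent's declared level rather than decided ad hoc.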
Concrete controls the profile calls for:
Tiered autonomy limits, shutdown mechanisms, immutable activity logs, real-time monitoring, and contractual guardrails.
Reporting:
Developers and deployers should log incidents and near-misses and contribute to public repositories such as the AI Incident Database and MITRE ATLAS. Expect reports to cover problems like unsupervised execution, reward hacking (when an agent pursues a goal in unintended or harmful ways), self-proliferation, and shutdown resistance.
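A minimal sketch of what such a record could look like (the field names and example values are assumptions, not the profile's or the AI Incident Database's schema):

```python
# Hypothetical structured incident/near-miss record that could back internal
# logging and later submission to public repositories.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum


class IncidentType(str, Enum):
    UNSUPERVISED_EXECUTION = "unsupervised_execution"
    REWARD_HACKING = "reward_hacking"
    SELF_PROLIFERATION = "self_proliferation"
    SHUTDOWN_RESISTANCE = "shutdown_resistance"


@dataclass
class AgentIncident:
    agent_name: str
    incident_type: IncidentType
    near_miss: bool                 # True if caught before harm occurred
    description: str
    detected_at: str                # ISO 8601 timestamp


def new_incident(agent_name: str, incident_type: IncidentType,
                 near_miss: bool, description: str) -> AgentIncident:
    return AgentIncident(
        agent_name=agent_name,
        incident_type=incident_type,
        near_miss=near_miss,
        description=description,
        detected_at=datetime.now(timezone.utc).isoformat(),
    )


# Serialize to JSON for an internal incident queue or an external report.
record = new_incident("campaign-optimizer", IncidentType.REWARD_HACKING,
                      near_miss=True,
                      description="Agent inflated click metrics to hit its target.")
print(json.dumps(asdict(record), indent=2))
```

Capturing near-misses in the same format as incidents keeps the reporting habit cheap, so the record exists before something actually goes wrong.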
Who's on the hook:
Both developers and deployers of agentic systems.
Where this bites first:
High-stakes domains like health care and finance, where agent errors or reward hacking can cause real harm.
Why it matters now:
The ad industry is moving quickly toward agentic systems. The IAB Tech Lab published an Agentic Roadmap to avoid fragmentation, and Yahoo DSP turned on built-in agents across planning, activation, optimization, and measurement. This profile gives teams a practical playbook before autonomy scales.
Read the IAB announcement: IAB Tech Lab press release
Founder takeaway:
Map your agents to the L0-L5 tiers, ship shutdown mechanisms and immutable logs, add real-time monitoring, and set contractual guardrails. Do it now, before autonomy scales, to avoid operational and legal risk.
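To make the takeaway concrete, here is a minimal sketch of two of those controls: a kill switch checked before every agent action, and an append-only, hash-chained log that makes tampering detectable. The class and field names are illustrative, not prescribed by the profile.

```python
# Sketch of a kill switch plus a hash-chained, append-only action log.
import hashlib
import json
import time


class KillSwitch:
    """In-process shutdown flag; a production system would back this with an
    external store so operators can halt the agent out-of-band."""
    def __init__(self) -> None:
        self._stopped = False

    def trip(self) -> None:
        self._stopped = True

    def check(self) -> None:
        if self._stopped:
            raise RuntimeError("Agent halted by kill switch")


class ImmutableLog:
    """Append-only log where each entry hashes the previous entry, so any
    after-the-fact edit breaks the chain and is detectable on audit."""
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        entry = {"event": event, "prev": prev_hash, "ts": time.time()}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)

    def entries(self) -> list[dict]:
        return list(self._entries)  # return a copy so callers cannot mutate the log


# Usage: check the switch before every action, then record the action.
switch, log = KillSwitch(), ImmutableLog()
switch.check()
log.append({"agent": "campaign-optimizer", "action": "adjust_bid", "value": 1.25})
```

The point is not the specific data structure but the habit: every autonomous action passes a shutdown gate and leaves a record that cannot be quietly rewritten.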