Notes from BSidesSF 2026 (March 22, 2026). Six talks on security in the age of AI.

  • A Blueprint for Building a Generic Authorization Service – Ashwin & Fletcher (Roblox)

    Roblox built Guard, a centralized authorization control plane using Topaz/OPA sidecars. The MCP server security angle was unexpected: the same identity-agnostic framework covers humans, agents, and workloads.

    authorization opa microservices
  • Seeing the Forest Through the Trees: A Business Approach to Risk and Threat Modeling – Sean (SoundCloud)

    Practical framework for translating security risks into dollars for execs. 68% of breaches involve non-malicious human factors, and 98% of those are stopped by MFA. The layered data flow diagram approach was immediately useful.

    risk threat-modeling
  • The Epistemology of Trust – Mike Wilkes (Former CISO)

    Philosophical but grounded: shift focus from breach prevention to breach cadence. ‘Backups are useless, it’s restores that matter.’ The AI sandbagging research from Anthropic was a sobering addition.

    ai trust resilience
  • The Great Credential Caper: How to Perform and Defend Against the Nearly Impossible to Defend – Dan Hollinger & Christo (Cloudflare)

    Live demo of Claude Code + Playwright solving CAPTCHAs autonomously was jaw-dropping. The ‘parfait model’ of layered defense across password, request, account, and agent layers is the right framing for the post-bot-score era.

    credentials ai bots
  • The Tyranny of Optimization and the Stability of Automated Governments – Katie Moussouris

    Bug bounties are drowning in AI-generated ‘slop’ — curl shut down theirs entirely. The K-shaped economy framing and the critique of the US ‘dominance over safety’ AI policy were sharp.

    ai policy bug-bounty
  • Your AI Agent Has Production Access, Now What? – Jack (Anthropic)

    Best talk of the day. The ‘lethal trifecta’ (egress + sensitive data + untrusted input) is a clean mental model. Tool proxies for credential isolation and using agent transcripts as ‘confessions’ during incident response were immediately actionable.

    ai-agents security sandboxing
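A minimal sketch of how the Guard-style sidecar pattern works in practice, assuming an OPA-compatible sidecar listening on the standard local Data API port. The policy path (`guard/allow`) and input fields are my own illustrative assumptions, not the actual Guard API:

```python
import json
import urllib.request

# Assumed local sidecar endpoint; OPA's Data API is POST /v1/data/<policy path>.
OPA_URL = "http://localhost:8181/v1/data/guard/allow"

def build_authz_input(subject: str, action: str, resource: str) -> dict:
    """Identity-agnostic input: the subject may equally be a human,
    an AI agent, or a workload identity."""
    return {"input": {"subject": subject, "action": action, "resource": resource}}

def check_authz(subject: str, action: str, resource: str) -> bool:
    """Ask the sidecar for a decision; deny by default if no result comes back."""
    payload = json.dumps(build_authz_input(subject, action, resource)).encode()
    req = urllib.request.Request(
        OPA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("result", False)
```

The appeal of the sidecar model is that the calling service never embeds policy logic; swapping who the subject is (human vs. agent) changes nothing in the integration.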
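The ‘lethal trifecta’ from Jack's talk reduces to a simple conjunction, which is what makes it a usable gate in front of agent tool calls. A sketch under my own naming (the type and field names here are illustrative, not from the talk):

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    can_egress: bool           # e.g. HTTP access, email, webhooks
    touches_sensitive: bool    # e.g. credentials, customer data
    has_untrusted_input: bool  # e.g. web content or inbound email in context

def is_lethal_trifecta(call: ToolCall) -> bool:
    """Only the combination of all three enables exfiltration;
    removing any one leg breaks the trifecta."""
    return call.can_egress and call.touches_sensitive and call.has_untrusted_input

risky = ToolCall(can_egress=True, touches_sensitive=True, has_untrusted_input=True)
# Cutting egress (e.g. via a tool proxy) is one way to break the conjunction:
mitigated = ToolCall(can_egress=False, touches_sensitive=True, has_untrusted_input=True)
```

The design point is that a policy gate does not need to eliminate any single risk outright, only to ensure all three never co-occur in one agent context.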