
Research & writeups

Deep dives on AI security, agentic threat modeling, secure automation, and smart contract patterns.

CROSSOVER · 2025-03-18

Smart contracts taught us to audit AI agents — here's what carries over

Six years of blockchain auditing mapped to agentic security. Reentrancy, flash loans, oracle manipulation — the failure modes rhyme.

Smart Contracts · AI Agents · Threat Model · Deep Dive
GUIDE · 2025-03-01

You automated your business with AI. Here's what you probably didn't secure.

Four security problems in common AI automations — prompt injection, data leakage, credential management, and silent failures — with practical fixes.

Automation · Small Business · Practical Guide · Checklist
FRAMEWORK · 2025-02-15

ASTRIDE: a threat modeling framework for agentic AI systems

STRIDE extended with three new threat categories for LLM agents: Confused Deputy, Context Pollution, and Trust Boundary Violation. Open spec, MIT licensed.

Framework · Threat Modeling · Open Spec · STRIDE · Free to Use