Build the Immune System for Agentic AI.
Autonomous AI systems are shipping into production faster than the security infrastructure to govern them. We're building that infrastructure. If that problem keeps you up at night, we should talk.
Why This Matters Now
We don't do "Patch Tuesday" for AI agents. We're architecting the control plane that will govern millions of autonomous systems: runtime behavioral analysis, threat detection, guardrail enforcement, governance at scale. The security gap is real, it's growing, and the window to close it is now.
Adversarial by Default
To defend agents, you have to know how to break them. We think in attack surfaces, failure modes, and trust boundaries. We build systems that assume compromise, not systems that assume everything works.
No Borrowed Patterns
The security models built for request-response architectures don't map to autonomous, multi-step, tool-using agents. We're not adapting old frameworks. We're designing new primitives, informed by what came before, but not constrained by it.
Open Positions
Software Engineer
Engineering
Architect the high-scale distributed systems that will intercept, evaluate, and govern agent capabilities in real time. You will build the engine that makes the "Trust Control Plane" possible.
You are:
- A distributed systems expert
- Obsessed with latency & throughput
- A polyglot (Go, Rust, TS)
Security Engineer
Security
Define the threat models for the agentic era. You will build the simulation engines and runtime defenses that protect agents from prompt injection, tool abuse, and logic subversion.
You are:
- A breaker & builder
- Deeply knowledgeable in LLM security
- Comfortable with ambiguity
Agentic Security Researcher
Research
Probe the boundaries of autonomous AI systems. You will design and execute adversarial experiments against agentic workflows, uncover novel failure modes, and turn findings into defensive primitives that ship into the platform.
You are:
- Deep in LLM internals and agent architectures
- Published or active in AI safety/security research
- Driven to turn research into production defenses
Applied ML Engineer
Machine Learning
Build the ML models that power runtime behavioral analysis, anomaly detection, and trust scoring for autonomous agents. You will turn research into production-grade systems that operate at scale under strict latency requirements.
You are:
- Experienced in production ML pipelines
- Strong in anomaly detection and behavioral modeling
- Comfortable shipping models with real-time constraints