Remote AI Safety / Red Teaming / Evaluations Jobs

Server-rendered list of remote AI Safety / Red Teaming / Evaluations roles for crawlers and assistive tech.

Loading content...

Frequently Asked Questions

What is AI safety red teaming?

AI safety red teaming is the practice of adversarially testing AI systems to find failure modes, harmful outputs, and vulnerabilities before deployment. Red teamers attempt to elicit dangerous, biased, or unexpected behavior from AI models to inform safety improvements.

What background is needed for AI safety red teaming?

Red teaming roles value diverse backgrounds — security researchers, social scientists, policy experts, and ML engineers all contribute. Common skills include adversarial thinking, knowledge of LLM failure modes, prompt engineering, and familiarity with AI safety research.

Which organizations hire AI safety red teamers?

Leading AI labs (Anthropic, OpenAI, Google DeepMind, Meta AI) run internal red teams. Government agencies, defense contractors, and specialized AI safety organizations also hire. The field is growing rapidly as AI regulation increases.