Product Leader (GenAI Safety Evaluation) - Platform Responsibility
TikTok
Worldwide
Server-rendered list of remote AI Safety / Red Teaming / Evaluations roles for crawlers and assistive tech.
TikTok
Worldwide
Salesforce
United States
$237.7k - $344.7k
Elevenlabs
United Kingdom
Zoom
United States
$146.7k - $339.3k
AI safety red teaming is the practice of adversarially testing AI systems to find failure modes, harmful outputs, and vulnerabilities before deployment. Red teamers attempt to elicit dangerous, biased, or unexpected behavior from AI models to inform safety improvements.
Red teaming roles value diverse backgrounds — security researchers, social scientists, policy experts, and ML engineers all contribute. Common skills include adversarial thinking, knowledge of LLM failure modes, prompt engineering, and familiarity with AI safety research.
Leading AI labs (Anthropic, OpenAI, Google DeepMind, Meta AI) run internal red teams. Government agencies, defense contractors, and specialized AI safety organizations also hire. The field is growing rapidly as AI regulation increases.