AI Security Architect at Deloitte
Worldwide
<p>We are seeking an experienced and highly skilled <strong>AI Security Architect</strong> to join our AI Security team in Israel. This is a <strong>hands-on, highly technical</strong> role responsible for defining security architecture and implementing robust security controls for our <strong>AI/ML systems and their underlying platforms</strong>.</p><p>You will serve as the team’s <strong>technical mentor and architecture authority</strong>, driving secure-by-design patterns across the AI/ML lifecycle (data, training, evaluation, deployment, and production monitoring) and proactively mitigating AI-specific threats such as <strong>model integrity risks, data poisoning, adversarial attacks, prompt injection, model extraction, and inference-time abuse</strong>. While you won’t manage people, you will <strong>lead technically</strong>, set standards, and guide engineers day-to-day through architecture, reviews, and delivery.</p><p><strong>Key Responsibilities:</strong></p><p><strong>Architecture & Secure-by-Design Leadership</strong></p><ul><li>Define and maintain <strong>AI security reference architectures</strong> for multiple AI deployment patterns, including <strong>MCP / Agentic AI</strong> and LLM application stacks (RAG, tools/plugins, agents, orchestration).</li><li>Establish and evolve <strong>security requirements, patterns, and guardrails</strong> across the AI/ML SDLC (design → build → run), including secure pipelines and platform controls.</li><li>Own AI security architecture decisions across critical domains: <strong>identity, secrets, data protection, network controls, tenancy boundaries, logging/telemetry, and isolation</strong> for training/inference.</li></ul><p><strong>Control Design & Implementation (Hands-on)</strong></p><ul><li>Design and deploy controls to ensure <strong>model integrity and governance</strong>, including <strong>RBAC/ABAC</strong> for models, feature stores, datasets, registries, and evaluation artifacts.</li><li>Build/enable 
technical mechanisms for <strong>provenance, attestation, signing, and approval workflows</strong> (where applicable) across datasets, models, prompts, and deployments.</li><li>Drive implementation of <strong>runtime protections</strong> for AI services (abuse prevention, rate limiting, input/output validation, prompt-injection mitigations, model endpoint hardening, and monitoring).</li></ul><p><strong>Threat Modeling, Assurance, and Risk Reduction</strong></p><ul><li>Conduct and lead <strong>AI/ML-specific threat modeling</strong> (data poisoning, model evasion, extraction, inversion, supply-chain, prompt attacks), translate findings into actionable backlogs, and drive remediation.</li><li>Define and run <strong>security design reviews</strong> for AI initiatives; provide clear, pragmatic architecture guidance and document exceptions with risk acceptance paths.</li><li>Establish <strong>AI security testing</strong> approaches (adversarial testing, red-teaming enablement, evaluation security, misuse/abuse cases) and integrate into delivery pipelines.</li></ul><p><strong>Tooling, Automation, and Operational Enablement</strong></p><ul><li>Design and deliver <strong>AI security tooling</strong> to improve and automate cybersecurity posture (e.g., controls coverage, policy-as-code, detection engineering, vulnerability management integration, incident response playbooks for AI-specific events).</li><li>Define <strong>logging/monitoring standards</strong> and detection use-cases for AI platforms and LLM apps (drift signals, anomalous access, suspicious prompt patterns, exfiltration indicators, policy violations).</li></ul><p><strong>Technical Mentorship & Influence (No Line Management)</strong></p><ul><li>Act as the team’s <strong>technical mentor</strong>: coach engineers through designs, implementations, and trade-offs; raise engineering quality via reviews, pairing, and knowledge sharing.</li><li>Lead by influence across Data Science, Engineering, Product, Platform, 
and Cybersecurity—driving alignment without formal authority.</li><li>Create internal enablement materials: <strong>runbooks, architecture standards, reusable patterns, and reference implementations</strong>.</li></ul> <br><h3>Requirements</h3> <p><strong>Required Qualifications</strong></p><p><strong>Experience</strong></p><ul><li><strong>6+ years</strong> in Information Security, Cloud Security, or Application Security.</li><li><strong>2+ years</strong> securing AI/ML systems or LLM applications in production (or equivalent depth in architecture and threat modeling for AI-enabled systems).</li><li>Proven track record designing security architectures and driving adoption across multiple teams.</li></ul><p><strong>Technical Expertise</strong></p><ul><li>Deep understanding of the <strong>ML/AI lifecycle</strong> and associated security risks (training/inference threats, data governance, evaluation integrity, model/prompt supply chain).</li><li>Strong expertise in <strong>cloud security</strong> (AWS/Azure/GCP) and AI/ML services (e.g., <strong>SageMaker, Vertex AI, Azure ML</strong>) plus container platforms/orchestration.</li><li>Strong knowledge of <strong>data security</strong> (classification, encryption, masking/tokenization, key management, lineage/provenance).</li><li>Strong knowledge of <strong>application security architecture</strong> and secure design patterns (API security, authz/authn, secrets, CI/CD, policy-as-code).</li><li>Deep understanding of AI-specific threats/defenses: <strong>adversarial ML, data poisoning, prompt injection, model inversion, model extraction, inference-time attacks</strong>.</li><li>Strong coding ability in <strong>Python and/or Go</strong> (building security tooling, automation, integrations, prototypes).</li></ul><p><strong>Soft Skills</strong></p><ul><li>Excellent communication—able to translate complex AI security risks into <strong>clear engineering requirements</strong> and decision-ready trade-offs.</li><li>Strong 
stakeholder management and the ability to <strong>drive alignment and delivery</strong> across diverse teams.</li><li>Practical, proactive mindset with strong problem-solving in ambiguous, fast-moving AI environments.</li></ul><p><strong>Preferred Qualifications</strong></p><ul><li>BA/BS degree required; advanced degree (MS/PhD) in Computer Science, Data Science, Cybersecurity, or a related field is a plus.</li><li>Certifications such as <strong>CISSP, CSSLP</strong>, and/or relevant cloud/security certifications; AI security-focused training is a plus.</li><li>Familiarity with AI security frameworks/standards and enterprise governance expectations.</li></ul><p>We at Deloitte believe that diversity and inclusion among our people is a critical component of our success, which is why we cultivate an organizational culture that welcomes and embraces diversity in all its forms.</p>