AI safety & security controls/Associate Director, Software Engineering Specialist at HSBC
Worldwide
<p><span style="">Some careers shine brighter than others.</span></p><p><span style="">If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.</span></p><p><span style="">HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.</span></p><p>We are currently seeking an experienced professional to join our team in the role of Associate Director, Software Engineering Specialist.</p><p><span style="">Key Responsibilities:</span></p><ul><li><span style="">Lead the design, implementation, and scaling of enterprise AI safety and security guardrail services for LLM and agent ecosystems, combining deep AI safety engineering with high-performance Python backend development (FastAPI/Django), container/Kubernetes operations, and cross-functional stakeholder alignment (Cybersecurity, Model Risk Management, Privacy, Compliance).</span></li><li style=""><span style="">Drive vendor collaboration and internal architecture strategy for resilient, auditable, low-latency guardrail enforcement.</span></li><li style=""><span style="">Own the technical architecture for multi-layer guardrails: request pre-processing, model mediation, post-output validation, and agent step auditing.</span></li><li style=""><span style="">Design and extend services for jailbreak and prompt/indirect-injection detection; retrieval-poisoning defense; PII detection, redaction, masking, and tokenization; toxicity, hate, harassment, self-harm, and extremism filtering; data loss prevention (secrets, source provenance, outbound scanning); sensitive-credential scanning; hallucination, factuality, and citation enforcement; misinformation (medical/financial); regulatory compliance filters; copyright, IP, and brand safety; policy orchestration (allow/block/redact/escalate/human review); agent session safety (tool-invocation constraints, multi-step trace auditing); and metrics and experimentation APIs.</span></li><li style=""><span style="">Architect Python microservices (FastAPI/Django, async I/O, streaming) with versioned policies, feature flags, and canary and shadow deployments.</span></li><li style=""><span style="">Build the evaluation framework: automated red-teaming suites, adversarial scenario generation, precision/recall/F1 dashboards, safety regression gates, latency/cost profiling, and drift detection (content, behavior, model).</span></li><li style=""><span style="">Integrate ML/NLP components (Presidio, spaCy, Hugging Face, vector similarity, custom classifiers, rule + ML ensembles, entropy and checksum validators).</span></li><li style=""><span style="">Productionize RAG safety: document provenance scoring, source trust tiers, citation completeness, and leakage prevention.</span></li><li style=""><span style="">Implement full observability: structured audit logs, OpenTelemetry tracing, Prometheus metrics (guardrail hit ratios, decision distributions, latency percentiles), and incident classification feeds.</span></li><li style=""><span 
style="">Harden services: authN/Z (OIDC/OAuth2, service principals), rate limiting, circuit breakers, sandboxing, secure configuration and secrets management, runtime isolation, network policies, and API contract governance.</span></li><li style=""><span style="">Lead Docker/Kubernetes strategy: multi-stage builds, image minimization, SBOMs, Helm/Kustomize, HPA/autoscaling policies, Pod Security, resource tuning, and rollback playbooks.</span></li><li style=""><span style="">Coordinate requirements with Cybersecurity, Model Risk Management, Privacy, Compliance, and Legal, translating policies into executable rules and evaluators.</span></li><li style=""><span style="">Run stakeholder UAT cycles: test planning, evidence collection, false-positive adjudication, and iterative tuning.</span></li><li style=""><span style="">Manage vendor engagements: technical due diligence, integration interfaces, performance/SLA validation, and joint solution design and escalation paths.</span></li><li style=""><span style="">Mentor engineering teams; establish coding standards, review protocols, architecture decision records, and incident runbooks.</span></li></ul><p><span style="">To be successful in this role, you should meet the following requirements:</span></p><ul style="list-style-type: disc;"><li style=""><span style="">Bachelor’s degree in Computer Science/Engineering (or equivalent).</span></li><li style=""><span style="">12+ years of software engineering experience (the majority in Python), including 5+ years focused on AI/ML or content safety/security.</span></li><li style=""><span style="">Proven delivery of large-scale guardrail or trust/safety platforms for LLMs or high-risk content systems.</span></li><li style=""><span style="">Deep FastAPI/Django expertise, including async patterns, streaming moderation, and middleware pipelines.</span></li><li style=""><span style="">Strong ML/NLP integration experience: pattern + ML hybrid detectors, evaluator services, vector stores.</span></li><li style=""><span style="">Kubernetes production operations (autoscaling, resilience, security hardening) and CI/CD (policy gates, security scanning, artifact signing).</span></li><li style=""><span style="">Advanced observability and performance tuning (profilers, tracing, queue/backpressure management, caching strategies).</span></li><li style=""><span style="">Risk and compliance alignment (MRM validation workflows, audit evidence, model governance).</span></li><li style=""><span style="">Vendor technical management (RFP criteria, integration architectures, SLA/performance oversight).</span></li><li>Proven experience building advanced applications for AI control of LLM/agent solutions, including guardrail services, policy orchestration/enforcement, automated evaluation (red-teaming), and audit-ready governance aligned to security, privacy, and regulatory needs.</li><li>Operational excellence: experience running AI control applications in production with strong observability, performance tuning, and reliability/SRE practices.</li><li>Secure by design: demonstrated ability to engineer AI control platforms with secure-by-design patterns (authN/Z, isolation/sandboxing, secrets management, DLP).</li><li>Risk and governance: a strong track record of translating risk and compliance requirements into executable controls, evidence, and measurable outcomes.</li><li style=""><span style="">Candidates with less relevant experience or skills may be offered a lower Global Career Band than stated above.</span></li></ul><p>You’ll achieve more when you join HSBC.</p><p><a href="http://www.hsbc.com/careers">www.hsbc.com/careers</a></p><p><span style="">HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.</span></p><p><span style="">Issued by – HSBC Software Development India.</span></p>
Apply Now