Manager, Responsible AI Consultant (Risk, AI & Modelling)
PwC UK
About the role:
PwC is rapidly scaling its market-leading Responsible AI (RAI) practice in the UK as client demand accelerates across sectors. As a Manager in our RAI team, you will lead workstreams that help organisations design, build and deploy AI that is trusted, ethical and compliant. You will work at the intersection of emerging technology, risk management and regulation, supporting clients to operationalise AI governance, implement practical controls and embed “trust by design” across the AI lifecycle, including GenAI and agentic AI. This new role supports our growing practice and offers strong opportunities for rapid progression.
What your days will look like:
We are seeking a Manager to lead client workstreams that ensure AI is developed and deployed ethically, safely and in line with emerging regulation. You will help shape how organisations build trust in AI and accelerate responsible adoption. Day to day, you will:
Lead RAI client engagements, supporting clients to establish governance, operating models, policies and controls for AI at scale.
Assess AI use cases and development practices to identify gaps (e.g., governance, bias/fairness, transparency, privacy, security) and recommend pragmatic improvements.
Translate Responsible AI principles and regulatory expectations into actionable delivery guidance across the AI and product lifecycle.
Produce or oversee key artefacts such as AI risk assessments, governance documentation, model testing outputs (bias/robustness/explainability), and evidence packs for assurance/audit.
Collaborate with data science, engineering and cyber teams to implement technical guardrails, monitoring and incident processes for AI systems (including GenAI).
Contribute to model evaluations and control design (e.g., explainability, robustness, fairness, human oversight, monitoring/observability).
Manage delivery quality, plan and run client workshops, and communicate complex AI risks and mitigations clearly to technical and non-technical stakeholders.
Contribute to a rapidly growing, high profile practice with strong opportunities for progression.
This role is for you if:
You have 5+ years' experience in consulting, technology, risk or transformation roles, including leading workstreams and managing stakeholders.
You have experience supporting AI governance, model risk, data governance, compliance, assurance or technology risk, ideally with exposure to Responsible AI concepts (fairness, transparency, accountability, privacy, safety).
Regulatory and standards awareness: Working knowledge of relevant regulation and frameworks (e.g., EU AI Act trajectory, GDPR, NIST AI RMF, ISO/IEC 42001).
Analytical problem solving: Comfortable working with technical topics and evidence (model documentation/testing, controls, monitoring), with strong structured thinking and attention to detail.
Communication: Strong written and verbal communication; able to translate technical issues into clear business implications and recommendations.
GenAI exposure: Familiarity with GenAI/LLM risks and mitigations (e.g., guardrails, prompt/design controls, monitoring, human-in-the-loop) is preferred.


