AI Governance & Risk Strategy Lead

  • BLOOMBERG
  • New York, New York
  • 09/03/2025
Full time

Job Description

Location
New York

Business Area
Legal, Compliance, and Risk

Ref #

Description & Requirements

The energy of a newsroom, the pace of a trading floor, the buzz of a recent tech breakthrough: we work hard, and we work fast, while keeping up the quality and accuracy we're known for. It's what keeps us inventing and reinventing, all the time. Our culture is wide open, just like our spaces. We bring out the best in each other through collaboration. Through our countless volunteer projects, we also connect with the communities around us. You can do amazing work here. Work you couldn't do anywhere else. It's up to you to make it happen.

Bloomberg's Chief Risk Office (CRO) plays a central role in ensuring that innovation is pursued responsibly across our global operations. As AI becomes increasingly embedded in our products and platforms, the CRO Strategy and Operations team is focused on designing robust frameworks, policies, and controls to govern AI adoption with transparency, fairness, and accountability. Our cross-functional work spans Legal, Engineering, Product, CISO, and Compliance to ensure Bloomberg's AI systems operate safely, ethically, and in alignment with evolving regulatory standards.
What's the role?
We're seeking an AI Governance & Risk Strategy Lead to help refine and scale our enterprise-wide AI risk program. This person will play a critical role in maturing our frameworks for responsible AI, partnering with senior stakeholders across Technology, Legal, Compliance, Data, and Product to ensure the safe, ethical, and compliant use of AI systems across Bloomberg.
We'll Trust You To:
AI Governance & Frameworks

  • Enhance our enterprise AI Risk Management framework, including inventory, classification, and risk-tiering mechanisms
  • Develop scalable, end-to-end governance processes across the AI lifecycle: design, development, deployment, production, and retirement
  • Identify opportunities for automation and process improvements to strengthen controls and oversight
Cross-Functional Collaboration

  • Partner with Legal, Compliance, Privacy, Security, Engineering, and Product teams to address emerging AI risks and ensure effective policy implementation
  • Facilitate stakeholder working groups, communications, and executive updates on AI risk and governance
Monitoring & Oversight

  • Establish and monitor key risk indicators for AI systems (e.g., model drift, hallucination, bias)
  • Ensure alignment with global AI regulatory requirements (e.g., EU AI Act) and respond to regulatory inquiries or reviews
  • Evaluate risks tied to third-party AI solutions, including sourcing, onboarding, integration, and ongoing oversight
Enablement

  • Serve as an internal subject matter expert and thought leader on responsible AI use
  • Support AI risk training, awareness, and culture-building across the organization
You'll Need to Have:

  • 10+ years of experience in Technology Risk, Data and Security Risk, or AI/ML, with at least 3 years directly focused on AI governance or oversight
  • Direct experience designing and implementing enterprise AI Risk or Responsible AI programs
  • Strong grasp of AI/ML technical risks (e.g., bias, explainability, model drift, robustness) and associated controls
  • Hands-on familiarity with generative AI tools (e.g., ChatGPT, Claude, AWS Bedrock) and their risk implications
  • Strong change management and stakeholder engagement skills, with a track record of influencing without authority across technical and business domains
  • Knowledge of data governance practices, including metadata management, data lineage, and data minimization as they pertain to AI models
  • Working knowledge of privacy, compliance, and regulatory frameworks (e.g., GDPR, CPRA, EU AI Act)
  • Excellent communication skills with experience presenting to senior stakeholders
We'd Love to See:

  • Experience in Data Risk Management or direct collaboration with AI/ML development teams
  • Familiarity with AI risk management platforms or tools for model monitoring, documentation, and compliance reporting
  • Experience designing training, awareness, or enablement programs focused on AI risk, model governance, or responsible AI practices
  • Familiarity with frameworks such as NIST AI RMF, ISO/IEC 23894, or OECD AI Principles
  • Certifications in risk, privacy, or compliance (e.g., CIPP, CIPM, CRISC, CRCM)
  • Passion for AI and a desire to build a world-class risk management function

Salary Range: $185,000 - $245,000 USD annually + Benefits + Bonus

The referenced salary range is based on the Company's good faith belief at the time of posting. Actual compensation may vary based on factors such as geographic location, work experience, market conditions, education/training and skill level.

We offer one of the most comprehensive and generous benefits plans available and offer a range of total rewards that may include merit increases, incentive compensation (exempt roles only), paid holidays, paid time off, medical, dental, vision, short- and long-term disability benefits, 401(k) + match, life insurance, and various wellness programs, among others. The Company does not provide benefits directly to contingent workers/contractors and interns.