Role Purpose
The AI Risk & Compliance role is responsible for ensuring that the organisation's use of artificial intelligence is ethical, compliant, and aligned with regulatory, legal, and internal governance requirements. This role provides oversight across the AI lifecycle, from use-case ideation through deployment and ongoing monitoring, by identifying, assessing, and mitigating risks related to AI systems while enabling responsible innovation.
The AI Risk & Compliance function works closely with business units, technology teams, legal, privacy, and risk stakeholders to evaluate and manage risks arising from the development and deployment of AI use cases. The role ensures that AI systems comply with applicable data protection laws (including PDPA), internal policies, and sector-specific regulations, and that appropriate controls are in place for high-risk and sensitive use cases.
Responsibilities
- Conduct AI risk assessments covering use-case context, model design, data sources, and deployment
- Work with relevant stakeholders to identify and assess ethical, legal, operational, privacy, and reputational risks associated with AI systems
- Ensure AI use cases comply with PDPA, internal governance policies, and applicable sector regulations
- Define and apply risk controls for high-risk and sensitive AI use cases
- Manage third-party and vendor AI risk
- Support internal audits, regulatory examinations, and compliance reviews related to AI use cases
Required Knowledge & Skills
- Strong understanding of the AI/ML model lifecycle; experience in end-to-end AI model development is advantageous
- Strong understanding of AI risk management concepts across data, model, and decision layers
- Practical knowledge of PDPA and data protection principles (consent, purpose limitation, data minimization, retention)
- Familiarity with technology, model, or operational risk frameworks
- Experience assessing vendor AI solutions, including black-box models and cloud-based AI services
- Ability to interpret regulatory requirements and translate them into actionable controls
Experience
- 3–7 years of experience in AI governance, technology risk, data privacy, compliance, or model risk
- Experience working in regulated environments preferred
- Exposure to audits or regulatory engagements involving AI, data, or advanced analytics