The Governance Gap: Why AI Policy Lags Behind Deployment
As organisations rush to integrate AI systems, the frameworks meant to guide responsible use struggle to keep pace. The result is a widening gap between what is deployed and what is governed, leaving systems in production that no policy yet covers.
Essays and long-form analysis on how artificial intelligence is designed, governed, and integrated into real-world systems.
This writing explores AI governance and regulation, safety by design, uncertainty in AI-supported decision-making, system architectures, and the specific challenges of building AI that interacts with or affects children. The focus is on responsible design: how AI systems behave under ambiguity, how safeguards are embedded at the architectural level, and how human judgement and agency are preserved in high-stakes contexts.
Trust isn't a feature to be added later. It must be built into the architecture of how we develop, deploy, and govern AI systems from the beginning.
Moving from abstract AI ethics principles to concrete, actionable risk assessments that organisations can actually implement in their operations.
Why most AI documentation fails to serve its purpose, and what a more honest, useful approach to documenting AI capabilities and limitations might look like.
Analysing patterns in AI failures to understand what they reveal about systemic issues in how we develop, deploy, and monitor these systems.
How the patchwork of emerging AI regulations across jurisdictions creates both challenges and opportunities for organisations operating globally.
Children interact with AI in education, entertainment, and social contexts. What design principles should guide systems built for or encountered by young people?