Areas of focus

These areas reflect the questions that currently guide my academic research into how artificial intelligence is deployed and takes shape in real-world contexts. They frame my ongoing analysis of AI systems, with particular attention to how those systems behave, what assumptions they embed, and what consequences follow once they are in use.

AI governance and policy

How do institutions—governments, corporations, international bodies—make decisions about artificial intelligence? What structures and processes shape AI development and deployment, and where do existing governance models fall short? I am interested in the gap between stated intentions and practical outcomes, and in how governance frameworks evolve under pressure.

Risk, trust, and accountability

AI systems introduce new kinds of uncertainty. Who bears responsibility when things go wrong? How do organisations assess risks they do not fully understand, and how do individuals and communities decide whether to trust AI-mediated decisions? These questions sit at the intersection of technical design, institutional practice, and public perception.

Real-world impacts of AI

Beyond laboratory benchmarks and corporate announcements, how are AI systems actually changing daily life, work, and institutions? I try to look carefully at specific cases—successes, failures, unintended consequences—rather than relying on generalised claims about AI's promise or peril.

Regulation and standards

The regulatory landscape for AI is fragmented and evolving. Different jurisdictions take different approaches; standards bodies, industry coalitions, and civil society organisations all seek to shape the rules. I follow these developments with attention to how regulatory choices create incentives, distribute power, and sometimes produce effects opposite to those intended.

AI and children

Children encounter AI systems in education, entertainment, social media, and surveillance contexts—often without meaningful consent or understanding. The long-term effects of these exposures remain largely unknown. I am interested in how we might think more carefully about the particular vulnerabilities and rights of young people in relation to AI.

My approach tends to be slow and careful rather than reactive. I try to read widely, across disciplines and perspectives, and to sit with difficult questions rather than rushing toward answers. The writing that emerges from this process is provisional—an attempt to think through problems in public, not to deliver final verdicts.
