Areas of focus

These areas reflect the questions currently guiding research into how artificial intelligence is deployed and experienced in real-world contexts. They frame ongoing analysis of AI systems, with particular attention to behaviour, assumptions, and consequences once systems are in use.

AI and Children

Children encounter AI in education, entertainment, social media, and surveillance contexts, often without meaningful consent or understanding. Whilst long-term effects remain largely unknown, profound risks have already emerged, including documented cases of psychological harm and severe distress linked to algorithmic content curation. This area examines proactive design strategies for building safer AI systems that prioritise children's rights, privacy, and well-being.

AI Safety by Design

This area examines proactive approaches to AI safety, integrating considerations such as bias mitigation, value alignment, ethical governance, and risk assessment from a system's inception through deployment and ongoing monitoring. Rather than relying on retroactive fixes, the emphasis is on a full-cycle methodology, beginning at the ideation stage, to ensure AI is inherently safe and trustworthy.

My approach tends to be slow and careful rather than reactive. I try to read widely, across disciplines and perspectives, and to sit with difficult questions rather than rushing toward answers. The writing that emerges from this process is provisional: an attempt to think through problems in public, not to deliver final verdicts.

Read my writing