Governance & Regulation · AI & Children

AI Regulation and Emotional Risk: What is Still Missing

Written: September 2025 · Published: January 2026

This article is adapted from earlier academic work developed in September 2025 and is published here in January 2026. The analysis is based on peer-reviewed research and policy literature and has been edited for a general audience.

Artificial intelligence regulation has expanded significantly in recent years, reflecting growing awareness that AI systems can affect individuals and society in complex ways. New frameworks such as the European Union's Artificial Intelligence Act, alongside digital safety initiatives in the United Kingdom and policy developments in the United States, signal a shift towards more structured oversight of AI technologies. These efforts focus on accountability, transparency, and the protection of fundamental rights, and represent meaningful progress in a rapidly evolving field.

However, emerging evidence suggests that certain forms of harm remain difficult to capture within existing regulatory approaches. In particular, emotional and psychological risks associated with conversational AI systems are not yet clearly defined or consistently addressed. This gap does not indicate regulatory failure, but rather highlights an area where governance models have not fully caught up with how these systems are used in everyday life.

Technical risk and lived experience

Most current AI regulation is organised around identifiable technical and legal risks, including data protection, discrimination, transparency, and misuse. These concerns are essential and well established within governance frameworks. Yet emotional and psychological impacts often arise in different ways. Rather than resulting from isolated system errors, they tend to develop through prolonged interaction, perceived relationality, and the social context in which AI systems are embedded.

Research has shown that users can form strong emotional attachments to conversational and companion-style AI systems, sometimes experiencing distress, dependency, or a sense of loss when access is interrupted (Banks, 2024). Other studies suggest that while AI companions may offer short-term relief from loneliness, there is no clear evidence of sustained psychological benefit (De Freitas et al., 2024). These effects are difficult to measure using conventional risk categories, which are better suited to assessing accuracy or bias than cumulative emotional influence.

Such dynamics are particularly relevant for children, adolescents, and individuals facing mental health challenges. Evidence indicates that younger users may struggle to maintain boundaries between AI systems and human relationships, increasing the risk of over-reliance or emotional confusion (Kurian, 2025; Common Sense Media, 2025a). Although wellbeing is frequently mentioned in policy discourse, emotional harm is rarely translated into concrete regulatory obligations for AI design and deployment.

Conversational systems and regulatory assumptions

Conversational AI systems challenge several assumptions that underpin existing governance models. Regulation often classifies systems according to intended function, such as productivity tools, information services, or entertainment platforms. In practice, these distinctions are increasingly blurred. Systems designed for general assistance or efficiency can evolve into sources of emotional support or companionship through sustained interaction, even when this was not their original purpose.

This functional ambiguity complicates oversight. Emotional influence may emerge without explicit design intent, and psychological harm may accumulate gradually rather than appearing as a single identifiable incident. As a result, it can be unclear when safeguards should apply and how responsibility should be allocated between developers, deployers, and platform operators.

Fragmentation across jurisdictions

Another challenge is the fragmented nature of AI governance across regions. The European Union has adopted a risk-based framework with strong enforcement mechanisms, but its treatment of emotional and psychological harm remains limited and largely implicit. The United Kingdom's principles-based approach prioritises flexibility and innovation, but offers less clarity on enforcement. In the United States, the absence of a unified federal framework leads to inconsistent protection and regulatory uncertainty.

Across these models, mental health and emotional wellbeing are recognised as important societal concerns, yet they are not systematically integrated into AI-specific regulation. Even where digital safety frameworks address wellbeing more broadly, guidance on mitigating emotional harm from generative and conversational AI remains underdeveloped. As these systems become embedded in education, healthcare-adjacent settings, and everyday communication, the consequences of this gap are likely to grow.

What this suggests going forward

Current regulatory frameworks provide an important foundation for AI governance, but they may need to evolve further to reflect the emotional dimensions of human–AI interaction. Emotional and psychological safety could be treated more explicitly as a design-stage consideration, rather than an indirect outcome of general safety measures. This would involve clearer definitions of emotional risk, stronger expectations around testing and monitoring, and greater involvement of mental health and child development expertise in system design.

As existing research has noted, many of the most significant impacts of AI arise not from narrow technical failures, but from complex social and psychological interactions over time (Whittlestone and Clarke, 2022). Recognising emotional risk as a legitimate governance concern is therefore less about expanding regulation, and more about aligning policy with lived experience as conversational AI continues to evolve.

References

Banks, J. (2024) 'Deletion, departure, death: Experiences of AI companion loss', Journal of Social and Personal Relationships, 41(12), pp. 3547–3572. https://doi.org/10.1177/02654075241269688

Common Sense Media (2025a) AI risk assessment: Social AI companions. San Francisco: Common Sense Media. https://commonsensemedia.org/sites/default/files/pug/csm-ai-risk-assessment-social-ai-companions_final.pdf

De Freitas, J., Uğuralp, A.K., Uğuralp, Z. and Puntoni, S. (2024) 'AI companions reduce loneliness', Harvard Business School Working Paper No. 24-078 / The Wharton School Research Paper. https://ssrn.com/abstract=4893097

Kurian, N. (2025) 'AI's empathy gap: The risks of conversational Artificial Intelligence for young children's well-being and key ethical considerations for early childhood education and care', Contemporary Issues in Early Childhood, 26(1), pp. 132–139. https://doi.org/10.1177/14639491231206004

Whittlestone, J. and Clarke, S. (2022) 'AI challenges for society and ethics', in Bullock, J.B. (ed.) The Oxford Handbook of AI Governance. Oxford: Oxford University Press, pp. 45–64. https://doi.org/10.1093/oxfordhb/9780197579329.013.3