
When health decisions are uncertain: how conversational AI reshapes sense-making

Written: September 2025 · Published: January 2026

This article is adapted from earlier academic work written in September 2025 and published here in January 2026. The content has been edited for a general audience while preserving the original analytical focus and cited sources.

Human Agency

Health-related decision-making rarely follows clear or linear paths. Symptoms often appear gradually, overlap with one another, or fluctuate over time. Advice from professionals, online resources, and peer communities can be inconsistent, leaving individuals unsure whether what they are observing is normal, a cause for concern, or something that requires intervention. This uncertainty is particularly pronounced in everyday, non-urgent situations, where decisions must still be made despite incomplete information.

Research on parenting and caregiving highlights how decisions are frequently shaped by fragmented data rather than definitive evidence. Oster (2019) notes that even when data exists, it rarely provides unambiguous answers and must be interpreted alongside context, values, and judgement. In practice, this results in repeated cycles of searching, comparing, and second-guessing rather than clear resolution.

Where the real friction lies

The primary challenge in these situations is not diagnostic failure, but cognitive and emotional load. Individuals are required to track symptoms over time, interpret subtle changes, and decide when professional input is necessary. Medical consultations are often brief and infrequent, making it difficult to convey the full context of an evolving situation. As a result, uncertainty persists between appointments, and the responsibility for sense-making falls largely on the individual.

This ongoing effort creates friction that accumulates over time. The burden is not simply informational, but psychological. Constant vigilance, repeated searching, and fear of overlooking something important contribute to decision fatigue. These pressures are rarely captured by traditional assessments of risk, which tend to focus on discrete errors rather than prolonged uncertainty.

Why people turn to conversational AI

Conversational AI systems increasingly enter this space not as authoritative medical tools, but as readily available companions for reflection and exploration. Research suggests that users often disclose more to AI chatbots than to clinicians, reporting that they feel less judged and more comfortable discussing their concerns (Adamopoulou and Moussiades, 2020). This makes conversational systems particularly attractive during periods of uncertainty, when individuals are seeking reassurance, clarification, or a way to organise their thoughts.

These systems offer continuity and immediacy that formal healthcare interactions cannot always provide. They allow users to articulate concerns as they arise, revisit questions, and explore possible interpretations without the constraints of time-limited appointments. Importantly, this use does not depend on diagnostic authority. It relies instead on perceived neutrality, responsiveness, and conversational tone.

Risk beyond technical accuracy

The influence of conversational AI in health-adjacent contexts does not stem primarily from diagnostic outputs; risk emerges through interaction. Tone, framing, repetition, and implied reassurance can shape how users interpret their own experiences and decide what action to take. Over time, repeated interactions may subtly influence judgement, confidence, and the thresholds at which people seek professional help.

These forms of impact are difficult to capture through traditional technical evaluations. They do not arise from a single incorrect response, but from cumulative exposure and relational dynamics. As a result, systems that are explicitly non-diagnostic may still exert meaningful influence on behaviour, particularly when users are emotionally invested or uncertain.

Implications for responsible design

Designing conversational AI for health-adjacent use requires attention to these interaction-based dynamics. Responsible systems should prioritise uncertainty-aware language, avoid deterministic or reassuring claims, and clearly communicate their limitations. Rather than offering answers, they can support sense-making by helping users organise information, reflect on patterns, and recognise when professional input may be appropriate.

Such an approach shifts the goal from maximising capability to reducing cognitive load. By supporting reflection without replacing judgement, conversational AI can help individuals navigate uncertainty while preserving agency. In contexts where lived experience and emotional vulnerability shape decision-making, restraint and clarity may be as important as technical sophistication.

References

Adamopoulou, E. and Moussiades, L. (2020) 'Chatbots: History, technology, and applications', Machine Learning with Applications, 2, 100006. https://doi.org/10.1016/j.mlwa.2020.100006

Oster, E. (2019) Cribsheet: A data-driven guide to better, more relaxed parenting, from birth to preschool. London: Penguin Books.