Abstract
The use of Large Language Models (LLMs) in mental health highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
| Field | Value |
|---|---|
| Original language | English |
| Article number | 132 |
| Journal | npj Digital Medicine |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Dec 2025 |
ASJC Scopus subject areas
- Medicine (miscellaneous)
- Health Informatics
- Computer Science Applications
- Health Information Management
Fingerprint
Dive into the research topics of 'Assessing and alleviating state anxiety in large language models'. Together they form a unique fingerprint.