
Robot-Led Vision Language Model Wellbeing Assessment of Children

  • Nida Itrat Abbasi
  • Fethiye Irmak Dogan
  • Guy Laban
  • Joanna Anderson
  • Tamsin Ford
  • Peter B. Jones
  • Hatice Gunes

Research output: Working paper/Preprint


Abstract

This study presents a novel robot-led approach to assessing children's mental wellbeing using a Vision Language Model (VLM). Inspired by the Child Apperception Test (CAT), the social robot NAO presented children with pictorial stimuli to elicit their verbal narratives of the images, which were then evaluated by a VLM in accordance with CAT assessment guidelines. The VLM's assessments were systematically compared to those provided by a trained psychologist. The results reveal that while the VLM demonstrates moderate reliability in identifying cases with no wellbeing concerns, its ability to accurately classify assessments with clinical concern remains limited. Moreover, although the model's performance was generally consistent when prompted with varying demographic factors such as age and gender, a significantly higher false positive rate was observed for girls, indicating potential sensitivity to the gender attribute. These findings highlight both the promise and the challenges of integrating VLMs into robot-led assessments of children's wellbeing.
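The gender disparity reported above hinges on the false positive rate: the fraction of cases the psychologist marked as having no clinical concern that the VLM nevertheless flagged. As a hedged illustration (the labels and function below are hypothetical, not data from the study), such a per-group comparison could be computed like this:

```python
def false_positive_rate(truth, pred):
    """FPR = FP / (FP + TN) over paired binary labels.

    truth: ground-truth labels (1 = clinical concern per the psychologist).
    pred:  model labels (1 = clinical concern flagged by the VLM).
    """
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical psychologist vs. VLM labels, split by gender group.
girls_truth, girls_pred = [0, 0, 1, 0, 0], [1, 0, 1, 1, 0]
boys_truth,  boys_pred  = [0, 0, 1, 0, 0], [0, 0, 1, 1, 0]

print(false_positive_rate(girls_truth, girls_pred))  # 0.5
print(false_positive_rate(boys_truth, boys_pred))    # 0.25
```

A higher FPR for one group, as sketched here, means that group's no-concern cases are disproportionately flagged, which is the kind of sensitivity to the gender attribute the abstract describes.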
Original language: English
Publisher: arXiv
Number of pages: 7
DOIs
State: Published - 3 Apr 2025
Externally published: Yes

Keywords

  • cs.RO

