The Rater Applied Performance Scale: Development and reliability

Joshua Lipsitz, Ken Kobak, Alan Feiger, Dawn Sikich, Georges Moroz, Nina Engelhardt

Research output: Contribution to journal › Article › peer-review


Abstract

Previous studies of rater performance and interrater reliability have used passive scoring tasks, such as rating patients from a videotaped interview. Little is known, however, about how well raters conduct assessments of real patients or how reliably they apply scoring criteria during actual assessment sessions. With growing recognition of the importance of monitoring and reviewing actual evaluation sessions, there is a need for a systematic approach to quantifying raters' applied performance. The Rater Applied Performance Scale (RAPS) measures six dimensions of rater performance (adherence, follow-up, clarification, neutrality, rapport, and accuracy) based on review of audiotaped or videotaped assessment sessions or on live monitoring of assessment sessions. We tested this new scale by having two reviewers rate 20 Hamilton Depression Scale rating sessions drawn from a multi-site depression trial. We found good internal consistency for the RAPS. Interrater (i.e., inter-reviewer) reliability was satisfactory for RAPS total score ratings. In addition, RAPS ratings correlated with quantitative measures of scoring accuracy based on independent expert ratings. Preliminary psychometric data suggest that the RAPS may be a valuable tool for quantifying the performance of clinical raters. Potential applications of the RAPS are considered.

Original language: English
Pages (from-to): 147-155
Number of pages: 9
Journal: Psychiatry Research
Volume: 127
Issue number: 1-2
State: Published - 30 Jun 2004
Externally published: Yes

Keywords

  • Clinical trial
  • Depression
  • Independent evaluator
  • Methodology
  • Psychometrics
  • Rater Applied Performance Scale
  • Reliability

ASJC Scopus subject areas

  • Psychiatry and Mental health
  • Biological Psychiatry
