TY - JOUR
T1 - Development of a standardized training program for the Hamilton Depression Scale using internet-based technologies
T2 - Results from a pilot study
AU - Kobak, Kenneth A.
AU - Lipsitz, Joshua D.
AU - Feiger, Alan
N1 - Funding Information:
This study was supported by NIMH SBIR contract #N43MH12049 awarded to Kenneth A. Kobak, PhD.
PY - 2003/1/1
Y1 - 2003/1/1
N2 - Poor inter-rater reliability is a major concern, contributing to error variance, which decreases power and increases the risk of failed trials. This is particularly problematic with the Hamilton Depression Scale (HAMD), due to its lack of standardized questions or explicit scoring procedures. Establishing standardized procedures for administering and scoring the HAMD is typically done at study initiation meetings. However, the format and time allotted are usually insufficient, and evaluation of the trainee's ability to actually conduct a clinical interview is limited. To address this problem, we developed a web-based, interactive rater education program for delivering standardized training to diverse sites in multi-center trials. The program includes both didactic training on scoring conventions and live, remote observation of trainees' applied skills. The program was pilot tested with nine raters from a single site. Results showed a significant increase in didactic knowledge from pre- to post-testing, with the mean number of incorrect answers decreasing from 6.5 (S.D.=1.64) to 1.3 (S.D.=1.03), t(5)=7.35, P=0.001 (20-item exam). Seventy-five percent of the trainees' interviews were within two points of the trainer's score. Inter-rater reliability (intraclass correlation, based on trainees' actual interviews) was 0.97, P<0.0001. Results support the feasibility of this methodology for improving rater training. An NIMH-funded study is currently underway examining this methodology in a multi-site trial.
KW - Assessment
KW - Inter-rater reliability
KW - Computerized assessment
KW - Depression
KW - Hamilton Depression Scale
KW - Internet
KW - Rater training
UR - http://www.scopus.com/inward/record.url?scp=0141888371&partnerID=8YFLogxK
U2 - 10.1016/S0022-3956(03)00056-6
DO - 10.1016/S0022-3956(03)00056-6
M3 - Article
C2 - 14563382
AN - SCOPUS:0141888371
SN - 0022-3956
VL - 37
SP - 509
EP - 515
JO - Journal of Psychiatric Research
JF - Journal of Psychiatric Research
IS - 6
ER -