TY - JOUR
T1 - MaxSub: An automated measure for the assessment of protein structure prediction quality
T2 - Bioinformatics
AU - Siew, Naomi
AU - Elofsson, Arne
AU - Rychlewski, Leszek
AU - Fischer, Daniel
N1 - Funding Information:
Thanks to Erez Karpas for his assistance in setting up the MaxSub server. We are grateful to Adam Zemla and Krzysztof Fidelis from the Livermore Prediction Center for critically reading this manuscript, for providing the data via the Prediction Center URL, and for discussions. We acknowledge Adam Zemla for developing GDT. We thank Manfred Sippl for critically reading the manuscript and for helpful suggestions. Thanks also to Mark Gerstein, Klara Kedem, Adam Godzik and John Moult for fruitful discussions. Special thanks to John Moult and to the organizers and assessors of CASP3 for their important contribution to the field. Finally, we are also grateful to other CASP and CAFASP organizers and participants for encouragement. N.S. is partially supported by the Israeli High-Tech and by the Kreitman Foundation Fellowships.
PY - 2000
Y1 - 2000
AB - Motivation: Evaluating the accuracy of predicted models is critical for assessing structure prediction methods. Because this problem is not trivial, a large number of different assessment measures have been proposed by various authors, and it has already become an active subfield of research (Moult et al., 1999). The CASP (Moult et al., 1997, 1999) and CAFASP (Fischer et al., 1999) prediction experiments have demonstrated that it has been difficult to choose one single, 'best' method to be used in the evaluation. Consequently, the CASP3 evaluation was carried out using an extensive set of especially developed numerical measures, coupled with human-expert intervention. As part of our efforts towards a higher level of automation in the structure prediction field, here we investigate the suitability of a fully automated, simple, objective, quantitative and reproducible method that can be used in the automatic assessment of models in the upcoming CAFASP2 experiment. Such a method should (a) produce one single number that measures the quality of a predicted model and (b) perform similarly to human-expert evaluations. Results: MaxSub is a new and independently developed method that further builds and extends some of the evaluation methods introduced at CASP3. MaxSub aims at identifying the largest subset of Cα atoms of a model that superimpose 'well' over the experimental structure, and produces a single normalized score that represents the quality of the model. Because there exists no evaluation method for assessment measures of predicted models, it is not easy to evaluate how good our new measure is. Even though an exact comparison of MaxSub and the CASP3 assessment is not straightforward, here we use a test-bed extracted from the CASP3 fold-recognition models. A rough qualitative comparison of the performance of MaxSub vis-à-vis the human-expert assessment carried out at CASP3 shows that there is a good agreement for the more accurate models and for the better predicting groups. As expected, some differences were observed among the medium to poor models and groups. Overall, the top six predicting groups ranked using the fully automated MaxSub are also the top six groups ranked at CASP3. We conclude that MaxSub is a suitable method for the automatic evaluation of models.
UR - http://www.scopus.com/inward/record.url?scp=0033670439&partnerID=8YFLogxK
U2 - 10.1093/bioinformatics/16.9.776
DO - 10.1093/bioinformatics/16.9.776
M3 - Article
AN - SCOPUS:0033670439
SN - 1367-4803
VL - 16
SP - 776
EP - 785
JO - Bioinformatics
JF - Bioinformatics
IS - 9
ER -