Integration of pre-trained protein language models into geometric deep learning networks

Fang Wu, Lirong Wu, Dragomir Radev, Jinbo Xu, Stan Z. Li

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Geometric deep learning has recently achieved great success in non-Euclidean domains, and learning on 3D structures of large biomolecules is emerging as a distinct research area. However, its efficacy is largely constrained by the limited quantity of structural data. Meanwhile, protein language models trained on substantial 1D sequence data have shown burgeoning capabilities with scale across a broad range of applications. Several preceding studies consider combining these different protein modalities to improve the representation power of geometric neural networks but fail to present a comprehensive understanding of their benefits. In this work, we integrate the knowledge learned by well-trained protein language models into several state-of-the-art geometric networks and evaluate them on a variety of protein representation learning benchmarks, including protein-protein interface prediction, model quality assessment, protein-protein rigid-body docking, and binding affinity prediction. Our findings show an overall improvement of 20% over baselines. Strong evidence indicates that incorporating protein language models' knowledge enhances geometric networks' capacity by a significant margin and generalizes to complex tasks.
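The core integration strategy described above can be sketched in miniature. The snippet below is a hedged illustration, not the paper's implementation: it assumes (hypothetically) that per-residue embeddings from a pre-trained protein language model are concatenated onto a geometric network's node features, followed by one mean-aggregation message-passing step over a distance-based contact graph. All shapes, thresholds, and weights here are toy values chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical shapes): per-residue embeddings from a
# pre-trained protein language model (real models emit ~1000-d vectors;
# reduced to 8-d here) and 3D alpha-carbon coordinates.
n_residues, d_plm, d_geo = 5, 8, 3
plm_emb = rng.normal(size=(n_residues, d_plm))   # language-model features
coords = rng.normal(size=(n_residues, d_geo))    # 3D structure

# Build a simple distance-based contact graph from the coordinates.
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
adj = (dists < 2.0) & ~np.eye(n_residues, dtype=bool)

# Fuse the two modalities: concatenate PLM embeddings onto each node's
# geometric features, then run one message-passing step over the graph.
node_feats = np.concatenate([coords, plm_emb], axis=-1)  # (n, d_geo + d_plm)

W = rng.normal(size=(node_feats.shape[1], 16)) * 0.1     # toy weight matrix
deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)      # avoid divide-by-zero
messages = (adj @ node_feats) / deg                      # mean over neighbors
updated = np.tanh((node_feats + messages) @ W)           # fused node states

print(updated.shape)  # (5, 16)
```

In a real pipeline, `plm_emb` would come from a frozen pre-trained model's per-residue hidden states, and the message-passing layer would be one of the geometric architectures benchmarked in the paper; this sketch only shows where the two feature streams meet.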

Original language: English
Article number: 876
Journal: Communications Biology
Volume: 6
Issue number: 1
DOIs
State: Published - 1 Dec 2023
Externally published: Yes

ASJC Scopus subject areas

  • Medicine (miscellaneous)
  • General Biochemistry, Genetics and Molecular Biology
  • General Agricultural and Biological Sciences
