Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet.

Victor Makarenkov, Lior Rokach

Research output: Working paper / Preprint

Abstract

One of the challenges in the NLP field is training large classification models, a task that is both difficult and tedious. It becomes even harder when GPU hardware is unavailable. The increased availability of pre-trained, off-the-shelf word embeddings, models, and modules aims to ease the process of training large models and to achieve competitive performance. We explore the use of off-the-shelf BERT models, share the results of our experiments, and compare them with those of LSTM networks and simpler baselines. We show that the complexity and computational cost of BERT are no guarantee of enhanced predictive performance on the classification tasks at hand.
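To make "off-the-shelf BERT" concrete, the sketch below shows one common way to apply a pre-trained BERT model to a classification task without fine-tuning: the frozen model is used as a feature extractor and a lightweight classifier is trained on top. This is a minimal illustration only, assuming the Hugging Face `transformers` library, PyTorch, and scikit-learn; it is not the authors' exact experimental pipeline, and the toy data is hypothetical.

```python
# Minimal sketch: off-the-shelf BERT as a frozen feature extractor for
# text classification. Assumes transformers, torch, scikit-learn, numpy.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()  # use the pre-trained weights as-is, no fine-tuning

def embed(texts, batch_size=16):
    """Encode texts with the [CLS] token representation from frozen BERT."""
    features = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], padding=True,
                              truncation=True, max_length=128,
                              return_tensors="pt")
            outputs = bert(**batch)
            # Take the hidden state of the [CLS] token as a sentence vector.
            features.append(outputs.last_hidden_state[:, 0, :].numpy())
    return np.vstack(features)

# Hypothetical toy data; replace with the classification task at hand.
train_texts = ["great service", "terrible experience"]
train_labels = [1, 0]

clf = LogisticRegression().fit(embed(train_texts), train_labels)
print(clf.predict(embed(["what a pleasant surprise"])))
```

A comparable baseline would replace the BERT embeddings with averaged static word embeddings or a TF-IDF representation feeding the same classifier, which is the kind of simpler alternative the abstract compares against.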
Original language: English
State: Published - 15 Sep 2020
