
Recent advances in natural language processing via large pre-trained language models: A survey

  • Bonan Min
  • Hayley Ross
  • Elior Sulem
  • Amir Pouran Ben Veyseh
  • Thien Huu Nguyen
  • Oscar Sainz
  • Eneko Agirre
  • Ilana Heintz
  • Dan Roth

    Research output: Working paper/Preprint

    Abstract

    Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.
    Original language: English
    DOIs
    State: Published - 2021
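
    The abstract names several ways of applying pre-trained language models to NLP tasks. The sketch below is a minimal illustration of the first two paradigms, pre-train-then-fine-tune and prompting; it is not code from the survey. The Hugging Face Transformers/Datasets APIs, the bert-base-uncased checkpoint, the SST-2 sentiment task, and all hyperparameters are assumptions chosen for the example.

```python
# Minimal sketch of two paradigms from the abstract. Assumptions: Hugging Face
# Transformers/Datasets, the bert-base-uncased checkpoint, and SST-2 as the
# downstream task -- none of these are prescribed by the survey itself.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    pipeline,
)

# --- Paradigm 1: pre-train then fine-tune ----------------------------------
# Load a pre-trained BERT encoder and attach a randomly initialized
# classification head for a two-class task.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

dataset = load_dataset("glue", "sst2")  # example labeled downstream data

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length")

encoded = dataset.map(tokenize, batched=True)

# Fine-tuning updates all pre-trained weights on the labeled task data.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sst2-finetuned", num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()

# --- Paradigm 2: prompting --------------------------------------------------
# The same task is recast as masked-word prediction, so the pre-trained model
# is queried directly, with no task-specific parameter updates.
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("The movie was wonderful from start to finish. It was [MASK]."))
```

    The design difference between the two paradigms is that fine-tuning adapts the model's weights to the task, while prompting adapts the task's format to the model.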

