Character Eyes: Seeing Language through Character-Level Taggers

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Character-level models have been used extensively in recent years in NLP tasks, as both supplements to and replacements for closed-vocabulary token-level word representations. In one popular architecture, character-level LSTMs are used to build token representations that feed a sequence tagger predicting token-level annotations such as part-of-speech (POS) tags. In this work, we examine the behavior of POS taggers across languages from the perspective of individual hidden units within the character LSTM. We aggregate the behavior of these units into language-level metrics that quantify the challenges taggers face on languages with different morphological properties, and we identify links between a language's degree of synthesis and affixation preference and the emergent behavior of the hidden tagger layer. In a comparative experiment, we show how modifying the balance between forward and backward hidden units affects model arrangement and performance on these types of languages.
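    The architecture the abstract describes can be illustrated with a toy sketch: characters are embedded and read by a forward pass and a backward pass, whose final states are concatenated into a token representation for a tagger. The decay-based "recurrence" below is a hypothetical stand-in for an LSTM, and all names are illustrative assumptions, not the paper's actual model; it only shows why forward units are most sensitive to a word's suffix and backward units to its prefix.

    ```python
    def char_embed(ch):
        """Map a character to a tiny 2-d 'embedding' (hypothetical)."""
        code = ord(ch)
        return [code % 7 / 7.0, code % 11 / 11.0]

    def run_direction(chars, decay=0.5):
        """Stand-in for one LSTM direction: an exponentially decayed sum.
        The final state is dominated by the characters read last."""
        h = [0.0, 0.0]
        for ch in chars:
            e = char_embed(ch)
            h = [decay * h[i] + (1 - decay) * e[i] for i in range(2)]
        return h

    def encode_token(token):
        """Concatenate final states of both directions: the forward half
        mostly reflects the word's suffix, the backward half its prefix."""
        fwd = run_direction(token)            # reads left-to-right
        bwd = run_direction(reversed(token))  # reads right-to-left
        return fwd + bwd

    # Two tokens sharing a suffix (as in a suffixing language) get very
    # similar forward halves but clearly different backward halves.
    a = encode_token("walked")
    b = encode_token("talked")
    ```

    In this toy setting, shifting the balance toward forward units would make representations of suffix-sharing words harder to tell apart, a crude analogue of the forward/backward imbalance experiment mentioned in the abstract.
    
    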
    Original language: English
    Title of host publication: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
    Publisher: Association for Computational Linguistics
    Pages: 95-102
    Number of pages: 8
    ISBN (Print): 9781950737307
    State: Published - 1 Aug 2019
