Using Lexical Chains for Text Summarization

Michael Elhadad, Regina Barzilay

Research output: Conference contribution (chapter in book/conference proceeding), peer-reviewed


We investigate a technique for producing a summary of an original text without requiring full semantic interpretation, relying instead on a model of the topic progression of the text derived from lexical chains. We present a new algorithm to compute lexical chains in a text, merging several robust knowledge sources: the WordNet thesaurus, a part-of-speech tagger and shallow parser for the identification of nominal groups, and a segmentation algorithm derived from (Hearst, 1994). Summarization proceeds in four steps: the original text is segmented, lexical chains are constructed, strong chains are identified, and significant sentences are extracted from the text. We present empirical results on the identification of strong chains and of significant sentences.
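The pipeline in the abstract can be sketched in simplified form. The toy relatedness table below is a hypothetical stand-in for WordNet, and the pre-tokenized word list stands in for the tagger/shallow-parser output; the greedy chaining, the length-times-homogeneity chain score, and the "mean plus two standard deviations" strength cutoff follow the general idea described in the paper, not its exact implementation.

```python
# Simplified sketch of lexical-chain summarization.
# Assumptions: RELATED is a toy stand-in for WordNet relations, and the
# input is an already-tokenized list of nouns (the paper uses a POS
# tagger and shallow parser to find nominal groups).
from statistics import mean, pstdev

# Hypothetical relatedness table (replaces WordNet lookups).
RELATED = {
    frozenset({"machine", "computer"}),
    frozenset({"computer", "processor"}),
}

def related(a, b):
    """Two words are related if identical or linked in the toy table."""
    return a == b or frozenset({a, b}) in RELATED

def build_chains(words):
    """Greedy chaining: append each word to the first chain that
    contains a related word; otherwise start a new chain."""
    chains = []
    for w in words:
        for chain in chains:
            if any(related(w, m) for m in chain):
                chain.append(w)
                break
        else:
            chains.append([w])
    return chains

def score(chain):
    """Chain score = length x homogeneity, where homogeneity is
    1 - (#distinct members / length)."""
    return len(chain) * (1.0 - len(set(chain)) / len(chain))

def strong_chains(chains):
    """A chain is strong if its score exceeds the mean score by more
    than two standard deviations."""
    scores = [score(c) for c in chains]
    cutoff = mean(scores) + 2 * pstdev(scores)
    return [c for c, s in zip(chains, scores) if s > cutoff]

def extract(sentences, strong):
    """Extraction heuristic: for each strong chain, pick the first
    sentence containing one of its most frequent members."""
    picked = []
    for chain in strong:
        rep = max(set(chain), key=chain.count)
        for s in sentences:
            if rep in s.lower().split() and s not in picked:
                picked.append(s)
                break
    return picked
```

On a toy word sequence with one dense repeated topic and several isolated nouns, the repeated topic forms the single strong chain; on real text the WordNet-backed relatedness and the segmentation step would shape the chains far more finely.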
Original language: English
Title of host publication: Intelligent Scalable Text Summarization
Subtitle of host publication: Workshop held at ACL 1997
Place of publication: Madrid
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 8
State: Published - 1997
