
Evaluation Measures for Text Summarization
We explain the ideas of automatic text summarization approaches and the taxonomy of summary evaluation methods. Moreover, we propose a new evaluation measure for assessing the quality of a summary. The core of the measure is covered by Latent Semantic Analysis (LSA), which can capture the main topics of a document. The summarization systems are ranked according to the similarity of the main topics of their summaries and their reference documents. Results show a high correlation between human rankings and the LSA-based evaluation measure. The measure is designed to compare a summary with its full text. It can compare a summary with a human-written abstract as well; however, in this case using a standard ROUGE measure gives more precise results. Nevertheless, if abstracts are not available for a given corpus, using the LSA-based measure is an appropriate choice.
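The idea behind the measure can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal illustration, assuming a binary term-by-sentence matrix, a shared vocabulary built from the full text, and the simplest single-main-topic variant: SVD extracts the first left singular vector of each matrix as the "main topic", and the summary is scored by the cosine similarity of the two topic vectors.

```python
import numpy as np

def term_sentence_matrix(sentences, vocab):
    """Binary term-by-sentence matrix over a fixed vocabulary."""
    A = np.zeros((len(vocab), len(sentences)))
    for j, sent in enumerate(sentences):
        words = set(sent.lower().split())
        for i, term in enumerate(vocab):
            if term in words:
                A[i, j] = 1.0
    return A

def main_topic(sentences, vocab):
    """First left singular vector of the term-by-sentence matrix,
    taken here as an approximation of the most important topic."""
    A = term_sentence_matrix(sentences, vocab)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, 0]

def lsa_similarity(full_text, summary):
    """Cosine similarity between the main-topic vectors of the full
    text and its summary; higher means better topic coverage."""
    vocab = sorted({w for s in full_text for w in s.lower().split()})
    u_full = main_topic(full_text, vocab)
    u_sum = main_topic(summary, vocab)
    return abs(float(u_full @ u_sum)) / (
        np.linalg.norm(u_full) * np.linalg.norm(u_sum))
```

In practice the matrix entries would be weighted (e.g. tf-idf) and more than one singular vector can be compared, but the ranking principle is the same: summaries whose dominant topics align with those of the source document score higher.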
Keywords: Text summarization, automatic extract, summary evaluation, latent semantic analysis, singular value decomposition
Year: 2009

Authors of this publication:

Josef Steinberger
E-mail: jstein@kiv.zcu.cz

Karel Ježek
Phone: +420 377632475
E-mail: jezek_ka@kiv.zcu.cz
WWW: https://cs.wikipedia.org/wiki/Karel_Je%C5%BEek_(informatik)
Related Projects:

Automatic Text Summarisation
Authors: Josef Steinberger, Karel Ježek, Michal Campr, Jiří Hynek
Desc.: Automatic text summarisation using various text mining methods, mainly Latent Semantic Analysis (LSA).