Evaluation Measures for Text Summarization


We explain the ideas of automatic text summarization approaches and the taxonomy of summary evaluation methods. Moreover, we propose a new evaluation measure for assessing the quality of a summary. The core of the measure is covered by Latent Semantic Analysis (LSA), which can capture the main topics of a document. The summarization systems are ranked according to the similarity of the main topics of their summaries and their reference documents. Results show a high correlation between human rankings and the LSA-based evaluation measure. The measure is designed to compare a summary with its full text. It can compare a summary with a human-written abstract as well; however, in this case using a standard ROUGE measure gives more precise results. Nevertheless, if abstracts are not available for a given corpus, using the LSA-based measure is an appropriate choice.
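To make the idea concrete, the following is a minimal sketch of an LSA-based comparison between a summary and its full text: build a term-by-sentence matrix for each, take the dominant left singular vector as a "main topic" direction in term space, and score by cosine similarity. This is an illustrative simplification, not the paper's exact formulation (the weighting scheme, preprocessing, and the number of topic dimensions are assumptions here).

```python
# Illustrative LSA-based summary scoring (assumed simplification,
# not the authors' exact method): binary term-by-sentence matrices,
# SVD, cosine similarity of the dominant topic vectors.
import numpy as np

def term_sentence_matrix(sentences, vocab):
    """Binary term-by-sentence matrix over a fixed vocabulary."""
    A = np.zeros((len(vocab), len(sentences)))
    for j, sent in enumerate(sentences):
        words = set(sent.lower().split())
        for i, term in enumerate(vocab):
            if term in words:
                A[i, j] = 1.0
    return A

def main_topic_vector(A):
    """First left singular vector: the dominant topic in term space."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, 0]

def lsa_similarity(doc_sents, summary_sents):
    """Cosine similarity of the main-topic vectors of document and summary."""
    vocab = sorted({w for s in doc_sents + summary_sents
                    for w in s.lower().split()})
    u_doc = main_topic_vector(term_sentence_matrix(doc_sents, vocab))
    u_sum = main_topic_vector(term_sentence_matrix(summary_sents, vocab))
    # Singular vectors are sign-ambiguous, so take the absolute cosine.
    return abs(np.dot(u_doc, u_sum)) / (
        np.linalg.norm(u_doc) * np.linalg.norm(u_sum))

if __name__ == "__main__":
    doc = ["the cat sat on the mat",
           "the dog barked at the cat",
           "the weather was sunny today"]
    summary = ["the cat sat on the mat"]
    print(lsa_similarity(doc, summary))
```

A full evaluation would rank systems by this score over a corpus; the paper also considers comparing against human abstracts, where ROUGE is reported to be more precise.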

Keywords: Text summarization, automatic extract, summary evaluation, latent semantic analysis, singular value decomposition

Year: 2009

Journal ISSN: 1335-9150

Authors of this publication:


Josef Steinberger


E-mail: jstein@kiv.zcu.cz

Josef is an associate professor at the Department of Computer Science and Engineering at the University of West Bohemia in Pilsen, Czech Republic. He is interested in media monitoring and analysis, mainly automatic text summarisation, sentiment analysis and coreference resolution.

Karel Ježek


Phone:  +420 377632475
E-mail: jezek_ka@kiv.zcu.cz
WWW: https://cs.wikipedia.org/wiki/Karel_Je%C5%BEek_(informatik)

Karel is the former group coordinator and a supervisor of PhD students working on the research projects of this group.

Related Projects:


Project

Automatic Text Summarisation

Authors:  Josef Steinberger, Karel Ježek, Michal Campr, Jiří Hynek
Desc.: Automatic text summarisation using various text mining methods, mainly Latent Semantic Analysis (LSA).