Machine Learning and Text Mining for Retrieving Reviews of Books

[Ce billet est co-écrit par Chahinez Benkoussas (LSIS-OpenEdition), Hussam Hamdan (LSIS-LIF-OpenEdition), Patrice Bellot (LSIS-OpenEdition), Élodie Faath (OpenEdition), Marin Dacos (OpenEdition), Frédéric Béchet (LIF)]

Introduction

In the social sciences and humanities, book reviews are an important source of information for researchers. Reviews are part of the cycle of scholarly communication: published in scientific journals or on the Web, they allow researchers to keep up with the latest scientific advances. Moreover, the writing and reading of book reviews are part of academic life. Indeed, a number of studies have attempted to determine how researchers approach reviews in their academic life1 and what features are expected of a good book review2. On the other hand, book reviews can be employed for recommending books, a task that can be performed by combining content analysis and collaborative filtering, for example by taking into account the behavior of readers and by analyzing the comments and short reviews they write on websites. This is what has been proposed for the Social Book Search track of the INEX evaluation campaign3 in information retrieval.

Unlike previous studies on automatic book recommendation, we deal exclusively with scientific books in the social science domain and with the ways one can retrieve reviews and link them to the books they discuss. We are interested in collecting two kinds of reviews that express two complementary points of view on a given book: deep expert analysis on the one hand (long reviews written by expert reviewers in scientific journals) and short reactions expressing an opinion on the other (short reviews that can be found on the Web or in social media).

While automatic sentiment analysis is a classic computer science topic, retrieving reviews remains an original problem.

A Collection of Scientific Book Reviews

A review usually includes a presentation of the book, an analysis of its main arguments, and some positive or negative opinions about it. In the following example, the reviewer’s opinion is expressed in the conclusion4:

Perhaps the only major point of criticism worth mentioning is the unbalanced choice of topics. In their summary, the editors stress the multi-disciplinary character of the volume, which is unfortunately incongruent with my impression: seven papers are exclusively or largely numismatic. […] Nevertheless, the volume is an excellent representative of current scholarship on the still puzzling phenomenon of Hellenistic royal cult and Roman imperial worship. The countless and superbly chosen illustrations greatly enhance comprehension of the texts. One cannot but praise the fact that the editorial work is impeccable, with a near absence of typos and the publication of high quality photographs. In conclusion, the editors should be congratulated for gathering all these scholars and for producing such a major – in terms of both pages and quality – contribution.

In order to train classifiers to automatically classify (and then retrieve) documents as reviews or non-reviews, we manually extracted and annotated two corpora. The first training corpus contains documents in French, pre-classified into two categories: 498 book reviews extracted from the journals of the OpenEdition Revues.org platform, and 88 documents that are not reviews but other academic papers.

We also constructed a multilingual corpus mixing scientific papers and scientific blog entries. Each sub-corpus is pre-classified into two categories (reviews RV and non-reviews NonRV): German (10 RV and 10 NonRV), English (150 RV and 150 NonRV), French (201 RV and 201 NonRV), Portuguese (78 RV and 78 NonRV).

Automatically expanding the collection of reviews

Automatic identification of reviews by means of machine learning

Starting from the structured documents (XML-TEI), different techniques can be applied for automatically classifying texts. Faced with the great variety of document styles on the two platforms and the diversity of languages, we tested several approaches. Our hypothesis is that classifying texts as reviews can be accomplished by taking into account the lexicon (a lexicon common to the majority of the reviews) as well as other features such as the quantity and location of opinion sentences and of book references. This differs from classical customer review analysis5 in the sense that no clear “product feature” can be given as an input to detect and analyze reviews.

First, we used a “bag-of-words” approach to index the documents (the order of the words is ignored, all XML-TEI tags are removed, and the individual words constitute the only classification features). The corpus is split into two sets, a training set TV (70% of the initial corpus) and a test set TE (30%). With 10-fold cross-validation and a Multinomial Naive Bayes model, we obtained 96% precision and 91% recall for reviews of the French-language collection, and 64% precision and 81% recall for non-reviews.
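
As an illustration, here is a minimal sketch of this baseline with scikit-learn; the library choice and all names below are ours, and the original experiments may have relied on other tools:

    # Minimal sketch of the bag-of-words baseline, assuming scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def train_review_classifier(texts, labels):
        """texts: plain-text documents (XML-TEI tags already removed);
        labels: "RV" or "NonRV" for each document."""
        # Bag-of-words indexing: word order ignored, words are the only features.
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        # 70% training set (TV) / 30% test set (TE), as in the experiment above.
        tv_x, te_x, tv_y, te_y = train_test_split(
            texts, labels, test_size=0.30, stratify=labels, random_state=0)
        # 10-fold cross-validation on the training set.
        cv_scores = cross_val_score(model, tv_x, tv_y, cv=10)
        model.fit(tv_x, tv_y)
        return model, cv_scores.mean(), model.score(te_x, te_y)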

Second, we defined a set of new features (a sketch of how these might be computed follows the list):

  • Sentiment: the number and positions of occurrences of subjective words,
  • Specific lexicon: the positions of some words that occur very often at the beginning of reviews (book, to describe, to study, author…) or across the reviews (chapter, section, part…),
  • Citation metadata: the close co-occurrence of named entities (referring to author names), titles of books, dates and places.
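
A sketch of how these feature groups could be computed; the word lists below are illustrative placeholders, not the lexicons actually used:

    # Illustrative extraction of the three feature groups listed above; the
    # word lists are small placeholders, not the actual lexicons we used.
    SUBJECTIVE = {"excellent", "impeccable", "unfortunately", "criticism"}
    OPENING = {"book", "describe", "study", "author"}
    STRUCTURAL = {"chapter", "section", "part"}

    def review_features(tokens):
        """tokens: the lower-cased words of one document, in order."""
        n = len(tokens) or 1

        def positions(words):
            # Relative position (0 = start of text, 1 = end) of each occurrence.
            return [i / n for i, t in enumerate(tokens) if t in words]

        subj = positions(SUBJECTIVE)
        return {
            # Sentiment: how many subjective words, and how early they occur.
            "subj_count": len(subj),
            "subj_mean_pos": sum(subj) / len(subj) if subj else 0.0,
            # Specific lexicon: typical review words near the beginning,
            # structural words anywhere in the text.
            "opening_early": sum(1 for p in positions(OPENING) if p < 0.1),
            "structural_count": len(positions(STRUCTURAL)),
            # Citation metadata (close co-occurrences of author names, titles,
            # dates, places) would require a named-entity step, omitted here.
        }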

Searching for book reviews on the Web

Our second goal is to build a collection of reviews from the Web for each OpenEdition book. For testing, we chose 127 books in 3 categories: 49 books about the environment, 63 about sociology and 15 about information science. We then queried Google Web Search by concatenating the title of the book (as an exact phrase) and the names of all its authors (see the query sketch below). We downloaded the first 20 Web pages returned for each book and annotated them manually: 2,000 pages were evaluated and classified into several classes such as review, advertisement, interview, bibliography or even ‘access denied’. We filtered out all advertising pages, which never contain any genuine book review. Of the 600 remaining pages used for classifier training, 97 are book reviews. For each retrieved Web page, we kept its URL and its HTML content.
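
As an illustration, the query for one book can be built as follows (a minimal sketch; the example title and authors are hypothetical, and the actual crawling and search-engine interface are omitted):

    def build_query(title, authors):
        # Exact-phrase search on the title, plus all author names.
        return '"{}" {}'.format(title, " ".join(authors))

    # Hypothetical example (not one of the actual test books):
    # build_query("La sociologie urbaine", ["Marie Durand", "Paul Martin"])
    # returns: '"La sociologie urbaine" Marie Durand Paul Martin'

Three feature sets have been considered, as sketched after the list below: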

A. Deleting all HTML and JavaScript tags in the Web pages.
B. Employing Boilerpipe6 to extract the actual content of the Web pages. The number of usable pages decreased to 576 because Boilerpipe produced empty documents for some pages (88 reviews and 488 non-reviews).
C. Using only the words in the URLs as features.
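
A sketch of the three variants in Python, assuming the boilerpy3 port of the Boilerpipe library cited above (the original experiments presumably used the Java library, so this is only an approximation):

    import re
    from urllib.parse import urlparse

    from boilerpy3 import extractors  # Python port of the Boilerpipe library

    def features_a(html):
        # A. Delete scripts, styles and all remaining HTML tags.
        html = re.sub(r'(?s)<(script|style)\b.*?</\1>', ' ', html)
        return re.sub(r'<[^>]+>', ' ', html)

    def features_b(html):
        # B. Boilerpipe-style extraction of the main content; it returns an
        # empty string for some pages, which is why 600 pages shrank to 576.
        return extractors.ArticleExtractor().get_content(html)

    def features_c(url):
        # C. Keep only the words appearing in the URL itself.
        parsed = urlparse(url)
        return re.findall(r'[a-z]+', (parsed.netloc + parsed.path).lower())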

We tested four different classification approaches (Naive Bayes, Multinomial Naive Bayes, SVM, and REPTree decision trees) and obtained F-measures greater than 0.6 for reviews and greater than 0.8 for non-reviews.
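
Such a comparison can be sketched with scikit-learn; REPTree is specific to Weka, so a generic decision tree stands in for it here:

    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB, MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier

    CLASSIFIERS = {
        "Naive Bayes": GaussianNB(),
        "Multinomial Naive Bayes": MultinomialNB(),
        "SVM": LinearSVC(),
        "Decision tree (stand-in for Weka's REPTree)": DecisionTreeClassifier(),
    }

    def compare_classifiers(X, y):
        """X: dense feature matrix; y: 1 for review, 0 for non-review."""
        for name, clf in CLASSIFIERS.items():
            # Mean F-measure for the "review" class over 10-fold CV.
            f1 = cross_val_score(clf, X, y, cv=10, scoring="f1").mean()
            print(f"{name}: F1(review) = {f1:.2f}")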

Automatically linking reviews and books with BILBO

Once a document has been classified as a long scientific review, we have to link it to an identifier of the reviewed book. This process is not straightforward, as many ambiguities can occur. Our reference-parsing software BILBO7 can be employed to detect bibliographic references in the reviews and to obtain the DOI of the corresponding book via the CrossRef API (when such an identifier exists: http://www.crossref.org). BILBO is trained on our own annotated corpora of DH papers extracted from the OpenEdition Revues.org platform. The robustness of BILBO, based on linear-chain conditional random fields8, allows largely language-independent tagging. Our preliminary results show that the quality of linking via BILBO and then CrossRef seems good enough to be employed at a large scale (more than 80% of the retrieved links are correct).
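
As an illustration, once BILBO has extracted the title and authors of the reviewed book, the DOI lookup can be sketched against the public CrossRef REST API (the endpoint and parameters shown are the documented public ones; error handling is minimal):

    import requests

    def find_book_doi(title, authors):
        """Look up the DOI of a book from the fields extracted by BILBO."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": "{} {}".format(title, " ".join(authors)),
                    "rows": 1},
            timeout=10)
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        return items[0]["DOI"] if items else None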




  1. Riley, L.E., & Spreitzer, E.A. (1970). “Book reviewing in the social sciences”. American Sociologist, 25, 358–363. [Online] URL: http://www.jstor.org/stable/27701668 (accessed 20 November 2013); Hartley, James (2006). “Reading and writing book reviews across the disciplines”. Journal of the American Society for Information Science and Technology, vol. 57, no. 9, pp. 1194–1207. [Online] DOI: 10.1002/asi.20399; Clara Chevalier and Émilien Ruiz, “Comment (et pourquoi) écrire un compte rendu de lecture ?”, Devenir historien-ne : Méthodologie de la recherche et historiographie en master Histoire, 29 September 2011. [Online] URL: http://devhist.hypotheses.org/492 (accessed 20 November 2013)
  2. East, John W. (2011). “The Scholarly Book Review in the Humanities: An Academic Cinderella?” Journal of Scholarly Publishing, vol. 43, no. 1, pp. 52–67. [Online] DOI: 10.1353/scp.2011.0046
  3. Kazai, Gabriella, et al. (2011). “Overview of the INEX 2010 book track: Scaling up the evaluation using crowdsourcing”. Comparative Evaluation of Focused Retrieval, Springer Berlin Heidelberg, pp. 98–117. [Online] DOI: 10.1007/978-3-642-23577-1_9
  4. Joannis Mylonopoulos, “Panagiotis P. Iossif, Andrzej S. Chankowski, Catharine C. Lorber (eds.), More than Men, Less than Gods: Studies on Royal Cult and Imperial Worship”, Kernos [Online], 26 | 2013, published online 10 October 2013, accessed 20 November 2013. URL: http://kernos.revues.org/2160
  5. Hu, Minqing, and Bing Liu (2004). “Mining and summarizing customer reviews”. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, ACM. [Online] DOI: 10.1145/1014052.1014073
  6. Christian Kohlschütter, Peter Fankhauser, Wolfgang Nejdl (2010). “Boilerplate detection using shallow text features”. Proceedings of the third ACM international conference on Web search and data mining, New York, USA, pp. 441–450. [Online] DOI: 10.1145/1718487.1718542
  7. Y.-M. Kim, P. Bellot, E. Faath, M. Dacos (2012). “Machine Learning for Automatic Annotation of References in DH scholarly papers”. Digital Humanities 2012, Hamburg (Germany). [Online] URL: http://www.dh2012.uni-hamburg.de/conference/programme/abstracts/machine-learning-for-automatic-annotation-of-references-in-dh-scholarly-papers/; Y.-M. Kim, P. Bellot, J. Tavernier, E. Faath, M. Dacos (2012). “Evaluation of BILBO Reference Parsing in Digital Humanities via a Comparison of Different Tools”. ACM 12th Symposium on Document Engineering (DocEng ’12). [Online] DOI: 10.1145/2361354.2361400
  8. Lafferty, J., McCallum, A., & Pereira, F. C. (2001). “Conditional random fields: Probabilistic models for segmenting and labeling sequence data”. Proceedings of the Eighteenth International Conference on Machine Learning (ICML ’01), pp. 282–289. [Online] URL: http://dl.acm.org/citation.cfm?id=655813
