Positive Or Not Positive, That’s The Question

[This post is co-written by Hussam Hamdan (LSIS-LIF-OpenEdition), Patrice Bellot (LSIS-OpenEdition), and Frédéric Béchet (LIF)]

Writing a review about a book means you have read it. Expressing a positive opinion and recommending that others read it suggests you found it an interesting read. In general that is true, but what if you found some interesting aspects or ideas yet have a negative critique of the rest? What impact will the review have on its readers?

Opinion Analysis or Sentiment Analysis (SA) is a real problem: it is sometimes difficult even for people to decide whether a review is subjective or objective, or whether its polarity is positive or negative. Moreover, an opinion about an expressed opinion is in turn an opinion. The understanding of a review's polarity depends on the reader, their gender, personal issues, and perhaps circumstances and culture. The problem is even harder for computers.

Recent years have seen an exponential increase in user-generated content on the web, and at the same time a growing interest in analysing that content. Such content can be useful in marketing, where it is important to discover customers' opinions about products; in politics, where lasting knowledge of voters' opinions about politicians and their programmes is required; and in the media, where it is wise to study the impact of a message on the public.

In this blog entry, we are interested in detecting the sentiment expressed in tweets, with the aim of applying it to tweets about books, especially OpenEdition books. For that purpose, we experiment with feature engineering and use some techniques to overcome the challenges of the SA task in social media, especially on Twitter.

These challenges relate to the limited size of posts (e.g., a maximum of 140 characters on Twitter), the informal language of such content, which contains slang and non-standard expressions (e.g., Gr8 instead of great, LOL instead of laughing out loud, goooood, etc.), and the high level of noise in the posts due to the absence of verification by the user or by spell-checking tools.

To handle non-standard expressions and abbreviations, we built a dictionary in which each entry maps a non-standard expression or emoticon to its standard expression or meaning.
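As a rough illustration, here is a minimal sketch of such a normalization step in Python; the dictionary entries and the character-repetition rule are illustrative examples, not the actual dictionary described above.

```python
# Minimal sketch of the normalization step: map slang, abbreviations and
# emoticons to standard forms before feature extraction.
# The entries below are illustrative examples, not the actual dictionary.
import re

NORMALIZATION_DICT = {
    "gr8": "great",
    "lol": "laughing out loud",
    ":)": "happy",
    ":(": "sad",
}

def normalize(tweet):
    tokens = tweet.lower().split()
    # Collapse runs of 3+ identical characters to two ("goooood" -> "good")
    tokens = [re.sub(r"(.)\1{2,}", r"\1\1", t) for t in tokens]
    return " ".join(NORMALIZATION_DICT.get(t, t) for t in tokens)

print(normalize("Gr8 book goooood reading :)"))
```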

We exploited several groups of features for classifying posts:

1. Bag of words (unigrams)

The most commonly used features in text analysis are bag-of-words features, which represent a text as an unordered set of words. This representation assumes that words are independent of each other (this assumption is false, of course, but it is a practical, quite effective and very common simplification) and therefore disregards their order of appearance. We used these features as a baseline.
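As an illustration, a minimal sketch of this binary bag-of-words representation with scikit-learn (the example tweets are made up):

```python
# Minimal sketch of the bag-of-words baseline: each tweet becomes a binary
# vector indicating which words it contains, regardless of word order.
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "great book , I recommend it",
    "boring book , I stopped after one chapter",
]

vectorizer = CountVectorizer(binary=True)   # presence/absence of each unigram
X = vectorizer.fit_transform(tweets)

print(vectorizer.get_feature_names_out())
print(X.toarray())
```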

2. Micro-blog specific features

We extracted some domain-specific features of tweets: the presence or absence of a URL, whether the tweet was retweeted, the number of occurrences of "not", the number of happy emoticons, the number of sad emoticons, the numbers of exclamation and question marks, the number of words starting with a capital letter, and the number of @ mentions.
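A minimal sketch of how such features could be extracted; the emoticon lists and token-based heuristics below are simplifying assumptions, not the exact rules we used:

```python
# Minimal sketch of the micro-blog specific features listed above:
# URL presence, retweet flag, "not" count, emoticons, punctuation,
# capitalised words and @ mentions.
HAPPY = {":)", ":-)", ":d", "=)"}   # assumed emoticon lists, for illustration
SAD = {":(", ":-(", "=("}

def microblog_features(tweet):
    tokens = tweet.split()
    lower = [t.lower() for t in tokens]
    return {
        "has_url": int(any(t.startswith("http") for t in lower)),
        "is_retweet": int("rt" in lower),
        "nb_not": lower.count("not"),
        "nb_happy": sum(t in HAPPY for t in lower),
        "nb_sad": sum(t in SAD for t in lower),
        "nb_exclamation": tweet.count("!"),
        "nb_question": tweet.count("?"),
        "nb_capitalized": sum(t[:1].isupper() for t in tokens),
        "nb_mentions": tweet.count("@"),
    }

print(microblog_features("RT @friend Not a great read ... really ?!"))
```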

3. DBpedia features

We used the DBpedia Spotlight web service to extract the concepts hidden in each tweet. For example, the DBpedia concept for Chapel Hill is Settlement. Therefore, if we suppose that people post positively about settlements, a tweet about Chapel Hill is more likely to be positive.
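A minimal sketch of querying DBpedia Spotlight for the types of the entities mentioned in a tweet; the public endpoint, its parameters and the JSON fields used here are assumptions and may differ from the exact service configuration we used:

```python
# Minimal sketch of a DBpedia Spotlight annotation call (assumed endpoint).
import requests

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"  # assumption

def dbpedia_concepts(text, confidence=0.4):
    response = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    concepts = set()
    for resource in response.json().get("Resources", []):
        # "@types" is a comma-separated list such as "DBpedia:Settlement,..."
        concepts.update(t for t in resource.get("@types", "").split(",") if t)
    return concepts

print(dbpedia_concepts("I spent the weekend reading in Chapel Hill"))
```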

4. Lexical relations via Wordnet

We used WordNet to extract the synonyms of nouns, verbs and adjectives, the verb groups (the hierarchies in which the verb synsets are arranged), the similar adjectives (synsets linked by the Similar_To relation), and the concepts of nouns related by the is-a relation.
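A minimal sketch of this WordNet expansion using NLTK's WordNet interface (the original experiments may rely on a different WordNet API):

```python
# Minimal sketch of the WordNet expansion: synonyms plus, depending on the
# part of speech, hypernyms (is-a), verb groups or similar adjectives.
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def wordnet_features(word, pos):
    synonyms, related = set(), set()
    for synset in wn.synsets(word, pos=pos):
        synonyms.update(lemma.name() for lemma in synset.lemmas())
        if pos == wn.NOUN:
            related.update(h.name() for h in synset.hypernyms())      # is-a concepts
        elif pos == wn.VERB:
            related.update(g.name() for g in synset.verb_groups())    # verb groups
        elif pos == wn.ADJ:
            related.update(s.name() for s in synset.similar_tos())    # similar adjectives
    return synonyms, related

print(wordnet_features("interesting", wn.ADJ))
```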

5. Senti-features

We used SentiWordNet to extract the numbers and scores of positive, negative and neutral words in tweets, as well as the polarity (the number of positive words divided by the number of negative words plus one) and the subjectivity (the number of positive and negative words divided by the number of neutral words plus one).
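A minimal sketch of these senti-features using NLTK's SentiWordNet interface; taking the first sense of each word is a simplifying assumption of this sketch:

```python
# Minimal sketch of the senti-features: counts of positive/negative/neutral
# words plus the polarity and subjectivity ratios defined above.
# Requires: nltk.download("wordnet"); nltk.download("sentiwordnet")
from nltk.corpus import sentiwordnet as swn

def senti_features(tokens):
    nb_pos = nb_neg = nb_neutral = 0
    for token in tokens:
        senses = list(swn.senti_synsets(token))
        if not senses:
            continue
        s = senses[0]                      # first sense only (assumption)
        if s.pos_score() > s.neg_score():
            nb_pos += 1
        elif s.neg_score() > s.pos_score():
            nb_neg += 1
        else:
            nb_neutral += 1
    polarity = nb_pos / (nb_neg + 1)
    subjectivity = (nb_pos + nb_neg) / (nb_neutral + 1)
    return {"nb_pos": nb_pos, "nb_neg": nb_neg, "nb_neutral": nb_neutral,
            "polarity": polarity, "subjectivity": subjectivity}

print(senti_features("what a great and interesting book".split()))
```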

For evaluation, we used the data set provided in SemEval 2013 for subtask B of the sentiment analysis in Twitter task. The participants were provided with training and testing tweets annotated as positive, negative or neutral. The training set contains 8,498 tweets and the testing set contains 3,813 tweets.

We ran many experiments with different combinations of features using two classification algorithms: Naive Bayes and SVM with a linear kernel. The evaluation metric used in SemEval is the F-measure over the negative and positive classes, which appears in the last row of the following tables of results.
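A minimal sketch of this experimental setup on toy data, using scikit-learn's MultinomialNB and LinearSVC and the averaged F-measure over the positive and negative classes (the tweets below are made up, not the SemEval data):

```python
# Minimal sketch of the two classifiers on bag-of-words features, evaluated
# with the SemEval metric: average F-measure over positive and negative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

train_texts = ["great book", "boring book", "a book about birds"]
train_labels = ["positive", "negative", "neutral"]
test_texts = ["really great story", "really boring story"]
test_labels = ["positive", "negative"]

vectorizer = CountVectorizer(binary=True)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

for model in (MultinomialNB(), LinearSVC()):
    predictions = model.fit(X_train, train_labels).predict(X_test)
    # Average F-measure over the positive and negative classes only
    score = f1_score(test_labels, predictions,
                     labels=["positive", "negative"], average="macro")
    print(type(model).__name__, round(score, 3))
```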

F-measure   | uni-gram | +adjectives | +DBpedia | +others
Pos         | 0.603    | 0.619       | 0.622    | 0.637
Neg         | 0.443    | 0.436       | 0.417    | 0.440
Neutral     | 0.683    | 0.685       | 0.691    | 0.689
Avg neg+pos | 0.523    | 0.527       | 0.520    | 0.538

The results of different feature vectors using the linear SVM model

F-measure   | uni-gram | +adjectives | +DBpedia | +others
Pos         | 0.514    | 0.563       | 0.562    | 0.540
Neg         | 0.397    | 0.422       | 0.427    | 0.424
Neutral     | 0.608    | 0.652       | 0.648    | 0.636
Avg neg+pos | 0.456    | 0.493       | 0.495    | 0.482

The results of different feature vectors using the Naive Bayes model

We observe that the adjectives extracted from WordNet and the DBpedia concepts improve the performance of Naive Bayes by 4%, but the other features are not useful. However, all the proposed features together improve the performance of the SVM by 1.5%. The SVM model does better than Naive Bayes.

Thus, we proposed the use of DBpedia concepts, some WordNet features, senti-features extracted from SentiWordNet and syntactic features to improve sentiment classification, and we tested the impact of each group of features. The SVM model obtains the best accuracy when the original tweets are extended with all the proposed features, whereas the Naive Bayes model gives its best accuracy when the original tweets are extended with only the adjectives and the DBpedia concepts.

For more information, you can read our paper "Experiments with DBpedia, WordNet and SentiWordNet as resources for sentiment analysis in micro-blogging", published in the proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), June 14-15, 2013, Atlanta, Georgia (USA).



4 replies

  1. ici says:

    A post that finally gives answers to my questions. Thank you.

  2. fournier says:

    It is not clear to me: which features do you use in which experiments?

    Can you explain in more detail?

    • hamdan says:

      We constructed 4 feature vectors representing the tweet:

      1- The original representation: the first feature vector contains the words present in the tweet (binary features).

      2- We extend the first vector with the adjectives extracted from WordNet; thus, the second feature vector contains the words and the extracted adjectives (these adjectives have a Similar_To relation in WordNet with each adjective present in the original tweet).

      3- We extend the second vector with the concepts extracted from DBpedia; thus, the third feature vector contains the words, the adjectives extracted from WordNet and the hidden concepts.

      4- We extend the third vector with all the other features (senti-features, micro-blog specific features, the synonyms of nouns and verbs, the verb groups (the hierarchies in which the verb synsets are arranged), and the concepts of nouns related by the is-a relation).

      Then we train Naive Bayes and SVM with each feature vector.
      I hope that makes it clear!
