Experimental results of note classification with different feature generation strategies – Part 1
For the experiments, we used a well-known implementation of Support Vector Machines, SVMlight, developed by Thorsten Joachims of Cornell University. The learning and test data for this program must be represented in the following format:
[category] [id]:[count] [id]:[count] [id]:[count] …
where each line corresponds to a document, that is, a note in our case. The number at the front indicates whether the corresponding note contains bibliographic information (1) or not (-1). It is followed by the features converted into numeric values: instead of the token itself, the unique feature id of the token and its count in the note are given. This format is one of the standard representations of text documents for machine learning techniques.
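As a concrete illustration, here is a minimal sketch of how a tokenized note could be turned into such a line. The function and the vocabulary handling are our own illustration, not part of SVMlight; note that SVMlight expects the feature ids on each line in increasing order. The resulting files are then passed to the svm_learn and svm_classify programs of the package.

```python
def to_svmlight_line(label, tokens, vocab):
    """Convert one note (a list of feature tokens) into an SVMlight line.

    label : 1 if the note contains bibliographic information, -1 otherwise
    vocab : dict mapping a feature string to its unique numeric id;
            an unseen feature receives the next free id
    """
    counts = {}
    for tok in tokens:
        fid = vocab.setdefault(tok, len(vocab) + 1)
        counts[fid] = counts.get(fid, 0) + 1
    # SVMlight expects feature ids in increasing order on each line
    pairs = " ".join(f"{fid}:{cnt}" for fid, cnt in sorted(counts.items()))
    return f"{label} {pairs}"

# e.g. to_svmlight_line(1, ["m.", "et", "m.", "aspects"], {})
# -> "1 1:2 2:1 3:1"
```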
Before applying any feature generation strategy, we randomly select 70% of the notes in the corpus as learning data and keep the rest as test data. This selection is then reused for all the other strategies. The overall statistics of corpus 2 are given in the following table.
Total #notes | 1532 |
#Notes in learning data | 1072 |
#Notes in test data | 460 |
#Positive notes in total data | 1147 (1147/1532 = 0.75) |
#Positive notes in learning data | 798 (798/1072 = 0.74) |
#Positive notes in test data | 349 (349/460 = 0.76) |
A positive note is a note containing at least one bibliographic reference. All the positive notes in corpus 2 are manually annotated in the same way as in corpus 1. The proportion of positive notes is 0.75, which reflects the real proportion in the articles of Revues.org.
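The 70/30 split itself can be reproduced in a few lines; a minimal sketch, assuming the notes are held in a list (the fixed seed is our addition to make the selection repeatable):

```python
import random

def split_notes(num_notes, ratio=0.7, seed=0):
    """Randomly select `ratio` of the note indices as learning data
    and return the rest as test data. The two index lists are saved
    and reused unchanged for every feature generation strategy."""
    rng = random.Random(seed)
    indices = list(range(num_notes))
    rng.shuffle(indices)
    cut = int(num_notes * ratio)
    return indices[:cut], indices[cut:]

learn_ids, test_ids = split_notes(1532)  # -> 1072 learning, 460 test notes
```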
We now show the experimental results of 10 different feature generation strategies, obtained by changing the feature types and the detailed criteria. We take up again the note example from the previous posting to show how the data representation differs according to the selected strategy.
Example: M. Tozy et M. Mahdi, « Aspects du droit communautaire dans l’Atlas marocain », Droit et Société, 1990, p. 219-227.
Strategy 1: input words as features
Features | M. et M. Aspects du droit dans marocain Droit et Société p. |
Feature counts | m.:2 et:2 aspects:1 du:1 droit:2 dans:1 marocain:1 société:1 p.:1 |
Numerical features | 30:1 41:2 60:2 61:1 62:1 63:2 64:1 65:1 66:1 |
#Unique features | 3920 |
All notes are represented using just the input words and their counts, and the 70% of notes selected above are used as the learning data. We did not consider a more elaborate weighting such as tf-idf, because even a stop word, which is typically discarded in a tf-idf representation, can be an efficient indicator for our note classification. After constructing an SVM model on the learning data, we obtain the following accuracies on the test data.
Total accuracy (Micro Averaged Precision) : 87.61
[Positive] Precision : 88.42 Recall : 96.28 [Negative] Precision : 83.75 Recall : 60.36
Compared to the positive notes, both precision and especially recall are poor for the negative notes. The learned SVM classifier tends to assign a note to the positive class, which means that the features used are not sufficient to describe the characteristics of negative notes well.
Strategy 2: input words + punctuation marks as features
Features | M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . |
Feature counts | m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 |
Numerical features | 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 |
#Unique features | 3933 |
The punctuation marks are added to the features of strategy 1. We have already verified that these features are indispensable for the sequence learning of a CRF model on our corpus 1. Even though the learning objective here is different, we expect these marks to capture well the specificity of bibliographic references. The same notes are selected for the learning and test data in order to compare the different strategies exactly (the same holds for the remaining strategies).
Total accuracy : 89.35
[Positive] Precision : 90.76 Recall : 95.70 [Negative] Precision : 83.70 Recall : 69.37
Total accuracy increased by about 2 points compared to the previous strategy. In particular, a remarkable gain (9 points) is achieved on the recall of negative notes. This confirms the usefulness of punctuation marks as classification features.
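The only change with respect to strategy 1 is in the tokenization: punctuation marks are kept as features of their own instead of being discarded. A minimal sketch of such a tokenizer (the exact rules of our pipeline may differ slightly):

```python
import re

# Words, possibly ending with an abbreviation dot ("m.", "p."),
# and individual punctuation marks are all emitted as features.
TOKEN_RE = re.compile(r"\w+\.?|[^\w\s]")

def tokenize_with_punctuation(text):
    return [t.lower() for t in TOKEN_RE.findall(text)]

# tokenize_with_punctuation("M. Tozy et M. Mahdi, « Aspects »")
# -> ['m.', 'tozy', 'et', 'm.', 'mahdi', ',', '«', 'aspects', '»']
```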
Strategy 3: input (strategy 2) + 11 feature types (CRF local features)
Features | M. et M. , « Aspects du droit dans marocain » , Droit et Société , , p. . + ALLCAP INITIAL FIRSTCAP ALLSMALL ALLCAP INITIAL FIRSTCAP PUNC PUNC FIRSTCAP ALLSMALL ALLSMALL ALLSMALL ALLSMALL NONIMPCAP ALLSMALL PUNC PUNC ITALIC FIRSTCAP ITALIC ALLSMALL ITALIC FIRSTCAP PUNC ALLNUMBERS PUNC ALLSMALL NUMBERS DASH PUNC |
Feature counts | m.:2 et:2 ,:4 «:1 aspects:1 du:1 droit:2 dans:1 marocain:1 »:1 société:1 p.:1 .:1 allcap:2 initial:2 firstcap:5 allsmall:8 punc:7 nonimpcap:1 italic:3 allnumbers:1 numbers:1 dash:1 |
Numerical features | 9:1 23:4 25:1 34:1 35:1 47:2 67:2 68:1 69:1 70:2 71:1 72:1 73:1 10001:3 10002:5 10003:2 10004:1 10005:8 10006:2 10007:7 10008:1 10009:1 10010:1 10011:1 |
#Unique features | 3944 |
The 11 local features that were defined and verified as useful during the CRF learning experiments on corpus 1 are now applied to the note classification, while keeping the input features of strategy 2. All the local features describing the tokens included in a note are counted. Since the local features capture the external form of tokens, we expect them to be useful for the classification.
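As an illustration, here is a sketch of how such shape features can be derived from the surface form of a token. The rules are our reconstruction from the feature names; ITALIC is omitted because it comes from the document markup rather than from the token string itself:

```python
def local_features(token):
    """Shape features of a token (our reconstruction of the CRF local
    features; one token may carry several of them, e.g. "M." is both
    an INITIAL and, once the dot is stripped, an ALLCAP)."""
    feats = []
    core = token.rstrip(".")
    if not core:
        return ["punc"]                # token made of dots only
    if len(core) == 1 and core.isupper() and token.endswith("."):
        feats.append("initial")        # abbreviated initial, e.g. "M."
    if core.isalpha():
        if core.isupper():
            feats.append("allcap")     # e.g. "M" (from "M.")
        elif core[0].isupper() and core[1:].islower():
            feats.append("firstcap")   # e.g. "Tozy"
        elif core.islower():
            feats.append("allsmall")   # e.g. "droit"
    elif any(c.isupper() for c in core[1:]):
        feats.append("nonimpcap")      # internal capital, e.g. "l'Atlas"
    if not any(c.isalnum() for c in token):
        feats.append("punc")           # pure punctuation, e.g. "," or "«"
    if core.isdigit():
        feats.append("allnumbers")     # e.g. "1990"
    elif any(c.isdigit() for c in core):
        feats.append("numbers")        # digits mixed with other characters
    if "-" in token:
        feats.append("dash")           # e.g. "219-227"
    return feats
```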
Total accuracy : 86.09
[Positive] Precision : 87.80 Recall : 94.84 [Negative] Precision : 78.31 Recall : 58.56
The result is different from what we expected. Total accuracy decreases by more than 3 points, and all the precisions and recalls are far below those of strategy 2, even below those of strategy 1. We suspect that this is caused by an inappropriate treatment of the local features. For example, a binary value instead of the feature count may be more suitable for capturing a broad pattern of local features.
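The variant suggested by this last remark is a one-line change at feature generation time; a minimal sketch (the helper name is hypothetical, and the variant itself is untested here):

```python
def binarize(feature_counts):
    """Replace every feature count by a 0/1 presence indicator, so a
    note with eight ALLSMALL tokens and one with two contribute the
    same value for that feature."""
    return {fid: 1 for fid in feature_counts}
```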
Strategy 4: input + 11 feature types + ‘posspage’ feature
Features | (same as above) + POSSPAGE |
Feature counts | (same as above) + posspage:1 |
Numerical features | (same as above) + 10012:1 |
#Unique features | 3945 |
Despite the failure of the above strategy using local features, we decided to test an additional local feature, ‘POSSPAGE’. This feature was previously tested on corpus 1 but turned out not to be effective for the reference annotation. However, it can be a useful indicator for the classification. It indicates that a token has the form of a page indicator such as ‘p.’, ‘pp.’, ‘p’ or ‘pp’.
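The test behind this feature is a simple membership check; a minimal sketch based on the forms listed above:

```python
PAGE_FORMS = {"p.", "pp.", "p", "pp"}

def is_posspage(token):
    """True if the token has the form of a page indicator."""
    return token.lower() in PAGE_FORMS
```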
Total accuracy : 87.39
[Positive] Precision : 89.01 Recall : 95.13 [Negative] Precision : 80.46 Recall : 63.06
The result is still below that of strategy 2; however, the accuracies increased compared to strategy 3. This indicates that the new local feature is effective for the classification.