Third Experimental Result on Revues.org corpus level 1 – Part II

Part II: Experiments on various datasets with different strategies

We prepared several datasets, each extracted differently from the original Revues.org corpus. The first and second rules applied for label selection on datasets ver. 1 and 2 are still kept.

Strategy a: Apply new labels extracted from attributes

a-1) <biblscope> -> <pages>, <numbering>

Use the attribute values of <biblscope> instead of the <biblscope> label itself. The attribute value “pp” becomes a new label, <pages>, while the other values (“issue”, “vol”, and “part”) become another new label, <numbering>.

a-2) <title> -> <title>, <book>

Split <title> according to its attribute values and use them as labels. The attribute value “a” keeps the label <title>, while the values “j” and “s” become a new label, <book>. The values “m” and “u” are handled according to context.
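The following is a minimal sketch of how this relabeling could look, assuming each annotated token carries its original label and, where available, an attribute value; the function name and the handling of the context-dependent “m” and “u” values are illustrative assumptions, not the exact pipeline used here.

```python
def relabel(label, attr):
    """Map an original (label, attribute) pair to the new label set."""
    if label == "biblscope":
        # a-1: "pp" becomes <pages>; "issue", "vol", "part" become <numbering>
        return "pages" if attr == "pp" else "numbering"
    if label == "title":
        # a-2: value "a" keeps <title>; "j" and "s" become <book>;
        # "m" and "u" are context-dependent and left unchanged in this sketch
        if attr == "a":
            return "title"
        if attr in ("j", "s"):
            return "book"
    return label

assert relabel("biblscope", "pp") == "pages"
assert relabel("biblscope", "vol") == "numbering"
assert relabel("title", "s") == "book"
```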

We prepared two versions of the dataset: the first applies only strategy a-1, and the second applies both a-1 and a-2. We then trained a CRF model on each of the newly prepared datasets, obtaining 86.44% estimation accuracy on the first dataset and 84.58% on the second. Using these new labels thus reduces overall accuracy. This probably comes from the increased number of labels: the benefit of better-separated labels could not overcome the handicap of a larger set of candidate labels to estimate. By complicating the learning process, these newly added labels turn out to be harmful for building a good CRF model.

Strategy b: Add features to the learning dataset from attributes

b-1) attributes of <biblscope>, <abbr> -> features of <biblscope>, <abbr>

As a first trial, we extracted features from the attributes of <biblscope> and <abbr>. Starting from dataset ver. 2, we simply added feature information drawn from these attributes: if a token’s label is <biblscope> or <abbr>, we look up its attribute value and take it as a feature.
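A minimal sketch of this feature injection, assuming tokens are stored as (token, label, attribute) triples and written one per line in the common “token features… label” CRF input format; the ATTR- feature prefix is an assumption for illustration.

```python
def to_training_line(token, label, attr=None):
    """Emit one line in "token feature1 ... featureN label" format."""
    features = []
    if label in ("biblscope", "abbr") and attr is not None:
        # strategy b-1: the token's attribute value becomes an extra feature
        features.append("ATTR-" + attr)
    return " ".join([token] + features + [label])

print(to_training_line("pp.", "biblscope", "pp"))  # -> "pp. ATTR-pp biblscope"
print(to_training_line("Melville", "surname"))     # -> "Melville surname"
```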

This dataset dramatically reduces estimation accuracy (to about 65%). The reason is that these features are available when learning the CRF model but absent from the test dataset. This feature restriction on the test set is reasonable, because a new reference string does not in general come with such detailed feature information; moreover, the attributes of <biblscope> and <abbr> cannot easily be recognized automatically. Therefore, if we want to improve estimation performance by adding features to the learning dataset, exactly the same types of features must also be included in the test set. When the features extracted from <biblscope> and <abbr> are included in both the learning and test sets, we obtain about 88% accuracy.

As a result, we should be careful to choose appropriate features, considering whether they can be extracted automatically from a new reference string.

Strategy c: Basic operations over punctuation

c-1) Delete all the punctuation marks in the tokens

In other work on reference extraction, punctuation is usually not treated as a real label. In the Mallet CRF tutorial, punctuation is separated into its own tokens but not counted in the evaluation. On the other hand, in a recent paper by Councill et al. (ParsCit: An open-source CRF reference string parsing package, 2008), punctuation is not separated from the token but is detected as a token feature. There are thus many possible ways to deal with punctuation.

This time, we simply ignore all punctuation marks in learning and testing, to verify the influence of punctuation on a CRF model. The overall accuracy, 81%, is clearly reduced compared to the result on dataset ver. 2 (87.86%). This suggests that more elaborate work on the extraction of punctuation features would improve performance.
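As a rough sketch, dropping punctuation can be as simple as filtering out tokens made up entirely of punctuation characters from each labeled sequence; the (token, label) pair representation is assumed here for illustration.

```python
import string

PUNCT = set(string.punctuation)

def strip_punctuation(sequence):
    """Remove tokens that consist only of punctuation characters."""
    return [(tok, lab) for tok, lab in sequence
            if not all(ch in PUNCT for ch in tok)]

ref = [("Melville", "surname"), (",", "c"), ("Moby-Dick", "title"), (".", "c")]
print(strip_punctuation(ref))  # [('Melville', 'surname'), ('Moby-Dick', 'title')]
```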

c-2) Use the attribute values of the <c> tag to derive separate labels for each type of punctuation mark

We re-organized the dataset as in the Mallet tutorial, where each line holds a token, its features, and its label, and punctuation appears as its own token:

```
Call SUFF-ll VB
me TWO_LETTERS PPN
Ishmael BIBLICAL_NAME NNP
. PUNCTUATION .
```
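A minimal sketch of the corresponding relabeling: the attribute value of each <c> element becomes part of the punctuation label (the attribute values used below are assumptions for illustration, not the actual values in the corpus).

```python
def punctuation_label(attr=None):
    """Derive a type-specific label from a <c> attribute value."""
    # e.g. <c type="comma"> -> label "c-comma"; fall back to plain "c"
    return "c-" + attr if attr else "c"

print(punctuation_label("comma"))  # c-comma
print(punctuation_label())         # c
```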

The overall accuracy is almost the same as in the ver. 2 experiment, but the performance on surname and forename drops (by up to 10 percentage points). For now, we therefore return to our original strategy of using a single label for all punctuation marks.

In the next experiment, we present an effective approach to using punctuation in learning.

